SIMULATED PRODUCT SALES FORECASTING: AN ANALYSIS OF FORECASTING AND OPERATING DISCREPANCIES IN THE PHYSICAL DISTRIBUTION SYSTEM

By

Jeffrey Robert Sims

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Marketing and Transportation Administration

1978

ABSTRACT

SIMULATED PRODUCT SALES FORECASTING: AN ANALYSIS OF FORECASTING AND OPERATING DISCREPANCIES IN THE PHYSICAL DISTRIBUTION SYSTEM

By Jeffrey Robert Sims

In the management of distribution operations, two types of uncertainty must be recognized and their impacts evaluated. The first, demand uncertainty, deals with the rate at which a product is demanded. The second, operating uncertainty, deals with the channel's ability to replenish inventories as they are depleted. The combination of these factors affects system performance. The objective of this research was to measure the combined impacts of variations in demand and operating uncertainty on the performance of a channel system.

Measures to reduce the effects of demand and operating uncertainty are generally considered independently. However, when management seeks to implement an improved forecasting technique, two factors must be considered: (1) the accuracy of the proposed technique in comparison to that presently in use; and (2) whether the distribution system can effectively support the sales as forecasted by the improved technique.

Using the Simulated Product Sales Forecasting (SPSF) Testing Environment, the performance of a distribution system was analyzed under four time series forecasting techniques in combination with varying levels of demand and operating uncertainty. Twenty-four combinations were tested for a simulated period of 240 days. Channel performance was tested in terms of sales, service and cost to identify the effects of changes in forecast accuracy, demand uncertainty and operating uncertainty. The total discrepancy between sales and demand was separated into forecasting and operating error.

Three general hypotheses were tested using analysis of variance.

1. Different levels of each uncertainty have different impacts on channel performance.

2. Variations in forecast accuracy have significant impacts on channel performance.

3. Different combinations of demand uncertainty, operating uncertainty and forecast accuracy have different impacts on channel performance.

The major conclusions of this research are:

1. Increases in the variation of demand result in increased stockouts and reduced profit regardless of the level of operating uncertainty or the forecasting technique employed. The more complex the forecasting technique, the smaller the decreases in performance resulting from increased demand uncertainty.

2. Increases in operating uncertainty result in increased stockouts and reduced profit. This result was consistent for all combinations of forecasting techniques and demand patterns except one. However, changes in system performance varied across the forecasting techniques. The less complex the technique, the smaller the decrease in channel performance.

3. Variations in forecast accuracy lead to variations in channel performance. This result was consistent across demand patterns and levels of operating uncertainty. The complexity of the forecasting techniques was inversely related to accuracy when considered across all demand patterns and levels of operating uncertainty.
Considered only across demand patterns, the more complex techniques were better able to adapt to increased demand variation.

4. There is an interaction between demand and operating uncertainty in their effects on channel performance. Any discrepancy between demand and sales is the result of the combined effects of these factors and may be separated into Forecasting and Operating Discrepancy. The effects of increases in demand and operating uncertainty tend to cancel each other. The Total Discrepancy between demand and sales is less than the sum of Forecasting and Operating Discrepancy.

A number of implications for distribution management follow from the results of this research. First, defining forecast error as the difference between sales and forecast is incorrect. Such a procedure generates future forecasts based upon past levels of operating as well as forecast discrepancy. Second, more consistent system performance is achieved using simpler forecasting techniques. Complex techniques, although more able to track highly variable demand patterns, are also more affected by variations in operating uncertainty. Finally, the performance of the channel must be monitored and analyzed from a system's perspective to separate forecasting and operating discrepancies.

ACKNOWLEDGMENTS

Many individuals contributed to the completion of this dissertation. It is a pleasure to acknowledge their support and guidance.

First, I wish to express sincere appreciation to the dissertation committee which consisted of Dr. Donald J. Bowersox, Professor of Marketing and Transportation; Dr. Leo G. Erickson, Professor of Marketing; and Dr. George Wagenheim, Associate Professor of Marketing and Transportation, at Michigan State University.

To Dr. Bowersox, the committee chairman, I owe a special debt of gratitude. His concern and professional guidance contributed greatly to this candidate's successful completion of the graduate program. His sanguine support throughout the development of the thesis greatly assisted in improving the quality and facilitating the completion of this research.

I am indebted to Dr. Erickson for serving as an invaluable committee member and friend. The enthusiasm and expedience with which Dr. Erickson read through the draft of the thesis were a continuous source of motivation. His scholarly insight and outlook on life aided in overcoming problems which at times appeared to be insuperable.

The friendship and guidance of Dr. Wagenheim are especially appreciated. His willingness to bear numerous interruptions and his logical research approach aided in the initial structuring of this research and its general development.

The Johnson and Johnson Domestic Operating Company and the Whirlpool Corporation are gratefully acknowledged for their support of the SPSF Basic Research. The National Council of Physical Distribution Management is especially appreciated for awarding a special A. T. Kearney Research Grant to aid in the funding of this dissertation.

To Tom Mentzer and Dave Close I am forever grateful. As fellow members of the SPSF Research Team, their insight, suggestions and personal support were fundamental in the completion of this research. For their friendship and many good memories, my deepest thanks.

To Pam Cook, who was unlucky enough to type the initial drafts, special thanks for her perseverance. To Grace Rutherford, who completed the final typing, my thanks for an excellent job and helpful suggestions.
Finally, I am indebted to my family whose understanding and affection gave much meaning to my work. To my parents, Doug and Mary, belongs a special appreciation for their continued encouragement and support for many years. To my wife, Jennie, who shared the many joys and frustrations of a doctoral student and who provided me with the needed love and support only she could provide, I owe a debt I cannot repay.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

Chapter

I. INTRODUCTION
   General Problem Statement
   Detailed Problem Statement
   Definitions
   Research Procedure
   Thesis Outline

II. THE SPSF TESTING ENVIRONMENT
   Introduction
   The SPSF Concept
   General SPSF Design
   Operations Module
   Demand Module
   Forecast Module
   Analysis Module
      Cost Generator
      Cost Components
      Report Generator
   Conclusion

III. LITERATURE REVIEW
   Introduction
   The Nature of Future Sales Estimation
   Synthesis of Time Series Approaches
      Moving Average
      Exponential Smoothing
      Adaptive Smoothing
      Spectral Analysis
   Technique Review
      R. G. Brown's Technique
      Trigg and Leach's Adaptive Smoothing Model
      Smith's Adaptive Model Corrector
      P. R. Winters' Exponentially Weighted Moving Averages
      Theil and Wage's Simplified Exponential Smoothing Model
      Roberts and Reed Self-Adaptive Forecasting Technique
      D. C. Whybark's Technique
      Chen and Winters' Hybrid Exponential Model
      Wheelwright and Makridakis' Adaptive Filtering
      The Box-Jenkins Methodology
         General Class of Models
         Advantages and Disadvantages
   Conclusion

IV. ENVIRONMENT SPECIFICATION
   Introduction
   OPTS System 1
   Lead Time Probability Distributions
      Criteria for Selection
      The Log-Normal Distribution
      The Gamma Probability Family
      The Erlang Distribution
      Selection and Generation
   Forecasting Techniques and Selection of Smoothing Constants
   Demand Generation
   Conclusion

V. HYPOTHESES AND RESEARCH METHODOLOGY
   Introduction
   Relative Forecast Accuracy
      Run Specification for Forecast Accuracy
      Statistical Analysis
   Factorial Design for Analysis of Variance
      Run Specifications for Analysis of Variance
      Statistical Analysis
         One-Factor ANOVA
         Two-Factor ANOVA
         Three-Factor ANOVA Design
         Post Hoc Analysis
   Analysis Procedure
   Specific Research Hypotheses
   Summary

VI. EXPERIMENTAL RESULTS
   Introduction
   Selection of Smoothing Constants
   Initial Investigations: The Roberts and Reed Technique
   System Performance: Analyses of Variance
      Analysis of Average Inventory Variance
      Analysis of Stockout Variance
      Analysis of Sales Variance
      Analysis of Forecast Discrepancy (FD)
      Analysis of Operating Discrepancy (OD)
   System Performance--Economic Variables
   Conclusion

VII. CONCLUSIONS
   Introduction
   Integration of Findings and Hypotheses
      Average Inventory
      Stockouts
      Sales
      Forecast Discrepancy (FD)
      Operating Discrepancy (OD)
   Generalized Research Conclusions
      Average Inventory
      Sales and Stockouts
      Forecast Discrepancy (FD)
      Operating Discrepancy (OD)
   Research Implications
   Limitations of the Research
   Future Research

APPENDIX

BIBLIOGRAPHY

LIST OF TABLES

2.1 Cost Function Summary
2.2 Report Categories and Records
3.1 Autocorrelation Example
4.1 Cost and Throughput Factors--DC
4.2 Mode Characteristics
4.3 Product Characteristics
4.4 Order Cycle Time Probability Distributions
4.5 Demand Patterns: Mean Expected Daily Sales by Period
5.1 File Specification
5.2 Response Variables Recorded
5.3 Experimental Factors and Levels
5.4 Analysis of Inventory Levels Across Forecasting Techniques
5.5 Summary Table for One-Factor ANOVA
5.6 Two-Factor ANOVA
5.7 Summary Table for Two-Factor ANOVA
5.8 Summary Table for Three-Factor ANOVA
6.1 Experimental Conditions for Selection of Smoothing Constants, Given a Demand Pattern
6.2 MAPE Values for Brown's Technique Under Demand Pattern 1
6.3 MAPE Values for Brown's Technique Under Demand Pattern 2
6.4 MAPE Values for Winters' Technique Under Demand Pattern 1
6.5 MAPE Values for Winters' Technique Under Demand Pattern 2
6.6 Experimental Factors and Levels: Three-Way ANOVA
6.7 Summary of Three-Way ANOVA on Average Inventory
6.8 Summary of Three-Way ANOVA on Average Inventory--No Perfect Forecasts or Constant Order Cycle Times
6.9 Two-Way ANOVA on Average Inventory Given Demand Pattern 1: No Perfect Forecasts or Constant Order Cycle Times
6.10 Two-Way ANOVA on Average Inventory Given Demand Pattern 2: No Perfect Forecasts or Constant Order Cycle Times
6.11 Mean Values and Levels of Significance--One-Way ANOVA's on Average Inventory by FORTECH
6.12 Mean Values and Levels of Significance--One-Way ANOVA's on Average Inventory by OCTV
6.13 Summary of Three-Way ANOVA on Stockouts
6.14 Summary of Three-Way ANOVA on Stockouts--No Perfect Forecasts or Constant Order Cycle Times
6.15 Summary of Two-Way ANOVA on Stockouts Given Demand Pattern 1: No Perfect Forecasts or Constant Order Cycle Times
6.16 Two-Way ANOVA on Stockouts Given Demand Pattern 2: No Perfect Forecasts or Constant Order Cycle Times
6.17 Mean Values and Levels of Significance--One-Way ANOVA's on Stockouts by FORTECH
6.18 Mean Values and Levels of Significance--One-Way ANOVA's on Stockouts by OCTV
6.19 Summary of Three-Way ANOVA on Sales
6.20 Summary of Three-Way ANOVA on Sales--No Perfect Forecasts or Constant Order Cycle Times
6.21 Summary of Two-Way ANOVA on Sales Given Demand Pattern 1: No Perfect Forecasts or Constant Order Cycle Times
6.22 Summary of Two-Way ANOVA on Sales Given Demand Pattern 2: No Perfect Forecasts or Constant Order Cycle Times
6.23 Mean Values and Significant Differences--One-Way ANOVA's on Sales by FORTECH
6.24 Mean Values and Significant Differences--One-Way ANOVA's on Sales by OCTV
6.25 Summary of Three-Way ANOVA on FD
6.26 Summary of Three-Way ANOVA on FD: No Perfect Forecasts or Constant Order Cycle Times
6.27 Summary of Two-Way ANOVA on FD Given Demand Pattern 1: No Perfect Forecasts or Constant Order Cycle Times
6.28 Summary of Two-Way ANOVA on FD Given Demand Pattern 2: No Perfect Forecasts or Constant Order Cycle Times
6.29 Mean Values and Levels of Significance--One-Way ANOVA's on FD by FORTECH
6.30 Mean Values and Levels of Significance--One-Way ANOVA's on FD by OCTV
6.31 Summary of Three-Way ANOVA on OD
6.32 Summary of Three-Way ANOVA on OD: No Perfect Forecasts or Constant Order Cycle Times
6.33 Summary of Two-Way ANOVA on OD Given Demand Pattern 1: No Perfect Forecasts or Constant Order Cycle Times
6.34 Summary of Two-Way ANOVA on OD Given Demand Pattern 2: No Perfect Forecasts or Constant Order Cycle Times
6.35 Mean Values and Levels of Significance--One-Way ANOVA's on OD by FORTECH
6.36 Mean Values and Levels of Significance--One-Way ANOVA's on OD by OCTV
6.37 Performance Summary--Demand Pattern 1
6.38 Changes in Response Variables Across OCTV--Given DMD1 and a Perfect Forecast
6.39 Changes in Response Variables Across OCTV--Given DMD1 and an R. G. Brown Forecast
6.40 Changes in Response Variables Across OCTV--Given DMD1 and a Trigg and Leach Forecast
6.41 Changes in Response Variables Across OCTV--Given DMD1 and a P. R. Winters Forecast
6.42 Performance Summary--Demand Pattern 2
6.43 Changes in Response Variables Across OCTV--Given DMD2 and a Perfect Forecast
6.44 Changes in Response Variables Across OCTV--Given DMD2 and a Brown Forecast
6.45 Changes in Response Variables Across OCTV--Given DMD2 and a Trigg and Leach Forecast
6.46 Changes in Response Variables Across OCTV--Given DMD2 and a P. R. Winters Forecast
7.1 Mean Levels of Response Variables as a Percentage of Average Period Demand Under DMD1
7.2 Mean Levels of Response Variables as a Percentage of Average Period Demand Under DMD2

LIST OF FIGURES

2.1 SPSF Testing Environment--General Design
2.2 Example of Simplified Distribution Structure
2.3 Example of Complex Distribution Structure
3.1 Wave Periodicity and Amplitude
4.1 Representative Physical Distribution Network
4.2 Log Normal Distributions
4.3 Gamma Distributions With Unit β But Different Values of α
4.4 Typical Gamma Distributions
5.1 Run Specifications
5.2 Three-Factor ANOVA Design
6.1 Three-Way Interaction of Main Effects on Average Inventory
6.2 Three-Way Interaction of Main Effects on Stockouts
6.3 Three-Way Interaction of Main Effects on Sales
6.4 Three-Way Interaction of Main Effects on FD
6.5 Three-Way Interaction of Main Effects on OD

CHAPTER I

INTRODUCTION

General Problem Statement

The distribution processes of the business firm in the United States economy are characterized by the anticipatory commitment of significant levels of productive resources to meet expected sales demand. From the identification of the market opportunity to the strategic movement of finished product, the distribution channel should ideally function in both an efficient and effective manner to create the time and place utilities necessary for profitable transactions. The decision variables inherent in this process are such that the capital commitments made in the design of the channel establish the flexibility limits to adapt to changes during the operating period. It is within this environment of fixed facilities and significant resource commitments that the distribution manager seeks to achieve stated service and cost objectives during day-to-day operations of the firm.

The short run decisions of the distribution manager are made within an environment of risk. To the extent that decisions are competitively correct, the extensive operations which have culminated in the market availability of a product or service are rewarded. In contrast, an incorrect physical distribution decision may effectively negate all previous marketing efforts. The critical nature of the inherent risk in this distribution process is evidenced by the fact that it represents the final step in the value added process of marketing. Thus, only by the placement of the right amount of inventory at the right place at the right time may the marketing process succeed.

In the placement of inventory throughout a distribution channel, two types of uncertainty are experienced. The first is the rate at which the product is demanded over a specified time period at each inventory stocking location. The second is the variation in the channel's ability to replenish inventory at each location as needed. To the extent that these factors may be reduced by application of management science techniques, the expected result is greater physical distribution performance. Unfortunately the relative impact of these two uncertainties has not been amenable to direct analysis. With noted exceptions, the prevailing approaches to reducing each type of uncertainty have been independent. In the short run, however, more attention is given to sales uncertainty than to channel variation, due to the comparatively fixed nature of channel facilities and relationships.
The typical procedure for reducing variation in replenishment performance has been to audit order cycle operations over time to identify problem areas. The solution is typically to invest in new facilities or relationships which increase either the speed or dependability of various order cycle components. Such decisions, by their very nature, cannot be implemented on a day-to-day basis. Thus, in the operational period, the distribution manager is typically constrained to the employment of safety stocks to deal with operating uncertainty.

The most common procedure to reduce the impact of sales uncertainty is to use formal forecasting procedures. Given a forecast, business activity is planned to meet the expected level and pattern of sales activity. At the conclusion of an operating period, considerable attention is then given to comparing forecasted results to actual operating experience. In general, the results of this comparison range from disappointing to disastrous. Beyond the defect of unsatisfactory results, typical forecasting procedures have two noteworthy deficiencies.

First, typical procedures provide no mechanism wherein the causal factors resulting in forecast error can be isolated for analysis. At the conclusion of the forecast period, it is difficult if not impossible to isolate why actual sales did not match expected sales. The question remains unanswered: Did the forecast error result from inaccurate sales prediction or from the inability of the enterprise operational system to provide timely inventory for customer sales service? If the cause was excessive prediction or alternatively inadequate demand, then the error experienced was in fact a forecast error. However, it is also possible that the forecast was accurate but the expected sales were not realized due to a wide variety of possible operating deficiencies such as stockouts, misallocation of inventories, production deficiencies, and so forth.

The second type of forecast deficiency is more subtle. This type of forecast error is difficult to detect since expected sales are in fact achieved. The problem is that the forecasted sales figure falls short of anticipating what could have been sold during the forecast period. Thus, the forecast, which falls short of estimating potential, may appear in retrospect to have been highly accurate since expected and actual sales are closely matched. In fact, the forecast was in error and at best represented little more than a self-fulfilling prophecy. This second type of forecast error can represent a significant source of lost profits during periods of generally rising sales.

Detailed Problem Statement

When management seeks to implement an improved forecasting technique, two factors must be taken into consideration: (1) the relative accuracy of the proposed technique in comparison to that technique presently in use; and (2) whether the distribution system can effectively support the forecasted sales levels as generated by the improved technique. The general thrust of this research is an investigation into each of these factors.

The first of these factors deals specifically with the statistical accuracy of the techniques in question. The performance of each in estimating future sales can be analyzed and compared statistically. Such an evaluation of four time series analysis forecasting techniques represents the first area addressed in this research. However, such an analysis is not an adequate base for the determination of whether to adopt the proposed technique.
This is due to the fact that improved forecasting accuracy may prove useless if the distribution system itself is inadequate. To deal with the second factor noted above, the performance of the distribution system under each of the forecasting techniques under consideration must be analyzed, the critical response variables being the service levels achieved by the system and the total costs incurred. These variables are dependent upon the operation of the total system as directed by the level of expected sales activity as forecasted. As such, any discrepancy between the ideal performance of the total system and the actual performance achieved may be the result of either the inaccuracy of the forecasting technique in estimating what could have been sold (Forecast Discrepancy) or the inability of the distribution system to efficiently support the level of sales activity which could have been achieved (Operating Discrepancy), or both. These factors provide a measure of the Total Discrepancy (the difference between the level of sales which could have been realized and that actually achieved) within any operating period. The identification of Forecast and Operating Discrepancy and the analysis of their separate and combined impacts on system performance constitute the second portion of this research.

In time series analysis forecasting, the incorporation of forecast error into the generation of future forecasts is a critical element. However, failure to recognize the composition of Total Discrepancy has resulted in the practice of defining forecast error as the difference between the forecasted level of sales and the level of sales actually achieved. As such, future forecasts are affected by Operating Discrepancy of the distribution system which should have been ignored. Only the Forecast Discrepancy should be used.

In addition to the adjustment of future forecasts according to past Forecast Discrepancy, the potential exists for the evaluation of a similar procedure relating Operating Discrepancy and the set of managerial parameters defining the structure and policies of the distribution system. Although it is unlikely that any automatic process could be developed to adjust the operating system, the ability to trace errors to a specific node or link should provide valuable insight leading to improved performance.

Definitions

The concepts discussed in the previous section are employed repeatedly throughout the remainder of this document. The purpose of this section is to provide definition to terms critical to this research.

Demand--the total quantity of units ordered by the retail facility from the distribution center over a specified time period. The time periods employed in this research are twenty days each.

Forecast--the total quantity of units expected to be ordered by the retail facility from the distribution center over a specified time period.

Forecast Discrepancy (FD)--the forecast error; any difference between the quantity of units expected to be ordered from the distribution center and the actual quantity ordered (i.e., demanded) over a specified period of time; the portion of the Total Discrepancy which is due to inaccurate forecasting.

Operating Discrepancy (OD)--the operating error; any difference between the quantity of units ordered from the distribution center and the quantity of units sold as a result of deficiencies in operations such as unexpectedly long lead times; the portion of the Total Discrepancy which is due to variations in the order cycle time.

Orders--demand; the total quantity of units ordered by the retail facility from the distribution center over a specified period of time.

Order Cycle Time--the total elapsed time from the issuance of an order by a facility to the receipt of the units ordered; the combination of communication time, order processing time and transit time.

Sales--the total quantity of units ordered which were filled by the distribution center over a specified period of time; gross disbursements from the distribution center.

Total Discrepancy (TD)--any difference between the quantity of units ordered from the distribution center and the quantity of sales over a specified period of time.
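As a minimal numerical illustration of how a period's discrepancy can be separated into its two components (the quantities and the sign convention below are hypothetical; the operational measures used in the experiments are developed in Chapter V), consider one twenty-day period at the distribution center:

    # Hypothetical twenty-day period at the distribution center.
    forecast = 1000   # units expected to be ordered (Forecast)
    demand   = 1100   # units actually ordered by the retailer (Demand/Orders)
    sales    = 1050   # units actually filled (Sales)

    fd = forecast - demand   # Forecast Discrepancy: -100 (an under-forecast)
    od = demand - sales      # Operating Discrepancy: +50 (orders not filled)
    td = fd + od             # combined discrepancy:  -50

    assert td == forecast - sales

Because FD and OD carry opposite signs in this example, the combined discrepancy (50 units) is smaller than the sum of their magnitudes (150 units)--the offsetting behavior noted in conclusion 4 of the abstract.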
Research Procedure

Based on the Detailed Problem Statement, three specific objectives in this research may be outlined as follows:

1. to determine the relative accuracy of selected time series analysis forecasting techniques under selected demand conditions;

2. to evaluate the performance of a specific physical distribution system in response to expected sales levels as estimated by forecasting techniques of varying levels of sophistication; and

3. to quantify the level of Total Discrepancy and to determine the relative levels of Forecasting and Operating Discrepancy under varying conditions of demand uncertainty and order cycle time variability.

Ideally, each of these objectives would be achieved by performing a series of experiments on an existing channel of distribution. In this manner, the researcher could observe how the system reacted to changes in the level of forecast accuracy and order cycle time variability. It is, however, impossible to control all of the relevant variables in the system and to manipulate the level and variability of demand or order cycle time performance. Therefore, the solution to this research problem lies not in direct experimentation, but with experimentation on a replication or model of a channel system.

A model is generally regarded as an abstraction or simplification of a system. A mathematical model describes the system, its components and their interactions in quantitative terms. The model thus allows the researcher to abstract the essential characteristics of a system and thereby observe and eventually predict how that system will function. Models cannot replace actual experience; at best they reduce a complex system to manageable proportions or serve to crystallize our perceptions.1 Once the analyst has achieved a parallelism between the actual situation and the model, the model may be manipulated to examine characteristics of the problem situation under analysis.2

Simulation is one form of modeling which has been successfully employed to replicate physical channel systems.3 Simulation models mathematically represent a system, but when applied to problem solving do not necessarily lead to an optimum solution. Computer simulation models may be characterized as static or dynamic. Static models analyze the state of the variables and relationships at a given point in time. Dynamic models, on the other hand, depict the performance of a system over time, permitting an investigation into the time dependent relationships between both the parameters and variables of the modeled system.

1Claude McMillan and Richard Gonzalez, Systems Analysis: A Computer Approach to Decision Models (Homewood, Ill.: Richard D. Irwin, Inc., 1968), p. 9.

2Ellwood S. Buffa, Models for Prediction and Operations Management (New York: John Wiley & Sons, Inc., 1963), p. 9.
3Jay W. Forrester, Industrial Dynamics (Cambridge, Mass.: The MIT Press, 1961), pp. 47-59; Harvey N. Shycon and Richard B. Maffei, "Simulation: Tool for Better Distribution," in Readings in Physical Distribution Management, eds. Donald J. Bowersox, Edward W. Smykay and Bernard J. LaLonde (New York: The Macmillan Company, 1969), pp. 243-344; and Donald J. Bowersox, Logistical Management, 2nd ed. (New York: The Macmillan Company, 1978), pp. 354-363.

In addition, computer simulation has been characterized as a viable technique for modeling systems which exhibit great complexity and probabilistic or stochastic processes, and whose variables are difficult to analyze in precise mathematical terms.4 Simulation is also quite amenable to experimentation in that once a computer model of a system has been developed, the model's output may be statistically analyzed under different input conditions.5 Therefore, a computer simulation model of a physical channel system was selected to investigate the relationships between forecast accuracy, operating uncertainty and system performance.

The specific simulation model employed in this research is the Simulated Product Sales Forecasting (SPSF) model.6 Undertaken as a joint industry-university effort in 1975, SPSF was specifically designed to provide a managerial testing environment for the evaluation of the nature and causes of forecast inaccuracy. This research provides an application of the model to such an investigation.

4Daniel Teichroew and John Lubin, "Computer Simulation--Discussion of the Technique and Comparison of Languages," Communications of the ACM, October 1966, p. 724.

5Ronald H. Ballou, Business Logistics Management (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1973), p. 81.

6Donald J. Bowersox et al., "Simulated Product Sales Forecasting: Present Status and Future Potential," Proceedings of the Seventh Annual Transportation and Logistics Educators Conference (Columbus, Ohio: Transportation and Logistics Research Fund, The Ohio State University, 1976), pp. 27-30; Bowersox et al., "Short Range Product Sales Forecasting," Proceedings of the Fourteenth Annual National Council of Physical Distribution Managers (1976), pp. 193-216; and Bowersox et al., Simulated Product Sales Forecasting: Managerial Documentation (Bureau of Business Research, Michigan State University, forthcoming).

The SPSF model has the following important characteristics:

1. It provides a comprehensive model of a physical distribution system capable of measuring both cost and service performance;

2. The model incorporates a multiecheloned structure;

3. The model is dynamic, allowing distribution planning and analysis over time;

4. The model replicates both the spatial and temporal dimensions of actual operations; and

5. The model combines a dynamic computer simulation of physical distribution operations with demand replication and time series analysis forecasting capabilities.

The specific characteristics of the SPSF model are reviewed in detail in Chapter II. The SPSF model provides the framework for the investigation into the effects of variations in forecast accuracy and lead time variability upon the performance of a simulated physical distribution channel. Specifically, this research investigates the following factors:

1. The relative accuracy with which selected time series analysis forecasting techniques project selected patterns of demand;

2. The effects of variation in the level of forecast accuracy upon the performance of a simulated physical distribution channel; and

3. The combined and relative effects of sales and lead time uncertainty on the performance of the physical distribution system.

To investigate these factors, a series of computer simulations was completed combining all possible combinations between four forecasting techniques, two demand patterns exhibiting different levels
The combined and relative effects of sales and lead time uncertainty on the performance of the phy51cal distribution system. To investigate these factors a series of computer simulations were completed combining all possible combinations between four fore- casting techniques, two demand patterns exhibiting different levels 12 of variation and seasonality, and two levels of order cycle time variability. In other words, the forecasting techniques, demand patterns and levels of order cycle time variability were employed as experimental (manipulated) conditions. The results of each experimental run analyzed included measures of system service, forecast accuracy, operating discrepancy, total discrepancy and average inventory. These response variables were analyzed using standard analysis of variance techniques. The specific hypotheses tested are developed in Chapter V. In addition, post-hoc analysis was also performed on those hypotheses which showed significant experimental effects. Thesis Outline This dissertation consists of seven chapters. After the introductory chapter, Chapter II describes the conceptualization of the SPSF concept. In addition, the model itself is described in detail. Chapter III provides a review of general forecasting procedures and the underlying components of time series analysis. Specific fore- casting techniques based on time series analysis are reviewed in the order of increasing complexity. Chapter IV develops the environmental conditions employed in this research. Specifically, modifications to the SPSF model for this research are discussed as well as the method of demand and forecast generation. 13 Chapter V develops the research design and specific hypotheses tested with the design. The results of the analysis are reported in Chapter VI. Each general hypothesis is reviewed and the post-hoc analysis (when appropriate) is presented. Chapter VII summarizes the findings and suggests general- izations to be drawn from the research. Areas of future research and the limitations of the present research are also outlined. The Appendix provides synopses of the literature on various aspects of time series analysis forecasting which were not described in the main body. CHAPTER II THE SPSF TESTING ENVIRONMENT Introduction The primary focus of the SPSF research was to develop a computerized analysis tool to assist management in improving forecast accuracy. The overall model developed is identified as the SPSF Testing Environment. This research details one type of applied investigation employing the SPSF Testing Environment. The purpose of this chapter is to provide an overview of the SPSF model and its analysis capabilities. The initial section of the chapter provides a brief background statement regarding the nature of the SPSF concept and basic SPSF approach. Next, the general SPSF system is reviewed and the four modules that make up the testing environment are introduced. The final sections of the chapter discuss the modules in depth. The SPSF Concept The SPSF concept is relatively simple. To provide a forecast test environment, the attributes of market area demand simulation, dynamic operational simulation, and statistical sales forecasting are combined into a single computer model. The SPSF model is capable of rendering a sales forecast while simultaneously creating customer 14 15 orders and replicating the physical distribution process of providing timely inventory to satisfy customer order requirements. 
Thus, through the combined capabilities of two types of simulation and statistical forecasting, a time-sequenced record of events leading to forecasting deficiency is captured and documented. Such documentation provides the basis for post-mortem evaluation of the reason for the forecast error and formulates the basis for subsequent sensitivity analysis.

Perhaps the most beneficial feature of the SPSF model is that it provides a testing environment for controlled experimentation. Three assumptions are critical to a generalized testing environment capable of controlled experimentation. First is the fundamental belief that if appropriate variables are identified and incorporated into forecasting, available statistical techniques can produce accurate demand estimates. Second, operational performance can be recorded for effectiveness analysis and the potential exists to quantify and incorporate such data as a forecast variable. Finally, the results of experimentation in a testing environment can be accurately generalized to a broad range of markets without extensive duplicate analysis.

General SPSF Design

The SPSF testing environment contains the following modules:

1. Operations Module;
2. Demand Module;
3. Forecast Module; and
4. Analysis Module.

Each module is briefly reviewed in terms of its function within the overall SPSF testing environment.

The function of the Operations module is to simulate the physical distribution system supplying the test market with products being forecasted. This module is capable of replicating inventory availability and movements based upon a variety of different replenishment policies. Utilizing input from both the forecast and demand modules, the operations module replicates the performance of the test operating system across the time horizon of the forecast period.

The operations simulator is designed as a generalized stochastic model capable of replicating the performance of a distribution system with multiple echelons. To obtain maximum realism it is designed on a dynamic basis wherein the state of the model at any given point in time is dependent upon the performance of the system in preceding periods and, in addition, forms the basis for operating performance in future periods. Dynamic simulation possesses the capability to replicate the time dependent nature of actual operations in determining the values of both state and flow variables within the system. For example, the operations module adjusts inventory levels on a time dependent basis to replicate both receipts and shipments over time.1

1Initial inventory levels at each stocking location are input by the user. For each day of the simulation, these levels are adjusted automatically to reflect units shipped and/or received. As such, the level of inventory replenishment and order filling activity in period t will directly affect inventory levels in period t + 1. For a more detailed discussion of the nature of dynamic simulation, see Donald J. Bowersox, Logistical Management (New York: Macmillan Publishing Co., Inc., 1974), p. 394.
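The time-dependent adjustment described in the footnote can be shown with a minimal sketch (an illustration only, not SPSF code; the simulator itself was not written in this language):

    def simulate_location(initial_inventory, receipts, shipments):
        """Daily inventory adjustment at one stocking location:
        receipts[t] and shipments[t] are units received and shipped
        on day t, and day t's ending level opens day t + 1."""
        level, daily_levels = initial_inventory, []
        for received, shipped in zip(receipts, shipments):
            level = level + received - shipped
            daily_levels.append(level)
        return daily_levels

    # Replenishment and order filling in period t carry into period t + 1.
    print(simulate_location(100, receipts=[0, 50, 0], shipments=[30, 20, 40]))
    # -> [70, 100, 60]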
The Demand module provides a methodology for creating potential sales. The purpose of this module is to produce synthetic orders from a geographical market area for which the forecast module is to render a product sales forecast. Thus, the demand module is employed to quantify pattern, level, and dispersion of product orders over the forecast period. The design approach of this module is important to the SPSF testing environment as it provides the primary data for evaluation of forecast accuracy. Four alternative demand generating procedures are included in the market area demand module. The Forecast module provides software procedures for rendering statistical forecasts. Sales forecasts establish inventory levels for the operations module. Considerable difference exists between the degree of technical sophistication in the techniques contained in the forecast module. The SPSF testing environment provides an application of four short range product forecasting procedures widely used in industry. The Analysis module is the fourth module of the overall testing environment and is primarily concerned with the diagnostic reporting of 18 overall SPSF testing. The primary information flow for analysis is the time sequenced relationship between forecasted sales, simulated demand and simulated sales. Based on these linkages, the module provides management status, activity and cost computation reports. The analysis module also quantifies Forecast and Operating Discrepancy, providing the opportunity to evaluate and modify future forecasts and/or operating policies. Figure 2.1 provides an overview of the SPSF Testing Environment general design and the foundation for a more detailed discussion of the four modules. As a matter of terminology, the term "simulator" is used to refer to the combined use of the operations forecast and demand modules. The term "SPSF Testing Environment" is used to refer to the above defined simulator plus the analysis module. Operations Module The operations module is a dynamic simulator that integrates input from the forecasting module and the demand module in a manner that permits both the time- and function-dependent observations to be replicated and observed. The operations simulator is designed to model individual events involved in the performance of the physical distribution process. Typical events include order processing, order shipping, inventory management, production and transportation. Each event is modeled to occur on an independent basis, the dynamic charac- teristic of actual operations being achieved through event interaction. For example, the output of the inventory management event is the input q I 19 A)- DPERATIONS MODULE PRODUCTS PRODUCTS ABC DE MANUFACTURERS DISTRIBUTION ABCDE CENTERS ___________________ ___, l"””*,/' l I I ,_JL__ : I ABCDE I ABCOE ABCDE RETAILERS : . l . . . I ' I I I I l I : I I I U I ' f : L---f-—L-l-- ------- : : I I I l l I ' I Y. ...... 5.--.1. ....... 4-..." | I : | I l | I I I I SIMULATED 1 I HISTORY I SALES | DATA I BASE : , : BASE : I l I I I 1 I I : FORECAST EORECAST-- --—- J I '-I - SIMULATED __ DEMAND I PROCEDURE ' SALES L.. ..... .1 : .. ..... .I.T DEMAND GENERATION I I L——J 1 : : I FORECAST MODULE I g : DEMAND MODULE : I I I 1 i I I OUTPUT : ANALYSIS I g g -» MANAGEMENT I y ”””” "" REPORTS : TOTAL I ————————————— -I —--- ERROR p-————-—-—--------’— ANALYSIS MODULE Figure 2.1 SPSF Testing Environment-~Genera1 Design. 20 to the inventory replenishment event which in turn creates a requirement for the transportation event. 
In the linkage of events, the manner in which elapsed performance time is replicated is critical for the model to capture the dynamic realism of distribution operations. Such realism is achieved through the inclusion of probabilistic time variables. Typical components requiring probability variables are transportation, communication and order processing time. Selected events, such as inventory management and error management, require the accumulation of status across time. For this reason, events in the operations module may be scheduled to occur on a one-time or a recursive basis in any analysis situation. Examples of events typically scheduled only once for each computer run are the data read and network configuration events. Events typically scheduled to occur on a repetitive basis are replenishment, forecasting and report events.

In addition to the incorporation of modular interaction and temporal uncertainty, the design of the operations module provides a flexible model permitting multiple plants, distribution centers, echelons and products, as well as alternative operating policies. The operations module may simulate up to 15 locations and handle up to 10 individual or groupings of products.2 Locations may be arranged in any number of echelons from manufacturing plants to final customers.3 Customer locations can represent specific accounts or areas of aggregated demand.4 These capabilities permit the operations module to replicate the typical physical distribution configuration used to replenish inventory to a specific geographic area.

Figures 2.2 and 2.3 illustrate the structural flexibility inherent in the operations module. The simplified distribution structure in Figure 2.2 illustrates inventory stocking at three locations with product shipment from a manufacturing plant to a distribution center and then to the customer. Although such a simplified distribution structure would be rare in actual market situations, it provides a useful basis for analysis. Typical policies capable of alternative testing are illustrated on the right side of Figure 2.2. Figure 2.3 provides a more complex example of a distribution structure which consists of 12 locations in four echelons. This network has multiple source destinations and direct or by-pass shipment capabilities from the manufacturer to the distribution center location. It is important to note that these examples illustrate only two potential system configurations which can be structured within the operations module. The specific network employed in this research is developed in Chapter IV. The illustrations in Figures 2.2 and 2.3 do not include the total simulator of the SPSF Testing Environment in that the forecast, demand and analysis modules are not displayed.

2These limits, 15 locations and 10 individual or groupings of products, correspond to the current program limits available to the SPSF research project at Michigan State University. These limits were set to minimize core and time requirements for current research; however, minimal effort is required to alter them.

3The number of echelons in a specified structure is limited only by the maximum number of locations as input by the user.

4Up to 10 individual accounts and/or areas of aggregate demand may be employed in any network.
[Figure 2.2: Example of Simplified Distribution Structure. A manufacturer, distribution center, retailer and customer are shown with the products stocked at each location and, by echelon, the alternative policies subject to testing: production and inventory policies, order processing times, available transit modes, speed and consistency of communication and transit times, forecasting methodology, review periods, demand draw-off functions, replenishment order factoring methods, and demand level and variability.]

[Figure 2.3: Example of Complex Distribution Structure. Twelve locations in four echelons, including a wholesaler and multiple distribution centers, with the products stocked at each location.]

Demand Module

In the SPSF Testing Environment the applicability and validity of results are in part dependent upon the quality of demand generation. Once a physical distribution operating system is defined, the analysis of potential system efficiency rests on the capability to support a specified demand pattern. If the representation of demand is unrealistic or provides an inadequate approximation of the actual demand, modeling results will have limited operational validity. Similar relationships exist in all forms of simulation experimentation. The applicability of experimental results is directly proportional to the validity of the demand replication.

Three alternatives are included for demand generation in the SPSF Testing Environment: (1) the direct input of actual orders from sales history; (2) the use of Monte Carlo processes or probability distributions to create individual orders;5 and (3) determination of an aggregate sales figure, such as the industry sales for a market area, which is reduced to arrive at individual product orders.

The first method of demand generation, directly inputting product orders, is termed Procedure 1 in the demand module. The use of orders as sales history is the most realistic and simple method of demand replication. However, this procedure has the shortcomings of requiring more data than alternative methods of demand generation and an inability to create experimental test conditions. Because of the assumed accuracy, the direct use of historical sales was one method of demand generation used to validate the basic SPSF operations module.

5Given a mean and variance of daily sales, these techniques generate demand levels through a random process.

Procedure 2 employs statistical procedures such as normal, log-normal, Erlang and Poisson distributions to generate total daily sales. Although easy to implement, this approach offers the researcher no assured approximation to market reality. Even if the probability distribution used is statistically fitted according to historical sales, the parameters of the model are static. Also, this method is based upon the assumption that information gained in experimentation against such static parameters is applicable to the actual market situation.

Once the sales level is identified, it is reduced into daily orders for use by the operations module. The general procedure is to randomly select orders from a predetermined order file until the daily demand is equaled. Several order files may be predetermined, each containing a sample of 100 orders from a different seasonality period, the file to be used being determined according to the season in question. Pseudo orders can be structured to permit experimental conditions or to replicate events such as new product introduction or the marketing of deals. The addition of the dollar value of the last order from the predetermined file will seldom make the total exactly equal to daily sales. Therefore, as a standard procedure, half of the time the last order is dropped from the day's sales and half of the time accepted. Over the course of the simulation horizon this procedure will average out to the daily sales figure.
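The order-selection rule just described can be rendered as a short sketch (illustrative only; the names and the uniform draw are assumptions, not the SPSF routine itself):

    import random

    def draw_daily_orders(order_file, daily_sales, rng=random):
        """Randomly select orders from a predetermined order file until
        the daily sales figure is met; accept or drop the final order
        with equal probability so that totals average out over the run."""
        orders, total = [], 0.0
        while total < daily_sales:
            order = rng.choice(order_file)   # e.g., a file of 100 seasonal orders
            orders.append(order)
            total += order
        if orders and rng.random() < 0.5:    # half of the time drop the last order
            total -= orders.pop()
        return orders, total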
The third technique for demand generation involves the adjustment of historical sales levels based upon correlation to economic variables. Procedures 3 and 4 of the demand module utilize different correlation procedures. In a typical test situation, it is not difficult to identify several economic indices that can be correlated to sales. Such factors can be determined through statistical analysis of past data available from market sources in combination with company history. Typical factors which may be considered are population, net income, Gross National Product, the number of housing starts and other selected indicators, depending upon the product in question.

The correlation approach offers a more realistic approximation of an actual market condition than either Procedure 1 or 2. By using economic indices as independent variables it is conceptually possible to simulate numerous market conditions. In addition to simulation of the current market condition, indices may be projected to estimate future conditions in an effort to set up experimental situations. By repeatedly altering projections and analyzing corresponding performance, critical variables can be isolated and related operating structures offering the greatest productivity identified. The more accurate the representation of the actual market economies, the greater the validity of the analysis. The major deficiency of using correlation analysis is in the validation of the basic level of demand. While the correlation can be statistically validated, the relevancy cannot.6

6The degree of relationship between the values of selected independent variables and the level of sales can be statistically validated. This does not mean, however, that the relationship is meaningful or reasonable to the user.

Two procedures in the demand module use correlation analysis. In Procedure 3, firm sales are determined using multiple linear regression. The effect of various factors upon firm sales is measured using the method of least squares. Given a number of factors as independent variables, a linear regression equation is used to arrive at firm sales for a particular month. Daily sales are generated by multiplying this monthly sales level by a daily sales factor based on an expected mean and variance for the month in question.

In Procedure 4, industry demand is generated within the market area by inputting independent variables into a correlation equation of the general form:

    IS = a + b1X1 + b2X2 + b3X3 + ... + bnXn

where:

    IS = industry sales for the period in question;
    a = the vertical axis intercept;
    X1 ... Xn = the independent variables influencing industry sales; and
    b1 ... bn = the respective coefficients of the independent variables.

This form of generalized equation permits inclusion of any independent variables deemed relevant to the determination of market area industry sales. It also allows selection of different variables for different forecast situations. For instance, a firm selling household appliances may find housing starts, wholesale price index, and per capita income useful for rendering aggregate forecasts.
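A minimal sketch of this general form follows (the coefficient and indicator values are invented for illustration; in practice they come from correlation analysis of historical data):

    def industry_sales(a, b, x):
        """IS = a + b1*X1 + b2*X2 + ... + bn*Xn, for coefficient list b
        and independent variable list x."""
        return a + sum(bi * xi for bi, xi in zip(b, x))

    # Hypothetical appliance example: housing starts (thousands),
    # wholesale price index, and per capita income (dollars).
    print(industry_sales(1000.0, [2.5, -8.0, 0.04], [1800, 95, 6200]))
    # -> approximately 4988.0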
After industry sales are determined, they are reduced to monthly estimates by historical monthly sales rankings as a percentage of annual sales. Next, specific market share is determined using the approach proposed by Kotler.7 This method uses a ratio of the firm's "market effort" to the industry "marketing effort" to arrive at specific market share.

The result of the regression analysis for Procedures 3 and 4 is a monthly or weekly sales estimate. Given this figure, daily sales are determined from a normal distribution using the expected mean and variance of daily sales for the month under analysis. Given the level of daily sales, the approach outlined for Procedure 2 is used to arrive at product orders. Regardless of which demand module procedure is in use, the series of orders comprising daily sales are submitted to the Order Processing routine in the operations simulator.

7Philip Kotler, Marketing Decision Making: A Model Building Approach (New York: Holt, Rinehart and Winston, 1971).

Forecast Module

Four procedures for time series analysis forecasting are available in the SPSF Testing Environment.8 They are:

1. Brown's Basic Exponential Smoothing;
2. Trigg and Leach's Adaptive Smoothing;
3. P. R. Winters' Exponentially Weighted Moving Averages; and
4. Roberts and Reed's Self-Adaptive Forecasting Technique.

Each was selected as representative of a level of forecasting sophistication. Each level and the corresponding procedure is discussed briefly. More detailed explanations are contained in Chapter III. Before discussing the properties of the procedures, however, the method through which the forecast is incorporated into the simulation is reviewed.

Given a product, the first forecasting option is to generate independent forecasts for each location in the operations simulator structure. Such locations may be on the same echelon level or at various levels within the system. The applicable forecast is generated from an analysis of historical patterns of sales (throughput) at each location. As such, each forecast is developed independent of all others.

8A fifth forecasting procedure, the Box-Jenkins Forecast Methodology, was originally scheduled to be included in the SPSF Forecast module. However, because of the extent of the manual interface required in the development of a forecast model for each different environmental situation, and because of the need to employ numerous such environments, this procedure was eliminated.

The second option generates forecasts at the echelon level serving the final destination customer. A forecast is made for a number of periods in advance. This forecast is then lagged back to the source locations which replenish the customer service location. At the upper echelons of the operating structure the period to be forecasted depends on the estimated lead time from the location in question to the market. The estimate is computed for each product in an event subroutine.

The first forecast procedure contained in the forecast module is the basic exponential smoothing model developed by R. G. Brown.9 Brown's model applies a static smoothing constant (α) to the previous period's sales and forecast by the formula:

    Forecast1 = α(Sales0) + (1 - α)(Forecast0).

As this model requires little input and an unchanging smoothing factor, it is easy to use. Since it contains no forecast error adaptability or specific trend or cyclical elements, it offers a limited capability. However, exponential smoothing provides a control measure by which the benefit of added sophistication and complexity can be evaluated.10

9R. G. Brown and Richard F. Meyer, "The Fundamental Theorem of Exponential Smoothing," Operations Research 9 (No. 5; September-October 1961): 673-685.

10By comparing the performance of a simulated system when using Brown's procedure to the performance of the same system when using a more complex procedure, the determination of whether or not to implement the advanced procedure may be made on the basis of quantified levels of improved performance. Such a comparative analysis could be made between any combination of the four available forecast procedures.
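Brown's formula can be sketched in a few lines (an illustration with an arbitrary smoothing constant of 0.2 and an invented sales series, not the SPSF implementation):

    def brown_forecasts(sales, alpha=0.2):
        """Forecast(t+1) = alpha * Sales(t) + (1 - alpha) * Forecast(t),
        seeded with the first observation as the initial forecast."""
        forecast, out = sales[0], []
        for s in sales:
            forecast = alpha * s + (1 - alpha) * forecast
            out.append(forecast)   # the forecast carried into the next period
        return out

    print(brown_forecasts([100, 120, 90, 110]))
    # -> approximately [100.0, 104.0, 101.2, 102.96]

Note how a small α discounts the most recent sales observation heavily in favor of the running forecast; the technique therefore tracks stable demand well but adapts slowly to shifts, the limitation the adaptive techniques below address.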
A formulation developed by P. R. Winters[11] constitutes the second forecast procedure. Winters' approach is technically similar to Brown and Meyer's exponential smoothing. The major difference is that Winters incorporates seasonality and trend variation. The same smoothing constant (α) is used in this technique as in Brown and Meyer's. Additional constants are also included for smoothed estimates of seasonality (β) and of trend (γ). When these three factors are combined, a sales forecast is obtained for a period which considers seasonal and trend projections. The estimates for seasonality and trend are derived by weighting past estimates by their smoothing constants. That is:

    Seasonality estimate1 = β·(Seasonality0) + (1−β)·(Seasonality estimate0)
    Trend estimate1 = γ·(Trend0) + (1−γ)·(Trend estimate0).

Since the Winters technique incorporates trend and seasonal variations, it represents a more sophisticated approach which should improve forecast accuracy. Winters' technique has the deficiency, however, of not adapting its smoothing constants to changes in the level of forecast error.

[11] P. R. Winters, "Forecasting Sales by Exponentially Weighted Moving Averages," Management Science 6 (No. 3; April 1960): 324-342.

Procedure 3 in the forecasting module was selected from among several techniques designed to increase the adaptability of exponential smoothing. The procedure employed was designed by Trigg and Leach[12] and is representative of the techniques of adaptive smoothing. Trigg and Leach's technique operates with a smoothing constant (α) set equal to the absolute value of a tracking signal. The tracking signal is a measure relating the degree of forecast error during the most recent period to that in the preceding periods. When error becomes large, the tracking signal approaches a value of one. As the value of the tracking signal increases, so does the corresponding value of α, allowing the forecast to adapt more quickly to changes in demand. As this adaptation leads to decreases in the error, the tracking signal and the value of α will decline. Although the Trigg and Leach technique offers adaptability, consideration of trend and/or seasonality is omitted.

[12] D. W. Trigg and A. G. Leach, "Exponential Smoothing With an Adaptive Response Rate," Operational Research Quarterly 18 (No. 1; March 1967): 53-59.
The next step in sophistication is Procedure 4, which includes adaptability plus trend and seasonal components. This is the Self-Adaptive Forecasting Technique (SAFT) developed by Roberts and Reed[13] and based upon the work of P. R. Winters[14] and G. E. P. Box.[15] SAFT is similar in concept to the Trigg and Leach approach but employs three smoothing constants instead of one.

[13] Stephen D. Roberts and Ruddell Reed, Jr., "The Development of a Self-Adaptive Forecasting Technique," AIIE Transactions 1 (No. 4; December 1969): 314-322.
[14] Winters, pp. 324-342.
[15] G. E. P. Box, "Evolutionary Operations--A Method for Increasing Industrial Productivity," Applied Statistics 6 (No. 2; June 1957).

SAFT employs the Evolutionary Operations (EVOP) technique of response surface analysis to determine optimal combinations of the values of the three smoothing constants. Using historical data, the EVOP systematically varies the values of each of the smoothing constants, obtaining a measure of the forecast error for each combination. Each of the error terms so determined is squared, and together they form the response surface. This surface is then searched to determine the minimum squared error, with the set of smoothing constants corresponding to this error being automatically fed into Winters' smoothing model.

Roberts and Reed have thus developed what may be termed a closed-loop dynamic forecasting technique. Any radical change in the patterns of basic sales, trend, or seasonality which causes an increase in the forecast error is automatically accounted for by the adjustment of smoothing constant values.

The purpose of the Forecast package is to provide a range of techniques for use in the SPSF Testing Environment. One benefit is that the accuracy of the four techniques can be subjected to cost/benefit analysis under controlled conditions. For any specified market area it is possible to experimentally evaluate the accuracy of each technique, the value of increased accuracy between techniques, and the cost of obtaining such improvement. Each technique may be used in combination with demand Procedure 1 as a basis for comparison against the actual operating results achieved using the firm's own forecast procedure. By using SPSF capabilities such as these, the decision of whether to implement a more sophisticated forecast technique can be made on the basis of expected costs and benefits. As noted in Chapter I, one of the objectives in this research was to detail such an analysis, the results of which are presented in Chapter V.

Analysis Module

The analysis module of the SPSF Testing Environment calculates the levels of cost and service variables for each simulation run. It consists of two distinct routines, the Cost Generator and the Report Generator, each of which is discussed.

Cost Generator

The SPSF cost generator is an independent routine which computes the various costs incurred by simulated distribution operations. The independent operation of this routine fulfills two functions. First, the absence of cost components in the main body of the simulation model provides more efficient utilization of computer core. Second, the use of an independent cost generator permits sensitivity analysis of different cost assumptions without incurring the additional expense and time of duplicate runs.

The cost generator computes the expense of various distribution activities on the basis of accumulated flow and activity from the operations module. Cost elements include transportation, warehousing, storage, inventory, ordering and backorder costs.
Each element includes a fixed and/or variable component, depending on the function. The computation of costs is linear and dependent on a unit of throughput or on mean activity levels, depending upon the function. Since the operations simulator is a dynamic model, an accurate measurement and costing of both status and flow variables is possible. A summary of the elements with their methods of computation is presented in Table 2.1.

Table 2.1 Cost Function Summary

    Cost Element                       Can Be Uniquely Defined By              Is A Function Of                        Components
    Transportation                     Shipping source, shipment destination   Weight shipped                          Variable
    Warehousing (throughput)           Location, product                       Weight, unit, or cube throughput        Fixed, variable
                                                                               (in, out, or (in + out)/2)
    Warehousing (storage)              Location, product                       Average inventory level, elapsed time   Variable
    Inventory cost (includes storage)  Location                                Average inventory level, elapsed time   Variable
    Order cost (shipping)              Location                                Number of orders, number of lines       Fixed, variable
    Backorder cost (shipping)          Product                                 Number of orders backordered            Variable

The cost generator obtains inputs for calculations from two sources. Data related to activity and the average level of distribution functions are obtained directly from the operations module. Cost parameters applied to these operations as employed in this research are specified in Chapter IV. Given these sources of input, the cost generator creates a new data file which contains the simulator output and calculated costs. This new file provides input to the management report generator.

Cost Components

As a final element in the overview of the cost generator, this part reviews the general characteristics of each cost element. Transportation cost is computed for all replenishment orders on the basis of the weight shipped. In addition to being dependent on the source and destination of the shipment, the rate is also based on the transit mode. For each source-destination pair, up to six different weight breaks may be used to represent any combination of modes (a sketch of this rate lookup appears at the end of this part). There is a fixed cost available to transportation at each origin location to permit simulation of private fleet operations. To complete total costing of private transport, a variable cost per hundredweight is included between each origin and destination.

Warehousing cost is divided into handling and storage. Handling cost is unique by product and facility location and is based on facility throughput. Handling costs include fixed expense related to labor, supplies and material handling equipment. The variable portion of handling is based on weight, cube, or units, and can be calculated on inbound volume, outbound volume, or an average of the two. Storage cost is variable based on inventory level and unique to each facility location. The components of storage include building depreciation, utilities, and any other facility-specific charges.

Inventory cost is calculated as a function of the average inventory level. The assessment percentage may be varied by product and location if management desires.
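The following sketch illustrates the rate lookup described above: a fixed origin cost plus a variable rate chosen from weight breaks. The break points and rates are invented; the cost parameters actually used are given in Chapter IV.

```python
# Weight-break transportation cost lookup for one source-destination pair
# (illustrative figures only).
from bisect import bisect_left

# (inclusive upper weight bound in cwt, rate per cwt); up to six breaks
# may represent different mode combinations.
WEIGHT_BREAKS = [(50, 8.40), (100, 7.10), (200, 6.25), (500, 5.60),
                 (1000, 4.95), (float("inf"), 4.30)]
FIXED_ORIGIN_COST = 125.00   # private-fleet fixed cost at the origin

def transport_cost(weight_cwt: float) -> float:
    """Fixed origin cost plus the variable rate for the applicable break."""
    bounds = [b for b, _ in WEIGHT_BREAKS]
    rate = WEIGHT_BREAKS[bisect_left(bounds, weight_cwt)][1]
    return FIXED_ORIGIN_COST + rate * weight_cwt

print(round(transport_cost(320.0), 2))  # falls in the 200-500 cwt break
```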
The cost for order placement and processing is assessed at the shipping location on both a fixed and a variable basis. The variable cost can be calculated as a function of the number of orders and/or the number of lines shipped, and covers such expenditures as labor and supplies. The fixed cost element is assessed on a per-order basis and covers such items as hardware and supervisory expenses.

The final cost relates to backorders. The variable charge is assessed against all out-of-stock replenishment orders. The charge is assigned on a location basis and can be varied.

The calculation of cost elements based upon the level of activity in the operations simulator provides the basis for validation of simulated operations by management. Given a specific distribution configuration, these costs should generate expense levels approximating those actually experienced by the firm over the period corresponding to that of the order history in use. The reports to be printed based upon these expense levels are discussed in the next part.

Report Generator

This subsystem of the analysis module produces management reports on system performance. Independent of the remainder of the SPSF Testing Environment, an intermediate file is used to pass output variables from the simulation to the report generator.

The input to the report generator includes simulator output and a small number of parameter cards. Parameter input includes descriptive names of facility locations, customers and products, run identification information, and report selection parameters. The descriptions make the model output more intelligible to management users, while the report selection parameters define the level of reporting detail that is desired.

There are 24 potential reports which may be divided into five categories. Each category is divided into records which may be individually selected. The categories and their records are listed in Table 2.2. These reports provide the means whereby simulated operations may be analyzed to determine critical events or variables.

Table 2.2 Report Categories and Records

    Sales and physical levels: System customer sales; System shipments; System receipts; System inventory report; Replenishment sales; Replenishment sales and in-transit inventory; Replenishment volume: weight shipped; Product inventory report; Product sales report (units); System bleeding report; Product bleeding report

    Costs: System cost; Replenishment volume: cost of shipping

    Service: System service measure; System percentage of orders met by quantity; System backorder recovery: thousands of dollars filled within days; System backorder recovery: tens of units filled within days; Replenishment order cycle time summary

    Error: Operating error report; Forecast error report

    Managerial summary: Run summary: sales; Run summary: service; Run summary: inventory and backorders; Run summary: costs

Conclusion

This chapter has provided an overview of the SPSF Testing Environment. The primary purpose of the chapter was a detailed discussion of each module that forms an integral part of SPSF.[16] Chapter III provides a technique and literature review of time series analysis forecasting techniques. Chapter IV details the structure of the SPSF Testing Environment as employed in this research.

[16] For the complete documentation of the SPSF Testing Environment see Donald J. Bowersox et al., Simulated Product Sales Forecasting: Managerial Documentation (East Lansing: Bureau of Business Research, Michigan State University, forthcoming).
CHAPTER III

LITERATURE REVIEW

Introduction

To reach the stated objectives of this research, four time series analysis forecasting techniques of varying levels of sophistication were employed in the experimental design. The selected techniques represent only a sample of a broad range of currently available models. The purpose of this chapter is to provide an overview of the more commonly employed short-term forecasting techniques, including those employed in this research. Before dealing with specific techniques, however, the three basic approaches for estimating future sales will be reviewed, followed by a general discussion of time series analysis.

The Nature of Future Sales Estimation

Concern over the importance of improving forecasting techniques has varied with the state of the business environment. In times of continually climbing demand, business concerns have tended to de-emphasize improvement. This is due to the phenomenon of the "self-fulfilling prophecy": no matter what forecast is made, demand is always greater. Since production is geared to meet the forecast and, in most cases, all available units are sold, the forecast is self-fulfilling, and the unfilled demand, or full potential of the market, goes unnoticed.

In periods where demand oscillates greatly around either a stationary or decreasing average demand pattern, the inadequacies of forecasting procedures become more obvious. The alternately rising and falling sales patterns characteristic of the past several years have generated particular concern for improved short-range forecasting of sales for the purpose of inventory and production level control. A short-range forecasting period is defined as any time span up to one year, encompassing forecast periods as short as one day. This literature review is concerned with such short-range forecasting.

Three major approaches for estimating future sales can be identified from the practice of business. The first, qualitative analysis, is a "what do you think" or "seat of the pants" approach that formulates predictions based upon the experience and intuition of key personnel. This approach, due to its opinionated base, is inadequate to handle periods of oscillating sales and does not lend itself well to routinization. For purposes of this review, qualitative techniques will not be referred to as forecasts. The term "forecast" is reserved for procedures which utilize a formalized statistical or mathematical model to arrive at the estimate of future events. All less formal procedures are referred to as "predictions."

The second approach is a formal forecast procedure identified as correlation analysis. Correlation analysis utilizes the method of least squares to relate selected measurable factors to future sales. Although easily computerized, it takes a considerable amount of data to render an accurate forecast. For this reason, correlation analysis is more applicable to long-range trend analysis and is not the most efficient procedure for rendering short-range forecasts.

A third procedure is time series analysis. Similar to correlation analysis, time series analysis represents a formalized approach and includes the publicized procedures of exponential smoothing and spectral analysis. This technique review and this research are limited to the broad range of time series techniques because of their applicability to short-term forecasting.
This group of techniques is deemed particularly applicable to the development of routinized procedures for the short-range forecasting of individual product sales.

The next section provides a synthesis of the two categories of time series forecasting procedures which directly relate to short-range product sales forecasting. The second section details individual forecasting techniques germane to the purposes of this research.

Synthesis of Time Series Approaches

Time series analysis encompasses numerous forecasting techniques in which the patterns and movements of historical data are analyzed for recursive characteristics. Based upon the specific attributes exhibited by the data, techniques of varying sophistication may be employed to project future levels of the variables under forecast. This section of the review discusses each of the general time series techniques in order of increasing complexity.

The section begins with a review of the basic components which comprise time series analysis. These components consist of trend, cyclical variation, seasonality, and irregular factors. Trend is a long-range pattern of change in the level of expected sales which may alternately reflect a prolonged increase or decrease in expected sales. This change can take place at a constant or an accelerating rate. Trend components are normally produced by a multitude of macro factors that may range from changes in market share to growth in the number of customers or fundamental shifts in buying habits. Cyclical variations reflect a recurring pattern of sales above and below the trend line occurring over an extended period. These patterns may stem from swings in the economy as well as from changes in established industry operating practices. Seasonality is similar to cyclical variation, but is of a short-term nature. Factors causing seasonality may consist of weather conditions, holidays, or seasons of the year, as the name implies. The last component of a basic time series consists of irregular factors, or "noise." Noise is defined as random variation of actual sales around the sales figure predicted by the combined trend, cyclical, and seasonal estimates. Noise is by definition unexplainable, and it is desirable to minimize its impact.

The basic approach of time series forecasting is to isolate the three predictable components and to estimate the value of each during a selected future period. The forecast results from reaggregation of these projected components into a future sales figure. This procedure is common to all forms of time series analysis. Techniques which utilize more sophisticated quantitative methods are reviewed in this section; however, each represents an attempt to estimate future sales using historical patterns of basic time series components. The general approaches, reviewed in order of increasing complexity, consist of moving average, exponential smoothing, adaptive smoothing, and spectral analysis.

Moving Average

Moving average may be described as a two-step process. First, the arithmetic mean of the data points for a selected number of past periods is computed. Second, this mean value is used in conjunction with a trend estimate for forecasting future sales. Each period, when a new forecast is made, the oldest data point in the series is discarded and replaced by the current period's actual data. The method is extremely simple and assigns equal weight to each period regardless of age.
This makes the technique unresponsive to changes in the demand pattern, with the result that forecasts lag considerably behind such changes. The technique is applicable in situations where demand is relatively stable with little or no random variation. The number of periods used in calculating the mean, i.e., the length of the interval, is subjectively determined in normal practice. As such, all the disadvantages inherent in a judgmental decision are present. However, subjective analysis may be advantageous if based on an understanding of the market. Caution should be exercised in the selection of a time interval to ensure that it will reflect the effects of seasonal, irregular, and cyclical factors. For example, if the data exhibit seasonality, it is common to use an interval which corresponds to one complete seasonal cycle.

Exponential Smoothing

Exponential smoothing represents an extension of the moving average method and is more sophisticated in that it assigns more importance to recent data. It is based upon the definition of a smoothing constant which assigns weights to each period's data according to age. Increased importance (weight) can be assigned to more current data by using a smoothing constant, α, with a value that approaches one. The weight assigned to older data declines exponentially through the adjustment factor (1−α)ⁿ, where n is the number of periods under consideration. The advantage of this operation is that the "average" will respond more quickly to changes in the level of demand. Smoothing constants may also be employed to forecast values of trend (γ) and seasonality (β) for the future period. When either trend or seasonality is considered separately in the forecast, it is termed double exponential smoothing. When both trend and seasonality are included, it is triple exponential smoothing.

Exponential smoothing does not eliminate the need for judgmental decisions. The forecaster is required to set the values of the smoothing constants, and herein lies the basic disadvantage of this technique. If a large value of α is used, the system will respond quickly to any change in the mean level of demand. However, such a large value also amplifies any random fluctuations about the mean, giving the forecast series a more sporadic appearance. On the other hand, if the value of α is decreased during periods of stationary demand, the model's ability to adjust to a sudden increase in the mean level of demand is reduced. The user is thus faced with a tradeoff between limiting the noise exhibited in the series and the quickness with which the system responds to changes in demand.
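The contrast can be seen in a small sketch of the two methods applied to a series with a step change in level (all figures invented for illustration):

```python
# Moving average vs. single exponential smoothing on a step change.
from collections import deque

def moving_average_forecast(history, n=6):
    """Equal weight to the last n observations, regardless of age."""
    window = deque(history, maxlen=n)
    return sum(window) / len(window)

def exponential_smoothing_forecast(history, alpha=0.3):
    """Weight on older data declines by the factor (1 - alpha)**n."""
    forecast = history[0]
    for x in history[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

sales = [100, 102, 99, 101, 120, 125, 123]          # step change at period 5
print(round(moving_average_forecast(sales), 1))       # still pulled down by old data
print(round(exponential_smoothing_forecast(sales), 1))  # tracks the new level faster
```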
Adaptive Smoothing

One of the major disadvantages of techniques employing exponential smoothing is that they are slow to recognize turning points. When a shock occurs in the series, the process takes several time periods to adjust to the change, depending upon the value of α. While large values of α enable the system to adapt more quickly, they are usually found unfavorable for general use because they fail to filter out noise and actually accentuate sporadic deviations. When small values of α are used, the system takes a long time to adjust, with biased forecasts occurring until the model homes in again. In an effort to alleviate this paradox, adaptive techniques employ tracking signals to monitor the error in the system. When a certain level of error is exceeded, the simpler of these techniques issue messages inviting manual intervention. More sophisticated techniques automatically increase the value of the smoothing constant to reduce the error. Once the error is reduced, they gradually return the smoothing constant to its original value. It is this approach of automatic adjustment which is known as adaptive smoothing.

Spectral Analysis

Spectral analysis[1] is a statistical technique which employs sinusoidal functions to analyze time series data. By combining various sine and cosine functions, it is possible to approximate the cyclical components of a time series, enabling the forecaster to obtain a power spectrum. The power spectrum, or spectral density, converts the variance of the time series data into a set of components which may then be attributed to various characteristics displayed by the series.

[1] Discussion based upon Steven C. Wheelwright and Spyros Makridakis, Forecasting Methods for Management (New York: John Wiley & Sons, 1973), pp. 126-129.

Spectral analysis is based upon Fourier analysis theory, which says that any series of data or mathematical function may be approximated by combining different sine and cosine functions with different periods and different amplitudes. Basically, the process requires fitting an equation such as:

    Y(t) = a1·cos(w1·t) + a2·cos(w2·t) + ... + an·cos(wn·t) + E(t)

to a set of time series data, Y(t). The a's represent the amplitudes of the waves, while the w's determine the periods of the cyclical components (see Figure 3.1). The error term, E(t), performs the same function it does in regression analysis, providing a measure of the variance which is unexplained by the sinusoidal function.

[Figure 3.1: Wave Periodicity and Amplitude -- a sine wave annotated with its amplitude and period.]

When the periods in the time series are unknown, spectral analysis provides an ideal tool for determining the cyclical components which the series exhibits. When the periods are known beforehand, the sinusoidal functions are treated as independent variables, with the result that the process is similar to regression analysis. Spectral analysis also aids in the study of time series data by enabling the forecaster to remove the trend component through the process of "filtering." The correct filter (the determination of which is based on the statistical goals of the analysis) will remove unwanted characteristics of the series, such as trend and seasonal variation, without disturbing the desired components. The major advantage of this process is its ability to illustrate whether or not the series is actually a random process, and to enable the forecaster to study trend and seasonal factors.

The major disadvantage of spectral analysis is the amount of data needed to obtain accuracy. Although the number of observations may be increased by shortening the period under analysis, this in no way increases accuracy. Furthermore, spectral analysis must be combined with other statistical techniques, as it does not in itself adjust series data.
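As a modern illustration of the power-spectrum idea, the following sketch estimates a periodogram with the discrete Fourier transform. This is a simplification of classical spectral analysis as surveyed above, and the series is invented:

```python
# Estimating a power spectrum (periodogram) with the DFT.
import numpy as np

t = np.arange(240)                       # e.g., 240 simulated days
# A hypothetical series: a 30-day cycle plus random noise.
y = 10.0 * np.cos(2 * np.pi * t / 30) + np.random.normal(0, 2, t.size)

y = y - y.mean()                         # remove the level before analysis
power = np.abs(np.fft.rfft(y)) ** 2      # periodogram ordinates
freqs = np.fft.rfftfreq(t.size, d=1.0)   # cycles per day

peak = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency term
print(f"dominant period = {1 / peak:.0f} days")
```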
Technique Review

This section reviews specific forecasting techniques contained within the general approaches discussed above. As a review of every available technique in these approaches is beyond the purpose of this research, the techniques reviewed below are intended as a sample of the levels of complexity and sophistication currently available.

R. G. Brown's Technique

In his initial development of exponential smoothing, Brown[2] described the basic rule of the method as follows:

    New average = α·(new demand) + (1−α)·(old average).

Thus, the new forecast is equal to the previous forecast plus some fraction of the forecast error.

[2] R. G. Brown, Statistical Forecasting for Inventory Control (New York: McGraw-Hill, 1959).

In an effort to monitor the accuracy of this system, Brown developed a tracking signal given as:

    Tracking signal = (sum of errors) / MAD

where:
    sum of errors = previous sum of errors + latest error; and
    MAD (mean absolute deviation) = (1−α)·(previous MAD) + α·|latest error|.

If the sum of the forecast errors exceeded four mean absolute deviations, the tracking signal was "tripped" and a notice inviting manual intervention was issued.

This technique was criticized and subsequently improved upon by D. W. Trigg in 1964.[3] Trigg pointed out two disadvantages inherent in Brown's tracking signal: (1) because it is based on cumulative forecast errors, once the limits are exceeded the tracking signal may not return within the limits even if the forecast model regains control; and (2) if the model gives extremely accurate forecasts, the tracking signal will go out of control as the MAD decreases. To overcome these disadvantages, Trigg recommended the following updating equations:

    Smoothed error = (1−α)·(previous smoothed error) + α·(latest error)
    MAD = (1−α)·(previous MAD) + α·|latest error|
    Tracking signal = smoothed error / MAD.

This simple modification limits the tracking signal to values between plus and minus unity and overcomes the limitations of Brown's method.

[3] D. W. Trigg, "Monitoring a Forecasting System," Operational Research Quarterly 15 (No. 3; 1964): 271-274.

Brown recognized the advantages of Trigg's technique and proceeded to develop an adaptive smoothing model.[4] Given an initial value of the smoothing constant, α, Brown employed Trigg's tracking signal to automatically increase α to a predetermined level for nine consecutive periods when the forecast error exceeded a set limit. At the end of the nine periods, the value of α was automatically reduced to its initial value. If the tracking signal was tripped during the nine periods when α was at an increased level, a signal was issued requiring intervention to re-estimate the initial coefficients. The basic advantage of this technique over Brown's earlier model is that the smoothed error will return to zero if the increased value of α does correct the bias; the cumulative sum of the errors had to be reset by manual intervention in Brown's original method.

[4] R. G. Brown, Decision Rules for Inventory Management (New York: Holt, Rinehart & Winston, 1967).
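A minimal sketch of Trigg's smoothed-error tracking signal (the initialization choices are assumptions made for illustration):

```python
# Trigg's tracking signal: smoothed error / smoothed MAD, bounded in (-1, 1).
def update_tracking(error, smoothed_err, mad, alpha=0.1):
    """Return the updated smoothed error, MAD, and tracking signal."""
    smoothed_err = (1 - alpha) * smoothed_err + alpha * error
    mad = (1 - alpha) * mad + alpha * abs(error)
    signal = smoothed_err / mad if mad else 0.0
    return smoothed_err, mad, signal

smoothed_err, mad = 0.0, 1.0
for error in [1.0, -0.5, 0.8, 6.0, 5.5, 4.8]:    # a run of biased errors
    smoothed_err, mad, signal = update_tracking(error, smoothed_err, mad)
    print(f"signal = {signal:+.2f}")              # drifts toward +1 under bias
```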
Trigg and Leach's Adaptive Smoothing Model

To overcome the need for manual intervention, Trigg and Leach[5] developed a procedure to automatically adjust the value of α in response to the magnitude of the forecast error. This is achieved by setting:

    α = |tracking signal|.

This simple, yet effective, mechanism allows the system to adjust quickly to changes in the data by automatically increasing the value of α. Once the adjustment has been made and the forecasting system is again in control, the value of α is automatically reduced to filter out randomness. The tracking signal is actually a measure of how large the recent forecast error is in comparison to past smoothed errors. The smoothed error will vary about zero as long as the forecasts are close to the actual values, producing a tracking signal which also varies about zero, between plus and minus unity.

[5] D. W. Trigg and A. G. Leach, "Exponential Smoothing With an Adaptive Response Rate," Operational Research Quarterly 18 (No. 1; March 1967): 53-59.

More sophisticated models are available. One adaptation of Brown's method (described previously) is to set the absolute value of the tracking signal equal to the first element in the smoothing vector. Such a model filters out noise as accurately as the more common techniques employing a fixed response rate, yet responds much more rapidly to shocks in the series. This method also eliminates the need to determine a proper value of the forecast smoothing constant. However, the forecaster must still set the value of α used in computing the tracking signal; the lower this value, the more cautiously the system will respond to shocks.

Smith's Adaptive Model Corrector

Following the work of Trigg and Leach, Smith[6] developed an adaptive model corrector which seeks to adjust the coefficients of the forecasting model to the best estimates of the "ideal" coefficients. Implicit in the concept of "ideal" is the notion of a set of coefficients capable of leading to the lowest level of forecast errors possible. Coefficient adjustments are made at two levels: changes in the origin of time, and correction for forecast errors. The level of adjustment is expressed as:

    h·e(t) = model correction for forecast error,

where:
    h = smoothing vector; and
    e(t) = current error.

[6] David E. Smith, "Adaptive Response for Exponential Smoothing: Comparative System Analysis," Operational Research Quarterly 25 (No. 3; September 1974): 421-433.

The smoothing vector h is made responsive to random variations in demand through the use of the smoothed error (SME) and the smoothed mean absolute deviation (SMAD):

    h = |SME| / SMAD.

The smoothing vector, being a function of the rate of response β of the forecasting system, is taken as "an exact statement of the proper β"; that is, the proper β is read directly from this ratio.

The steps for implementation of the method are as follows:

a. based on the forecast errors observed, the smoothed error and the smoothed mean absolute deviation are calculated, determining the level of error bias;
b. the error bias is used in a "beta function" to determine the level of response β; and
c. β is smoothed and used to adjust the coefficients in the forecasting model.

Smith tested this method under stationary demand with random variations, and under non-stationary demand where the pattern of the series varied over time. Smith felt that a good adaptive technique should be responsive only to changes in the pattern, and not to random variations around it. As such, β should show stability under random demand, and adaptivity under conditions of non-stationarity. The results of his testing under these criteria showed that the adaptive model corrector performed well in comparison with Trigg's constant coefficient adjustment. However, Smith's model does utilize the concept of a tracking signal (smoothed error divided by the mean absolute deviation) earlier devised by Trigg and later expanded by Trigg and Leach.
Added realism and potentially greater accuracy could be achieved through consideration of such elements. A technique employing this methodology is that developed by P. R. Winters.7 Winters' model is exemplary of techniques which have sought to make more efficient use of time series data while decreasing calculation time and storage requirements. The ultimate method, as seen below, represents the final step in a process which started out as a simple application of weighted average and was improved and made more realistic through the incorporation of seasonal and trend factors. In its Simplest form, Winters' technique may be mathematically expressed as: S = a - 2 (1-OI)n - S + (1-OI)MH - S , t _ t-n b n-O where: St = forecast for period t; St-n = actual sales in period t-n; Sb = beginning value of S; number of observations under analysis; and smoothing constant. Q 11 )M+1 In the case of a large N value, (l-a becomes very small and may be discarded altogether. 7P. R. Winters, "Forecasting Sales by Exponentially Weighted Moving Averages," Management Science 6 (NO. 3; April 1960): 324-342. 57 The Simplicity of the above equation permits easy computation of the sales forecast but does not consider seasonality. Contending that more often than not, "the amplitude of the seasonal is proportional to the level of sales," Winters incorporated a seasonal factor (Ft) into the equation in a multiplicative fashion. Thus, M S ~ t-n M+1 s=a- z (I-a)"-—————+(I+a) -s t n=O Ft-L-n b and J S t-n-L J+l F =B- z (l-B)"-(~ )+(l-B) -F. t n=O St-n-L bt where: B = smoothing constant for seasonals; O s B s l; Fbt = initial value of F for the period in question; La II the largest integer less than or equal to M/L; and r II periodicity of the seasonal effect. Thus, the forecast is a function of past sales, the weights a and B, the initial value of Sb, and the set of values of Fbt’ which are L in number. In implementing the above model, St is revised each period, while F is revised only once per cycle. Thus, for forecasting one period ahead, the equations with seasonality would be: St t-L + (l-a) - S 0 s a s 1, MI I t T a F t-l’ and 0 IA ID IA ..a S t as, + ‘l- . F S ( B) _n II ‘00 o t-1’ 58 Generalizing for any one period into the future, one would have: UN St,T = t ' Ft-L+l’ or St,T = St . Ft-L+T’ where: T s L, and is the number of periods into the future. Going a step further and incorporating trend effects into the analysis, the model becomes: St ~ + (1-a) (S t-L t‘1 where: Rt-l = most recent estimate of the additive trend factor. The trend effect is incorporated in an additive fashion because it represents the units per period that the expected sales rate, St’ is increasing or decreasing. In forecasting for one period ahead, the revised form of the seasonal remains the same; that is: St Ft = 8 ° gt + (1'8) ° Ft-L (2) whereas the revised estimate of the trend is made equal to: Rt = Y ° (St- St-I) + (I'Y) ° Rt-I; (3) where: y = smoothing constant for the trend value. 59 Generalizing, the forecast of sales T periods into the future would be equal to: St’T = [St + T0 Rt] ' Ft-L+T, T =1, 2, 3, ... L. (4) The implementation of this model may be summarized in the following steps: a. actual sales at period t is recorded; b. using equation (1), evaluate St, utilizing St_1 and Rt-l from the last period, and Ft-l as computed during the previous cycle; c. using equation (2), calculate Ft’ which now replaces Ft-L; d. using equation (3), calculate Rt’ which now replaces Rt-l; e. forecasts are then made using equation (4); and f. 
Winters tested his method by first defining the initial values of S̄, R, and F at t = 1, based upon an analysis of past sales from t−n to t−½n. Various combinations of α, β, and γ, ranging in value from 0.0 to 1.0, were then fed into the system and analyzed over the period t−½n to t, the best combination being that which generated the smallest standard deviation in the forecast error. He also compared his method with two other techniques: the first, a simple arithmetic average model; and the second, a model using seasonally adjusted exponential smoothing. Forecasts were generated and analyzed with each of the methods for sales of three different types of goods. In each case, Winters' exponentially weighted moving average (seasonally and trend adjusted) produced better results, expressed in lower levels of standard deviation.

Theil and Wage's Simplified Exponential Smoothing Model

As seen above, smoothing as a forecasting method has evolved into a technique accounting not only for a past time series of data but also for seasonality and trend effects. The ways these variables have been incorporated into the models have varied extensively, but only on a few occasions has the technique been simplified. Such a simplification is found in Theil and Wage,[8] who propose a linear approach to the incorporation of seasonals. That is, they view seasonality in an additive rather than multiplicative manner, as Winters did, and they adopt a simultaneous approach to the computation of the forecast rather than the more common recursive one. In defining the problem in linear form, the time series is broken down into the following components:

    Xt = X̄t + St + residual
    X̄t = X̄(t−1) + e(t−1)

where:
    Xt = the time series;
    X̄t = the trend value;
    et = the trend change; and
    St = the seasonal coefficient.

[8] H. Theil and S. Wage, "Some Observations on Adaptive Forecasting," Management Science 10 (No. 2; January 1964): 198-206.
Notice that the only element adapted ends up being inversely propor- tional to the most recent forecast error. Thus, when the observation Xt TS below the prediction P - Xt’ the old trend value Xt-l + e,c_1 t-l is lowered. Using the same approach as above, the trend change and seasonal are also adjusted out of equations (2) and (3): _e =-aoBof t-1 t-1,t = -(1-0) ° Y ° ft-I,t (6) 63 It is important to note that given 8 and y, seasonal adjustments are normally larger than the trend adjustments when a is small. This is due to the fact that ii is more dependent on the trend corrected §£_1, than on the seasonally corrected Xt’ when a is small. This is evidenced in equation (1). Since the trend changes are considered more stable, the seasonal factor will be subject to more adaptation. Finally, equations (1), (2) and (3) are performed recursively, with past data on seasonality and trend to determine ii, which is then used to compute the new trend and seasonal. However, Theil and Wage contend that it is not sufficient to use only past data in the compu- tation of ii; the current values should be used as well. These values (both past and current) may be used simultaneously, modifying equation (1), which becomes: R, = a - Ixt - St) + (l-a) - (TH + et) <7) Substituting equations (2) and (3) into equation (7), one obtains: xt = a - Ixt - SH) + (l -a') - (TM + em) (8) which is the same as equation (1), with only a different a. When current data are used (simultaneous approach), a = a' only when 8 = y. In conclusion, Theil and Wage's version of exponential smoothing has added realism over that of Winters, due to the incorporation of current data in a simultaneous fashion. The technique is also somewhat more simplified than Winters' method, due to the additive consideration Of the seasonal component. 64 Roberts and Reed Self-Adaptive Forecasting Technigue According to these authors, 9 most exponential smoothing techniques suffer from the fact that they must make certain critical assumptions about a time series in order to mathematically derive the "optimal" smoothing constant. To overcome the need for such assump- tions, Roberts and Reed have developed a self-adaptive forecast technique (SAFT) based on the work of P. R. Wintersl° and G. E. P. Box.11 SAFT is similar to the Trigg and Leach model in philosophy but employs three smoothing constants instead of one. SAFT provides a means by which optimal smoothing constants may be evolved and then monitored to form an automatic, closed loop system. To accomplish this, the exponential forecasting model of Winters is combined with a response surface analysis technique to determine optimal values for the smoothing constants. Winters' technique breaks a time series into four components: Level, seasonal, trend, and random. As Winters' model has been discussed previously, only the basic formulas involved are repeated. Leveling factor: X ' ) + (l-a) [St_](x) + Moo] (1) 5(=( t x) a Ft-L 9Stephen D. Roberts and R. Reed, Jr., "The Development of a Self-Adaptive Forecasting Technique," AIIE Transactions 1 (No. 4; December 1969): 314-322. 1°Winters. PP. 324-342. 11G. E. P. Box, "Evolutionary 0perations--A Method for Increasing Industrial Productivity," _pplied Statistics 6 (No. 2; June 1957). 65 where: St(x) = estimate of the level component of time t; x = actual observation of the time series at time t; and o = smoothing constant such that 0 < a < 1. 
Seasonal factor: Ft = 8(F:EL) + (1-8) Ft_1 (2) where: Ft = seasonality factor at time t; = periodicity of the season; and B = smoothing constant such that O < 8 < 1. Trend factor: Rt(x) = y[st(x) - sMIxfl + (I-Y) Rt_1(x) (3) where: Rt(x) = the trend at time t; and y smoothing constant such that O < y < l. The forecast error for Winters' model is defined as: E(t+ T) = SF(t+ T) - x(t+ T) (4) where: E(tI-T) = error of the forecast SF and observation x. (5) 66 The square of the forecast error is thus: RS(t+T) = Emu) - x(t+T)]2 where: RS = response surface for time (t+-T). The response surface analysis is perfonmed by the Evolutionary Operation (EVOP) technique developed by Box. The response surface itself is based upon the square of the forecast error for various combinations of the smoothing constants. A series of forecasts for past sales is made allowing a, B and y to vary, systematically, over a predefined range. A three-level factorial is employed to allow each constant to be set at a high, medium, and low value. Each forecast is then compared to the most recent period's sales to obtain a set of forecast errors, one for each different combination of smoothing constants. Each of the forecast errors is squared, and the set of these squared values form the response surface. The response surface may be one-, two- or three-dimensional, depending on whether all three smoothing constants are employed in the model. Once the response surface has been formed, the EVOP technique searches the surface for that combination of smoothing constants with the lowest squared error. By using the squared error, more attention is given to larger values, and all of the response surface values will be greater than or equal to zero. It is important to note that each of the smoothing constants may be set at three separate levels within a predetermined range. 67 By having a middle value in addition to the high and low values, the convexity (or concavity) of the surface may be studied. Roberts and Reed have called the effect resulting from such an analysis the change in mean effect, CIM. If the CIM exceeds the predetermined positive limit, the surface is convex, while exceeding the negative limit is an illustration of concavity. Once a complete cycle of the EVOP has been completed, a measure of the variance of the response surface may be obtained. The magnitude of this variance determines the need for adjustment of the smoothing constants. Given the variance, the smoothing constants are analyzed to determine whether the change in the forecast error is statistically significant. If so, the values which the smoothing constant(s) is (are) allowed to assume are changed in the direction of the signif- icance. For example, in a one-parameter model using only one smoothing constant, a, forecasts are initially made using three values of a. For purposes of illustration, let these values be .20, .25, and .30. Roberts and Reed determine the "effect A" according to: E=T-‘Y' (6) where: Y} = the average square of the forecast error for point i. If Ea exceeds a 99% confidence limit the "model design,‘I or the values which a is allowed to assume, are changed. If Ea exceeded the positive confidence limit, the new values might be .25, .30, and .35. Similar adjustments would be made if Ea exceeded the negative limit. In a 68 Similar manner, the parameters for a two- or three-dimensional model may also be adjusted. When the smoothing constant has been changed, the new values are automatically fed into the forecasting model. 
When a new forecast is made and the error of that forecast determined, the cycle is repeated. Roberts and Reed have combined the works of several past researchers to produce a dynamic forecasting technique. Through the use of response surface analysis, the accuracy of the forecasting model is constantly monitored, the values of the constants being changed automatically in order to minimize the squared error. During their initial research, Roberts and Reed compared the SAFT technique to the methods of Winters, Brown, and Chow. According to the authors, both Winters' model and SAFT compared favorably to the techniques of Brown and Chow in accuracy and rate of response. SAFT performed as well as the Winters technique, except that it required more computational time. D. C. Whypark's Technique Another continuous evaluation technique similar to those described above is that developed by Whybark.12 Whybark generated forecasts using Winters' model and sought to adjust the a level automatically in response to the error of the system. Instead of allowing the a level to vary according to some function, however, Whybark changes the smoothing value to a predetermined value if the 12D. Clay Whybark, "Testing an Adaptive Inventory Control Model," Working Paper No. 289 (Lafayette, Ind.: Purdue University, 1970). 69 forecast error exceeds established limits. Two separate criteria, or control limits, are used in monitoring the error of the system. First, the tracking signal calls for a change when a single data point is found to be outside a range of :20 from the current demand; and, second, a change is initiated when, for two consecutive periods, the data points were found to be :1.20 away from the mean level, and both were in the same direction. The new levels of a to be applied are predetermined. In the first period after either of the control limits have been exceeded, the value of a is automatically set at .8; for the second period the value is automatically reduced to .4; and in the third period, a returns to its initial value as set by the forecaster. This process is repeated each time either of the control limits is exceeded. In a recent study13 the Whybark model was found to react slightly faster to shifts in the mean level of demand than the Trigg and Leach method. The Trigg and Leach model, on the other hand, was found to be more sensitive to small fluctuations during a stable period. The cost of operation, computer time, and the standard deviation of the forecast error showed extremely little difference between the two techniques. Chen and Winters' Hybrid Exponential M This model was developed as a tool to forecast the peak load demand for an electric utility company on a daily basis. Because of 130. Clay Whybark, "A Comparison of Adaptive Forecasting Techniques," Logistical Transportation Review 8 (No. 3; July 1973): 13-26. 70 the direct influence which numerous causal variables exert on demand, and because of the inability of exponential and adaptive smoothing models to consider such factors, Chen and Winters combined subjective analysis based on experience with exponential smoothing to form a hybrid model.'“ The model considers four components, two of which are based on policy decisions with the remaining two being exponentially adjusted factors. The exponentially adjusted components include: B = Base demand for day t, updated daily according to business conditTOns, season of the year, and hours of daylight, growth in population, per capita use of electricity, steel production, etc. W. 
= Day of the week effect that is added to the base demand where j = l, 2, ... 7. The subjective components are: T = Temperature effect which is expressed through a rule which deteraneS whether or not Bt should be augmented according to the temperature on day t; the rule being as follows for Tt+l: (Tt+1 - 65°F), TH1 > 65°F; a - 0, 50°F 5 TM 65°F; (50 - Tt+,), Tt+1 ; 50°F; 1"Gordon K. C. Chen and P. R. Winters, "Forecasting Peak Demand for an Electric Utility With a Hybrid Exponential Model," Management Science 12 (No. 12; August 1966): 531-537. 71 where: 1(000) kilowatts; and Q II CL Cloud cover effect expressed through a rule that indicates how many kilowatts to add to the base demand as a function of the forecasted cloud condition for one day ahead; the rule for this model was defined as equal to: CLt+1 = 0, clear; CLt+1 = l, partly cloudy; CL“1 = 2, cloudy; and CLt+1 3, precipitation; where company value of 8 = 1(OOO) kilowatts. Having the forecasts ft+l and CL the forecasting equation t+1’ for a (t+ 1) period ahead is equal to: Dt+1 = Bt + Wj+1 + a - O + B °(CLt+l) (1) (50 - Tt+]) Given the actual demand (At)’ and before having the forecast for the day ahead developed, both the base demand effect and the day-of-the week effect are then updated with the equations below: Tt-65 Bt = a - At-(Wj+-a- o + B- CLt) + (l-a)- Bt_1 (2) ISO-Ti; 72 Tt - 65 Wj=b- At-(Bt+a- o +B-CLt) +(I-b1-wj (3) 50 - Tt where: (l-b)Wj = smoothing of last week Wj; and a,b = smoothing factors. Thus, the steps required for implementation are as follows: a. observe actual peak demand today as well as actual temperature and cloud cover; b. update B1 and NJ with equations (2) and (3); and c. forecast temperature and cloud cover for the one-day-ahead and use equation (1) to update peak demand for the next day. The method was tested by simulating different values for a, b, a, and B and by computing the square forecasting error: tn ( )2 ¢ = 2 D -A . t=t1 t t The optimal combination of a, b, a, and B was the one that minimized o. The values of B and W were arbitrarily initialized. In order to eliminate the effects of this specification from the system, the authors used the same device here as the one Winters used when testing his model. That is, computer runs were performed for a determined number of periods during which the transient effects of the initial values were supposedly eliminated. After this warm-up period, B and 73 W were updated and forecasts made with different values of a, b, a and B. Chen and Winters judged the performance of the model as excellent, based upon the fact that for 65% of the time under consideration the errors were less than 3%. Although this technique was designed for a specific industry, it appears to justify the use of exogenous variables in certain situations. Basically, all that is required is the definition of practical rules responsive to the varying characteristics of demand. Wheelwright and Makridakis' Adaptive Filtering According to these forecasters15 the general class of tech- niques which forecast using a weighted sum of past observations may be presented as follows: N SW = 3 W1. x1 , (l) i—l where: St+l = forecast for period t + 1; W1 = weight assigned to each observation i; X. = value of the ith observation; and 2.. II number of periods and weights. Wheelwright and Makridakis contend that such models do not guarantee that optimum forecasts will be made when using minimum error 15Steven C. 
Wheelwright and Makridakis' Adaptive Filtering

According to these forecasters,[15] the general class of techniques which forecast using a weighted sum of past observations may be presented as follows:

    S(t+1) = Σ(i=1 to N) Wi·Xi    (1)

where:
    S(t+1) = forecast for period t+1;
    Wi = weight assigned to observation i;
    Xi = value of the ith observation; and
    N = number of periods and weights.

[15] Steven C. Wheelwright and Spyros Makridakis, "An Examination of the Use of Adaptive Filtering in Forecasting," Operational Research Quarterly 24 (No. 1; March 1973): 55-65.

Wheelwright and Makridakis contend that such models do not guarantee that optimum forecasts will be made when minimum error is used as the judgment criterion. They base their criticism of techniques such as moving average and exponential smoothing on the fact that the forecaster arbitrarily specifies either the number of observations or the value(s) of the smoothing constant(s); these values are not determined through the application of any type of optimization principle. Accordingly, Wheelwright and Makridakis propose the use of adaptive filtering, which they contend "will always do as well if not better than either moving averages, exponential smoothing, or any other technique which uses a relationship between the weights that are independent of the time series in question."[16]

Adaptive filtering has as its major attribute the ability to define a signal pattern in a series of data rather than just smoothing the noise of the data. The process seeks to minimize the mean squared error and may be described as follows:

a. a series of past observations of the variable to be forecasted is obtained;
b. an initial value is specified for each of the weights, Wi;
c. a forecast is made using equation (1);
d. the forecast is compared with the actual data and the mean squared error is determined; and
e. steps (b) through (d) are repeated using different weights until a minimum error value is reached.

The adjustment of the weights is made using the following rule (sketched in code at the end of this part):

    W(j+1) = Wj + 2·K·ej·X    (2)

where:
    W(j+1) = revised weight vector;
    Wj = old weight vector;
    K = learning constant, which determines how fast the weights are adjusted;
    ej = forecast error using Wj; and
    X = vector of past observations.

[16] Ibid.

According to Wheelwright and Makridakis, the use of this rule "guarantees that the error will always decrease and will never increase."[17] However, the effective use of the rule requires that the forecaster have a thorough understanding of the relationships existing between the learning constant (K), the number of "training" iterations necessary to reach the minimum value, and the number of periods and weights. This is necessary because if the forecaster sets the number of training iterations at a high value, such as 101, the rule may reach the optimum at iteration 95, after which the values will oscillate above and below the optimum. Moreover, if too large a value of K is used, the process may fail throughout to determine the optimal weights. The forecaster must also consider both the type of demand and the degree of randomness it exhibits.

[17] Ibid.

Wheelwright and Makridakis explain the above factors by discussing the application of the method to constant, linear, and cyclical series. In the case of a constant, linear, non-random series, the series may be visualized as a bowl-shaped function, with the minimum error value at the bottom of the bowl. The bowl shape is due to the fact that the mean squared error is a quadratic function of the weights. The larger the value of K, the faster one gets to the bottom of the bowl. Also, increasing the number of weights substantially increases the number of iterations needed to determine the optimal weights. Because of these facts, both the rate of change in the mean squared error and the magnitude of the error for each iteration must be considered in determining the proper value for K.
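Rule (2) is recognizable today as a least-mean-squares update. A minimal sketch follows; the weight count, learning constant, and scaling of the series are assumptions:

```python
# LMS-style training of forecast weights under rule (2).
import numpy as np

def adaptive_filter(series, n_weights=3, k=0.05, iterations=50):
    """Train weights so a weighted sum of the last n observations
    forecasts the next one; returns the trained weight vector."""
    w = np.zeros(n_weights)
    for _ in range(iterations):                       # "training" iterations
        for t in range(n_weights, len(series)):
            x = np.array(series[t - n_weights:t][::-1])  # most recent first
            error = series[t] - w @ x                 # forecast error e_j
            w = w + 2 * k * error * x                 # rule (2)
    return w

# Series scaled to around 1.0: K must be small relative to the magnitude
# of the observation vector for the rule to remain stable.
series = [1.00, 1.02, 1.01, 1.04, 1.03, 1.06, 1.05, 1.08]
print(np.round(adaptive_filter(series), 3))
```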
In the case of a linear series with randomness, it was found that "as the randomness of the series increases, the loss of accuracy from having too large a K value also increases."18 Again, an increased number of weights, or more extensive use of historical data, helped to reduce the error, and this smoothing advantage compensated for the larger number of iterations required. For a cyclical series the technique also showed good results, with the number of weights employed being the most significant factor: increasing the number of weights for the same number of training iterations resulted in smaller mean squared errors.

18. Ibid.

A comparative analysis of performance between adaptive filtering and regression analysis, using data on champagne sales in France (May 1962 to December 1965), showed the former to be slightly better than the latter. In addition, adaptive filtering was found to be more effective (lower error) than seasonal time series analysis. Although this display of performance was not broad enough to allow any generalizations, it is clear that adaptive filtering requires less technical expertise to train a set of weights to describe a series than regression analysis does. In comparison to moving averages and exponential smoothing, the authors hold that adaptive filtering is at least as accurate, since both of those techniques may be viewed as special cases of forecasting through adaptive filtering with fixed weights. However, Wheelwright and Makridakis do point out that adaptive filtering cannot be used for long-range forecasts, as regression analysis may, and that it requires more computer time than either the moving average or exponential smoothing.

The Box-Jenkins Methodology

The statistical forecasting methodology developed by G. E. P. Box and G. M. Jenkins19 is the most sophisticated time series analysis/projection technique presently available. Other techniques based on exponential smoothing may actually be considered specific cases of the Box-Jenkins method. Probably the most statistically accurate technique, Box-Jenkins is also one of the most expensive and time-consuming methods available.

19. G. E. P. Box and G. M. Jenkins, Time Series Analysis, Forecasting and Control (San Francisco: Holden-Day, 1970).

All forecasting techniques based upon time series analysis assume that there exists some basic underlying pattern in the data, combined with a certain amount of random variation. Most techniques, such as regression analysis and the various forms of exponential smoothing, are used in an effort to identify and project such a pattern. Each varies in its accuracy depending upon the skill of the forecaster and the extent to which the model's assumptions about the data are in fact true. For example, in regression analysis the user must specify what he feels to be the basic pattern, while exponential smoothing assumes a horizontal pattern within the data.

When using the Box-Jenkins methodology, there is no need to specify or assume an initial pattern in the series. The technique aids the forecaster in fitting a mathematical model to the time series, and is especially adept at handling situations in which the series itself is extremely complex and/or the pattern is not readily apparent. Once a tentative model has been developed, the technique provides the forecaster with explicit information to aid him in determining whether or not the initial model is an adequate representation of the pattern.
If it is not, the Box-Jenkins technique provides general directions as to the steps which should be taken to improve the fit. Once an adequate representation has been achieved, forecasts may be made directly, the user being supplied with statistical analysis of the accuracy of the forecasts. These features, as well as the fact that the technique determines the lengths of the moving averages and the weights to be assigned to the historical data, give the forecaster improved versatility over methods which require the periods to be specified and/or do not consider such relationships. The Box-Jenkins methodology does not, however, generate fully automatic forecasts: manual intervention is required in their development. In contrast to models which automatically generate forecasts, this offers greater adaptability but requires a greater amount of expertise in the exercise of personal judgment.

The Box-Jenkins approach does not provide the user with a "canned" forecasting model, but rather a strategy by which the user is aided in developing and testing a model to meet the specific characteristics of the data. To develop such a model, Box and Jenkins describe the necessary steps as: (1) identification, (2) estimation, and (3) diagnostic checking. Each of these steps and the basic models of the method will be discussed below. First, however, it is helpful to review the principle of autocorrelation, the key tool in analyzing the pattern of a series.

Autocorrelation20 is similar to the concept of correlation, which measures the degree of association between two variables. Autocorrelation, however, is a measure of the degree of association between two values of the same variable at different time periods. This concept may best be understood by means of the following example. Suppose the variable A is monthly sales from January through April: A_t, A_{t+1}, A_{t+2}, A_{t+3}. A dummy variable, B, may be created by letting B_{t+n} = A_{t+n+1}. In other words, the second value of A becomes the first value of B, the third value of A becomes the second value of B, and so on. Now the degree of correlation between these variables can be measured. However, because both variables measure sales, their degree of association is called autocorrelation. If W_t represents the series of sales, the degree of autocorrelation between W_t and W_{t-k} may be determined as:

    autocorrelation of order k = ρ_k = E[(W_t - μ)(W_{t-k} - μ)] / E[(W_t - μ)²],  and ρ_k = ρ_{-k}.

20. Ibid.

In this example, k = 1, and the degree of autocorrelation between W_t and W_{t-1} would be of order 1. In the same manner, one could find the autocorrelation of order 2, 3, or n simply by constructing dummy variables C, D, or N. Such a procedure may be illustrated as follows:

Table 3.1 Autocorrelation Example

                      Variable
    Time    A     B     C     D
    1       10    20    15    25
    2       20    15    25    30
    3       15    25    30    27
    4       25    30    27
    5       30    27
    6       27

Where variable B was constructed using a time lag of k = 1, variables C and D are created simply by using time lags of 2 and 3, respectively.
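The calculation is easily verified on the series of Table 3.1. The following sketch computes the sample autocorrelation of order k directly from the definition above; it is illustrative only.

    def autocorrelation(w, k):
        """Sample autocorrelation of order k, per the definition above."""
        n = len(w)
        mean = sum(w) / n
        num = sum((w[t] - mean) * (w[t - k] - mean) for t in range(k, n))
        den = sum((x - mean) ** 2 for x in w)
        return num / den

    sales = [10, 20, 15, 25, 30, 27]              # the series of Table 3.1
    print([round(autocorrelation(sales, k), 3) for k in (1, 2, 3)])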
The general class of models in the Box-Jenkins methodology is described in the next subsection, followed by a review of the steps used to implement the technique.

General Class of Models

The general class of models in the Box-Jenkins methodology may be written as:

    φ_p(B)·Y_t = δ_0 + θ_q(B)·e_t    (1)

where:
Y_t = some stationary series;
δ_0 = a deterministic trend;
φ_p(B) and θ_q(B) = polynomials of order p and q;
B = a backshift operator such that B(Y_t) = Y_{t-1}; and
e_t = uncorrelated deviates, or "white noise," distributed as N(0, σ_a²).

This general class of models may be adjusted to describe any type or pattern of data. However, as it is too broad for specific application, Box and Jenkins have developed submodels described as: (1) Auto-Regressive (AR), (2) Moving Average (MA), and (3) mixed Auto-Regressive Moving Average (ARMA). When each of these submodels has been reviewed, they will be combined to produce the general class of models given in (1).

The basic form of the model for regression analysis may be written as:

    Y = a + b_1·X_1 + b_2·X_2 + b_3·X_3 + ... + b_p·X_p + e    (2)

where:
Y = the dependent variable;
X_1, ..., X_p = independent variables;
b_1, ..., b_p = relative weights assigned to each of the independent variables;
a = some constant; and
e = the random variation unexplained by the model.

In using this technique, the attempt is to analyze the effect which the p independent variables have on the dependent variable Y. If, however, one were to construct p dummy variables from the dependent variable Y (as in the example on autocorrelation), the relationship between different values of the same variable could be represented as:

    Y_t = φ_1·Y_{t-1} + φ_2·Y_{t-2} + φ_3·Y_{t-3} + ... + φ_p·Y_{t-p} + e_t    (3)

where:
Y_t = some stationary series;
e_t = random variation unexplained by the model; and
φ_1, ..., φ_p = the relative weights assigned to each of the past values of Y_t.

The only difference between equation (3) and the regression model given in (2) is that the X_p in (2) were p different independent variables, while the Y_{t-n} in (3) are dummy variables created from the dependent variable Y_t with time lags of 1, 2, 3, ..., p periods. Thus, equation (3) relates different past values of the dependent variable to its value at time t. In other words, equation (3) states that there exists a specific relationship, or pattern, between future values of sales and past values of sales. Because of the similarity of equation (3) to regression analysis, and because it relates different time values of the same dependent variable Y, it is called an Auto-Regressive model. If, in fact, equation (3) adequately represents the pattern of the data, and if the values of φ_1, φ_2, φ_3, ..., φ_p are estimated, one may easily forecast the future value of Y_t.

In some cases it may not be possible to represent the data adequately with the Auto-Regressive model given in equation (3). In anticipation of this, Box and Jenkins have provided a second subclass of models which may replace or be combined with the Auto-Regressive model. This second subclass is the Moving Average models, written as:

    Y_t = e_t - θ_1·e_{t-1} - θ_2·e_{t-2} - θ_3·e_{t-3} - ... - θ_q·e_{t-q}    (4)

where θ_1, ..., θ_q are the relative weights assigned to past values of the error. Just as equation (3) sought to relate future sales to past sales, equation (4) seeks to relate future sales to the error terms of several past periods; the errors e_{t-1}, ..., e_{t-q} are viewed as the independent variables. If equation (4) is an adequate representation of the data, one can generate forecasts by supplying the past errors and estimating their relative weights, the θ_q. If the data cannot be represented by equation (4), there is one final alternative: to combine the models given by equations (3) and (4).
The combination of the Auto-Regressive model (3) and the Moving Average model (4), the ARMA model, may be written as:

    Y_t = φ_1·Y_{t-1} + φ_2·Y_{t-2} + ... + φ_p·Y_{t-p} + e_t - θ_1·e_{t-1} - θ_2·e_{t-2} - ... - θ_q·e_{t-q}    (5)

which states that the future value of sales is dependent upon past sales and past forecast errors.

Recall from equation (1) that B was called a "backshift operator" such that:

    B^n(Y_t) = Y_{t-n}    (6)

and that φ_p(B) and θ_q(B) were defined as polynomials of order p and q, respectively. It is now possible to elaborate upon these terms in order to simplify equation (5). φ_p(B), defined as:

    φ_p(B) = 1 - φ_1·B¹ - φ_2·B² - φ_3·B³ - ... - φ_p·B^p    (7)

is called the autoregressive operator, and may be applied to Y_t such that:

    (1 - φ_1·B¹ - φ_2·B² - ... - φ_p·B^p)·Y_t = Y_t - φ_1·Y_{t-1} - φ_2·Y_{t-2} - ... - φ_p·Y_{t-p}    (8)

(Note the similarity between equation (8) and equation (3).) θ_q(B), defined as:

    θ_q(B) = 1 - θ_1·B¹ - θ_2·B² - θ_3·B³ - ... - θ_q·B^q    (9)

is called the moving average operator, and may be applied to e_t such that:

    (1 - θ_1·B¹ - θ_2·B² - ... - θ_q·B^q)·e_t = e_t - θ_1·e_{t-1} - θ_2·e_{t-2} - ... - θ_q·e_{t-q}    (10)

(Note the similarity between equation (10) and equation (4).) Now, substituting equations (7) and (9) into (5):

    φ_p(B)·Y_t = θ_q(B)·e_t    (11)

which, when the deterministic trend δ_0 is included:

    φ_p(B)·Y_t = δ_0 + θ_q(B)·e_t    (12)

is, in effect, the general class of models given in equation (1).

In each of the models described above, it was assumed that Y_t was a stationary series. To achieve stationarity, the original series Z_t may be differenced through application of:

    Y_t = (1 - B)^d · (1 - B^s)^{d₁} · Z_t    (13)

where:
d = number of regular differences;
s = the period or length of the season; and
d₁ = the number of seasonal differences.

Thus, the general class of models given in equation (1) may be written as:

    φ_p(B)·(1 - B)^d·(1 - B^s)^{d₁}·Z_t = δ_0 + θ_q(B)·e_t    (14)

which is described as an autoregressive moving average model of order (p, d, d₁, q). This model may be further expanded to estimate seasonal series through the application of seasonal autoregressive and moving average operators, given as φ_{p₁}(B^s) and θ_{q₁}(B^s), respectively. Thus, equation (14) becomes:

    φ_p(B)·φ_{p₁}(B^s)·(1 - B)^d·(1 - B^s)^{d₁}·Z_t = δ_0 + θ_q(B)·θ_{q₁}(B^s)·e_t    (15)

Given this basic model, the forecaster must now proceed through three basic steps to derive his own model.

Step 1: Model identification. The general model given in equation (14) represents too broad a class to be used for estimation. It is necessary to study the sample autocorrelation functions and various differencing patterns of the original series to identify a subclass of models. A tentative (p, d, d₁, q) model is identified by matching the pattern of the sample autocorrelations with a particular theoretical pattern. It is here that the characteristic of stationarity becomes important, as stationary series exhibit specific patterns in their sample autocorrelations. Non-stationary series, in contrast, exhibit sample autocorrelations which fail to dampen out with increasing lags. When the data exhibit this characteristic, further differencing is needed.

Step 2: Estimation. Having identified a tentative model, the parameters (φ_1, φ_2, ..., φ_p; δ_0; θ_1, θ_2, ..., θ_q) are estimated by minimizing the sum of the squared residuals according to:

    S(φ̂, θ̂) = Σ ê_t² = Σ (Z_t - Ẑ_t)²

Step 3: Diagnostic checking. Having estimated the parameters of the fitted model, the user must now examine the sample autocorrelations of the residual series, ê_t, to ensure that the model is an adequate representation of the actual series. If the model is an adequate representation, the residuals will be independently distributed about a mean of zero, or N(0, σ_a²). If the residuals are not independently distributed, the significant lags in the pattern of their sample autocorrelations will indicate the direction for improvement. For example, a significant autocorrelation in the residuals at lag n may indicate the need for a seasonal moving average parameter such as (1 - θ_n·B^n). With this new information, the user again performs the three basic steps, repeating the process until an adequate representation is achieved.
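The identify-estimate-check cycle is interactive in Box and Jenkins' formulation, but its descendants are available in standard libraries. The sketch below uses the statsmodels package, a modern convenience that post-dates the original methodology, to fit a (p, d, q) model to a synthetic nonstationary series and check the residuals; the series and the tentative order are assumptions for illustration.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    # A synthetic nonstationary series (a random walk) for illustration.
    z = np.cumsum(np.random.default_rng(0).normal(size=240))

    # Identification: one regular difference (d = 1) with a tentative
    # ARMA(1, 1) on the differenced series.  Estimation: model.fit().
    fit = ARIMA(z, order=(1, 1, 1)).fit()

    # Diagnostic checking: the residual autocorrelations should resemble
    # white noise; significant lags point to the direction for improvement.
    print(acorr_ljungbox(fit.resid, lags=[10]))
    print(fit.forecast(steps=5))      # one- to five-step-ahead forecasts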
Advantages and Disadvantages

The Box-Jenkins methodology offers the forecaster improved versatility over more automatic methods. By examining the autocorrelation patterns within the data, the user may actually design his own model, difference the data to obtain stationarity, and evaluate the adequacy of the estimation. With the gradual accumulation of data and repetition of the three-step process, the model may be modified according to the characteristics of the developing series. Because of the personal judgment involved, more expertise and statistical knowledge are required to use the Box-Jenkins methodology properly than is necessary for the more automatic techniques. The method is also more statistically complex, requiring more time and data to achieve an adequate approximation.

Conclusion

This chapter has provided a review of the basic elements of time series analysis forecasting and a detailed discussion of selected time series analysis forecasting techniques. The objective of this chapter was to provide the reader with an overview of the types of techniques available and an illustration of their varying complexities. The Appendix provides further support for this review in the form of synopses of selected texts and articles dealing with various theoretical and applied aspects of time series analysis forecasting. Chapter IV details the environmental conditions employed in this research.

CHAPTER IV

ENVIRONMENT SPECIFICATION

Introduction

This chapter presents the environmental conditions employed throughout this research. The first section details the structure and parameters of the physical distribution network simulated by the Operations Module of the SPSF Testing Environment; the physical characteristics of the products being simulated are also presented. The second section provides a review of lead time probability distributions and the criteria established for selecting a theoretical probability distribution to represent lead times. The selection is made and the specific distributions employed are provided. The third section presents the four time series forecasting techniques employed. The final section details the demand patterns employed and the method of their generation.

OPTS System 1

The physical distribution system employed in this research was that replicated by OPTS 1 of the SPSF Operations Module. Specifically, this system has the following characteristics:
a. a multiple echelon structure with inventory capability at each echelon; and
b. a single facility location at each echelon.

The structure of the network is illustrated in Figure 4.1. In addition to the nodes, or storage points, of the system, the figure also illustrates the links for communication and inventory flow which provide the interaction between the nodes. The lead time probability distributions used to simulate communication and transportation times are discussed below.

Costs and throughput results for the simulated network are recorded only at the distribution center (DC). Only variable cost factors are recorded and analyzed.
The factors and their levels are detailed in Table 4.1. Other cost elements at the DC (such as ordering cost) were not monitored because they are independent of the forecast technique being employed.

Table 4.1 Cost and Throughput Factors--DC

    Factor             Measurement
    Handling costs     $0.10/unit of goods shipped
    Inventory costs    25% of sales ($)

Between each pair of nodes, products are shipped by three separate modes, differing only in their cost and volume characteristics. These modes, their volumes, and their costs per hundredweight are provided in Table 4.2.

Figure 4.1 Representative Physical Distribution Network. (Nodes at each echelon, from the Plant through the DC to the Retailer, joined by communication links and transportation links.)

Table 4.2 Mode Characteristics

    Mode     Weight (lbs.)    Rate/cwt. ($)
    One      0-3,000          10.00
    Two      3,001-5,000       9.00
    Three    5,001-9,999       8.00

The modeled network handled 10 similar products in an identical manner. Not only did each simulated product have the same physical and economic characteristics, each was subject to the same cost and operating functions at the DC, as well as to and from the DC. The purpose was not to simulate a wide array of products, but to gain 10 observations from each simulation run. The physical and economic characteristics of the products are provided in Table 4.3.

Table 4.3 Product Characteristics

    Product    Weight (lbs.)    Cube (ft.³)    Price ($)    Cost ($)
    1-10       10               2              10           5

In addition to having the same physical and economic characteristics, each product had the same basic demand pattern. Each product's demand pattern had the same parameters; the patterns were not identical, however, each being generated on a random basis. The procedure used in generating the demand patterns is discussed below.

Demand for the products was placed against the DC in the form of daily orders from the retailer. These orders were filled from inventory held at the DC. In the event that an order could not be filled in total, a partial shipment was sent, exhausting the supply at the DC, and the remainder of the order was recorded as a stockout. The DC issued replenishment orders to the Plant. The reorder point (ROP) for the DC was 10 days of inventory for each product, the total number of units per product depending on the currently forecasted daily sales level for that product. The order quantity was also set at 10 days of forecasted sales per product. Orders placed against the Plant were filled from an infinite inventory.
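The DC's replenishment rule lends itself to a compact statement. The sketch below is a simplified rendering of the logic just described, not SPSF code; the names are hypothetical, and the on-order quantity is an added bookkeeping detail for units ordered but not yet received.

    DAYS_OF_SUPPLY = 10      # both the reorder point and the order quantity

    def end_of_day(inventory, on_order, demand, forecast_daily_sales):
        """One day of DC activity: fill the order, then check the ROP."""
        shipped = min(inventory, demand)       # partial shipment if short
        stockout = demand - shipped            # remainder lost, not backordered
        inventory -= shipped
        reorder_point = DAYS_OF_SUPPLY * forecast_daily_sales
        order_qty = 0
        if inventory + on_order <= reorder_point:
            order_qty = DAYS_OF_SUPPLY * forecast_daily_sales
        return inventory, stockout, order_qty

Note that because both the ROP and the order quantity float with the current forecast, forecast error feeds directly into the inventory and stockout behavior of the DC.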
The next section provides a general discussion of the nature of lead times. In addition, specific theoretical probability distributions are reviewed and those employed in this research are specified.

Lead Time Probability Distributions

One purpose of this research was to analyze the effect of increased lead time variability upon OD and TD. Lead time is one of the sources of uncertainty in the research model. It is defined as the elapsed time from placement of an order to receipt of the order. Lead time is composed of several activities and is generally viewed as having three elements: order communication, order processing, and order shipment.

The common characteristic of all lead time activities is that each requires some time to be performed. Thus, in a realistic sense, there exists some minimum amount of time required for ordered goods to be received by the facility initiating the order. In addition, individual activity times and total lead time vary from one order to the next. The possible reasons for time differences between successive lead times are many, and the result is a practical inability to forecast lead time variability.

Given the nature and characteristics of lead time, it can be viewed probabilistically. "Probability enters into the process by playing the role of a substitute for complete knowledge."1 The greater the degree of variability inherent in the probability distribution, the greater the uncertainty under which the distribution system must operate. The impact of such uncertainty is evident in both OD and TD, as well as in the operating costs of the distribution system.

1. Charles T. Clark and Lawrence T. Schkade, Statistical Methods for Business Decisions (Cincinnati, Ohio: Southwestern Publishing Co., 1969), p. 181.

Numerous probability distributions have been employed to represent lead times. For the purposes of this research, however, the distribution employed had to meet several criteria. These are reviewed below. The distributions meeting each of the criteria are then presented and the one selected for use is specified.

Criteria for Selection

The first criterion was that the distribution selected should contribute as much as possible to the generality of the results of the research. This criterion was based upon research conducted by Dr. George Wagenheim to determine the relative impact of various types of lead time probability distributions upon the performance of a simulated physical distribution system.2 Dr. Wagenheim analyzed the relative levels of demand stocked out and various operating costs when lead times were characterized by the following types of probability distributions: poisson, normal, log normal, gamma, erlang, and exponential. For the level of demand stocked out and for total costs, all distributions, with the exception of the exponential, were found to have statistically similar effects on the system. For transportation costs, all comparisons showed similar results except for the t-test comparing the normal and the erlang. For facility costs, throughput costs, and inventory costs, seven comparisons, each involving the exponential distribution against one other distribution, showed significantly different results. All other comparisons between the distributions were found to have no statistically different impacts.3

2. George D. Wagenheim, "The Performance of a Physical Distribution Channel System Under Various Conditions of Lead Time Uncertainty: A Simulation Experiment" (Ph.D. dissertation, Michigan State University, 1974), pp. 168-171.
3. Ibid.

In view of Dr. Wagenheim's results, the generality of this research could be increased by employing either the normal, log normal, poisson, erlang, or gamma distribution to represent lead times. Each of these distributions is evaluated below.

The second criterion for the selection of a probability distribution was that it had to be logically characteristic of lead times.
Here, two requirements were to be satisfied: first, the distribution had to be limited to non-negative values greater than some arbitrary minimum; and second, the distribution had to be skewed to the right, exhibiting a greater range of possible values above the mean than below it. The first of these requirements resulted from the fact that there is some minimum amount of time which must elapse between the placement and the receipt of an order. The second resulted from the fact that while there is some required minimum time which must elapse before the completion of the order cycle, there is no maximum time before which the cycle must be completed. Thus, while it is possible that the order cycle may be completed in less than the expected mean elapsed time, it is more logical to expect variation to occur in excess of the mean.

Of the five probability distributions considered, only the normal distribution failed to meet the second criterion. Although Dr. Wagenheim found no statistically significant differences between the normal and the other four distributions (in terms of their impact upon the response variables), its symmetrical nature rendered it unacceptable. The log normal, poisson, erlang, and gamma distributions, however, may each assume a skewed form.

The final criterion for selection was that the distribution be able to exhibit different coefficients of variation about a single mean value. This was necessary because the same distribution was employed throughout this research to replicate different degrees of uncertainty. Only by holding the expected value of the selected distribution constant and altering the variation about this expected value could changes in the response variables be linked to the degree of lead time uncertainty.

Only the poisson distribution failed to satisfy the third criterion. The poisson is completely defined by a single parameter, λ, which represents the mean number of occurrences of an event per unit time over a given number of trials. A random variable X is said to have a poisson distribution if its probability mass function is given by:

    f(x; λ) = e^{-λ}·λ^x / x!,  x = 0, 1, 2, ...;  0 elsewhere.

The random variable X may thus be described as the number of occurrences of an event over some interval of time or space. The expected value of the poisson random variable is E(X) = λ, and the variance and standard deviation, respectively, are:

    V(X) = λ,  σ(X) = √λ

Because the expected value of the random variable equals its variance, the poisson distribution is unacceptable under the third criterion.

The log normal, erlang, and gamma distributions each satisfy the above criteria. Before the one selected for use in this research is detailed, each is reviewed.

The Log Normal Distribution

If X is a random variable and Y = log X, and Y is a normal random variable, then X is said to have a log normal distribution.4 Its probability density function is given by:

    f(x; μ_y, σ_y²) = [1 / (x·σ_y·√(2π))] · exp{ -(ln x - μ_y)² / (2σ_y²) },  x > 0.

The parameters of the log normal are μ_y and σ_y. Thus, the log normal is the probability distribution of a random variable whose logarithm obeys the normal probability density function. The log normal is encountered in a variety of applications such as income studies and classroom sizes.5 Additionally, it has been employed successfully to represent demand. Some typical log normal distributions are illustrated in Figure 4.2.

Figure 4.2 Log Normal Distributions.

4. George P. Wadsworth and Joseph P. Bryan, Introduction to Probability and Random Variables (New York: McGraw-Hill, 1960), p. 67.
5. Peter A. Zehna, Probability Distributions and Statistics (Boston: Allyn and Bacon, Inc., 1970), p. 160.
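The third criterion can be checked directly for the candidate distributions. The sketch below (the scipy library is assumed) shows that the poisson's standard deviation is locked to its mean, while the log normal and the gamma, reviewed next, can hold a mean of 10 days at any desired standard deviation.

    import math
    from scipy import stats

    mean, sd = 10.0, 3.0      # target lead time: mean 10 days, sd 3 days

    # Poisson: a single parameter, so sd is forced to sqrt(mean); the
    # coefficient of variation cannot be set independently of the mean.
    print(stats.poisson.std(mu=mean))         # about 3.162, not adjustable

    # Gamma: choose shape a and scale b so a*b = mean and a*b**2 = sd**2.
    a = (mean / sd) ** 2
    b = sd ** 2 / mean
    print(stats.gamma.mean(a, scale=b), stats.gamma.std(a, scale=b))

    # Log normal: solve for the mu and sigma of the underlying normal.
    sigma = math.sqrt(math.log(1.0 + (sd / mean) ** 2))
    mu = math.log(mean) - sigma ** 2 / 2.0
    print(stats.lognorm.mean(sigma, scale=math.exp(mu)),
          stats.lognorm.std(sigma, scale=math.exp(mu)))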
The Gamma Probability Family

A random variable X is said to have a gamma probability distribution if its probability density function is given by:

    f(x; α, β) = x^{α-1}·e^{-x/β} / (β^α·Γ(α)),  x > 0;  0 elsewhere.

The parameters of the gamma distribution are α and β, where α refers to the number of successes per interval or unit space and β represents the reciprocal of the average number of successes per interval (1/λ). The gamma is thus related to both the poisson and the exponential distributions; the exponential is a special case of the gamma for which α = 1.

The gamma probability function describes a family of distributions of the gamma random variable X, one for each possible combination of the values α and β. The random variable X may be considered as the number of units of length (intervals) between one success and the αth succeeding success. The parameters α and β determine the shape of the density function, which is skewed to the right for all values of α and β. The skewness decreases as α increases. As previously noted, however, when α = 1 the gamma is an exponential distribution and assumes the shape of a decay function, as seen in Figure 4.3. If α is a positive integer, then the gamma becomes an erlang distribution. Figure 4.4 illustrates some typical gamma density functions.

Figure 4.3 Gamma Distributions With Unit β But Different Values of α.

Figure 4.4 Typical Gamma Distributions.

The expected value of the gamma random variable is:

    E(X) = αβ

The variance and standard deviation, respectively, are:

    V(X) = αβ²,  σ(X) = √(αβ²)

The gamma exists in situations where the underlying process is a poisson; thus the assumptions relevant to the poisson are applicable. Additionally, the gamma applies only to non-negative random variables. The tie between the gamma, poisson, and exponential is close. The poisson resulted from an effort to determine the probability of n successes per unit of length, given a mean of λ successes per unit of length. The exponential results from an effort to determine the probability of x units of length from one success to the next in a poisson process. The gamma distribution results from an effort to determine the probability of x units of length between one success and the αth succeeding success.6

There is no direct answer as to when the gamma is applicable; one must construct a histogram of the actual data.7 The family is so extensive in the shapes of densities available that it is a fairly safe assumption as a model for an experiment described by almost any non-negative random variable.8 E. M. Basic found the gamma to provide an excellent description of the probability distribution of demands for a product.9 Additionally, Bryan describes the gamma as applicable "when conditions of the problem exclude values of x smaller than some arbitrary minimum."10

The Erlang Distribution

The erlang distribution is a special case of the gamma probability family. When α = 1, the gamma is an exponential distribution, which is a decay-type function. When α becomes a positive integer greater than 1, the distribution is an erlang. As α goes from 1 to n, the shape of the distribution changes from a decay-type function through a series of shapes and eventually approximates the normal.

The primary application of the erlang is to a series of service times. A single service time can be viewed exponentially. As a second service time is added in series (e.g., a manufacturing process where two service-type operations are performed consecutively), the process can be viewed as two independent exponentials. A series of service-type operations can be represented by an erlang distribution with the value of α equal to the number of stages. Thus, if a process contains three exponential-type service times, the entire operation may be represented by an erlang distribution with α equal to three. The forms of the density function, expected value, variance, and standard deviation of the erlang are the same as for the gamma.

6. Claude McMillan and Richard F. Gonzalez, Systems Analysis: A Computer Approach to Decision Models (Homewood, Ill.: Richard D. Irwin, 1965), p. 159.
7. Chris P. Tsokos, Probability Distributions: An Introduction to Probability Theory With Applications (Belmont, Calif.: Duxbury Press, 1972), p. 128.
8. Zehna, p. 148.
9. E. Martin Basic, "Development and Application of a Gamma Based Inventory Management Theory" (Ph.D. dissertation, Michigan State University, East Lansing, Michigan, 1965), p. 8.
10. Wadsworth and Bryan, p. 91.
Selection and Generation

Any one of the three probability distributions presented above might have been employed in this research. However, the erlang distribution was selected based upon its direct applicability when several service times are combined. In this research, the combination of communication, order processing, and transportation times represents just such a situation.

Two specific erlang distributions were employed throughout the research. Each had an expected value of 10 days; they differed in their standard deviations. The first distribution had a standard deviation of 2.0 days, while that of the second was 3.0 days. Each distribution, along with its mean and standard deviation, is provided in Table 4.4. These two distributions were employed as separate test conditions characterizing a medium and a high level of order cycle time uncertainty, respectively. A third distribution with a mean of 10 days and a standard deviation of zero was employed as a control condition.

Table 4.4 Order Cycle Time Probability Distributions

    Medium Uncertainty             High Uncertainty
    Days  P(lead time ≤ x)         Days  P(lead time ≤ x)
    6     0.0366                   6     0.0866
    7     0.0766                   7     0.1966
    8     0.2100                   8     0.3200
    9     0.4500                   9     0.4633
    10    0.6400                   10    0.6000
    11    0.8033                   11    0.7266
    12    0.8866                   12    0.8133
    13    0.9466                   13    0.8800
    14    0.9733                   14    0.9200
    15    0.9933                   15    0.9500
    16    0.9966                   16    0.9800
    17    1.0000                   17    0.9900
                                   18    1.0000
    x̄ = 10, σ = 2.02              x̄ = 10, σ = 2.9953

Each distribution in Table 4.4 was generated using GASP II.11 To assure that the proper mean and standard deviation were generated, a t-test was employed to test the generated mean against the desired mean. In each case the hypothesis of no difference was accepted at the .05 level.

11. A. Alan B. Pritsker and Philip J. Kiviat, Simulation With GASP II (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1969), pp. 99-102.
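A modern stand-in for the GASP II routine is sketched below: an erlang deviate is generated as the sum of α independent exponential deviates, and the generated mean is tested against the desired mean as described above. The sample size and seed are assumptions for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1978)
    target_mean, target_sd = 10.0, 2.0       # the medium-uncertainty condition

    shape = round((target_mean / target_sd) ** 2)  # erlang shape is an integer
    scale = target_mean / shape
    # An erlang deviate is the sum of `shape` independent exponentials.
    lead_times = rng.exponential(scale, size=(10_000, shape)).sum(axis=1)

    # t-test of the generated mean against the desired mean (alpha = .05).
    result = stats.ttest_1samp(lead_times, popmean=target_mean)
    print(lead_times.mean(), lead_times.std(), result.pvalue)

With a target standard deviation of 3.0 days the integer shape becomes 11, giving a standard deviation of about 3.02 days, consistent with the 2.9953 reported in Table 4.4.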
Leach, "Exponential Smoothing With An Adaptive Response Rate," Operations Research Quarterly 18 (No. 1; March 1967): 53-59. 1"P. R. Winters, "Forecasting Sales by Exponentially Weighted Moving Averages," Management Science 6 (No. 3; April 1960): 324-342. 15Stephen D. Roberts and Ruddell Reed, Jr., "The Development of a Self-Adaptive Forecasting Technique," AIIE Transportation 1 (No. 4; December 1969): 314-322. 108 AS each of the forecasting techniques employed is characterized by some form of exponential smoothing, the values of the smoothing constants employed in each influenced the accuracy with which a given demand pattern was projected. Through an iterative process, it would have been possible to determine the optimal smoothing constant values for each technique given a specific pattern of demand. However, due to the fact that the Trigg and Leach technique is basically an adap- tation of the R. G. Brown technique, and that the Roberts and Reed technique is based upon that of P. R. Winters, a different set of smoothing constant values for each technique could have inherently biased the results. In an effort to overcome this difficulty, one set of smoothing constants was applied in the R. G. Brown and Trigg and Leach approaches, and a separate set was applied in the P. R. Winters and Roberts and Reed approaches. To determine the smoothing constant values to be applied in each set of techniques, a series of initial test forecasts were gen- erated using the R. G. Brown and the P. R. Winters' techniques. The demand pattern to be employed in the test runs were those used through- out the research. The experimental conditions employed in each of these runs and the statistical analysis employed to select the values of the smoothing constants are detailed in Chapter V. Demand Generation Ideally, the accuracy of each forecasting technique would have been analyzed over a broad range of demand patterns characteristic of 109 actual sales histories. However, as each of the demand patterns employed had to be tested for each possible combination of experimental variables, the inclusion of a large number of such patterns was prohib- itive due to the simulation runs required and the associated costs. In addition, the primary focus of this research was not to determine the relative accuracy of each forecasting technique under numerous demand conditions but to quantify the levels of F0 and 00 in a rep- resentative physical distribution network under selected patterns of demand. The demand patterns employed had to be representative and unbiased in terms of the adaptability of each of the forecasting techniques. Two demand patterns were employed in this research. Each was characterized by an increasing trend changing to a decreasing trend combined with a high or low seasonality. Each represented 12 periods of demand, each period being 20 days in length. These patterns were generated stochastically through a program termed ORDGEN.16 ORDGEN uses a series of uniformly distributed random numbers to create a normal distribution around a specified mean and standard deviation. For the purposes of this research, ORDGEN was used to generate an order containing some or all of the 10 Simulated products for each simulation day. This process was completed for each of the two demand patterns, the orders in each case being stored on tape and employed as the time series of daily demand placed against the DC. 16ORDGEN was validated as part of the SPSF validation procedure for demand generation alternatives. 
The input to ORDGEN specified a single mean and standard deviation of expected daily sales for each period of each demand pattern. Table 4.5 presents the mean expected daily sales levels by period for each demand pattern. The standard deviation of daily sales was 10 units throughout each demand pattern. Given this standard deviation, orders for each day were generated stochastically about each mean for the respective demand patterns and periods.

Table 4.5 Demand Patterns: Mean Expected Daily Sales by Period

    Mean Expected Daily Sales (Units/Day)
    Period    Demand Pattern I    Demand Pattern II
    1         53                  57
    2         58                  69
    3         59                  75
    4         59                  78
    5         58                  88
    6         53                  72
    7         47                  58
    8         42                  46
    9         41                  40
    10        41                  37
    11        42                  37
    12        47                  43

Conclusion

This chapter has detailed the environmental specifications employed throughout this research. Included in the discussion was a description of the specific SPSF structure employed and a review of the experimental variables: the order cycle time probability distributions, the forecast techniques, the smoothing constants, and the demand patterns to be employed. Chapter V presents the research methodology and hypotheses employed to investigate the effects of these variables.

CHAPTER V

HYPOTHESES AND RESEARCH METHODOLOGY

Introduction

The objectives of this research were four in number:

1. To determine the relative effectiveness of the selected time series analysis forecasting techniques in projecting the following demand patterns:
   a. demand pattern 1: increasing trend changing to decreasing trend with low seasonality; and
   b. demand pattern 2: increasing trend changing to decreasing trend with high seasonality.

2. To determine the relative impact which each forecasting technique has upon each of the following response variables for each pattern of demand:
   a. inventory levels; and
   b. stockouts.

3. To determine the impact of selected levels of lead time variability upon the following response variables for each pattern of demand:
   a. inventory levels; and
   b. stockouts.

4. To determine the combined effects of both the forecasting techniques and lead time variability upon the following response variables for each pattern of demand:
   a. stockouts; and
   b. inventory levels.

The method of experimentation employed to meet these objectives was to make changes in the external and internal variables (demand, forecast, and lead time) of the simulation model and then analyze the effects of these changes on the performance of the simulated physical distribution channel. To study the results in a meaningful manner, a proper method of analysis, i.e., an experimental design, had to be selected.

The purpose of this chapter is to specify the two types of experimental design employed. The first is a relatively simple procedure used in determining the relative accuracy of the forecasting techniques. The second is a factorial design. The objectives of each design, the runs and output required, as well as the hypotheses and the statistical analysis employed to test them, are discussed.

Relative Forecast Accuracy

One of the primary objectives of this research was to determine the relative accuracy of four time series forecasting techniques in projecting two specified patterns of demand. The experimental design employed to satisfy this objective was a determination of the Mean Absolute Percent Error (MAPE) with which each technique forecasts each pattern. The runs and statistical analysis required are presented below.
Run Specification for Forecast Accuracy

The accuracy of each of the four forecast techniques depends to a large extent upon the value(s) of the smoothing constant(s) specified for each. In an effort not to introduce bias into the results through an arbitrary specification of these values, two sets of smoothing constants were specified for use throughout the research. The first set was a value for the alpha constant to be employed in both the R. G. Brown Basic Exponential Smoothing technique and the Trigg and Leach adaptation of Brown's technique. The second set specified values for the alpha, beta, and gamma constants employed in the Winters technique and the Roberts and Reed modification of the Winters technique.

To determine the values of the smoothing constants, 14 initial runs were made. Each run was 240 days in length, with forecasts being made and reports generated every 20 days. Of the 12 periods, only the results of the last 10 were analyzed, the first two periods being allowed for the simulation model to "warm up." Although this was not strictly necessary for determining forecast accuracy, it was required if the output from these 14 initial runs was to be employed in the analysis described in the next section.

The forecasting techniques used in these initial runs were the Brown and Winters techniques. For each of these techniques, a series of runs was made for each of the two demand patterns, each run under a single demand pattern employing a different set of smoothing constant values. For Brown's technique, three runs were made for each pattern--the first employing an alpha value of .2, the second an alpha value of .25, and the third an alpha value of .3. At the completion of the runs, the alpha value producing the lowest combined Mean Absolute Percent Error (MAPE) for the two demand patterns was selected for use in the remainder of the research. The value selected for Brown's technique was also employed throughout for Trigg and Leach.

The process for the Winters technique was the same, except that instead of three values of alpha, four combinations of alpha, beta, and gamma were employed: (1) α = .2, β = .2, γ = .2; (2) α = .25, β = .25, γ = .25; (3) α = .3, β = .25, γ = .25; and (4) α = .25, β = .35, γ = .35. The combination selected for Winters was also employed in Roberts and Reed.

The only output required to make a decision on smoothing constants was the period demand and forecasts. However, in order to use each run corresponding to the lowest MAPE for each technique and demand pattern in later analysis, all of the appropriate response variables were recorded. The output required for these and all remaining runs was recorded on file as illustrated in Table 5.1. The response variables recorded for each of these first 14 runs and all subsequent runs are given in Table 5.2.

Table 5.1 File Specification

    Variable Number    Name                 Levels
    1                  Run number           1-30
    2                  Time (period no.)    1-12
    3                  Product              1-10
    4                  Response variable    1-8

Table 5.2 Response Variables Recorded

    Number    Description
    1         Sales
    2         Forecast
    3         Inventory level (average)
    4         Stockouts
    5         Total discrepancy
    6         Forecast discrepancy
    7         Operating discrepancy
    8         Demand

Statistical Analysis

When the data from each of the 14 runs had been recorded, the set of smoothing constants to be employed for each forecasting technique was selected according to the following process:

Step 1. For each demand pattern, determine the MAPE value for each run (i.e., for each set of smoothing constant values) as:

    MAPE_i = [ Σ_{j=3}^{12} |D_j - F_j| / D_j ] / 10

where:
i = the set of smoothing constants defining each run;
j = the periods over which the calculation is made;
D_j = demand for period j; and
F_j = forecast for period j.

Step 2. Sum the MAPE values for each set of smoothing constants over the two demand patterns.

Step 3. Select the set of smoothing constant values having the lowest combined MAPE value for use in the research.
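The selection process is summarized in the sketch below; it is illustrative only, and the container layout is an assumption.

    def mape(demand, forecast):
        """Step 1: mean absolute percent error over periods 3 through 12."""
        terms = [abs(d - f) / d
                 for d, f in zip(demand[2:12], forecast[2:12])]
        return sum(terms) / 10.0

    def select_constants(runs):
        """runs maps each constant set to its (demand, forecast) series,
        one pair per demand pattern; returns the lowest combined MAPE."""
        combined = {c: sum(mape(d, f) for d, f in pairs)
                    for c, pairs in runs.items()}
        return min(combined, key=combined.get)    # Steps 2 and 3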
This procedure did not test the statistical significance of the differences in the accuracy of the forecasts; it merely ranked the constants according to accuracy. The relative accuracy of all four forecasting techniques (using the smoothing constants selected through this procedure) was analyzed in the second experimental design, described in the next section.

The output of the initial 14 runs was saved for use in subsequent analysis. However, the only output required was that corresponding to the two runs which resulted in the lowest combined MAPE value for each forecasting technique. Thus, only the output of four of these initial runs was saved on file, the remainder being eliminated. The next section details the general hypotheses investigated in this research and the method of analysis.

Factorial Design for Analysis of Variance

The general hypotheses investigated in this research are outlined as follows:

1. The four forecasting techniques project each pattern of demand with equal accuracy;
2. The average inventory held at the DC is the same under each forecasting technique for each pattern of demand;
3. The average number of stockouts at the DC is the same under each forecasting technique for each pattern of demand;
4. The average level of inventory held at the DC is the same for each level of lead time variability;
5. The average number of stockouts at the DC is the same for each level of lead time variability;
6. The average level of inventory held at the DC is the same across all forecasting techniques and all levels of lead time variability for each pattern of demand; and
7. The average number of stockouts at the DC is the same across all forecasting techniques and all levels of lead time variability for each pattern of demand.

The experimental design employed in investigating these hypotheses was a factorial design. A factorial experiment is one in which the effects of all the factors and factor combinations in the design are investigated simultaneously. In this case, three factors were analyzed: the level of demand uncertainty, the level of forecast accuracy, and the level of operating uncertainty. The factorial design is advantageous in that the effects of a particular factor are evaluated by averaging over a range of other experimental variables. It thus permits statements to be made as to the effect of a particular forecasting technique, where that technique is considered over a range of demand patterns and levels of operating uncertainty. The runs and statistical analysis required for this design are presented below.

Run Specifications for Analysis of Variance

To employ the factorial design, one simulation run must be made for each combination of the levels of the independent factors. These factors and their respective levels are presented in Table 5.3. A total of 30 runs is required. The combinations defining these runs are detailed in Figure 5.1. Within each cell of the figure, the number of the run is entered in parentheses, and the levels of the experimental factors A, B, and C are entered in order at the bottom of the cell. Note, however, that runs 4 and 19, corresponding to the R. G. Brown technique, and runs 7 and 22, corresponding to the P. R. Winters technique, were already completed in the initial series of runs. Thus, only 26 additional runs remained to be made.

Table 5.3 Experimental Factors and Levels

    Experimental Factor A: Demand Uncertainty
    Levels: Demand Patterns
        DMD1: Increasing Trend Changing to Decreasing Trend With Low Seasonality
        DMD2: Increasing Trend Changing to Decreasing Trend With High Seasonality

    Experimental Factor B: Forecast Accuracy
    Levels: Forecasting Techniques
        FORTECH1: Perfect Forecast (control)
        FORTECH2: R. G. Brown's Basic Exponential Smoothing
        FORTECH3: Trigg and Leach's Adaptive Exponential Smoothing
        FORTECH4: P. R. Winters' Exponentially Weighted Moving Averages
        FORTECH5: Roberts and Reed's Self-Adaptive Forecasting Technique

    Experimental Factor C: Operating Uncertainty
    Levels: Lead Time Variability
        OCTV1: Zero
        OCTV2: Medium
        OCTV3: High

Figure 5.1 Run Specifications.
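The factorial analysis itself can be sketched with a modern library. The statsmodels package is assumed below, and the file name and column names are hypothetical; the point is only the form of the model, with main effects and all interactions of the three experimental factors.

    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    # One row per run x period x product, per the file layout of Table 5.1.
    data = pd.read_csv("results.csv")

    # Main effects and all interactions of demand pattern (DMD), forecast
    # technique (FORTECH), and order cycle time variability (OCTV).
    model = ols("avg_inventory ~ C(DMD) * C(FORTECH) * C(OCTV)",
                data=data).fit()
    print(anova_lm(model))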
Table 6.37 Performance Summary--Demand Pattern 1 (average FD, OD, stockouts, and inventory, with total sales, distribution cost, and sales less distribution cost, for each forecasting technique at each level of OCTV).

Table 6.38 Changes in Response Variables Across OCTV--Given DMD1 and Perfect Forecast (each response variable expressed as a percentage of its OCTV1 level).

Stockouts under the perfect forecast represented sales which were lost as a direct result of the stockout and in no way influenced future demand. Thus, while an 88% increase in the mean level of stockouts from OCTV1 to OCTV2 was partially responsible for a 4% decrease in both total sales and total profits, this result would certainly be much greater in an actual market situation.

Table 6.39 provides percentage values of each response variable under the R. G. Brown technique. Here, the same relationships noted above under a perfect forecast were again evident. Specifically, while FD was relatively constant across OCTV, changes in the mean level of OD had a direct impact on system performance.
In addition, average inventory again increased as the level of stockouts did. This result indicates that variation in the level of average inventory is directly related to increased variation in order cycle times. Finally, the increase in the level of stockouts is considerably less than the combined increases in FD and OD; in comparison with Table 6.38, this is a direct indication of the canceling effect between FD and OD.

Tables 6.40 and 6.41, dealing with the Trigg and Leach and the Winters techniques, respectively, provide additional support for the relationships noted above. The percentage increases in FD detailed in Table 6.41 for the Winters technique are greater than the corresponding increases for any other forecast. Reference to Table 6.37 indicates that for the Winters technique the mean level of FD actually increased from 102 to 176 between OCTV1 and OCTV3. The impact of this level of FD is clear in a comparison of the Winters-OCTV3 values with those of Brown-OCTV2. Here, the level of OD is identical at 110 in each case. The Brown technique, however, had an FD of 105 compared to the 176 for Winters. Thus, the differences in the sales less distribution cost quantities of the system under these two conditions may be partially attributed to the higher FD under the Winters technique. The word "partially" is used because this comparison does not consider variations about the mean levels of the response variables.

Table 6.39 Changes in Response Variables Across OCTV--Given DMD1 and R. G. Brown Forecast.

Table 6.40 Changes in Response Variables Across OCTV--Given DMD1 and Trigg and Leach Forecast.

Table 6.41 Changes in Response Variables Across OCTV--Given DMD1 and P. R. Winters Forecast.

Table 6.42 presents a summary of system performance under demand pattern 2. In comparison with the summary for demand pattern 1 (Table 6.37), several important points are evident. First, while the levels of OD under DMD2 are higher than those under DMD1, the increases are much smaller than those for FD. These significant increases in FD led to a higher level of stockouts in each case. Second, the average inventory values under each demand pattern show little difference when considered across all conditions. These factors lead directly to the higher level of stockouts under DMD2.

Comparisons of the response variables across the levels of OCTV are presented in Tables 6.43 through 6.46 for each of the forecasting techniques. Table 6.43 presents changes in the response variables given a perfect forecast of DMD2. A review of this table shows the same relationships which were noted under a perfect forecast of DMD1 (Table 6.38).
Specifically, increases in the levels of OD were directly related to increases in the level of stockouts and to decreases in the sales less distribution cost quantity.

Table 6.42 Performance Summary--Demand Pattern 2.

Table 6.43 Changes in Response Variables Across OCTV--Given DMD2 and Perfect Forecast.

Table 6.44 Changes in Response Variables Across OCTV--Given DMD2 and R. G. Brown Forecast.

Table 6.45 Changes in Response Variables Across OCTV--Given DMD2 and Trigg and Leach Forecast.

Table 6.46 Changes in Response Variables Across OCTV--Given DMD2 and P. R. Winters Forecast.

Table 6.44 shows performance given an R. G. Brown forecast of DMD2. Here, the system was most profitable under OCTV2. In comparison with the values for OCTV1, it is apparent that despite a significant increase in the average level of OD, total sales remained relatively constant while cost dropped seven percentage points.

Table 6.45 shows the results under a Trigg and Leach forecast of DMD2. Similar to Table 6.44, the system was most profitable under OCTV2. It is surprising, however, that the level of stockouts under OCTV2 was only 91% of the level under OCTV1. This increase in service level was achieved despite a 56% increase in the mean level of OD. Despite the accompanying 5% reduction in FD, it is more likely that the increased service level resulted from variations below the mean expected lead times during the peak seasonal demand.

Table 6.46 details similar results under the P. R. Winters technique. Here, a 17% increase in FD and a 133% increase in OD were accompanied by only a 1% decrease in total profit. The increases in FD and OD, however, reflect changes in mean levels per period. Total sales decreased only 3%, accompanied by a 13% decrease in cost.
These factors account for the 1% difference in sales less cost between OCTV1 and OCTV3.

Conclusion

This chapter has presented the results of the analysis performed to investigate the general hypotheses developed in Chapter V. The first section detailed the results of the analysis employed to determine the sets of smoothing constants used through the remainder of the research. The second section reviewed the results of the analysis of variance procedure employed to investigate the levels of selected response variables. The final section presented a summary comparison of system performance, including a review of system sales, costs and profits. The next and final chapter details the conclusions drawn from the data reviewed in this chapter. In addition, the limitations of this research are reviewed and significant areas for future research are noted.

CHAPTER VII

CONCLUSIONS

Introduction

The purpose of this research has been to determine the impacts of variations in the levels of forecast accuracy, demand uncertainty, and lead time variability on the performance of a simulated physical distribution system. The specific results of the analysis were reported in Chapter VI. The purpose of this chapter is to discuss the conclusions drawn from the hypotheses and findings and to suggest implications for the planning and management of physical distribution operations.

In the first section, the hypotheses and findings are integrated and specific conclusions are reported regarding acceptance or rejection. For each general hypothesis, several subhypotheses were generated and tested as a result of the step-by-step analysis of variance procedure. These are also reviewed and conclusions drawn. In the second section, generalized conclusions are reported based upon the analysis of the first section. Next, the implications of the research for physical distribution management are reviewed. Particular emphasis is given to the length of the operational planning period and accepted practices for reducing system uncertainty. The last section looks into the limitations of this research and suggests areas which should be particularly fruitful for future research.

Integration of Findings and Hypotheses

This section integrates the specific hypotheses for each response variable with the results detailed in Chapter VI. The section is divided into five parts, each dealing with a specific response variable. These variables, in order of their presentation, are average inventory, stockouts, sales, Forecast Discrepancy and Operating Discrepancy.

Average Inventory

The first hypothesis states that variations across all levels of the three independent variables will have a significant impact upon the level of average inventory held at the DC. This hypothesis was accepted. However, the data indicated a highly significant interaction between the pattern of demand placed against the DC, the variance in expected lead times and the accuracy of the forecasting technique employed. Given such an interaction, only one inference can be drawn across all three of the independent variables. That is, a perfect forecast consistently results in the highest level of average inventory (when safety stocks are not employed) across all demand patterns and levels of order cycle time variation analyzed in this research. For each of the non-perfect (i.e., operational) forecasts, specific qualifications are required to draw conclusions.
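Each test reviewed below is a three-factor analysis of variance over demand pattern (DMD), order cycle time variance (OCTV) and forecasting technique (FORTECH). A present-day sketch of the design (illustrative only; the original analysis predates these libraries, and the file and column names are assumed):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per simulated replicate, with factor columns DMD, OCTV and
# FORTECH and a response column such as avg_inventory (names assumed).
df = pd.read_csv("spsf_results.csv")  # hypothetical results file

# Full-factorial model: main effects plus all two- and three-way
# interactions, mirroring the step-by-step procedure described above.
model = ols("avg_inventory ~ C(DMD) * C(OCTV) * C(FORTECH)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A significant three-way interaction in this table is what forces the qualified, level-by-level conclusions reported throughout this section.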
Hypothesis 2 stated that there would be a significant difference in the level of average inventory held across all non-perfect forecasts. This hypothesis is confirmed. Thus, in addition to the conclusion drawn for the first hypothesis, it may also be said that the level of average inventory is significantly different across variations in forecast accuracy, order cycle times and demand patterns. In addition, the combined effects of the levels of these variables on average inventory are not additive. That is, the levels of these variables indicate a high degree of interaction, and the effects of any single variable may not be said to result in a consistently higher or lower level of average inventory.

For hypothesis 2-a, generalizations may be rendered. Here, the hypothesis that given demand pattern 1 (DMD1), there will be significant differences in the level of average inventory when considered across the non-control levels of forecast accuracy and order cycle time variation, was confirmed. In addition, although the interaction between the non-control levels of the two variables was significant, the ranking of forecasting techniques by levels of average inventory was consistent across each level of order cycle time variance. Specifically, the Brown technique consistently resulted in the highest level of average inventory while the Trigg and Leach and Winters techniques ranked second and third, respectively. The differences, though consistent, were not significant in each case.

Hypothesis 2-a-1, which considered the significance of the differences between all levels of forecast accuracy given DMD1 and a constant order cycle time (OCTV1), was confirmed. Here, the post-hoc analysis illustrated two specific significant differences. First, the level of average inventory under the Winters technique was significantly lower than that under each of the other techniques. Second, the level of average inventory under the Trigg and Leach technique was significantly lower than that under the perfect forecast. There was no significant difference found in the comparisons between the Brown technique and either the perfect forecast or Trigg and Leach.

Hypotheses 2-a-2 and 2-a-3, which considered the same relationship under OCTV2 and OCTV3, respectively, were also confirmed. In addition, the results of the Scheffé tests for each of these hypotheses detailed exactly the same differences noted above for OCTV1. Thus, given DMD1, it was concluded that for each level of OCTV the Winters technique resulted in an average inventory which was significantly lower than the remaining forecast techniques. The Trigg and Leach technique resulted in a significantly lower level than that achieved under a perfect forecast.

Hypotheses 2-a-4 through 2-a-7 considered the differences in the levels of average inventory across the levels of OCTV for each forecasting technique given DMD1. Here, only hypothesis 2-a-7, concerning average inventory across the levels of OCTV when given the P. R. Winters forecast of DMD1, was confirmed. The corresponding Scheffé test illustrated that the significant difference occurred between the level of average inventory under OCTV2 in comparison to OCTV3. Specifically, the level of average inventory under OCTV3 (the highest level of operating uncertainty) was significantly lower than that held under OCTV2.

Hypothesis 2-b stated that given DMD2, the levels of average inventory would be significantly different when considered across the non-control levels of OCTV and FORTECH.
This hypothesis was confirmed. However, the significance of the interaction between the levels of OCTV and FORTECH indicated that specific levels of each of these variables must be considered in reaching any conclusion. No generalizations may be made concerning the level of average inventory across the non-control levels of OCTV and FORTECH when given DMD2.

Hypotheses 2-b-1, 2-b-2, and 2-b-3 considered the nature of the differences in the level of average inventory held across all four forecasting techniques for OCTV1, OCTV2 and OCTV3, respectively. Each was confirmed. The Scheffé tests performed indicated that the nature of the significant differences was not consistent across the levels of OCTV. For OCTV1 and OCTV3, the level of average inventory was significantly higher under the control (perfect) forecast than under the non-control (Brown, Trigg and Leach, and Winters) forecasts. There was no significant difference between the levels of average inventory under the non-control forecasts.

For OCTV2, the results were more complicated. Here the Scheffé test indicated two significant differences. First, the level of average inventory under the perfect forecast was significantly higher than that under each of the non-control forecasts. Second, the level of average inventory under the Winters technique was significantly lower than that under the Brown and Trigg and Leach techniques. Thus, similar to the results under DMD1, no significant difference was found between the average inventory levels held under the Brown versus the Trigg and Leach technique for any level of OCTV.

Hypotheses 2-b-4 through 2-b-7 tested the levels of average inventory across the levels of OCTV for each of the forecasting techniques, given DMD2. Here, hypotheses 2-b-4 and 2-b-5 were rejected. No significant difference was found in the levels of average inventory across the levels of OCTV for either the perfect or the Brown forecasts. Hypotheses 2-b-6 and 2-b-7 were confirmed. For hypothesis 2-b-6, the Scheffé test indicated that given a Trigg and Leach forecast of DMD2, the level of average inventory held under OCTV2 was significantly higher than that held under OCTV1 or OCTV3. The Scheffé test on hypothesis 2-b-7 produced an opposite result. Here, given the Winters forecast of DMD2, the level of average inventory under OCTV2 was significantly lower than that held under OCTV1 or OCTV3.

Stockouts

Hypothesis 1 stated that variations across all levels of the independent variables would have a significant impact on the level of stockouts. This hypothesis was confirmed. Not only was each of the independent variables significant, the three-way interaction was also highly significant. Thus, the levels of stockouts experienced across the levels of any single independent variable showed significant variations when analyzed over all combinations of the two remaining independent variables. As such, the level of each independent variable must be specified when making any statement concerning stockouts.

Hypothesis 2 stated that there would be significant differences in the levels of stockouts when considered across all non-control levels of the three independent variables. This hypothesis was also confirmed. Thus, even when the effects of perfect forecasts and constant order cycle times were not considered, significant variations in the levels of stockouts still occurred. In addition, while each of the main effects was significant, they were not additive, as indicated by the significance of both the two-way and three-way interactions.
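The post-hoc comparisons cited throughout these sections are Scheffé tests. A minimal sketch of the procedure for a simple two-mean contrast (illustrative; equal cell sizes are assumed and the numeric inputs are hypothetical):

```python
from scipy.stats import f

def scheffe_pair(mean_i, mean_j, n_per_group, k_groups, mse, df_error, alpha=0.05):
    """Scheffé test of the contrast mean_i - mean_j: the contrast is
    significant if its F statistic exceeds (k - 1) times the critical
    F value with (k - 1, df_error) degrees of freedom."""
    contrast = mean_i - mean_j
    se_squared = mse * (1.0 / n_per_group + 1.0 / n_per_group)
    f_stat = contrast ** 2 / se_squared
    f_crit = (k_groups - 1) * f.ppf(1 - alpha, k_groups - 1, df_error)
    return f_stat, f_crit, f_stat > f_crit

# Hypothetical inputs: four forecasting techniques (k = 4) with 12
# replicates each, comparing mean stockouts under two techniques.
print(scheffe_pair(mean_i=238.0, mean_j=272.0, n_per_group=12,
                   k_groups=4, mse=1450.0, df_error=44))
```

The inflated critical value is what makes the Scheffé procedure conservative for the many pairwise comparisons reported here.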
Hypotheses 2-a and 2-b examined stockout levels over all combinations of the non-control order cycle time variances and forecasting techniques given demand patterns 1 and 2. Each was confirmed. Thus, it may be concluded that variations in the non-control levels of OCTV and FORTECH do result in significant variations in the levels of stockouts over both demand patterns. In addition, the interaction between OCTV and FORTECH was significant under each demand pattern. As such, even when a specific demand pattern is given, the effects of variations in OCTV and FORTECH upon the resulting level of stockouts are not additive.

Hypotheses 2-a-1, 2-a-2, and 2-a-3 considered differences in the levels of stockouts occurring across the four levels of FORTECH given DMD1 and either OCTV1, OCTV2, or OCTV3, respectively. While each was confirmed, the results of the Scheffé tests illustrated that the nature of the difference was not the same for each level of OCTV. For DMD1 and either OCTV1 or OCTV3, the same significant differences were noted. First, the level of stockouts under the perfect forecast was significantly lower than the level experienced under each of the remaining forecast techniques. Second, the level of stockouts under the Winters technique was significantly higher than that under the Brown or Trigg and Leach techniques.

Given DMD1 and OCTV2, only one significant difference was found through the Scheffé test. Here, the level of stockouts under the Winters technique was significantly higher than the level under each of the other techniques. No significant difference was found between the perfect, the Brown, and the Trigg and Leach forecasts.

Hypotheses 2-a-4 through 2-a-7 evaluated differences in the levels of stockouts across the three levels of OCTV for each forecast technique under DMD1. First, hypothesis 2-a-4, which stated that there would be significant differences in the levels of stockouts across the levels of OCTV given a perfect forecast of DMD1, was confirmed. The Scheffé test subsequently performed illustrated that the level of stockouts under OCTV1 was significantly lower than the levels under OCTV2 and OCTV3.

Given the Brown forecast of DMD1, hypothesis 2-a-5 was confirmed. Here, the level of stockouts under OCTV1 was significantly lower than the level experienced under OCTV3. Exactly the same results were noted for hypothesis 2-a-6, which dealt with the Trigg and Leach technique.

Hypothesis 2-a-7, considering stockouts across OCTV given the Winters forecast of DMD1, was also confirmed. The Scheffé test illustrated that the level of stockouts under OCTV3 was significantly higher than the corresponding levels under OCTV1 and OCTV2.

Hypotheses 2-b-1, 2-b-2 and 2-b-3 considered the nature of the differences in stockout levels across all forecasting techniques for DMD2 and either OCTV1, OCTV2, or OCTV3, respectively. Each hypothesis was confirmed. For each hypothesis, the Scheffé test indicated that the stockout level under the perfect forecast was significantly less than that under all other forecasts. This was the only difference which was significant for OCTV1. For OCTV2 and OCTV3, however, a second significant difference resulted. For OCTV2, stockouts under the Winters forecast were significantly higher than under the other forecasts. A contrasting result was found under OCTV3, the stockouts under the Winters technique being significantly lower than those under the Brown and Trigg and Leach forecasts.
Hypotheses 2-b-4 through 2-b-7 tested the levels of stockouts across the levels of OCTV for each of the forecasting techniques under DMD2. Here, hypotheses 2-b-5 and 2-b-6 were rejected, indicating no significant differences between stockout levels across the levels of OCTV for either a Brown or a Trigg and Leach forecast of DMD2. For hypothesis 2-b-4, given a perfect forecast, the level of stockouts under OCTV1 was significantly lower than the corresponding levels under OCTV2 and OCTV3. The final hypothesis, 2-b-7, was also accepted. Here, the Scheffé test indicated that the level of stockouts under OCTV2 was significantly higher than the corresponding levels under OCTV1 and OCTV3.

Sales

Hypothesis 1 stated that there would be significant differences in sales levels when considered across all levels of the independent variables. This hypothesis was confirmed. The specific nature of the difference(s), however, cannot be statistically interpreted due to the highly significant interaction between independent variables. The only generalization across all levels of OCTV and both demand patterns is that the perfect forecast consistently resulted in the highest level of sales.

Hypothesis 2 stated that significant differences would be found in the levels of sales when considered across all non-control levels of the independent variables. This hypothesis was confirmed. As with each of the previous response variables, this indicates a significant variation in sales even when perfect forecasts and constant order cycle times are not considered. In addition, the effects of the non-control levels of the independent variables on sales were not additive. In other words, a significant interaction was again indicated.

Hypothesis 2-a considered the main effects of the non-control levels of OCTV and FORTECH given DMD1. This hypothesis was confirmed, and the two-way interaction was not significant. Thus, it may be concluded that given DMD1, the Brown forecast consistently resulted in a higher level of sales than the Trigg and Leach technique. The Trigg and Leach technique consistently resulted in a higher level of sales than the Winters technique. In each of the comparisons, however, not all of the differences were significant.

Hypothesis 2-a-1 considered the significance of the differences between the levels of sales when analyzed across the four forecasting techniques given OCTV1 and DMD1. This hypothesis was confirmed. The Scheffé test indicated two significant differences. First, the level of sales under the perfect forecast was significantly higher than that under each other forecasting technique. Second, the sales level under the Winters technique was significantly lower than that under each of the other techniques.

Hypothesis 2-a-2 was rejected. There was no significant difference in sales level across forecasting techniques given OCTV2 and DMD1. Given OCTV3 and DMD1, hypothesis 2-a-3 was confirmed. The Scheffé test indicated the sales achieved under Winters were significantly lower than those under the perfect or the Brown forecast. There was no significant difference in sales between the Trigg and Leach technique and each remaining technique.

Hypotheses 2-a-4 through 2-a-7 considered the differences in sales levels across the levels of OCTV for each forecasting technique, given DMD1. Here, hypotheses 2-a-4 and 2-a-5, which examined the perfect and Brown forecasts, respectively, were rejected. Hypotheses 2-a-6 and 2-a-7, which considered the Trigg and Leach and Winters forecasts, respectively, were each confirmed.
In addition, the Scheffé tests indicated that the nature of the difference was identical for each. Specifically, for the Trigg and Leach and Winters forecasts, the sales achieved under OCTV1 were significantly higher than under OCTV2 and OCTV3.

Hypothesis 2-b stated that there would be a significant difference in the level of sales when examined across all non-control levels of the independent variables given DMD2. This hypothesis was accepted. However, while each of the main effects of the non-control independent variables as well as their interaction was significant in hypothesis 2-a (DMD1), only the interaction was significant under DMD2. When considered across both demand patterns, two conclusions may be made: (1) in each case, the interaction between the levels of OCTV and FORTECH is highly significant, and (2) in each case the perfect forecast consistently resulted in the highest level of sales.

Hypotheses 2-b-1 through 2-b-7 considered sales levels given DMD2. Hypotheses 2-b-1, 2-b-2, and 2-b-3 examined sales differences across all forecasting techniques given DMD2 and either OCTV1, OCTV2, or OCTV3, respectively. Each was confirmed. The results of the Scheffé tests revealed that for each level of OCTV the only significant difference occurred between the level of sales under the perfect forecast in comparison to that under each of the other forecasts. Thus, while the Winters technique resulted in a higher level of sales than either the Brown or Trigg and Leach techniques under both OCTV1 and OCTV3, the differences were not significant.

Hypotheses 2-b-4 through 2-b-7 tested the levels of sales across each level of OCTV for each of the forecasting techniques. Here, hypotheses 2-b-4 and 2-b-5, which considered the sales across OCTV given a perfect and a Brown forecast, respectively, were rejected. Thus, given DMD2, the level of OCTV did not have a significant impact on the sales achieved under either technique.

Hypotheses 2-b-6 and 2-b-7 were accepted. The Scheffé tests indicated that for the Trigg and Leach technique (hypothesis 2-b-6), the sales level achieved under OCTV3 was significantly lower than under OCTV1 and OCTV2. For the Winters technique (hypothesis 2-b-7), however, the level of sales under OCTV3 was significantly higher than the level under OCTV2.

The results of the hypotheses relating to DMD2 permit only one generalization. That is, given a highly variable demand pattern, no specific results concerning the level of sales may be predicted without first qualifying the level of variation in the order cycle times and the forecasting technique in use.

Forecast Discrepancy (FD)

Hypothesis 1 stated that there would be significant differences in the levels of FD when considered across all levels of the independent variables. Hypothesis 2 stated that such differences would be found when considering FD across all non-control levels of the independent variables. Each was confirmed. This result, for hypothesis 1, was expected in that FD under the perfect forecast was zero in all cases. The confirmation of hypothesis 2, however, indicated a significant difference in the accuracy of the three non-control forecasts. In addition, the three-way interaction was also found to be significant in each case, indicating that FD varies across the level of OCTV as well as that of DMD and FORTECH. Conclusions as to the nature of this evidenced relationship are reached in the hypotheses below.
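Because the FD comparisons below turn on differences among the three operational techniques, a reminder of the simplest may be useful. The sketch is Brown's simple exponential smoothing (an illustrative implementation; the smoothing constant shown is assumed, not the value fitted in this research):

```python
def brown_forecast(history, alpha=0.2):
    """Brown's simple exponential smoothing: each new forecast is the
    previous forecast corrected by a fraction alpha of the observed
    error. Returns the one-period-ahead forecast."""
    forecast = history[0]            # initialize with the first observation
    for observed in history[1:]:
        error = observed - forecast
        forecast = forecast + alpha * error
    return forecast

# In operating systems "observed" is past sales rather than demand, so
# any Operating Discrepancy is smoothed into future forecasts -- the
# bias this chapter repeatedly identifies.
print(brown_forecast([100, 104, 99, 110, 108]))
```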
Hypothesis 2-a tested the significance of the differences in the level of FD given DMD1 and the non-control levels of OCTV and FORTECH. This hypothesis was confirmed. Not only did each of the main effects have a significant impact on accuracy, but the interaction of these effects was significant as well. Thus, even when DMD1 was specified as a constant, there was a significant variation in forecast accuracy when considered across the non-control levels of OCTV and FORTECH.

Hypotheses 2-a-1 and 2-a-2 tested the significance of the differences in FD across all forecasting techniques given OCTV1 and OCTV2, respectively. Each was confirmed. However, the subsequent Scheffé tests indicated that for each hypothesis the only significant difference was between the perfect forecast and the remaining three. There was no significant difference found between the FD levels across the non-control forecasts for either OCTV1 or OCTV2. A significant difference was found in the Scheffé test for hypothesis 2-a-3, considering OCTV3. Here, the Winters technique was significantly less accurate than the other three. Thus, given a highly variable order cycle time (OCTV3), the Brown and the Trigg and Leach techniques were significantly more accurate than the Winters technique.

Additional information regarding relative accuracy resulted from the analysis of hypotheses 2-a-4 through 2-a-7, which considered variations in the levels of FD under each forecasting technique across OCTV levels. Here, hypothesis 2-a-4, considering the perfect forecast, was rejected. Hypotheses 2-a-5 and 2-a-6, considering Brown and Trigg and Leach, were also rejected. The only significant difference in technique accuracy across levels of OCTV was found in hypothesis 2-a-7. Here, the level of FD for the Winters technique was significantly higher under OCTV3 than under either OCTV1 or OCTV2. Thus, depending on the forecasting technique employed, the variation of order cycles must be considered when reviewing forecast accuracy. The necessity of this constraint is even more clear upon a review of the hypotheses below.

Hypothesis 2-b tested the significance of the differences in FD, given DMD2, across the non-control levels of FORTECH and OCTV. Similar to the conclusion reached for hypothesis 2-a (considering the same relationships under DMD1), this hypothesis was confirmed. However, while each of the main effects as well as their interaction was significant under DMD1, only the interaction was significant under DMD2. The hypotheses examined next indicate that the lack of significance for the main effects under DMD2 was due to the confounding effects of the interaction.

Hypotheses 2-b-1 through 2-b-3 tested the significance of differences in FD for each OCTV level across the four forecasting techniques, given DMD2. Each was confirmed because of the inclusion of the perfect forecast. However, the Scheffé tests indicated additional significant differences under OCTV1 and OCTV3. In each case, the Winters technique was significantly more accurate than either the Brown or Trigg and Leach. Only under OCTV2 was there no significant difference between the non-control forecasting techniques.

Hypotheses 2-b-4 through 2-b-7 provided further support for the consideration of OCTV in statements regarding forecast accuracy. Hypotheses 2-b-4, 2-b-5 and 2-b-6, testing variations in the accuracy of the perfect, Brown, and Trigg and Leach forecasts, respectively, across OCTV levels, were rejected.
For these techniques, the level of order cycle time variance did not significantly impact forecast accuracy. However, a significant impact was indicated for the accuracy of the Winters technique in the confirmation of hypothesis 2-b-7. Earlier, similar results were reported for hypotheses 2-a-4 through 2-a-7, when the only significant difference resulted for the Winters technique. In that instance, DMD1 was given and the Scheffé test indicated the accuracy of the Winters technique to be significantly less under OCTV3 than under either OCTV1 or OCTV2. Given DMD2, the Scheffé test subsequent to hypothesis 2-b-7 found an opposite result. Here, the accuracy of the Winters technique was significantly higher under OCTV3 than it was under OCTV2. Thus, the level of OCTV had a significant impact on the accuracy of the Winters technique, and the nature of this impact depends upon both the demand pattern and the level of OCTV.

Operating Discrepancy (OD)

Hypothesis 1 stated that there would be significant differences in the levels of OD when considered across all levels of the independent variables. Hypothesis 2 stated that such differences would be found when considering the variations in OD across the non-control levels of the independent variables. Each was confirmed. However, while the main effects were significant in each case, none of the three- or two-way interactions were. The reasons for this are evidenced in the hypotheses reviewed below.

Hypothesis 2-a tested the significance of the differences in the level of OD across the non-control levels of OCTV and FORTECH, given DMD1. This hypothesis was confirmed. In addition, the level of interaction between the main effects was relatively non-existent, indicating that the levels of OD are relatively consistent across each of the forecasting techniques.

Hypotheses 2-a-1, 2-a-2, and 2-a-3 tested the differences in the levels of OD across each of the forecasting techniques given OCTV1, OCTV2 and OCTV3, respectively, under DMD1. Hypotheses 2-a-1 and 2-a-3 were confirmed. For hypothesis 2-a-1, the results of the Scheffé tests indicated that the OD level under the Brown and Trigg and Leach techniques was significantly higher than that under the perfect forecast or the Winters technique. Hypothesis 2-a-2, however, was rejected.

Hypotheses 2-a-4 through 2-a-7 provided further information on the nature of these differences. These hypotheses tested the differences in OD across the levels of OCTV under each forecasting technique. Each was confirmed, as expected, due to the inclusion of OCTV1, the constant order cycle time. While the perfect forecast consistently resulted in a value of zero for FD, such was never the case in the analysis of the constant order cycle time and OD. In each case, regardless of the forecasting technique in question, a significant operating discrepancy was found with a constant order cycle time. Thus, even given a perfect forecast and a constant order cycle time, a distribution system may experience discrepancies between the levels of demand and sales. Further support for this statement is illustrated below.

Hypothesis 2-b stated that a significant difference would be found in the level of OD when considered across the non-control levels of OCTV and FORTECH, given DMD2. This hypothesis was confirmed. In addition, there was a significant interaction between the main effects on the level of OD. The nature of this interaction is evidenced in the conclusions reached regarding hypotheses 2-b-1 through 2-b-7.
Hypotheses 2-b-1, 2-b-2, and 2-b-3 tested the levels of OD across the four forecasting techniques for each level of OCTV, given DMD2. Only hypotheses 2-b-1 and 2-b-2 were accepted. For each, the level of OD occurring under Brown and Trigg and Leach was significantly higher than that under the perfect forecast and the Winters forecast. As under DMD1, however, the final hypotheses detail the most critical information.

Hypotheses 2-b-4 through 2-b-7 tested the differences in OD levels across the levels of OCTV for each forecasting technique, given DMD2. Each was confirmed, adding additional support to the corresponding conclusions reached under DMD1. Here again, even when given a perfect forecast and a constant order cycle time, the test distribution system experienced a significant level of OD. The subsequent Scheffé test illustrated that the OD level under OCTV1 was significantly less than that under OCTV2 or OCTV3, as logically expected. However, the fact that significant levels of OD are experienced even under a constant order cycle time bears important implications for operational decision making.

Generalized Research Conclusions

Based on the analysis of simulation results, several generalized research conclusions can be stated. Two tables are presented to aid development of the inferences drawn below. Each table is based upon the results presented in Chapter VI. Specifically, Table 7.1 presents the mean values of each response variable as a percentage of the average period demand under DMD1. Table 7.2 provides corresponding data under DMD2. Each is employed in the following discussion.

Average Inventory

Throughout this research, the order quantity at the DC remained constant at 10 days of forecasted sales. Any variation in the forecast thus resulted in a change in inventory levels. The conclusions developed for average inventory indicate only one constant pattern. For each demand pattern and each level of order cycle time variation tested, the perfect forecast consistently resulted in the highest level of average inventory. This result is evident from an evaluation of the average inventory percentages in Tables 7.1 and 7.2. There are two reasons for this result.

Table 7.1. Mean Levels of Response Variables as a Percentage of Average Period Demand Under DMD1

             Average
             Inventory   Stockouts   Sales    FD      OD
OCTV1:
  FORTECH1     29.4         4.9      95.0      0       5.0
  FORTECH2     27.9         9.2      91.0     10.4     7.2
  FORTECH3     27.2         9.4      90.6      9.6     6.5
  FORTECH4     23.8        13.5      86.5     10.6     5.2
OCTV2:
  FORTECH1     30.9         9.3      90.7      0       9.2
  FORTECH2     28.5        11.0      89.0     11.2    11.4
  FORTECH3     27.7        12.4      87.6     11.2    10.2
  FORTECH4     25.5        14.3      85.7     11.3     9.4
OCTV3:
  FORTECH1     32.0         9.5      90.5      0       9.4
  FORTECH2     29.0        13.3      86.8     11.3    13.9
  FORTECH3     26.8        15.7      84.2     11.4    12.6
  FORTECH4     22.9        22.4      87.7     18.3    11.8

Table 7.2. Mean Levels of Response Variables as a Percentage of Average Period Demand Under DMD2

             Average
             Inventory   Stockouts   Sales    FD      OD
OCTV1:
  FORTECH1     30.6         4.6      95.4      0       4.6
  FORTECH2     22.8        22.0      77.9     25.4     8.1
  FORTECH3     22.7        21.5      78.5     24.3     8.0
  FORTECH4     22.0        17.7      82.2     16.6     5.3
OCTV2:
  FORTECH1     33.0         8.7      91.3      0       8.7
  FORTECH2     23.2        12.1      87.8     24.     12.7
  FORTECH3     25.6        19.6      80.4     23.1    12.5
  FORTECH4     17.7        28.4      71.6     25.      9.0
OCTV3:
  FORTECH1     33.7         9.3      90.7      0       9.3
  FORTECH2     22.3        26.4      73.6     26.     12.8
  FORTECH3     21.2        27.0      73.0     26.0    12.4
  FORTECH4     22.8        19.4      80.6     19.     12.3

(Several FD entries in Table 7.2 are truncated in the source scan; the missing final digits are left blank rather than reconstructed.)

First, each of the demand patterns employed exhibited pronounced variations in the level of monthly sales. As specified, this variation was significantly higher for DMD2 than for DMD1.
The perfect forecast was obviously attuned to these changes in each case, consistently providing an exact identification of the turning points. This is evidenced by the zero levels of FD in the tables. Each of the other forecasts lagged behind changes throughout each demand pattern. The effect of this lag was indicated by the Scheffé test to be significant for each forecast technique.

The second reason is more complicated. Chapter I developed the contention that current operational forecasting techniques are deficient in the sense that future forecasts (being projections of sales levels) are biased to the extent that they project the effects of Operating as well as Forecast Discrepancy. To the extent that Operating Discrepancy is negative (i.e., shipments arrive later than expected), the ability of the system to achieve the forecasted level of sales is reduced. By projecting past sales to estimate future levels and setting inventory to meet those levels, the forecast and inventory levels are effectively lowered. The result is that the forecast is inherently biased and the future performance of the system is adversely affected.

Each of the confounding factors noted above varies in its impact on system inventory depending upon the forecasting technique in use. This is especially true when the forecast employed is adaptive, altering the smoothing constant in response to "forecast error" in addition to sales. Any inclusion of Operating Discrepancy in the measure of forecast error will result in unjustified changes in the smoothing constant. As the "error" of the forecast increases, additional weight is given to more recent periods' sales in generating new forecasts. As noted above, however, recent sales potential is reduced by the existence of Operating Discrepancy, causing the forecast error to appear larger than it really is. Thus, the impacts of Operating Discrepancy on inventory, forecast accuracy and sporadic fluctuations of the forecast itself are increased when adaptive techniques are employed. In addition, it should be noted that these effects are circular and dynamic within the system, biased forecasts resulting in biased performance over time and vice versa.

The effects of these confounding factors are illustrated in a review of Table 7.1. The level of FD under each forecasting technique increased as the level of OCTV increased. For example, the FD of Trigg and Leach as a percentage of demand was 9.6 under OCTV1, 11.2 under OCTV2 and 11.4 under OCTV3. In Table 7.2 the relationship is more complicated. The Brown and Trigg and Leach techniques were slightly more accurate under OCTV2, while the opposite is true of the Winters technique. This contrast is an indication that the relationship between FD and OCTV varies according to the demand pattern in question. In other words, this reflects the three-way interaction between OCTV, DMD and FORTECH on the level of FD, which was found to be significant in Chapter VI.

Operationally, this impact may be difficult to identify when safety stocks are employed. To the extent that such safety stocks eliminate stockouts, the bias in the forecast, and thus inventory, will be reduced. However, serious implications still remain.

A comparison of the average inventory percentages for the perfect forecast (FORTECH1) indicates a second important result. For each demand pattern, the level of average inventory as a percentage of average demand increased as the level of order cycle time variability increased.
As indicated by the Scheffé tests (see Figure 6.6, page 175), however, none of these differences was significant, and each may be assumed to have occurred by chance at α = .05. In addition, in comparing the levels of average inventory of each forecasting technique across the levels of OCTV, a similar result was not found.

In addition to the above factors, it is also evident that the relatively more simple forecasting techniques generally resulted in the higher levels of average inventory. As a review of Figure 6.1 (page 152) indicates, this was true in all cases under demand pattern 1. More significant, however, is that the average inventory levels under the more simple forecasting techniques were less affected by increases in order cycle time variability. Thus, the inventory levels under Brown were relatively constant across the levels of OCTV, whereas those under the Winters technique varied significantly.

For any of the forecasting techniques, the impact of changes in the level of OCTV upon average inventory is related to the variability of the demand pattern. Thus, under the more stable pattern of DMD1, system performance (in terms of average inventory) under each forecasting technique was relatively stable across OCTV levels. Given the greater variation of DMD2, however, the impact of OCTV upon average inventory levels was more pronounced. Again, however, the simpler forecasting techniques were the most consistent. Thus, the more complex the forecasting technique, the greater the confounding effects of OCTV upon average inventory levels, regardless of the demand pattern.

The effects of increased variability in the demand pattern were also significant. Consider the average inventory levels under DMD1 (Table 6.37, page 205) as compared to those under DMD2 (Table 6.42, page 212). Here, despite the increased level of demand, the average inventory levels were relatively consistent between DMD1 and DMD2. Stockouts, however, increased by more than 100% for both Brown and Trigg and Leach. The increase under the Winters technique was less, although still significant. The same relationship is evident in a comparison of system sales as a percentage of demand in Tables 7.1 (page 238) and 7.2 (page 239). Here, sales as a percentage of demand were significantly lower under DMD2 for each forecasting technique regardless of the level of OCTV.

This decrease is primarily due to the fact that the smoothing constants for each technique remained constant across the demand patterns. Had the constants been increased when forecasting DMD2, it is logical that each technique would have been able to adapt more quickly to changes in the level of demand from period to period. Recall, however, that the Trigg and Leach technique is by
design adaptive. Regardless of this fact, this technique did not result in consistently higher levels of average inventory. On the contrary, as noted above, this technique was more adversely affected by a combination of increased demand and order cycle time variability than the more simplistic Brown technique.

Based on the above factors, general conclusions as to the levels of average inventory are outlined as follows:

1. Increases in the level of demand variability result in increased variation in the level of average inventory;
2. Increases in the level of order cycle time variability result in increased variation in the level of average inventory;
3. Increased complexity in the forecasting technique employed did not nullify the adverse impacts of increased demand variability;
4. Average inventory levels are less affected by increases in the level of order cycle time variability when a more simple forecasting technique is employed; and
5. The greater the variability in the level of demand, the greater the impact on average inventory levels caused by increases in the level of order cycle time variability.

The next section reviews conclusions as to system performance in terms of sales and stockouts.

Sales and Stockouts

Inferences regarding sales and stockouts are presented together because, when combined, they represent total demand. As such, the level of stockouts also represents Total Discrepancy, or the difference between demand and sales.

First, the perfect forecast consistently resulted in a lower level of stockouts. However, it is important to note that even when given a constant order cycle time, stockouts still existed. As the FD in each of these cases was zero, the resulting stockouts are attributed to distribution system performance. However, as the order cycle times were constant, it is difficult to conceive of any Operating Discrepancy. The stockouts experienced resulted from the fact that the "perfect forecast" replicated the total level of demand over the forecast period and not the rate at which demand impacted against the system. The forecast period employed was 20 days in length. The ROP and EOQ were set at 10 days each. However, if less than 50% of the period's demand was experienced by the system before the tenth day, the system could not reorder until after that date. Consequently, it was impossible for the replenishment shipment to arrive in time to service demand experienced on the twenty-first day.

When considering the sales (or stockouts) under each of the operational forecasts, a logical relationship to average inventory was found. In each possible comparison at a given level of OCTV, an increase in average inventory resulted in an increase in the level of sales. In addition, a comparison of the sales and corresponding average inventory levels between OCTV2 and OCTV3 reflected the impact of increased order cycle time variance. Take FORTECH2, for example. Under OCTV2, this technique resulted in a sales level of 87.8% with an average inventory of 23.2%. Under OCTV3, the slight reduction in average inventory to 22.3% was accompanied by a relatively large drop in sales to 73.6%. That this result was in fact caused by the change in OCTV is seen in the increase in OD while FD remained relatively constant.

Several important points result from an analysis of demand patterns. First, for DMD1, Figure 6.3 (page 175) shows the level of sales was inversely related to the level of forecast complexity across all levels of OCTV. That is, among the operational forecast techniques, the simulated system consistently achieved the highest level of sales under the Brown approach. In addition, the less complex the forecasting technique, the smaller the adverse impact of increases in order cycle time variability. Figure 6.2 (page 164) provides corresponding information in terms of stockouts. Here again, the performance of the system was consistently better under Brown than under Winters or Trigg and Leach.
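The timing mechanism just described can be made concrete with a toy replenishment loop (a sketch only, with hypothetical parameters, not the SPSF model): daily sales are capped by on-hand stock, the reorder point carries no safety stock, and orders arrive after a constant or a variable lead time.

```python
import random

def simulate_dc(forecast_daily, lead_time_fn, days=240, seed=7):
    """Toy single-echelon DC: order 10 days of forecasted sales whenever
    the inventory position falls to 5 days of forecast (mean lead-time
    demand, with no safety stock, as in this research)."""
    random.seed(seed)
    on_hand, on_order = 10 * forecast_daily, []   # opening stock; (day, qty)
    sales = stockouts = 0
    for day in range(days):
        on_hand += sum(q for t, q in on_order if t == day)   # receive shipments
        on_order = [(t, q) for t, q in on_order if t > day]
        demand = random.randint(80, 120)                     # uncertain demand
        shipped = min(demand, on_hand)                       # sales capped by stock
        on_hand -= shipped
        sales += shipped
        stockouts += demand - shipped
        position = on_hand + sum(q for _, q in on_order)
        if position <= 5 * forecast_daily:                   # reorder point
            on_order.append((day + lead_time_fn(), 10 * forecast_daily))
    return sales, stockouts

print(simulate_dc(100, lambda: 5))                         # constant lead time
print(simulate_dc(100, lambda: random.choice([2, 5, 8])))  # variable lead time
```

With the policy unchanged, the variable lead time alone produces noticeably more stockouts, illustrating how operating uncertainty degrades service even when the forecast of total demand is adequate.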
The impact of increases in order cycle time variability is also consistent. Regardless of the forecast technique employed, increases in order cycle time variability lead directly to increased stockouts and reduced sales.

Under the highly seasonal pattern of DMD2, the forecasting techniques resulted in sales and stockout levels which varied significantly across the levels of OCTV. Under OCTV1, the constant order cycle, the Winters technique resulted in the highest level of sales due to its specific consideration of the seasonal factor of the time series. This same result was evident under OCTV3, the highest level of order cycle time variability. However, under OCTV2, the Brown and the Trigg and Leach techniques resulted in superior performance.

In the above comparison it is important to stress that each forecasting technique was projecting exactly the same demand pattern under each level of OCTV. As such, the inconsistency in the rankings of the forecasting techniques by sales levels across the levels of OCTV results from the inclusion of Operating Discrepancy in estimating future sales. Finally, it is concluded that adaptive forecasting techniques are more adversely affected by levels of Operating Discrepancy than those that are non-adaptive.

In comparing the results under the two demand patterns, one further point resulted. As the combined levels of demand and order cycle time uncertainty increase, the probability that system performance will meet expected levels decreases significantly. This is especially evident for the more complex forecasting techniques. When the levels of FD and OD tend to cancel each other, the total discrepancy between period demand and sales understates the inefficiencies of the system. This understatement is subsequently incorporated into the generation of future forecasts. The forecasting techniques employed in this research varied significantly in response to this relationship. In general, however, the basic Brown technique resulted in the most consistent system performance.

Based on the above factors, general conclusions as to the effects of the independent variables on sales and stockouts may be outlined as follows:

1. Increases in the level of demand variability result in decreased sales as a percentage of total demand;
2. Increases in the level of order cycle time variability result in decreased sales as a percentage of total demand;
3. The greater the variability in demand and order cycle times, the greater the possibility of a cancellation effect between Forecast Discrepancy and Operating Discrepancy;
4. As the level of OCTV increases, system sales as a percentage of demand are more consistent under a simple forecasting technique (i.e., Brown's exponential smoothing) than under more complex techniques (i.e., Trigg and Leach or Winters); and
5. The greater the variability in the level of demand, the greater the impact of increases in the level of OCTV upon system sales and stockouts.

The next section presents conclusions as to the levels of FD.

Forecast Discrepancy (FD)

The relative accuracy of the forecasting techniques examined in this research was significantly affected by changes in the levels of order cycle time variation and the demand pattern. Specifically, given DMD1, the only significant difference occurred under OCTV3. Here, as presented in Table 7.1 (page 238), Winters was significantly less accurate than
Thus, it is not enough merely to specify a change in the type of forecasting technique and expect to safely predict changes in systems performance. Tables 7.1 and 7.2 heavily underscore this fact in that under one condition or another, each of the three forecasting techniques achieved in the lowest level of FD at a level of OCTV. A second indication of the importance of the interactions is presented by a review of the levels of F0 and sales under 0CTV2 in Table 7.1. Here, while FORTECH3 was less accurate than FORTECH2, the level of sales was higher under FORTECH2. Several of the factors noted in the above discussions on systems performance were expected. Each forecasting was most accurate under DMDl. The surprise conclusion, however, was that the simpler the technique, the more consistent were the mean levels of F0 across the levels of OCTV. A review of Figure 6.4 (page 186) underscores this fact. Although the rankings of the forecasting techniques do vary across OCTV under each demand pattern, the Brown technique was clearly the most consistent. The Winters technique was in contrast the most sporadic. Tables 6.37 (page 205) and 6.42 (page 212) present additional evidence of the impact of order cycle time variation on the accuracy of each forecasting technique. A review indicates the level of F0 varied according to the level of OCTV regardless of the demand pattern. This variation was significantly higher for the Winters technique. 250 Finally, while the FD under the Brown technique increased with each increase in OCTV, the same was not true of the other two techniques. Thus, given an increase in the level of OCTV, it is concluded that the impact of Brown's technique on FD may be projected with more confidence than the other two techniques. In comparing the levels of FD between forecasting techniques for DMDl and DM02, another relationship developed. There was consid- erable variability in the impact of increased demand uncertainty on technique accuracy. In general, the more simple technique showed the greatest increase in F0. The increase was almost double in each case. The Winters technique was the least affected except under 0CTV2, where the Trigg and Leach model resulted in the smallest increase in FD. Thus, overall, the more complex the forecasting technique, the greater its ability to adapt to increases in the uncertainty of the demand pattern. Based on the above factors, general conclusions as to the effects of the independent variables on the level of FD are outlined as follows: 1. Increases in the variability of the demand pattern result in considerable increases in the level of Forecast Discrepancy regardless of the forecasting technique employed; 2. Given no change in the level of OCTV, the more complex the forecasting technique the greater its ability to adapt to increases in the variability of demand; 251 3. The more complex the forecasting technique, the greater the change in Forecasting Discrepancy as a result of increases in the level of OCTV; 4. Increases in the level of OCTV effect forecast accuracy regardless of the technique employed or the demand pattern being projected; 5. The more complex the forecasting technique the less consistent is its performance over variations in both demand and order cycle time variability. The next section presents conclusions as to the levels of Operating Discrepancy. 
The next section presents conclusions as to the levels of Operating Discrepancy.

Operating Discrepancy (OD)

The levels of OD allow more specific conclusions than have been drawn to this point because no significant interactions confound the results. Regardless of the demand pattern or the forecasting technique in question, the level of OD increases consistently as the level of OCTV increases. In addition, the net change in the levels of OD from OCTV1 to OCTV2 is consistently greater than the change between OCTV2 and OCTV3. Finally, changes in the type of forecasting technique being employed or the demand pattern in question have little, if any, significant impact on the levels of OD.

Returning to Tables 6.37 (page 205) and 6.42 (page 212), a review of the OD columns clearly indicates the first point. That is, regardless of the demand pattern or the forecasting technique, increases in the level of OCTV result in increased levels of OD. This relationship was consistent in every combination of forecasting techniques and demand patterns employed in this research. More important is the fact that this result did not vary in its magnitude according to the forecast technique in question. Thus, the direction of the change in system performance resulting from an increase or decrease in order cycle time consistency may be predicted with a relatively high degree of confidence.

The FD, OD and stockout entries in either Table 6.37 or 6.42 illustrate the cancellation effect between FD and OD. At no time was the level of stockouts greater than the summation of FD and OD. In fact, only for a perfect forecast was there a direct relationship between this summation and stockouts.

Tables 7.1 (page 238) and 7.2 (page 239) provide a further illustration of the effects of OCTV on system performance. A review of the sales figures for a perfect forecast (FORTECH1) indicates that as little as 90.7% of demand was satisfied under such a "perfect" environment. As discussed in Chapter VI, this discrepancy between sales and demand may or may not be defined as OD. The important point is that failure to recognize this relationship introduces the possibility of incorrect modifications of future forecasts.

Based on the above discussion, general conclusions as to the effects of the environmental variables on OD are as follows:

1. Increases in the level of OCTV lead directly to increases in the level of Operating Discrepancy regardless of the demand pattern being experienced or the forecasting technique being employed;
2. Even under a perfect forecast and a constant OCTV there will be a discrepancy between the level of demand and sales; and
3. The level of Operating Discrepancy does have an adverse effect on forecast accuracy when forecasts are based on past sales levels.

Research Implications

The conclusions illustrate that variations in the levels of demand uncertainty, operating uncertainty, and forecast accuracy do have a significant impact upon distribution system performance. In addition, it was determined that a significant interaction exists between each of these factors as they combine to influence performance. The conclusions thus reached have implications for the planning, management and control of the distribution system. This section delineates these implications.

1. As variations occur in the levels of demand uncertainty, operating uncertainty and forecast accuracy, the performance of the distribution system varies significantly. When considered independently, a change in the level of each of these factors results in a direct though non-proportional change in system performance. For
When considered indepen- dently, a change in the level of each of these factors results in a direct though non-proportional change in system performance. For 254 example, an increase in the variation of order cycle times, all other factors being held constant, results in an increased level of Operating Discrepancy. When the combined impacts of any two or all three of the factors is considered, however, the change in system performance is directly dependent upon the levels of each factor being considered. Thus, the combined impacts of these factors may in fact be offsetting. As such, a decrease (increase) in forms of uncertainty combined with an increase (decrease) in forecast accuracy does not always result in positive (negative) changes in system performance. This supports the contention made in Chapter I that any improvement in forecast accuracy must be made with due consideration given to the ability of the distribution system to effectively support the forecasted level of sales. 2. It is reasonable to assume that the levels of uncertainty and forecast accuracy will change over time. Management must constantly monitor the levels of each of these factors and relate them to per- formance variables. This is necessary because of the interactions between the factors. For example, assume that the manager recognizes a significant increase in the number of stockouts and is certain there have been no significant change in the order cycle time variance. Based upon the conditions employed in this research, it would be incorrect for him to make the assumption that forecast accuracy has significantly fallen. Although such may be the case, it is also possible that the forecast has in fact become more accurate. 255 3. The above point leads directly to a second implication introduced in Chapter I. That is, existing forecasting techniques provide no mechanism whereby the cause of the Forecast Discrepancy may be identified and measured. Thus, despite the fact that the true level of demand is not known, the prevailing practice of employing previous periods' sales levels in the generation of new forecasts is incorrect. Such a procedure adapts the level of the forecast incorporating existing Operating Discrepancy as well as the Forecast Discrepancy. The impacts of this practice are two. First, the automatic identification of forecast error as the difference between sales and the forecast reduces the likelihood that repetitive inefficiencies within the distribution system will be identified. Second, to the extent that the actual level of Operating Discrepancy occurring in the system varies, incorporation will tend to magnify future forecast fluctuations. 4. The addition of safety stocks to the base level of inventory to protect against demand and operating uncertainty may serve to obscure each of the effects noted immediately above. The typical procedure for setting safety stocks is to statistically combine some estimate of the variance in order cycle times with that of daily sales in order to estimate the standard deviation of expected sales during order cycle. When this is accomplished, safety stocks are specified based on the combined standard deviations. The results of this research indicate that these variances are not 256 necessarily additive. In fact, it was illustrated that they are often off-setting. No safety stocks were included in this research, permitting this relationship to be examined. 
To the extent that safety stocks achieve their goal of eliminating stockouts, such relationships (whether additive or off-setting) become more difficult to identify. 5. Each of the above implications provide direct support for the use of the systems concept on two levels in distribution. First, the importance of the interrelationships between forecast accuracy, demand uncertainty and operating uncertainty must be recognized. Any attempt to reduce the adverse effects of any of these factors requires consideration of those remaining. Thus, it is the total or combined impact of these factors which is of critical importance. Second, the importance of viewing the entire channel in distribution planning is underscored. The nature of the links between channel members directly determines the speed and consistency of order cycle times. Any variance in the consistency of order cycles has been shown to directly effect not only stockouts but also future demand forecasts. 6. It is the rate of flow in expected demand which is critical to operational planning and not the mean expected daily demand. The portion of this research which examined the performance of a distri— bution system given a perfect forecast and a constant order cycle time support this implication. Regardless of the demand pattern employed, a significant level of stockouts occurred in each simulation. 257 7. In tenms of total system performance, the greatest consistency in this research was achieved using the most simple forecasting technique. The more complex the forecasting technique, the greater the likelihood of significant interactions between forecast accuracy and operating efficiency. Thus, while Brown's simple expo- nential smoothing may be less accurate in projecting a highly variable demand pattern, it is also less likely to be adversely effected by order cycle time variability. 8. In summary, two points repeatedly resulted from the research. First, current forecasting techniques have the potential to inherently bias the accuracy of future forecasts and the subsequent perfonnance of the distribution system if no attempt is made to dis- tinguish between and accommodate Forecast and Operating Discrepancy. Second, the nature of the interaction between demand uncertainty, operating uncertainty and forecast accuracy requires recognition of the systems concept in seeking to reduce the impacts of any or all of these factors. When these two points are recognized, the importance of continually monitoring system performance is underscored. Even though the manager will never be so fortunate as to have a perfect forecast or constant order cycle time, increased understanding should lead to a more accurate modification of future forecasts and the opportunity to similarly adjust operating parameters. 258 Limitations of the Research Typical of all simulation studies, the generalizability of this research is constrained to the extent that the model employed replicates actual distribution systems. The SPSF Testing Environment used in this research has been subjected to extensive validation.‘ An additional limitation is the number of levels employed for each of the independent factors. Ideally, numerous demand patterns, forecasting techniques and order cycle time distributions would have been employed. 
Limitations of the Research

Typical of all simulation studies, the generalizability of this research is constrained to the extent that the model employed replicates actual distribution systems. The SPSF Testing Environment used in this research has been subjected to extensive validation.¹ An additional limitation is the number of levels employed for each of the independent factors. Ideally, numerous demand patterns, forecasting techniques and order cycle time distributions would have been employed. However, a simultaneous dissertation was completed analyzing the accuracy of various demand generation procedures, including the method employed in this research.² The forecasting techniques employed may be considered representative of the varying levels of sophistication currently available, based on the literature review. Finally, previous research has shown the generalizability between the order cycle time probability distributions employed in this research and selected other probability distributions.³ Notwithstanding these factors, the levels of the independent variables employed in this research are not intended to constitute random samples.

¹Donald J. Bowersox, David J. Closs, John T. Mentzer, Jr., and Jeffrey R. Sims, Simulated Product Sales Forecasting--Documentation (East Lansing, Mich.: Graduate School of Business Administration Research Bureau, Michigan State University, forthcoming).

²John T. Mentzer, Jr., "Simulated Product Sales Forecasting: Analysis of Market Area Demand Simulation Alternatives" (Ph.D. dissertation, Michigan State University, 1978).

³George D. Wagenheim, "The Performance of a Physical Distribution Channel System Under Various Conditions of Lead Time Uncertainty: A Simulation Experiment" (Ph.D. dissertation, Michigan State University, 1974).

Future Research

This research has generated a wide range of areas which offer potential for additional analysis. The major areas may be delineated as follows:

1. Safety Stock Policy: This research did not employ safety stocks and, as noted, the levels of variance in demand and operating uncertainty are not necessarily additive. As such, similar research utilizing accepted methods for calculating safety stocks could offer important results to the distribution manager. Further, it might be determined whether or not setting a safety stock at a specific number of standard deviations does in fact result in the expected level of protection in a multi-echeloned system (see the simulation sketch following this list).

2. Forecast Generation: Despite the fact that FD and OD were isolated in this research, additional research into these factors appears justified. Specifically, what are the differences in the performance of a system when future forecasts are generated using sales and forecast error rather than demand potential and Forecast Discrepancy?

3. Complex Channel Structures: This research employed only one facility at each of three echelons and illustrated significant interactions in the independent variables. Significant implications might be achieved through an investigation of the effects of such interactions on a more complex system encompassing multiple locations at each echelon.

4. Logistical Management: This research dealt with the uncertainties in physical distribution, assuming supply continuity. A logical extension would be to expand the model to consider the effects of uncertainty in materials management and internal inventory transfer, especially in light of the potential energy and materials shortages.

5. Economic Sensitivity Analysis: This research has defined variations in system performance as a result of forecasting and operating uncertainties. Based on these results, research analyzing improvements in system performance for a given dollar expenditure would be most valuable. Specifically, given a fixed amount of capital to invest in improving the operating system, what relative returns would result from making this investment in increased levels of safety stock versus improved forecasting or more consistent order cycle times?

6. Effects of Stockouts: One of the most difficult problems facing the distribution manager is caused by the inability to measure the impacts of stockouts on future demand. Although the assumptions and limitations of any research in this area would have to be rather broad, additional information would be of immense value.

7. Strategic Inventory Placement: Given that both operating and forecast efficiency have a direct impact on system performance, the question arises as to the impact of centralizing channel inventories. Specifically, from a total channel perspective, what effects does inventory centralization have on the relationships between FD, OD and overall system performance?
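As a starting point for the first area, the protection actually delivered by a safety stock set at z standard deviations can be checked by simulation. The demand and order cycle distributions below are hypothetical stand-ins rather than SPSF parameters; the sketch shows only that the achieved protection need not equal the nominal level once order cycle length varies.

```python
import random, statistics

random.seed(2)
z = 1.65                                      # nominal 95 percent protection

# Build an empirical distribution of demand during a variable order cycle.
ddlt = []
for _ in range(10_000):
    cycle = max(1, round(random.gauss(10, 2)))               # days, stochastic
    ddlt.append(sum(random.gauss(100, 15) for _ in range(cycle)))

mu = statistics.mean(ddlt)
sigma = statistics.stdev(ddlt)
reorder_point = mu + z * sigma
achieved = sum(d <= reorder_point for d in ddlt) / len(ddlt)
print(f"achieved protection: {achieved:.3f}")  # compare against the nominal 0.95
```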
APPENDIX

SELECTED ABSTRACTS

Adelson, R. M. "The Dynamic Behavior of Linear Forecasting and Scheduling Rules." Operational Research Quarterly 17 (No. 4; December 1966): 447-462.

In the first portion of this article, Adelson presents a discussion of some of the basic conclusions made by D. H. Ward (The Statistician, Vol. 13, 1963, p. 173) in his article concerning trend-corrected exponential smoothing models. Some of the shortcomings of the exponential smoothing technique are discussed, most notably the method's inability to cope with a steadily changing level of demand. The basic model underlying trend-correcting methods (i.e., those of R. G. Brown, P. R. Winters, and Box and Jenkins) is then presented with attention given to transient response and conditions of stability. In the final portion of the article, the author presents a linear production and stock control scheme using exponential smoothing. The tradeoff between the variance of stock levels and that of production is analyzed. The article concludes with the algebraic derivations of the author's model.

Bates, J. M., and C. W. J. Granger. "The Combination of Forecasts." Operational Research Quarterly 20 (No. 4; December 1969): 451-468.

By experimenting with airline passenger data, the authors show that combinations of forecasts may be able to provide the analyst with better forecasts, although many of the individual techniques normally provide important independent information that cannot be obtained from the aggregate figure given by a combined result. In combining techniques, the major obstacle lies with the definition of the weights to be attributed to each one of them. There are many ways to do this, and the aim of the authors was to zero in on the one likely to yield the lowest forecast errors for the combined forecasts. After setting up five different methods of attributing weights to forecasts, they went on to combine Brown's exponential smoothing, Box-Jenkins' adaptive forecasting, and other techniques, and came to the conclusion that the combined efforts resulted in a lower variance of errors. Such a conclusion, however, ought not to be taken as a general rule because of the limited scope of the sample chosen, and because of the possibility of improving on forecasting through the combination of more than just two techniques.
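One of the weighting schemes Bates and Granger examine makes each weight inversely proportional to the variance of a forecast's past errors. The sketch below illustrates that idea alone; the numbers are invented, and the covariance between the two error series is ignored.

```python
def combine(f1, f2, var1, var2):
    # Weight each forecast by the inverse of its past error variance; the
    # weights sum to one, so the result leans toward the historically more
    # accurate technique.
    k = var2 / (var1 + var2)          # weight on forecast 1
    return k * f1 + (1 - k) * f2

# e.g. a smoothing forecast of 120 (error variance 64) combined with a
# Box-Jenkins forecast of 130 (error variance 36):
print(combine(120, 130, 64, 36))      # 126.4
```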
Batty, M. "Monitoring an Exponential Smoothing Forecasting System." Operational Research Quarterly 20 (No. 3; September 1969): 319-325.

This paper is primarily concerned with the problem of defining a monitoring system capable of making exponential smoothing an automatic technique, one in which adjustments in the value of α are deployed by the forecasting system itself. In this vein, it briefly reviews Brown's and Trigg's contributions to this area (the tracking signal), and it builds upon them by defining the variance of the sum of errors (the variance of the smoothed error for one less degree of freedom divided by the square of the smoothing constant), as well as new values for the tracking signal limits (obtained through simulation) which could be employed in the monitoring of a forecasting system.

Box, G. E. P., G. M. Jenkins, and D. W. Bacon. "Models for Forecasting Seasonal and Non-Seasonal Time Series." Advanced Seminar on Spectral Analysis of Time Series. Edited by B. Harris. New York: John Wiley & Sons, 1967, pp. 271-310.

Box, G. E. P., and G. M. Jenkins. "Some Recent Advances in Forecasting and Control." Applied Statistics 17 (No. 2; 1968): 91-108.

In each of these two articles, Box and Jenkins develop, mathematically and logically, the procedure for use of the Box-Jenkins approach to time series forecasting. Initial effort is devoted to a description of the development of the multitude of possible time series models used in this approach. Further effort is devoted to the steps in the approach, i.e., identification of possible models for use with a particular time series in question, fitting of the time series to the models, and diagnostic checking. Illustrations and formulations of trend and seasonal models are given with explanation of applying this spectral analysis to forecasting the time series. The latter parts of the articles are devoted to the explanation of what the authors term seasonal and linear dynamic models, i.e., models in which the value of Y is determined by X, not time.

Brown, Robert G. "Forecasting in Physical Distribution--Inventory Management." Transportation and Distribution Management 11 (No. 3; March 1971): 43-46.

In this article, Brown presents a very basic method of forecasting for inventory levels. From the forecast discrepancy over a number of periods, he finds the "mean absolute deviation," or mean difference between forecasted and actual sales. Then, using standard deviations, he illustrates that the chance of an out-of-stock situation may be mathematically calculated, based on the given forecast. Brown also deals with the uses of "product family forecasting," which is actually a breakdown of the sales of a product family according to major characteristics. Where this breakdown is performed for consecutive periods, Brown suggests weighting the most recent period for use in forecasting. Finally, the author suggests determining a value which relates sales to product availability for use in setting inventories. This value is to be determined through market research studies.
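Brown's calculation can be paraphrased in a few lines. The conversion sigma of roughly 1.25 times the MAD holds for approximately normal forecast errors, and every figure below is invented for illustration.

```python
from statistics import NormalDist

def stockout_probability(on_hand, forecast, mad):
    # For roughly normal forecast errors, sigma is about 1.25 times the mean
    # absolute deviation; the stockout chance is then the upper tail of the
    # demand distribution beyond the stock on hand.
    sigma = 1.25 * mad
    return 1 - NormalDist(mu=forecast, sigma=sigma).cdf(on_hand)

print(round(stockout_probability(on_hand=130, forecast=100, mad=16), 3))  # ~0.067
```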
Brown, R. G., and R. F. Meyer. "The Fundamental Theorem of Exponential Smoothing." Operations Research 9 (No. 5; September-October 1961): 673-685.

The authors begin with a discussion of the justification for the development of the exponential smoothing technique, i.e., the decrease in forecaster subjectivism and in historical data requirements. Brown and Meyer use this point to develop a five-step process for calculating forecasts and forecast errors through the use of exponential smoothing. This process consists of data type analysis (continuous, discrete, or difference between time series), model definition, smoothing (the purpose of which is to estimate values for the coefficients in the model), forecasting, and error measurement. This discussion is followed by the mathematical derivation of the basic exponential smoothing formula,

Forecast₁ = α(Sales₀) + (1 - α)(Forecast₀),

from its fundamental theorem. This leads to a discussion of the determination of the appropriate value of α. The authors conclude with a reiteration of the fact that the major advantage of exponential smoothing lies in the ability to account for the effect of past data with a minimum of historical data requirements.
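Applied recursively, the formula requires only the latest observation and the previous forecast. A minimal sketch, with an invented sales series and an arbitrary smoothing constant of 0.2:

```python
def simple_exponential_smoothing(observations, alpha, initial_forecast):
    # Each pass applies Forecast(t+1) = alpha * Sales(t) + (1 - alpha) * Forecast(t),
    # so older observations decay geometrically and only one value is stored.
    forecast = initial_forecast
    for sales in observations:
        forecast = alpha * sales + (1 - alpha) * forecast
    return forecast

print(round(simple_exponential_smoothing([102, 97, 105, 110, 98], 0.2, 100.0), 2))
```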
Cantor, Jerry. Pragmatic Forecasting. Chicago: American Marketing Association, 1971.

The book covers forecasting both for new products, where subjectivity plays a determinant role, and for products which have already established a historical data base, where statistical techniques are more likely to be found. Specifically, in the area of statistical forecasting, Cantor concentrates on simple and weighted averaging (exponential smoothing) and presents a straightforward explanation of single, double, and triple smoothing. Although he deals with trends and trend shifts, there is nothing in his book in the area of adaptive smoothing, where the adjustment of α over time is automatically made as a function of a predefined criterion. The author also elaborates briefly on the mathematical procedure which calculates the forecast through the evaluation of its major factors such as trend, seasonality, and irregularity.

Chambers, John C., Satinder K. Mullick, and Donald D. Smith. An Executive's Guide to Forecasting. New York: John Wiley & Sons, 1974.

This guide divides the forecasting techniques into qualitative, time series projection, and causal models. Its advantage, however, when compared with similar texts, lies in up-to-date information and the thoroughness of presentation. Among the qualitative techniques, the Delphi method, market research, panel consensus, visionary forecasts, historical analogy, and cross-impact analysis are explained. In terms of time series methods, the authors discuss moving averages, exponential smoothing with a brief account of the adaptive methods, Box-Jenkins, Census Bureau X-11, trend projections, and learning curves. The causal models discussed include regression, econometric and input-output models, diffusion indices and the leading indicators approach. All of these methods, within their own category, are then compared to one another in terms of their accuracy (short-, medium-, and long-run), their ability to identify turning points, the amount of data required, and the time required to develop an application and make a forecast. The comparison also takes into consideration the cost of forecasting as well as the possibility, if any, of implementing the technique without the use of a computer. The computer is most likely to be found in the application of the quantitative methods (time series and causal models), but its use with qualitative models ought not to be regarded as impractical.

Chisholm, Roger K., and Gilbert R. Whitaker, Jr. Forecasting Methods. Homewood, Ill.: Richard D. Irwin, Inc., 1971, pp. 8-27.

Chisholm and Whitaker give a very rudimentary introduction to time series forecasting. Chapter Two is initiated with a numerical description of very basic time series forecasting, i.e., taking the previous period's sales and adding to this the trend, cyclical, and seasonal components. Each of these components is described in turn. This is followed by a numerical description of a three-period moving average forecast, and the chapter concludes with a mathematical development of the basic exponential smoothing forecast with multiplicative trend and seasonal factors.

Copulski, William. Practical Sales Forecasting. New York: American Management Association, 1970.

This book adopts quite a simplistic approach to forecasting in that it does not give detailed explanations of the more recent developments in the area. Despite this incompleteness, it does provide the reader with extensive coverage of the non-numerical techniques. Copulski briefly covers most of the numerical methods, but at the time series level, he goes no further than trend and cycle analysis. In addition, he tends to give his presentation a markedly macro orientation in which company sales are seen in the light of economic indicators, with correlation being used as the major matchmaker between the two.

Dancer, Robert, and Clifford Gray. "An Empirical Evaluation of Constant and Adaptive Computer Forecasting Models for Inventory Control." Decision Sciences 8 (No. 1; January 1977): 228-238.

These authors evaluated the performance of one constant and two adaptive forecasting models. Each model was employed to project three separate patterns: one each characterized as horizontal, trend, and seasonal. In each case, Mean Absolute Deviation was employed as the criterion to compare the models. The hypotheses tested were of the general form that different forecasting techniques would vary significantly in their accurate projection of different known demand patterns. Surprisingly, the authors point out that the constant and adaptive models yield statistically similar results (employing the t-test between means) under each time series pattern. In spite of this, however, the adaptive models are recommended for their automatic adjustment capability.

Eilon, S., and J. Elmaleh. "Adaptive Limits in Inventory Control." Management Science 16 (No. 8; April 1970): 533-548.

The main thrust of this article is on inventory control under conditions of demand non-stationarity. However, in defining adaptive limits and in proposing a more flexible system of dealing with inventory policies, the authors discuss demand forecasting and propose a modification of Winters' rule for handling seasonals and trend changes and adjusting smoothing constants. The modifications include the definition of a monitoring subroutine for periodic testing and reassessment, as well as a special treatment for those cases where negative demand values are forecasted. This special treatment involves equating the forecast to zero, while the maintenance of the negative value in the equation is dependent upon the prospective forecasting error and its effect on production levels.

Farley, John U., and Melvin J. Hinich. "Spectral Analysis." Journal of Advertising Research 9 (No. 4; December 1969): 47-50.

Farley and Hinich present a basic illustration of the possible uses of spectral analysis, a statistical technique designed to analyze cycles inherent in time series. An explanation of the method, its applications, and its basic mathematical basis are given. Spectral analysis approximates the cyclical components in a time series by use of a set of sinusoidal functions and involves an attempt to fit such a set of functions to an observed time series. When the periods are known, the method is similar to regression analysis and employs the sinusoidal functions as independent variables.
The method is suited for: (1) determining cyclical components or combinations of components in a time series; (2) testing the randomness of time series data; and (3) eliminating trends from the data. The authors conclude the article with a short discussion of the difficulties in applying the technique, the most apparent of which is the length of the data series required to obtain precision.

Ferratt, Thomas W., and Vincent A. Mabert. "A Description and Application of the Box-Jenkins Methodology." Decision Sciences 3 (No. 4; October 1972): 83-107.

Ferratt and Mabert present an excellent mathematical description of the rather complicated Box-Jenkins technique for the analysis and forecasting of time series. The authors give a lengthy application of the technique to the analysis and forecasting of monthly electrical power consumption in Ohio for 1954 through 1970. Each of the three stages of the Box-Jenkins methodology is described and clearly illustrated: model identification (the matching of sample autocorrelation functions against theoretical autocorrelation functions); parameter estimation (minimizing the sum of the squared residuals); and diagnostic checking (deciding if the theoretical model is an adequate representation of the observed series). This is by far the most clear-cut explanation of this difficult technique encountered in the literature reviewed.

Gross, Donald, and Jack L. Ray. "A General Purpose Forecast Simulator." Management Science 11 (No. 6; April 1965): 119-135.

Gross and Ray describe their computer package, which will generate time series from a pseudo-normal random number generator. Any of nine forecasting models can be specified to forecast values which are compared to the generator's value to calculate error. Although rather naive in its time series generation, the authors' GPFS system does provide a theoretical test environment for forecasting models. The article contains an appendix which mathematically develops the nine moving average and exponential smoothing forecasts available in the GPFS package. This appendix is an excellent summary of the various exponential formulas.

Harrison, P. J., and C. F. Stevens. "A Bayesian Approach to Short-Term Forecasting." Operational Research Quarterly 22 (No. 4; December 1971): 341-355.

This is a new approach in that it incorporates the computation of posterior probabilities into the definition of the forecasting device to be used. The model resembles the growth models of Holt, Brown, and Box-Jenkins (ARIMA models), and it encompasses demand, trend value, slope value, and seasonal factors. It depends upon a generating process in which the above variables are related with observational noise and trend and slope perturbation. The above process is in turn defined by four states: no change, step change, slope change, and transient. The system, despite its apparent complexity, has the advantages of rapidly responding to changes in trend and slope, increasing its sensitivity when changes occur, and defining not only a single-figure forecast but a joint parameter distribution, which is an expression of the level of uncertainty inherent in the estimates of slope and trend changes.

Holt, C. C., F. Modigliani, J. F. Muth, and H. A. Simon. Planning Production, Inventories, and Work Force. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1960.

In Chapter VII, Holt et al. begin their discussion of forecasting by describing the components of time series.
Brief discussion is devoted to the simple moving average, followed by a numerical example of a moving average with trend and seasonal adjustment. This is followed by a description of the irregular seasonal pattern situation and a concluding section on regression analysis. Chapter VIII describes examples of the utilization of the authors' production and inventory control models in response to forecasts under conditions of different sinusoidal periodicities and impulses. Chapter IX develops the computation of costs due to forecasting errors and a description of the analysis of these costs.

Keay, Frederick. Marketing and Sales Forecasting. Oxford, U.K.: Pergamon Press, 1972.

The task of sales forecasting can be approached by a variety of methods, and Keay breaks them down into four categories: consensus techniques, statistical projection, deterministic situations with no random elements, and stochastic situations. Each category is covered extensively enough to provide the reader with a good understanding of the methods. Under consensus techniques, the author groups those methods of a qualitative nature. Under statistical projection, curve fitting, smoothing techniques, and trend analysis are discussed. Included in deterministic situations are the methods which comprise mathematical equations and their interrelationships. Finally, under stochastic situations he groups those models which by definition imply probabilities to be allocated to the situations under analysis. In terms of the smoothing techniques, the book does not go beyond simple and weighted moving averages, nor does it account for exponential smoothing and adaptive forecasting with adjustable α.

Kirby, Robert M. "A Comparison of Short and Medium Range Statistical Forecasting Methods." Management Science 13 (No. 4; December 1966): 202-210.

Kirby used 23 time series of actual monthly data, and 23 artificial time series of equal length, to compare the accuracy of moving average, exponential smoothing with trend and seasonal adjustment, and least squares. Both the moving average and the least squares forecasts for each period were adjusted by the cyclical factor calculated in the exponential smoothing technique. The moving average forecast was also adjusted by the exponential smoothing trend factor. Forecasts were made with varying exponential smoothing constants and different base months for the average calculations, for one month into the future (short range) and six months into the future (medium range). The author concluded that both the exponential smoothing and moving average forecasts were better in the short and medium range than least squares. It was further stated that throughout the medium range, and in the trend time series, exponential smoothing and moving average were equally accurate. In the short range and in cyclical time series, exponential smoothing was found to be more accurate.

Lippitt, Vernon G. Statistical Sales Forecasting. New York: Financial Executives Research Foundation, 1969.

Lippitt attempts to provide comprehensive coverage of the state of the art of forecasting. He achieves this, but frequently sacrifices depth for breadth. After a forecasting introduction, he divides techniques into two basic categories: causal and non-causal (extrapolation) models. Among causal models, he includes the general additive model (in which the impact of each part on the forecast is individually defined and subsequently aggregated), regression, and the simulation models.
His extrapolation techniques encompass moving averages, both arithmetic and weighted (exponential smoothing); decomposition of time series (trend, cycle, seasonal); and curve fitting. He goes beyond these two categories and briefly elaborates on early turning indicators of a macro-economic nature and lead decision series, which give information on the prospective magnitudes of future changes in company sales. The connection between the leads and the company sales is generally developed through regression models, which normally present lagged explanatory variables as linking elements between two temporally separated data sets.

Mabert, Vincent A. "Statistical Versus Sales Force-Executive Opinion Short Range Forecasts: A Time Series Analysis Case Study." Decision Sciences 7 (No. 2; April 1976): 310-318.

Mabert presents the results of his comparison of executive-generated forecasts with those from three statistical techniques: Winters' three-parameter exponential smoothing model, Brown's harmonic model, and the Box-Jenkins methodology. Each technique was used to forecast levels of sales for an industrial product with a relatively stationary demand pattern and some seasonality. Company forecasts were generated by management for general planning purposes and were compared against the three statistical techniques for the five-year period from 1968 to 1972. The criteria for the analysis were: (1) mean absolute percent error; (2) total man-hours required to generate a forecast; (3) the elapsed time in generating a forecast; and (4) the computer time required. As expected, the statistical techniques provided the greatest accuracy, with Box-Jenkins registering the lowest mean absolute percent deviation of 14 percent. Brown's harmonic model had 14.1 percent, Winters' exponential model had 15.1 percent, and the subjective company forecast had 15.9 percent. In man-hours required for forecasting, the company forecast required more than six times the hours needed for Box-Jenkins, the second most time-consuming technique. In addition, the company required 27 days to generate the subjective forecast, while the longest time required with any of the statistical techniques was 2.5 days for Box-Jenkins. Box-Jenkins also required the most computer time for data analysis and forecast generation. Mabert's conclusions to the case study were rather common: statistical techniques used to track sales patterns should be combined with judgmental analysis of causal factors in the generation of forecasts.

Makridakis, Spyros, and Steven C. Wheelwright. "Adaptive Filtering: An Integrated Autoregressive/Moving Average Filter for Time Series Forecasting." Operational Research Quarterly 28 (No. 2; 1977): 425-438.

This paper provides an example of several practical considerations in time series analysis forecasting by adaptive filtering. The basic adaptive filter model is first developed. This model is then compared with the ARMA models of the Box-Jenkins methodology. Advantages and disadvantages of each approach are noted. The major portion of the article details practical guidelines for the development of adaptive filtering models and concludes with a step-by-step application to airline passenger data. At the end of the example, the forecast accuracy achieved with the adaptive filter is compared for several different future time periods.

Makridakis, Spyros, and Steven C. Wheelwright. "Forecasting: Issues and Challenges for Marketing Management." Journal of Marketing 41 (No. 2; October 1977): 24-38.
This article provides an excellent review of current forecasting methodologies and their applications. Techniques are categorized as quantitative (time series, causal or regression) or as qualitative (technological or subjective assessment). In addition to providing a concise overview of frequently used quantitative approaches, specific examples of research employing each approach are provided. The major portion of this article deals with comparisons of methodologies. The authors conclude, however, with a call for further research in specific marketing applications. Finally, the bibliography included is excellent as a reference guide for the reader seeking further information.

Magee, John F., and David M. Boodman. Production Planning and Inventory Control. New York: McGraw-Hill, 1967, pp. 84-115.

In a chapter on decision uncertainty, the authors briefly analyze sales forecasting, focusing at first on the qualitative techniques (collective opinion) and subsequently expounding on the statistical and mathematical methods. In this second category, correlation analysis is touched upon; extrapolation (trend analysis) is briefly mentioned; and the projection techniques (simple moving average and exponential smoothing) are more extensively analyzed. It is noteworthy that Magee and Boodman do not mention adaptive smoothing, although they were among the first to systematically present some form of exponential smoothing in the 1958 edition of the same text.

Mapstone, George E. "Forecasting for Sales and Production." Chemical Engineering 80 (No. 11; 14 May 1973): 126-132.

In his article, Mapstone presents a clear discussion of several methods used in time series forecasting. He begins with a description of the major factors to be considered in forecasting, and includes a discussion of the steps to be taken in selecting the proper curve to provide the best fit, based on the characteristics of the data. Specifically, he deals with moving averages, both arithmetic and exponentially weighted, and common trend curves (i.e., polynomials, exponentials, and modified exponentials). In addition to presenting the basic formulas for the different methods, the author provides a summary of procedures for the use of slope characteristics, the method of least squares, and the procedures used in fitting linear, simple, and modified trends, as well as the Gompertz curve. The article concludes with a discussion of confidence intervals and methods for detecting changes in the trend.

Montgomery, Douglas C. "An Introduction to Short-Term Forecasting." Journal of Industrial Engineering 19 (No. 10; October 1968): 500-504.

Montgomery gives a general description of the formulation of exponential smoothing models for constant, linear trend, and seasonal time series. The model for trend is an additive function, while the seasonal adjustment is multiplicative. Further, non-quantitative discussion is devoted to the importance of selecting a proper smoothing constant value, the necessity of a tracking signal for monitoring control, and the desirability of a self-adapting model. The article concludes with a brief discussion of the applicability of short-term forecasting to inventory and production control and a very simplistic numerical example of forecasting with linear trend and seasonality. In spite of its basic approach, this is an excellent, concise introductory article.

Montgomery, Douglas C., and L. E. Contreras. "A Note on Forecasting With Adaptive Filtering." Operational Research Quarterly 28 (No. 1; 1977): 87-91.
These authors begin with a review of recent articles developing examples of the use of adaptive filtering for forecasting. Specifically, they review the basic elements of the method, pointing out its inherent similarities to autoregressive models. The remainder of the article details a criticism of the Wheelwright and Makridakis article reporting on the use of adaptive filtering to forecast champagne sales (Operational Research Quarterly, Vol. 24). Montgomery and Contreras point out the disadvantages of adaptive filtering in their critique. Finally, the same time series of champagne sales is forecast using exponential smoothing, and the results are compared to those of Wheelwright and Makridakis.

Nerlove, M., and S. Wage. "On the Optimality of Adaptive Forecasting." Management Science 10 (No. 2; January 1964): 207-223.

The article expounds on Theil and Wage's contribution to adaptive forecasting by showing that the optimality characteristic of this type of forecasting is of a much broader spectrum than first devised. Nerlove and Wage demonstrate that adaptive forecasting not only allows for the minimization of the mean square error of the forecast of a non-stationary time series but also can be used to optimize the second difference of the above-mentioned time series, which is of a stationary nature. Going one step further from Theil and Wage's original formulation, the authors show that the minimum square error depends only on the first two weights of the linear prediction scheme. By giving a formulation that allows the reader to calculate these weights based on past data, the authors provide a device capable of eliminating the arbitrariness of defining the values of such weights, as in Theil and Wage.

Newbold, Paul. "The Principles of the Box-Jenkins Approach." Operational Research Quarterly 26 (No. 2; 1975): 397-412.

Newbold begins his description of the Box-Jenkins methodology by elaborating on the advantages and disadvantages of the approach, with comments on ways to overcome many of these disadvantages. The next sections of the article are devoted to the description of the application of the methodology to a non-seasonal time series based solely on past data. An example of this situation follows in forecasting for a consumer durable on a monthly basis. The latter sections are devoted to an application where leading indicators rather than past data are used in forecasting and to a discussion of the author's practical experience in using the Box-Jenkins approach.

Packer, A. H. "Simulation and Adaptive Forecasting as Applied to Inventory Control." Operations Research 15 (No. 4; July-August 1967): 660-679.

Exponential smoothing is used with simulation, the latter providing the methodology for defining the total inventory cost. This cost in turn serves as the criterion for identifying the "best smoothing rate value," after several runs in which the system is simulated with different values of α. The determination of the optimal α in the light of existing data is strongly influenced by the defined lead times and the magnitude of the order quantity agreed upon. The usefulness of the overall results was determined by comparing the data obtained to historical records, and in a second stage the new inventory decision rules were compared to the existing procedures.

Pearce, Colin. Predictive Techniques for Marketing Planners. New York: John Wiley & Sons, 1971, pp. 65-85.

Pearce presents a clear introduction to time series forecasting in his text.
Included is a general description of the time series forecasting approach and numerical examples of both arithmetic moving average and exponential smoothing. The exponential smoothing description consists of a numerical example of single exponential smoothing and is followed by non-quantitative descriptions of trend and curvilinear models and adaptive smoothing. Mathematical development is given in the appendix.

Peterson, Rein. "A Note on the Determination of Optimal Forecasting Strategy." Management Science 16 (No. 4; December 1969).

The definition of an optimal forecasting procedure is theoretically easy to develop but difficult to carry out. The difficulty stems from the fact that the choice among different forecasting alternatives requires well-defined loss functions, which depend on a comparative cost study of forecast errors not found in any practical situation. The other problem concerns the fact that, since all forecasting techniques are operating with the same set of data, the characteristics of the error distribution of each procedure may be dependent and even correlated. This makes the application of any test for the sake of defining individual performance highly questionable. Finally, if a large number of procedures are being compared, it is difficult to know the form of their joint probability distribution, which is normally of a multivariate nature.

Radhakrishnan, S. R., and William G. Sullivan. "A Dynamic Method for Forecasting." Journal of Systems Management 23 (No. 7; July 1972): 11-16.

In their article, Radhakrishnan and Sullivan report the results of an attempt to forecast the demand for x-rays at the Eugene Talmadge Memorial Hospital by triple exponential smoothing. Data collected during each of 20 months was subdivided to create 36 demand patterns according to type of patient and x-ray examination. These patterns constituted independent time series. The accuracy of the exponential smoothing technique was evaluated according to three criteria: (1) standard deviation of the forecast error, (2) average cumulative error, and (3) forecastability error. The alpha level was set at 0.3 after testing values ranging from 0.1 to 0.4. The results of the experiment were judged successful according to the three criteria and illustrated that the forecast followed the actual demand pattern closely.

Raine, Jesse. "Self Adaptive Forecasting Reconsidered." Decision Sciences 2 (No. 2; April 1971): 181-191.

Raine devoted considerable effort to the use and development of SAFT and compared its performance to three other, simpler smoothing techniques. The first was single exponential smoothing with a smoothing constant value of 0.154. The second was single exponential smoothing which set the forecast equal to the most recent month's demand when the automatic tracking signal was activated. The third was again single exponential smoothing with an automatic tracking signal which, when activated, set the smoothing constant equal to 0.50 for three periods and then returned it to 0.154. These three were compared to SAFT for accuracy (determined by the sum of the squared errors) on time series with trend and with impulse. The best technique for both series was the third technique described above. SAFT was found in this article only to be better than the first single exponential technique described. However, none of the differences were found to be statistically significant.

Reinmuth, James E., and Michael D. Geurts.
"A Bayesian Approach to Forecasting the Effects of Atypical Situations." Journal of Marketing Research 9 (August 1972): 292-297. Reinmuth and Geurts present a forecasting model whereby the forecast obtained by any standard forecasting technique for a partic- ular period is multiplied by one plus the expected proportionate change created by an atypical occurrence in that period. The authors feel that probability distributions can be developed for the occurrence of these atypical situations in a period and, thereby, effectively utilized in forecasting. However, they do concede that the data gathering and calculation burden necessary for the determination of this probability distribution would be great. The majority of the article is devoted to a numerical example of illustrating the technique. 292 Richard, Robert S. Practical Technigges of Sales Forecasting. New York: McGraw-Hill, 1966. The book addresses itself to the many facets of practical sales forecasting, focusing both on those situations in which reliable past information exists and on those in which it does not (new product forecasting). It begins with the subjective methods (sales-force composite, jury of executive opinion, panel approach) where judgment is the rule, and goes on to time series extrapolation, emphasizing both the decomposition techniques (trends, cycles, sea- sonals) and the exponential smoothing procedure. From time series on, the author directs his attention to correlation analysis (simple and multiple regression as well as step regression where the model accepts those factors which strongly influence sales and disregards those influences which are not important), and mathematical modeling. Finally, the book elaborates on the combination of forecasts in which subjective inputs are used as corrective measure to adjust variations considered unacceptable in the light of the objective techniques. It must be pointed out that Richard did not attempt to reproduce each technique in its original mathematical complexity but decided on a more qualitative approach. 293 Roberts, Stephen D., and Ruddell Reed, Jr. "The Development of a Self-Adaptive Forecasting Technique." AIIE Transactions 1 (No. 4; December 1969): 314-322. In their article, Roberts and Reed developed an adaptive exponential smoothing forecast called SAFT (Self-Adaptive Forecasting Technique). The technique combines the basic exponential forecasting models of P. R. Winters with the response surface analysis of G. E. P. Box to arrive at an appropriate smoothing constant value. Discussion is given to both the Winters formulas for level, trend, and seasonal time series, and the development of response surface analysis. Formulation of the SAFT models for level, trend, and seasonal time series follow. SAFT was compared to Winters' model for accuracy and rate of response on various computer generated time series. For non- autocorrelated time series both models were found to be of equal accuracy, but for autocorrelated series, especially germane to industrial application, SAFT was found significantly more accurate at the .01 level. SAFT similarly demonstrated a better rate of response for impulse, step, and ramp changes. 294 Schussel, George. "Sales Forecasting With the Aid of a Human Behavior Simulator." Management Science 13 (No. 10; June 1967): 593-611. This paper presents a forecasting scheme aimed at helping a manufacturer predict his sales of photographic materials to dealers. 
The scheme consists of a demand simulator, the parameters of which were first defined by the dealers themselves via interviews. The simulator then takes a macro forecast of retail sales as input and transforms it into orders that are placed against the warehouse of the manufacturer. The innovation in this method lies in the fact that simulation is used as a "transfer" mechanism between a forecast of retail sales on one side (estimates produced by company executives) and a forecast of wholesaler (dealer) orders on the other. It is noteworthy that the combined results of forecasting and simulation proved to produce better forecasts than the simple straight forecasting of dealer orders. The results were based on a relatively small sample of dealers (32) in one specific industry, and despite their apparent usefulness, caution is advised, since in demand forecasting rarely, if ever, are two cases the same.

Smith, David Eugene. "Adaptive Response for Exponential Smoothing: Comparative System Analysis." Operational Research Quarterly 25 (No. 3; September 1974): 421-423.

The definition of the proper smoothing rate (α) for different types of demand over time is a major problem in using exponential smoothing for forecasting. Smith analyzes this problem by comparing how adaptive smoothing models (Brown's fixed and bimodal response, Chow's evolutionary design, Trigg's constant coefficient adjustment, and Smith's adaptive model corrector) perform in light of both random and non-stationary demand. The performance measurements are primarily the variances of the forecast errors, along with the respective values of the smoothing rates. Both random and non-stationary demands are more specifically identified through temporal behavioral models: constant, linear growth, and sinusoidal models. Finally, the paper shows that each adaptive model has its forecasting capabilities impaired or strengthened as a result of the definition of the adjusting criterion for the smoothing rate.

Theil, H., and S. Wage. "Some Observations on Adaptive Forecasting." Management Science 10 (No. 2; January 1964): 198-205.

This article initially mentions the two sources of evidence normally used in adaptive or exponential forecasting, that is, the latest evidence and the value computed one period before. From then on, it attempts to simplify the forecasting procedure through the decomposition of the time series into elements such as trend value, seasonal coefficient and residual. From one period to the next, the trend value absorbs the impact brought upon it by trend changes. Finally, an attempt is made to minimize the mean square error by selecting suitable smoothing rates. The minimization is developed in vector form, along with the definition of a probabilistic model underlying the forecasting technique.

Thompson, Howard E., and William Beranek. "The Efficient Use of an Imperfect Forecast." Management Science 13 (No. 3; November 1966): 233-243.

Thompson and Beranek present a rudimentary discussion of the application of Bayesian statistics and payoff matrices for determining optimal actions under given states of nature. Expected values are computed for each cell of the matrix and then, for a given state of nature, the optimal decision is found. The cost of perfect information is also treated in the common format. Expected values are determined on a subjective basis throughout, as the forecaster is to assume, through a knowledge of the market, that his forecast has a certain probability of being correct.
When the forecaster is uncertain of his probability of having made a correct forecast, he may assume several probabilities and calculate his expected value under each.

Trigg, D. W. "Monitoring a Forecasting System." Operational Research Quarterly 15 (1964): 271-274.

Trigg begins his article with a brief description of the formulation of the Brown tracking signal and an elaboration of the disadvantages inherent in this approach. This is followed by the author's own tracking signal formulation and a description of the calculation for setting control limits around this signal, which are determined by the magnitude of the smoothing constant. An illustration is given of the tracking signal in use as compared to the Brown tracking signal. The Trigg signal performs appreciably better in measuring the degree to which the forecast is out of control. However, no attempt is made in this article to utilize the tracking signal as feedback into an adaptive smoothing technique.

Trigg, D. W., and A. G. Leach. "Exponential Smoothing With an Adaptive Response Rate." Operational Research Quarterly 18 (No. 1; 1967).

Trigg and Leach begin their article by describing the simple exponential smoothing formula, followed by a description of the tracking signal developed by Trigg. The adaptive smoothing technique developed in the article simply sets the smoothing constant for each forecast equal to the absolute value of the tracking signal. This technique is compared to a non-adaptive model for time series with step, trend, sinusoidal, and impulse functions. In each of these series the technique was found considerably more accurate and responsive. The authors further feel that the technique eliminates the dilemma of determining initial values of the smoothing constant, i.e., grossly wrong initial values will create an immediate adjustment in the smoothing constant value.
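The rule proposed in this article is compact enough to sketch directly. Everything numeric below (the smoothing constant used for the error statistics, the starting forecast, and the step series) is an invented illustration rather than data from the paper.

```python
def trigg_leach(observations, phi=0.1, f0=100.0):
    # Trigg and Leach adaptive smoothing: each period's smoothing constant is
    # |tracking signal|, the ratio of the smoothed error to the smoothed
    # absolute error (both smoothed here with constant phi).
    f, e_bar, mad = f0, 0.0, 1e-9     # tiny mad avoids division by zero
    for s in observations:
        err = s - f
        e_bar = phi * err + (1 - phi) * e_bar
        mad = phi * abs(err) + (1 - phi) * mad
        alpha = min(1.0, abs(e_bar) / mad)    # tracking signal lies in [0, 1]
        f += alpha * err
    return f

# A step change in demand drives the signal toward 1, so the forecast adapts
# quickly; alpha then decays as the errors return to noise.
print(round(trigg_leach([100] * 5 + [150] * 10), 1))
```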
Tydeman, J. "A Note on Short-Term Forecasting Using an Irregular Time Interval." Operational Research Quarterly 23 (No. 3; September 1972): 381-383.

Normally, exponential smoothing forecast computations are made with a fixed time interval between successive observations. Tydeman shows in his paper that there are situations in which a fixed time interval does not make sense; thus, he proposes an exponential model that handles an irregular time base.

Wheelwright, Steven C., and Spyros Makridakis. "An Examination of the Use of Adaptive Filtering in Forecasting." Operational Research Quarterly 24 (No. 1; March 1973): 55-65.

The authors attempt to devise a technique which, rather than simply smoothing the noise of past data, identifies a signal pattern which is then used to adjust the forecasting weights over time. The optimum set of weights is reached at the minimum mean square error (the optimization criterion). This minimum is materialized through the computer manipulation of a formula which encompasses the old weight vector, a learning constant (k) which determines how fast the weights are adjusted, the forecast error using the old weights, and the vector of past observations. Thus, by understanding the relationships between k, the number of iterations being used, and the number of weights in test, the forecaster can focus in on a final set of weights which minimizes the mean square error, thereby adjusting the estimates to the actual figures at the best level possible. Finally, Wheelwright and Makridakis show applications of the filtering technique to linear and constant series as well as to random and cyclical ones. They further acknowledge the advantages of the method over moving average and simple exponential smoothing in the short run, and its inability to compete with regression analysis in the long run.

Whybark, D. Clay. Testing an Adaptive Inventory Control Model. Working Paper No. 289. Lafayette, Ind.: Purdue University, October 1970.

Whybark developed an inventory policy game to test his adaptive forecasting technique. The technique is essentially an exponential smoothing formula in which the smoothing constant is given a value near one when the forecast error exceeds prescribed limits. The forecast was used to automatically set safety stock levels and order quantities in the game. The model competed with a group of students who had extensive background in inventory and game theory and a group with little or no such experience. The model not only performed well with respect to the two groups of students, it also kept pace with the Wagner-Whitin model, which is based on perfect prior information. However, the author felt that more work was needed on the setting of the adaptive parameters.

Whybark, D. Clay. "A Comparison of Adaptive Forecasting Techniques." The Logistics and Transportation Review 8 (No. 3): 13-26.

Whybark compared the results of four adaptive forecasting techniques on the basis of total inventory cost, forecast error, a qualitative evaluation of performance, and computer execution time and storage. The techniques compared were those developed by (1) Whybark, (2) Trigg and Leach, (3) Eilon and Elmaleh, and (4) Roberts and Reed. All of the time series tested were over 200 periods in length, with two step shifts in an otherwise constant mean demand. The Trigg-Leach method was found to have the lowest inventory cost and to be sensitive to small fluctuations in a stable period. The Whybark model, on the other hand, reacted more quickly to demand shifts, had the smallest forecast error, and required the least computer execution time and storage.

Winters, P. R. "Forecasting Sales by Exponentially Weighted Moving Averages." Management Science 6 (No. 3; April 1960): 324-342.

This paper presents a complete analysis of exponential smoothing, from the simplest form to the inclusion of seasonals and trend changes in the forecasting technique. It also depicts how the initial values of the smoothing, seasonal, and trend factors can be calculated and fed into the basic formulation. The utility of the proposed model is brought about by comparing the accuracy of the exponential smoothing model with the results obtained through the use of a simple moving average technique and of a seasonally adjusted forecasting device. The exponential smoothing technique out-performs the two others in terms of the standard deviation of forecast errors, and offers the advantages of giving better forecasts than traditional models, requiring less information and storage capacity, and responding more rapidly to fluctuations in the time series (it is highly responsive to current data).
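As a companion to the abstract above, one update step of a Winters-type model is sketched below. It follows the standard textbook formulation (multiplicative seasonality, additive trend) rather than the paper's own notation, and the parameter values are arbitrary.

```python
def winters_update(obs, level, trend, seasonals, t, alpha=0.2, beta=0.1, gamma=0.3):
    # One period of Winters' three-parameter smoothing: multiplicative seasonal
    # factors (one per period of the season, indexed modulo its length) around
    # an additively trending level.
    p = len(seasonals)
    s = seasonals[t % p]
    new_level = alpha * (obs / s) + (1 - alpha) * (level + trend)
    new_trend = beta * (new_level - level) + (1 - beta) * trend
    seasonals[t % p] = gamma * (obs / new_level) + (1 - gamma) * s
    forecast = (new_level + new_trend) * seasonals[(t + 1) % p]
    return new_level, new_trend, forecast

# e.g. quarterly data with hypothetical seasonal factors:
level, trend, f = winters_update(130, level=100, trend=2,
                                 seasonals=[0.8, 1.2, 1.1, 0.9], t=1)
print(round(f, 1))   # forecast for the next quarter
```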
BIBLIOGRAPHY

Adam, Everett E., Jr. "Individual Item Forecasting Model Evaluation." Decision Sciences 4 (No. 4; October 1973): 458-470.

Adelson, R. M. "The Dynamic Behavior of Linear Forecasting and Scheduling Rules." Operational Research Quarterly 17 (No. 4; December 1966): 447-462.

Anderson, O. D. Time Series Analysis and Forecasting: The Box-Jenkins Approach. Reading, Mass.: Butterworth & Co., Ltd., 1974.

Barksdale, Hiram C., and Hugh J. Guffey, Jr. "An Illustration of Cross-Spectral Analysis in Marketing." Journal of Marketing Research 9 (1972): 271-278.

Bates, J. M., and C. W. J. Granger. "The Combination of Forecasts." Operational Research Quarterly 20 (No. 4; December 1969): 451-468.

Batty, M. "Monitoring an Exponential Smoothing Forecasting System." Operational Research Quarterly 20 (No. 3; September 1969): 319-325.

Berry, W. L., and D. C. Whybark. Computer Augmented Cases in Operating and Logistics Management. South-Western Publishing Co., 1972.

Bishop, Jack L. "Experience With a Successful System for Forecasting and Inventory Control." Operations Research 22 (No. 6; November-December 1974): 1224-1231.

Bliemel, Friedhelm. "Theil's Forecasting Accuracy Coefficient: A Clarification." Journal of Marketing Research 10 (No. 4; November 1973): 444-446.

Bloomfield, P. On the Error of Prediction of a Time Series. Department of Statistics, Technical Report II, Series 2. Princeton, N.J.: Princeton University, February 1972.

Bossons, John. "The Effects of Parameter Misspecification and Non-Stationarity on the Applicability of Adaptive Forecasts." Management Science 12 (No. 9; May 1966): 659-669.

Box, G. E. P. "Evolutionary Operations--A Method for Increasing Industrial Productivity." Applied Statistics 6 (No. 2; June 1957): 81-101.

Box, G. E. P. Time Series Forecasting and Control. San Francisco: Holden-Day, 1968.

Box, G. E. P., and G. M. Jenkins. "Some Recent Advances in Forecasting and Control," Part I. Applied Statistics 17 (No. 2; 1968): 91-109.

Box, G. E. P., and G. M. Jenkins. Time Series Analysis--Forecasting and Control. San Francisco: Holden-Day, 1970.

Box, G. E. P., and G. M. Jenkins. Time Series Analysis. 2nd ed. San Francisco: Holden-Day, 1976.

Box, G. E. P., G. M. Jenkins, and D. W. Bacon. "Models for Forecasting Seasonal and Non-Seasonal Time Series." Advanced Seminar on Spectral Analysis of Time Series. Edited by B. Harris. New York: John Wiley & Sons, 1967.

Brenner, J. L., D. A. D'Esopo, and A. G. Fowler. "Difference Equations in Forecasting Formulas." Management Science 15 (No. 3; November 1968): 141-159.

Brillinger, David R. Time Series: Data Analysis and Theory. New York: Holt, Rinehart & Winston, 1975.

Brown, Hugh E. Sales Forecasting With an Adaptive Model. Presented at the 34th National Meeting of the Operations Research Society of America. Kalamazoo, Mich.: The Upjohn Co., November 1968.

Brown, R. G. Decision Rules for Inventory Management. New York: Holt, Rinehart & Winston, 1967.

Brown, R. G. "Forecasting in Physical Distribution: Inventory Management." Transportation and Distribution Management 11 (No. 3; March 1971): 43-46.

Brown, R. G. Smoothing, Forecasting, and Prediction of Discrete Time Series. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963.

Brown, R. G., and Richard F. Meyer. "The Fundamental Theorem of Exponential Smoothing." Operations Research 9 (No. 5; September-October 1961): 673-685.

Burns, James F. Hedging and Forecasting in Inventory Management. Gainesville, Fla.: Florida University, March 1969.

Cantor, Jerry. Pragmatic Forecasting. New York: American Management Association, 1971.

Chambers, John C., Satinder K. Mullick, and Donald D. Smith. An Executive's Guide to Forecasting. New York: John Wiley & Sons, 1974.

Chambers, John C., S. K. Mullick, and Donald D. Smith. "How to Choose the Right Forecasting Technique." Harvard Business Review 49 (No. 4; July-August 1971): 45-74.

Chan, Hung, and Jack Hayya. "Spectral Analysis in Business Forecasting." Decision Sciences 7 (No. 1; January 1976): 137-151.

Chang, Sang Hoon, and David E. Fyffe.
"Estimation of Forecast Errors for Seasonal Style Goods Sales." Management Science 18 (No. 2; October 1971): 89-96. Chatfield, Christopher. "Some Comments on Spectral Analysis in Marketing." Journal of Marketing_Research 11 (1974): 97-101. Chatfield, C., and D. L. Prothero. "Box-Jenkins Seasonal Forecasting: Problems in a Case Study." Journal of the Rnyal Statistical Society 136 (1973): 295-352. Chen, Gordon D. C., and P. R. Winters. "Forecasting Peak Demand for an Electric Utility With a Hybrid Exponential Model." Management Science 12 (No. 12; August 1966): 531-537. Chentnik, Chester G., Jr. "The Use of Forecast Error Measures as Surrogates for an Error Cost Criterion in the Production Smoothing Problem." Decision Sciences 3 (No. 4; April 1972): 54-75. Chisholm, Roger K., and Gilbert R. Whitaker, Jr. Forecasting Methods Homewood, 111.: Richard D. Irwin, Inc., 1971. Chow, W. H. "Adaptive Control of the Exponential Smoothing Constant." Journal of Industrial Engineering 16 (No. 5; September-October 1965). . Chow, Ya-Lun. Applied Business and Economic Statistics. New York: Holt, Rinehart & Winston, 1963. Cohen, Gerald D. "A Note on Exponential Smoothing and Autocorrelated Inputs." gOperations Research 11 (No. 3; May-June 1963): 361-367. Copulski, William. Practical Sales Forecasting. New York: American Management Association, 1970. Couts, 0., D. Grether, and M. Nerlove. "Forecasting Non-Stationary Econgmic Time Series." Management Science 13 (No. 1; September 1966 : 1-21. 306 Dancer, Robert E., and Clifford F. Gray. "An Empirical Evaluation of Constant and Adaptive Computer Forecasting Models for Inventory Control." Decision Sciences 8 (No. 1; January 1977): 228-238. Dobbie, James M. "Forecasting Periodic Trends by Exponential Smoothing." Operations Research 11 (No. 6; November-December 1963): 908-918. Dubman, M. R., and N. R. Goodman. Spectral Analysis of Multiple Time Series. Canoga Park: Rocketdyne,7North American Rockwell, 1969. Eby, Frank H., Jr. Sales Analysis: Principles and Applications. New York: AMR International,’197l. Eilon, S., and J. Elmaleh. "Adaptive Limits in Inventory Control." Management Science 16 (No. 8; April 1970): 533-548. Enrich, Norbert L. Market and Sales Forecasting: A Quantitative Approach. San Francisco: Chandler Publishing, 1969. Farley, John U., and Melvin J. Hinich. "Spectral Analysis." Journal of Advertising Research 9 (No. 4; December 1969): 47-50. Ferratt, Thomas W., and Vincent A. Mabert. ”A Description and Application of the Box-Jenkins Methodology." Decision Sciences 3 (No. 4; October 1972): 83-107. Foster, Edward. "Sales Forecasts and the Inventory Cycle." Econometrica 31 (No. 3; July 1963): 400-401. Freind, Irwin, and Paul Taubman. "A Short-Term Forecasting Model." The Review of Economics and Statistics 46 (No. 3; August 1964): 229-236. Geldback, A. R., and A. W. Wortham. "Forecasting Using Equations With Smoothed Coefficients." The Logistics and Transportation Review 9 (No. 3; July 1973): 253-262. Geoffrion, A. M. "A Summary of Exponential Smoothing." Journal of Industrial Engineering_8 (No. 4; 1963). Golder, E. R., and J. G. Settle. "0n Adaptive Filtering." ,Operational Research Quarterly 27 (No. 4; December 1976): 557-567. Groff, G. K. "Empirical Comparison of Models for Short-Range Fore- casting." Management Science 20 (No. 1; September 1973): 22-31. Gross, Donald, and Jack L. Ray. "A General Purpose Forecast Simulator.“ Management Science 11 (No. 6; April 1965): 119-135. 307 Hadley, G., and T. M. Whitin. "A Family of Dynamic Inventory Models." 
Management Science 13 (No. 11; July 1967): 458-469.
Harrison, P. J. "Exponential Smoothing and Short-Term Sales Forecasting." Management Science 13 (No. 11; July 1967): 821-842.
Harrison, P. J., and O. L. Davies. "The Use of Cumulative Sum Techniques for the Control of Routine Forecasts of Product Demand." Operations Research 12 (No. 2; March-April 1964): 325-333.
Harrison, P. J., and C. F. Stevens. "A Bayesian Approach to Short-Term Forecasting." Operational Research Quarterly 22 (No. 4; December 1971): 341-355.
Hayes, Ralph E. A Comparison of Short-Term Forecasting Models. Monterey, Calif.: Naval Postgraduate School, September 1971.
Helmer, Richard M., and Johnny K. Johansson. "An Exposition of the Box-Jenkins Transfer Function Analysis With an Application to the Advertising-Sales Relationship." Journal of Marketing Research 14 (No. 2; May 1977): 227-239.
Hertz, D. B., and K. H. Schaffer. "A Forecasting Method for Management of Seasonal Style Goods Inventories." Operations Research 8 (No. 1; January-February 1960): 45-52.
Hirsch, Albert A., and Michael C. Lovell. Sales Anticipation and Inventory Behavior. New York: John Wiley & Sons, 1969.
Holt, C. C. Forecasting Seasonals and Trends by Exponentially Weighted Moving Averages. Pittsburgh: Carnegie Institute of Technology, 1957.
Holt, Charles C., Franco Modigliani, John F. Muth, and Herbert A. Simon. Planning Production, Inventories, and Work Force. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1960.
Iglehart, Donald L. "The Dynamic Inventory Problem With Unknown Demand Distribution." Management Science 10 (No. 3; April 1964): 429-440.
Jenkins, G. M. "A Survey of Spectral Analysis." Applied Statistics 14 (1965): 2-32.
Jenkins, G. M., and D. G. Watts. Spectral Analysis and Its Applications. San Francisco: Holden-Day, 1968.
Keay, Frederick. Marketing and Sales Forecasting. Oxford, U.K.: Pergamon Press, 1972.
Kendall, M. G. Time Series. London: Griffin, 1973.
Kirby, R. M. "A Comparison of Short and Medium Range Statistical Forecasting Methods." Management Science 13 (No. 4; December 1966): 202-210.
Koopmans, Lambert H. The Spectral Analysis of Time Series. New York: Academic Press, 1974.
Krampf, R. F. "The Turning Point Problem in Smoothing Models." Ph.D. dissertation, University of Cincinnati, 1972.
Lev, Baruch. "The RAS Method for Two-Dimensional Forecasts." Journal of Marketing Research 10 (No. 2; May 1973): 153-159.
Lewis, C. D. Demand Analysis and Inventory Control. London: Heath & Lexington, 1975.
Lippitt, Vernon G. Statistical Sales Forecasting. New York: Financial Executives Research Foundation, 1969.
Lovell, Michael C. "Buffer Stocks, Sales Expectations, and Stability: A Multi-Sector Analysis of the Inventory Cycle." Econometrica 30 (No. 2; April 1962): 267-296.
Lovell, Michael C. "Manufacturers' Inventories, Sales Expectations, and the Acceleration Principle." Econometrica 29 (No. 3; July 1961): 293-314.
Mabert, Vincent. "Statistical Versus Sales Force-Executive Opinion Short Range Forecasts: A Time Series Analysis Case Study." Decision Sciences 7 (No. 2; April 1976): 310-318.
Magee, John F., and David M. Boodman. Production Planning and Inventory Control. New York: McGraw-Hill, 1967.
Makridakis, Spyros, et al. "An Interactive Forecasting System." American Statistician, November 1974, pp. 153-158.
Makridakis, Spyros, and Steven C. Wheelwright. "Adaptive Filtering: An Integrated Autoregressive/Moving Average Filter for Time Series Forecasting." Operational Research Quarterly 28 (No. 2; 1977): 425-438.
Makridakis, Spyros, and Steven C. Wheelwright. "Forecasting: Issues and Challenges for Marketing Management." Journal of Marketing 41 (No. 2; October 1977): 24-38.
Mapstone, George E. "Forecasting for Sales and Production." Chemical Engineering 80 (No. 11; May 1973): 126-132.
McClain, J. O., and L. J. Thomas. "Response-Variance Tradeoffs in Adaptive Forecasting." Operations Research 21 (No. 2; March-April 1973): 554-568.
McClain, John O. "Dynamics of Exponential Smoothing With Trend and Seasonal Terms." Management Science 20 (No. 9; May 1974): 1300-1304.
McKenzie, Ed. "An Analysis of General Exponential Smoothing." Operations Research 24 (No. 2; January-February 1976): 131.
McLaughlin, Robert L. Time Series Forecasting: A New Computer Technique for Company Sales Forecasting. Chicago: American Marketing Association, 1962.
McLaughlin, Robert L., and James J. Boyle. Short-Term Forecasting: A New Computer Program for Sales and Economic Forecasting. Chicago: American Marketing Association, 1968.
Michael, George C. "A Computer Simulation Model for Forecasting Catalog Sales." Journal of Marketing Research 8 (No. 2; May 1971): 224-229.
Miller, D. W., and Martin K. Starr. Inventory Control: Theory and Practice. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962.
Miller, Jeffrey G., W. L. Berry, and Chung-Yi F. Lai. "A Comparison of Alternative Forecasting Strategies for Multistage Production-Inventory Systems." Decision Sciences 7 (No. 2; April 1976): 310-318.
Mincer, Jacob. Economic Forecasts and Expectations. New York: Columbia University Press, 1969.
Montgomery, D. C. "An Introduction to Short-Term Forecasting." Journal of Industrial Engineering 19 (No. 10; October 1968): 500-504.
Montgomery, D. C., and L. E. Contreras. "A Note on Forecasting With Adaptive Filtering." Operational Research Quarterly 28 (No. 1; 1977): 87-91.
Montgomery, D. C., and L. A. Johnson. Forecasting and Time Series Analysis. New York: McGraw-Hill, 1976.
Murdick, Robert G., and Arthur E. Schaefer. Sales Forecasting for Lower Costs and Higher Profits. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1969.
Muth, J. F. "Optimal Properties of Exponentially Weighted Forecasts of Time Series With Permanent and Transitory Components." Journal of the American Statistical Association 55 (June 1960): 298-306.
Nelson, Charles R. Applied Time Series Analysis for Managerial Forecasting. San Francisco: Holden-Day, Inc., 1973.
Nerlove, Marc. "Spectral Analysis of Seasonal Adjustment Procedures." Econometrica 32 (No. 3; July 1964): 241-286.
Nerlove, Marc, and S. Wage. "On the Optimality of Adaptive Forecasting." Management Science 10 (No. 2; January 1964): 207-223.
Newbold, Paul. "The Principles of the Box-Jenkins Approach." Operational Research Quarterly 26 (No. 2; June 1975): 397-412.
Newbold, P., and C. W. J. Granger. "Experience With Forecasting Univariate Time Series and the Combination of Forecasts." Journal of the Royal Statistical Society, Series A, 137 (1974).
Orneck, Ozden. An Analysis of Mean Absolute Deviation Variability. Monterey, Calif.: Naval Postgraduate School, 1969.
Orr, Lloyd D. "Expected Sales, Actual Sales, Inventory Investment Realization." The Journal of Political Economy 74 (February 1966): 46-54.
Pack, David J. "Revealing Time Series Interrelationships." Decision Sciences 8 (No. 2; April 1977): 377-402.
Packer, A. H. "Simulation and Adaptive Forecasting as Applied to Inventory Control." Operations Research 15 (No. 4; July-August 1967): 660-679.
Parkan, John.
"A New Approach to Sales Forecasting and Production Scheduling." Journal of Marketing 25 (No. 3; January 1961): 14-21. Parsons, Leonard J., and Walter A. Henry. "Testing Equivalence of Observed and Generated Time Series Data by Spectral Methods." Journal of Marketing_Research 9 (No. 4; November 1972): 391-395. 311 Parzen, Emanuel. Some Recent Advances in Time Series Analysis. Stanford, Calif.: Stanford University Press, July 1971. Pearce, Colin. Predictive Techniques for Marketing Planners. New York: J6hn Wiley 8 Sons, 1971. Pegels, C. C. "Exponential Forecasting: Some New Variations." Management Science, 15 (1969). Peterson, Rein. "A Note on the Determination of Optimal Forecasting Strategy." Management Science 16 (No. 4; December 1969): B165-8169. Phadke, Hadhav S. "Multiple Time Series Modeling and System Identi- fication With Applications.” Ph.D. dissertation, University of Wisconsin-Madison, 1973. Radhakrishnan, S. R., and William G. Sullivan. "A Dynamic Method for Foregasting." Journal of Systems Management 23 (No. 7; July 1972 : 11-16. Raine, Jesse E. "Self-Adaptive Forecasting Reconsidered." Decision Sciences 2 (No. 2; April 1971): 181-191. Rao, Ambar G., and Arthur Shapiro. "Adaptive Smoothing Using Evolutionary Spectra." Management Science 17 (No. 3; November 1970): 208-218. Reid, 0. J. "Forecasting in Action: A Comparison of Forecasting Techniques in Economic Time Series." Proceedings of the Joint Opnference of the Operations Research Society, Long Range Planning and Forecasting, 1971. Reinmuth, James E., and Michael D. Guerts. "Using Spectral Analysis for Forecast Model Selection." Decision Sciences 8 (No. 1; January 1977): 134-150. Richard, Robert S. Practical Techniques of Sales Forecasting, New York: McGraw-Hill, 1966. Roberts, Stephen 0., and Ruddell Reed, Jr. "The Development of a Self-Adaptive Forecasting Technique." AIIE Transactions 1 (No. 4; December 1969): 314-322. Schneewers, G. A. “Smoothing Production by Inventory--An Application of the Wiener Filtering Theory." Management Science 17 (No. 7; March 1971): 472-499. 312 Schussel, George. "Sales Forecasting With the Aid of a Human Behavior Simulator." Management Science 13 (No. 10; June 1967):593-611. Smith, David Eugene. "Adaptive Response for Exponential Smoothing Comparative System Analysis." Operation Research Quarterly 25 (No. 3; September 1974): 421-433. Steiner, Ekero. "Forecasting With Adaptive Filtering: A Critical Re-examination." Operational Research Quarterly 27 (No. 3; 1976): 705-716. Theil, H., and S. Wage. “Some Observations on Adaptive Forecasting." Management Science 10 (No. 2; January 1964): 198-206. Thompson, Howard E. "Sales Forecasting Errors and Inventory Fluctua- tions: Random Errors and Random Sales." Management Science 12 (No. 5; January 1966): 448-458. Thompson, Howard E., and William Beranek. "The Efficient Use of an Imperfect Forecast." Management Science 13 (No. 3; November 1966): 233-243. Thompson, Howard E., and Leroy J. Krajewski. "A Behavioral Test of Adaptive Forecasting." Decision Sciences 3 (No. 4; October 1972): 108-119. Trigg, D. W. "Monitoring a Forecasting System." Operational Research Quarterly 15 (No. 2; June 1964): 271-274. Trigg. D. W., and A. G. Leach. "Exponential Smoothing With an Adaptive Response Rate." Operational Research Quarterly 18 (No. 1; March 1967): 53-59. Tydeman, J. "A Note on Short-Term Forecasting Using an Irregular Time Interval." Operational Research Quarterly 23 (No. 3; September 1972): 381-383. Wheelwright, Steven C., and Spyros Makridakis. 
"An Examination of the Use of Adaptive Filtering in Forecasting." Operational Research Quarterly 24 (No. 1; March 1973): 55-65. Wheelwright, Steven C., and Spyros Makridakis. Forecasting Methods for Management. New York: John Wiley & Sons, 1973. Wheelwright, Steven C., and Spyros Makridakis. ForecastingyMethods for Management. 2nd ed. New York: John Wiley & Sons, 1977. Wheelwright, Steven C., and Spyros Makridakis. Interactive Forecasting San Francisco: Holden-Day, 1977. 313 Whybark, 0. Clay. "A Comparison of Adaptive Forecasting Techniques." The Logistics and Transportation Review 8 (No. 3; July 1973): 3-26. Whybark, 0. Clay. Testin an Adaptive Inventory Control Model. Working Paper No. 289. Lafayette, Ind.: Purdue University, October 1970. Winters, P. R. "Forecasting Sales by Exponentially Weighted Moving Averages." Management Science 6 (No. 3; April 1960): 324-342. Wood, Douglas, and Robert Fildes. Forecasting for Business: Methods and Applications. New York: Longman, 1976. Woods, 0. N. "Improving Estimates That Involve Uncertainty." Harvard Business Review 44 (No. 4; July-August 1966): 91-98.