A SIMULATED SALES FORECASTING MODEL: A BUILD-UP APPROACH

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
FRED WILLIAM MORGAN, JR.
1972

This is to certify that the thesis entitled "A Simulated Sales Forecasting Model: A Build-Up Approach," presented by Fred William Morgan, Jr., has been accepted towards fulfillment of the requirements for the Ph.D. degree in Marketing - Business. Major professor. Date: July 21, 1972.

ABSTRACT

A SIMULATED SALES FORECASTING MODEL: A BUILD-UP APPROACH

By Fred William Morgan, Jr.

Business firms have long been frustrated in their attempts to evaluate their forecasting capabilities prior to anticipated sales becoming actual sales. This dissertation is a presentation of a way to deal with this measurement problem. This is accomplished by devising both a forecasting archetype and an approach for tailoring the forecasting model to meet specific needs.

Three traditional bases for categorizing forecasting models are:

1. The length of the forecasting period
2. The level of the forecast
3. The technique utilized.

Forecasting periods shorter than one year can be arbitrarily defined as short-term, while periods lengthier than one year are long-term in nature. The level of the forecast could be the economy, the industry, or the firm. Within the firm are product, product group, region, and product-region levels. Techniques can be described as either mathematical or nonmathematical, with managerial judgment playing a vital role in either case.

The objective of this research is to build and to implement a model with flexibility along the three dimensions of prediction interval, level of detail, and technique. Each of these dimensions was studied thoroughly. The prediction interval and level of detail were treated as aspects of the planning horizon for forecasting. Since forecasts are critical inputs to the planning process, the planning horizon influences both of these dimensions. Long-term planning requires aggregate forecasts for lengthier time periods. Short-term forecasts are more detailed and cover smaller time intervals. The emphasis of this study was arbitrarily placed on a short-term (one year) time span.

To aid in the selection of the appropriate forecasting technique, a set of guidelines was developed. This set includes the following: (1) cost, (2) level of detail, (3) accuracy, (4) turning points, (5) market factors, (6) input requirements, (7) planning horizon, (8) timing, (9) rigor, and (10) clarity. Prospective techniques can be compared using these criteria.

Several techniques were explored, including (1) factor listing, (2) jury of executive opinion, (3) sales force composite, (4) users' expectations, (5) moving average, (6) exponential smoothing, (7) time series analysis, and (8) regression and correlation analysis.
A comparison of these techniques with respect to the set of guidelines revealed that exponential smoothing was the most appropriate short-term forecasting method.

The need for a flexible forecasting model was discovered as a result of a research project at the Graduate School of Business at Michigan State University to develop a viable long-range planning model for physical distribution systems. The model, referred to as the Long-Range Environmental Planning Simulator (LREPS), includes (1) the basic components of the physical distribution system, (2) a strategic planning horizon, and (3) the sequential decision problem. The model is modular in nature and, since its initial uses, has been extended to cover broader classes of manufacturing firm and public sector planning.

LREPS provides the framework for the forecasting model and a way to validate the forecasting model's capability under controlled conditions. Several hypothesized actual sales patterns, each the result of different marketing plans and environmental conditions, can be simulated with LREPS. Based on physical distribution costs and forecasting accuracy, management can determine the most appropriate forecasting technique, prediction interval, and level of detail needed to anticipate these sales patterns. Assuming given external conditions, management can adopt a market plan and know which forecasting system is most useful.

The research objective, the construction and implementation of a flexible short-term forecasting model for use in conjunction with LREPS, has been achieved. An industrial sponsor supplied sample data for the development of a specific model.

Five values for the exponential smoothing constant (0.01, 0.05, 0.10, 0.30, 0.50) and for the prediction interval (1 wk., 2 wks., 1 mo., 2 mos., 3 mos.) were examined. Each of these two variables was associated with variable physical distribution costs through regression analysis. The t statistic revealed several statistically significant correlations. Alternative levels of detail were compared using the F statistic. Based on statistical evidence, the following recommendations were made:

1. Smoothing constant values in the 0.01 to 0.10 range are appropriate.
2. Prediction intervals from one to two weeks are appropriate.
3. Product forecasts are as accurate as product-region forecasts.

The relationship between forecasting accuracy (an index composed of the relative forecasting variance and Theil's inequality coefficient) and variable physical distribution cost was measured using Spearman's rank correlation coefficient. A statistically significant direct relationship was observed.

LREPS provides several measures of system service achievement, one of which is the percent of case units backordered. Percent backorders was regressed against variable physical distribution cost. The relationship was an inverse one and was significant at the .01 level, based on the t statistic. Since service and percent backorders are inversely related, service and cost proved to be directly related.

The computer experimentation provides a general way to adapt this build-up forecasting model for use by any firm selling products in different market segments. The forecasting technique, the prediction interval, and the level of detail should be manipulated to optimize the firm's objective.
The LREPS model facilitates this manipu- lation by allowing the firm to input alternative simulated actual sales for testing different forecasting models. A SIMULATED SALES FORECASTING MODEL: A BUILD-UP APPROACH BY Fred William Morgan, Jr. A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY Department of Marketing and Transportation 1972 © Copyright by FRED WILLIAM MORGAN, 1972 JR. AC KNOWLEDGMENT S Several individuals and groups deserve recognition for their contributions, both direct and indirect, to this research. The industrial sponsor Of LREPS, Johnson and Johnson Domestic Operating Company, is gratefully acknowl- edged. The research team, headed by Dr. Donald J. Bowersox, faculty advisor, and composed Of several doctoral students, must be thanked. Team members were Dr. 0. Keith Helferich, Dr. Edward J. Marien, Dr. V. K. Prasad, Dr. Michael Lawrence, Dr. Peter Gilmour, and Richard Rogers. The dissertation committee consisted Of Dr. Donald J. Bowersox, Dr. Richard J. Lewis, and Dr. Donald A. Taylor, all Professors Of Marketing and Tran3portation at Michigan State University, and Dr. 0. Keith Helferich, Systems Research Incorporated, Lansing, Michigan. Dr. Bowersox, committee chairman and academic advisor, has guided me since the outset Of my doctoral pro- gram. He gave me the Opportunity to participate in the LREPS project and kept me focused on the research Objectives Of this dissertation. Dr. Lewis' knowledge Of forecasting techniques and Statistical analysis was Of great value in the development 0f the technical aspects Of this research. His constant iii encouragement enabled me to persevere to the end. Dr. Taylor, as department chairman, has been a steadying influence throughout my academic career. This is in addition tO his valuable comments and suggestions regarding this work. Dr. Helferich spent many hours explaining the in- tricacies Of the LREPS model and infOrming me Of the latest model SOphistications. His suggestions led to many improve- ments in the forecasting model. Additional support in the form Of computer time and guidance was provided by Systems Research Incorporated. Without this assistance this study would have been extended by several months and hundreds Of dollars. TO Gerald Brown, who provided the computer program- ming expertise for this research, I am indebted. His good humor as I continually changed by programming requirements should not be unrecognized. The professional typing assistance provided by Mrs. JO McKenzie is greatly appreciated. She remained patient despite my missed deadlines. Finally, I wish to thank my wife, Karen, and son, Todd, for their patience and support throughout the duration Of this research and my doctoral program. iv TABLE OF CONTENTS ACMOM‘EDGMENT S O O O O O O O O O O O O O O O 0 LIST OF TAfiES O O O O O O O O O O O O O O O 0 LIST OF F IGURES O O O O O O O O O O O O O O O 0 Chapter I. II. III. IMRODUCTION . O O O O C O O O O O O 0 Statement Of Purpose. . . Situation Analysis. . . . Research Problem. . . . . Organization Of Thesis. . OVERVIEW OF SALES FORECASTING . . . . Introduction. . . . . . . . . . . Planning HOrizon for Forecasting. Uses for Sales Forecasts. . . . . Criteria for Technique Selection. Summary . . . . . . . . . . . . . SALES FORECASTING TECHNIQUES. . . . . Introduction. . . . . . . . . . . . Presentation Of Techniques. . . . . Comparison Of Sales Forecasting Techniques . . . . . . . . . . . . Selection of Forecasting Technique. Summary . 
. . . . . . . . . . . . . FOMCAS TING ACCUMCY . O O O O O O O 0 Introduction. . . . . . . . . . . . Statistical Accuracy Evaluation . . Nonmathematical Accuracy Evaluation Accuracy Evaluation by Objectives . Summary . . . . . . . . . . . . . . Page iii vii ix 10 14 17 17 19 31 36 46 50 50 50 85 91 98 98 100 107 111 112 Chapter V. VI. VII. VIII. RESEARCHIMETHODOLOGY. . . . . . . . . . Introduction. . . . . . . . . . . . . The General Model . . . . . . Researchable Questions and Hypotheses Research Sequence . . . . . . . . . . DEVELOPMENT OF THE DETAILED FORECASTING MODEL. . . . . . . . . . . Introduction. . . . . . . . . . . . Smoothing Constant and Prediction Interval Experimentation . . . . Level of Detail Experimentation . Cost-Accuracy Considerations. . . Summary . . . . . . . . . . . . . PHYSICAL DISTRIBUTION COST-SERVICE TRADEOFF Introduction. . . . . . . Traditional Propositions. Experimental Results. . . .Summary . . . . . . . . . SIMULATED SALES FORECASTING: FINDINGS AND IMPLICATIONS . . . . . . . . . . . Introduction. . . . . . . . . . . . . Summary Of Experimental Results . . . A General Approach to Short- Run Forecasting. . . . . . . . . . . Extensions to Long-Range Forecasting. Implications for Future Research. . . BIBL IOGRAPHY O O O O O O O O O O O O O O O O 0 vi Page 115 115 116 117 122 125 125 126 141 150 154 156 156 157 159 166 168 168 169 174 183 187 199 LIST OF TABLES Table Page 2.1 Shortest Time Breakdown Used for Sales Forecasts . . . . . . . . . . . . . . 21 2.2 Frequency Of Sales Forecast Preparation . . . 22 2.3 Frequency Of Sales Forecast Revision. . . . . 23 2.4 Longest Period Ahead for Which Forecasts Are Regularly Prepared. . . . . . . . . . . 24 3.1 Comparison Of Sales Forecasting Techniques. . 86 6.1 Combinations of Smoothing Constant and Prediction Interval Values. . . . . . . . . 129 6.2 Physical Distribution Cost as a Function of Smoothing Constant . . . . . . . . . . . 130 6.3 Physical Distribution Cost as a Function of Prediction Interval. . . . . . . . . . . 131 6.4 Regression of Physical Distribution Costs (Y) on Smoothing Constant Values (X) with Prediction Interval Fixed. . . . . . . . . . . . . . . 134 6.5 Regression of Physical Distribution Costs (Y) on Prediction Interval Values (X) with Smoothing Constant Fixed. . . . . . . . . . . . . . . 137 6.6 The "Best" Build-Up Function. . . . . . . . . 142 6.7 Summary Of Product-DU F Tests . . . . . . . . 145 6.8 Summary of DU F Tests . . . . . . . . . . . . 146 6.9 Summary Of Build-Up Breakdown Comparison. . . 149 6.10 Indexes of Forecasting Accuracy . . . . . . . 152 vii Table 7.1 Physical Distribution Cost-Service values 0 O O O O O O O O O O O O O O O O O O 164 Cost-Service Regression . . . . . . . . . . . 165 Comparison Of Strategic and Tactical Planning . . . . . . . . . . . . . 184 viii LIST OF F IGURES Figure Page 1.1 Stages Of the Physical Distribution Network 0 O O O O O O O O O O I O O O O O O 5 1.2 Physical Distribution Systems Concept . . . . 8 ix CHAPTER I INTRODUCTION Statement Of Purpose The determination Of sales volumes for future time is mandatory for assessing a company's prosPects. The sales forecast is the core Of management's planning effort. However, even firms with the most SOphisticated forecasting systems cannot evaluate their forecasting capabilities un- til anticipated sales become actual sales. The overall purpose Of this research is tO develOp a way to deal with this measurement problem. 
This is accomplished by devising both a forecasting archetype and an approach for tailoring the forecasting model to meet specific needs. Three traditional bases for categorizing forecast- ing models are: l. The length Of the forecasting period 2. The level Of the forecast 3. The technique utilized. Forecasting periods shorter than one year can be arbitrarily defined as short-term, while periods lengthier than one 1 year are long—term in nature. The level Of the forecast could be the economy, the industry, or the firm. Within the firm are product, product group, region, and product- region levels. Techniques can be described as either mathematical or nonmathematical with managerial judgment playing a vital role in either case.) TO set the stage for the prOpositions which follow, a definition Of a sales forecast is needed. The following definition that the American Marketing Association pre- scribes is used:1 Sales Forecast--An estimate Of sales in dollars or physical units for a specified future period under a prOpOsed marketing plan or program and under an assumed set Of economic and other forces outside the unit for which the forecast is made. The forecast may be for a specified item Of mer- chandise or for an entire line. Comment--Two sets Of factors are involved in making a Sales Forecast: (1) those forces outside the control Of the firm for which the forecast is made that are likely to influence its sales, and (2) changes in the marketing methods or prac- tices Of the firm that are likely tO affect its sales. In the course Of planning future activities, the management of a given firm may make several forecasts, each consisting Of an estimate Of probable sales if a given marketing plan is adOpted or a given set Of outside forces prevails. The estimated effects that several marketing plans may have on Sales and Profits may be compared in the process Of arriving at that marketing program which will, in the Opinion Of the Officials Of the company, be best designed tO promote its welfare. The following comments about the usefulness Of 3 forecasting provide an alternative viewpoint:2 Once established, the sales forecase becomes the basis for marketing and other plans in the company, with the volume Of sales forecast, in a sense, as a built-in goal. As a chemical company executive puts it, "When sales are forecast at a certain level, the entire Operation—-production, marketing support, sales manpower, etc.--is geared to that level Of activity." This leads some tO argue that the forecast is to a considerable ex— tent self-fulfilling, sO that any later comparison Of actual sales with the forecast value may be a better gauge Of marketing accomplishment than Of forecasting accuracy. The spOkesman for a large Office equipment company feels strongly about this point. "I do not consider what we do 'forecasting,'" he asserts. "We are setting targets, self-fulfilling prophecies. One only forecasts events over which he has no control. This distinction needs far more emphasis than it is typically given." The American Marketing Association's definition provides considerable direction for this research. First, several forecasts, each resulting from a different set Of environmental conditions and marketing plans, should be prepared. Next, the forecasting period should be specified because different length periods are possible. Finally, the forecast could be made for an individual product or for the entire product line. 
The executives' comments imply that a gauge for com- paring forecasts with actual sales helps to evaluate fore- casting accuracy and to measure marketing effectiveness. 4 Situation Analysis Computer simulation can incorporate the desired forecasting model features into a single comprehensive model. Simulation permits a more realistic and complex model for solving business problems than those possible through the application Of analytical techniques. ' While a generalized simulation Of the firm is not yet available, the physical distribution system has been modeled successfully. The integrated physical distribution network has been conceptualized as consisting Of fixed facilities, transportation capability, inventory alloca- tions, communication, and unitization. An overview Of the typical physical distribution system appears in Figure 1.1. There are three stages Of activities: (1) the Manufacturing Control Center (MCC) stage at which products are produced and inventoried at the Replenishment Center (RC), (2) the Distribution Center (DC) stage at which products are located adjacent tO the market— place, and (3) the customer demand stage, identified in Figure 1.1 as the Demand Unit (DU) stage. Demand units consist Of the geographic customer groupings because the inclusion Of individual customers as DU's is prohibitively costly and time-consuming. For this application DU's are defined as ZIP Sectional Centers, although counties, FIGURE 1.11 STAGES OF THE PHYSICAL DISTRIBUTION NETWORK STAGE 1: Hanufacturin Control Centers and Replenish- ment Centers STAGE 2: Distri- bution Centers STAGE 3: Demand Units PD REGION PD REGION ——INFORMATION FLOW PRODUCT FLOW REGION...THE REGION IS DEFINED BY THE ASSIGNMENT OF RDCS AND DUS T0 AIPDC. CC......EACH MANUFACTURING CENTER PRODUCES A PARTIAL LINE. RC.......REPLENISHMENT CENTERS STOCK ONLY PRODUCTS MANUFACTURED AT COINCIDENT MCC. RDC. . . . . .REDDTE DISTRIBUTION CENTER, FULL OR PARTIAL LINE. PDC......PRIMARY DISTRIBUTION CENTER, EACH PDC IS FULL LINE AND SUPPLIES ALL PRODUCTS TO DUS ASSIGNED TO THE PDC REGION; PRODUCT CATE- GORIES NOT STOCKED AT THE PARTIAL LINE RDCS IN THE REGION ARE ALSO SHIPPED BY THE PDC. DU.......THE DEMAND UNIT CONSISTS OF ZIP SECTIONAL CENTER(S). 1 D. J. Bowersox, et al., Dynamic Simulation of Physical Distribu— tion Systems, Monograph (East Lansing, Michigan: Division Of Research, Michigan State University, Forthcoming). 6 Standard MetrOpolitan Statistical Areas, Economic Trading Areas, and REA modified grid blocks were also given con- sideration.6 The distribution center stage includes four levels: (1) Primary Distribution Centers (PDC), which handle a full line Of products and have the potential to serve all the DU's in a defined region Of the total market area; (2) Re- mote Distribution Centers—Full Line (RDC-F), which handle all products; (3) Remote Distribution Centers—Partial Line (RDC-P), which handle only a portion Of the product line; and (4) Consolidated Shipping Points (CSP), RDC-P's Which handle no products, but function as points at which the demand Of several DU's is agglomerated and served by a PDC. PDC's are capable of serving the same DU's served by RDC-P's; however, PDC's cannot serve the DU's served by RDC-F's. A computerized model, encompassing all Of the aforementioned components and activities, has been developed.7'8 This model is entitled Long-Range Environ- mental Planning Simulator (LREPS). 
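To make the staged structure described above concrete, the following is a minimal sketch of how such a network might be represented in code. It is not part of LREPS itself; the center names, ZIP sectional centers, and assignments are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DistributionCenter:
    name: str
    kind: str                                           # "PDC", "RDC-F", "RDC-P", or "CSP"
    demand_units: list = field(default_factory=list)    # ZIP sectional centers served

@dataclass
class Region:
    pdc: DistributionCenter                             # each region is defined by one full-line PDC
    remote_centers: list = field(default_factory=list)

# Hypothetical region: one primary center plus a partial-line remote center
pdc = DistributionCenter("Primary DC 1", "PDC", demand_units=["486", "488"])
rdc_p = DistributionCenter("Remote DC 1", "RDC-P", demand_units=["434"])
region = Region(pdc=pdc, remote_centers=[rdc_p])

# Count the demand units assigned within this hypothetical region
print(len(region.pdc.demand_units) + len(region.remote_centers[0].demand_units))
```

A representation of this kind simply records which demand units are assigned to which centers; the simulation logic discussed next operates on assignments of this sort.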
The LREPS model is capable of simulating changes in the firm's physical distribution system, such as the addition of distribution centers or the rearrangement of communication linkages.

The conceptual model can be most easily understood by examining Figure 1.2, which illustrates the major model components. The Demand and Environment Subsystem (D&E) focuses on testing for changes in the customer and product mix and in order characteristics. The D&E also provides the sales level and the basis for allocating sales among DU's. The Operations Subsystem (OPS) processes orders through the major physical distribution activities. The Measurement Subsystem (MEAS) develops the criteria for evaluating alternative distribution system configurations. The Monitor and Control Subsystem (M&C) is the model supervisor and the controller section of LREPS in which feedback from past activities can dynamically affect current policy decisions. Exogenous data are inputted through the Supporting Data Subsystem, which audits, reduces, and formats information into usable terms. Finally, model output is converted into managerial reports by the Report Generator Subsystem.

Figure 1.2 reveals the importance of the sales pattern as a critical input to LREPS. One of the unique features of LREPS is the treatment of sales volumes and patterns. Several activities carried out within the Supporting Data Subsystem relate to the preparation of sales data. The first step is a detailed analysis of sales

[FIGURE 1.2: PHYSICAL DISTRIBUTION SYSTEMS CONCEPT]

Companies that have been expanding into new markets have broad, and oftentimes unrelated, product lines. Unlike products are often affected by different factors, so the forecasting system must be more complex to deal with this divergence. Some products may be relatively easy to forecast, while others require a detailed technical analysis. The more diverse the company's product offerings, the more complex and burdensome the forecasting mechanism becomes.

The length, as well as the type, of the marketing channel influences the choice of the forecasting scheme. The company selling directly to the final consumer has only its own and its competitors' marketing strategies to assess. As the channels become lengthier, the marketing strategies of the intermediaries have a greater impact on the forecast. Also, the same product may be sold through different types of channels. For example, appliances may be sold under a firm's private label as well as through national retailers. Although the physical products are identical, the assumption that the same factors affect sales or that the same forecasting techniques are applicable for both products may be invalid.

An obvious problem in estimating the sales of a new product is the absence of historical data. Today, however, many so-called new items are merely variations on old themes or designs. Forecasts for these older or similar products can be modified to reflect the supposed advantages of the new product.
A truly new development, such as a technological innovation, is a project for the marketing research department.21 If a firm sells certain products nationally and others regionally, then a regional approach to developing estimates seems reasonable. A product's sales may be affected by different factors in different parts of the country, again suggesting localized forecasting. Input Rec ware. D requirem can be f or the 5 t0 reCOg is neede ing tech maniPula interpre. f0recast. deviceS : the tem. p r°°esse tl’pes is gory may 42 Input Requirements Inputs are of three types: data, human, and hard- ware. Data input refers to the "numbers," or informational requirements, of the forecasting model. The human category can be further broken down based on the task to be performed or the skills brought to bear on the problem. The ability to recognize and to choose among data source alternatives 3 is needed, and mastery of the intricacies of the forecast— L ing techniques is also important. Someone must perform the manipulative task of formatting the input data. Finally, interpretive skills are basic to prOper utilization of the forecasted results. The last category, hardware, refers to devices such as computers which replace humans to perform the tedious, repetitive, and time-consuming computational processes. It is difficult to say which of the three input types is the most important. A weakness in any one cate- gory may result in an unreliable forecast. Data are avail- able from many locations. A firm with a good record-keeping system can rely on this internal source for much historical data. Statistical sampling can reduce considerably the data bank required to Operate the forecasting mechanism. At the same time, data can be collected by a direct research program or by dealing with research agencies or public 1 sources 0 analysis I ’ into the even be 1 casting Broad 1e availab teChnic lOWer ] POrate “Seful inCree 43 sources of information, such as the government. A joint analysis of intra- and extrafirm data could provide insight into the future of sales patterns. Data associations might even be recognized which lead to correlation analysis. The human element is an integral part of the fore- casting process. A wide spectrum of know-how is required. Broad level guidance for choosing the model based on data ‘3“ “*r-c-cr‘fi availability is a must. This suggests a combination of technical competence and managerial judgment. At a slightly lower level, at least in terms of decision-making, is the capacity to extract the needed data from the massive cor- porate and public banks of information and to format it usefully. This is especially important in view of the increased reliance on electronic data processing equipment. Ultimately, the forecast must be used; a forecast is not a goal in itself. The many uses for sales forecasts already mentioned in this chapter are evidence of this. .Management at several levels will have access to the fore- casted results. They should be aware of the weaknesses in the other factors, the data and the technique used. They should be told the degree of confidence which can be placed in the forecast. 44 Planning Horizon The planning horizon has been detailed very thoroughly previously. Long-term or planning forecasts dictate gross forecasting approaches which sum but don't report explicitly all geographic and product estimates. Conversely, short-term or operating forecasts suggest this detailed reporting. 
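The build-up logic behind this distinction can be shown with a minimal sketch: detailed product-region estimates are prepared first and then summed to whatever level of reporting the planning horizon requires. The products, regions, and figures below are hypothetical.

```python
# Hypothetical product-region forecasts (case units) for one period
forecasts = {
    ("Product A", "East"): 1200, ("Product A", "West"): 900,
    ("Product B", "East"): 700,  ("Product B", "West"): 450,
}

# Build up: product totals for detailed operating use, then a company total for planning use
product_totals = {}
for (product, region), units in forecasts.items():
    product_totals[product] = product_totals.get(product, 0) + units
company_total = sum(product_totals.values())

print(product_totals)   # {'Product A': 2100, 'Product B': 1150}
print(company_total)    # 3250
```

An operating forecast would report the detailed figures; a planning forecast might report only the totals.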
The specific forecast interval, such as a week or a month, affects the selection of the technique, as does the frequency of preparation of forecasts. 0ft-repeated, short-term forecasts necessitate a model which is inexpen- sive to use. This implies a relatively small data base and an algorithm which iterates quickly. Forecasts for a year or more can be more sophisticated in terms of formulae and data needs if they aren't generated frequently. Timing No forecast, regardless of its precision or detail, is useful unless it is available on a timely basis-~that is, available for use when needed. The speed with which a forecast can be generated, given the input, should be as fast as is economically feasible. Computers have resolved this problem considerably; however, less SOphisticated or smaller firms may rely on mechanical or manual calculation. For the: such as timing he in; series. cast t} Ri or Casting E’valua. it is ; ShOuld managex unbias. by Cap 45 For these companies speed is very much a consideration. If new input data are necessary for each forecast, such as with correlation and regression analysis, the timing of the availability of this information is critical. The input series should precede or "lead" the forecasted series. This enables the firm to use current data to fore- cast the future. 311.919.; Rigor refers to both the objectivity of the fore- casting model and its technical assumptions. Subjective evaluation of the forecast is certainly permissible, but it is reserved for the users of the forecast. The model should not be designed to yield forecasts which match management's preconceived notions. The model should be an unbiased manifestation of fact, professionally constructed by capable personnel. All of the quantitative forecasting methods are based on certain inherent assumptions, not always explicitly mentioned. Any forecasting method is weakened if it does not satisfy these assumptions about the data, sales pat— terns, etc. The more powerful methods have more rigorous sets of assumptions, but they yield more reliable output if these assumptions are met. Forecasting is analogous to statisti searcher requireu searcher Clarity 46 statistical testing. Nonparametric tests allow the re- searcher to test hypotheses. If the data conform to the requirements of certain parametric tests, then the res searcher has more powerful tools at his disposal. Clarity This final guidelines draws on nearly all of the preceding criteria for its substance. A useful forecast must be understandable in terms of the technique used, the weaknesses of the forecast, the physical presentation of the data, and the preforecast research and analysis. A manager who merely receives a host of numbers referring to products and regions and time periods is in no position to make use of this data. On the other hand, every manager needn't be well versed in all of the technical facets of forecasting. For the manager perhaps physical layout is the most important factor. HOpefully, among the team mem- bers required to complete the forecasting process from start to finish will be peOple capable of answering any questions that management might ask. Summary This chapter has introduced the broad considera- tions of sales forecasting. The role of sales forecasting as a key element of the planning process should be evident. Beth 10: The lens concern period I period serve t mates . 0f the 0r eva] factors startir of the is p] import; 47 Both long- and short-range planning utilize the forecast. 
The length of the planning horizon is, therefore, a major concern of the firm. The impact of this variable planning period on sales forecasting is manifested in terms of the period length and detail of the forecast. The several examples of the uses for forecasts serve to illustrate the company-wide value of these esti- mates. Virtually every department in the firm can make use of the sales forecast. Finally, another perspective, that of a firm about to select a forecasting mechanism or evaluate the present one, suggests a set of guidelines to aid in the selection or evaluation. This universal list represents the major factors that a firm should consider. With this overview of sales forecasting as a common starting point, a detailed discussion about and comparison of the various techniques for forecasting is meaningful. This presentation is found in Chapter III. Further, the importance of forecasting accuracy can now be better under- stood. Methods of error analysis are discussed in Chapter IV. ..___,__m.__.1 48 CHAPTER II--FOOTNOTES 1G. A. Steiner, Top Management Planning_(New York: The Macmillan Company, 1969), p. 7. 2L. G. Erickson and R. J. Lewis, forthcoming pub- lication (New YOrk: McGraw—Hill, Inc., 1972), ch. 7. 3Management Qperating System: Forecasting, Materials Planning and Inventory Management--Genera1 (White Plains, N.Y.: International Business Machines Corporation), p. 1. 4For an excellent treatment of the temporal dimen- sion of planning see Steiner, same reference as footnote 1, pp. 21-25. 5Erickson and Lewis, ch. 7. 6ForecastingSales, Studies in Business Policy, No. 106 (New York: The National Industrial Conference 7W. J. Stanton and R. H. Buskirk, Management of the Sales Force (Homewood, Illinois: Richard D. Irwin, Inc., 1964) 1 pp. 549-551. 8F. E. Hummel, Market and Sales Potentials (New York: The Ronald Press Company, 1961), ch. 11. 9 Forecasting Sales, p. 9. 1OJ. Dean, Managerial Economics (Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1951), ch. 4. 11D. H. McKinley, M. G. Lee, and H. Duffy, Forecasting Business Conditions (The American Bankers Association, 1965), ch. 4. 12R. Fels and C. E. Hinshaw, Forecasting and Recognizing Business Cycle Turning Points (New York: National Bureau of Economic Research, 1968), pp. 7-11. 13W. Lazer, "Sales Forecasting: Key to Integrated Management," Business Horizons (Fall, 1959), p. 64. 7..-, Individual Business Enterprise University Microfilms, 1967), pp. 41-49. 49 14Erickson and Lewis, ch. 7. 15Steiner, pp. 211-212. 16Forecasting Sales, p. 4. l 7Ibid., p. 5. l81bid.. pp. 100-103. 19B. R. Copeland, Sales Forecasting for the 20Erickson and Lewis, ch. 7. 21Forecasting Sales, p. 103. (Ann Arbor, Michigan: “Hf-77““) Par: f0“ tec} CHAPTER III SALES FORECASTING TECHNIQUES Introduction u . Anni!“ " This chapter presents an analysis of the various Ways 0 both mathematical and nonmathematical, to forecast sales. The techniques covered are those that have been sYStemetized through usage. Individual guessing based on hunches, for example, is not included. Only those methods Which can be reduced to a series of logical steps are dis- cussed. The advantages and disadvantages of each technique are also presented. Next, the various techniques are com- pared with each other. The objective is to select the most appropriate technique for use in later experimentation. 
Presentation of Techniques Although considerable space is allotted to sales forecasting in most marketing texts, the basic forecasting techniques have not changed significantly in recent years. A“ c><.‘.<:asional new application is mentioned, and perhaps small technical dimensions are added from time to time. 50 How No: ti! 1'10“ The de ef ar. 51 However, the methods remain virtually unchanged. Two general approaches to sales forecasting are popular. These are the build—up and the breakdown approaches. The build-up approach is characterized by qualitative or nonmathematical estimating, while the break- down approach can be either qualitative or quantitative. Nonmathematical Forecasting Four nonmathematical techniques are common: 1. Factor listing 2. Jury of executive Opinion 3. Sales force composite 4. Users' expectations. The overall approach in each instance is qualita- tive; however, this is not to say that these approaches are not structured. For example, in the sales force composite method each salesman may contribute his estimate based on a very'thorough analysis. Factor:§isting.--The factor listing approach is the Simplest of the qualitative approaches. It involves the develOpment of a list of factors or events which have an ef-"fect on sales. Those factors which would increase sales are, denoted accordingly, as are those lowering sales. Then the list is evaluated in one or both of two ways. First. 52 the number of positive factors is compared with the number of negative ones. If there are more positive factors, then sales are expected to increase in the future. As might be suspected by now, a majority of negative factors would lead to a forecasted decline. The second possibility is to assign weights, as well as a positive or negative sign, to each factor. A weighted ' .- __r_. total can then be computed by multiplying weights and signs a, and summing the results. Again, a positive score suggests a sales increase, while a negative total foretells a decline. The relative change expected might be available from the magnitude of the total score. Obviously this technique, though structured, is unsophisticated.1 It relies on the compilation of a thorough list of factors. Without such a list the approach is trivial. Considerable expertise on the part of the com- piler of the list is required. In addition, the arbitrary assignment of weights under the second scoring possibility is subject to criticism. Simply by manipulating the weighting values, management can change both the sign and magnitude of the total score. Jury of Executive Opinion.-—This approach to sales forecasting has been practiced for many years by companies. 30 in spite of its simplicity, it's one of the oldest “'| 53 techniques available and in use.2 It involves a combining or grouping of the estimates supplied by a cross-section of a company's executives. One writer has glamorized the pro- cess by subdividing it into the following three variations:3 1. An originating committee approach 2. A reviewing committee approach 3. A presidential survey approach. The originating committee compiles the tentative 1 forecast, as well as revised and final versions. If the committee also happens to be the budget committee, it can grant final approval of the sales forecast as the sales budget. A reviewing committee does not prepare the initial forecast. Instead, this first draft is submitted to the reviewing committee for examination and revision. 
Ulti- mately, the reviewing committee approves a final forecast and submits it to the person or persons who have the authority to approve the forecast as the budget. It is possible that this authority is vested in the reviewing committee itself: hence, no further evaluation is necessary. The presidential survey is similar to the originating Committee approach in that several executives develop and SUbndt.forecasts. The executives don't meet as a group. The fbrecasts are reviewed by the chief operating officer, W? In 54 who eventually develOps the finalized forecast. In this way the estimates of those the president considers to be the most pragmatic can be weighted more heavily; however, this technique stifles group discussion and relies on the skills of the president, who may not be the most qualified forecaster in the firm. The jury of executive Opinion method offers several advantages and disadvantages.4'5'6'7'8 First, on the plus side, the approach can be implemented quickly. By specify- ing regular due dates for the projections, the committee or the president can routinize the process. Second, detailed or complex statistics may not be mandatory. Executives may rely on their many years of experience to develOp their forecasts. Next, balance is achieved by drawing on many different departments throughout the company. Production, finance, marketing, and others may all be invited to par- ticipate, resulting in forecasts as seen from several van- tage points. Another benefit of the jury method is that it is "data free." That is, the jury approach doesn't require a data bank beyond that which is stored in the executives' memories. This method might be especially valuable for new firms, new products, or new markets for existing products. Finally, this technique supposedly synthesizes the most cu th h re ch ex f0 be th SD is hi ca fa th an: 80; we, 55 current information available. Alert managers are aware of the latest happenings within their departments, and marketers should be cognizant of market trends. The result should be an up-to-date, relevant forecast. Considerable negative criticism has been aimed at the jury method of forecasting. The most obvious one is SJ. that personal biases and Opinions are given considerable If. ._a recognition. Guesswork, rather than fact—finding, may characterize this scheme. It has been suggested that many executives are in no position to assay potential market per- formance. Second, should highly paid and skilled executives be bothered with develOping these estimates, particularly the Operating forecasts? Couldn't their time be better spent on Other tasks? Another disadvantage is that the ultimate forecast is an average. The accuracy of an average of Opinions is highly suspect. Fourth, since it is an average, the fore- cast cannot be traced to any one individual. Responsibility for the projections is dispersed throughout the firm. If there is a disparity between actual and forecasted sales, anyone can ease the pressure placed on him by simply blaming 8Omeone else. Fifth, to get any kind of local, product line, or wee'klyforecast is nearly impossible. This would occupy 56 an inordinate prOportion of the executives' time. Manage- ment must settle for aggregate forecasts. Lastly, the inertia of committees may slow down the forecasting process. Several confident men in positions of power, each with a different Opinion, provide the fodder for controversy and indecision. 
Sales Force Composite.--The sales force composite method involves gathering data from either salesmen or sales managers. The former approach has been described as a "grass-roots" one because it represents the forecasts of 9 the men closest to the markets. Each salesman is asked to 10 estimate future sales within his territory. Often esti- mates are submitted for each product for which the salesman is responsible. The National Industrial Conference Board, in a recent summary, cited many advantages of the sales force 11 composite method: 1. Uses specialized knowledge of men closest to the market. 2. Places responsibility for the forecast in the hands Of those who must produce the results. 3. Gives sales force greater confidence in quotas develOped from these forecasts. 4. Tends to give results greater stability because of the magnitude of the sample. 5. Lends itself to the easy develOpment of product, territory, customer, or salesmen breakdowns. S7 The same source, however, also notes a number of disadvantages. These are as follows:12 1. Salesmen are poor estimators, often being either more Optimistic or more pessimistic than conditions warrant. 2. If estimates are used as a basis for setting quotas, salesmen are inclined to understate the demand in order to make the goal easier to achieve. 3. Salesmen are often unaware of the broad economic patterns shaping future sales and are thus incapable of forecasting trends for extended periods. 4. Since sales forecasting is a subsidiary func— tion of the sales force, sufficient time may not be made available for it. 5. Requires an extensive expenditure of time by executives and sales force. 6. Elaborate schemes are sometimes necessary to keep estimates realistic and free from.bias. By bypassing the salesmen and utilizing the special- ized knowledge of the sales managers, a company can overcome many of the aforementioned shortcomings. These managers view the need for adequate and realistic sales forecasts from a management perspective. They realize the need for constant monitoring and updating of the forecast. Consider- able time should be saved by freeing salesmen from the fore- casting tasks for which they are likely ill-equipped.13 A serious drawback of basing the forecast on sales manage- ment's estimates is the loss of the localized knowledge of ‘37:“! ‘f' *7 58 the salesmen: however, managers might consult with their salesmen prior to formulating the forecast. Users' Expectations.--Advocates of the users' expec- tations method prOpose that actual and potential customers are the best sources of information upon which to base an estimate. Of course, anytime a survey or questionnaire is used, problems of accurate representation of the pOpulation are encountered. Anticipating a poor rate of response, forecasters might poll an extremely large sample. If the population is small, it might be enumerated completely and researched. As was the case earlier, arguments can be made by those favoring and those opposing this approach to fore— casting.l4'15 First, the users' expectations method is based on information Obtained directly from the users who, taken together, make up the firm's market. Second, the company can learn, maybe only subjectively, the thoughts underlying the intentions of buyers. Next, indirect sources, middlemen, are eliminated by going to the cus- tomer. The level of detail is under the control of the forecaster. Fourth, new product sales or new markets can be estimated when other methods are inapplicable. 
Also, the notion of a survey seems logical because buyers are worthy of consideration. Other methods overlook this basic fact. Industrial buyers often plan ahead and buy periodically. Perhaps this personal contact will make known facts not discernible from other approaches, such as when a buyer plans to shift to another supplier. Last of all, the users' expectations method can serve as a cross-check on the results of another way of forecasting.

Objections to this approach have been raised. As noted, diverse markets with many products and users do not lend themselves, except at the expenditure of much time and money, to a survey approach. The reliability of users has to be considered. Uninformed or uncooperative buyers can seriously hamper the research. Third, any forecast based on expectations is a function of those expectations. If for some reason buyers' expectations are altered, the old forecast would be immediately outdated.

A survey approach is necessarily time-consuming. Pretesting of the questionnaire is almost mandatory. Considerable manpower is required to contact personally each respondent in the sample. The alternative is to rely on the lower return rate of voluntarily completed mail surveys. If there are any middlemen, they can exert considerable influence on the buying practices of purchasers. These intermediaries should be polled in a separate interview for completeness.

This concludes the study of the nonmathematical approaches to sales forecasting. These judgmental approaches may not be appealing to the trained statistician; nevertheless, they provide the means for the less technically trained forecaster to complete his assigned task.

Mathematical Forecasting

The mathematical techniques for forecasting which are covered here are the following:

1. Moving average
2. Exponential smoothing
3. Time series analysis
4. Regression and correlation analysis.

For each of the above quantitative approaches, a specific numeric or logical decision rule or process is executed to generate the forecasted value. There is nothing inherently superior about a quantitative technique as compared to a qualitative one, despite the aura of infallibility which seems to surround the former.

Moving Average.--Again, the starting point is the simplest of the alternatives. The term "moving average" is precisely definitive of the nature of this technique. The forecast is calculated by averaging sales for the most recent n time periods. As the data for another period become available, the average is recalculated, again for the last n periods. So the moving average technique consists of adding to the total the latest period's data and deleting the data from n+1 periods in the past. This total is then divided by n to achieve the forecasted value.

If t denotes the most recent time period and S(t) represents sales in period t, then the current moving average value is MA(t). Expressed symbolically, this relationship is

MA(t) = [S(t) + S(t-1) + . . . + S(t-n+1)] / n,

where n is the number of time periods.

Obviously the moving average value is easy to compute, especially once the initial value is obtained. The mechanics of the process can easily be programmed for use on an electronic computer.16 Also, the technique is easily understood, so the problem of training personnel in its use is a minimal one. Unfortunately, there are several limitations which cannot be overlooked.
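Before turning to those limitations, the computation itself can be illustrated with a minimal sketch; the weekly sales figures and the choice of n below are hypothetical.

```python
def moving_average_forecast(sales, n):
    """Forecast the next period as the mean of the most recent n observations:
    MA(t) = [S(t) + S(t-1) + ... + S(t-n+1)] / n."""
    if len(sales) < n:
        raise ValueError("need at least n observations")
    return sum(sales[-n:]) / n      # only the latest n periods enter the total

# Hypothetical weekly sales in case units
weekly_sales = [120, 135, 128, 140, 150, 145]
print(moving_average_forecast(weekly_sales, n=4))   # 140.75, the average of the last four weeks
```

Each week the newest figure enters the total and the figure from n+1 periods back drops out; the limitations noted next apply however the arithmetic is carried out.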
An inherent assumption of the moving average method is that the future will be, for the most part, an unweighted and lagged extension of the past. If this trend can be validated for a given product, then perhaps the moving average approach is an adequate one; otherwise, its use ought to be discontinued.

Related to this first shortcoming is the unresponsiveness or sluggishness of the technique. For large values of n, current sales data make up only a small proportion of the total before averaging. Thus, if there is a radical and permanent shift in the direction of the sales pattern, the moving average won't reflect this until the preshift data are eliminated from the total. On the other hand, a small n value will result in a highly responsive, even volatile, system that overstates one-time fluctuations. The forecaster must be familiar with the product in order to choose wisely the value for n.

The size of the data base required to operate a moving average model is also a function of the number of periods. Extensive records must be maintained for each product. If n is increased, this data file increases accordingly.

Finally, because the moving average technique is a way to measure central tendency of data, the computed value is inappropriately called a forecast. Instead, any average is representative of the interval over which it was calculated. If the value must be located at some point within this interval, the midpoint is a much more logical point than an endpoint (as is the case with using the average value as a forecast). Theoretically, any moving average value represents some past time interval and shouldn't be interpreted as a forecast.

Exponential Smoothing.--Certain of the shortcomings of the moving average method have been eliminated by exponential smoothing, which is really nothing more than a weighted type of moving average.17 (The derivation which follows is based on work by Brown.18,19) Exponential smoothing bases the new estimate of sales on the previous estimate incremented by some fraction of the amount by which the old estimate differed from actual sales. Expressed in equation form, the relationship is

E(t) = E(t-1) + a[D(t-1) - E(t-1)],

where E(t) is the estimate for period t, D(t-1) is the actual demand for period t-1, and a is the smoothing constant, 0 < a < 1. If like terms are combined, the equation becomes

E(t) = aD(t-1) + (1-a)E(t-1).

Brown has shown that by substituting consecutive values for E(t-1), E(t-2), etc., a form of weighted average is derived where

E(t) = aD(t-1) + a(1-a)D(t-2) + a(1-a)^2 D(t-3) + . . . + a(1-a)^k D(t-k-1).

The current estimate is a sum of past sales multiplied by different weights, which sum to one. The resulting equation is

New Average = a(New Demand) + (1-a)(Old Average).

This formula doesn't correct for trend. It is possible to alter this equation so that corrections for trend are automatic. An estimate of the present trend could be

Trend = New Average - Old Average.

Reasoning analogous to that underlying the derivation of the New Average yields

New Trend = a(Current Trend) + (1-a)(Old Trend).

So the estimate for sales is the combination of trend and average values. The expected sales total is

Expected Sales = New Average + (1/a)(New Trend).

The analysis can be carried further if needed. Second- and third-order formulae, which include the trend and the acceleration of change, respectively, can be derived.20 The second-order approach involves calculating the New Average as just shown, but calling the result the New First Average.
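A minimal sketch of the first-order procedure with this trend correction may be useful before the higher-order refinement is developed. The smoothing constant and demand figures are purely illustrative, and the (1/a) trend adjustment simply follows the expected-sales expression given above.

```python
ALPHA = 0.10   # smoothing constant a; 0.10 is among the values examined later in this study

def smooth_step(old_average, old_trend, new_demand, a=ALPHA):
    """One period of first-order exponential smoothing with a trend correction."""
    new_average = a * new_demand + (1 - a) * old_average
    current_trend = new_average - old_average
    new_trend = a * current_trend + (1 - a) * old_trend
    expected_sales = new_average + (1 / a) * new_trend   # trend-adjusted expected sales
    return new_average, new_trend, expected_sales

# Hypothetical weekly demand in case units
average, trend = 130.0, 0.0
for demand in [120, 135, 128, 140, 150, 145]:
    average, trend, forecast = smooth_step(average, trend, demand)
print(round(forecast, 1))
```

Only the previous average and trend need to be carried forward from period to period, which is the minimal data-storage property noted later in this section. With this first-order machinery in hand, the second-order refinement follows.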
That is, New First Average = a(New Demand)4-(l~a)(Old First Average). ZKJaother average, based on the previous computation is New Second Average = a(New First Average) + (l-a)(Old Second Average). 65 The latest forecast is New Forecast = 2(New First Average) - New Second Average. The trend value is New Trend = New First Average - New Second Average. Last, the final forecast is New Final Forecast = New Forecast + New Trend. Third order analysis requires that a new Third Average be computed so that the acceleration factor can be derived. The final forecast includes, then, the New Fore— cast, the New Trend (rate of change in forecast), and the New Acceleration (rate of change of rate of change in fore- cast). Both second and third order models are very sensi- 21 tive to shifting sales. Further corrections are possible to adjust for seasonality patterns in the data.22 A final variation of exponential smoothing is adaptive smoothing. Here, the value of the smoothing constant (a) is reviewed regularly, perhaps each time a new forecast is made. The value for the smoothing constant used to make the next projection is that value which would have given the most precise estimate for the last forecast period. In other words, alternative a values are used to project last period's sales. These several projections are compared with.last period's actual sales. The smoothing constant Value yielding the projection closest to actual sales is 66 used to generate the next forecast. EXponential smoothing is, because of its origin, more complicated than the moving average technique. This complexity can hinder the adoption of exponential smoothing as a method of forecasting. Lack of understanding may cause firms to select another, less SOphisticated approach. SOphistication isn't inherently good, but to rule out a technique because of its complexity is unfortunate. A serious disadvantage is the distortion over time in amplitude of the smoothed sales flow. The distortion depends on the nature of the input, the order of the smoothing model, and the prOportion of disturbance or "noise" in the input information.23'24 A random input, such as sales being twice the volume in a certain period as compared with any other period, would cause the fore- casting model to overestimate for the next period. Then, the anticipation of a continued decline because of a return to previous sales levels would cause the subsequent period's sales estimate to be decreased. This distorted value would gradually be "washed out" of the system. The predictive pattern would be that of a dampened oscillation over time. Several positive attributes can, however, be mentioned. After the model has cycled one time, the only data which must be saved are the most recent prior values L... U! E! In 67 of the trend and the average (or the previous average and the most recent demand if the non—trend-adjusted version is used). Data storage requirements are practically nonexist- ent. Further, the computation is simple. Last of all, exponential smoothing is very apprOpriate for short- interval forecasting because in such cases causality or association can't be handled in a practical way. Time Series Analysis.--A time series is a group or set of statistical observations arranged in chronological order.25 There may or may not be a relationship between or among the values of the series. If such a relationship doesn't exist, successive values are said to be independent. 
Obviously, this study is not concerned with independent time series because they cannot be forecasted. Dependency is exhibited if successive values can be estimated based on previous values.

The standard time series model is composed of four components. These are:

1. Secular trend
2. Cyclical movements
3. Seasonal patterns
4. Irregular fluctuations.

Secular trend is described as the underlying general tendency of the time series. It is a long-term concept, encompassing ten or more years for adequate description. By definition secular trend is a stable phenomenon, manifesting itself in the form of a smooth line if plotted graphically.

Representation of the secular trend can be achieved by a variety of approaches. The simplest is merely to plot the data on a graph and visually fit a line to estimate trend. This is a very quick method, but it is dependent upon the bias of the drawer. Any two people are likely to develop different lines; hence, there are no criteria for fitting the line.

Another possibility is to divide the data into two equal time intervals. Then the average value for each of the two intervals is computed. These averages are plotted at the midpoints of the two intervals and are connected with a straight line. By finding the arithmetic difference between the two mean values and dividing this by the number of time periods, the slope of the line can be determined. The origin of the line can be arbitrarily set anyplace on the time scale by moving away from either of the means. This technique isn't biased like the free-hand method, but it is still very crude. Only straight lines can be generated, a very real weakness.

Last, a line can be fitted to a set of data on the basis of one of several statistical criteria. Perhaps the sum of the absolute differences between actual and computed (using the trend line) values could be minimized. A rather common approach is to minimize the standard error of the estimate. This leads directly to the "least-squares" method of line-fitting. Here, the sum of the squares of the differences between corresponding actual and computed (using the trend line) values is minimized. This means that

Σ(Y - Yt)² is a minimum,

where Y represents the actual values and Yt refers to the corresponding computed trend values. An additional property of the least-squares method is that the sum of the differences between actual and computed values is zero. In other words,

Σ(Y - Yt) = 0.

Calculus can be applied to assure that these two restrictions are enforced. If a rectilinear relationship is assumed, the trend equation is of the form Yt = a + bX, where Yt is the forecasted (dependent) variable and X is the time (independent) variable. Values for a and b must be chosen such that the aforementioned conditions hold. Thus, the deviations which are to be minimized are functions of a and b. So

f(a,b) = Σ(Y - Yt)².

Since Yt = a + bX, this relationship can be substituted into the above equation. Thus,

f(a,b) = Σ(Y - a - bX)².

To minimize f(a,b), compute partial derivatives with respect to a and b and set them equal to zero. Hence,

∂f(a,b)/∂a = (-2)Σ(Y - a - bX) = 0  and
∂f(a,b)/∂b = (-2)ΣX(Y - a - bX) = 0.

These two equations can be solved to get the so-called "normal equations"

ΣY = na + bΣX
ΣXY = aΣX + bΣX².

Because of arbitrary coding of the time (X) variable, the ΣX term drops from each equation because it equals zero. The remaining relationships are easily solvable for a and b:

a = ΣY/n
b = ΣXY/ΣX².
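These relationships translate directly into a minimal computational sketch; the annual sales series below is hypothetical, and the time variable is coded so that ΣX = 0, as assumed above.

```python
def least_squares_trend(sales):
    """Fit Yt = a + b*X by least squares, coding time so that the X values sum to zero."""
    n = len(sales)
    xs = [i - (n - 1) / 2 for i in range(n)]     # coded time values; sum(xs) == 0
    a = sum(sales) / n                           # a = ΣY / n
    b = sum(x * y for x, y in zip(xs, sales)) / sum(x * x for x in xs)   # b = ΣXY / ΣX²
    return a, b, xs[-1]

# Hypothetical annual sales in thousands of dollars
a, b, last_x = least_squares_trend([410, 445, 460, 498, 520, 555])
print(round(a + b * (last_x + 1), 1))            # trend value projected one period ahead
```

With ΣX = 0 the two normal equations decouple exactly as shown, so a is simply the mean of the series and b depends only on ΣXY and ΣX².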
Curvilinear trend lines can be derived simply by assuming any of several possible curved relationships between x and Y and repeating the preceding minimization steps.

The least-squares and similar approaches seem to be the most useful of the trend-fitting techniques. Least-squares satisfies a specific set of criteria: it is unbiased and consistent. Further, it can be used to develop both straight- and curved-line relationships.

Cyclical movements refer to activities of the business cycle which occur over two to ten or more years. These recurring cycles are generally measured by estimating the interval between consecutive like turning points (e.g., from peak to peak). Length and amplitude of the cycle vary from product to product, and the absence of a constant period and amplitude for a given product makes prediction even more difficult.

The combination of interproduct variation and intraproduct inconsistency results, as would be expected, in no highly accurate method for forecasting this particular part of the time series model. The standard approach is to describe the trend and seasonal factors and to combine the cyclical and irregular factors as a residual.26 In particular cases the development of basic trigonometric functions to mirror the cyclic element might be feasible if consecutive periods are nearly the same and the amplitude variation is minimal. Fourier series analysis might also be attempted under such circumstances.27

Seasonal patterns reflect the variations which occur regularly and complete a cycle each year. Such factors as weather, customs, and religion are usually considered to be the major causes of seasonality. Higher sales during the days before Christmas exemplify the impact of the seasonal element.

Numerous approaches for depicting seasonal patterns have been developed. Among these methods are (1) general average, (2) link-relative, and (3) ratio-to-trend.28 The general average technique involves fewer computations and is probably the easiest to understand. Historic data for several periods are tabled. Average sales values for each quarter (or month) are computed, and average quarterly (monthly) sales for the entire time span (the "general average") are calculated. Indexes are derived by dividing the four (twelve) quarterly (monthly) averages by the general average. These indexes must sum to 400 (1200) to be adjusted for trend.

The link-relative method formulates the indexes into a chained or linked sequence by basing each index on the previous one. Again the data are arranged in table format by year. Indexes are computed by dividing quarterly (monthly) sales by sales for the immediately preceding period; in this way the indexes are linked together. The average index for each period is then determined. Since the original data contain a trend factor, this can be removed by assuming a rectilinear relationship and adjusting the average indexes. As always, quarterly (monthly) indexes should be adjusted so that they sum to 400 (1200).

The last main approach to developing seasonal indexes, ratio-to-trend, is the most complex. The initial step is to devise a trend line to represent the historic data. A least-squares approach would be acceptable, as would a four-quarter (twelve-month) or longer moving average, because a long-term average eliminates seasonal fluctuations. Ratios of actual sales to trend values are calculated and tabled by year. Then average quarterly or monthly indexes can be computed and adjusted to sum properly.
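The general average method is simple enough to be shown in a few lines. The sketch below uses three years of invented quarterly sales; it is meant only to illustrate the arithmetic described above, not any particular firm's data.

    # General-average seasonal indexes from three years of quarterly sales.
    years = [
        [ 90.0, 110.0, 130.0,  70.0],
        [100.0, 120.0, 140.0,  80.0],
        [110.0, 130.0, 150.0,  90.0],
    ]

    quarter_avgs = [sum(year[q] for year in years) / len(years) for q in range(4)]
    general_avg = sum(quarter_avgs) / 4.0

    indexes = [100.0 * q / general_avg for q in quarter_avgs]

    # Scale the quarterly indexes so that they sum to 400.
    scale = 400.0 / sum(indexes)
    indexes = [i * scale for i in indexes]

Here the quarterly averages are 100, 120, 140, and 80 against a general average of 110, so the indexes come out to roughly 91, 109, 127, and 73.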
Certainly these means of isolating the seasonal element differ in usefulness. The ratio-to—trend approach involves more computation, yet it will permit curvilinear trend assumptions. All the other approaches assume a rectilinear trend. The link-relative method is the easiest to use because once an initial trend value is known, all other seasonal-adjusted estimates for a given year can be obtained by a simple multiplication. The general average technique is a simple and "common sense" method for obtain- ing rough indicators of the seasonal factor. The fourth and final time series component is irregular fluctuations. These movements are erratic and 74 unpredictable. They result from such causes as disasters and windfalls. Because of their very nature, irregular fluctuations are not able to be forecasted; instead, they are included as residual variations. These four time series components are typically related to each other in either a multiplicative or additive model. The general form of the multiplicative model is Y = T x C x S x I where Y = actual sales data T = secular trend C = cyclical movements S = seasonal patterns I = irregular fluctuations. Y and T are in terms of sales dollars or units. S, C, and I are ratios (or percentages) which adjust the ‘trend to equal actual sales. An additive model can be used which has the form Y = T + C + S + I. In.this case all components are in terms of dollars or units. Time series analysis offers several advantages to hits users.29 It forces forecasters to probe into the com- INDnent factors which affect sales. By isolating the various 1Influences, management hopefully can develOp a better 75 understanding of products and their sales patterns. Related to this first advantage is the systemitized approach to forecasting offered by time series analysis. It is an organized, step-by-step method which should result in con- sistent findings when applied by any trained forecaster. A third reason for adOpting time series analysis is that considerable detail is possible. Naturally much work would be necessary to derive initially the equation para- meters for many products and regions, but then a computer could perform the forecasting and revisions with ease by simply iterating through a matrix of equation parameters. Unfortunately, short—term forecasting using time series analysis is a bit risky, especially if the data base used to build the model is formulated in terms of longer time periods. For example, if quarterly historic data were used to construct the relationships, monthly or weekly fore- casts are theoretically possible, but practically unsound. Regardless of the data, trend and cyclical com- ponents are multiyear in nature. This violates the earlier one-year-or-less definition of a short-term period. The need for considerable historic data has been alluded to already. In addition, technical know-how for develOping the projection model from this broad spectrum of data is vital. In the hands of a novice, time series 76 analysis as a forecasting tool could be a dangerous weapon. A series shortcoming is the lack of a causal rela- tionship. Time series analysis assumes a continuation of prior sales movements. There is no intrinsic relationship between time and sales volume. Of course, if adequate research and study suggest a continuation of past patterns, then time series analysis is quite useful. 
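To tie the pieces of the decomposition together before turning to its limitations, the fragment below shows how a trend value, a cyclical ratio, and a seasonal index combine under the multiplicative and additive forms just given. The component values are invented for the illustration.

    def multiplicative_estimate(trend, cyclical, seasonal, irregular=1.0):
        """Y = T x C x S x I, with C, S, and I expressed as ratios."""
        return trend * cyclical * seasonal * irregular

    def additive_estimate(trend, cyclical, seasonal, irregular=0.0):
        """Y = T + C + S + I, with every component in units (or dollars)."""
        return trend + cyclical + seasonal + irregular

    # A trend value of 500 units, a mild cyclical upswing (1.03), and a
    # strong seasonal quarter (index 125, used as the ratio 1.25):
    y_hat = multiplicative_estimate(500.0, 1.03, 1.25)    # 643.75 units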
Finally, there is a chance that random or irregular factors present in past data may overshadow relatively the effects of the predictable time series components. Care must be taken to isolate the regular factors without being misled by the unusual fluctuations. Regression and Correlation Analysis.--Regression analysis and correlation analysis are sometimes mistakenly defined alike. Actually regression analysis refers to methods for estimating values of a variable based on infor- mation about one or more other variables. Correlation analysis encompasses measurement of the degree or strength of the association among the several variables.30 Regression analysis can be categorized as either simple or multiple. Simple regression involves estimating values for one variable (dependent) based on corresponding Values for one other variable (independent). Multiple regression, then, forecasts dependent variable values on 77 the basis of values for two or more independent variables. No directional causality is implied for dependent and independent variables. A cause-effect relationship could, in fact, exist in either direction: however, deter- mination of causality is not a necessary condition. A graph or scatter diagram is an advisable first step in the use of simple regression analysis. The diagram depicts the general nature of the relationship and aids in the selection of the mathematical archetype. As was the case with time series analysis, regres— sion analysis utilizes a goodness-of-fit criterion in develOping an equation to represent historic data. The most widely used rule is, again, to minimize the sum of the squares of the deviations of actual and computed sales values (vertical deviations with sales plotted on the ordinate). The analysis is similar to that described for time series, except that the x value is no longer restricted to a time dimension. The units of x may be in terms of any dimension measurable on an interval scale. Obviously for multiple regression a scatter diagram is difficult to draw for two independent variables and impossible for three or more . Several assumptions underlie the use of regression 31 analysis for forecasting. They are as follows: 78 l. The regression error must be randomly dis- tributed with zero expected value. 2. The variance of the regression error must be the same for all values of x (homo- scedasticity). 3. Individual forecast errors must be statis- tically independent of each other (no auto- correlation3 ). 4. Individual forecast errors must be uncorre- lated with the independent variable in the forecast equation. 5. The underlying relationship between Y and x must be strictly linear if the regression slope parameters and forecasts based upon it are to be independent of the distribution of the X's in the sample. 1\ sixth restriction, applicable in the case of multiple re- gression, is that multicollinearity should be minimized and, ideally, equal to zero. It is beyond the sc0pe of this Paper to explain rigorously the impact of imposing these Six restrictions. Each condition can be commented on in terms of its practical importance. The first assumption has intuitive appeal even for the nonstatistician. The error in using the regression Ifcxrmmla (YeYb) to estimate sales should be randomly distri- 1Duted and should sum to zero for a large number of obser- vations. Otherwise, the so-called error would be predict— ‘allle and could be incorporated into the formula. Only when the expected value is zero is the equation reliable. 
Homoscedasticity simply means that the variance of the regression error is independent of the location within the relevant range of the predictive equation. For example, if Y is an increasing function of x, then larger Y values are associated with larger values of x. Homoscedasticity implies that the variance of the prediction error is the same for both large and small values of x. The confidence interval about the regression line is of constant width for all x values and is not sensitive to extreme values.

Autocorrelation of forecast errors generally causes the forecast variance and the variance of the regression slope parameter to be biased downward. This implies more information about the regression slope, b, than is actually available.33 Autocorrelation should be minimized.

The fourth assumption relates somewhat to the concept of homoscedasticity. Here the regression error is not permitted to be an increasing or decreasing function of the independent variable. If there is a relationship between x and the error, the forecast can reflect this association. This restriction is more important for structural analysis, for obtaining unbiased b estimates, than for forecasting.34

Model linearity refers to the regression equation's coefficients. Adding a cx² term to the right side of the standard Y = a + bx equation does not result in a nonlinear function; the variables are still included in linear combinations. However, adding an x^c term generates a nonlinear form. Logarithms provide a means of eliminating nonlinearity in this second example. If the underlying relationship between Y and x is curvilinear, the distribution of the independent variable must be controlled to ensure proper interpretation of the regression parameters.35 Perhaps successive rectilinear equations over restricted ranges of x could be used to approximate the curved relationship.

The existence of multicollinearity implies dependence among or between the independent variables. Multicollinearity reduces the efficiency of the estimates for the regression parameters; however, for efficiency in forecasting sales (Y) the interrelationships among the independent variables pose no real problems. Of importance is the amount of information about Y available through the use of the independent variables taken in concert, not separately. If two independent variables correlate perfectly, one can be dropped because it adds no insight not already present by including the other one.

A number of measures have been popularized for examining the strength of the relationship between the dependent and independent variables. These are the coefficients of correlation (r), determination (r²), nondetermination (k²), alienation (k), and association (A). These measures can be represented symbolically. For any X value, three Y values are of interest: Y, the actual or observed Y value; Yc, the value computed by using the regression line; and Ȳ, the mean of the historic Y values. The total variance, σy², or the variance of the Y values about their mean, Ȳ, is Σ(Y - Ȳ)²/n. The unexplained variance, σyc², or the variance of the Y values about the regression line, is Σ(Y - Yc)²/n. The explained variance, σyx², or the variance of the Yc values around the mean, Ȳ, is Σ(Yc - Ȳ)²/n. It can be shown that

    Σ(Y - Ȳ)²/n = Σ(Y - Yc)²/n + Σ(Yc - Ȳ)²/n,   or   σy² = σyc² + σyx².

Thus,

    r² = σyx²/σy² = coefficient of determination = the relative reduction in squared error (variance) due to estimating values of Y from values of X.

    k² = σyc²/σy² = coefficient of nondetermination = the relative amount of squared error (variance) remaining due to estimating values of Y from values of X.

    r = √r² = coefficient of correlation.

    k = √k² = coefficient of alienation = the remaining relative amount of absolute error due to estimating values of Y from values of X.

    A = 1 - k = coefficient of association = the relative amount of reduction in absolute error due to estimating values of Y from values of X.

It is difficult to say which of these measures provides the greatest insight into the strength of the association between X and Y. The interpretation of r² as a relative type of measure is, perhaps, more logical than that of r, which is dimensionally meaningless. However, r is easily analyzed by using statistical testing. Because of their interrelated nature, all the coefficients can and should be calculated once r² or k² is known.

A primary reason for utilizing correlation and regression analysis is objectivity. The technique practically guarantees a cold, hard look at a situation without the danger of human bias. Of course, a forecaster could eliminate some possible independent variables from the study by allowing his intuition to guide him. Generally, though, regression and correlation analysis leads to an objective and measurable result.

The preliminary investigation required to develop a set of potential independent variables leads to a more thorough understanding of the factors influencing sales. The subtleties of the relationship between sales and other factors become clearer. There may be a temptation to interpret a strong association as causality, but this may not be warranted. It is possible, however, to discover a causal relationship, and such a finding would obviously be invaluable to all those who rely on the forecast for information.

Because the effects of different independent variables on sales may vary throughout the country or by product, many variables may have to be selected; nevertheless, regression and correlation analysis is still very useful. Different sets of equations on a regional basis may enhance forecasting accuracy. Certainly, in the limit, every product in every location could be studied individually to determine the most suitable variables and equations.

A final reason for the possible adoption of this technique is the availability of a measure of the strength of the predictive equation. By looking at the five coefficients derivable from the historic data, management can evaluate the reliability of the projections. A critical assumption here is that past relationships will continue to hold, at least on a relative basis.

This previous point introduces a negative consideration: the impact of residual or excluded independent variables. Surely no analyst is capable of achieving a perfect relationship (r² = 1.0) between sales and all needed independent variables on every attempt. Even the slightest imperfection sets the stage for later forecasting error. For example, within the current range of values an excluded independent variable may have a negligible impact on sales. Suppose the strength of the association increases considerably beyond the current range. Unless someone notices this, the forecast goes awry without a reasonable explanation.

The availability of data related to the independent variables could be regarded as another limiting feature of this technique.
If indexes are used as predictors, especially aggregates of economic activity, oftentimes only annual data are available. Unless the independent variables are leading series, they may have to be predicted themselves in order to be used as input to the regression equations.

A considerable volume of data is required to operationalize this type of model. Detailed historical analysis is necessary to derive the regression equations, and continuous updating is advisable so that the parameters will reflect the most recent tendencies. Lack of data timeliness, availability, or accuracy could each have a detrimental effect on the forecasted output.

Last, it is apparent that considerable technical expertise is a must in order to understand and to develop the relationships. This point can be re-emphasized merely by studying any intermediate or advanced text dealing with regression and correlation analysis. Likewise, those using the forecast may be mystified by the inner workings of the model.

Comparison of Sales Forecasting Techniques

To attempt to generalize about the relative merits of the aforementioned forecasting techniques is to be somewhat arbitrary at best. Surely the rankings depend upon the experiences of the user, as well as his reading knowledge of the alternatives. Regardless, a relative ranking is presented in Table 3.1. Cross-rankings between the quantitative and nonquantitative classes are not attempted because of noncomparability. For instance, it is not patently obvious, except for a specific case, whether or not polling a sales force costs more than running a computerized regression model.

The ranking scheme runs from one to four. A "1" is the best in the sense of least cost, most detail, fastest timing, and so on. Equal rankings are indicated by an average of the specific rankings involved; for example, two 2.5's indicate a tie between the second and third methods.

TABLE 3.1.--Comparison of Sales Forecasting Techniques. [The table ranks the four nonquantitative techniques -- factor listing (FL), jury of executive opinion (JE), sales force composite (SF), and users' expectations (UE) -- and, separately, the four quantitative techniques -- moving average (MA), exponential smoothing (ES), time series analysis (TS), and regression and correlation analysis (RC) -- from 1 (best) to 4 on the criteria of cost, level of detail, market factors, inputs, planning horizon, timing, and rigor. The individual rankings are not legible in this copy; the discussion that follows summarizes them.]

Also, no rankings were developed for the selection criteria of accuracy, turning points, and clarity. These are best measured, as noted before, in terms of absolute standards germane to a specific company and situation.

The rankings deserve some comment to clarify or, as the case may be, to justify the tabled results. Cost would seem to increase going from Factor Listing (FL), to Jury of Executive Opinion (JE), to Sales Force Composite (SF), and finally to Users' Expectations (UE). Increased manpower requirements characterize this sequence. In addition, UE entails devising a sampling instrument and the attendant costs of testing and tabulating.

SF yields a more detailed forecast because it is a build-up approach. UE may come close to this level of detail, but at the cost of an extremely thorough sampling. FL and JE, because of their general nature, practically preclude any detail.

UE can reveal the important market factors if the test instrument is well-rounded and properly constructed.
The FL approach suggests management's list of possible factors, but JE and SF only indirectly incorporate these market factors. It is logical that required inputs would parallel 88 cost movements. FL requires only a set of factors with intuitive weightings. JE calls for tOp executives to pool their knowledge and to expend a certain amount of effort and time. To implement an SF approach, considerable man- power must be used, including both field sales personnel and sales managers. An adequate UE forecast calls for marketing research peOple, a test instrument, research validation, tabulation, and interviewer or postal service costs. Analyzing relative strength in terms of the plan- ning horizon is a difficult task. It seems that the FL and JE methods might be more useful for long-term fore— casting, while SF and UE lend themselves to a shorter time period. The first two approaches tend to be more general and planning oriented. On the other hand, salesmen and customers may have more precise estimates for the near future. Overall, none seems to be good for both long- and short-term forecasting. The timing criterion has been applied assuming that the forecast being sought is not a first-time one. In other words, the projection mechanism is already is exist- ence. An FL approach should be the fastest to Operate be- cause of its basic simplicity. An SF or JE estimate should be next fastest if the process has already been formalized. 89 The prOper signals need only to be sent to those involved to trigger the return of the individual forecasts. Last, UE requires prodding many customers to respond, not always an easy task, even if good customer-firm relations exist. The degree of rigor is inversely related to cost. A valid customer survey would be most appealing to the theoretician because of its representativeness. A detailed sales staff survey might approach a customer survey in rigor, but the information gathered would still be second- hand. Even more removed from the market place are the results obtained from the FL and JE approaches. They are the least exacting. The quantitative techniques are also listed in order of increasing cost--Moving Average (MA), Exponential Smoothing (ES), Time Series Analysis (TS), and Regression and Correlation Analysis (RC). Again, this sequence corre- sponds to increasing input requirements. MA requires only the most recent n values for sales and a calculator (or a computer for many computations). ES may be even simpler in terms of raw data requirements, but greater technical know- how is needed. TS calls for much data, technical capability, and computing equipment. Finally, RC is even more demand- ing than TS with respect to data and technical ability because of possible multivariate, nonlinear applications. 90 Perhaps BS is the most flexible in level of detail. It can be implemented at nearly any level, and it is espe- cially useful for generating product-by-product projections. TS and RC are just as effective theoretically as ES, but there are certain practical limitations. Sales of low volume items can't be estimated with confidence, and rela- tionships between independent variables and a specific product may be weaker than for product classes or areas of the country. MA is practically limited because it is an averaging method. Low volume or high volume, volatile products make averaging historic data a useless exercise. Only RC actually considers market factors in the form of the independent variables incorporated into the model. 
The other three quantitative approaches emphasize data patterns, not relationships or causalities. The four techniques appear, at first glance, to be about equally apprOpriate for different length planning horizons. Actually only ES offers true flexibility, but it generally is used at the shorter end of the time scale. TS, by virtue of its component structure, can be utilized only over a longer-range interval. Projections should not be computed for periods shorter than those of the historic data used to derive the equations. The same warning is relevant for MA. Last, RC's usefulness is largely a 91 function of the periodic availability of values for the independent variables. Often such data are obtainable only on a yearly basis. TS and RC can both generate forecasts throughout their relevant ranges. As long as the assumed relation- ships hold, the equations can be used. MA and ES are data- bound in that they both require the latest value for actual sales before a new forecast can be produced. TS and RC are capable of forecasting at a more advanced date than either MA or ES. The most rigorous of the quantitative techniques is RC. Nonlinear and multivariate alternatives are possible, as well as simple linear models. Several restrictions on data and model structure must be met. Finally, five coef- ficients for evaluating the strength of the relationship can be derived. TS is based on a best-fit criterion, and it decomposes historic data into four component flows. ES can be weighted to be very responsive to current input, or just the Opposite. The MA approach is obviously the least sophisticated; it is a simple average over several periods. Selection of ForecastinggTechnique The remaining task at the present time is to choose the sales forecasting technique to be used in conjunction 92 with the LREPS model. The foregoing presentation, analysis, and comparison of techniques provide the framework for this selection. The discussion which follows is organized on a priority basis. First, since the LREPS model is a quantitative, computerized simulation, the forecasting mechanism must lend itself to either logical or symbolic expression. All eight techniques satisfy this test, although the quantitative methods are somewhat easier to program. Next, all data input must be available at the start of a simulation and be capable of being updated through feedback linkages in the model. The nonquantitative tech- niques start to fall short at this point. For example, to anticipate and to program all the thought processes in which an executive or a salesman might engage while up- dating his initial forecast is an impossible task. Like- wise, to conduct an Opinion poll of simulated product users is to know in advance the way in which they think, making the poll results trivial. The nonquantitative approaches to sales forecasting can be eliminated from consideration. Third, the need is for a short-term mechanism cap- able of forecasting in detail by product and by geographic region. Fourth, the Operating cost is inconsequential because any technique will not contribute substantial 93 additional cost when compared with existing model Operating costs. Based on level of detail required and the desired planning horizon, exponential smoothing is the most appro- priate choice. In addition, this approach is less demand- ing in terms of inputs than the more rigorous techniques. Timing becomes less important because of the rela- tively short time interval being studied. 
Also because of the one year time span, the importance of market factors is diminished. This forecasting mechanism is for control purposes, not for long-range planning. As a result, the importance of detail, inputs, and the planning horizon seem to indicate an exponential smoothing model. This, then, is the adOpted forecasting technique. The format of the model is New Forecast = a(New Demand) + (l-a)(Old Forecast). Summary The objective of this chapter was to describe the alternative forecasting techniques so that a wise selection of the technique to be used in conjunction with the LREPS model could be made. As was shown, there are two main cate- gories of techniques, nonmathematical and mathematical. Included in the nonmathematical group are factor listing, jury of executive Opinion, sales force composite, and users' 94 expectations. The mathematical approaches are moving average, eXponential smoothing, time series analysis, and regression and correlation analysis. Each technique has its strengths and weaknesses. To choose correctly the most apprOpriate technique, manage- ment needs to compare a set of selection criteria with the relative rankings of these techniques. The situation facing the firm dictates which technique is suitable. For purposes of short-term computer experimentation, exponential smoothing appears to be the most useful of the various techniques. Forecasts for short time periods are possible, as are regional and product forecasts. 95 CHAPTER III--FOOTNOTES 1M. H. Spencer, C. G. Clark, and P. W. Hoguet, Business and Economic Forecasting (Homewood, Illinois: Richard D. Irwin, Inc., 1961), p. 4. 2Stanton and Buskirk, p. 555. 3C0pe1and, p. 76. 4COpeland, pp. 78-83. 5Spencer, Clark, and Hoquet, p. 17. 6Stanton and Buskirk, p. 555. 7R. R. Still and E. W. Cundiff, Sales Management: Decisions, Policies, and Cases (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1958), pp. 555-556. 8Forecastipgpsales. pp. 12-14. 9E. C. Bratt, Business Cycles and Forecasting (Homewood, Illinois: Richard D. Irwin, Inc., 1953), p. 239. loForecasting Sales, p. 20. llIbid., p. 21. lzrbid. 13Cope1and, p. 98. 14C. M. Crawford, Sales Forecasting: Methods of Selected Firms (urbana, Illinois: The university of Illinois, 1955), pp. 28-30. 15Forecasting Sales, p. 31. 16R. G. Brown, Statistical Forecasting for Inventory Control (New YOrk: McGraw-Hill, Inc., 1959), p. 13. 17Ibid., p. 45. 18Ibid., pp. 46-51. 96 19R. G. Brown, Exponential Smoothing for Predicting Demand," Tenth National Meeting of ORSA (San Francisco, Nov. 16, 1956). 20Management Operating System: Forecastinge- Exponential Smoothing--De§ail (White Plains, N.Y.: Inter- national Business Machines Corporation), p. 17. 21Brown, Statistical Forecasting . . ., p. 66. 22A. A. Hirsch and M. C. Lovell, Sales Anticipations and Inventory Behavior (New York: John Wiley & Sons, Inc., 23J.‘W. Forrester, Industrial Dynamics (Cambridge, Mass.: The M.I.T. Press, Massachusetts Institute of Tech- nology: 1961), p. 411. 24R. W. Llewellyn, Fordyn:l An Industrial Dynamics Simulator (Raleigh, N.C.: North Carolina State University, 1965), pp. 6.40-6.44. 25M. Hamburg, Statistical Analysis for Decision Making (New York: Harcourt, Brace & World, Inc., 1970), p. 540. 26Ibid., p. 543. 27G. R. cooper and C. D. McGillem, Methods of Signal and System Analysis (New York: Holt, Rinehart and Winston, InC. ' 1967) ' pp. 90-93. 28T. Thornton, unpublished lecture notes (East Lansing, Michigan: Michigan State University, 1971). 29ForecastingSales, p. 34. 30Hamburg, p. 460. 31R. E. 
Frank, A. A. Kuehn, and w. F. Massy, Quantitative Techniques in Marketing Analysis (Homewood, Illinois: Richard D. Irwin, Inc., 1962), p. 85-93. 32J. Johnston, Econometric Methods (New Ycrk: MCGraw-Hin. Inc., 1963), pp. 179-199. 33Frank, Kuehn, and Massy, p. 87. 34Ibid. 351bid.. p. 89. 97 CHAPTER IV FORECASTING ACCURACY Introduction The critical nature of sales forecasting as the core of the planning process has been mentioned, as have been approaches for estimating future sales volumes.. In order to ensure sound forecasts, however, the firm needs bases for evaluating the reliability or accuracy of these projec- tions. Further, the assessment of the forecasting mechanism should be a continuous process: a model which is apprOpriate now may not be valid in future time periods. To validate a model with the historic data used to develOp it is surely an incestuous approach. The question of how to measure forecasting accuracy is not a simple one. Certain mathematical forecasting techniques have built-in measures to guide the user in their application. These specific measures are not univer- sally applicable. In this study only general techniques for determining the accuracy of any approach to forecasting are presented. 98 99 Hirsch and Lovell suggest several criteria Which can be used to appraise alternative measures of forecasting 1 accuracy: 1. It is obviously advantageous to work with a measure that may be regarded as an index of the cost to the firm of erroneous forecasts. 2. It is useful to have a measure of forecast accuracy that is independent of units of measurement so that the relative accuracy of forecasts by large and small firms will be comparable. 3. It is helpful, at least for certain purposes, if the yardstick of precision rates the fore- caster relative to the difficulty inherent in the type of series he is trying to forecast. 4. It is useful to judge the forecasts both with and without systemstic bias. The first rule implies that the magnitudes of the error levels are important if the impact of a given error is to be determined. The second guideline suggests the need for a dimensionless, relative measure of error. Third, the sales pattern of the firm must be analyzed. Finally, bias should be removable. The overall need is, then, for an absolute, relative, difficulty-weighted, and unbiased error measure. Apparently no one gauge will be able to satisfy these all-inclusive, even contradictory, criteria. The remainder of this chapter looks at accuracy measurement in terms of several perspectives. First, 100 common statistical techniques are presented. Next, a generalized turning-point approach is covered. Last, evalu- ation based on firm objectives is suggested in addition to the previous approaches. Statistical Accuracy Evaluation As might be suspected, there are several ways to measure forecasting accuracy with mathematical derivations. A listing of such methods would include the following:2'3 1. Standard deviation 2. Mean-square error 3. Mean absolute deviation 4. Correlation coefficient 5. Theil's inequality coefficient 6. Adaptations of previous measures. Each measure is now discussed in detail, highlighting the reasons for the possible use of each. Standard Deviation The standard deviation is an obvious choice as a yardstick for appraising forecasting accuracy. 
The general formula for computing the standard deviation of a sample is s = ZQY?§)2 n where s is the sample standard deviation, Y is an individual 101 observation, Tris the average of all such sample observa- tions, and n is the number of observations. For use in the context of a forecasting evaluant, each Y can be inter- preted as the difference between a forecasted and an actual sales value, Yf - Ya' 80'? is the mean of all such values in the sample and is equal to §;_:—§;. As is the case with other measures which follow, the standard deviation does not satisfy all of the criteria noted at the outset of this chapter. It is an absolute measure, so interfirm comparisons are not possible. The first point, suggesting an error index related to cost, is tested in Chapter VI for a combination of error measurement methods. No reading on the inherent complexity of the forecasted time series is supplied by the standard devia- tion. In general, this criterion is beyond the sCOpe of this research and is left to econometricians. Unfortun- ately, the standard deviation can hide consistent over- or underestimation, so it does not remove systematic bias. Statistical testing is possible by utilizing the variance, the square of the standard deviation. The F test permits comparison between forecasting approaches by com- paring the resulting variances in ratio form. 102 Mean-Square Error An alternative to the standard deviation is the mean-square error (MSE), which can be shown to be related to the standard deviation. An expansion and rearrangement of terms yields The first term on the right of the equality is the mean- square error. Since it is a component of the standard deviation, it is simpler to compute. The size of the nume bers involved when using the mean-square error is one draw— back of the technique: hence, comparisons between firms is not meaningful.5 Work has been done which indicates that the mean- square error may be prOportional to certain production and inventory costs.6 A more comprehensive model, such as LREPS. could be used to verify these claims. Because it squares an absolute difference, the mean-square error would flag any steady positive or negative forecasting mistakes. Mean Absolute Deviation The general form for the mean absolute difference is eLXl n where, according to convention, Y is the difference between 103 actual and forecasted sales. This approach ignores the sign of the difference and gives the average value of the unsigned forecasting error. This technique is the simplest so far, and it is the easiest and fastest one to calculate. However, comparisons between large and small firms are im- possible because the measure is absolute, not relative. Consistently bad estimates are pinpointed because of the absolute value. Correlation Coefficient There may be an association between predicted and actual changes in sales volumes. A correlation coefficient which measures the strength of this association can be com- puted. The formula r2 = 1 - sez/sa2 2 e 2 where r is the coefficient of determination, s is the is the variance of 2 variance of the forecast error, and sa2 7 is the same as the sales data. The interpretation of r for the coefficient of determination discussed in the previous chapter in the section dealing with regression and correlation analysis. The range of values possible for r2 is from zero to one. This gauge is a relative one, so it encourages matching among companies. If a firm consistently under- 104 estimates sales, it could receive a perfect score of unity. 
Thus, the correlation coefficient does not eliminate bias. Theil's Inequality Coefficient Theil develOped a test statistic which is bounded at the lower extreme by zero.8 The measure is U = fuss (Ya)2/n A perfect forecast yields a U of zero, while fore- casting error results in U values greater than zero. If 0:1, the same result could have been achieved by forecast- ing a value of no change from the previous actual value. If U is greater than one, the forecast of no change was better than the forecast used. The relative value is desirable for making compari- sons. It imposes a penalty for systematic linear bias be— cause of the incorporation of the mean-square in the rela- tionship.9 Adaptations of Previous Measures Hirsch and Lovell have suggested several transforma- tions and combinations of the aforementioned measures.10 The first such coefficient is E(Yé) 105 The value of this adaptation is in the very meaning- ful interpretation which can be given to values of r12. An r12 of one indicates perfect forecasting. A value of zero suggests the same results could have been realized by making the naive forecast of no change. Finally, negative values imply the forecaster is doing worse than could have been done with the naive estimate. Another possibility is r22 = 1 - MSE/saz. This coefficient compares the accomplishments of the forecaster with the root-mean-square-error obtained by a naive forecaster who consistently predicts the most recent average sales value as the estimate. Unless the actual data exhibit a trend, this coefficient is useless. It does show that the forecaster working with a series dis- playing a consistent trend is working with a simpler problem. Additional manipulations can be performed which reflect seasonal or cyclic fluctuations. If certain charac- teristics of the actual sales pattern are known or suspected, error measurement can be refined considerably. A general improvement can be made to correct all of these measures to relative form. This is to define the actual and forecasted values in relative terms. That is, let at be the actual relative change in sales and ft be the 106 forecasted relative change, both for period t. Then, at """ Ya,t ' Ya,t-l ft = Yf,t ' Ya,t-1 Ya,t-l Ya,t-1 where Y is actual sales in period t and Y is fore- a,t f,t casted sales for period t. This conversion is especially important for using the F test to compare variances for different forecasting periods. For example, comparing variances for the differ- ence between actual and forecasted sales in absolute terms for two different intervals, one being one day and another being three months, would probably be a waste of time. If average daily sales is 100 units and the range is :10%, then to forecast the average would be to err by no more than 10 units. With quarterly sales equal to 6,300 units (be- cause one quarter equals 63 days within the LREPS model) and the range being :10%, forecasting the average could result in an error of 630 units. The standard deviation of the forecast for the larger prediction interval would be much larger than for the one day interval. However, using the relative definition of the standard deviation would eliminate this noncomparability. If a and f are substituted for Ya and Yf, respec- tively, in the foregoing presentation, the adjustment for relativity is included. Thus, in the previous formulae Y 107 now becomes a - f instead of Ya - Yf. 
Nonmathematical Accuracy Evaluation The ways just presented to determine forecasting accuracy are ones which all firms and managers may not understand. They probably would feel comfortable in terms of peer group evaluation if they used such methods. In other words, if the benefits of using such evaluatory tech- niques are ignored, there may be pressure to use these methods "just because we should." This hypothetical situation, if it exists, is a sad one indeed. For persons who may be trapped by such circum- stances, a possible solution can be cited. A less sophisti- cated technique, at least in terms of the mathematics, is the analysis of turning points. One can argue that the critical element of sales forecasting is to determine the direction of change of sales volume.11 Given the direction, either a continuation Of the present or a change in trend (a turning point), any of the previous forecasting tech- niques could be used with an acceptable degree of precision. Accordingly, the "hit" prOportion on predicting turning points is a simple measure of forecasting prowess. An arbitrary "batting average" can be set as a minimum accept- able prOportion of correct directional estimates. If a 108 company is not anticipating turning points a high enough prOportion of the time, then the application and choice of technique should be re-evaluated. The value of correctly foreseeing turning points often is not appreciated by forecasters. A simple example dramatizes the crucial nature of such anticipations. Sup- pose a firm's sales volume is increasing by about 10% per time period with the range of the increase being between 8% and 12%” The length of the time period, the number of products involved, and the geographic region in question are not important for purposes of this illustration. Now suppose for next period the firm projects another 10% incre— mental increase in sales when, in fact, a 10%.decrease is realized. The net effect is an overestimate of 20%. This error is considerably larger than the historical i2% per period. If the firm had based its Operating plans on the tolerance level of 2%, then the 20% could prove to be disastrous. Another illustration highlights individual product studies. A firm might be able to forecast within an accept- able range for a product group. For simplicity imagine a grouping of 10 products. Suppose five products have been increasing in volume over time, and five others have been declining in sales. If the sales of each set of five were 109 comparable, a reversal by each would leave total sales generally unchanged. The internal changes would make a shambles of physical distribution performance. There would be stockouts, excess inventories, poor service, premium replenishment costs, and backorders. Thus far, only the prOportion of turning points correctly prognosticated has been mentioned. At least two statistical tests can be used to evaluate the firm's record of anticipating these directional switches. A logical first choice would be an application of the binomial test. A brief example shows the usefulness of this test. If the process by which a turning point is forecast involves some- one merely flipping a coin, then the probability of cor- rectly forecasting a turn when it occurs is 0.5. A compari- son of an actual z (or t) value with a critical 2 (or t) value based on the desired confidence level could be made. 
The actual value is computed from z (or t) = (p - 0.5)/f(.5)(.5)/n where p is the prOportion of turning points correctly esti- mated and n is the number of turning points or sample size. If n is sufficiently large, say, greater than 30, then the z statistic is appropriate: otherwise, t is used. The hypotheses and the corresponding acceptance and rejection regions depend upon the objectives of the researcher. 110 The runs test, used in conjunction with the binomial test, may be more enlightening. It is possible that the firm is forecasting only one-half of the turning points. Thus, the results of the binomial test would not be signifi- cant. If the prOportion of mistakes (or correct estimates) follows a nonrandom pattern, then the runs test would re— veal this. The runs test identifies the extreme cases of Jr- ‘5 “-273“! many or few runs (sequences of like events) in a sample. As an extreme case, with C indicating a correct turning point forecast and I denoting an incorrect one, imagine the following sample of 20 forecasts: CCCCCIIIIICCCCCIIIII The prOportion of correct projections is exactly 0.5, and the number of runs is four. The probability of observing four or fewer runs with two subsamples of 10 ele- ments each is 0.001. Thus, it is very unlikely that the above pattern is a chance occurrence. It should be pointed out that analysis of turning points is more apprOpriately applied to planning or long- term forecasting. When the forecasting interval is large, a directional error has a greater impact. Operating fore- casts made weekly can be updated to compensate for direc- tional errors. Annual forecasts for a planning horizon of several years had better be directionally correct: lll otherwise, they are worthless. Accuracy Evaluation by Objectives During the course of selecting and evaluating a forecasting model, a company can easily become involved with the details of the process and lose sight of broader objectives. Certainly the assumptions underlying the vari- ous techniques must be considered, and the planning horizon should be studied carefully. The forecast is not, however, an end in itself. The value of the forecast is measured by how much it contributes to the planning process. Should not forecasting accuracy be measured in a like manner? How is effective planning manifested? One gauge is the con— strained level of cost or profit. The constraints may com- plicate the problem considerably, but cost and profit are measureable accounting concepts. The LREPS model presents an Opportunity to examine the relationship between forecasting accuracy and physical distribution system costs. For each forecasting model used measures of forecasting error are collected. At the same time, physical distribution costs are monitored. Thus, cost and error are tracked simultaneously. If accuracy is truly reflected in the system objective, minimized con- strained physical distribution cost, then error and cost 112 profiles should be quite similar. The ideal forecasting mechanism is the one which yields minimum total costs. This concept can be generalized to any system utilizing a forecast as an input. To be safe, patterns Of cost and error should be reviewed for similarities. Possibly one system component of relatively larger size might not rely on the accuracy of the forecast: hence, little benefit would accrue to the firm which emphasizes increased precision in its estimates. Usually an insightful way to compare forecasting alternatives would be to compare respective system costs. 
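A minimal sketch of the turning-point bookkeeping described earlier in this section: it scores each period's forecast on direction, computes the hit proportion and the binomial z value, and counts runs of correct and incorrect calls. Comparison with critical values (and the runs-test probabilities) is left to published tables; the function name and scoring convention are assumptions for the illustration.

    from math import sqrt

    def turning_point_record(actual, forecast):
        """Score each period's forecast as correct ('C') or incorrect ('I')
        on the direction of change, then summarize the record."""
        def sign(x):
            return (x > 0) - (x < 0)

        record = []
        for t in range(1, len(actual)):
            actual_dir = sign(actual[t] - actual[t - 1])
            forecast_dir = sign(forecast[t] - actual[t - 1])
            record.append('C' if actual_dir == forecast_dir else 'I')

        n = len(record)
        p = record.count('C') / n
        z = (p - 0.5) / sqrt(0.25 / n)       # binomial test against coin flipping
        runs = 1 + sum(1 for i in range(1, n) if record[i] != record[i - 1])
        return p, z, runs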
Summary A forecasting model cannot be ignored once it is in Operation. It should be monitored regularly to assess its performance. Several approaches for carrying out this evaluation have been presented. Statistical analysis is quite pOpular, and many variations of standardized statis- tics have been developed. As an alternative, turning point analysis offers the less quantitative forecaster a means of evaluating performance. Regardless of the forecasting technique and error analysis employed, the objective of the forecast should not be overloOked. Forecasts provide, in the limit, valuable 113 input to the planning process. To a lesser extent, differ- ent segments of the firm need projections. A sOphisticated planning model, such as LREPS, can utilize a complex and accurate forecasting mechanism. On the other hand, for other uses a less complex forecasting model could probably sacrifice some accuracy, yet still be a valuable tool. 114 CHAPTER IV--FOOTNOTES 1Hirsch and Lovell, p. 36. 2;p;g., pp. 37-42. 3Brown, Statistical Forecasting . . ., pp. 89-94. 4M. Mendenhall, Introduction to Probabiliry and Statistics (Belmont, California: Wadsworth Publishing Company, Inc., 1968), pp. 39-40. 5 Brown, p. 90. 6c. c. Holt, F. Modigliani, J. F. muth, and H. A. Simon, Planning Production, Inventories, and Work Force (Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1960). 7Hirsch and Lovell, p. 38. 8H. Theil, Applied Economic Forecasting (Amsterdam: The North-Holland Publishing Co., 1966), pp. 26-32. 9Hirsch and Lovell, p. 38. loIbid., pp. 39-42. 11Erickson and Lewis, ch. 7. CHAPTER V RESEARCH METHODOLOGY Introduction The general form of the short-term build-up fore- casting model has been conceptualized and programmed. The research to this point has been aimed at fulfilling the first part of the overall research Objective, that of develOping a forecasting archetype. This chapter begins the presentation of a continuing illustration of how the general model can be applied in a particular situation, the remaining part of the overall objective. It should be noted that the data used in this example are disguised because of the competitive nature of the industry in which the sample firm Operates. First, the forecasting model is described in its entirety. Next, the researchable hypotheses which guided the experimentation are outlined. Finally, the research sequence used in carrying out the computer experimentation is presented. 115 116 The General Model The forecasting mechanism discussed is flexible in terms of three dimensions. In Chapter I it was suggested that the forecasting technique, the level of detail, and the prediction interval should be variables which can be assigned different values. The model was designed with this in mind. Each of these three dimensions can be varied as the situation dictates. In Chapter III exponential smoothing was chosen as the apprOpriate technique for short-term forecasting. The ZIP regional system and the broad range of possible tracked products found in LREPS allow the use of many techniques at the same time. However, only exponential smoothing was deemed suitable for this search. To maintain maximum flexi- bility, the smoothing constant was allowed to vary by product and by region. At one extreme, the same smoothing constant can be used for all products and regions. At the other extreme, each product area can be assigned a different smoothing constant. The second dimension, level of forecasting detail, is Operable at four levels. 
These are the firm, the product, the DU (ZIP Sectional Center), and the product-DU. More levels are theoretically possible, but were not modeled. One of the unique features of this model is that it 117 facilitates the develOpment of aggregate forecasts by accu- mulating lower-level estimates. For example, product fore- casts can be generated by summing all product-DU forecasts for that product. Alternatively, the forecast can be made at the product level, but product-DU projections would then be only arbitrary fractions of this product total. Even though the aggregate forecasts might be equal under the two approaches, the detailed forecasts under the build-up method should prove to be more accurate and, thus, more useful. Variations in the prediction interval are quite easy to achieve. The forecasting module within the LREPS model is simply called at different times during the simu- lation run. Frequent callings generate frequent forecasts, but for shorter periods, and vice versa. Again, different product-DU's can be assigned unequal forecasting intervals. The computer experimentation described in Chapter 'VI deals with these three dimensions as they relate to a specific company. The dimensions are interrelated to formulate a specific forecasting model. Researchableguestions and Hypotheses Several questions were devised which served to focus the research on specific tOpics. These questions are: l.' What are the "best" values for the smoothing 118 constants of the predictive equations for the sample data? These values can be determined through experimentation--evaluated on the basis of forecasting accuracy and physical distribution costs--or by examining previous research aimed at this problem. 2. What prediction interval (day, week, month, etc.) is most appropriate? What is the functional relationship between physical distribution costs and the size of the pre- diction interval? 3. What is the nature of the build-up function which provides the national forecast from the local forecasts? In other words, how much detail is required for a satisfactory forecast? 4. What is the relationship between physical dis- tribution costs and system service levels? Cost has been traditionally considered to be an increasing function (at an increasing rate) of service level. A set of testable research hypotheses can be de- rived from each question. From Question 1 a relationship ibetween.equation parameters (smoothing constant values) and cost can be hypothesized: 119 H : There is no association between smoothing constant values and physical distribution costs. H1: There is an association. Since both data sets are at least interval in nature, regression and correlation analysis is applicable. The smoothing constant can be treated as the independent variable and cost as the dependent variable. The signifi- cance of the sample correlation coefficient (r) can be learned by using the t statistic t = r/711-r2)/(n-2). A two-tailed test is in order with HD being rejected if the absolute value of the computed t is greater than the criti- cal t value. The following relationship between physical distri- bution costs and the size of the prediction interval is based on Question 2: H0: There is no association between the size of the prediction interval and physical distribution costs. H1: There is an association. .As for the previous question, the sample correlation coeffi- ¢:ient can be examined by the use of the t statistic. 
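The t test named in these hypotheses is easy to sketch. The routine below computes a sample correlation coefficient and the corresponding t value; the data arrays and names are placeholders, and the critical value must still be read from a t table for n - 2 degrees of freedom.

    from math import sqrt

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / sqrt(sxx * syy)

    def t_for_r(r, n):
        """t = r / sqrt((1 - r^2) / (n - 2)); compare with the two-tailed
        critical t value for n - 2 degrees of freedom."""
        return r / sqrt((1 - r ** 2) / (n - 2))

    # e.g., smoothing constants (independent) against cost per pound (dependent):
    #   r = pearson_r(constants, costs); t = t_for_r(r, len(constants))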
The emphasis of Question 3 is on the derivation and «comparison of alternative build-up functions. This can be done by comparing sample variances for the same forecasting 120 level for any two alternatives. The variances are in rela- tive terms. The hypotheses are: H : The variances for any ZIP area (DU) and product are equal for the two build-up functions. 0 H1: The variances are unequal. Additional hypotheses about other equivalent forecasting levels are formatted similarly. The ratio of the two variances being compared ex- hibits an F distribution. The test statistic is F = 812/822 where the 32's are the sample variances with 312 arbitrarily chosen as the larger value for convenience in comparing actual F values with tabled critical F values. H0 is rejected if the actual F value exceeds the critical value. Another set of comparisons relating to the selec- tion of an adequate level of detail is based on Ques- tion 3. Given the accuracy and associated physical dis- tribution cost at one level, does the next level of fore- casting detail offer a significant improvement? The sup- ;xosed relationships of the sample variances are: H0: The variances are equal for the two levels of forecasting detail. H1: The variances are unequal. Once more, the F test can be used as described for 121 the previous hypotheses. The relationship between physical distribution costs and forecasting accuracy can be hypothesized as a result of the preceding analyses: H - There is no association between forecasting error and physical distribution costs. H1: Higher levels of forecasting error are associated with higher levels of physical distribution costs. This set of hypotheses can be tested by computing the value of Spearman's rank correlation coefficient (r8). The nonparametric test is in order because several measures of error are reasonable, as can be seen from the discussion in Chapter IV. Thus, a weighted composite index of error measures can be displayed only in rank order. The computed r is based on 8 n n n E . - Z - Z . r = nifllxiyl (i=lx1)(i=lyl) S n n n n 2 2 2 2 n E x. - (2'. X-) n.£ Y -' (z Y.) [i=1 1 i-l 1 ][ i=1 1 i-l 1 ] In this instance x- refers to forecast error and Yi 3. :refers to physical distribution costs--both ranked ordi- 2na11y. The computed r is compared with the critical value (of re, based on sample size and significance level. Since ‘the hypothesized relationship is a direct one, the critical 122 r8 has a positive sign. If the computed r8 is greater than the critical r H0 is rejected. 8' The subject of Question 4 is the relationship be- tween physical distribution costs and service. The specific hypotheses are: H0: There is no association between physical distribution cost and service. H1: Higher physical distribution costs are associated with higher service levels. With service as the independent variable and cost as the dependent variable, regression and correlation analysis is applicable. Again, the t statistic in one-tailed form is suitable for the testing of these hypotheses. Research Sequence The questions are ordered to provide a sequence for the experimentation. The analyses of Questions 1, 2, and 3 were carried out concurrently. That is, smoothing constant values, prediction intervals, and build-up functions were derived simultaneously. Initial values were selected on 1 and others. Changes were the basis of research by Packer Inade to yield different combinations of these three factors. 'The result was a convergence on an Optimum combination for the selected alternatives. 
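The two remaining test statistics can be sketched the same way. The Spearman routine below uses the familiar 1 - 6Σd²/(n(n² - 1)) form, which is equivalent to applying the correlation formula to the ranks when there are no ties; tie handling is ignored here for simplicity, and the critical F and rank-correlation values come from tables.

    def f_ratio(var1, var2):
        """F statistic for comparing two sample variances; by convention the
        larger variance goes in the numerator."""
        return max(var1, var2) / min(var1, var2)

    def spearman_rs(x, y):
        """Spearman's rank correlation (no ties assumed)."""
        def ranks(v):
            order = sorted(range(len(v)), key=lambda i: v[i])
            r = [0] * len(v)
            for rank, i in enumerate(order, start=1):
                r[i] = rank
            return r

        rx, ry = ranks(x), ranks(y)
        n = len(x)
        d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d_sq / (n * (n ** 2 - 1))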
This phase of the research is 3presented in Chapter VI. 123 The second set of experiments was designed to iso- late the effect of physical distribution service on costs. Since the structure of the components of total cost could also have changed, these costs components were traced too. The results of these experiments are covered in Chapter VII. 124 CHAPTER V--FOOTNOTES 1A. H. Packer, "Simulation and Adaptive Forecasting as Applied to Inventory Control," Operations Research (July, 1967). CHAPTER VI DEVELOPMENT OF THE DETAILED FORECASTING MODEL Introduction The general short-run forecasting mechanism with exponential smoothing as the selected technique serves as the framework for the detailed sales forecasting model. Specific levels for the three dimensions of prediction interval, level of detail, and smoothing constant are chosen for consideration. Various combinations are tested to (determine a heuristically Optimum set. Since an infinite number of combinations are possible, only a limited number are examined. The following relationships are studied in this chapter: 1. The relationship between the exPonential smoothing constant and physical distribu- tion costs. 2. The relationship between the prediction interval and physical distribution costs. 3. The relationship between levels of fore- casting detail. 125 126 4. The relationship between forecasting accu- racy and physical distribution costs. For all experimentation the same pattern of actual simulated sales is used. The daily sales total is con- sidered to be increasing by a constant amount (rectilinear trend). Variation in sales is achieved by assuming sales values to be normally distributed about this trend line. These random fluctuations represent chance variations in daily sales. Even if a firm experiences rigidly increasing sales (a strict linear trend), day-to-day variations are likely to occur. Each day the trend value for sales is adjusted by the product of a normal random deviate times the standard deviation of daily sales. To simplify experimentation the LREPS structure has been condensed. Instead of examining many products and 390 Demand Units (DU's), 10 sample products and 31 DU's are utilized. This represents a partial product line and a .region of the country. This greatly reduces computational (:ost and time, while still facilitating the testing of the build-up concept . Smoothing Constant and Prediction Interval Experimentation Smoothing constant and prediction interval values are handled concurrently because, as stated in Chapter II, 127 they interrelate as variables in the planning process. The choice of values for one of these variables affects the range of choices for the remaining one. Several writers have researched the range of appro- priate smoothing constant values.l'2 Brown notes that a value of 0.01 results in a sluggish and nonresponsive model, while 0.5 is a volatile and overly sensitive choice. He suggests further that 0.10 is a satisfactory alternative to these extremes.3 For this research the following five values were selected: 0.01 0.05 0.10 0.30 0.50 The choice of a prediction interval is more diffi- cult. Since the overall period of analysis is one year, this is the maximum value. Such a choice would not facili- tate the initialization needed to allow the exponential smoothing model to function prOperly. For this reason the lengthiest interval examined is three months. Even this selection results in only four forecasted periods, a rela- tively small number of observations. 
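The assumed sales process and the smoothing technique just described can be sketched in a few lines of Python. The base level, trend slope, and standard deviation of daily sales below are arbitrary illustrative values rather than the sponsor's data; the recursion is simple exponential smoothing, new forecast = α(actual) + (1 - α)(previous forecast), evaluated for the five tested constants.

import numpy as np

rng = np.random.default_rng(seed=1972)

# Hypothetical actual-sales generator: rectilinear (linear) trend plus
# normally distributed daily fluctuations, as assumed in the experiments.
base, slope, sigma = 1000.0, 2.0, 60.0       # illustrative units/day values
days = np.arange(260)                        # roughly one business year
actual = base + slope * days + rng.normal(0.0, sigma, size=days.size)

def exp_smooth(series, alpha, initial):
    """Simple exponential smoothing: F(t+1) = alpha*A(t) + (1 - alpha)*F(t)."""
    forecasts = [initial]
    for a in series[:-1]:
        forecasts.append(alpha * a + (1 - alpha) * forecasts[-1])
    return np.array(forecasts)

for alpha in (0.01, 0.05, 0.10, 0.30, 0.50):     # the five tested constants
    fc = exp_smooth(actual, alpha, initial=actual[:5].mean())
    rel_var = np.var((actual - fc) / actual)     # variance of the relative error
    print(f"alpha = {alpha:4.2f}  relative error variance = {rel_var:.5f}")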
The LREPS model has been designed to process no events shorter than one day, so this is the lower bound on the prediction interval. Because of this design constraint, one week is the shortest selected interval. The prediction intervals used are as follows: 128 1 wk. 2 wks. 1 mo. 2 mos. 3 mos. There are 25 combinations of smoothing constant and prediction interval values, as shown in Table 6.1. For each combination variable physical distribution cost was collected. These data are presented in Tables 6.2 and 6.3 with fixed prediction interval and smoothing constant values, reSpectively. Variable physical distribution cost was selected for analysis because of the short-term nature of these experiments. Of the cost components within LREPS, including inbound transportation (Manufacturing Control Center to Demand Unit), outbound transportation (Distribution Center to Demand unit), throughput, ordering, facility, and inventory, only inbound transportation and inventory costs vary with service requirements in the short run. Higher service levels (delivery to more customers with less vari- ation) are achieved with the larger inventories and frequent replenishments (inbound shipments); therefore, cost is in- creased. The other components remain fixed for periods of one year or less. SmoothingyConstant Analysis The first set of research hypotheses tested related the smoothing constant to physical distribution costs: 129 TABLE 6.1.-~Combinations of Smoothing Constant and Prediction Interval Values Smoothing Prediction Experiment Constant Interval 11 0.01 1 wk. 12 0.01 2 wks. 13 0.01 1 mo. 14 0.01 2 mos. 15 0.01 3 mos. 21 0.05 1 wk. 22 0.05 2 wks. 23 0.05 1 mo. 24 0.05 2 mos. 25 0.05 3 mos. 31 0.10 1 wk. 32 0.10 2 wks. 33 0.10 1 mo. 34 0.10 2 mos. 35 0.10 3 mos. 41 0.30 1 wk. 42 0.30 2 wks. 43 0.30 1 mo. 44 0.30 2 mos. 45 0.30 3 mos. 51 0.50 1 wk. 52 0.50 2 wks. 53 0.50 1 mo. 54 0.50 2 mos. 55 0.50 3 mos. 130 TABLE 6.2-—Physica1 Distribution Cost as a Function of Smoothing Constant Prediction Smoothing Variable Physical Interval Constant Distribution Cost/1b. 1 wk. 0.01 $0.010553 0.05 0.011552 0.10 0.012318 0.30 0.011991 0.50 0.012170 2 wks. 0.01 0.010363 0.05 0.010854 0.10 0.010833 0.30 0.011105 0.50 0.011418 1 mo 0.01 0.010187 0.05 0.010320 0.10 0.010333 0.30 0.010466 0.50 0.010743 2 mos. 0.01 0.010197 0.05 0.010178 0.10 0.010359 0.30 0.010345 0.50 0.010430 3 mos. 0.01 0.009877 0.05 0.009962 0.10 0.009860 0.30 0.010088 0.50 0.010165 131 TABLE 6.3.-—Physical Distribution Cost as a Function of Prediction Interval Smoothing Prediction Variable Physical Constant Interval Distribution Cost/lb. 0.01 1 wk. $0.010553 2 wks. 0.010363 1 mo. 0.010187 2 mos. 0.010197 3 mos. 0.009877 0.05 1 wk. 0.011552 2 wks. 0.010854 1 mo. 0.010320 2 mos. 0.010178 3 mos. 0.009962 0.10 1 wk. 0.012318 2 wks. 0.010833 1 mo. 0.010333 2 mos. 0.010359 3 mos. 0.009860 0.30 1 wk. 0.011991 2 wks. 0.011105 1 mo. 0.010466 2 mos. 0.010345 3 mos. 0.010088 0.50 1 wk. 0.012170 2 wks. 0.011418 1 mo. 0.010743 2 mos. 0.010430 3 mos. 0.010165 ‘AI ,9 . A“. wavy-4n... ; ‘ . 132 H : There is no association between smoothing constant values and physical distribution costs. H1: There is an association. For each set of research hypotheses there are cor- responding statistical hypotheses, based on the population parameters in question. For the above hypotheses, the sample correlation coefficient, r, expresses the strength of the relationship between the two variables. The smooth- ing constant is treated as the independent variable in the analysis. 
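As an aside on the experimental design, the 25 combinations of Table 6.1 can be enumerated mechanically. The sketch below (Python, for illustration only) reproduces the two-digit experiment codes, in which the first digit indexes the smoothing constant and the second the prediction interval.

from itertools import product

smoothing_constants = [0.01, 0.05, 0.10, 0.30, 0.50]
prediction_intervals = ["1 wk.", "2 wks.", "1 mo.", "2 mos.", "3 mos."]

# Experiment code: first digit = smoothing-constant index, second digit =
# prediction-interval index, matching Table 6.1 (11, 12, ..., 55).
experiments = {}
for (i, alpha), (j, interval) in product(
        enumerate(smoothing_constants, start=1),
        enumerate(prediction_intervals, start=1)):
    experiments[10 * i + j] = (alpha, interval)

for code in sorted(experiments):
    alpha, interval = experiments[code]
    print(f"experiment {code}: alpha = {alpha:.2f}, interval = {interval}")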
The statistical hypotheses about the population correlation coefficient, ρ, are these:

H0: ρ = 0.
H1: ρ ≠ 0.

This test was conducted at a .10 level of significance. The t statistic is appropriate, with the actual t value computed from t = r/√[(1 - r²)/(n - 2)], where n is the sample size. In this analysis physical distribution costs were regressed on smoothing constant values with the prediction interval held constant. Five samples, each containing five data pairs, result from such an approach.

The structure of the hypotheses implies a two-tailed test. The critical t value for three (n - 2) degrees of freedom at the .10 level is 2.353. The resulting decision rule is: if |t| > 2.353, reject H0 and accept H1; otherwise, accept H0.

The results of this group of tests appear in Table 6.4. The data were fit to the following four general line forms using the least-squares criterion:

1. Y = a + bX
2. Y = a + b(log X)
3. Y = a(b)^X
4. Y = a(X)^b

[Table 6.4.--Regression of Physical Distribution Costs (Y) on Smoothing Constant Values (X) with the Prediction Interval Fixed. Columns: fixed prediction interval, general equation form, sample correlation coefficient, computed t, critical t, decision; footnotes flag coefficients significant beyond the .05 and .01 levels. The tabled values are illegible in the source scan.]

There is substantial evidence to suggest a direct relationship between smoothing constant values and variable physical distribution costs. With sales following an increasing trend pattern, variations are due mainly to chance fluctuations. Larger smoothing constant values tend to exaggerate the effect of a random fluctuation and would trigger a system over-reaction, such as a large replenishment order to cover a sudden increase in sales. Smaller smoothing constant values overlook this "noise" and smoothly increase the forecast to match the sales growth.

Other than that it is a direct one, the exact relationship is difficult to describe. Equation four seems to be the best overall choice for all five data groupings; however, the simple linear trend line, equation one, reflects a strong association for four of the five samples.

It should be emphasized that this is an aggregate relationship because the same smoothing constant was applied to all products in a given experiment. Since all products were assigned a similar sales pattern, the overall pattern proved to be an accurate reflection of the individual product patterns. Relative sales amounts did, however, vary considerably among products. This did not pose a problem because there was no apparent relationship between volume and smoothing constant. The only problem was an artificial one, created by the use of the simulation process. Only a fraction of any product's sales can be simulated during an experiment. Extrapolation is used to attain total sales. A product with a relatively small sales volume is difficult to simulate accurately. The simulated volume for slow-moving items could possibly be too light to forecast within this model. This parallels the real-world problem of inactive products; therefore, these products were maintained within LREPS in spite of this forecasting-volume problem.
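A sketch of the least-squares fitting of the four general line forms appears below, again using the one-week block of Table 6.2. Forms 2 through 4 are fit after the usual transformations (log X; log Y; log X and log Y); the printed correlations illustrate the procedure and are not claimed to reproduce Table 6.4 exactly.

import numpy as np

# One-week prediction-interval sample from Table 6.2.
alpha = np.array([0.01, 0.05, 0.10, 0.30, 0.50])
cost = np.array([0.010553, 0.011552, 0.012318, 0.011991, 0.012170])

def fit_line(x, y):
    """Least-squares fit of y = a + b*x; returns intercept, slope, and r.
    For forms 2-4 the intercept and slope are on the transformed scale."""
    b, a = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r

forms = {
    "1. Y = a + bX":       (alpha,         cost),
    "2. Y = a + b(log X)": (np.log(alpha), cost),
    "3. Y = a(b)^X":       (alpha,         np.log(cost)),   # log Y linear in X
    "4. Y = a(X)^b":       (np.log(alpha), np.log(cost)),   # log Y linear in log X
}

n = len(alpha)
for name, (x, y) in forms.items():
    a, b, r = fit_line(x, y)
    t = r / np.sqrt((1 - r ** 2) / (n - 2))
    print(f"{name:22s} r = {r:+.3f}  t = {t:+.3f}")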
Based on this analysis the firm with this product configuration and sales pattern should choose smoothing constant values in the 0.01 to 0.10 range. Higher values overexcite the physical distribution system, and these extreme reactions can only lead to higher cost.

Prediction Interval Analysis

The next set of hypotheses refers to the possible relationship between the prediction interval and physical distribution costs:

H0: There is no association between the size of the prediction interval and physical distribution costs.
H1: There is an association.

Again the sample correlation coefficient, r, can be tested to determine the strength of the relationship. The statistical hypotheses are as follows:

H0: ρ = 0.
H1: ρ ≠ 0.

As in the earlier experiments the t test is employed at the .10 level of significance. Five samples of five pairs of values for the prediction interval and variable physical distribution costs were analyzed for fixed values of the smoothing constant. The decision rule is identical to that presented in the previous section.

The experimental findings are shown in Table 6.5. The same four general equations were fit to the data. These results show an even stronger relationship than the previous ones, with several t values significant beyond the .05 level.

[Table 6.5.--Regression of Physical Distribution Costs (Y) on Prediction Interval Values (X) with the Smoothing Constant Fixed. Columns: fixed smoothing constant, general equation form, sample correlation coefficient, computed t, critical t, decision; footnotes flag coefficients significant beyond the .05, .02, and .01 levels. The tabled values are illegible in the source scan.]

The remaining experiments deal with the level of forecasting detail. The first of these determines the most appropriate forecasting model for a given level of detail by comparing two simulation runs. Experiment 26, the "best" function, uses for each product the smoothing constant and prediction interval that minimized that product's relative variance between forecasted and actual sales (the values appear in Table 6.6); it is compared with a run using a single smoothing constant and prediction interval for all tracked products. The relative variances of the forecast error are compared at each forecasting level with the F statistic described in Chapter V: if the computed F exceeds the critical F, reject H0; otherwise, accept H0. A .10 level of significance was used for all comparisons.

The "best" function (experiment 26) was compared with the simulation run utilizing a smoothing constant of 0.10 and a prediction interval of three months (experiment 35). This run is one of the lowest in terms of variable cost per unit, and its backorder percentage is comparable to that of the "best" function. In terms of service for a given unit cost, however, the "best" function offers the more desirable result.

The relative variances of the forecast error were compared for each product-DU in the two simulations. The results are shown in Table 6.7.

TABLE 6.7.--Summary of Product-DU F Tests

Experiment with          Significant
Larger Variance        Yes       No     Total
      26                51      113       164
      35                53       93       146
Total                  104      206       310

With 10 products and 31 DU's a total of 310 F ratios were computed. For 104 product-DU's the difference in relative variances is significant at the .10 level; however, each run has almost the same number of significant ratios.
In addition, twice as many of the F values are not significant at the tested level. The conclusion based on the product-DU evidence is that the two simulations seem to be equally effective in forecasting capability.

A test for the significance of the proportion of F tests favoring one or the other of the experiments offers corroborative evidence in favor of the above conclusion. If p is the proportion of larger F ratios for experiment 26, then the probability of observing 164 or more such F ratios is the probability that z (a standard normal deviate) is greater than (164 - 155)/√(310(.25)). This is equivalent to P(z > .902) = .1788, a value much larger than conventional significance levels of .05 or .10.

This data can be examined at the DU level or the product level, but the same conclusion is reached. For the DU level the results are summarized in Table 6.8.

TABLE 6.8.--Summary of DU F Tests

Experiment with          Significant
Larger Variance        Yes       No     Total
      26                10        3        13
      35                14        4        18
Total                   24        7        31

The proportion of significant F ratios is considerably higher for the DU level as compared to the product-DU level, but the total is fairly equally divided between the two experiments. The probability of observing 18 in one of two classes in a sample of 31 is considerably greater than .10 (about .18).

At the product level the data tend to favor experiment 26 as the better choice, but the results are not statistically significant. Experiment 35 has the larger variance for seven of the 10 products; however, the probability of this happening is .17, above the .10 level.

The only conclusive evidence is the difference in overall variance for the two experiments. The F value is 1.853, with the variance for experiment 35 being the larger of the two. This F value is significant well beyond the .02 level. This may appear to be in conflict with earlier findings, but there is a very good reason for such a result. A re-examination of the product-DU variances reveals that, while the two experiments have approximately the same proportion of large variances, experiment 35 generated many extremely large variances. For the 51 product-DU's that exhibited significantly larger variances from experiment 26, most were significant at the .10 level only. On the other hand, the 53 significant variances from experiment 35 were often significant well beyond the .02 or even .01 levels.

The variances must be compared at all four levels, the distribution center, the DU, the product, and the product-DU, to establish which experimental results are superior. Even this guarantees only an indication of statistical significance. Managerial significance must also be considered. Although experiment 35 is less effective than 26 in forecasting seven of the 10 tracked products and has a larger overall variance, these seven are ranked fourth to tenth in sales volume. Experiment 35 does predict the top three products in sales volume more accurately. The net effect is that each approach forecasts about one half of the total sales volume more effectively than the alternative. Again neither experiment can be picked as the better one.
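The proportion tests applied above at the product-DU, DU, and product levels are normal approximations to a binomial sign test: under H0 each experiment is equally likely to show the larger variance in any cell. A Python sketch follows; the exact binomial tail is printed alongside the normal approximation, and the resulting probabilities are of the same order as, though not identical to, the table-derived figures quoted in the text.

from math import sqrt
from scipy.stats import norm, binom

def sign_test(successes, n, p=0.5):
    """Normal approximation and exact binomial tail for observing
    `successes` or more larger-variance cells out of n under H0: p = 0.5."""
    mean, sd = n * p, sqrt(n * p * (1 - p))
    z = (successes - mean) / sd
    approx = 1 - norm.cdf(z)
    exact = binom.sf(successes - 1, n, p)   # P(X >= successes)
    return z, approx, exact

for label, successes, n in [("product-DU (Table 6.7)", 164, 310),
                            ("DU (Table 6.8)", 18, 31),
                            ("product", 7, 10)]:
    z, approx, exact = sign_test(successes, n)
    print(f"{label:25s} z = {z:5.2f}  normal tail = {approx:.3f}  exact = {exact:.3f}")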
Interlevel Detail Analysis

The research hypotheses for this stage of the study are:

H0: The variances are equal for the two levels of forecasting detail.
H1: The variances are unequal.

The variances are computed for the same level of detail, the product level; however, forecasts are generated at the product level for one alternative and at the product-DU level for the other. The F test can be used to compare the relative variances. The test statistic is

F = si1²/si2²  or  F = si2²/si1²,

depending upon which yields the larger F value. Here si1² is the variance for product i accumulated from all product-DU forecasts, and si2² is the variance based on product-level forecasts. The statistical hypotheses are these:

H0: σi1² = σi2².
H1: σi1² ≠ σi2².

If the computed F value exceeds the critical F value (based on sample sizes and the .10 significance level), then H0 is rejected; otherwise, it is accepted.

Experiment 26 is based on the smoothing constant and prediction interval values shown in Table 6.6. The forecasts are developed at the product-DU level. Experiment 27 is the same as 26 with one important exception: the forecasts are computed at the product level and allocated to the DU's on the basis of the weighted index for each DU. Experiment 26 is a build-up approach. The results are presented in Table 6.9.

TABLE 6.9.--Summary of Build-Up Breakdown Comparison

Product    Experiment with Larger Variance    Significant Difference
   1                   27                              no
   2                   26                              no
   3                   26                              no
   4                   27                              no
   5                   26                              no
   6                   27                              no
   7                   26                              no
   8                   27                              no
   9                   26                              no
  10                   27                              no

Since the products are ranked high to low by sales volume, clearly neither 26 nor 27 has an advantage. This position is further enhanced by the F value of 1.049 for the ratio of the two overall variances, easily an F value due to chance alone. It appears that for this data the product forecasts could be allocated to the DU's with little to no loss in forecasting accuracy as compared with a more detailed forecasting approach.

This conclusion is due primarily to the allocation of simulated actual sales to DU's on the same basis as forecasted sales, the DU weighted indexes. With different allocative bases experiment 26 would most likely be a considerably more accurate approach.

Cost-Accuracy Considerations

Determination of the specifications of the build-up function facilitates the analysis of the effect of forecasting accuracy on physical distribution costs. The related research hypotheses are:

H0: There is no association between forecasting error and physical distribution costs.
H1: Higher levels of forecasting error are associated with higher levels of physical distribution costs.

Spearman's rank correlation coefficient is used to test these hypotheses because the measure of error developed is an index or composite value. A gauge such as the variance is a useful estimate of error, but it overlooks systematic bias. A consistent 10% overestimate, for example, is not identified by the relative variance of the forecasting error. To overcome this deficiency, the variance can be used in conjunction with a measure sensitive to systematic bias, such as Theil's inequality coefficient.

Values for the relative variance and for Theil's coefficient were collected for each of the first 25 experiments detailed in Table 6.1. The 25 variances were ranked in order, from low to high, from one to 25. The same ranking was applied to values for Theil's coefficient. An index for each experiment was obtained by multiplying the ranking of each of the two measures by 0.5 and summing the two products. The equal weights were arbitrary, based on the assumption that the two error measures are equally important.
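The construction of the composite error index, and the rank correlation computed from it in the next section, can be sketched as follows. The forecast-error series and costs below are randomly generated placeholders, and Theil's inequality coefficient is taken in its common form (root-mean-square error divided by the sum of the root-mean-square levels of the forecast and actual series), which is an assumption here since Chapter IV's exact variant is not reproduced above.

import numpy as np
from scipy.stats import rankdata, spearmanr

def theil_u(actual, forecast):
    """Theil's inequality coefficient (classic form): RMS error divided by
    the sum of the RMS values of the actual and forecast series."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((forecast - actual) ** 2))
    return rmse / (np.sqrt(np.mean(forecast ** 2)) + np.sqrt(np.mean(actual ** 2)))

def relative_variance(actual, forecast):
    """Variance of the forecast error expressed relative to actual sales."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.var((actual - forecast) / actual)

# Hypothetical per-experiment error series and costs for 25 experiments.
rng = np.random.default_rng(0)
n_experiments, n_periods = 25, 52
actuals = 1000 + 50 * rng.random((n_experiments, n_periods))
forecasts = actuals * (1 + rng.normal(0, 0.05, size=actuals.shape))
costs = rng.uniform(0.0098, 0.0124, size=n_experiments)      # $/lb., illustrative

rel_var = np.array([relative_variance(a, f) for a, f in zip(actuals, forecasts)])
theil = np.array([theil_u(a, f) for a, f in zip(actuals, forecasts)])

# Equal-weighted composite index: 0.5 * rank(variance) + 0.5 * rank(Theil's U).
index = 0.5 * rankdata(rel_var) + 0.5 * rankdata(theil)

# Spearman's rank correlation between the error index and variable cost;
# compare the result with the tabled critical r_s for the chosen level.
r_s, _ = spearmanr(index, costs)
print(f"Spearman r_s = {r_s:.3f}")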
The values for the two error measures, their respective rankings, and the weighted indexes appear in Table 6.10. Variable physical distribution costs and rankings are also included in this table.

[Table 6.10.--Indexes of Forecasting Accuracy: for each of the 25 experiments, the relative variance and its rank, Theil's coefficient and its rank, the weighted composite index, and the variable physical distribution cost and its rank. The tabled values are illegible in the source scan.]

Spearman's rank correlation coefficient is computed from

rs = 1 - 6(Σdi²)/[n(n² - 1)]

where di is the difference between the paired ranks (here, the rank of the composite error index and the rank of variable physical distribution cost) and n is the number of pairs. Since the hypothesized relationship is a direct one, a one-tailed test is used, and the decision rule based on the tabled critical value of rs for 25 pairs is: if the computed rs > 0.400, reject H0; otherwise, accept H0.

The computed value of rs based on this data is 0.470, which is significant at the .05 level. H0 is rejected, and it is concluded that there is a direct relationship between variable physical distribution cost and forecasting error. While the relationship isn't strong, it is statistically significant, implying that the hypothesized relationship (H1) does exist. Inaccurate forecasts overstate inventory and result in higher inbound transportation costs to deliver the needed backorders. This is further proof of the importance of accurate forecasts. There are no automatic physical distribution system responses to compensate for the consequences of erroneous forecasts.

Summary

This chapter presented the application of the general forecasting model to a specific situation. Different smoothing constant values, prediction intervals, and levels of detail were examined for usefulness. The firm selling the 10 sample products in the specified area of the country should heed the following recommendations:

1. Select smoothing constant values from the 0.01 to 0.10 range.
2. Select prediction intervals from the one to two week range.
3. Forecast at the product level rather than the product-area level.

Finally, physical distribution costs were found to increase as forecasting error increases. This emphasizes the importance of forecasting precision.

CHAPTER VI--FOOTNOTES

1Brown, pp. 53-54.
2Hirsch and Lovell, pp. 141-169.
3Brown, p. 54.

CHAPTER VII

PHYSICAL DISTRIBUTION COST-SERVICE TRADEOFF

Introduction

The detailed forecasting model can be used in conjunction with other models or by itself to verify or to disprove theories in business. As noted earlier, however, the forecast serves as a vital input to many processes of the firm. Further experimentation with the model alone beyond validation would seem to be pointless. The other possibility, the merger of the forecasting model with a model of firm activity, is much more promising.
The LREPS model, as a representation of the distri— bution system, provides the backdrOp for studying proposi— tions heretofore tested with simple, incomplete models. Since the application of the systems concept to physical distribution, the relationship between system cost and the attendant customer service level has been under close scrutiny},2 This relationship is a natural one to be studied with such a comprehensive model as LREPS and is 156 157 considered in detail in this chapter. Traditional PrOpositions Before presenting the current thinking on the rela- tionship between cost and service, these two terms should be defined. Cost refers to total physical distribution cost, the sum of the component costs for facilities, inven- ink; o o a u o o a 3 tories, transportation, communication, and unitization. fix:- 1 ‘ ‘ Tradeoffs are allowed between and among these five com- ponents to achieve service levels at minimum cost. Since this is a short—term analysis, only those costs which vary during this time span are examined. This includes inven- tory and inbound transportation costs. In this analysis service refers to transport capa— bilities, rather than the broader marketing definition. Such aspects as condition of goods are assumed to be un- changed as service level is altered. Service is a two-fold concept, covering both speed and consistency of service. Speed and consistency are analogous to the mean and standard deviation of a prob— ability distribution, respectively. Speed refers to the average time needed to move an item between two points, normally a warehouse and a buyer. Consistency describes the variation in speed over a number of observed transfers. 158 To specify fully a service level, both speed and consistency must be defined. For example, a service objective may be to provide five-day delivery to at least 80% of the customers and seven-day delivery to all others. Taken alone, the five-day average time doesn't explain service. The last definitional problem is the functional relationship between cost and service. A given service {i level can be attained through many system configurations at 4 differing costs. The reference here is to the least—cost system for each level of service. The cost-service line, if plotted graphically, is the lower bound of the cost- service space, which represents all combinations of cost and service. Cost has been considered an increasing function of the service level at an increasing rate. Costs spiral up— ward as perfect service, delivery to 100% of the customers in the minimum possible time using the swiftest mode of transportation available from every inventory location, is neared. Magee has suggested that the effect of distribu- tion on sales is a function of location, inventories, and system responsiveness. He contends that to fill 95%.rather than 80% of orders from stock, a 15% change, is to increase inventory costs by about 80%.;4 An interesting hypothetical example was formulated by Hill showing that something less 159 than perfect service should be the goal. 
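Speed and consistency can be made concrete with a small sketch: given observed (or simulated) order-cycle times, speed is the sample mean, consistency the sample standard deviation, and a service objective can be checked as the fraction of orders delivered within the stated number of days. The order-cycle times below are illustrative values, not LREPS output.

import numpy as np

# Illustrative order-cycle times in days for a sample of customer orders.
cycle_times = np.array([4, 5, 5, 5, 6, 5, 4, 5, 6, 5, 5, 7, 5, 4, 5, 6, 5, 5, 5, 6])

speed = cycle_times.mean()                 # average transit time (speed)
consistency = cycle_times.std(ddof=1)      # variation in transit time (consistency)

# Service objective of the form "x-day delivery to at least p% of customers".
target_days, target_share = 5, 0.80
share_within_target = np.mean(cycle_times <= target_days)

print(f"speed = {speed:.2f} days, consistency = {consistency:.2f} days")
print(f"served within {target_days} days: {share_within_target:.0%} "
      f"(objective: {target_share:.0%})")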
The inordinately high cost of increased service levels would not be offset by the added product demand.5 Heskett, Ivie, and Glaskowsky relate alternative system configurations to after-logistics-costs profit to show that the best system in terms of service isn't neces- sarily the most profitable.6 This example is especially expressive of the viewpoint hOpefully adOpted by marketers: Service provided should also be a function of service ex- pected by the customer, not only be a function of attain— able service. Bowersox, Smykay, and LaLonde noted that the typical firm usually balances "reasonable" performance levels against "realistic" costs.7 The prohibitive nature of extremely high service forces the firm to back away from it. Eerrimental Results Service can be analyzed from another perspective, that of the customer order cycle. The buyer views the order cycle as consisting of five elements:8 1. Order initiation and dispatch time 2. Order transmission time 3. Order processing time . f “g“? n I 160 4. Order shipment time 5. Order receipt time. The supplier controls the order processing stage and exerts influence on the alternatives used in the order transmission and order shipment stages. These three stages are elements of the normal customer order cycle of LREPS. A fourth element, the stockout delay, is an additional LREPS feature which can be added to the normal cycle to r -2; . I find the total order cycle of the supplier. For the LREPS model variations in each of these four components determine what prOportion of customers are served within a specified time, as well as the statistical variance of the total order cycle. Order processing is based on Monte Carlo random variate functions. The following discrete probability distribution is used: Probability Order Processing Time 10% 0 days 60% 1 day 30% 2 days Any function describing reality can be used. If a distri- bution center is Operating above a specified volume, an additional one-day order processing delay is added to a percentage of the orders equal to the percentage of Opera- tions above the defined level.9 161 Order transmittal and outbound (from Distribution Center) transportation times were develOped in a similar fashion. Sets of three concentric circles were placed around each Distribution Center to indicate one, two, and three day service. If a Demand Unit fell within the inner- most circle, it would receive one-day service on average. If it was within circle two and outside circle one, it fl .r ;- 4.".-1: . :1. _ could expect two-day service. Communications rings can be interpreted identically. With many ring sets available each Distribution Center can be assigned different communi- cations and delivery rings. Variances about these average order transmittal and delivery times are formulated on the basis of several Monte Carlo functions, like the order pro- cessing functions. Each Distribution Center can achieve different consistency of service. Finally, the stockout delay is computed during the simulation cycle, not prior to execution, as are the previous elements of the order cycle. The stockout delay is the sum of out-of-stock days divided by the sum of out— Of-stock units. LREPS provides summaries of Distribution Center Performances after each experiment. More specifically, the following service measures are computed:10 1. Customer service penalty time (stockouts) 162 F 2. Mean and standard deviation of normal customer order cycle time 3. Mean and standard deviation of customer delivery time 4. 
Total customer order cycle time 5. Percent of case units backordered 6.‘ Mean and standard deviation of product stockout days ,n 7. Normal order cycle time prOportions t 8. Domestic average service time Rh 9. Average lead time for each DC—MCC link. An accurate measure of system service performance can be obtained by allowing inventories to be negative, if needed (in other words, no stockouts). Every time a poten- tial stockout occurs, the order is filled and backordered. The measure of performance is the percent of case units backordered. A high backorder percentage implies poor service. Improved service is realized at an increased cost: therefore, percent case units backordered should be an inverse measure of service achievement. For the cost-service experiments the normal order cycle time distribution was specified as follows: PrOportion Normal Order of Orders Cycle Time .13 4 days .70 5 days .17 6 days 163 The standard deviation of the normal order cycle was set at 0.8 days. The research hypotheses describing the cost—service interaction are as follows: HO: There is no association between physical distribution cost and service. H1: Higher physical distribution costs are associated with higher service levels. :1 Regression and correlation analysis is apprOpriate r.— “L”... I; because both sets of data are ratio in nature. Percent case units backordered is the independent variable, and cost is the dependent variable. Since high service levels imply low percentages of backorders, the statistical hypoth- eses about p, the population correlation coefficient de- scribing the relationship between cost and badkorders, are formulated as follows: HO: 0 = 0. H1: 0 < 0. The structure of the hypotheses implies a one- tailed t test with t = r//Q1-r2)/(n-2) where r is the sample correlation coefficient and n is the sample size. The sample data are 25 pairs of variable physical distribution cost-percent case units backordered values (Table 7.1). The critical t value for a one-tailed 164 TABLE 7.1.--Physical Distribution Cost-Service Values 2 Cases Variable Physical Experiment Backordered Distribution Cost/lb. 41 0.325 $0.01199l 51 0.350 0.012170 31 0.425 0.012318 21 0.500 0.011552 42 0.525 0.011105 52 0.600 0.011418 32 1.100 0.010833 22 1.275 0.010854 43 1.275 0.010466 53 1.500 0.010743 11 1.525 0.010533 12 2.000 0.010333 44 2.050 0.010345 54 2.150 0.010430 12 2.175 0.010363 13 2.350 0.010187 23 '2.375 0.010320 45 2.800 0.010088 55 2.825 0.010165 35 2.850 0.009860 34 3.125 0.010359 25 3.125 0.009962 15 3.575 0.009877 14 3.975 0.010197 24 4.200 0.010178 test at the .05 level of significance for 23 degrees of freedom is 1.714. The decision rule is If t < -l.7l4, reject HO; otherwise, accept Ho. The four general equation forms described in Chapter VI were fit to this data. The results of this analysis are shown in Table 7.2. All four t values are 165 TABLE 7.2.—-Cost-Service Regression Sample Correlation Equation Coefficient Computed t Critical t Decision 1 -.845808 -7.603 -1.714 reject H0 2 —.947242 -l4.l73 -l.7l4 reject H0 3 —.856195 -7.948 -1.714 reject H0 4 -.950914 ~14.737 -1.7l4 reject H0 t i _‘ .- significant well beyond the .05 level, suggesting a very strong inverse relationship between percent backorders and variable physical distribution costs. In addition, since the fourth equation provides the best fit (least unexplained variance), the relationship of cost to service is likely one which increases at an increasing rate. 
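The tests summarized in Table 7.2 can be reproduced in outline with the data of Table 7.1. The sketch below fits the linear form (equation one) and the power form (equation four, via a log-log transformation) and applies the one-tailed decision rule with the critical t of -1.714; the resulting correlations should correspond closely to the first and fourth rows of Table 7.2, subject to rounding.

import numpy as np

# Percent of case units backordered (X) and variable physical distribution
# cost per pound (Y) from Table 7.1.
backorders = np.array([0.325, 0.350, 0.425, 0.500, 0.525, 0.600, 1.100, 1.275,
                       1.275, 1.500, 1.525, 2.000, 2.050, 2.150, 2.175, 2.350,
                       2.375, 2.800, 2.825, 2.850, 3.125, 3.125, 3.575, 3.975, 4.200])
cost = np.array([0.011991, 0.012170, 0.012318, 0.011552, 0.011105, 0.011418,
                 0.010833, 0.010854, 0.010466, 0.010743, 0.010533, 0.010333,
                 0.010345, 0.010430, 0.010363, 0.010187, 0.010320, 0.010088,
                 0.010165, 0.009860, 0.010359, 0.009962, 0.009877, 0.010197,
                 0.010178])

def correlation_t(x, y):
    """Sample correlation and its t statistic, t = r / sqrt((1 - r^2)/(n - 2))."""
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    return r, r / np.sqrt((1 - r ** 2) / (n - 2))

# Equation 1: Y = a + bX          Equation 4: Y = a * X**b (log-log fit)
r_lin, t_lin = correlation_t(backorders, cost)
r_pow, t_pow = correlation_t(np.log(backorders), np.log(cost))

t_critical = -1.714   # one-tailed, .05 level, 23 degrees of freedom
for label, r, t in [("linear", r_lin, t_lin), ("power", r_pow, t_pow)]:
    decision = "reject H0" if t < t_critical else "accept H0"
    print(f"{label:6s} r = {r:+.4f}  t = {t:+.3f}  -> {decision}")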
These 25 cost values can be considered short-run minima for given service levels. Facilities are fixed during this time span, and transportation and communications networks are the most economical of the available alterna- tives. This cost-service relationship must be used in conjunction with the firm's overall profit function. The cost of transport service is only one of several components of the total profit picture. A systems perspective is 166 required to reach the Optimum profit position. For example, transport service has an impact on sales revenue. To pro- vide high service to the extent that the marginal cost of service exceeds the marginal revenue generated is not an economically sound policy. Summary A specific application of the forecasting-LREPS model has been presented in this chapter. The direct rela- tionship between physical distribution cost and service has been substantiated statistically using the sample data. Based on the different functional forms fit to the data, the cost—service relationship appears to be increasing at an increasing rate. if" - “ft“;I-Zc—il 167 CHAPTER VII--FOOTNOTES l . . . . D. J. Bowersox, "PhySical Distribution DevelOp- ment, Current Status and Potential," JOurnal of Marketing (January, 1969), pp. 63-70. 2Bowersox, Smykay, and LaLonde, ch. 1. 3. Ibid., ch. 5. 4J. F. Magee, "The Logistics of Distribution [W Systems," Harvard Business Review (July-August, 1960), p pp. 89-101. F k. 5"The Case for 90% Satisfaction," Business Week (January 14, 1961), pp. 82-85. 6J. L. Heskett, R. M. Ivie, and N. A. Glaskowsky, Business Logistics (New York: The Ronald Press Co., 1964) pp. 173-174. 7Bowersox, Smykay, and LaLonde, p. 116. 8D. McConaughy and C. J. Clawson (eds.), Business _ngistics--Policies and Decisions (Los Angeles: Research Institute for Business and Economics, University of Southern California, 1968), pp. 120-125. 9Helferich, pp. 179—180. 1°Ibid.. pp- 268-270. CHAPTER VIII SIMULATED SALES FORECASTING: FINDINGS AND IMPLICATIONS Introduction truer-:7: The overall purpose of this research has been to devise a way to deal with the measurement problem of evalu- ating a sales forecasting mechanism. Until now management has been forced to wait until forecasted time periods be- came history before assessing forecasting capability. The combination of a forecasting archetype with the LREPS model provides the firm with an approach for evaluating the fore- casting model before it is used. Only after-the-fact in- vestigation yields positive proof regarding forecasting accuracy, but the approach suggested in this dissertation reduces substantially the uncertainty involved in develOp- ing a forecasting system. This chapter first summarizes the results of apply- ing this new forecasting approach to sample data. As a result of this application, a generalized approach to Short-term sales forecasting can be suggested. Next, the 168 169 extension of the short—term to long-run applications is develOped. Finally, the research areas worth further in- vestigation are discussed. Summary of Experimental Results The forecasting model was constructed with flexi— bility along the three dimensions of forecasting technique, prediction interval, and level of detail. Exponential F smoothing was chosen as the specific forecasting technique. ha Experimentation was conducted with the sample data to deter- mine the apprOpriate levels for these three dimensions. The basis for evaluation was minimized physical distribu- tion costs, measured by LREPS. 
Since the period of interest was one year, only those costs which varied in the short- run were isolated. These include inventory and inbound transportation costs. The assumed sales pattern was that of an increasing linear trend with random fluctuations allowed to occur about the trend line. The linear assumption is actually more rigorous than a nonlinear assumption. With small sample sizes, as was the case with the data used in this research, curved lines often provide a better fit. If a straight line can be applied, stronger statements can be made regarding the functional relationship. It ‘.b’: i 170 Smoothing Constant Analysis Smoothing constant values (the parameter of the simple exponential smoothing formula) were regressed against variable physical distribution costs. The relation- ship was found to be a direct one with higher smoothing con— stant values associated with higher variable cost values. Sample correlation coefficients were found to be statis— tically significant, based on the t statistic, at the .10 level. Some coefficients were significant beyond the .01 level. The tested values for the smoothing constant were 0.01, 0.05, 0.10, 0.30, and 0.50. Based on this analysis the firm supplying the sample data should consider small smoothing constant values in the 0.01 to 0.10 range. Prediction Interval Analysis Similar regressions were performed on pairs of values for the prediction interval and physical distribu- tion cost. The values for the prediction interval which were examined were one week, two weeks, one month, two months, and three months. Exceptionally strong inverse relationships were discovered between the two variables. Most sample correlation coefficients were significant beyond the .05 level. It was recommended that the firm in question should utilize a very short prediction interval, 171 one or two weeks. A multiple correlation was carried out treating variable physical distribution cost as the dependent vari- able and the prediction interval and the smoothing constant as independent variables. A significant (r = .78) relation- ship was observed, although it was not as strong as those of the simple regressions. A larger sample encompassing many strong individual relationships which were not as strongly interrelated accounts for this. Level of Detail Analysis The forecasting archetype permits forecasting at four levels of detail: the Distribution Center, the product, the Demand Unit (DU), and the product—DU. Two types of experiments determined the most apprOpriate fore- casting model for a given level of detail and also the needed level of detail. For the first analysis two simulation runs were compared: (1) a model using the same smoothing constant and prediction interval for all tracked products and (2) a model using the smoothing constant and prediction interval for each product which minimized that product's relative variance between forecasted and actual sales. Forecasts were generated at the product-DU level, and comparisons 172 of the relative variances at all four forecasting levels for the two simulations were made using the F statistic. Only at the Distribution Center level was there a statis— tically significant difference (.10 level) favoring the second forecasting model. However, because the second model was less precise in forecasting the high volume products, no definite preference could be reached based on statistical evidence. The decision was left to management h to determine which products and market segments are vital to the firm's future. 
The model which served these fore- casting cells best is the probable choice. The second analysis compared two versions of the model which utilized the best smoothing constant and pre- diction interval values for each product. The first ver- sion forecasted at the product-DU level, while the second version generated product forecasts which were allocated to the DU's. The F test was used to compare product vari- ances for the two methods to see if the added detail of the product-DU approach was needed. No significant dif- ference was observed; therefore, the simpler model was just as effective as the more complex one for the sample data. 173 Effect of Forecasting Accuracy To determine the importance of forecasting accuracy to the Operation of the physical distribution system, an index of forecasting error was regressed against variable physical distribution cost. The index was composed of equal-weighted rankings of the relative forecasting vari- ance and Theil's inequality coefficient. The variance 3"“ TELL-m4 :: measured the consistency of the forecast error and Theil's coefficient pinpointed any steady over— or underestimated forecasts. Twenty-five pairs of values for the two vari- ables were gathered through simulation runs. Spearman's rank correlation coefficient was com- puted to be 0.470, significant at the .05 level. This tends to suggest a direct relationship between forecasting error and variable physical distribution costs for the sample data. A further contention is that this is likely to be a general relationship found within nearly all firms. Physical Distribution Cost-Service Relationship Experts have suggested for several years that physical distribution cost is an increasing function of service (and probably at an increasing rate). Since the LREPS model provides several measures of service as well as cost, this hypothesized relationship could be tested. 174 By allowing all orders to be filled regardless of the existing inventory levels (possible only in a simulated environment), an accurate measure of service was obtained. Each time an order would have caused a stockout, the order was filled and a backorder was placed. The measure of ser- vice develOped was percent of case units backordered, an inverse measure of service achievement. The regression of percent backorders against vari- able physical distribution cost yielded a statistically significant (beyond the .01 level) negative correlation coefficient. Cost is an inverse function of percent back- orders, implying a direct relationship between cost and service. Of the four mathematical relationships derived, the strongest was an exponential form. This means that the direct relationship is prObably at an increasing rate. This analysis lends strong support in favor of the tradi- tional prOpositions about the cost—service tradeoff. A General Approach to Short-Run Forecasting From the experimental results developed for this particular company, some general guidelines for organizing and implementing the short-term forecasting process can be stated. It should be pointed out again that forecasting is part, a central component, of the more comprehensive 175 management task of planning. Since the type is defined by the interval over which the plans are developed, short-term planning is necessarily a subset of long—range planning. As a result, short-term forecasting must be synchronized with long-term forecasting. 
The combination of several short-term forecasts should coincide with the overall long- term forecast for the total time span. Because this research is based on the build-up con— f; cept, this agreement is critical. Long-range projections are defined as the summation of shorter—term forecasts. If management has thoroughly organized the planning process, long-range objectives should imply short-term goals. This channel should flow in both directions; hence, short-range forecasts, as responses to short-term objectives, should aggregate over time to yield a suitable long-term forecast, in line with the long-range planning objectives. To operationalize the short-term forecasting pro- cess, the following steps should be taken: 1. Determine the precise use of the forecast. 2. Segment the market into homogeneous areas. 3. Develop similar product groups. 4. Collect required data. 5. Specify the three dimensions of the general short-term forecasting model. 176 6. Review model results regularly. 7. Check results with alternative method. Each step is discussed in terms of its content and its func- tion in the entire short-term forecasting process. Forecast Objective The initial step is to define exactly the purpose of the forecast. The position of the forecast in the overall planning scheme must be specified in order to complete the remaining steps. Many uses for sales forecasts were listed in Chapter II. In this research the forecast served to control inventory levels and replenishment orders. The marketing department may want to evaluate the impact of alternative short-term tactics, such as coupon campaigns or sales. Perhaps personnel is develOping manpower plans to cover peak-season production activity. Each department within the company can utilize these short-range estimates for its own purposes. Because departmental needs vary, the forecast needs depend somewhat upon the requesting department. The mar- keting example cited above suggests detailed product-market segment projections. The personnel example implies gross sales data along with standard production per man infor- mation, although a varied product line might have to be 177 forecasted in detail. A very explicit statement of forecast objectives often makes the remaining steps in the forecasting process self-evident. Data needs become obvious, as does the level of forecast detail. Considerable savings in man hours could result from extra effort at this initial stage. Market Segments The identification of market areas with similar characteristics helps to simplify the forecasting problem. The common features should, however, be transformable into action. This is important primarily to marketers. Charac- teristics which suggest marketing tactics are much more valuable than thOse which offer nothing beyond neat cate- gories. This latter Case is better than no classifying scheme whatsoever. These control units can be treated in a similar fashion for forecasting purposes. Perhaps the same forecasting techniques and equation parameters will be applicable. Another reason for segmenting is to isolate areas that are very difficult to forecast because of low sales volumes, unpredictable competitive action, or other reasons. These "bad" segments can be approached more intuitively, while the other areas can be analyzed with more conventional rwwmwmmwwa .. ‘i' v I Irl . D“ U.’ I '- O _c '0 7‘. .- ‘ '- ‘- l' - o . -- 178 techniques. It is conceivable that different departments might prefer unique market segments or control units for fore- casting. 
A uniform classifying system is desirable, but not at the cost of generating useless forecasts. This prob- lem is one which must be solved at the time it appears be— cause no universal set of categories currently exists. Product Groups Product groupings accomplish the same goal as market segments: simplified forecasting because of similar pat- terns of sales. Groupings should be made with this in mind, not on the basis of like product characteristics. Extremely unrelated products may exhibit like sales behavior. Again departmental differences may dictate several grouping con- figurations. For example, forecasts for inventory control purposes may not interest the financial manager who is evaluating product line performance. Data Collection The main emphasis of this step is on the shape of anticipated sales patterns and the availability of historic sales and associative data. This step interrelates con- siderably with the subsequent one, the general forecasting model dimensions: however, it is important enough to merit ‘I .. ‘\ a 7.. v 3., {L '3 -'. .0 H . - — v'l q '0‘- ' .1 .-. I - O s . . 179 separate attention. The sales pattern greatly affects the dimensions of the model. The specific technique utilized will depend up— on the regularity of fluctuations (perhaps the seasonal and cyclic components of the time series model). If random movements are prevalent, then maybe a very short prediction interval will be needed to enable quick adjustments to be made in the forecast. Historic data, as input to a data bank, also deter- mine which techniques are feasible. For example, time series analysis and regression and correlation analysis utilize considerable historic, and for the latter, assoc- iative, data. With product and market groupings outlined manage- ment has a framework for data collection. These classifi- cations can serve as common denominators or control units for the formatting and storing of data. The importance of a central data bank should be noted. Even though different departmental objectives probably cause separate short-run forecasts to be generated, these projections should ideally be drawn from a common source of information. By designing flexibility along the three dimensions discussed in the next section, management can derive different forecasts from the same data bank by specifying the control unit set I“ C . ,I. p ,‘Z Li :3- ,4 r i I. I- ‘ 3 F: 0 :5 ' r. f, 180 and the levels of these three dimensions. The only remain- ing problem is that of suitable report formatting, a rela- tively simple problem in comparison with those just men- tioned. General Forecasting Model Dimensions This is the stage in the forecasting process upon which this research is primarily focused. Three specific dimensions have been delineated: the forecasting technique, the prediction interval, and the level of detail. These dimensions are Operable within an overall framework of product-market area grids (forecasting cells). Management can prescribe the right dimensions levels by defining an objective function to be Optimized with these three dimen— sions as the independent variables. Simple regression analysis can be used to determine the nature of the relationship between the forecasting technique parameters and the dependent variable of the ob- jective function. A similar approach can be applied to the prediction interval. If possible, a multivariate relation- ship should be derived. 
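A multivariate relationship of the kind just suggested can be derived directly from the experimental grid. The sketch below regresses variable physical distribution cost on the smoothing constant and the prediction interval simultaneously, using the 25 observations of Tables 6.2 and 6.3; coding the interval in weeks is an assumption made here for illustration, so the multiple correlation printed need not equal the value of about .78 reported earlier in this chapter.

import numpy as np

# Variable physical distribution cost ($/lb.) from Table 6.2, one row per
# prediction interval (1 wk., 2 wks., 1 mo., 2 mos., 3 mos.) and one column
# per smoothing constant (0.01, 0.05, 0.10, 0.30, 0.50).
cost_table = np.array([
    [0.010553, 0.011552, 0.012318, 0.011991, 0.012170],
    [0.010363, 0.010854, 0.010833, 0.011105, 0.011418],
    [0.010187, 0.010320, 0.010333, 0.010466, 0.010743],
    [0.010197, 0.010178, 0.010359, 0.010345, 0.010430],
    [0.009877, 0.009962, 0.009860, 0.010088, 0.010165],
])
alphas = np.array([0.01, 0.05, 0.10, 0.30, 0.50])
weeks = np.array([1.0, 2.0, 4.33, 8.67, 13.0])   # interval length in weeks (assumed coding)

# Flatten into 25 observations and fit y = b0 + b1*alpha + b2*interval.
interval_col, alpha_col = np.meshgrid(weeks, alphas, indexing="ij")
X = np.column_stack([np.ones(25), alpha_col.ravel(), interval_col.ravel()])
y = cost_table.ravel()

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
multiple_r = np.sqrt(1 - ss_res / ss_tot)

print("b0, b1 (alpha), b2 (interval):", np.round(beta, 6))
print(f"multiple correlation R = {multiple_r:.3f}")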
If significant relationships are found, then the apprOpriate ranges for the technique para- meters and prediction interval can be specified. Level of detail analysis encompasses a re-examination 181 of the first two dimensions, as well as a third area for making model specifications. The former refers to the assignment of prediction interval values to the forecasting cells. The preceding discussion implies an aggregate level (same interval for all cells) study. Possibly more reliable forecasts would result from using several prediction inter- 3 vals for different products. This can be determined by I d comparing forecasting error (e.g., variance between fore- casted and actual sales, Theil's inequality coefficient, other statistics, or composite indexes of several measures) for different levels of the first two dimensions. The com- parisons are made at the same level of forecasting detail. Once the prediction interval and forecasting tech- nique parameters have been finalized, the overall level of detail can be determined (product-market segment, product, product group, market segment, etc.). The purpose of such comparisons is to eliminate spurious detail. If forecasts for individual products yield satisfactory results when compared with forecasts derived at the product-market seg- ment level, the additional forecasting level is of little to no marginal value. This dissertation has emphasized statistical analyses as the bases for choosing values for these three dimensions. While this is the recommended approach, less 182 sophisticated methods may be just as valid. Managerial judgment is vital to forecasting and should be given con— sideration. The prOper balance between judgmental and experimental findings is hard to attain, yet each of the two is necessary. Forecast Review Because the business environment is so dynamic, conditions can suddenly change, making the forecasting model obsolete. This is a rather extreme and unlikely possibility. More probably, competitive action will cause a forecast to be incorrect. Periodic reviews enable manage- ment to "retrac " the forecasting mechanism so that fore- casts are again within the tolerance limits. As the forecast is revised, perhaps different marketing plans should be constructed to anticipate the change. The purpose of the forecast is to provide useful input to the total planning process. To be useful, fore- casts must be timely and must reflect current conditions. Result Verification This step does not imply the develOpment of an alternative forecasting mechanism as elaborate as the first. Instead, this may be the time to apply managerial judgment and intuition. The forecast review helps to 183 realign the existing forecasting model, based on experience with that particular system. Verification offers a fresh viewpoint. Extensions to LongrRange Forecasting The transition from short- to long-range forecasting can be most easily described after contrasting short- (tactical) and long-range (strategic) planning. Steiner lists 15 points as bases for distinction.1 A comparison of strategic and tactical planning appears in Table 8.1. This comparison has a direct impact on the conver- sion of the short-term forecasting process, described in the preceding section, to cover long-range estimating. First, the specific use of the forecast is more difficult to state: nevertheless, this is still a crucial first step. Long-range goals, Objectives, policies, and strategies form the foundation for the forecast. 
The very nature of the planning process entails the formation of objectives and purposes; hence, this first step is less likely to be overlooked for long-range forecasting than for short-term forecasting.

TABLE 8.1.--Comparison of Strategic and Tactical Planning

Comparative Basis                          Tactical Planning         Strategic Planning
1. Level of conduct                        lower management          upper management
2. Regularity                              fixed schedule            continuous, but irregular
3. Subjective values                       lower weighted            higher weighted
4. Range of alternatives                   smaller                   greater, by definition
5. Uncertainty                             less                      more
6. Nature of problems                      structured, repetitive    unstructured, unique
7. Information needs                       narrower, internal        broader, external
8. Time horizons                           shorter                   longer
9. Completeness                            narrower focus            broader focus
10. Reference                              based on strategic plan   original
11. Detail                                 more                      less
12. Type of personnel involved             lower management          upper management
13. Ease of evaluation                     easier (short time)       harder (several years)
14. Development of objectives,
    policies, and strategies               historically based        new, flexible
15. Point of view                          functional view           corporate view

The market-product classification is likely to be less difficult for long-range forecasting. The corporate and broad nature of strategic planning will usually be concerned with large geographic regions and product groups or the entire line.

Because of the general emphasis of long-range forecasting, the data required are those needed to generate aggregate estimates. Perhaps annual or quarterly data are all that are needed. The time period for which the data are collected may be much larger. Long-range forecasting extends far into the future, making use of a strong historic foundation for extrapolation purposes. If the future is thought to be relatively independent of the past, then the extensive historic base is less important; however, the newer relationships must be based on valid and current information.

The difference in long- and short-term forecasting can be handled by the flexible forecasting archetype upon which this dissertation is based. The model can accommodate several ranges of alternatives by inputting many simulated actual sales patterns. Each pattern reflects a different set of assumptions; therefore, the model can be designed to anticipate the sales pattern thought to be the most likely to occur.

The model can be operated whenever a forecast is needed. Possible forecasts could be generated on a regular basis in addition to the "as needed" projections. The forecast interval can be defined for practically any feasible length. For long-range forecasting a one-year interval might be useful. Ten to fifteen, or even more, one-year periods could be forecasted consecutively to build the long-run forecast. An alternative approach is to continue short-term forecasting and accumulate the results in the same fashion. Both approaches could be used to develop forecasts for the same long-term period for comparative purposes (step 7--check results--in the forecasting process).

Further flexibility is provided by selecting different forecasting techniques. For this research exponential smoothing was the appropriate technique. Methods more suited for long-range forecasting, such as time series analysis and regression and correlation analysis, might be chosen. No matter which technique is selected, it can be applied at any level desired.
As an example of applying a technique at different levels, a complex multiple regression relationship might be derived utilizing the same equation parameters (with adjustments made for relative volume differences) for all products and market segments. At the other extreme, different regression equations could be formulated for each market and product. The aggregate nature of long-range forecasting lends itself more to the general approach, but either one is a viable alternative.

The remaining two steps in short-term forecasting are equally valid for long-term forecasting. The model should be reviewed periodically, and results should be checked using other methods.

In summary, this build-up forecasting model used along with LREPS has been designed to traverse the timing-detail separation between long- and short-term forecasting. The prediction interval, forecasting technique, and level of detail can be combined to forecast for tactical or strategic purposes. Combining these dimensions with alternative hypothetical sales patterns effects maximum forecasting flexibility. The other short-run forecasting steps are relevant for long-run forecasting as well. The real difference is in the depth and breadth of managerial perspective and approach.

Implications for Future Research

Expansion of this research into other directions appears promising. Additional research areas can be classified according to the following scheme:

1. Extensions of the current research problem
2. Tests of additional business-related hypotheses
3. Sophistications of the general forecasting mechanism
4. Alternative approaches to forecasting.

Examples from each of these four major areas are discussed below.

Extension of Current Research Problem

Several studies can be conducted with the sample data used for this dissertation. First, the assumed actual sales pattern could be altered to reflect changing environmental and marketing conditions. The only pattern tested was an increasing linear trend, subjected to random fluctuations. A seasonal pattern would be an obvious choice because many firms experience regular variations in sales throughout the course of a year. Another possibility is a decaying sales function. All such patterns would not be relevant at the same time, but such experimentation would prepare a company for the different stages in the product life cycle.

A second inventory location would introduce interactions not presently observed with the single-warehouse model. LREPS assigns Demand Units (DU's) to Distribution Centers on a priority basis. The second placement would allow orders resulting in stockouts at one location to be filled from the other site. This complicates the forecasting problem; however, it is an added dimension of reality.

More combinations of forecasting levels might also be attempted. This research presented a comparison of only two levels and no variation in the level of detail within a given experiment. The experimental evidence indicated that certain of the products could be forecast in the aggregate, while others might be handled more accurately with forecasts at each DU. Experiments using different combinations of these two levels could be compared, gradually approaching the precise model which minimizes forecasting error for these data. Overly zealous pursuit of this so-called accuracy should be avoided: since the simulated actual sales input is hypothetical, pinpoint forecasts would represent spurious accuracy.
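To make such level-of-detail comparisons concrete, the sketch below computes two of the error measures mentioned earlier in this chapter, the error variance and one common form of Theil's inequality coefficient, for a product-level forecast and for forecasts rolled up from the DU level. The data values, the particular form of Theil's coefficient, and the function names are assumptions made only for illustration.

```python
import math

# Minimal sketch: compare forecasting error at two levels of detail.
# Data, names, and the specific Theil coefficient form are illustrative assumptions.

def error_variance(forecast, actual):
    """Mean squared deviation between forecasted and actual sales."""
    n = len(actual)
    return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / n

def theil_u(forecast, actual):
    """One common form of Theil's inequality coefficient:
    0 indicates a perfect forecast; larger values indicate poorer forecasts."""
    n = len(actual)
    rmse = math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / n)
    denom = (math.sqrt(sum(f ** 2 for f in forecast) / n)
             + math.sqrt(sum(a ** 2 for a in actual) / n))
    return rmse / denom

# Hypothetical weekly sales for one product, forecast two ways.
actual           = [120, 135, 128, 140, 150, 145]
product_forecast = [118, 130, 131, 138, 147, 149]               # single product-level forecast
du_forecasts     = [[60, 57], [66, 67], [63, 66], [70, 69], [73, 75], [71, 76]]
du_summed        = [sum(week) for week in du_forecasts]          # DU forecasts rolled up

for label, fcst in (("product level", product_forecast), ("DU level, summed", du_summed)):
    print(label, error_variance(fcst, actual), theil_u(fcst, actual))
```

If the product-level errors are no worse than the rolled-up DU-level errors, the extra forecasting detail adds little marginal value, which is the kind of judgment these comparisons are meant to support.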
By increasing the simulated sales volume per unit of time, certain of the approximations inherent in the simulation process would be overcome. For example, certain products with high absolute sales volumes, but low volumes relative to other tracked products, might not appear to be active in several DU's. By increasing the volume per time period, the probability of the product appearing on the generated simulated invoices increases. It should be emphasized that this is a valid experimental approach. Increasing simulated sales decreases the proportion of extrapolated to simulated sales, which increases the reliability of the experimental results. As long as the total of actual simulated sales and extrapolated sales is realistic, the results are valid.

As a final improvement in experimental technique, sample sizes could be increased. Only five pairs of smoothing constant-cost and prediction interval-cost values were collected for each sample. The very significant relationships discovered tended to minimize the problem of sample size; nevertheless, larger samples generally are more reliable.

Not all of the steps suggested as desirable were taken, because of a cost-time constraint. Additional funding and several more months would have been required to probe thoroughly into all of these areas.

Tests of Additional Business-Related Hypotheses

To enumerate all of the possible areas of business which could be examined using the LREPS-forecasting mechanism as a starting point is an impossible task. The first step might be to consider additional forecasting-distribution system interactions. For example, in the short run only inbound transportation and inventory costs were found to vary with the quality of the forecast. Longer term analysis should result in more component variations within the physical distribution system. LREPS can be directed to add Distribution Centers to the national network according to predetermined decision rules. If the forecast is so high that an additional Distribution Center is "built" three to six months too soon, the cost of such an error could be measured directly. The best forecast in the long run might be the one that postpones Distribution Center additions as long as possible from a cost-service viewpoint.

This example leads to another area of examination: the interrelationship among forecasting (planning), physical distribution, and other business areas. A natural area would be finance, since it monitors all other activities within the firm.[2] Research has been done to establish relationships between financial variables and changes in the structure of the distribution system.[3] By adding the forecast as a causal factor in this changing logistics structure, forecasting and finance can be related. The impact of forecasting accuracy on the sources of funds is an example of a specific research project.

The forecast mechanism could be detached from the LREPS model and combined with another simulation of firm activities. A production system model, including the materials procurement network, would interrelate with the forecasting model. Eventually, if all such models could be connected, the firm itself could be analyzed. This last possibility is still several man-years of work away from occurring.

Sophistications of the General Forecasting Mechanism

The existing model can be improved in a number of areas. The first such improvement could be to make the three dimensions more dynamic.
With feedback linkages built into the model, regular monitoring of results could be achieved. For example, in this research the exponential smoothing approach could have been replaced by a form of dynamic smoothing. That is, the last period's forecast could be generated using several smoothing constants. The smoothing constant resulting in the most accurate forecast would then be used to generate the next forecast.

Variations on this basic theme could also be implemented. Some products could be forecasted using dynamic smoothing and several smoothing constants, while other products with stable sales patterns might use the existing approach. A two-step dynamic smoothing model could also be used. Four or five initial smoothing constant values could be used to narrow down the range of possible values. Given the best initial values, additional values within a smaller range might be tested. Gradually the most suitable value would be found.

Another method is to develop a warning system by checking forecasting accuracy regularly. When the forecasting error exceeds some predetermined level, the dynamic smoothing module could be activated. The current value for the smoothing constant is used until the error becomes unacceptable. This approach reduces computing time by executing the dynamic smoothing programs only when they are needed, not every time a forecast is generated. A sketch of this scheme appears at the end of this section.

The prediction interval can be analyzed dynamically. The forecasting error can be checked to guard against poor interval choices. Periodic regressions of the most recent data (dependent variable) against prediction interval values could be made to determine the general line form. If the relationship is not strong, adaptations in the interval can be initiated.

The appropriate level of detail can also be determined dynamically by checking the forecasting error. If the forecasts are not satisfactory, the next level of detail can become the new forecasting base.

A second sophistication that is warranted is the elimination of the homogeneity assumption about the DU's. Simulated actual sales are allocated to the DU's on the basis of each DU's relative population. Some variation is achieved by allowing DU populations to grow at different rates. By adding different allocation bases, a possibility already provided for within LREPS, DU's with unique features could be simulated. For the sample data analyzed in this research, population was a satisfactory DU descriptor; however, for other firms this may not be the case.

Additional flexibility could be attained by inputting simulated actual sales in another fashion, as actual sales dollars by product and DU, instead of allocating sales to these cells. This would eliminate some of the random simulation approximations referred to in earlier sections. Actual sales would be recorded precisely. The Order File Generator within LREPS could still be used to simulate the invoice detail required to meet sales specifications.

More and varied statistical analyses could be attempted to aid in the development of the forecasting model detail. Some of the measures of forecasting error discussed in Chapter IV, beyond the variance and Theil's inequality coefficient, could be examined through experimentation. Perhaps certain measures of error are useful only as a function of the actual sales pattern. Other variables within the model could be tracked and related to the prediction interval, equation parameters, and level of detail dimensions. For example, average inventory levels could be related to these model dimensions.
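As a concrete illustration of the dynamic smoothing and warning-system ideas described above, the following sketch forecasts with a single smoothing constant until the relative error exceeds a tolerance, and only then searches the candidate constants for the one that would have forecast the latest period most accurately. The candidate values, the tolerance, and the function names are assumptions for illustration; they are not the LREPS implementation.

```python
# Minimal sketch of dynamic smoothing with a warning-system trigger.
# Candidate constants, the error tolerance, and all names are assumptions.

CANDIDATE_ALPHAS = [0.01, 0.05, 0.10, 0.30, 0.50]
ERROR_TOLERANCE = 0.15            # relative error that triggers a new search

def smooth(history, alpha):
    """Simple exponential smoothing forecast for the next period."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

def best_alpha(history):
    """Pick the constant whose forecast of the latest period was most accurate."""
    past, latest = history[:-1], history[-1]
    return min(CANDIDATE_ALPHAS, key=lambda a: abs(smooth(past, a) - latest))

def next_forecast(history, current_alpha):
    """Forecast with the current constant; re-search only when the warning fires."""
    past, latest = history[:-1], history[-1]
    error = abs(smooth(past, current_alpha) - latest) / max(abs(latest), 1e-9)
    if error > ERROR_TOLERANCE:                # warning system: error unacceptable
        current_alpha = best_alpha(history)    # activate the dynamic smoothing search
    return smooth(history, current_alpha), current_alpha

# Example: roll the scheme forward over a hypothetical weekly sales series.
sales = [100, 104, 99, 110, 160, 158, 165, 170]
alpha = 0.05
for week in range(4, len(sales) + 1):
    forecast, alpha = next_forecast(sales[:week], alpha)
    print(week, round(forecast, 1), alpha)
```

The two-step refinement suggested above could be layered on by repeating the search over a narrower band around whichever constant wins the first pass.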
Alternative Approaches to Forecasting

This dissertation has focused on the joint usage of a forecasting framework and the LREPS model, including the Order File Generator. The forecasts were generated within the forecasting model, and simulated actual sales were disseminated throughout the geographic market area by the Order File Generator. Serving as the objective function to be optimized, the LREPS model reflected the reactions of the physical distribution system to different levels of forecasting accuracy. These accuracy levels resulted from different settings for the three dimensions of the forecasting module.

The LREPS model, taken alone, is an equally viable forecasting tool. Annual sales data can be inputted exogenously through the Supporting Data Subsystem. The mechanism which distributes sales dollars across products and territories can be used to forecast the future. Relationships between product category sales or product sales and selected independent variables can be derived for every Demand Unit (DU) from historic data. Detailed (by product and by DU) forecasts are easily obtained once the historic relationships are determined. The physical distribution system, as simulated by LREPS, can again be used as the objective function.

Seasonal and cyclic influences can be incorporated into this approach. The exogenous sales input can be adjusted by cyclic indexes to reflect general economic conditions. The seasonal variations can be anticipated by associating indexes with each day simulated by LREPS. Quarterly seasonal factors can be included by assigning the same index to each day in the quarter. More precision can be obtained by gradually changing the indexes on a day-to-day basis, as sketched at the end of this chapter.

Using LREPS as the forecasting mechanism eliminates having to design the three dimensions of the forecasting model developed in this dissertation. LREPS can be used directly to generate forecasts. The forecasting module developed through this research is easily "uncoupled" from LREPS to stand alone. The LREPS forecasting mechanism would be somewhat more difficult to use independently, although it is possible with some minor modifications of the model structure.

This alternative forecasting approach involves using LREPS as the primary forecasting instrument instead of as a controlled experimental environment. The conclusions reached in this dissertation could be validated by this other method. If the two approaches to forecasting resulted in similar forecasted values, more confidence could be placed in the estimates. Conversely, unlike forecasts would cause management to investigate the causes for divergence. An attractive area for additional study would be a statistical comparison of the results of these two forecasting methodologies.
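As a hedged illustration of the seasonal and cyclic index adjustment described above, the sketch below scales an exogenous daily sales figure by a cyclic index and a quarterly seasonal index, with optional day-to-day blending between quarterly values. The index values, the simplified 360-day year, and the function names are assumptions introduced only for illustration, not features of LREPS itself.

```python
# Minimal sketch: adjust exogenous daily sales by cyclic and seasonal indexes.
# Index values, the 360-day year simplification, and names are assumptions.

CYCLIC_INDEX = 1.03                              # general economic conditions (e.g., +3 percent)
QUARTERLY_SEASONAL = [0.90, 1.05, 0.85, 1.20]    # one index per quarter

def seasonal_index(day_of_year, days_per_year=360, interpolate=False):
    """Quarterly index for a given day; optionally blended day to day."""
    days_per_quarter = days_per_year // 4
    quarter = min(day_of_year // days_per_quarter, 3)
    if not interpolate:
        return QUARTERLY_SEASONAL[quarter]
    # Gradual day-to-day change: blend toward the next quarter's index.
    nxt = QUARTERLY_SEASONAL[(quarter + 1) % 4]
    fraction = (day_of_year % days_per_quarter) / days_per_quarter
    return QUARTERLY_SEASONAL[quarter] * (1 - fraction) + nxt * fraction

def adjusted_daily_sales(base_daily_sales, day_of_year, interpolate=False):
    """Exogenous daily sales scaled by the cyclic and seasonal indexes."""
    return base_daily_sales * CYCLIC_INDEX * seasonal_index(day_of_year, interpolate=interpolate)

# Example: the same base figure adjusted for a day in the first and the last quarter.
print(adjusted_daily_sales(1000.0, day_of_year=10))                      # first quarter
print(adjusted_daily_sales(1000.0, day_of_year=300, interpolate=True))   # fourth quarter, blended
```

Assigning the same index to every day in a quarter corresponds to the default branch; the interpolated branch corresponds to the more precise day-to-day adjustment mentioned above.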
CHAPTER VIII--FOOTNOTES

[1] Steiner, pp. 37-39.

[2] The idea for this potential research can be attributed primarily to Dr. Michael L. Lawrence, Assistant Professor of Finance, University of Missouri, Columbia.

[3] M. L. Lawrence, "Development of a Dynamic Simulation Model for Planning Physical Distribution Systems: The Financial Implications of Warehousing Decisions" (unpublished doctoral dissertation, Michigan State University, 1972).

SELECTED BIBLIOGRAPHY

Asimow, M. Introduction to Design. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962.

Ballou, R. H. Multi-Echelon Inventory Control for Interrelated and Vertically Integrated Firms. Ann Arbor, Michigan: University Microfilms, Inc., 1965.

Bass, F. M., et al. Mathematical Models and Methods in Marketing. Homewood, Illinois: Richard D. Irwin, Inc., 1961.

Bowersox, D. J., et al. Dynamic Simulation of Physical Distribution Systems. Monograph. East Lansing, Michigan: Division of Research, Michigan State University, forthcoming.

Bowersox, D. J. "Physical Distribution Development, Current Status and Potential." Journal of Marketing (January, 1969), pp. 63-70.

Bowersox, D. J. "Forces Influencing Finished Inventory Distribution." Readings in Business Logistics. Edited by McConaughy for The American Marketing Association. Homewood, Illinois: Richard D. Irwin, Inc., 1969, pp. 85-90.

Bowersox, D. J.; Smykay, E. W.; and LaLonde, B. J. Physical Distribution Management. New York: The Macmillan Company, 1968.

Bratt, E. C. Business Forecasting. New York: McGraw-Hill, Inc., 1963.

Bratt, E. C. Business Cycles and Forecasting. Homewood, Illinois: Richard D. Irwin, Inc., 1953.

Brown, R. G. Smoothing, Forecasting, and Prediction of Discrete Time Series. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963.

Brown, R. G. Statistical Forecasting for Inventory Control. New York: McGraw-Hill, Inc., 1959.

Brown, R. G. "Exponential Smoothing for Predicting Demand." Tenth Annual Meeting of ORSA. San Francisco (November 16, 1956).

Buchan, J., and Koenigsberg, E. Scientific Inventory Management. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963.

"The Case for 90% Satisfaction." Business Week (January 14, 1961), pp. 82-85.

Cooper, G. R., and McGillem, C. D. Methods of Signal and System Analysis. New York: Holt, Rinehart and Winston, Inc., 1967.

Copeland, B. R. Sales Forecasting for the Individual Business Enterprise. Ann Arbor, Michigan: University Microfilms, Inc., 1967.

Crawford, C. M. Sales Forecasting: Methods of Selected Firms. Urbana, Illinois: Bureau of Economic and Business Research, The University of Illinois, 1955.

Dauten, C. A., and Valentine, L. M. Business Cycles and Forecasting. Cincinnati: South-Western Publishing Co., 1968.

Dean, J. Managerial Economics. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1951.

Erickson, L. G., and Lewis, R. J. Forthcoming Publication. New York: McGraw-Hill, Inc., 1972.

Ezekiel, M., and Fox, K. A. Methods of Correlation and Regression Analysis. New York: John Wiley & Sons, Inc., 1959.

Fels, R., and Hinshaw, C. E. Forecasting and Recognizing Business Cycle Turning Points. New York: National Bureau of Economic Research, 1968.

Fetter, R. B., and Dalleck, W. C. Decision Models for Inventory Management. Homewood, Illinois: Richard D. Irwin, Inc., 1961.

Forecasting Sales. Studies in Business Policy, No. 106. New York: The National Industrial Conference Board, 1964.

Forrester, J. W. Industrial Dynamics. Cambridge, Mass.: The M.I.T. Press, 1961.

Frank, R. E.; Kuehn, A. A.; and Massy, W. F. Quantitative Techniques in Marketing Analysis. Homewood, Illinois: Richard D. Irwin, Inc., 1962.

Gilmour, P. "Development of a Dynamic Simulation Model for Planning Physical Distribution Systems: Validation." Unpublished Doctoral Dissertation, Michigan State University, 1971.

Hamburg, M. Statistical Analysis for Decision Making. New York: Harcourt, Brace & World, Inc., 1970.

Hanssmann, F. Operations Research in Production and Inventory Control. New York: John Wiley & Sons, Inc., 1962.

Hayes, R. H. "Statistical Estimation Problems in Inventory Control." Management Science, Vol. 15, No. 11. Providence: The Institute of Management Sciences, July, 1969.
Helferich, O. K. "Development of a Dynamic Simulation Model for Planning Physical Distribution Systems: Formulation of the Mathematical Model." Unpublished Doctoral Dissertation, Michigan State University, 1970.

Heskett, J. L.; Ivie, R. M.; and Glaskowsky, N. A. Business Logistics. New York: The Ronald Press Co., 1964.

Hirsch, A. A., and Lovell, M. C. Sales Anticipations and Inventory Behavior. New York: John Wiley & Sons, Inc., 1969.

Holt, C. C.; Modigliani, F.; Muth, J. F.; and Simon, H. A. Planning Production, Inventories, and Work Force. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1960.

Hummel, F. E. Market and Sales Potentials. New York: The Ronald Press Company, 1961.

Johnston, J. Econometric Methods. New York: McGraw-Hill, Inc., 1963.

Lawrence, M. L. "Development of a Dynamic Simulation Model for Planning Physical Distribution Systems: The Financial Implications of Warehousing Decisions." Unpublished Doctoral Dissertation, Michigan State University, 1971.

Lazer, W. "Sales Forecasting: Key to Integrated Management." Business Horizons, Vol. 2, No. 3 (Fall, 1959), pp. 61-67.

Llewellyn, R. W. Fordyn: An Industrial Dynamics Simulator. Raleigh, N.C.: North Carolina State University, 1965.

Magee, J. F. Physical Distribution Systems. New York: McGraw-Hill, Inc., 1967.

Magee, J. F. "The Logistics of Distribution Systems." Harvard Business Review (July-August, 1960), pp. 89-101.

Magee, J. F. Production Planning and Inventory Control. New York: McGraw-Hill, Inc., 1958.

Management Operating System: Forecasting, Materials Planning and Inventory Management--General. White Plains, N.Y.: International Business Machines Corporation.

Marien, E. J. "Development of a Dynamic Simulation Model for Planning Physical Distribution Systems: Formulation of the Computer Model." Unpublished Doctoral Dissertation, Michigan State University, 1970.

Marketing Definitions: A Glossary of Marketing Terms. Chicago: The American Marketing Association, 1970.

Marks, N. E., and Taylor, R. M. Marketing Logistics: Perspectives and Viewpoints. New York: John Wiley & Sons, Inc., 1967.

McConaughy, D., and Clawson, C. J. (eds.). Business Logistics--Policies and Decisions. Los Angeles: Research Institute for Business and Economics, University of Southern California, 1968.

McKinley, D. H.; Lee, M. G.; and Duffy, H. Forecasting Business Conditions. The American Bankers Association, 1965.

Mendenhall, W. Introduction to Probability and Statistics. Belmont, Cal.: Wadsworth Publishing Company, Inc., 1968.

Packer, A. H. "Simulation and Adaptive Forecasting as Applied to Inventory Control." Operations Research, Vol. 15 (August, 1967), pp. 660-679.

Sales Forecasting Practices: An Appraisal. Experiences in Marketing Management, No. 25. New York: The National Industrial Conference Board, 1970.

Siegel, S. Nonparametric Statistics. New York: McGraw-Hill, Inc., 1956.

Spencer, M. H.; Clark, C. G.; and Hoguet, P. W. Business and Economic Forecasting. Homewood, Illinois: Richard D. Irwin, Inc., 1961.

Stanton, W. J., and Buskirk, R. H. Management of the Sales Force. Homewood, Illinois: Richard D. Irwin, Inc., 1964.

Steiner, G. A. Top Management Planning. New York: The Macmillan Company, 1969.

Still, R. R., and Cundiff, E. W. Sales Management: Decisions, Policies, and Cases. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1958.

Taylor, T. C. (ed.). The Computer in Marketing. Collection from Sales Management, 1970.

Theil, H. Applied Economic Forecasting. Amsterdam: The North-Holland Publishing Co., 1966.

Thornton, T. Unpublished Lecture Notes. Michigan State University, 1971.
Veinott, A. F. "The Status of Mathematical Inventory Theory." Management Science, Vol. 12, No. 11. Providence: The Institute of Management Sciences, July, 1966.