FORECASTING CONSUMER ADOPTION OF TECHNOLOGICAL INNOVATION: choosing the appropriate diffusion models for new products and services before launch

By

Lance Cameron Gentry

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Marketing and Supply Chain Management

2003

ABSTRACT

FORECASTING CONSUMER ADOPTION OF TECHNOLOGICAL INNOVATION: choosing the appropriate diffusion models for new products and services before launch

By Lance Cameron Gentry

Within the vast literature on various forecasting models, there is consensus that no single diffusion model is best for every situation. Experts in the field have asked for studies to provide empirically based guidelines for recommending when various models should be used. This research investigates multiple diffusion models and provides recommendations for which diffusion models are appropriate for radical and really new products and services before the launch of the innovation. In addition, a forecasting classification grid is proposed.

Copyright by Lance Cameron Gentry 2003

DEDICATION

This dissertation is dedicated to my family, my committee, and the wonderful people in N370.

I greatly appreciate the extra effort my wife, Allison, has put forth to ensure I had uninterrupted time to work on this project. Luke, Charles, Leah - you are the primary reasons why I went back to school and pursued my doctorate. I hope and expect that the life of an academic will provide ample opportunity to get to know you even better as your mother and I lead you to adulthood.

Professor Calantone - Many thanks for your guidance over the last few years. I appreciated the opportunity to get to know you better and I am grateful you made time to serve as my chair.

Professor Droge - Thank you for your lessons on focus and your remarkable editing. I hope that I learn to teach my students with the same clarity that you bring to your classes.

Professor Bohlmann - I appreciate your suggestions. Your assistance provided reassurance when I had doubts about the output of some of the diffusion models.

Professor Page - You are a large part of the reason my research is aimed at the diffusion of innovation among consumers. Thank you for making the study of consumer behavior so interesting.

Professor Levine - Thank you for your willingness to serve on my committee even though the timing did not work out.

Professor Hoagland - Many thanks for your support, your stories, and the benefit of your vast experience in forecasting.

I also want to thank all of those in N370. You have made it easier to succeed over the last four years and I appreciate it.
ACKNOWLEDGMENTS

First and foremost, I would like to thank those at the Consumer Electronics Association. I was first introduced to you while serving at Philips and was favorably impressed by your professionalism and effectiveness. I am pleased that I was responsible for bringing Intel into your organization and believe this was a win-win scenario for both Intel and the CEA. And I greatly appreciate your willingness to give me access to your data for this study.

The people at the CEA have always been very friendly and helpful. I would especially like to thank Sean Wargo for his diligence and patience in this project. Despite the number of questions I asked about minutia (e.g., was the AVCO Cartrivision system included in the 1974 VCR data?), he managed to get back to me in a prompt and courteous manner.

I would also like to thank Richard Zwetchkenbaum (Intel), Angela Hearn (Satellite Broadcasting and Communications Association), and Ron Christianson (Cyber Telephone Museum) for their help in obtaining data for this project.

TABLE OF CONTENTS

LIST OF TABLES .......... viii
LIST OF FIGURES .......... x
LIST OF ABBREVIATIONS/DEFINITIONS .......... xi
Chapter 1 .......... 1
WHAT THIS RESEARCH WILL ACCOMPLISH .......... 1
The Research Problem .......... 1
Is Forecasting Part of Marketing? .......... 1
Why is This an Important Problem for Academics & Managers? .......... 3
The Research Questions .......... 4
The Research Context .......... 4
Data Sources .......... 5
Chapter 2 .......... 7
LITERATURE REVIEW .......... 7
Forecasting: Techniques and Methods .......... 7
Classification Schemes in the Literature .......... 8
Cetron and Ralph, 1971 - summary .......... 8
Cetron and Ralph, 1971 - strengths and weaknesses .......... 11
Martino, 1972 - summary .......... 11
Martino, 1972 - strengths and weaknesses .......... 13
Bright, 1978 - summary .......... 14
Bright, 1978 - strengths and weaknesses .......... 16
Armstrong, 1985 - summary .......... 17
Armstrong, 1985 - strengths and weaknesses .......... 20
Armstrong, 2001 - summary .......... 21
Armstrong, 2001 - strengths and weaknesses .......... 23
A Forecasting Typology is Proposed .......... 24
Predictions .......... 27
Intentions .......... 27
Conjoint Analysis .......... 28
Expert Opinion .......... 29
Scripts .......... 31
Role Playing .......... 31
Scenarios .......... 32
Correlations .......... 32
Extrapolation .......... 33
Analogies .......... 34
Neural Networks .......... 36
Models .......... 38
Expert Systems .......... 38
Econometric Forecasts .......... 42
Structural Models .......... 43
Summary of Literature Review .......... 44
Chapter 3 .......... 45
METHOD .......... 45
Hypotheses .......... 46
Data Sources .......... 48
Personal Computers (PCs) .......... 48
DBS Satellite Receivers .......... 50
CD Players .......... 51
Camcorders .......... 51
Projection Televisions (PTVs) .......... 52
VCRs .......... 52
Cordless Phones and Telephone Answering Devices (TADs) .......... 53
Revised Classification of Innovations .......... 54
Models .......... 55
Bass Model (B) .......... 57
Bass Model Variants .......... 57
Simple Logistic (SL) & Gompertz (G) .......... 58
Flexible Logistic (FLOG) - Box and Cox (BnC) .......... 59
Selected Models and Proposed Forecasting Classification Grid .......... 59
Verification of Models .......... 60
Process .......... 64
Curve Fitting .......... 64
Forecasting .......... 64
Hypotheses Testing (Quadrant Analysis) .......... 65
Chapter 4 .......... 66
RESULTS .......... 66
Potential Fit of Models .......... 66
Optimal Parameters .......... 68
Actual Fit of Models (Forecasting) .......... 70
Hypotheses Testing (Quadrant Analysis) .......... 79
Cell Testing (Hypotheses Testing) .......... 82
Quadrant Analysis without PCs .......... 83
Cell Testing (Hypotheses Testing) without PCs .......... 86
Chapter 5 .......... 87
DISCUSSION OF RESULTS .......... 87
Answering the Research Questions .......... 87
Lessons From the Hypotheses .......... 88
Models .......... 89
Bass Models .......... 89
Simple Logistic and Gompertz .......... 91
Box and Cox .......... 92
Contributions .......... 92
APPENDIX A .......... 97
Selecting Diffusion Models .......... 97
APPENDIX B .......... 104
Diffusion and The Generalized Bass Model .......... 104
Bibliography .......... 106
LIST OF TABLES

Table 1: Segmentation's Effectiveness in Forecasting .......... 21
Table 2: Summary of Forecasting Classification Schemes .......... 24
Table 3: Summary of Intentions Findings .......... 28
Table 4: Summary of Expert Opinion Findings .......... 30
Table 5: Summary of Role Playing Findings .......... 31
Table 6: Summary of Scenario Findings .......... 32
Table 7: Ranking of Data Sources for Extrapolation by Intended Use (Armstrong, 2001) .......... 33
Table 8: Summary of Extrapolation Findings .......... 34
Table 9: Cellular Analogy (Lenz, 1962) .......... 35
Table 10: Bisexual Reproduction Analogy (Lenz, 1962) .......... 35
Table 11: Summary of Neural Network Findings .......... 37
Table 12: Summary of Expert Systems vs. Judgmental Forecasts .......... 40
Table 13: Summary of Expert Systems vs. Econometric Forecasts .......... 42
Table 14: Summary of Econometric Findings .......... 42
Table 15: List of Growth Curve Models .......... 43
Table 16: Diffusion Models Initially Considered .......... 56
Table 17: Diffusion Models Used in Research .......... 57
Table 18: Minding p's and q's .......... 61
Table 19: Camcorder Diffusion with Lilien Adjustment .......... 62
Table 20: Model Abbreviations .......... 66
Table 21: Curve Fitting Results .......... 66
Table 22: Curve Fitting - Comparative Placement .......... 67
Table 23: Curve Fitting - Optimized Parameters for B Model .......... 68
Table 24: Curve Fitting - Optimized Parameters for Bv Model .......... 68
Table 25: Curve Fitting - Optimized Parameters for GB Model .......... 68
Table 26: Curve Fitting - Optimized Parameters for GBv Model .......... 69
Table 27: Curve Fitting - Optimized Parameters for SL Model .......... 69
Table 28: Curve Fitting - Optimized Parameters for G Model .......... 69
Table 29: Curve Fitting - Optimized Parameters for BnC Model .......... 70
Table 30: Personal Computer Forecasting Results .......... 71
Table 31: PC Forecasts - Comparative Results .......... 71
Table 32: DBS Satellite Receiver Forecasting Results .......... 72
Table 33: DBS Satellite Receiver Forecasts - Comparative Results .......... 72
Table 34: CD Player Forecasting Results .......... 73
Table 35: CD Player Forecasts - Comparative Results .......... 73
Table 36: Camcorder Forecasting Results .......... 74
Table 37: Camcorder Forecasts - Comparative Results .......... 74
Table 38: Projection Television Forecasting Results .......... 75
Table 39: PTV Forecasts - Comparative Results .......... 75
Table 40: Video Cassette Recorder Forecasting Results .......... 76
Table 41: VCR Forecasts - Comparative Results .......... 76
Table 42: Cordless Phone Forecasting Results .......... 77
Table 43: Cordless Phone Forecasts - Comparative Results .......... 77
Table 44: Telephone Answering Device Forecasting Results .......... 78
Table 45: TAD Forecasts - Comparative Results .......... 78
Table 46: All Eight Innovations - Comparative Results .......... 79
Table 47: Radical Innovations - Comparative Results .......... 79
Table 48: Radical/High Priced Innovations - Comparative Results .......... 79
Table 49: Radical/Low Priced Innovations - Comparative Results .......... 80
Table 50: Really New Innovations - Comparative Results .......... 80
Table 51: Really New/High Priced Innovations - Comparative Results .......... 80
Table 52: Really New/Low Priced Innovations - Comparative Results .......... 81
Table 53: High Priced Innovations - Comparative Results .......... 81
Table 54: Low Priced Innovations - Comparative Results .......... 81
Table 55: Results of Cell Comparisons .......... 82
Table 56: All Seven Innovations - Comparative Results .......... 83
Table 57: Radical Innovations - Comparative Results .......... 83
Table 58: Radical/High Priced Innovations - Comparative Results .......... 84
Table 59: Radical/Low Priced Innovations - Comparative Results .......... 84
Table 60: Really New Innovations - Comparative Results .......... 84
Table 61: Really New/High Priced Innovations - Comparative Results .......... 85
Table 62: Really New/Low Priced Innovations - Comparative Results .......... 85
Table 63: High Priced Innovations - Comparative Results .......... 85
Table 64: Low Priced Innovations - Comparative Results .......... 86
Table 65: Results of Cell Comparisons .......... 86
Table 66: Watching p's and q's .......... 90
Table 67: Sum of Squared Errors for All Forecasts .......... 105

LIST OF FIGURES

Figure 1: Initial Classification of 8 Consumer Electronic Innovations .......... 5
Figure 2: Forecasting Methodology Tree (1985) .......... 18
Figure 3: Armstrong's Methodology Tree (2001) .......... 22
Figure 4: Forecasting Classification Grid .......... 26
Figure 5: Existing Forecasting Techniques and the Grid .......... 27
Figure 6: How Descriptive Terms (Same, Horizontal, Vertical, & Opposite) Are Used .......... 46
Figure 7: Revised Classification of 8 Consumer Electronic Innovations .......... 55
Figure 8: Classification of Research Models .......... 60
Figure 9: VCR Diffusion with Lilien Adjustment .......... 62
Figure 10: Cordless Phone Diffusion .......... 63
Figure 11: Recommended Models by Context .......... 88
Figure 12: Comparison of Gentry and Lilien et al. Diffusion Forecasts for VCRs .......... 94
Figure 13: Extended Comparison of Gentry and Lilien et al. Diffusion Forecasts for VCRs .......... 95
Figure 14: Initial Look at Logarithmic Parabola Model .......... 98
Figure 15: Initial Look at Modified Exponential Model .......... 99
Figure 16: Initial Look at Observation-Based Modified Exponential Model .......... 99
Figure 17: Initial Look at Bass Model .......... 100
Figure 18: Initial Look at Generalized Bass Model .......... 100
Figure 19: Initial Look at Simple Logistic Model .......... 101
Figure 20: Initial Look at Gompertz Model .......... 101
Figure 21: Initial Look at Extended Logistic Model .......... 102
Figure 22: Initial Look at Log-Logistic Model .......... 102
Figure 23: Initial Look at the Flexible Logistic Inverse Power Transform Model .......... 102
Figure 24: Initial Look at the Flexible Logistic Box and Cox Model .......... 103
Figure 25: Initial Look at the Flexible Logistic Exponential Model .......... 103

LIST OF ABBREVIATIONS/DEFINITIONS

Abbreviations

B - Bass model (see page 57)
BnC - Box and Cox model (see page 59)
Bv - Bass model variant (see page 57)
G - Gompertz model (see page 58)
GB - Generalized Bass model (Price) (see page 57)
GBv - Generalized Bass model (Price) variant (see page 57)
SL - Simple Logistic model (see page 58)

Definitions

Forecast - "a statement about a condition in the future, arrived at through a system of reasoning consciously applied by the forecaster and exposed to the recipient" (Bright, 1978).
Prediction - "a statement about the future based on rationale, if any, that the predictor has not made known" (Bright, 1978).
Radical Innovation - Radical innovations cause both macro-marketing and macro-technological disruptions (Garcia and Calantone, 2002).
Really New Innovation - Really new innovations cause either a macro-marketing or a macro-technological disruption (Garcia and Calantone, 2002).

Chapter 1
WHAT THIS RESEARCH WILL ACCOMPLISH

The Research Problem

How does one know when or if consumers will accept a technological innovation before the innovation hits the market? This research will evaluate techniques for forecasting consumer adoption of really new and radical technological innovations and develop a methodology for selecting the most appropriate techniques.
The focus is on the consumer adoption of a product or service itself, not on the success or failure of a particular firm (e.g., will high-definition televisions be adopted by most consumers, not will Philips capture 20% of the HDTV market).

Forecasting is used in many contexts, including predicting the weather, the economy, the advancement of technology, the effect of medicine on a patient, and even changes in fashion. A review and evaluation of the general forecasting methods is necessary to determine which tools are appropriate for forecasting the consumer demand for an innovation.

Is Forecasting Part of Marketing?

The core values of marketing state that consumer "welfare is the ultimate goal of all marketing activities" (Achrol and Kotler, 1999). Thus, forecasting would be part of the marketing process if the ability to forecast consumer adoption of a technological innovation benefits consumers.

Perhaps the most basic consumer benefits are the economic gains provided by forecasting. Forecasting improves both effectiveness and efficiency in the production of goods. Effective production - producing the right things - depends upon manufacturers knowing what to produce. If firms produce the wrong product (i.e., products that fail), resources are wasted and losses are imposed on the firm. Consumer welfare is reduced when the firm recoups this cost by increasing the price of successful products.

If a forecast dramatically underestimates the market, manufacturers may decide not to meet this need at all. For example, Univac pioneered commercial computers, but forecast a limited market potential for this innovation because they thought computers would only be used for scientific purposes. Based upon this assumption, Univac's market research predicted that there would be a total of a thousand computers in use in the year 2000. Initially they were not even concerned when IBM developed a computer platform designed for business applications (Schnaars, 1989).

Efficient production - producing things right given that the decision to produce has been made - depends upon manufacturers knowing how much to produce. If a forecast is too large, firms waste resources by over-investing in the new offering. Likewise, if the forecast underestimates demand, firms waste resources by catching up to the demand, and consumers pay more for the product and/or have to do without it for some time.

The more accurate the forecast, the more effectively and efficiently an innovation may be brought to market. The more effectively and efficiently an innovation is brought to market, the greater the consumer welfare. The greater the increase in consumer welfare, the greater the marketing contribution of forecasting.

Why is This an Important Problem for Academics & Managers?

Academics are in the business of creating knowledge. In other words, researchers exist to reduce uncertainty. Gerwin (1988) listed three types of uncertainty that he found useful in investigating technology: technical uncertainty, financial uncertainty, and social uncertainty. Forecasting consumer adoption of technological innovations - and assigning probabilities to these estimates - is a necessary part of evaluating the technological, business, and social implications of innovation.

Managers who can better understand the range of potential futures should be better prepared for whatever future occurs. As previously discussed, forecasts enable managers to more effectively and efficiently manufacture the right products in the right quantities.
In theory, firms with managers who better prepare their firms for these future needs should have a competitive advantage over firms whose managers did not foresee what might lie ahead. However, this presumes that the forecasts do not lead the managers astray. According to Hoagland (2001), false predictions of a Y2K disaster disrupted the supply chain as firms and individuals stocked up on inventories as insurance against the expected disruption. Hoagland's research led him to conclude that the actions taken to hedge against the predicted Y2K disruption actually caused the recent recession.

Academics and managers clearly need to know how much confidence they should have in a forecast. It is likely that the need for a higher level of confidence is related to the height of the barriers to entry and exit of a market. For example, the barriers to entering the suborbital tourism market are very high, as there are significant technological, regulatory, market, and capital issues to overcome. Before any reasonable firm would risk the vast amounts of resources needed to serve this market, it needs to be extremely confident that a viable market truly exists.

The Research Questions

RQ1. Which forecasting methods should be used for forecasting consumer adoption of radical technological innovations?

RQ2. Which forecasting methods should be used for forecasting consumer adoption of really new technological innovations? The answer to this question may be the same as RQ1, but this research may show that radical and really new technological innovations should use different forecasting techniques.

RQ3. Does an innovation's price affect which methods should be used to forecast consumer adoption of technological innovations? In other words, does price affect forecasting accuracy for various methods? If so, what forecasting methods should be used for low and high priced innovations?

The Research Context

This study will evaluate the diffusion of the innovations shown in Figure 1. That is, this research is looking at the diffusion of radical and really new innovations intended for use in the home. The innovations will be classified as either high priced or low priced.

Figure 1: Initial Classification of 8 Consumer Electronic Innovations

Innovations for Consumers
Price Level    Radical Innovation                   Really New Innovation
High           PCs (1980-2000)                      Camcorders (1985-2000)
               Satellite Receivers (1986-2000)      Projection TVs (1984-2000)
Low            VCRs (1974-2000)                     Cordless Phones (1980-2000)
               CD Players (1983-2000)               Telephone Answering Devices (1982-2000)

To reduce confounds and to simplify the data-collection process, only the U.S. market will be considered. Likewise, only consumer electronic innovations will be studied in this research.

Data Sources

This study used secondary data for the eight data sets shown in Figure 1. The overwhelming majority of the data was obtained from the Consumer Electronics Association. The CEA, formerly the Consumer Electronics Manufacturers Association, includes more than a thousand companies within the U.S. consumer technology industry. They are the best possible single source for industry-level U.S. sales of consumer electronics.

As shown in Figure 1, the eight data sets were initially selected to include two samples in each cell of consumer electronic innovations. Only data sets with a reasonable history were considered. Newer innovations were not feasible, as one would have to wait at least 10 years before comparing the results of the various forecasts with actual results.
Greater detail about each data set is provided in Chapter 3.

Chapter 2
LITERATURE REVIEW

Forecasting: Techniques and Methods

Bright (1978) defined a forecast as "a statement about a condition in the future, arrived at through a system of reasoning consciously applied by the forecaster and exposed to the recipient." Jantsch (1969) first differentiated between two general approaches to forecasting: exploratory and normative. Exploratory forecasting utilizes relevant historical records to project parameters and/or functional capabilities into the future. Normative forecasting starts with future goals and works backwards to identify what barriers must be overcome in order to obtain these goals. Armstrong (2001) considered normative forecasting synonymous with planning.

Lenz (1971) noted that these distinctions are not absolute. All forecasters bring some normative thinking into their forecasts simply by what assumptions they make and what factors they select as important. Conversely, all normative forecasts use exploratory techniques as the starting points for their assumptions. Nevertheless, the distinction between exploratory and normative forecasts is a useful one. All of the forecasts in this study are exploratory forecasts.

Classification Schemes in the Literature

Brucks (1986) stated that a good typology should have three objectives:
1) The typology and coding scheme should be easy to use and seem logical to people who are using the coding scheme.
2) The typology should cover as many of the subjects' statements as possible while remaining relatively parsimonious.
3) The categories in the typology should be as distinct from each other as possible.

In other words, a good classification system should be exhaustive, exclusive, and concise. Exhaustive means that the classification system should cover every potential option. Exclusive means that anything that belongs in one category should clearly not belong in another category. These criteria will be used to evaluate the various classification schemes that researchers have created to compartmentalize technological forecasting methods.

There are many ways to classify forecasts, all of them at least somewhat arbitrary. The ones most frequently used in the literature are discussed here. The classification systems are listed in chronological order, as this approach allows the reader to see how subsequent classifications built upon earlier classification methods.

Cetron and Ralph, 1971 - summary

Cetron and Ralph grouped forecasting techniques into five categories: intuitive methods, trend extrapolation, trend correlation, analogy, and dynamic predictive models. This classification system appears to have been largely based upon the chapter headings of Lenz's 1962 landmark work on technological forecasting, but Cetron and Ralph did place some new methods within some of the classifications.

Intuitive methods include individual forecasting, polls, panels, and the Delphi technique. Cetron and Ralph's reasoning for grouping these methods together was that all were based upon opinions. Ideally, these opinions were well-educated estimates made by experts, but they were all based upon the intuition of the forecaster.

Trend extrapolation is simply forecasting based upon the continuation of existing trends. It includes simple extrapolation, substitution, and modified curve-fitting.
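To make the mechanics of simple extrapolation and curve-fitting concrete, the following is a minimal sketch that fits an S-shaped (logistic) trend curve to a short, entirely hypothetical cumulative-adoption series and projects it three periods ahead. The data, starting values, and tooling (Python with NumPy/SciPy) are illustrative assumptions only; they are not part of Cetron and Ralph's method and are not the data used later in this study.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical cumulative adoption (millions of units) for the first six
    # years after launch; these numbers are made up for illustration.
    years = np.arange(1, 7)
    adopters = np.array([0.3, 0.8, 2.0, 4.5, 8.0, 12.0])

    def logistic(t, m, p, q):
        # A simple S-shaped (logistic) trend curve: m is the ceiling (market
        # potential), p the inflection year, q the growth-rate parameter.
        return m / (1.0 + np.exp(-q * (t - p)))

    # Fit the curve to the observed history (modified curve-fitting).
    params, _ = curve_fit(logistic, years, adopters, p0=[20.0, 6.0, 0.8], maxfev=10000)

    # Simple extrapolation: project the fitted curve three years beyond the data.
    future = np.arange(7, 10)
    for t, f in zip(future, logistic(future, *params)):
        print(f"year {t}: {f:.1f} million (hypothetical)")

Substituting a different trend curve (for example, a Gompertz function) only changes the function handed to the optimizer; the extrapolation step is otherwise identical.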
Cetron and Ralph found that the general opinion in 1971 was that trend extrapolation was widely used due to its ease of use rather than due to any accuracy advantages (echoing an observation made a decade earlier by Lenz in 1962). The two key assumptions of trend extrapolation are: 1) the factors which caused the prior pattern of progress will continue; and 2) the combined effect of these factors will continue the same pattern of progress.

In a substitution forecast, one measures the rate of substitution, with time, of a new innovation over an older innovation. While the relative increase in performance is presumably the reason for the substitution, this performance increase is reflected in the rate of substitution. The key assumption of substitution is that the process will continue until the new innovation has completely replaced the older innovation if at all possible.

Since technological progress typically advances slowly, reaches a critical mass, accelerates exponentially, and then slows as it reaches limitations, one can expect a given innovation to fit a type of trend curve. Cetron and Ralph distinguished between five types of trend curves: linear with flattening, exponential with no flattening, S-shaped, double exponential, and gradual-rapid-subsequent flattening.

In trend correlation, the forecaster assumes that "one factor is the primary causal influence in the advancement of the technological parameter of interest." Trend correlation analysis is optimal for situations where the development of a certain innovation lags the development of another innovation.

Analogy forecasting simply looks for another pattern that should be similar to the pattern to be forecast. These are typically classified as growth or historical analogies. Forecasters have used growth formulas (e.g., the rate of cell increase within a rat) and historical patterns (e.g., GE looked at fossil fuel and hydroelectric power development to successfully forecast nuclear power development).

Dynamic predictive models are based upon work initially done by Forrester (1958), the chair of Lenz's thesis. Lenz built upon Forrester's modeling structure to simulate the impact of important causal factors. Over time, these models became more sophisticated. Currently, these types of models are most frequently referred to as structural models.

Cetron and Ralph, 1971 - strengths and weaknesses

Cetron and Ralph's original contributions are largely in the area of intuitive methods, in the addition of historical analogies to the analogy classification, and in incorporating previous research into a formal classification system. Their taxonomy is concise, but neither exhaustive nor exclusive. It is not exhaustive because it does not consider techniques such as forecasting by role-playing. It is not exclusive because their definition of trend correlation specifically incorporates causality. Thus, one could reasonably say that trend correlation, as defined by Cetron and Ralph, is a subset of their dynamic predictive model classification.

Martino, 1972 - summary

Martino discussed five types of forecasts: intuitive, consensus, analogy, trend extrapolation, and structural models.

Intuitive forecasts are obtained by simply asking an expert. Martino wryly noted that "even though an expert may be wrong, his intuitive forecast may still be the best forecast available." He then cited Ralph C. Lenz's quip that intuitive forecasting's real problem is that it is "impossible to teach, expensive to learn, and excludes any process of review."
Consensus methods obtain results by asking multiple experts. These experts typically meet together, but this is not a requirement. The positive aspects of this method are:
- any fact that is known to one expert becomes available to all;
- multiple heads are less likely to overlook something;
- chances are that biases will balance out;
- experts have opportunities to see how others think and thus revise their estimates with new input.

The negative aspects of this method include:
- all the problems associated with group dynamics (the Delphi technique is a consensus method that tries to eliminate or reduce these problems);
- any misinformation known to one is known by all.

The forecasting analogy method compares a known event (historical event, physical/biological process, etc.) with the event to be forecasted. Growth curves are often used to predict the advance of some technology. The S-curve has been found in many living species for both individual and population growth curves. The adoption of many technological innovations follows a similar pattern - starting slow, followed by a rapid rise, then a leveling off that leads to obsolescence. "The major strength of this method is that it eliminates much of the subjectivity of either intuitive or consensus methods of forecasting. Its major weakness, however, is that the exact extent of the analogy between the model and the thing to be forecast is often not evident until it is too late to do any good" (Martino, 1972).

Trend extrapolation avoids the problem of estimating changes in specific S-curves. Instead of focusing on a single device - or technology - trend extrapolation considers a series of devices that perform the same function. Successive devices usually have major differences in performance (on the order of 100% or more), while improvements to a single device are usually on the order of a few percent.

Structural models create an analytical model of the technology-generation process. "A characteristic feature of such models is they tend to be abstractions; certain elements are omitted because they are judged to be irrelevant, and the resulting simplification in the description of the situation is intended to be helpful in analyzing it and understanding it" (Martino, 1972).

Martino, 1972 - strengths and weaknesses

Martino's classification system is concise and easily understood. His lexicon is a bit confusing, as his intuitive forecasts category does not consist of all intuitive forecasts, but merely those based on the opinion of a single expert. He reserves the classification consensus methods for the opinions of multiple experts. As his boundaries are quite clear for all five categories, Martino's classifications are exclusive. One might question the need for dividing subjective techniques into two categories based upon whether a single expert or multiple experts contributed to the forecast. This distinction does not seem useful, and Martino is the only one to have made such a division. Further, the preciseness with which Martino defined his two expert classifications actually precluded both of these categories from incorporating non-expert intuitive forecasting methods such as role-playing. Thus, Martino's taxonomy is not exhaustive.

Bright, 1978 - summary

Bright developed and used eight categories of forecasting: intuitive forecasting, trend extrapolation, dynamic modeling, morphological analysis, normative forecasting, monitoring, cross-impact analysis, and scenarios.
As one would expect from their names, Bright's intuitive forecasting, trend extrapolation, and dynamic modeling categories are virtually identical to their Cetron and Ralph (1971) counterparts: respectively, intuitive methods, trend extrapolation, and dynamic predictive models.

Bright's classification of morphological analysis was for techniques that created a matrix of all theoretically possible combinations of technological approaches and configurations. He admitted that for morphological analysis to be considered forecasting, "one must argue that morphological analysis identifies known technology and predicts future technology by displaying possibilities that are not yet in use or even explored." Bright stated that in 1942, Zwicky used morphological analysis of the jet engine to conceptualize the terra-jet, the hydra-jet, and the ram-jet. However, granting Bright's assumption that morphological analysis allows one to identify future possibilities does not make morphological analysis a forecasting technique. Since morphological analysis does not address the timing of a new innovation, but rather the potential for its existence, it falls short of Bright's own criteria for a forecast. This is not to say morphological analysis has no place; rather, morphological analysis may help the forecaster conceive of some new technology. Then the forecaster can determine the appropriate method to forecast the adoption of this innovation.

Bright categorizes forecasts that assume new technology will materialize to meet a specific need as normative forecasting. However, the distinction between a normative forecast and an exploratory forecast does not change how forecasts are done. Rather, it changes the rate-of-progress assumptions for the forecast, and normative forecasts should obviously show a faster rate of progress than exploratory forecasts (an exception to this expectation would be the theoretical case where the demand was to slow down progress, e.g., Luddites making policy decisions). Thus, while it is important to understand the distinction between normative and exploratory forecasting, normative forecasting is a type of forecasting, not a method of forecasting.

Bright stated that monitoring is based upon assessing events in process and includes four activities:
1) Searching the environment for signals that may be the forerunners of significant technological change;
2) Identifying possible alternative consequences if these signals are not spurious and if the trends that they suggest continue;
3) Choosing those parameters, policies, events, and decisions that should be followed in order to verify the true speed and direction of technology and the effects of employing that technology;
4) Presenting the data from the first three steps in a timely and appropriate manner for management's use in decisions about the organization's reaction.

Bright (1978) believed the essence of monitoring is "evaluation and continuous review." Like his mistake with normative forecasting, Bright is confusing a goal of the forecast (monitoring) with the forecast itself. Monitoring is simply a way of using forecasts, but is not a forecast in itself. Indeed, monitoring more accurately describes a way in which one may wish to use forecasting techniques to incorporate data as it becomes available.
Bright stated that cross-impact analysis "attempts to do in fact what is implied in all forecasting - to provide a prediction of future conditions with allowance for all the interacting forces that will shape that future." Cross-impact analysis is a technique for building a matrix from the opinions of experts. It has some similarities to the Delphi technique, and Bright mentioned that cross-impact analysis could complement the Delphi technique. So, cross-impact analysis should more properly be considered a technique within the intuitive forecasting classification.

Bright (1978) uses the term scenario to describe a detailed description of a possible future. "In effect, the planner concedes he cannot predict the 'real' future, so he looks at several possible futures with the idea of being prepared for any uncertainty (the usual military goal) or of coming up with a plan that best accommodates the variety of uncertainties ahead (the usual industrial goal)." This was indeed a new technique that does not readily fall into any of the previously discussed classifications. One might force it to fit into a loose definition of an intuitive forecast, but as Bright used them, scenarios were meant to cover the entire range of foreseeable options with little thought given to which scenario was most probable.

Bright, 1978 - strengths and weaknesses

Bright was a strong advocate of the use of scenarios in forecasting, and this was one of his main contributions to the field. He also distinguished between forecasts, predictions, and speculations. Bright (1978) defined a forecast as "a statement about a condition in the future, arrived at through a system of reasoning consciously applied by the forecaster and exposed to the recipient." He defined a prediction as "a statement about the future based on rationale, if any, that the predictor has not made known." And Bright defined speculation as "a statement about the future in which the predictor admits high uncertainty and/or admits lack of a highly supportive rationale." By these definitions, one cannot make an intuitive forecast, but merely an intuitive prediction or speculation.

With eight classifications, Bright's taxonomy is hardly concise. However, three of Bright's categories - morphological analysis, normative forecasting, and monitoring - are not actually forecasting classifications at all. In addition, cross-impact analysis is a subset of his intuitive forecasting classification, so his classifications are not exclusive. His classification system is one of the more exhaustive systems, and it would not take much redefining to incorporate newer techniques such as forecasting by role-playing into his scenario classification.

Armstrong, 1985 - summary

Armstrong (1985) said that research for analyzing data has historically been organized along three continuums: subjective vs. objective, naive vs. causal, and linear vs. classification methods. He then placed five forecasting methods within these continuums to develop a methodology tree (Figure 2) that also provided guidance as to when various methods should be used. The heavier lines represent the key decisions that need to be made by the forecaster; the decisions in turn will help determine which methods should be used. Armstrong's five classifications were: judgmental, bootstrapping, extrapolation, econometric, and segmentation.
Figure 2: Forecasting Methodology Tree (Armstrong, 1985). [Tree diagram, rooted in "start with feet on ground": the first split is subjective vs. objective; the subjective branch leads to judgmental and bootstrapping methods, while the objective branch splits into naive (extrapolation) and causal approaches, with the causal branch splitting into linear (econometric) and classification (segmentation) methods.]

The subjective methods are those using implicit (i.e., vague) processes for data analysis. Naive methods use only data on the variable of interest; causal models use additional variables. Causal models ask "why?" and use these factors to make forecasts. "Linear" is used by Armstrong to mean a formula. Armstrong preferred linear models as they are both simpler and - in his experience - more accurate than non-linear models. The other side of the linear continuum is classification (segmentation).

Armstrong stated that there are three main decisions to be made when making a forecast. The primary decision is to select intuitive or objective methods. If objective methods are chosen, then Armstrong says another choice must be made between naive and causal approaches. And if a causal approach is selected, the forecaster must then decide between linear and classification approaches.

The judgmental classification in Armstrong's lexicon is synonymous with his use of the term subjective. In his words, "These methods are also called implicit, informal, clinical, experienced-based, intuitive methods, guestimates, WAGs (wild-assed guesses), or gut feelings." This category may be considered equivalent to Cetron and Ralph's (1971) intuitive methods. Likewise, Armstrong's extrapolation classification is similar to Cetron and Ralph's use of trend extrapolation. The only difference of note is that Armstrong included analogies within his extrapolation category.

Bootstrapping methods are ways of explicitly capturing the subjective processes used by an intuitive forecaster. Direct bootstrapping involves input from a forecaster on how an intuitive forecast was made. In many cases, the predictor is unable to produce an algorithm for producing his forecast. Indirect bootstrapping is used to reverse engineer the rules the forecaster is intuitively using, thus making these rules explicit.

All of the previous classification schemes placed all explicit models into one category. Armstrong divided his into two categories: econometric and segmentation. The econometric classification is used for linear representations of causal models that summarize existing knowledge within the models themselves (as discussed earlier, Armstrong saw little point in non-linear econometric models, and his nomenclature reinforced his bias). The segmentation methodology "attempts to find behavioral units that respond in the same way to the causal variables and to group these units." For example, a very basic forecast about the initial acceptance of a new innovation may use a gender segmentation scheme and assume that five percent of males and three percent of females will adopt the innovation in the first year.

Armstrong, 1985 - strengths and weaknesses

Armstrong's Forecasting Methodology Tree provided guidance that better enabled a forecaster to understand what elements went into determining which forecasting method(s) to use. Armstrong's suggestion and use of the naive/causal continuum was also quite useful and built upon the traditional subjective/objective distinction. However, his linear/classification distinction seems questionable. Not only does this distinction include a bias against non-linear methods, it seems to serve little purpose. For example, the resulting classifications - econometric and segmentation - are not exclusive (e.g., econometric models can easily incorporate multiple segments within their models).
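The gender-split illustration above can be read as a bottom-up, segment-then-sum calculation. The minimal sketch below applies the hypothetical five percent and three percent first-year adoption rates to equally hypothetical segment sizes; every number here is an assumption for illustration only and is not drawn from any study cited in this review.

    # Bottom-up (segment-then-sum) first-year forecast; all figures are
    # illustrative assumptions, not data from this study.
    segments = {
        "male":   {"population": 10_000_000, "adoption_rate": 0.05},
        "female": {"population": 10_500_000, "adoption_rate": 0.03},
    }

    segment_forecasts = {
        name: s["population"] * s["adoption_rate"] for name, s in segments.items()
    }
    total_first_year = sum(segment_forecasts.values())

    print(segment_forecasts)                                # {'male': 500000.0, 'female': 315000.0}
    print(f"First-year adopters: {total_first_year:,.0f}")  # 815,000

Summing the segment forecasts (here, 815,000 first-year adopters) is the additive decomposition approach whose empirical record is summarized in Table 1 below.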
One might even say that segmentation is not a forecasting method per se; rather, segmentation techniques may be used to complement most forecasting methods. Forecasters may create forecasts from aggregate data, or they may first segment the data, create individual forecasts for each segment, and then sum these forecasts. Table 1 shows some of the empirical results from using segmentation.

Table 1: Segmentation's Effectiveness in Forecasting

Source: Armstrong and Andress, 1970
Finding(s): When comparing forecasts of gasoline sales, a regression technique had a 58% error rate. By using segmentation, a forecast was created with only a 41% error rate.

Source: Dunn, Williams, and Spivey, 1971
Finding(s): Found that additive decomposition forecasts (i.e., summing segments) were superior to a top-down approach for forecasting demand for telephones.

Source: Dangerfield and Morris, 1992
Finding(s): Found that additive decomposition forecasts (i.e., summing products) were superior to a top-down approach for forecasting demand for a product class (used over 15,000 aggregate series created by combining individual series from the M-competition database).

In addition, the models that result from bootstrapping might be viewed as econometric and/or segmentation models. Armstrong's (1985) classification scheme is concise, but is neither exhaustive nor exclusive.

Armstrong, 2001 - summary

Fortunately for the progress of forecasting, Armstrong did not stop with his initial Forecasting Methodology Tree. Armstrong's (2001) Methodology Tree is a much revised version of his earlier classification scheme. It also provides guidance as to which method(s) should be used in a given situation.

Figure 3: Armstrong's Methodology Tree (2001). [Tree diagram: the knowledge source splits into judgmental and statistical branches. The judgmental branch divides into "self" (role playing, intentions, conjoint analysis) and "others" (expert opinions, judgmental bootstrapping, analogies). The statistical branch divides into univariate (extrapolation models, rule-based forecasting) and multivariate (theory-based econometric models, data-based multivariate models), with expert systems linked to both sides. Dashed lines represent possible relationships.]

Armstrong (2001) believed there are eleven types of forecasting methods: role playing, intentions, conjoint analysis, expert opinions, judgmental bootstrapping, analogies, extrapolation methods, rule-based forecasting, expert systems, econometric models, and multivariate models. Armstrong placed these eleven categories into a Methodology Tree (see Figure 3) where the first branch separates judgmental methods from statistical methods. Judgmental methods are then subdivided into those that predict one's own behavior (self) and those where experts predict how others will behave (others). The self methods are further subdivided into role playing (where people are placed in a role and asked to act accordingly) and intentions (where people predict their own behavior in various scenarios). Conjoint analysis examines how different scenarios affect intentions. Along the "others" branch, expert opinions are used to make forecasts. Judgmental bootstrapping uses regression analysis to infer experts' rules for forecasting based upon the information that the experts use to make forecasts. Analogies are typically used when few, or no, observations are available (e.g., the introduction of a completely new innovation like holographic television).
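Judgmental bootstrapping, as just described, can be made concrete with a small regression sketch: past expert forecasts are regressed on the cues the expert reports using, and the fitted rule is then applied to a new case. The cue set, the numbers, and the linear form below are assumptions chosen purely for illustration; they are not drawn from any study cited here.

    import numpy as np

    # Cue values for six past products the expert has already judged:
    # [relative advantage score, relative price index]. Fabricated numbers.
    cues = np.array([
        [7.0, 1.2],
        [5.0, 0.9],
        [8.5, 1.5],
        [4.0, 0.7],
        [6.5, 1.0],
        [9.0, 1.8],
    ])
    # The expert's first-year adoption forecasts (millions) for those products.
    expert_forecasts = np.array([3.1, 2.6, 3.4, 2.2, 3.0, 3.5])

    # Infer a linear rule (intercept + cue weights) that mimics the expert.
    X = np.column_stack([np.ones(len(cues)), cues])
    weights, *_ = np.linalg.lstsq(X, expert_forecasts, rcond=None)

    # Apply the inferred rule to a new product the expert has not yet judged.
    new_product = np.array([1.0, 7.5, 1.1])
    print(f"Bootstrapped forecast: {new_product @ weights:.2f} million")

One motivation for this approach in the literature is consistency: the fitted rule applies the same weights every time, whereas an expert may not.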
The statistical side of the methodology tree first splits into univariate and multivariate branches. The univariate branch is also known as “extrapolation methods” since it uses values of a series to predict other values. Rule-based forecasting is a type of expert system that integrates forecasting methodology with domain knowledge. Expert systems represent rules that the experts use. The multivariate branch subdivides into theory-based (econometric) and data- based (multivariate) models. Armstrong, 2001 - strengths and weaknesses Armstrong’s scheme is more useful than the older schemes as it provides guidance as to when to use various techniques. However, it is also a very flawed classification system. It is neither exhaustive, exclusive, nor concise. It is not exhaustive because certain classifications are not listed (e.g., where do non- expert opinions about the behavior of others 90?). It is not exclusive since he has a classification for extrapolation models, yet all forecasts are extrapolations in one sense or another and some of his classifications are really subsets of a more general classification that he also listed. For instance, he stated that judgmental bootstrapping and rule-based forecasting were expert systems, yet 23 he listed these as unique types along with expert systems. And with eleven non- exhaustive classifications, his system was hardly concise. A Forecasting Typology is Proposed The existing classification schemes have made great contributions to the development of forecasting, especially technological forecasting. The earlier typologies (Cetron and Ralph 1971; Martino, 1972) were most useful in determining what was — and was not - forecasting. The later ideas (Armstrong, 1985, 2001) took a step fonNard by also providing guidance as to when certain classifications should be used. Unfortunately, these taxonomies were neither exhaustive, exclusive, nor concise (Table 2). Table 2: Summary of Forecasting Classification Schemes Source Classifications Strength(s) Weakness(es) Cetron and intuitive methods, trend concise neither Ralph, extrapolation, trend exhaustive nor 1971 correlation, analogy, and exclusive dynamic predictive models Martino, intuitive, consensus, concise and not exhaustive 1972 analogy, trend exclusive extrapolation, and structural models Bright, intuitive forecasting, added concept neither 1978 trend extrapolation, of scenarios, exclusive nor dynamic modeling, could be concise; also morphological analysis, considered included some normative forecasting, exhaustive with categories that monitoring, cross-impact a liberal were analysis, and scenarios interpretation inappropriate Armstrong, judgmental, concise, added neither 1985 bootstrapping, naive/causal exclusive nor extrapolation, continuum, exhaustive econometric, and provided segmentation guidance to which forecast should be used 24 Source Classifications Strengfi(s) Weakness(es) Armstrong, 11 role playing, intentions, provides flawed 2001 conjoint analysis, expert guidance to classification opinions, judgmental which forecast system (neither bootstrapping, analogies, should be used exclusive, extrapolation methods, exhaustive, nor rule-based forecasting, concise). expert systems, econometric models, and multivariate models As per Korchia (1999), "If a typology does not satisfy any of Brucks’ three criteria (1986), it must be modified and improved." Therefore a simpler forecasting typology is proposed. As it only has four classifications, it is unquestionably the most concise scheme yet discussed. 
Thus it should be evaluated to see if it is more exhaustive and exclusive than the other classifications. Figure 4 shows the Forecasting Classification Grid (hereafter, simply the "Grid"). Like all the other classification schemes, it recognizes the importance of distinguishing between opinion and ideas that can be empirically evaluated. It also includes Armstrong's naive/causal distinction. This typology assumes that these two continuums are independent. Given this assumption, four exclusive categories logically follow: predictions, scripts, correlations, and models. Predictions are defined as explicit forecasts that are based upon opinions whose assumptions have not been made explicit. Scripts are defined as made-up scenarios in which a potential future is described and causal assumptions are made. Correlations are defined as forecasts based upon the performance of another factor without any causal assumptions. Models are defined as any forecast with explicit causal assumptions that may be mathematically stated.

Figure 4: Forecasting Classification Grid

               Naive           Causal
  Empirical    Correlations    Models
  Opinion      Predictions     Scripts

One of the attractions of the Forecasting Classification Grid is its simplicity relative to the other ways of classifying forecasts. However, even if the Grid is concise, exclusive, and exhaustive, it needs to also fit well with the existing forecasting techniques. Figure 5 shows how the existing techniques fit within the proposed classification scheme. The various techniques and their applicability to forecasting as noted in the literature are discussed using the Grid classifications (predictions, scripts, correlations, and models).

Figure 5: Existing Forecasting Techniques and the Grid

  Correlations (Empirical/Naive): Trend Extrapolation, Segmentation, Neural Networks, Historical Analogies, Biological Analogies
  Models (Empirical/Causal): Econometric, SEM, Bootstrapping
  Spanning the empirical row: Combinations, Hybrids
  Predictions (Opinion/Naive): Delphi, Conjoint Analysis, Intentions, Expert Opinion
  Scripts (Opinion/Causal): Role Playing, Scenarios, Novels/Short Stories (i.e., science fiction)

Predictions

By definition, predictions are opinion-based speculation with no explicit causal assumptions. Techniques in this classification include methods such as intentions, conjoint analysis, and expert opinion practices (e.g., Delphi).

Intentions

Since intentions have been shown to influence behavior (Fishbein and Ajzen, 1975; Ajzen, 1991), polling purchase intentions of potential consumers is used by many firms to develop market forecasts. Jamieson and Bass (1989) found that 70% to 90% of market-research clients use purchase intentions data on a regular basis. Table 3 summarizes the major empirical findings on using purchase-intentions data for forecasting.

Table 3: Summary of Intentions Findings

Juster, 1966: Found that purchase intentions data for durable goods underestimates actual purchasing.
McNeil, 1974: Found that purchase intentions data for durable goods underestimates actual purchasing.

Theil and Kosobud, 1968: Found that purchase intentions data for durable goods underestimates actual purchasing.

[source illegible], 1966: Found that purchase intentions data for nondurable goods overestimates actual purchasing.

Morrison, 1979: Concluded that intentions are imperfect measures of behavior and that intention-based predictions should be adjusted.

Morwitz and Schmittlein, 1992: Found that by segmenting households before creating intention-based predictions, they were able to reduce forecasting error by more than 25% compared to comparable aggregate forecasts. This was only true for segmentation methods that distinguished between dependent and independent variables - that is, methods using discriminant analysis or CART (Classification And Regression Trees).

Bemmaor, 1995: Created a model that used intentions to bound forecasts; it was accurate for existing consumer products, but not new products.

Lee, Elango, and Schnaars, 1997: Found that extrapolation of past sales provided more accurate forecasts than intention-based forecasts.

Armstrong, Morwitz, and Kumar, 2000: Found that extrapolation of past sales provided less accurate forecasts than intention-based forecasts.

Morwitz, 2001: Showed how to use historical intention and behavior data to adjust for bias in future predictions. Using Theil and Kosobud's (1968) data, her method reduced the absolute percent error of intention-based predictions from 17.2% to 9.7%.

Conjoint Analysis

In a search of literally hundreds of conjoint analysis articles, only a single peer-reviewed article could be found where conjoint analysis was used to forecast the acceptance of a really new innovation. Vavra, Green, and Krieger (1999) describe how conjoint analysis was used to help determine commuter demand for the EZPass system throughout the Northeast corridor. Even in this instance, similar systems had been available in other states for years. There are innumerable articles on how conjoint analysis was used to forecast the acceptance of new products, but the "new" products were invariably incremental innovations (e.g., faster cars in Steckel, DeSarbo, and Mahajan, 1991). In one of the few cases where conjoint analysis was used to evaluate a really new innovation (e.g., on-line shopping in Talaga and Tucci, 2001), the researchers used the technique to determine what features are important to the consumer after the consumer has already adopted the innovation.

After discussing the theory and history of conjoint analysis, Wittink and Trond (2001) concluded that conjoint analysis should not be used for discontinuous innovations. If a forecaster strongly desired to use conjoint analysis to make a forecast about "new-to-the-world types of products", Wittink and Trond recommended first educating respondents about the category. Even then, they had limited hope for the accuracy of such a forecast. This author's own experience with professional market-research firms' attempts to forecast consumer demand for radical and really new products in the consumer electronics and PC industries supports their recommendation and conclusion.

Expert Opinion

Table 4 summarizes some of the empirical findings on how expert opinions are used in forecasting.

Table 4: Summary of Expert Opinion Findings

Use of information

Ebbesen and Konecni, 1975: Experts (judges) did not base their judgments on all the available relevant information.
Gaeth and Shanteau, 1984: Agricultural experts were influenced by irrelevant factors when making soil quality judgments.

Brockhoff, 1984: Additional information did not increase the accuracy of experts forecasting interest rates.

Lusk and Hammond, 1991: Additional information did not increase the accuracy of meteorologists forecasting microbursts.

Overconfidence Bias

Lawrence and Makridakis, 1989: When forecasters were asked to place 95% confidence intervals around their forecast ranges, the ranges were about 10% narrower than they should have been.

O'Connor and Lawrence, 1989: When forecasters were asked to set 50% and 75% confidence intervals around their own forecasts, only 37.3% and 63.3% of outcomes fell within the respective intervals.

Delphi Technique

Brockhoff, 1975: Found no significant difference in accuracy between panels with five, seven, nine, and eleven panelists.

Boje and Murnighan, 1982: Found no significant difference in accuracy between panels with three, seven, and eleven panelists.

Brockhoff, 1975: Accuracy of Delphi results increased for the first three rounds with a loss of accuracy for additional rounds.

Erffmeyer, Erffmeyer, and Lane, 1986: Accuracy of Delphi results increased for the first four rounds with no benefit for additional rounds.

Harvey (2001) recommended that experts use a checklist when making their forecasts in order to minimize the problems with judgments (i.e., experts not using information that they should use while using information that they should not). Given the evidence that expert forecasters are overconfident, Harvey found it reasonable to allow for an overconfidence bias of approximately 10 to 14 percent.

Scripts

Scripts are opinion-based speculation with detailed causal assumptions described in writing. Techniques in this classification include role playing, scenarios, and the traditional writings of many hard science fiction3 authors and futurists.

Role Playing

In role playing, subjects are asked to take on roles and act accordingly. Researchers use their decisions as forecasts.

Table 5: Summary of Role Playing Findings

Cyert, March, and Starbuck, 1961: Subjects made significantly different forecasts depending upon the role they were given (cost analyst vs. market analyst).

Statman and Tyebjee, 1985: Replicated the findings of Cyert, March, and Starbuck (1961).

Mandel, 1977: Concluded that researchers would obtain similar results using experts or students as subjects.

Babcock et al., 1995: Found significantly different outcomes depending upon the instructions given to subjects: "Ask the role players to act as they themselves would act given the role and the situation, or ask them to act as they believe the persons they represent would act."4

Armstrong, 2001: In reviewing the role playing literature, "role playing was effective in matching results for seven of eight experiments" and in "five actual situations, role playing was correct for 56 of 143 predictions while unaided expert opinions were correct for 16 percent of 172 predictions."

Armstrong (2001) concluded, "Experts are probably better at identifying what should happen than what will happen. Role playing should be more accurate as to what will happen."

3 Hard science fiction is the subset of the genre that limits itself to known facts and possibilities.
4 There does not yet appear to be any strong evidence to show which question leads to more accurate results.
Scenarios

Schnaars (1989) noted that Herman Kahn popularized the scenario technique in the 1950s when he worked at the Rand Corporation. Bright (1978) advocated the use of scenarios, but sometimes referred to them as an anti-forecast. In his thinking, scenarios were important tools for contingency planning, but the probabilities of each scenario were of little import. Bright's focus was on the benefits of planning for all reasonable outcomes.

Table 6: Summary of Scenario Findings

Carroll, 1978: Found that scenarios only increased expectancies of the described event when the subject did not have a preconceived preference for an alternative forecast in an election context.

Gregory, Cialdini, and Carpenter, 1982: Found that scenarios influenced behavior - 47% of subjects exposed to scenarios about subscribing to cable TV subscribed shortly thereafter, compared to 20% of the control group.

Schoemaker, 1991: Advocated scenarios for contingency planning ("bounding the uncertainty").

Goodwin and Wright, 1997: Recommended using scenarios for contingency planning.

Gregory and Duran, 2001: In their review of the scenario literature, Gregory and Duran concluded that every use of scenarios "enhance a person's expectancies of the likelihood of the event depicted in the imaged scenario."

Correlations

Correlations are defined as forecasts based upon the performance of another factor without any causal assumptions. Techniques in this classification include methods such as extrapolation, analogies, and neural networks.

Extrapolation

In his review of the literature, Armstrong (2001) found that the appropriateness of the data source used for extrapolation depended upon the goals of the forecasters (see Table 7).

Table 7: Ranking of Data Sources for Extrapolation by Intended Use (Armstrong, 2001)
(1 = most appropriate or most favorable, 4 = least appropriate or least favorable)

Data Source             To reduce cost   To control for effects    To estimate       To forecast effects   To forecast effects
                        of forecasts     of researcher's bias      current status    of small changes      of large changes
Historical                    1                   1                      1                   1                     4
Analogous situation           2                   2                      2                   4                     3
Laboratory experiment         3                   4                      4                   3                     2
Field experiment              4                   3                      3                   2                     1

Armstrong concluded that there were five conditions that favored the use of extrapolation: 1) when a large number of forecasts is needed; 2) when the forecaster is ignorant about the situation; 3) when the situation is stable; 4) when other methods would be subject to forecaster bias; and/or 5) as a benchmark in assessing the effects of policy changes.

Table 8 summarizes some of the major empirical findings on using extrapolation for forecasting. Findings suggest that simpler extrapolation methods are more accurate than complex extrapolations and that the Box-Jenkins method of extrapolation - which uses autoregressive integrated moving averages to provide time-series forecasts - should be avoided, as better methods are available.

Table 8: Summary of Extrapolation Findings

Simple Extrapolations vs. Complex Extrapolations

Dorn, 1950: Found that simple extrapolations were more accurate than complex extrapolations.

Makridakis et al., 1982: Found that simple extrapolations were generally as or more accurate than complex extrapolations.

Makridakis et al., 1993: Found that simple extrapolations were generally as or more accurate than complex extrapolations.

Makridakis and Hibon, 2000: Found that simple extrapolations were generally as or more accurate than complex extrapolations.
Use of Box-Jenkins

Armstrong, 1985: In a review of 14 studies, Box-Jenkins was less accurate than other extrapolation methods 71% of the time.

Makridakis et al., 1993: Found that Box-Jenkins was one of the least accurate methods.

Analogies

Analogies were originally simply used as patterns for growth models. No causal reasoning was desired; forecasters simply selected a pattern that they thought - or hoped - would be appropriate (Cetron and Ralph, 1971; Martino, 1972). Forecasters sometimes used biological analogies for growth models - Cetron and Ralph even discussed how one firm created forecasts based upon the growth rate of a rat's cell. As can be seen in Tables 9 and 10, Lenz (1962) developed an extensive set of biological analogies to facilitate the use of biological growth formulas.

Table 9: Cellular Analogy (Lenz, 1962)

BIOLOGICAL GROWTH | TECHNICAL IMPROVEMENT
Initial Cell | Initial Idea or Invention
Cell Division | Inventive Process
Second Generation Cell | "New" Idea or Invention
Cell Division Period | Time Required for Initial Invention to Initiate "New" Invention
Nutrient Media | Economic Support for Invention
Cell Lifetime | Useful Life of Invention
Cell Death, Normal | Obsolescence of Invention
Cell Mass | Technical Area or Machine Class
Volume Limit of Cell Mass | Limits of Economic Demand for Invention in Given Technical Area
Size of Cell Mass | Total of Existing, Non-Obsolescent Inventions in Technical Area
Strength of Cell Mass | Performance Capability

Table 10: Bisexual Reproduction Analogy (Lenz, 1962)

BIOLOGICAL GROWTH | TECHNICAL IMPROVEMENT
Male Parent, or Parent Cell | Existing Invention or Discovery
Female Parent | Inventor
Opportunity for Fertilization | Communication of Knowledge
Conception | Origination of Idea
Embryo | Evidence of Growth of Idea
Gestation Period | Period Required for Invention
Birth | Disclosure of Invention
Nutrition | Economic Support
Maturation Period | Reduction to Practice
Maturity | Operational Use of Invention
Lifetime | Period from Disclosure to Obsolescence
Death, Normal | Obsolescence
Total Male Population | Total Inventions Disclosed Minus Obsolete Inventions
Total Work Force | Total Operational Inventions
Total Strength of Work Force | Performance Capability

The main problem with forecasting by analogy is that the proper analogy is usually not known until after the new opportunity unfolds - at which point the researcher is using hindsight (Martino, 1971). Naive analogies are rarely seen in the current literature. This may be due to the academic bias toward theory-based solutions. This is not to say analogies are no longer used. However, researchers now pick an analogy and use its parameters in explicit growth curve models. These hybrids are explicit models, not analogies or correlations, and give the appearance of being more scientific. However, the historical problems related to analogies still apply to these models. For example, the author of the most widely used forecasting model, the Bass Model, still struggles with the same problems that perplexed users of analogies: "Choosing the appropriate analogy of previously introduced new products is important for the Bass model. However, little is known about the best way to guess by analogy other than to say that it depends on judgment" (Bass et al., 2001).
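To make the mechanics of such a growth-curve analogy concrete, the sketch below fits a simple logistic curve to the adoption history of a hypothetical, already-launched analog and reuses the fitted parameters to project a new product. The adoption figures, the choice of the simple logistic, and the starting values are illustrative assumptions only, not data or procedures from this research.

```python
import numpy as np
from scipy.optimize import curve_fit

def simple_logistic(t, m, b, c):
    """Simple logistic growth curve: X(t) = m / (1 + c * exp(-b * t))."""
    return m / (1.0 + c * np.exp(-b * t))

# Hypothetical cumulative household penetration (%) of an analogous product.
t_analog = np.arange(1, 11)
x_analog = np.array([1, 2, 4, 7, 12, 18, 25, 32, 38, 42], dtype=float)

# Fit the analog's curve (m = ceiling, b = growth rate, c = location).
params, _ = curve_fit(simple_logistic, t_analog, x_analog,
                      p0=[50.0, 0.3, 30.0], maxfev=10000)

# Naive analogy: assume the new product will follow the analog's fitted curve.
t_new = np.arange(1, 11)
forecast = simple_logistic(t_new, *params)
print(np.round(forecast, 1))
```

The weakness noted above is visible in the sketch: nothing in the procedure itself indicates whether the chosen analog is an appropriate one.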
Armstrong (2001) also found it "surprising that little research has been done on such topics as how to select analogies...and how much gain one might achieve by pooling data from analogies."

Neural Networks

Forecasts produced by neural networks are commonly perceived as a "black box" production - examining the model parameters does not indicate why the model makes good predictions (Remus and O'Connor, 2001). Given this lack of explicit causal assumptions, neural network forecasting is classified as a correlation method. However, any neural network forecast that explicitly documents its causal assumptions should be considered a model, not a correlation. If causal assumptions are someday routinely included in neural network forecasts, then the method should be reclassified as a model at that time.

Remus and O'Connor (2001) recommended using traditional models if the data fit the assumptions for those models. In theory, the neural network forecast should be as accurate as the traditional models. In practice, Remus and O'Connor concluded that the traditional model was much easier to develop and use in these circumstances. Table 11 summarizes some of the major empirical findings on forecasting with neural networks.

Table 11: Summary of Neural Network Findings

Sharda and Patil, 1990: Found neural networks were as accurate as Box-Jenkins.

Foster, Collopy, and Ungar, 1992: Found neural networks comparable to traditional methods on quarterly data, less accurate on annual data.

Kang, 1991: Found neural networks to be superior to Box-Jenkins (Autobox) when data included trend and seasonal patterns. Otherwise, Kang found Box-Jenkins to be the same as or better than neural networks.

Hill, O'Connor, and Remus, 1996: In their comprehensive evaluation study, they found that neural networks were more accurate than any other tested method when using quarterly and monthly data. Other methods were more accurate when using annual data; however, neural networks were more accurate than Box-Jenkins even with annual data.

Models

Models are defined as forecasts with explicit causal assumptions that may be mathematically stated. These models could also be known as rule-based forecasting, but at least one forecasting expert (Armstrong, 2001) reserved this term for forecasts of time series data. Techniques in the "model" classification include expert systems, econometric models, and structural models (e.g., the Bass 1969 model).

Expert Systems

Armstrong (2001) sometimes distinguished between judgmental bootstrapping and expert systems, but was inconsistent in his descriptions (e.g., on page 188 he stated bootstrapping is a "type of expert system," but on page 283 he introduced an article on expert systems by contrasting bootstrapping methods with expert systems). In this document, expert systems are systems that use a model of how an expert would act in making a forecast. Judgmental bootstrapping is a subset of expert systems that infers the rules an expert uses by reverse engineering those rules from the expert's results. Forecasters who desire to create expert systems that directly ask experts how they make their forecasts should ensure the availability of experts with a lot of time (Collopy, Adya, and Armstrong, 2001).
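Judgmental bootstrapping can be illustrated with a small sketch: the expert's past forecasts are regressed on the cues the expert saw, and the recovered linear rule is then applied to new cases. The cue names, numbers, and the linear functional form below are hypothetical assumptions used only for illustration.

```python
import numpy as np

# Hypothetical past cases seen by the expert: intercept term, launch price ($),
# relative-advantage score (1-10), and a marketing-intensity index.
cues = np.array([
    [1.0, 299.0, 7.0, 2.5],
    [1.0, 499.0, 5.0, 1.8],
    [1.0, 199.0, 8.0, 3.1],
    [1.0, 399.0, 6.0, 2.0],
    [1.0, 249.0, 9.0, 2.7],
])
# The expert's own first-year penetration forecasts (%) for those cases.
expert_forecasts = np.array([12.0, 6.0, 18.0, 8.0, 20.0])

# Infer the expert's implicit rule by ordinary least squares ("bootstrap" the expert).
weights, *_ = np.linalg.lstsq(cues, expert_forecasts, rcond=None)

# Apply the bootstrapped rule to a new case instead of asking the expert again.
new_case = np.array([1.0, 349.0, 7.0, 2.2])
print(round(float(new_case @ weights), 1))
```

Such a model reproduces the expert's policy without the expert's random inconsistency, which is one common explanation in this literature for why bootstrapping models often match or beat the experts they are derived from.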
In theory, expert systems should be most useful when experts are making repetitive forecasts (e.g., analyzing traffic patterns to determine where to put a fast-food restaurant) and when problems are semi-structured.5 After reviewing the expert system literature between 1977 and 1993, Wong and Monaco (1995) found that prediction was only the fifth most common use of expert systems (behind planning, monitoring, design, and implementation) and that there were not many research articles about the accuracy of expert systems in a forecasting context. When using expert systems as a replacement for judgmental forecasts, Collopy, Adya, and Armstrong (2001) recommended using a Turing test to check face validity. The following tables summarize some of the empirical findings on how expert systems compare to other forecasting methods.

5 Use econometric techniques for very structured problems and judgmental techniques for unstructured problems.

Table 12: Summary of Expert Systems vs. Judgmental Forecasts

Yntema and Torgerson, 1961: Found that bootstrapping resulted in an accuracy of .89 while the accuracy of judges was .84 in a simple evaluation of geometric shapes (180 judgments by 6 judges).

Kleinmuntz, 1967: Found expert systems to be more accurate than expert judgments in a counseling setting (the expert system was wrong 28.8% of the time vs. the judgmental error rate of 34.4%).

Goldberg, 1970: Developed bootstrapping models more accurate than 79 percent of the clinicians in a mental health context (123 cases).

Dawes, 1971: Found that bootstrapping predictions of the performance of incoming doctoral students were more accurate than the admissions committee's predictions (19 students).

Wiggins and Kohen, 1971: Found that 100% of the derived bootstrapping models were more accurate than the judgments of 98 experts in forecasting the GPA of incoming graduate students (110 judgments).

Michael, 1971: Developed an expert system that was better than the expert in forecasting catalog sales in terms of both unit sales and dollar sales.

Libby, 1976: Concluded that experts were more accurate than bootstrapping in predicting whether or not a large corporation would declare bankruptcy (60 companies).

Goldberg, 1976: Used Libby's (1976) data to show that Libby's results were due to severe skewness in the data. By correcting for this skewness, the revised bootstrapping model was more accurate than the experts 72% of the time (vs. the previous 23%).

Roose and Doherty, 1976: Despite some questionable methodology that violated accepted bootstrapping principles (i.e., they used stepwise regression), they found that bootstrapping was slightly more accurate in forecasting the success of life insurance agents than forecasts made by managers (200 judgments).

Ebert and Kruse, 1978: In forecasting future returns of securities, bootstrapping models were more accurate than financial analysts for 72% of the comparisons (15 new securities were evaluated by 5 analysts). Note: they also used stepwise regression.

Abdel-Khalik, Rashad and El-Sheshai, 1980: Bootstrapping models and lending officers were equally accurate in predicting loan defaults (28 loan officers).

Camerer, 1981: After reviewing the bootstrapping literature, Camerer concluded that the empirical evidence clearly showed that bootstrapping should improve expert judgments.

Dougherty, Ebert, and Callender, 1986: Developed bootstrapping models for three expert interviewers and predicted the future job performance of applicants.
The bootstrapping models were much better than two of the experts and tied the third (120 taped interviews).

Stewart et al., 1989: Found mixed results in comparing an expert system with seven meteorologists. The human judgments were slightly better at forecasting hail and the expert system was slightly better at forecasting severe hail.6

Silverman, 1992: Developed an expert system that helped military planners spot biases in their own forecasts. When using the expert system, new forecasts did not contain these biases (and presumably will be found to be more accurate).

Ashton, Ashton, and Davis, 1994: In an artificial advertising context, experts were required to forecast annual sales. Use of a bootstrapping model resulted in 6.4% fewer errors than the expert judgments (13 judges).

Reagan-Cirincione, 1994: Found expert systems to be much more accurate than judgments in two experiments (forecasting teachers' salaries and baseball team records).

Leonard, 1995: Developed an expert system for detecting bank fraud. The judges were better than this expert system (80% detection of actual frauds vs. 71%).

Smith et al., 1996: Found that an expert system used by British Gas was more accurate than human experts at forecasting short-term gas demand.

Ganzach, Kluger, and Klayman, 2000: Global judgments of military conscripts' probability of success were made by experts and by bootstrapping. Experts were slightly more accurate than bootstrapping (116 interviews). Note: success was judged by absence of failure.

6 In reviewing the literature on expert systems, there seems to be a tendency for expert systems to improve their accuracy on the more extreme forecasts (e.g., severe hail vs. hail).

Table 13: Summary of Expert Systems vs. Econometric Forecasts

Stewart et al., 1989: Found mixed results in comparing an expert system with econometric methods for forecasting hail. The econometric forecasts were slightly better at forecasting hail and the expert system was better at forecasting severe hail.

Moninger et al., 1991: Found mixed results in comparing several expert systems with several econometric models in a meteorological context.

Leonard, 1995: Found that an expert system for detecting fraud was more accurate than an econometric model (71% of actual frauds detected vs. 66%).

Econometric Forecasts

The distinction between econometric models and structural models is vague. Technically, it is difficult to create a definition that would differentiate the two techniques - which is one of the reasons against using the term econometrics as one of the four proposed forecasting classifications. In practice, econometrics usually refers to the use of regression analysis. As such, econometrics is a forecasting technique within the proposed model classification.

Table 14: Summary of Econometric Findings

Lutkepohl, 1991: Said the maximum number of variables should not be greater than the cube root of total observations.

Neter et al., 1996: "A general rule of thumb states that there should be at least 6 to 10 cases for every variable in the pool."

Grove and Meehl, 1996: Given a good measure of success and ample historical data, econometric approaches are virtually always more accurate than judgmental forecasts.
Allen and Fildes, 2001: After reviewing the literature - over 30 comparisons of judgmental and econometric forecasts - Allen and Fildes concluded that econometric models "appear to be gaining over extrapolative or judgmental methods, even for short-term forecasts, though much more slowly than their proponents had hoped."

Structural Models

Researchers have concluded that little empirical research has been done to investigate the comparative forecasting performance of demand forecasting in various settings (Armstrong, Brodie, and McIntyre, 1987; Meade and Islam, 2001). The following table lists various models that have been used to predict the adoption of an innovation.

Table 15: List of Growth Curve Models7

Gregg, Hossel, and Richardson, 1964: Modified Exponential
Gregg, Hossel, and Richardson, 1964: Logarithmic Parabola
Gregg, Hossel, and Richardson, 1964: Simple Logistic
Gregg, Hossel, and Richardson, 1964: Gompertz
Rogers, 1962: Cumulative Normal
Bain, 1963: Cumulative Lognormal
Bass, 1969: Bass Model
Bass, 1969: Extended Logistic
Bass, Krishnan, and Jain, 1994: Generalized Bass Model
Tanner, 1978: Log-logistic
Easingwood, Mahajan, and Muller, 1981: Nonsymmetric Responding Model
Bewley and Fiebig, 1988: The Flexible-Logistic (FLOG) Model: Inverse Power Transform (IPT)
Bewley and Fiebig, 1988: The Flexible-Logistic (FLOG) Model: Exponential (ELOG)
Bewley and Fiebig, 1988: The Flexible-Logistic (FLOG) Model: Box and Cox
Meade, 1985: Observation-Based Modified Exponential (Local Logistic)
Mar-Molinero, 1980: Auto-Regressive Error Term

7 Many of the growth models in this list were first tabulated by Meade and Islam (2001).

Summary of Literature Review

The review of the literature led to three main points of interest to this research. First, the consensus of forecasting experts was that no single forecasting method can obtain both accurate and valid forecasts over various conditions. In other words, various forecasting methods have unique strengths and weaknesses in the context of different conditions. Second, an enduring research question has been asked for decades: How can we demonstrate (empirically) some guidelines for the selection of forecasting approaches under different environmental conditions? This research addresses this question for pre-launch forecasts (i.e., forecasts made without the benefit of market data obtained from actually seeing the innovation in the market) for various innovation and price level contexts as described in Chapter 3. Third, a systematic way to organize the literature is proposed. The forecasting classification grid is based upon the work of earlier forecasters (largely Armstrong, 1985). The difference in approach may be due to the differing purpose of this research from that of earlier classification proposals.

Chapter 3

METHOD

This research determined which forecasting methods are most appropriate for forecasting consumer adoption of radical and really new technological innovations. This research also investigated the impact of pricing on these determinations. While the questions are general, this research focused on consumer electronic innovations and evaluated five well-established models of innovation diffusion. Two model variants were also evaluated. In this research, it was useful to visualize a quadrant consisting of two continuums - the level of innovation (radical vs. really new) and the price level (high vs. low). The terms same, horizontal, vertical, and opposite were used to describe how similar or different one innovation was from another.
If an innovation was from the same quadrant, this meant that the innovations shared both the same level of innovation and the same price level. If an innovation was said to be from a horizontal quadrant, then it belonged to a different innovation classification, but stayed within the same price level. Likewise, if an innovation was said to belong to a vertical quadrant, it had the same innovation classification, but had a different price level. Finally, if an innovation was in an opposite quadrant, then both the innovation and price levels were different. Figure 6 shows how these terms are used in reference to the Personal Computer (radical, high-price) innovation.

Figure 6: How Descriptive Terms (Same, Horizontal, Vertical, & Opposite) Are Used
Example: Forecasting PCs

Price Level   Radical Innovation        Really New Innovation
High          PCs                       Camcorders, Projection TVs, Satellite Receivers
              (same quadrant)           (horizontal quadrant)
Low           VCR, CD Players           Cordless Phones, Telephone Answering Devices
              (vertical quadrant)       (opposite quadrant)

These descriptive terms are used to separate innovations into four analogous groups. An analogous group is a collection of innovations that share both the same price level and innovation level. For example, VCRs, Cordless Phones, and Telephone Answering Devices belong to the same analogous group.

Hypotheses

While this research was largely exploratory research aimed at providing guidance for the three general research questions discussed in Chapter 1, some specific hypotheses were developed. These hypotheses were created to either provide confirmatory support for, or falsify, assumptions behind the research questions.

Hypothesis 1. Forecasts using parameters from the same quadrant for a dataset will be more accurate than forecasts using parameters from other quadrants.
  a. Forecasts using parameters from the same quadrant will be significantly more accurate (have less error) than forecasts using parameters from the opposite quadrant.
  b. Forecasts using parameters from the same quadrant will be significantly more accurate than forecasts using parameters from horizontal quadrants.
  c. Forecasts using parameters from the same quadrant will be significantly more accurate than forecasts using parameters from vertical quadrants.
  d. This will be most apparent in comparison to forecasts using parameters from opposite quadrants.
     i. zH1a > zH1b
     ii. zH1a > zH1c

Hypothesis 2. Forecasts using parameters from adjacent (horizontal and vertical) quadrants for a dataset will be more accurate than forecasts using parameters from opposite quadrants.
  a. Forecasts using parameters from a vertical quadrant will be significantly more accurate than forecasts using parameters from the opposite quadrant.
  b. Forecasts using parameters from a horizontal quadrant will be significantly more accurate than forecasts using parameters from the opposite quadrant.

Hypothesis 3. The level of innovation will have a greater impact on the accuracy of a forecast than the price level (i.e., forecasts using parameters from a vertical quadrant will be significantly more accurate than forecasts using parameters from horizontal quadrants).

Data Sources

In many cases, when an innovation was first made available is largely a matter of interpretation. For the purposes of this diffusion research, an innovation was considered to be first available when it met the following conditions: 1) The innovation had to be available to consumers nationwide.
2) The innovation had to be available as a complete product - not merely plans or parts to be assembled by a skilled hobbyist. 3) The innovation had to be free of burdensome regulations that would inhibit its adoption.

With the exception of the CD Player dataset, the CEA data started several years after the introduction of the product. Other sources were obtained to fill in the missing data wherever possible. In some cases, the missing data had to be partially extrapolated. These extrapolations are described for each innovation.

Personal Computers (PCs)

One could argue that the 1949 Simon was the first personal computer, although it was never sold as a product and thus fails to meet the criteria used in this research. Rather, the plans to the Simon were sold to hobbyists who built their own computer. The 1955 GENIAC was the first pre-assembled computer sold to consumers. It was followed by the Heathkit EC-1 (1959), the Honeywell Kitchen Computer8 (1966), the DEC PDP-8 desktop model (1968), the Arkay CT-650 (1969), the Imlac PDS-1 (1970), the Kenbak-1 (1972), the HP 9830 (1972), the French Micral (1973)9, the Scelbi-8H (1973), the Mark-8 (1974), the Altair (1975), the IBM 5100 (1975), the Pro Tech SOL Computer (1977), the Commodore PET (1977), the Apple II (1977)10, the Radio Shack TRS-80 (1977), and the 1981 introduction of the IBM PC. Several sources were used to compile this list, but the Blinkenlights (2002) timeline was especially useful.

8 The Honeywell Kitchen Computer even included a cutting board.
9 The Xerox Alto was also created in 1973, yet it cannot be considered a personal computer for this research since Xerox made its infamous decision not to market it.
10 The Apple I (1976) was sold as a motherboard only and may be considered a prototype since only about 200 were made.

In 1977 several new computers were made available to the public. Not only did these computers meet the innovation criteria used in this research, but all of them included a keyboard and output to a video display (e.g., a monitor or television). Thus 1977 was selected as the starting point for the diffusion of the personal computer, with personal computers defined as programmable devices complete with a keyboard for input and a video port for output.

The CEA dataset started in 1980; the data from 1977-1979 were created by a combination of extrapolation, various references, and judgment. Specifically, an average price of $1,000 was used for these three years based upon known prices for personal computers in 1977 and the CEA average price of $1,000 for 1980. While the CEA's data started in 1980, it did not show a penetration of 1% until 1981. Since the data was rounded, a zero percent penetration rate in 1980 meant that the consumer penetration was actually between 0% and .49%. Given that 500,000 units were sold in 1980 to businesses and households, a consumer household penetration rate of .4% was estimated for 1980.
In 1979, manufacturers first sold complete systems to consumers, 5,000 units were shipped including a $36,000 version by Scientific Atlanta that made the cover of the Nieman Marcus catalog. The CEA dataset started in 1986. The SBCA was able to provide information from 1979-1986, so there was perfect overlap between the two datasets. Both datasets included quantities for 1986 and the data matched perfectly. While the price points and unit penetration were available from SBCA, the consumer home penetration had to be derived from SBCA data. This was done by dividing unit sales by the number of households for all years except 1984 and 1985. Since the CEA consumer penetration started in 1986 at 1%, the penetration for earlier years were capped below 1%. .8% was used for 1984 and .9% was used for 1985. 50 CD Players Philips invented the Compact Disc and teamed with Sony to bring it to market. Since both firms are important members of the Consumer Electronic Association, the CEA dataset tracked the diffusion of CD players since its US introduction in 1983. During the first year of US sales, 30,000 CD players were sold along with 800,000 005 — almost 27 CDs sold for every player. Camcorders A camcorder is defined as an integrated camera that records video into a video cassette. According to the CEA, the first camcorder hit stores in May, 1983. Interestingly enough, it was a Beta camcorder. The CEA dataset starts with 1985. The data for years 1983-84 were created by a combination of extrapolation, various references, and judgment. Specifically, the average prices of $2,950 (1983) and $2,000 (1984) were chosen by looking at press releases, consumer reviews, and advertisements from 1983 and 1984. The consumer penetration data was derived by repeatedly halving the CEA consumer penetration data. Since the CEA showed that 1% of consumer households owned a camcorder in 1985, 0.5% household penetration was used for 1984 and 0.25% household penetration was used for 1983. 51 Projection Televisions (PTVs) According to the CEA, the first rear projection television (PTV) was sold in 1982 and this date is used for the diffusion study.11 The CEA dataset starts with 1984. The data for years 1982-83 were created by a combination of extrapolation, various references, and judgment. Specifically, the average prices of $2,177 (1982) and $2,073 (1983) were obtained using the consumer electronic industry rule of thumb of assuming an annual 5% price reduction and working backwards from the CEA average price of $1,974 in 198412. The consumer penetration data was derived by repeatedly halving the CEA consumer penetration data. Since the CEA showed that 1% of consumer households owned a projection television in 1984, 0.5% household penetration was used for 1983 and 0.25% household penetration was used for 1982. VCRs The first video cassette recorder for the home market was the 1972 AVCO Cartrivision System.13 To reinforce the cliche that those who forget history are doomed to repeat it, their business model was later reinvented by DIVX. Two types of cassettes were available. Black ones for recording — that could be reused — and red ones that could be rented. The red cassettes could only be viewed once and could only be rewound by a special machine owned by the company that offered rentals. By the time the Betamax product was released in ‘1 In point of fact, the first rear projection set was introduced by RCA in 1947 - the 648PTK. However, it suffered from a dim image and was a market failure. 
The first stand-alone VHS VCR in the US was available in 1977. The CEA dataset starts in 1974. The data for years 1972-73 were created by a combination of extrapolation, various references, and judgment. Specifically, the average prices of $1,600 (1972) and $1,000 (1973) were chosen by looking at press releases, consumer reviews, and advertisements from 1972 and 1973. Zero percent consumer penetration was assumed for 1972-1973, as the CEA data showed zero percent consumer penetration from 1974-1978.

13 The Sony U-matic (1970) was actually the first VCR and was initially intended for the home market. However, its costs were too great, so Sony decided to market it to corporations instead.

Cordless Phones and Telephone Answering Devices (TADs)

In 1976, the Federal Courts agreed with the FCC and permanently halted the practice of requiring consumers to insert special "safety" devices between telephones (and telephone devices such as answering machines and modems) and the phone lines, so long as these devices met FCC regulations. This decision put a stop to burdensome regulations that were dissuading consumers from connecting telephony innovations to their lines. While both cordless phones and telephone answering devices (TADs) predate 1976, their home penetration was insignificant (less than .5% of households). Thus, for the purposes of this research, 1976 was selected as the start of the diffusion process for these innovations.

The CEA datasets start with 1980 for cordless phones and 1982 for the TADs. The information from 1976 until the CEA data started was created by a combination of extrapolation, various references, and judgment. Specifically, the average prices were obtained using the same methods employed to extrapolate the missing PTV prices. The consumer penetration data was derived by repeatedly halving the CEA consumer penetration data, which measured consumer household penetration of cordless phones at 0% in 198014 and consumer household penetration of TADs at 1% in 1982.

14 Similar to the extrapolation used for the PC data, a household penetration of 0.4% was used for 1980.

Revised Classification of Innovations

While reading sources to aid in the extrapolation of the two years of data for the VCR case not included in the CEA dataset, the author discovered a consumer electronic innovation that is no longer in use. Video Tape Recorders (VTRs) were available to consumers in the sixties, and some were specifically aimed at home consumers. The existence of these products falsified the assumption that the VCR was the first innovation that consumers could purchase to record television shows. Thus, VCRs were a really new innovation, not a radical innovation.

Figure 7: Revised Classification of 8 Consumer Electronic Innovations for Consumers

Price Level   Radical Innovation                 Really New Innovation
High          PCs (1977-2000)                    Camcorders (1983-2000)
              Satellite Receivers (1979-2000)    Projection TVs (1982-2000)
Low           CD Players (1983-2000)             VCRs (1972-2000)
                                                 Cordless Phones (1976-2000)
                                                 Telephone Answering Devices (1976-2000)

Figure 7 shows the revised classification of the innovations in this research. The reclassification of the VCR provided a third innovation for low-priced, really new innovations.
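Given the quadrant terminology defined above and the assignments in Figure 7, the relationship between any pair of innovations can be determined mechanically. The sketch below is only an illustration of that lookup; the dictionary keys are shorthand labels, and the quadrant assignments simply restate Figure 7.

```python
# Innovation level and price level for each innovation, restating Figure 7.
QUADRANT = {
    "PC":                 ("radical",    "high"),
    "Satellite Receiver": ("radical",    "high"),
    "Camcorder":          ("really new", "high"),
    "Projection TV":      ("really new", "high"),
    "CD Player":          ("radical",    "low"),
    "VCR":                ("really new", "low"),
    "Cordless Phone":     ("really new", "low"),
    "TAD":                ("really new", "low"),
}

def relationship(target: str, source: str) -> str:
    """Return same / horizontal / vertical / opposite for a pair of innovations."""
    t_innov, t_price = QUADRANT[target]
    s_innov, s_price = QUADRANT[source]
    if (t_innov, t_price) == (s_innov, s_price):
        return "same"
    if t_price == s_price:      # different innovation level, same price level
        return "horizontal"
    if t_innov == s_innov:      # same innovation level, different price level
        return "vertical"
    return "opposite"

print(relationship("PC", "Satellite Receiver"))  # same
print(relationship("PC", "VCR"))                 # opposite (after the VCR reclassification)
```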
However, the reclassification also meant that only one low-priced, radical innovation was available for use in this research.

Models

Many diffusion models have been used in various contexts. Throughout the forecasting literature, one common refrain was repeatedly stressed - no single forecasting method was appropriate for every situation (Cetron and Ralph, 1971; Armstrong, 2001). While there were many well-known models from which to choose, the following seemed most represented in the literature (Table 16).

Table 16: Diffusion Models Initially Considered

Logarithmic Parabola (Gregg, Hossel, & Richardson, 1964)
Modified Exponential (Gregg, Hossel, & Richardson, 1964)
Observation-Based Modified Exponential (Meade, 1985)
Bass model (Bass, 1969)
Generalized Bass model (Bass, Krishnan, and Jain, 1994)
Simple Logistic (Gregg, Hossel, & Richardson, 1964)
Gompertz (Gregg, Hossel, & Richardson, 1964)
Extended Logistic (Bass, 1969)
Log-logistic (Tanner, 1978)
Flexible Logistic (FLOG) - Inverse Power Transform (Bewley & Fiebig, 1988)
FLOG - Box & Cox (Bewley & Fiebig, 1988)
FLOG - Exponential (Bewley & Fiebig, 1988)

From these models, it was desired to select a manageable number of diffusion models for the purposes of this research. Meade and Islam (2001) strongly recommended that "A reasonable initial set of models should include the [simple] logistic, Gompertz, and Bass models." Given the interest in price, the Generalized Bass model (price only) was appropriate. Some exploratory research was done with all of the models, and the FLOG Box & Cox seemed more robust within the consumer electronic context than the other models (APPENDIX A). In the process of setting up all the models, the author became intrigued by the Bass assumption that m should remain constant. In the market of interest, the number of US households is continually expanding. Therefore two variant models, a Bass variant and a Generalized Bass (Price) variant, were also developed.

Table 17: Diffusion Models Used in Research

Bass model (B)
Generalized Bass model - Price (GB)
Bass model variant (Bv)
Generalized Bass model (Price) variant (GBv)
Simple Logistic model (SL)
Gompertz model (G)
FLOG - Box & Cox model (BnC)

While supported by the literature and some exploratory research, the decision of which models to select for the research was based upon the author's judgment. As a check on this selection, two forecasting experts were consulted.15 After review, both experts concurred with the decision.

15 Professors Roger Calantone and Jon Bohlmann.

Bass Model (B)

The Bass 1969 model has been stated in many forms. This research used Lilien, Rangaswamy, and Van Den Bulte's (2000) transfiguration of Bass,

    x(t) = [p + q(X(t-1)/m)][m - X(t-1)],

as it is common in the literature and since Lilien et al. also provided a large list of Bass parameters.

Bass Model Variants

The Generalized Bass model (Bass, Krishnan, and Jain, 1994) was developed to consider the impact of price and advertising in forecasts. Since the datasets provided by the CEA were industry data, information on average pricing was available, but individual firms did not share their related advertising expenditures. Thus a Price-only variant of the Generalized Bass model was used.
Therefore a changing m variant was created by the author for both the Bass model and Generalized (Price) Bass model. The equation for the Bass X(r - I) m(t) model variant (Bv) used is x(r) = [ p + q( )][m(t) — X (r - 1)] and the equation for the Generalized Bass (Price) model variant (GBv) is x(r) =[p+q(Xng'(t’)'))][m(r)—X(r-011 + B( PK'I),;((1:()'"))]. The author investigated changing m variants for the other models, but given how the other three models were structured, allowing m to change with I had zero impact on the results. Simple Logistic (SL) & Gompertz (G) The Simple Logistic and Gompertz models (Gregg, Hossel & Richardson, 1964) are some of the earliest and simplest diffusion models. Meade & Islam’s (2001) transfigurations were used. The equation for the Simple Logistic is __ m 1 + c exp(—bt) X (r) and the equation for the Gompertz is X (r) = m exp(—c (exp(—bt))) . 58 Flexible logistic (FLOG) - Box and Cox (BnC) Bewley and Fiebig (1988) developed several flexible logistic models that used the base equation Xi = m . Multiple variants use different I + c exp— (B(t)) (1+r)"—1 formulas for B(t). The Box and Cox model uses B(r) = (b ). The Box and Cox model has a tendency for one of its variables (c) to tend to infinity in some cases. Since using such extreme values would cause the parameters to give poor results for other cases, a cap of 100,000 was placed on the 0 variable in this research. This value allowed the BnC model to be viable with all the datasets. Selected Models and Proposed Forecasting Classification Grid Using the proposed classification grid, all seven forecasting methods are models (Figure 8). Most of the models barely meet the minimum definition of a model, but all of these forecasting methods explicitly express their causal assumptions mathematically. Since the Bass Models also provide a deeper theoretical reasoning as to why they work, these models are farther to the right on the Naive/Causal continuum. 59 Figure 8: Classification of Research Models Empirical Models Simple Logistic, Gompertz, & .Box & Cox eBass Models Models Naive Causal Opinion Verification of Models Once the Lilien, Rangaswamy, and Van Den Bulte’s (2000) Bass model was working, an attempt was made to verify it by comparing it to Meade and Islam’s (2001) transfigured Bass formula: x(.) = pm + (qp)X(r _ I) -%[X(r _ l)2 ]. The results did not match. A third Bass model was created, based upon the original article (Bass, 1969) with the equation x(.) = pm + (q — pm, - r) ——%[X(, - 02]. The results from this model perfectly matched that of the Lilien et al. transfiguration, validating both models and served as an indication that there was a problem with the Meade and Islam variant that turned out to be a typographical mistake.16 ‘6 There should be a minus sign between the q and p in the Islam and Meade paper. 60 Since the Bass model was going to serve as the benchmark for the other models, additional testing was done to ensure the Bass models were working as expected. The innovations listed by Lilien et al. (2000) overlapped with four of the datasets being used in this study. All data sets used penetration data for tracking diffusion. By reducing the CEA datasets to the same periods covered by the Lilien et al. datasets, it was possible to compare the Bass parameters listed by Lilien et al. to those obtained by this research. As shown in Table 18, the parameters obtained by this method differed from those described by Lilien et all. Table 18: Minding p's and q's Lilien et al. 
                       Lilien et al. (2000)       Gentry (2003)
Product              Years      p      q      m      p      q      m
Camcorders           1986-96    0.044  0.304  30.5   0.022  0.035  100
CD Player            1986-96    0.055  0.378  29.6   0.034  0.246  100
Cordless Telephone   1984-96    0.004  0.338  67.6   0.034  0.136  100
VCR                  1981-94    0.025  0.603  76.3   0.029  0.299  100

It would be understandable for some of the parameters to differ since the data came from different sources. However, it seemed unlikely that all four datasets would significantly differ. After analyzing the Bass model itself, the conundrum was resolved. Changing the size of m within the Bass formula, when using discrete time notation, does not directly affect the percentage of those who adopt so long as m remains constant as specified. It plays a significant role when one is interested in the number of units to be purchased, but has no impact on the percentage of adopters. If one multiplies the percentage of adopters obtained by the Lilien et al. parameters by the m given by Lilien et al., the results approach those obtained by the Gentry parameters in two of the four cases.

Table 19: Camcorder Diffusion with Lilien Adjustment

Year   Actual   Lilien et al.   Gentry   Lilien Adjusted
1986     2%          4%           2%           1%
1987     4%         10%           4%           3%
1988     5%         17%           7%           5%
1989     8%         24%           9%           7%
1990    11%         33%          11%          10%
1991    15%         43%          14%          13%
1992    18%         53%          16%          16%
1993    19%         63%          18%          19%
1994    21%         71%          21%          22%
1995    22%         79%          23%          24%
1996    25%         85%          25%          26%

Using both the Lilien et al. coefficients and the adjustment for m leads to very similar results compared to the Gentry parameters for both the Camcorder and VCR products.

Figure 9: VCR Diffusion with Lilien Adjustment
[Line chart of VCR household penetration by year, comparing the Lilien-adjusted curve with the curve based on the Gentry (CEA) parameters.]

Even with the adjustments, the Bass parameters obtained with the Gentry datasets were significantly different from those obtained with the Lilien et al. datasets for cordless phones and CD players. However, Figure 10 clearly demonstrates that this is due to differences in the data, as the Lilien et al. curves greatly differ from the actual data provided by the Consumer Electronics Association. Thus, it appears that the Bass models are valid in all cases and any true discrepancies between the Lilien results and the results of this research are due to differences in the data.

Figure 10: Cordless Phone Diffusion
[Line chart of cordless phone household penetration, 1984-1996, comparing the CEA data with the Lilien and Lilien-adjusted curves.]

The other models used in this research were basic implementations of standard formulas. Perhaps because they were not as famous as the Bass model, it was not necessary to choose from many variants. After reviewing these additional models to ensure that they were working as expected, their results were compared to the Bass model. As expected, all models gave results similar to the Bass model. No additional validation procedures were performed.

Process

Curve Fitting

In order to determine which of the seven diffusion models had the potential to work best, all seven models were run with the eight innovation datasets provided by the CEA. Only the CEA datasets were used, as they contained perfect (non-extrapolated) information for these fifty-six models. The curve fitting exercise was then duplicated with the extended datasets. The extended datasets cover the time period of interest for the forecasting.
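To illustrate the curve-fitting step, the sketch below generates the discrete Bass recursion given above and searches for the p and q that minimize the sum of squared errors against a cumulative penetration series. The penetration values are hypothetical, m is fixed at 100 (percent of households, as in the Gentry parameters above), and the optimizer choice is an assumption; this is only one way such a fit could be carried out.

```python
import numpy as np
from scipy.optimize import minimize

def bass_cumulative(p, q, m, periods):
    """Discrete Bass recursion: x(t) = [p + q*X(t-1)/m] * [m - X(t-1)]."""
    X, curve = 0.0, []
    for _ in range(periods):
        x = (p + q * X / m) * (m - X)   # incremental adoption this period
        X += x                          # cumulative adoption
        curve.append(X)
    return np.array(curve)

def sse(params, actual, m=100.0):
    p, q = params
    return np.sum((bass_cumulative(p, q, m, len(actual)) - actual) ** 2)

# Hypothetical cumulative household penetration (%) by year.
actual = np.array([0.4, 1.0, 2.1, 4.0, 7.2, 11.5, 16.4, 21.8, 27.0])

fit = minimize(sse, x0=[0.01, 0.3], args=(actual,), method="Nelder-Mead")
p_hat, q_hat = fit.x
print(round(p_hat, 4), round(q_hat, 4), round(sse(fit.x, actual), 4))
```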
Forecasting

The model parameters obtained through the extended curve-fitting procedures were used to create the forecasts. The parameters from each of the 8 innovations were used to forecast the diffusion of the other 7 innovations. This was done for each of the seven models. Thus, a total of 392 forecasts were created. As part of the forecasting analysis, it was clear that the Generalized Bass models were not as well suited for diffusion forecasts as the other models (see APPENDIX B), so the GB models were not used for the quadrant analysis.

Hypotheses Testing (Quadrant Analysis)

Using the sum of squared errors obtained by the forecasting models, the results for each forecast were used to compare the relative importance of price level and innovation type. This was done in two ways. First, each forecasting method was reviewed as a whole and segmented by the two price levels and two innovation levels.17 Then the specific hypotheses were tested by seeing how many results predicted by the hypotheses were actually correct. This provided two distinct methods of looking at the price levels, innovation levels, and forecast method.

17 It may prove helpful to have Figure 6 (How Descriptive Terms - Same, Horizontal, Vertical, & Opposite - Are Used) at hand.

While analyzing this information, it became clear that forecasts based upon the PC parameters did not work as well as the parameters from other innovations. A posteriori, this may be because PCs may have been purchased for reasons other than home entertainment. Many may have been purchased for home offices. The PC may also be used for educational purposes as well as entertainment. The home office purchases and educational considerations may explain why the PC diffusion differed from that of the other innovations. Given the unique characteristics of the personal computer diffusion curve, the quadrant analyses were repeated without using the PC dataset.

Chapter 4

RESULTS

Table 20: Model Abbreviations

B    Bass model
Bv   Bass model variant
GB   Generalized Bass model (Price)
GBv  Generalized Bass model (Price) variant
SL   Simple Logistic model
G    Gompertz model
BnC  Box and Cox model

Potential Fit of Models

After determining and using the optimal parameters for the seven models, the sums of squared errors (SSE) were obtained by subtracting the curve-fitting results from the actual results in order to show how well each model did in comparison to the others for each innovation. One can make a case for measuring the best and worst models by either the total SSE (Table 21) or by their cumulative placement rankings (Table 22).

Table 21: Curve Fitting Results (e2 of each model)

Innovation (data starts)    B      Bv     GB     GBv    SL     G      BnC
PCs (1980)                  0.011  0.012  0.007  0.007  0.016  0.015  0.013
Sat. Receivers (1986)       0.001  0.001  0.001  0.001  0.001  0.001  0.001
VCRs (1974)                 0.074  0.063  0.036  0.029  0.055  0.017  0.027
CD Players (1983)           0.038  0.033  0.035  0.031  0.047  0.016  0.008
Camcorders (1985)           0.002  0.002  0.002  0.002  0.007  0.004  0.003
PTVs (1984)                 0.000  0.000  0.000  0.000  0.001  0.000  0.001
Cordless Phones (1980)      0.005  0.005  0.005  0.005  0.007  0.012  0.005
TADs (1982)                 0.021  0.018  0.011  0.011  0.042  0.012  0.006
Total:                      0.153  0.135  0.097  0.087  0.177  0.077  0.064

Table 22: Curve Fitting - Comparative Placement

Innovation (data starts)    B   Bv  GB  GBv  SL  G   BnC
PCs (1980)                  3   4   1   2    7   6   5
Receivers (1986) 4 5 2 3 1 6 7 VCRs (1974) 7 6 4 3 5 1 2 CD Players (1983) 6 4 5 3 7 2 1 Camcorders (1985) 3 4 1 2 7 6 5 PWS (1984) 5 4 3 2 7 1 6 Cordless Phones (1980) 4 2 3 1 6 7 5 TADs (1982) 6 5 2 3 7 4 1 Total: 38 34 21 19 47 33 32 Judging by total SSE, the Box and Cox model is the best potential model (.064) given perfect information. However, if one uses the comparative placement method, the Generalized Bass variant is the best potential model. In either case, the Simple Logistic model is clearly the worse potential model. However, it is important to note that even the Simple Logistic model only had a total SSE of 0.177 for all eight innovations. Since this was a curve-fitting exercise, not a forecast, the accuracy of the various diffusion models is not surprising. At the .05 level of testing, there were no significant differences between any of the seven models. 67 Optimal Parameters The curve fitting exercise was duplicated with the extended datasets to determine the optimal parameters for each model. Table 23: Curve Fitting - Optimized Parameters for 8 Model Innovation p q PCs (1977-2000) 0.0076 0.1267 Satellite Receivers (1979-2000) 0.0003 0.2604 VCRs (1972—2000) 0.0014 0.3554 CD Players (1983-2000) 0.0170 0.2230 Camcorders (1983-2000) 0.0088 0.1329 PTVs (1982-2000) 0.0054 0.0515 Cordless Phones (1976-2000) 0.0039 0.2313 Telephone Answering Devices (1976-2000) 0.0049 0.2175 Table 24: Curve Fitting - Optimized Parameters for Bv Model Innovation p q PCs (1977-2000) 0.0076 0.1453 Satellite Receivers (1979-2000) 0.0003 0.2771 VCRs (1972-—2000) 0.0013 0.3871 CD Players (1983-2000) 0.0164 0.2494 Camcorders (1983-2000) 0.0087 0.1499 PTVs (1982-2000) 0.0054 0.0651 Cordless Phones (1976-2000) 0.0038 0.2552 Telephone Answering Devices (1976-2000) 0.0048 0.2418 Table 25: Curve Fitting - Optimized Parameters for GB Model Innovation p q B PCs (1977-2000) 0.0075 0.1401 -1 .5073 Satellite Receivers (1979-2000) 0.0005 0.2586 1.0531 VC Rs (1972-—2000) 0.0017 0.2243 -8.5919 CD Players (1983-2000) 0.0160 0.2603 1 .5604 Camcorders (1983-2000) 0.0023 0.1 195 -8.9563 PTVs (1982-2000) 0.0060 0.0547 4.2393 Cordless Phones (1976-2000) 0.0041 0.2360 0.7047 Telephone Answering Devices (1976-2000) 0.0053 0.2188 0.4371 68 Table 26: Curve Fitting - Optimized Parameters for GBv Model Innovation p q B PCs (1977-2000) 0.0074 0.1604 -1.5360 Satellite Receivers (1979-2000) 0.0005 0.2747 1.0545 VCRs (1972-2000) 0.0017 0.2530 -7.7575 CD Players (1983-2000) 0.0154 0.2889 1.5009 Camcorders (1983-2000) 0.0022 0.1319 -9.1342 PTVs (1982-2000) 0.0059 0.0691 3.5670 Cordless Phones (1976-2000) 0.0039 0.2607 0.7328 Telephone Answering Devices (1976-2000) 0.0036 0.2352 -1.6927 Table 27: Curve Fitting - Optimized Parameters for SL Model Innovation b c PCs (1977-2000) 0.1599 35.2554 Satellite Receivers (1979-2000) ‘ 0.2403 1016.7619 VCRs (1972-2000) 0.3705 632.6383 CD Players (1983-2000) 0.2839 30.4163 Camcorders (1983-2000) 0.1828 38.4293 PTVs (1982-2000) 0.1236 54.1284 Cordless Phones (1976-2000) 0.2431 95.7757 Telephone Answering Devices (1976-2000) 0.2377 78.5701 Table 28: Curve Fitting - Optimized Parameters for G Model Innovation b c PCs (1977-2000) 0.0865 4.9556 Satellite Receivers (1979-2000) 0.0856 12.1903 VCRs (1972-—2000) 0.2566 55.7292 CD Players (1983-2000) 0.1877 6.2126 Camcorders (1983-2000) 0.0898 4.6937 PTVs (1982-2000) 0.0458 4.4081 Cordless Phones (1976-2000) 0.1530 1 1 .5618 Telephone Answering Devices (1976-2000) 0.1543 1 1.0536 69 Table 29: Curve Fitting - Optimized Parameters for BnC Model Innovation b 
Innovation                                b        c            k
PCs (1977-2000)                           0.8919   224.3901     0.3748
Satellite Receivers (1979-2000)           1.2582   26500.0001   0.4417
VCRs (1972-2000)                          2.2013   100000.0000  0.3718
CD Players (1983-2000)                    6.3438   100000.0000  -0.2766
Camcorders (1983-2000)                    2.7401   2118.1201    -0.0818
PTVs (1982-2000)                          2.4619   1560.3284    -0.1931
Cordless Phones (1976-2000)               0.7533   534.4088     0.6042
Telephone Answering Devices (1976-2000)   3.8056   100000.0000  0.0193

18 As discussed in Chapter 3, an upper limit of 100,000 was used for variable c.

Actual Fit of Models (Forecasting)

For the purposes of forecasting the consumer adoption of innovations, the Generalized Bass models were not as reliable as the other five diffusion models (APPENDIX B). Therefore, only the results of the other five models are presented here. For each of the eight innovations, forecasts were created by using the optimal parameters of the other seven innovations. The results for the five diffusion models still of interest were tabulated by both the sum of squared errors and the comparative placement method (Tables 30 to 45 and Tables 56 to 64).

Table 30: Personal Computer Forecasting Results
PC forecasts, using coefficients optimized for:   e2 of B   e2 of Bv   e2 of SL   e2 of G   e2 of BnC
Satellite Receivers (1979-2000)                   0.983     0.993      0.983      1.023     1.000
VCRs (1972-2000)                                  0.803     0.869      0.834      0.849     0.804
CD Players (1983-2000)                            2.770     2.625      2.889      2.620     2.369
Camcorders (1983-2000)                            0.051     0.042      0.100      0.039     0.028
PTVs (1982-2000)                                  0.674     0.686      0.581      0.635     0.716
Cordless Phones (1976-2000)                       0.262     0.272      0.271      0.282     0.273
Telephone Answering Devices (1976-2000)           0.309     0.323      0.328      0.346     0.338
Total                                             5.865     5.822      6.007      5.809     5.543

Table 31: PC Forecasts - Comparative Results
Ranking of PC forecasts, using coefficients optimized for:   B    Bv   SL   G    BnC
Satellite Receivers (1979-2000)                              1    3    2    5    4
VCRs (1972-2000)                                             1    5    3    4    2
CD Players (1983-2000)                                       4    3    5    2    1
Camcorders (1983-2000)                                       4    3    5    2    1
PTVs (1982-2000)                                             3    4    1    2    5
Cordless Phones (1976-2000)                                  1    3    2    5    4
Telephone Answering Devices (1976-2000)                      1    2    3    5    4
Total                                                        15   23   21   25   21

The Box and Cox model performed the best overall for forecasting PC diffusion, with an SSE of 5.543 and a tie for second in the comparative results.
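All of the forecasting tables in this chapter rely on the same two scoring devices: the sum of squared errors (SSE) between forecast and actual penetration, and the comparative placement rank (1 = best) awarded within each row and then summed down each column. The sketch below is an illustrative reconstruction rather than code from the dissertation; the example values are the VCR row of Table 30.

```python
import numpy as np

def sse(actual, forecast):
    """Sum of squared errors between actual and forecast penetration series."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return float(np.sum((actual - forecast) ** 2))

def comparative_placement(sse_by_model):
    """Rank the models 1..k (1 = lowest SSE) for a single forecast;
    ties, which do occur in these tables, are broken here by insertion order."""
    ordered = sorted(sse_by_model, key=sse_by_model.get)
    return {model: rank for rank, model in enumerate(ordered, start=1)}

# SSEs of the five models forecasting PCs from the VCR parameters (Table 30)
errors = {"B": 0.803, "Bv": 0.869, "SL": 0.834, "G": 0.849, "BnC": 0.804}
print(comparative_placement(errors))   # {'B': 1, 'BnC': 2, 'SL': 3, 'G': 4, 'Bv': 5}
# Summing these ranks over the seven analog innovations gives the "Total" row of
# Table 31; a lower total finish score indicates a better model.
```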
Table 32: D88 Satellite Receiver Forecasting Results Satellite Receiver Forecasts 62 of 62 of e2 of 62 of e2 of B Bv SL G BnC Using coefficients optimized for 0.773 0.792 0.749 0.792 0.788 PCs (1977-2000) Using coefficients optimized for 2.205 2.386 2.240 2.246 2.230 VC Rs (1972-2000) Using coefficients optimized for 5.885 5.773 6.043 5.724 5.432 CD Players (1983-2000) Using coefficients optimized for 1.102 1.080 1.208 1.059 0.976 Camcorders (1 983-2000) Using coefficients optimized for 0.062 0.061 0.067 0.064 0.061 PTVs (1 982-2000) Using coefficients optimized for 1.470 1.529 1.476 1.540 1.503 Cordless Phones (1976-2000) Using coefficients optimized for 1.647 1.717 1.656 1.724 1.735 Telephone Answering Devices (1 976-2000) Total 13.145 13.338 13.440 13.150 12.726 Table 33: DBS Satellite Receiver Forecasts - Comparative Results Ranking of Satellite Receiver Forecasts B Bv SL G BnC Using coefficients optimized for 2 4 1 5 3 PCs (1 977-2000) Using coefficients optimized for 1 5 3 4 2 VC Rs (1 972-2000) Using coefficients optimized for 4 3 5 2 1 CD Players (1983-2000) Using coefficients optimized for 4 3 5 2 1 Camcorders (1 983-2000) Using coefficients optimized for 3 2 5 4 1 PTVs (1982-2000) Using coefficients optimized for 1 4 2 5 3 Cordless Phones (1976-2000) Using coefficients optimized for 1 3 2 4 5 Telephone Answering Devices (1 976-2000) Total 16 24 23 26 16 The Box and Cox Model performed the best overall for forecasting the diffusion of satellite receivers with a SSE of 12.726 and tying for first in the comparative results. 72 Table 34: CD Player Forecasting Results CD Player Forecasts e2 of e2 of e2 of 62 of 92 of B Bv SL G BnC Using coefficients optimized for 1.505 1.476 1.578 1.478 1.485 PCs (1977-2000) Using coefficients optimized for 3.407 3.405 3.406 3.426 3.420 Satellite Receivers (1979-2000) Using coefficients optimized for 0.076 0.091 0.099 0.177 0.122 VC Rs (1 972-—2000) Using coefficients optimized for 1.233 1.232 1.260 1.228 1.215 Camcorders (1983-2000) Using coefficients optimized for 2.636 2.634 2.674 2.653 2.625 PTVs (1 982-2000) Using coefficients optimized for 1.309 1.269 1.332 1.311 1.321 Cordless Phones (1976-2000) Using coefficients optimized for 1.141 1.096 1.182 1.175 1.141 Telephone Answering Devices (1 976-2000) Total 11.345 11.235 11.578 11.464 11.337 Table 35: CD Player ForeCasts - Comparative Results Ranking of CD Player Forecasts B Bv SL G BnC Using coefficients optimized for 4 5 2 3 PCs (1977-2000) Using coefficients optimized for 3 2 5 4 Satellite Receivers (1979-2000) Using coefficients optimized for 2 3 5 4 VC Rs (1 972-2000) Using coefficients optimized for 4 3 5 2 Camcorders (1983-2000) Using coefficients optimized for 3 2 5 4 PTVs (1 982-2000) Using coefficients optimized for 2 5 3 4 Cordless Phones (1976-2000) Using coefficients optimized for 3 5 4 2 Telephone Answering Devices (1 976-2000) Total 20 1 1 30 25 19 The Bass Model variant performed the best overall for forecasting the diffusion of CD players with a SSE of 11.235 and placing first in the comparative results. 
73 Table 36: Camcorder Forecasting Results Camcorder Forecasts e2 of B e2 of Bv e2 of SL e2 of G e2 of BnC Using coefficients optimized for PCs (1977-2000) Using coefficients optimized for Satellite Receivers (1979-2000) Using coefficients optimized for VC Rs (1972—2000) Using coefficients optimized for CD Players (1983-2000) Using coefficients optimized for PTVs (1 982-2000) Using coefficients optimized for Cordless Phones (1976-2000) Using coefficients optimized for Telephone Answering Devices (1 976-2000) Total 0.018 0.564 0.076 1.241 0.284 0.020 0.021 2.229 0.015 0.564 0.091 1.244 0.283 0.022 0.026 2.249 0.030 0.564 0.099 1.286 0.295 0.023 0.024 2.329 0.016 0.571 0.177 1.241 0.288 0.054 0.058 2.409 Table 37: Camcorder ForeCasts - Comparative Results Ranking of Camcorder 0.016 0.569 0.122 1.218 0.281 0.032 0.053 2.294 BnC Forecasts 3 EV SL G Using coefficients optimized for 4 PCs (1 977-2000) Using coefficients optimized for 3 1 2 5 4 Satellite Receivers (1979-2000) Using coefficients optimized for 1 2 3 5 4 VC Rs (1972-—2000) Using coefficients optimized for 3 4 5 2 1 CD Players (1983-2000) Using coefficients optimized for 3 2 5 4 1 PTVs (1 982—2000) Using coefficients optimized for 1 2 3 5 4 Cordless Phones (1976-2000) Using coefficients optimized for 1 3 2 5 4 Telephone Answering Devices (1 976-2000) Total 16 15 25 28 21 The Bass Model and the Bass variant performed the best overall for forecasting the diffusion of camcorders with respective SSEs of 2.229/2.249 and placements of second/first in the comparative results. 74 Table 38: Projection Television Forecasting Results PTV Forecasts 92 of 92 of 92 of 92 of 02 of 8 EV SL G BnC Using coefficients optimized for 0.219 0.230 0.200 0.234 0.230 PCs (1977-2000) Using coefficients optimized for 0.057 0.057 0.057 0.060 0.059 Satellite Receivers (1979-2000) Using coefficients optimized for 0.618 0.709 0.644 0.721 0.682 VC Rs (1972—2000) Using coefficients optimized for 3.186 3.168 3.276 3.155 3.067 CD Players (1983-2000) Using coefficients optimized for 0.360 0.357 0.375 0.357 0.345 Camcorders (1 983-2000) Using coefficients optimized for 0.423 0.454 0.422 0.496 0.449 Cordless Phones (1976-2000) Using coefficients optimized for 0.518 0.557 0.511 0.586 0.593 Telephone Answering Devices (1 976-2000) Total 5.382 5.531 5.486 5.610 5.426 Table 39: PTV Forecasts - Comparative Results Ranking of PTV Forecasts B Bv SL 6 BnC Using coefficients optimized for 2 4 1 5 3 PCs (1977-2000) Using coefficients optimized for 3 2 1 5 4 Satellite Receivers (1979-2000) Using coefficients optimized for 1 4 2 5 3 VCRs (1972--2000) Using coefficients optimized for 4 3 5 2 1 CD Players (1983-2000) Using coefficients optimized for 4 2 5 3 1 Camcorders (1 983-2000) Using coefficients optimized for 2 4 1 5 3 Cordless Phones (1976-2000) Using coefficients optimized for 2 3 1 4 5 Telephone Answering Devices (1 976-2000) Total 18 22 16 29 20 The Bass model performed the best overall for forecasting the diffusion of projection televisions with a SSE of 5.382 and placing second in the comparative results. 
75 Table 40: Video Cassette Recorder Forecasting Results VCR Forecasts Using coefficients optimized for PCs (1977-2000) Using coefficients optimized for Satellite Receivers (1979-2000) Using coefficients optimized for CD Players (1983-2000) Using coefficients optimized for Camcorders (1983-2000) Using coefficients optimized for PTVs (1 982-2000) Using coefficients optimized for Cordless Phones (1976-2000) Using coefficients optimized for Telephone Answering Devices (1 976-2000) Total Ranking of VCR Forecasts Using coefficients optimized for PCs (1977—2000) Using coefficients optimized for Satellite Receivers (1979-2000) Using coefficients optimized for CD Players (1983-2000) Using coefficients optimized for Camcorders (1 983-2000) Using coefficients optimized for PTVs (1 982-2000) Using coefficients optimized for Cordless Phones (1976-2000) Using coefficients optimized for Telephone Answering Devices (1 976-2000) s2 of 3 1.176 4.450 1.653 0.798 4.908 0.232 0.224 B 2 1 e2 of Bv 1.268 4.601 1.470 0.928 5.027 0.252 0.237 Bv 5 3 e2 of SL 1.107 4.492 1.688 0.556 4.124 0.224 0.202 SL 1 2 e2 of G 1.266 5.081 1.624 0.994 4.634 0.229 0.196 13.518 13.848 12.449 14.041 Table 41: VCR Forecasts - Comparative Results G 4 5 N-kw e2 of BnC 1.255 4.744 1.586 1.376 5.250 0.219 0.216 14.658 BnC 3 4 Total 20 26 14 22 23 The Simple Logistic model performed the best overall for forecasting the diffusion of VCRs with a SSE of 12.449 and placing first in the comparative results. 76 Table 42: Cordless Phone Forecasting Results Cordless Phone Forecasts Using coefficients optimized for PCs (1977-2000) Using coefficients optimized for Satellite Receivers (1979-2000) Using coefficients optimized for VCRs (1 972-2000) Using coefficients optimized for CD Players (1983-2000) Using coefficients optimized for Camcorders (1983-2000) Using coefficients optimized for PTVs (1 982-2000) Using coefficients optimized for Telephone Answering Devices (1 9762000) Total 92 of B 0.305 2.330 0.174 1.784 0.150 2.007 0.015 6.773 e2 of Bv 0.313 2.358 0.195 1.656 0.175 2.034 0.014 6.751 e2 of SL 0.297 2.333 0.184 1.860 0.079 1.770 0.015 6.546 e2 of G 0.316 2.438 0.189 1.699 0.203 1.911 0.019 6.787 Table 43: Cordless Phone Ferecasts - Comparative Results Ranking of Cordless Phone Forecasts G e2 of BnC 0.315 2.377 0.165 1.562 0.319 2.107 0.019 6.868 Using coefficients optimized for PCs (1977-2000) Using coefficients optimized for Satellite Receivers (1979-2000) Using coefficients optimized for VCRs (1 972--2000) Using coefficients optimized for CD Players (1983-2000) Using coefficients optimized for Camcorders (1983-2000) Using coefficients optimized for PTVs (1 982-2000) Using coefficients optimized for Telephone Answering Devices (1 976-2000) NW Total 16 21 16 27 25 The Simple Logistic model performed the best overall for forecasting the diffusion of cordless phones with a SSE of 6.546 and tying for first in the comparative results. 
77 Table 44: Telephone Answering Device Forecasting Results TAD Forecasts e2 of 02 of 92 of 02 of 82 of 8 EV SL G BnC Using coefficients optimized for 0.378 0.384 0.381 0.376 0.377 PCs (1 977-2000) Using coefficients optimized for 2.511 2.540 2.514 2.609 2.553 Satellite Receivers (1979-2000) Using coefficients optimized for 0.195 0.204 0.196 0.170 0.160 VC Rs (1 972--2000) Using coefficients optimized for 1.622 1.494 1.681 1.536 1.406 CD Players (1983-2000) Using coefficients optimized for 0.203 0.228 0.135 0.246 0.356 Camcorders (1983-2000) Using coefficients optimized for 2.136 2.164 1.909 2.041 2.232 PTVs (1 982-2000) Using coefficients optimized for 0.055 0.051 , 0.055 0.019 0.037 Cordless Phones (1976-2000) Total 7.147 7.106 6.919 7.010 7.126 Table 45: TAD Forecasts — Comparative Results Rankimf TAD Forecasts 8 EV SL G BnC Using coefficients optimized for 3 5 4 1 2 PCs (1977-2000) Using coefficients optimized for 1 3 2 5 4 Satellite Receivers (1979-2000) Using coefficients optimized for 3 5 4 2 1 VC Rs (1 972—2000) Using coefficients optimized for 4 2 5 3 1 CD Players (1983-2000) Using coefficients optimized for 2 3 1 4 5 Camcorders (1983-2000) Using coefficients optimized for 3 4 1 2 5 PTVs (1 982-2000) Using coefficients optimized for 4 3 5 1 2 Cordless Phones (1976-2000) Total 20 25 22 1 8 20 The Gompertz model performed the best overall for forecasting the diffusion of telephone answering devices with a SSE of 7.010 (second best) and placing first in the comparative results. 78 Hypotheses Testing (Quadrant Analysis) The primary purpose of this research is to provide guidance on which diffusion models should be used in various conditions. While the previous set of tables looked at the forecasts for each innovation, the following set of tables looks at of the forecasts as a whole and then as segments. Table 46: All Eight Innovations - Comparative Results 62 of e2 of e2 of e2 of e2 of B Bv SL G BnC Sum of e2 85.4 85.9 64.8 66.3 88.0 Total finish score (lower is better) 141 167 167 200 165 Rankings by e2 sums 2 3 1 5 4 Rankings by finish position 1 3 3 5 2 The Bass model performed the best overall for forecasting the diffusion of all innovations with a SSE of 65.4 (second best) and placing first in the comparative results. However, the results are not statistically significant. Table 47: Radical Innovations - Comparative Results Personal Computers, Satellite e2 of e2 of e2 of e2 of e2 of Receivers, CD Players B Bv SL G BnC Sum of e2 30.4 30.4 31.0 30.4 29.8 Total finish score (lower is better) 51 58 74 76 56 Rankings by e2 sums 2 3 5 4 1 Rankings by finish position 1 3 4 5 2 The Bass model and the Box and Cox model performed the best overall for forecasting the diffusion of radical innovations with respective SSEs of 30.4/29.6 and placements of first/second in the comparative results. Table 48: RadicaIIHigh Priced Innovations - Comparative Results e2 of e2 of e2 of e2 of e2 of PCs, Satellite Receivers B Bv SL G BnC Sum of e2 19.0 19.2 19.4 19.0 18.3 Total finish score (lower is better) 31 47 44 51 37 Rankings by e2 sums 3 4 5 2 1 Rankings by finish position 1 4 3 5 2 79 The Box and Cox model performed the best overall for forecasting the diffusion of radical, high-priced innovations with a SSE of 18.3 and placing second in the comparative results. 
Table 49: RadicaIILow Priced Innovations - Comparative Results e2 of e2 of e2 of e2 of e2 of CD Players 8 Bv SL G BnC Sumofe2 11.3 11.2 11.6 11.5 11.3 Total finish score (lower is better) 20 11 30 25 19 Rankings by e2 sums 3 1 5 4 2 Rankings by finish position 3 1 5 4 2 The Bass model variant performed the best overall for forecasting the diffusion of radical, low-priced innovations with a SSE of 11.2 and placing first in the comparative results. Table 50: Really New Innovations - Comparative Results Camcorders, PTVs, VCRs, e2 of e2 of e2 of e2 of e2 of Cordless Phones, TADs B Bv SL G BnC Sum of e2 35.0 35.5 33.7 35.9 36.4 Total finish score (lower is better) 90 109 93 124 109 Rankings by e2 sums 2 3 1 4 5 Rankings by finish position 1 3 2 5 3 The Bass model and the Simple Logistic model performed the best overall for forecasting the diffusion of really new innovations with respective SSEs of 35.0/33.7 and placements of first/second in the comparative results. Table 51: Really NewIHigh Priced Innovations - Comparative Results e2 of e2 of e2 of e2 of e2 of Camcorders, PTVs B Bv SL G BnC Sum of e2 7.8 7.8 7.8 8.0 7.7 Total finish score (lower is better) 34 37 41 57 41 Rankings by e2 sums 1 3 4 5 2 Rankings by finish position 1 2 3 5 3 80 The Bass model performed the best overall for forecasting the diffusion of really new, high-priced innovations with a SSE of 7.6 and placing first in the comparative results. Table 52: Really NewILow Priced Innovations - Comparative Results e2 of e2 of e2 of e2 of e2 of VCRs, Cordless Phones, TADs B Bv SL G BnC Sum of e2 27.4 27.7 25.9 27.8 28.7 Total finish score (lower is better) 90 109 93 124 109 Rankings by e2 sums 2 3 1 4 5 Rankings by finish position 1 3 2 5 3 The Bass model and the Simple Logistic model performed the best overall for forecasting the diffusion of really new, low-priced innovations with respective SSEs of 27.4/25.9 and placements of first/second in the comparative results. Table 53: High Priced Innovations - Comparative Results PCs, Satellite Receivers, e2 of e2 of e2 of e2 of e2 of Camcorders, PTVs 8 EV SL G BnC Sum of e2 28.8 28.9 27.3 27.0 28.0 Total finish score (lower is better) 65 84 85 108 78 Rankings by e2 sums 2 3 5 4 1 Rankings by finish position 1 3 4 5 2 The Bass model and the Box and Cox model performed the best overall for forecasting the diffusion of high-priced innovations with respective SSEs of 266/260 and placements of first/second in the comparative results. Table 54: Low Priced Innovations - Comparative Results CD Players, VCRs, Cordless e2 of e2 of e2 of e2 of e2 of Phones, TADs 3 EV SL G BnC Sum of e2 38.8 38.9 37.5 39.3 40.0 Total finish score (lower is better) 76 83 82 92 87 Rankings by e2 sums 2 3 1 4 5 Rankings by finish position 1 3 2 5 4 81 The Bass model and the Simple Logistic model performed the best overall for forecasting the diffusion of low-priced innovations with respective SSEs of 38.8/37.5 and placements of first/second in the comparative results. Cell Testing (Hypotheses Testing) The specific hypotheses discussed earlier (page 46) were tested by measuring the differences between the sum of squared errors for forecasts using parameters from various quadrants. Since the hypotheses made specific predictions about the accuracy of various comparisons, the total number of successful predictions were simply counted to compute the binomial distribution (Berry and Lindgren, 1996). 
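The z-scores reported in Table 55 below follow from a normal approximation to this binomial count, with a null hypothesis that each individual prediction is no better than a coin flip (50% correct). The short sketch below is an assumed implementation of that calculation, consistent with Berry and Lindgren (1996) but not taken from the dissertation.

```python
from math import sqrt

def binomial_z(n, correct, p0=0.5):
    """Normal approximation to the binomial: z-score for `correct` successful
    predictions out of n cell comparisons under a null success rate of p0."""
    expected = n * p0
    std_dev = sqrt(n * p0 * (1.0 - p0))
    return (correct - expected) / std_dev

# Example: the H1 comparisons reported below (206 correct out of 270)
print(round(binomial_z(270, 206), 1))   # 8.6, matching the first row of Table 55
```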
Table 55: Results of Cell Comparisons
                  n     number correct   percent correct   z score
H1                270   206              76.3%             8.6**
H1a (opp)         100   85               85.0%             7.0**
H1b (hz)          70    40               57.1%             1.2
H1c (vt)          100   81               81.0%             6.2**
H2                280   173              61.8%             3.9**
H2a (vt vs. op)   140   73               52.1%             0.5
H2b (hz vs. op)   140   100              71.4%             5.1**
H3                140   33               23.6%             -6.3**
**p < 0.01

Strong support for the first two hypotheses was found, although the results for hypotheses H1b and H2a were not significant. Support for H1d (not shown in Table 55) was also found, as Z(H1a) > Z(H1b) and Z(H1a) > Z(H1c). Not only was support lacking for the third hypothesis, it was clearly refuted.

Quadrant Analysis without PCs

As discussed in Chapter 3, it became clear that the diffusion of PCs followed a pattern that differed from the other consumer electronic innovations. Therefore the quadrant analysis was repeated without using this dataset.

Table 56: All Seven Innovations - Comparative Results
                                        B     Bv    SL    G     BnC
Sum of e2                               55.0  55.4  54.2  55.9  55.9
Total finish score (lower is better)    95    108   117   121   99
Rankings by e2 sums                     2     3     1     4     5
Rankings by finish position             1     3     4     5     2

The Bass model performed the best overall for forecasting the diffusion of all innovations, with an SSE of 55.0 (second best) and first place in the comparative results.

Table 57: Radical Innovations - Comparative Results (Satellite Receivers, CD Players)
                                        B     Bv    SL    G     BnC
Sum of e2                               22.2  22.3  22.8  22.3  21.8
Total finish score (lower is better)    27    29    45    39    25
Rankings by e2 sums                     2     3     5     4     1
Rankings by finish position             2     3     5     4     1

The Box and Cox model performed the best overall for forecasting the diffusion of radical innovations, with an SSE of 21.8 and first place in the comparative results.

Table 58: Radical/High Priced Innovations - Comparative Results (Satellite Receivers)
                                        B     Bv    SL    G     BnC
Sum of e2                               12.4  12.5  12.7  12.4  11.9
Total finish score (lower is better)    14    20    22    21    13
Rankings by e2 sums                     3     4     5     2     1
Rankings by finish position             2     3     5     4     1

The Box and Cox model performed the best overall for forecasting the diffusion of radical, high-priced innovations, with an SSE of 11.9 and first place in the comparative results.

Table 59: Radical/Low Priced Innovations - Comparative Results (CD Players)
                                        B     Bv    SL    G     BnC
Sum of e2                               9.8   9.7   10.0  10.0  9.8
Total finish score (lower is better)    13    9     23    18    12
Rankings by e2 sums                     2     1     4     5     3
Rankings by finish position             3     1     5     4     2

The Bass model variant performed the best overall for forecasting the diffusion of radical, low-priced innovations, with an SSE of 9.7 and first place in the comparative results.

Table 60: Really New Innovations - Comparative Results (Camcorders, PTVs, VCRs, Cordless Phones, TADs)
                                        B     Bv    SL    G     BnC
Sum of e2                               32.8  33.2  31.6  33.8  34.2
Total finish score (lower is better)    68    79    72    82    74
Rankings by e2 sums                     2     3     1     4     5
Rankings by finish position             1     4     2     5     3

The Bass model and the Simple Logistic model performed the best overall for forecasting the diffusion of really new innovations, with respective SSEs of 32.8/31.6 and placements of first/second in the comparative results.
Table 61: Really New/High Priced Innovations - Comparative Results (Camcorders, PTVs)
                                        B     Bv    SL    G     BnC
Sum of e2                               7.4   7.5   7.6   7.8   7.5
Total finish score (lower is better)    22    29    32    40    27
Rankings by e2 sums                     1     3     4     5     2
Rankings by finish position             1     3     4     5     2

The Bass model performed the best overall for forecasting the diffusion of really new, high-priced innovations, with an SSE of 7.4 and first place in the comparative results.

Table 62: Really New/Low Priced Innovations - Comparative Results (VCRs, Cordless Phones, TADs)
                                        B     Bv    SL    G     BnC
Sum of e2                               25.4  25.8  24.0  25.8  26.7
Total finish score (lower is better)    46    50    40    42    47
Rankings by e2 sums                     2     3     1     4     5
Rankings by finish position             3     5     1     2     4

The Simple Logistic model performed the best overall for forecasting the diffusion of really new, low-priced innovations, with an SSE of 24.0 and first place in the comparative results.

Table 63: High Priced Innovations - Comparative Results (Satellite Receivers, Camcorders, PTVs)
                                        B     Bv    SL    G     BnC
Sum of e2                               19.7  20.1  20.3  20.1  19.4
Total finish score (lower is better)    36    49    54    61    40
Rankings by e2 sums                     2     3     5     4     1
Rankings by finish position             1     3     4     5     2

The Bass model and the Box and Cox model performed the best overall for forecasting the diffusion of high-priced innovations, with respective SSEs of 19.7/19.4 and placements of first/second in the comparative results.

Table 64: Low Priced Innovations - Comparative Results (CD Players, VCRs, Cordless Phones, TADs)
                                        B     Bv    SL    G     BnC
Sum of e2                               35.3  35.4  34.0  35.8  38.5
Total finish score (lower is better)    59    59    63    60    59
Rankings by e2 sums                     2     3     1     4     5
Rankings by finish position             1     1     5     4     1

The Bass model performed the best overall for forecasting the diffusion of low-priced innovations, with an SSE of 35.3 (second) and placing first in the comparative results.

Cell Testing (Hypotheses Testing) without PCs

Table 65: Results of Cell Comparisons
                  n     number correct   percent correct   z score
H1                170   146              85.9%             9.4**
H1a (opp)         40    40               100.0%            6.3**
H1b (hz)          40    35               87.5%             4.7**
H1c (vt)          90    71               78.9%             5.5**
H2                170   132              77.6%             7.2**
H2a (vt vs. op)   85    57               67.1%             3.1**
H2b (hz vs. op)   85    75               88.2%             7.1**
H3                85    31               36.5%             -2.5**
**p < 0.01

Strong support for the first two hypotheses was found, and the results for all sub-hypotheses were significant. Support for H1d (not shown in Table 65) was also found, as Z(H1a) > Z(H1b) and Z(H1a) > Z(H1c). Not only was support lacking for the third hypothesis, it was clearly refuted.

Chapter 5

DISCUSSION OF RESULTS

It is important to differentiate between forecasts that are created before an innovation is easily available and forecasts that are created after years of history in the marketplace. The conclusions drawn from this research are appropriate for creating forecasts before the innovation is marketed. This research used datasets from the United States consumer electronics market. The conclusions drawn from this research may be applicable to other industries and other countries, but further research will be needed to determine if such generalizations are valid.

Answering the Research Questions

The research questions asked which forecasting method(s) should be used under various innovation levels (radical and really new) and price levels (high and low). The results of the research provided specific answers to these questions. When forecasting the diffusion of a radical high-priced innovation, one should use the Box & Cox model.
It is recommended that one also generate a Bass model forecast if a second opinion is desired. When forecasting the diffusion of a really new high-priced innovation, one should use the Bass model, with the Box & Cox model serving as a backup. The Bass variant model should be used when forecasting the diffusion of low-priced radical innovations, with either the Bass model or the Box & Cox model providing a second opinion. When forecasting the diffusion of low-priced really new innovations, the Simple Logistic model should be used; the robust Bass model may also be used if multiple models are desired. Figure 11 summarizes when various models should be used.

Figure 11: Recommended Models by Context (Consumers; backup model in parentheses)
Price Level   Radical Innovation                Really New Innovation
High          Box & Cox (Bass)                  Bass (Box & Cox)
Low           Bass variant (Bass / Box & Cox)   Simple Logistic (Bass)

Lessons From the Hypotheses

Hypotheses 1 and 2 stated that the various combinations of innovation levels (radical and really new) and price levels (high and low) would result in four populations that were significantly different from one another. The research supported these claims. As theorized, parameters from populations that differed in terms of both innovation level and price level did less well than parameters from more similar populations.

Hypothesis 3 presumed that the level of innovation would have a greater impact on the accuracy of a forecast than the price level. This presumption was clearly wrong. Not only did the research falsify it, it did so in such a manner that the opposite statement appears to be true: the price level of an innovation actually has more impact on the accuracy of a forecast than the innovation level.

Models

As discussed in Chapter 4, the Box and Cox and Generalized Bass models were the best models when it came to curve-fitting, while the Simple Logistic model did the poorest. Curve-fitting is a very useful tool and may be appropriate for forecasts when an innovation has already been available in the marketplace. However, the results of the research showed that a curve-fitting advantage did not translate into a forecasting advantage when creating a forecast for an innovation without a market history.

Bass Models

The popularity of the Bass model derives from two unique factors. As this research has reinforced, the Bass model is very robust, working well in all tested contexts. In addition, the Bass model's two coefficients have a theoretical foundation. However, the coefficients of innovation and imitation are only theoretically sound if the model starts from the initial diffusion of the innovation. Otherwise, the model assumes that the innovation first appeared later than it actually did.
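A quick way to see this effect is to refit the Bass model while discarding more and more of the early history, so that the assumed launch year creeps forward. The sketch below does this with a closed-form Bass curve and scipy's curve_fit; both choices are stand-ins made for brevity (the dissertation itself works with a discrete-time recursion and its own optimization routine), and the data are synthetic rather than the CEA series.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q):
    """Closed-form cumulative Bass penetration F(t), with m fixed at 1 (100%)."""
    decay = np.exp(-(p + q) * t)
    return (1.0 - decay) / (1.0 + (q / p) * decay)

def fit_bass(years, penetration, assumed_launch):
    """Fit p and q using only data from assumed_launch onward, re-zeroing time there."""
    mask = years >= assumed_launch
    t = (years[mask] - assumed_launch).astype(float)
    (p, q), _ = curve_fit(bass_cumulative, t, penetration[mask],
                          p0=(0.01, 0.3), bounds=([1e-6, 1e-6], [1.0, 1.0]))
    return p, q

years = np.arange(1974, 2001)
actual = bass_cumulative((years - 1974).astype(float), 0.003, 0.35)  # synthetic "true" series

for launch in (1974, 1980, 1984, 1988):
    p, q = fit_bass(years, actual, launch)
    print(launch, round(p, 3), round(q, 3))
# As the assumed launch year moves later, the fitted p tends to rise and q to fall,
# mirroring the empirical pattern in Table 66.
```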
As shown in Table 66, this false assumption artificially inflates the role of p (the coefficient of innovation) and artificially deflates the role of q (the coefficient of imitation).

Table 66: Watching p's and q's
Description        B p     B q     B SSE   GB p    GB q    GB β     GB SSE
VCR (1974-2000)    0.003   0.349   0.074   0.010   0.177   -5.000   0.091
VCR (1975-2000)    0.004   0.344   0.073   0.009   0.179   -10.775  0.037
VCR (1976-2000)    0.005   0.337   0.070   0.010   0.180   -10.460  0.036
VCR (1977-2000)    0.008   0.328   0.067   0.014   0.188   -8.197   0.034
VCR (1978-2000)    0.011   0.317   0.063   0.003   0.140   -20.793  0.035
VCR (1979-2000)    0.016   0.301   0.058   0.004   0.145   -18.892  0.034
VCR (1980-2000)    0.023   0.279   0.051   0.007   0.154   -15.614  0.033
VCR (1981-2000)    0.034   0.250   0.043   0.007   0.163   -13.813  0.033
VCR (1982-2000)    0.050   0.211   0.033   0.023   0.177   -6.601   0.028
VCR (1983-2000)    0.075   0.159   0.021   0.056   0.152   -2.670   0.020
VCR (1984-2000)    0.114   0.086   0.011   0.113   0.087   -0.044   0.011
VCR (1985-2000)    0.168   0       0.005   0.171   0       0.201    0.005
VCR (1986-2000)    0.202   0       0.024   0.235   0       2.673    0.022
VCR (1987-2000)    0.245   0       0.065   0.311   0       5.681    0.049
VCR (1988-2000)    0.304   0       0.114   0.356   0       7.382    0.066
VCR (1989-2000)    0.381   0       0.147   0.486   0       6.846    0.117
VCR (1990-2000)    0.465   0       0.137   0.481   0       7.431    0.111
VCR (1991-2000)    0.558   0       0.120   0.488   0       6.848    0.097
VCR (1992-2000)    0.637   0       0.096   0.671   0       7.172    0.038
VCR (1993-2000)    0.694   0       0.068   0.719   0       6.434    0.020
VCR (1994-2000)    0.748   0       0.044   0.778   0       5.621    0.006
VCR (1995-2000)    0.828   0       0.034   0.845   0       5.173    0.003
VCR (1996-2000)    0.850   0       0.022   0.866   0       4.617    0.001

This does not mean that the Bass models cannot be useful if one's data starts after the initial diffusion. On the contrary, the Bass models may still be used for forecasting just like any other model. Rather, this caution concerns how one interprets the coefficients of innovation and imitation.

Despite the flexibility given by Bass, Krishnan, and Jain (1994) in allowing the sign of the price coefficient (β) to fluctuate, researchers who conduct similar experiments are advised to constrain the price variable to be negative. While this will result in sub-optimal curve-fitting, the loss in accuracy should be relatively minor. Conversely, forecasts of other innovations using only negative price variables should see gains in their accuracy. It is expected that research done with the negative constraint on the price coefficient should allow direct comparisons between the Generalized Bass models and the other diffusion models.

The Bass model variants created for this research deliberately violated the assumption of a constant m. This resulted in a model (Bv) that outperformed all of the others in the radical low-priced innovation context. Unfortunately, there was just one innovation in this context; additional research is recommended to test the viability of this variation with more datasets in various contexts.

Simple Logistic and Gompertz

The Simple Logistic model is one of the oldest diffusion models known. True to its name, it is a very basic model. However, it clearly outperformed the other models in the context of really new low-priced innovations.

The Gompertz model has also been used for quite a while. Based upon this research, it is not recommended for forecasting the diffusion of really new or radical innovations before the launch of an innovation. However, the Gompertz model may be very well suited for forecasts generated well after the launch of an innovation. While this was not the focus of this research, it was observed that the diffusion of the projection television innovation follows a perfect Gompertz curve.
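For reference, the two curves discussed above can be written directly in terms of the b and c parameters reported in Tables 27 and 28. The snippet below uses the textbook parameterizations; the exact forms fitted in the dissertation follow Gregg, Hossell, and Richardson (1964) and may differ in detail, so treat this as an approximation rather than the author's implementation.

```python
import numpy as np

def simple_logistic(t, b, c, m=1.0):
    """Simple Logistic cumulative penetration at time t (years since the data start)."""
    return m / (1.0 + c * np.exp(-b * t))

def gompertz(t, b, c, m=1.0):
    """Gompertz cumulative penetration at time t; rises more slowly at first and
    approaches m without the symmetry of the logistic."""
    return m * np.exp(-c * np.exp(-b * t))

t = np.arange(0, 19)  # roughly 1982-2000 for the PTV series
print(np.round(gompertz(t, b=0.0458, c=4.4081), 3))   # PTV parameters from Table 28
```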
Box and Cox

As discussed in Chapter 3, the Flexible Logistic Box and Cox model has a problem where the c variable tends to run to infinity in some scenarios. This was addressed by capping the upper limit of c at 100,000. Despite (or because of) this fix, the author must admit to being skeptical as to how well the Box and Cox model would do in comparison to the other models. As it turned out, the Box and Cox was second only to the Bass model in terms of robustness. The Box and Cox was also the best model in the context of radical high-priced innovations.

Contributions

This research has provided the following contributions:
Support for the use of multiple forecasting methods
Guidance for when various models should be used
Guidance for when various models should not be used
Criteria for when an innovation is released
Definition of an analogous innovation
Evidence that analogous groups matter
A superior method for extrapolation
Evidence that price levels matter more than innovation levels
A forecasting classification grid was created and proposed

This research has provided additional support for the traditional view that no single forecasting method is best for every situation, although the Bass model comes pretty close. The unique contribution of this forecasting research was in providing guidance for selecting forecasting models in various price and innovation contexts.

This research also provided the first empirical study that suggests the Gompertz model is not a preferred model for use in pre-launch conditions. While this finding still needs to be verified in other studies, it could help forecasters improve their accuracy by guiding them to more appropriate models.

Three specific criteria were proposed for determining when an innovation first became available and should begin to be counted. The use of these criteria should allow researchers to compare forecasting model parameters from one innovation with the same parameters from another innovation.

A definition of an analogous innovation was proposed. This definition was used as the basis for research that provided evidence that analogous innovation groups make an important difference in determining which methods should be used for pre-launch forecasting. It appears likely that the definition may also need to include the industry (e.g., really new innovation, low price level, consumer electronics industry), but this is currently speculation and needs to be tested.

This research provided evidence that the discrete time notation of the Bass model used by this author was superior for extrapolation to the method employed by Lilien et al. Both methods work approximately the same for a given period of time (see Figure 12), although the method used here had a slightly lower sum of squared errors.

Figure 12: Comparison of Gentry and Lilien et al. Diffusion Forecasts for VCRs (VCR diffusion, 1981-1994: actual data vs. the Lilien et al., Adjusted Lilien, and Gentry (B) curves)

                  p       q       m       SSE
Lilien et al.     0.025   0.603   76.3%   0.521
Adjusted Lilien   n/a     n/a     n/a     0.038
Gentry (B)        0.029   0.299   100%    0.029

However, when the forecast was extended using the same parameters, the superiority of the method used in this research becomes apparent (Figure 13).
Figure 13: Extended Comparison of Gentry and Lilien et al. Diffusion Forecasts for VCRs (VCR diffusion, 1981-1999: actual data vs. the Lilien et al., Adjusted Lilien, and Gentry (B) curves)

                  p       q       m       SSE
Lilien et al.     0.025   0.603   76.3%   0.578
Adjusted Lilien   n/a     n/a     n/a     0.167
Gentry (B)        0.029   0.299   100%    0.049

A forecasting classification grid was also proposed to simplify the classification of various forecast models. By revisiting the 1985 findings of Armstrong, the forecasting classification grid provides an exhaustive, exclusive, and concise method for classifying forecasts.

APPENDICES

APPENDIX A

SELECTING DIFFUSION MODELS

Table 16: Diffusion Models Initially Considered
Logarithmic Parabola (Gregg, Hossell, & Richardson, 1964)
Modified Exponential (Gregg, Hossell, & Richardson, 1964)
Observation-Based Modified Exponential (Meade, 1985)
Bass model (Bass, 1969)
Generalized Bass model (Bass, Krishnan, and Jain, 1994)
Simple Logistic (Gregg, Hossell, & Richardson, 1964)
Gompertz (Gregg, Hossell, & Richardson, 1964)
Extended Logistic (Bass, 1969)
Log-logistic (Tanner, 1978)
Flexible Logistic (FLOG) - Inverse Power Transform (Bewley & Fiebig, 1988)
FLOG - Box & Cox (Bewley & Fiebig, 1988)
FLOG - Exponential (Bewley & Fiebig, 1988)

As discussed in Chapter 3, the Bass model, the Generalized Bass model, the Simple Logistic model, and the Gompertz model were selected on the basis of the research and the literature. As both a check on the literature and an opportunity to see how each model worked, each model was created and plotted against the actual VCR diffusion. A quick review of the following figures revealed that Meade and Islam's recommendation was wise: the parameters of the Simple Logistic, Gompertz, and Bass models were easily adjusted to a shape similar to the actual VCR diffusion curve. The Generalized Bass model was likewise appropriate. This graphical review supported the initial decision to use these four models.

The other models were also reviewed. The Logarithmic Parabola, Modified Exponential, Observation-Based Modified Exponential, and Log-Logistic models did not lend themselves to a close approximation of the actual VCR diffusion. Thus, these models were removed from consideration. The Extended Logistic model and all three Flexible Logistic models were able to approximate the actual VCR diffusion and all were judged appropriate for use. Given the amount of modeling required for this research, it was decided to add just one of these four models to those already selected. Since the Extended Logistic model was a variant of the Bass model, which was already selected for the research along with several variants, it was decided to use one of the Flexible Logistic models instead. While all three FLOG models were suitable, it was judged that the Box and Cox model was slightly more appropriate for the VCR diffusion and it was selected for the research.

Figure 14: Initial Look at Logarithmic Parabola Model (actual VCR diffusion vs. the Logarithmic Parabola curve)
Figure 15: Initial Look at Modified Exponential Model (actual VCR diffusion vs. the Modified Exponential curve)
Figure 16: Initial Look at Observation-Based Modified Exponential Model (actual VCR diffusion vs. the Observation-Based Modified Exponential curve)
Figure 17: Initial Look at Bass Model (actual VCR diffusion vs. the Bass model curve)
Figure 18: Initial Look at Generalized Bass Model (actual VCR diffusion vs. the Generalized Bass (Price) model curve)
Figure 19: Initial Look at Simple Logistic Model (actual VCR diffusion vs. the Simple Logistic curve)
Figure 20: Initial Look at Gompertz Model (actual VCR diffusion vs. the Gompertz curve)
Figure 21: Initial Look at Extended Logistic Model (actual VCR diffusion vs. the Extended Logistic curve)
Figure 22: Initial Look at Log-Logistic Model (actual VCR diffusion vs. the Log-Logistic curve)
Figure 23: Initial Look at the Flexible Logistic Inverse Power Transform Model (actual VCR diffusion vs. the FLOG IPT curve)
Figure 24: Initial Look at the Flexible Logistic Box and Cox Model (actual VCR diffusion vs. the FLOG BnC curve)
Figure 25: Initial Look at the Flexible Logistic Exponential Model (actual VCR diffusion vs. the FLOG ELOG curve)

APPENDIX B

DIFFUSION AND THE GENERALIZED BASS MODEL

Bass, Krishnan, and Jain (1994) created the Generalized Bass model in response to criticism that the Bass model did "not combine contagion effects with traditional economic variables such as price." They expected the price coefficient (β) to be negative, but did not constrain the coefficient. As can be seen in Table 25, three of the price coefficients had negative values when the parameters were optimized for curve-fitting.

Table 25: Curve Fitting - Optimized Parameters for GB Model
Innovation                                p        q        β
PCs (1977-2000)                           0.0075   0.1401   -1.5073
Satellite Receivers (1979-2000)           0.0005   0.2586   1.0531
VCRs (1972-2000)                          0.0017   0.2243   -8.5919
CD Players (1983-2000)                    0.0160   0.2603   1.5604
Camcorders (1983-2000)                    0.0023   0.1195   -8.9563
PTVs (1982-2000)                          0.0060   0.0547   4.2393
Cordless Phones (1976-2000)               0.0041   0.2360   0.7047
Telephone Answering Devices (1976-2000)   0.0053   0.2188   0.4371

Ceteris paribus, a negative price coefficient increases diffusion while a positive price coefficient retards it. In five of the studied innovations, a positive price coefficient was found to provide optimal results for curve-fitting. These optimal parameters were used in accordance with the freedom Bass, Krishnan, and Jain (1994) established for the price variable.
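To illustrate what the price coefficient does, the sketch below implements one common discrete-time reading of the Generalized Bass model in which only the price term is used: the percentage change in price enters a current-effect multiplier x(t) applied to the Bass hazard. This is an illustrative approximation written for this appendix discussion, not the exact formulation or data used in the dissertation.

```python
def generalized_bass(p, q, beta, prices):
    """Discrete-time Generalized Bass sketch with a price term only.
    x(t) = 1 + beta * (percentage change in price) scales the Bass hazard."""
    F = 0.0
    penetration = []
    for t in range(1, len(prices)):
        pct_change = (prices[t] - prices[t - 1]) / prices[t - 1]
        x_t = 1.0 + beta * pct_change
        F = F + (p + q * F) * (1.0 - F) * x_t
        penetration.append(F)
    return penetration

# Illustrative prices falling 10% per year. With beta < 0, the falling prices
# raise x(t) above 1 and speed up diffusion; with beta > 0 they retard it,
# which is the behavior described in the paragraph above.
prices = [1000.0 * 0.9 ** t for t in range(15)]
faster = generalized_bass(p=0.003, q=0.35, beta=-5.0, prices=prices)
slower = generalized_bass(p=0.003, q=0.35, beta=1.0, prices=prices)
print(round(faster[-1], 3), round(slower[-1], 3))
```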
Using the optimized coefficients, with their mixture of signs, did not appear to be a problem when the hundreds of models were generated. However, during the analysis portion of the research, it became clear that mixing positive and negative price coefficients was problematic. The Generalized Bass models (and GB variants) were significantly different from the other models when parameters from an innovation with a positive price coefficient were used with innovations that had an optimized negative price coefficient. These differences resulted in a poor showing (Table 67) by the two models that used the price variable (the Generalized Bass model and the GB variant).

Table 67: Sum of Squared Errors for All Forecasts
B      Bv     GB     GBv    SL     G      BnC
65.2   65.7   74.9   75.4   64.6   66.2   65.9

Based upon these results, the Generalized Bass model and the GB variant were not analyzed further, since they were significantly less accurate than the other models by over three standard deviations. Reliable conclusions about the comparison of the Generalized Bass model with the other five models should not be drawn from this research.

Researchers who conduct similar experiments are advised to constrain the price variable to be negative. While this will result in sub-optimal curve-fitting, the loss in accuracy would be relatively minor. Conversely, forecasts of other innovations using only negative price variables should see gains in their accuracy. It is expected that research done with the negative constraint on the price coefficient would allow direct comparisons between the Generalized Bass models and the other diffusion models.

Bibliography

Abdel-Khalik, A. R. & El-Sheshai, K. M. (1980). Information Choice And Utilization In An Experiment On Default Prediction. Journal of Accounting Research, 325-342.

Achrol, R. S. & Kotler, P. (1999). Marketing In The Network Economy. Journal of Marketing, 63, 146-163.

Ajzen, I. (1991). The Theory Of Planned Behavior. Organizational Behavior and Human Decision Processes, 50, 179-211.

Allen, P. G. & Fildes, R. (2001). Econometric Forecasting. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston: Kluwer Academic Publishers.

Armstrong, J. S. & Andress, J. G. (1970). Exploratory Analysis Of Marketing Data: Trees Vs. Regression. Journal of Marketing Research, 7, 487-492.

Armstrong, J. S. (1985). Long-Range Forecasting: From Crystal Ball to Computer. (Second ed.) New York: John Wiley & Sons.

Armstrong, J. S., Brodie, R., & McIntyre, S. H. (1987). Forecasting Methods For Marketing: Review Of Empirical Research. International Journal of Forecasting, 3, 355-376.

Armstrong, J. S., Morwitz, V. G., & Kumar, V. (2000). Sales Forecasts For Existing Consumer Products And Services: Do Purchase Intentions Contribute To Accuracy? International Journal of Forecasting, 383-397.

Armstrong, J. S. (2001). Principles Of Forecasting: A Handbook For Researchers And Practitioners. Boston: Kluwer Academic Publishers.

Ashton, A. H., Ashton, R. H., & Davis, M. N. (1994). White-Collar Robotics: Levering Managerial Decision Making. California Management Review, 83-109.

Babcock, L., Lowenstein, G., Issacharoff, S., & Camerer, C. (1995). Biased Judgments Of Fairness In Bargaining. American Economic Review, 1337-1343.

Bain, A. D. (1963). Demand For New Commodities. Journal of the Royal Statistical Society, Series A, 16, 285-299.

Bass, F. M. (1969). A New Product Growth Model For Consumer Durables. Management Science, 15, 215-227.

Bass, F. M., Krishnan, T. V., & Jain, D. C. (1994).
Why The Bass Model Fits Without Decision Variables. Marketing Science, 13, 203-223. Bass, F. M., Gordon, K., Ferguson, T. L., & Githens, M. L. (2001). DIRECTV: Forecasting Diffusion Of A New Technology Prior To Product Launch. Interfaces, 31, $92-$93. Bemmaor, A. C. (1995). Predicting Behavior From lntention-To-Buy Measures: The Parametric Case. Journal of Marketing Research 176-191. Berry, D. A. 8. Lingren, B. W. (1996). Statistics: Theom And Methods. (second ed.) New York: Duxbury Press. Bewley, R. & Fiebig, D. (1988). Flexible Logistic Growth Model With Applications In Telecommunications. International Journal Of Forecasting, 4, 177-192. Bird, M. & Ehrenberg, A. S. C. (1966). lntentions-To-Buy And Claimed Brand Usage. Operations Research Quarterly 27-46. Blinkenlights Archaeological Institute (2002). Pop Quiz: What Was The First Personal Computer. Blinkenlights Archaeological Institute [On-line]. Available: http:I/blinkenlights.com/pc.shtml Boje, D. M. & Murnighan, J. K. (1982). Group Confidence Pressures In Iterative Decisions. Management Science. 2_8, 1 187-1 196. Bright, J. R. (1978). Practical Technology Forecasting. Austin: The Industrial Management Center, Inc. 107 Brockhoff, K. (1975). The Performance Of Forecasting Groups In Computer Dialogue And Face To Face Discussions. In H.Linstone & M. TUROFF (Eds), The Delphi Method: Technigpes and Applications London: Addison-Wesley. Brockhoff, K. (1984). Forecasting Quality And Information. Journal of Forecasting 41 7-428. Brucks, M. (1986). A Typology Of Consumer Knowledge Content. Advances in Conspmer Research, 13. 58-63. Camerer, C. (1981). General Conditions For The Success Of Bootstrapping Models. Organizational Behavior and Human Decision Processe_s 411- 422. Carroll, J. S. (1978). The Effect Of Imagining An Event On Expectations For The Event: An Interpretation In Terms Of The Availability Heuristic. Journal of aperimental Social Psychology.14, 88-96. Cetron, M. J. (1969). Technological Forecasting: A Practical Approach. New York: Gordon & Breach. Cetron, M. J. & Ralph, C. A. (1971). Industrial Applications_C_)f Technological Forecastiqu: Its Utilization In R&D Management. New York: Wiley- lnterscience. Clemen, R. T. (1989). Combining Forecasts: A Review And Annotated Bibliography. International Journal of Forecasting 559-583. Collopy, F ., Adya, M., & Armstrong, J. S. (2001). Expert Systems for Forecasting. In J.S.Armstrong (Ed.), Principles Of Forecasting: A Handbook For Researchers And Practitioners Boston: Kluwer Academic Publishers. Cyert, R. M., March, J. G., & Starbuck, W. H. (1961). Two Experiments On Bias And Conflict In Organizational Estimation. Management Science 254-264. Dangerfield, B. J. & Morris, J. S. (1992). Top-Down Or Bottom-Up: Aggregate Versus Disaggregate Extrapolation. International Journal of Forecasting, 8, 233-241. 108 Dawes, R. M. (1971). A Case Study Of Graduate Admissions: Application Of Three Principles Of Human Decision Making. American Psychologist 180- 188. Dom, H. F. (1950). Pitfalls In Population Forecasts And Projections. Journal of the American Statistical Association 311-334. Dougherty, T. W., Ebert, R. J., & Callender, J. C. (1986). Policy Capturing In The Employment Interview. Journal of Applied Psychology 9-15. Duncan, G. T., Gorr, W. L., & Szczypula, J. (2001). Forecasting Analogous Time Series. J.S.Armstrong (Ed), Principlesgf Forecasting: A Hand_t_>ook For Researchers And Practitioners Boston: Kluwer Academic Publishers. Dunn, D. M., William, W. H., & Spivey, W. A. (1971). 
Analysis And Prediction Of Telephone Demand In Local Geographic Areas. Bell Journal of Economics and Management Science, 2, 561-576. Easingwood, C., Mahajan, V., & Muller, E. (1981). A Non-Symmetric Responding Logistic Model For Forecasting Technological Substitution. Technolpgical Forecasting and Social Change, 20, 199-213. Ebbesen, E. & Konecni, V. (1975). Decision Making And Information Integration In The Courts: The Setting Of Bail. Journal of Personalgy' and Social Psychology. 32, 805-821. Ebert, R. J. & Kruse, T. E. (1978). Bootstrapping The Security Analyst. Journal of Applied Psychology 1 10-1 19. Erffmeyer, R. C., Erffmeyer, E. S., & Lane, I. M. (1986). The Delphi Technique: An Empirical Evaluation Of The Optimal Number Of Rounds. Group and Organization Stagies. 9, 509-529. Fishbein, M. & Ajzen, I. (1975). Belief Attitude lntantion And Behavior: An Introduction To Theorv And Research. Reading, MA: Addison-Wesley. Forrester, J. W. (1958). Industrial Dynamics: A Major Breakthrough For Decision Makers. Harvard Business Review, 36, 37-66. 109 Foster, 8., Collopy, F ., & Ungar, L. (1992). Neural Network Forecasting Of Short, Noisy Time Series. Computers and Chemical Engineering 293-297. Gaeth, G. J. & Shanteau, J. (1984). Reducing The Influence Of Irrelevant lnforrnation On Experienced Decision Makers. Organizational Behavior and Human Performance, 33, 263-282. Ganzach, Y., Kluger, A. N., & Klayman, N. (2000). Making Decisions From An Interview: Expert Measurement And Mechanical Combination. Personnel Psychology 1-20. Garcia, R. & Calantone, R. (2002). A Critical Look At Technological Innovation Typology And lnnovativeness Terminology: A Literature Review. Journal of Prod_uct lnnovat_ion Management 110-132. Gerwin, D. (1988). A Theory Of Innovation Process For Computer-Aided Manufacturing Technology. IEEE Transactions on Engineering Managment 35 90-100. Goldberg, L. R. (1970). Man Vs. Model Of Man: A Rationale, Plus Some Evidence, For A Method On Improving On Clinical Inferences. Psychological Bulletin 422-432. Goldberg, L. R. (1976). Man Vs. Model Of Man: Just How Conflicting Is That Evidence? Organizational Behavior and Human Decision Processes 13- 22. Goodwin, P. & Wright, G. (1997). Decision An_alvsis For Management Judgment. New York: Wiley. Gregg, J. V., Hossell, C. H., & Richardson, J. T. (1964). Mathematical Trend Curves: An Aid To Forecasting. Edinburgh: Oliver & Boyd. Gregory, W. L., Cialdini, R. B., & Carpenter, K. M. (1982). Self-Relevant Scenarios As Mediators Of Likelihood Estimates And Compliance: Does Imaging Make It So? Journal of Parsonalig and Social Psvchcmgy. 43. 89-99. 110 Gregory, W. L. & Duran, A. (2001). Scenarios And Acceptance Of Forecasting. In J.S.Armstrong (Ed), Principles of Forecasting: A Handbook for Researchers and Practitioners Kluwer Academic Publishers. Grove, W. M. & Meehl, P. E. (1996). Comparative Efficiency Of Informal (Subjective, lmpressionistic) And Formal (Mechanical, Algorithmic) Prediction Procedures: The Clinical-Statistical Controversy. Psycholpgy, Public Policy, and Law 293-323. Hacke Jr., J. E. (1972). A Methodological Preface To Technological Forecasting. In J.P.Martino (Ed), An Introduction to Technological Forecasting (2nd ed., pp. 1-12). New York: Gordon and Breach Science Publishers. Harvey, N. (2001). Improving Judgment In Forecasting. In J.S.Armstrong (Ed), Principles of Forecasting: A Handbook for Researchers and Practitioners Boston: Kluwer Academic Publishers. Hill, T., O'Connor, M., & Remus, W. (1996). 
Neural Network Models For Times Series Forecasts. Management Science 1082-1092. Hoagland, J. M. This Recession: Why And How. 3-22-2001. Memphis, Tennessee, Address before the TweIth Annual North American Research/Teaching Symposium on Purchasing and Supply Management. Ref Type: Serial (Book,Monograph) Jamieson, L. F. & Bass, F. M. (1989). Adjusting Stated Intention Measures To Predict Trial Purchase Of New Products: A Comparison Of Models And Methods. Journal of Marketing Research 336-345. Jantsch (1967). Technological Forecasting In Perspective. Paris: Organization For Economic Co-Operation And Development (OECD). Juster, F. T. (1966). Consumer Buyer Intentions And Purchase Probability: An Experience In Survey Design. Journal of the American Statistical Association 658-696. Kang, S. (1991). An Investigation Of The Use Of Feedforward Neural Networks For Forecasting Ph.D. Kent State University. 111 Kleinmuntz, B. (1967). Sign And Seer: Another Example. Journal of Accgrmtirm Research 163-165. Korchia, M. (1999). A New Typology Of Brand Image. European Advances in Cons_umer Research. 4, 147-154. Lawrence, M. & Makridakis, S. (1989). Factors Affecting Judgmental Forecasts And Confidence Intervals. Organizational Behavior and Human Decision Processes. 42, 172-187. Lee, M., Elango, B., & Schnaars, S. P. (1997). The Accuracy Of The Conference Board's Buying Plan Index: A Comparison Of Judgmental Vs. Extrapolation Forecasting Methods. International Journal of Forecasti_ng 127-1 35. Lenz, R. C. J. (1962). Technological Forecasting. (Second ed.) Wright-Patterson Air Force Base: United States Air Force. Lenz, R. C. J. (1971). Technological Forecasting Methodology. In M.J.Cetron & C. A. Ralph (Eds), Industrial Applications of Tachnological Forecasting; Its Utilization in R&D Management (pp. 225-242). New York: Wiley- lnterscience. Leonard, K. J. (1995). The Development Of A Rule Based Expert System For Fraud Alert In Consumer Credit. European Jorgnal of Operational Research 350-356. Libby, R. (1976). Man Verses Model Of Man: The Need For A Non-Linear Model. Organizational Behavior and Human Decision Processes 1-12. Lilien, G. L., Rangaswamy, A., & Van Den Bulte, C. (2000). Diffusion Models: Managerial Applications And Software. In V.Mahajan, E. Muller, & Y. Wind (Eds), New-Product Diffusion Models (pp. 295-311). Boston: Kluwer Academic Publishers. Lusk, C. & Hammond, K. R. (1991). Judgment In A Dynamic Task: Microburst Forecasting. Journal of Behavioral Decision Making, 4, 55-73. 112 Lutkepohl, H. (1991). Introduction To Multiple Time Series Analysis. New York: Springer-Venag. MacGregor, D. G. (2001). Decomposition For Judgmental Forecasting And Estimation. In J.S.Armstrong (Ed), Principles of Forecasting: A Handbook for Researchers and Practitioners Boston: Kluwer Academic Publishers. Maines, L. A. (1990). The Effect Of Forecast Redundancy On Judgments Of A Consensus Forecast's Expected Accuracy. Journal of Accoanting Research 29-47. Makridakis, 8., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., & Winkler, R. (1982). The Accuracy Of Extrapolation (T ime-Series) Methods: Results Of A Forecasting Competition. Journal of Foregting 1 1 1-153. Makridakis, S., Chatfield, C., Hibon, M., Lawrence, M., Mills, T., Ord, K., & Simmons, L. F. (1993). The M2-Competition: A Real-Time Judgmentally Based Forecasting Study. International Journal of Forecasting 5-22. Makridakis, S. & Hibon, M. (2000). The M3-Competition: Results, Conclusions And Implications. 
Mandel, R. (1977). Political Gaming And Foreign Policy Making During Crisis. World Politics, 610-625.

Mar-Molinero, C. (1980). Tractors In Spain: A Logistic Analysis. Journal of the Operational Research Society, 31, 141-152.

Martino, J. P. (1972). Forecasting The Progress Of Technology. In J. P. Martino (Ed.), An Introduction to Technological Forecasting (2nd ed., pp. 13-23). New York: Gordon and Breach Science Publishers.

McNeal, J. (1974). Federal Programs To Measure Consumer Purchase Expectations, 1946-73: A Post-Mortem. Journal of Consumer Research, 1-10.

Meade, N. (1985). Forecasting Using Growth Curves - An Adaptive Approach. Journal of the Operational Research Society, 36, 1103-1115.

Meade, N. & Islam, T. (2001). Forecasting The Diffusion Of Innovations: Implications For Time-Series Extrapolation. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston: Kluwer Academic Publishers.

Michael, G. C. (1971). A Computer Simulation Model For Forecasting Catalogue Sales. Journal of Marketing Research, 224-229.

Moninger, W. R., Bullas, J., de Lorenzis, B., Ellison, E., Flueck, J., McLeod, J. C., Lusk, C., Lampru, P. D., Phillips, R. S., Roberts, W. F., Shaw, R., Stewart, T. R., Weaver, J., Young, K. C., & Zubrick, S. M. (1991). Shootout-89: A Comparative Evaluation Of Knowledge-Based Systems That Forecast Severe Weather. Bulletin of the American Meteorological Society, 72, 1339-1354.

Morrison, D. G. (1979). Purchase Intentions And Purchase Behavior. Journal of Marketing, 65-74.

Morwitz, V. G. & Schmittlein, D. (1992). Using Segmentation To Improve Sales Forecasts Based On Purchase Intent: Which 'Intenders' Actually Buy. Journal of Marketing Research, 391-405.

Morwitz, V. G. (2001). Methods For Forecasting From Intentions Data. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston: Kluwer Academic Publishers.

Neter, J., Kutner, H., Nachtsheim, C. J., & Wasserman, W. (1996). Applied Linear Statistical Models. (4th ed.) Chicago: Irwin.

O'Connor, M. & Lawrence, M. (1989). An Examination Of The Accuracy Of Judgmental Confidence Intervals In Time Series Forecasting. Journal of Forecasting, 8, 141-155.

Reagan-Cirincione, P. (1994). Improving The Accuracy Of Group Judgment: A Process Intervention Combining Group Facilitation, Social Judgment Analysis, And Information Technology. Organizational Behavior and Human Performance, 246-270.

Remus, W. & O'Connor, M. (2001). Neural Networks For Time-Series Forecasting. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston: Kluwer Academic Publishers.

Rhodes, R. (1999). Visions Of Technology: A Century Of Vital Debate About Machines, Systems And The Human World. New York: Simon & Schuster.

Rogers, E. M. (1962). Diffusion Of Innovations. New York: The Free Press.

Rogers, E. M. (1995). Diffusion Of Innovations. (4th ed.) New York: The Free Press.

Roose, J. E. & Doherty, M. E. (1976). Judgment Theory Applied To The Selection Of Life Insurance Salesmen. Organizational Behavior and Human Decision Processes, 231-249.

Schnaars, S. P. (1989). Megamistakes: Forecasting And The Myth Of Rapid Technological Change. New York: The Free Press.

Schoemaker, P. J. H. (1991). When And How To Use Scenario Planning: A Heuristic Approach With Illustration. Journal of Forecasting, 10, 549-564.

Sharda, R. & Patil, R. (1990). Neural Networks As Forecasting Experts: An Empirical Test. Proceedings of the 1990 IJCNN Meeting, 2, 491-494.
Silverman, B. G. (1992). Judgment Error And Expert Critics In Forecasting Tasks. Decision Sciences, 1199-1219.

Smith, P., Hussein, S., & Leonard, D. T. (1996). Forecasting Short-Term Regional Gas Demand Using An Expert System. Expert Systems with Applications, 265-273.

Statman, M. & Tyebjee, T. T. (1985). Optimistic Capital Budgeting Forecasts: An Experiment. Financial Management, Autumn, 27-33.

Steckel, J. H., DeSarbo, W. S., & Mahajan, V. (1991). On The Creation Of Acceptable Conjoint Analysis Experimental Designs. Decision Sciences, 22, 435-442.

Stewart, T. R., Moninger, W. R., Grassia, J., Brady, R. H., & Merrem, F. H. (1989). Analysis Of Expert Judgment In A Hail Forecasting Experiment. Weather and Forecasting, 24-34.

Stewart, T. R. (2001). Improving Reliability Of Judgmental Forecasts. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 81-106). Boston: Kluwer Academic Publishers.

Talaga, J. A. & Tucci, L. A. (2001). Consumer Tradeoffs In On-Line Textbook Purchasing. Journal of Consumer Marketing, 18, 10-20.

Tanner, J. C. (1978). Long Term Forecasting Of Vehicle Ownership And Road Traffic. Journal of the Royal Statistical Society, Series A, 141, 14-63.

Theil, H. & Kosobud, R. F. (1968). How Informative Are Consumer Buying Intention Surveys? Review of Economics and Statistics, 207-232.

Vanston Jr., J. H. (1982). Technological Forecasting: An Aid To Effective Technology Management. Austin, Texas: Technology Futures, Inc.

Vavra, T. G., Green, P. E., & Krieger, A. M. (1999). Evaluating EZPass. Marketing Research, 11, 4-16.

Wiggins, N. & Kohen, E. (1971). Man Vs. Model Of Man Revisited: The Forecasting Of Graduate School Success. Journal of Personality and Social Psychology, 100-106.

Wittink, D. R. & Bergestuen, T. (2001). Forecasting With Conjoint Analysis. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston: Kluwer Academic Publishers.

Wong, B. K. & Monaco, J. A. (1995). Expert System Applications In Business: A Review And Analysis Of The Literature. Information & Management, 141-152.

Yntema, D. B. & Torgerson, W. S. (1961). Man-Computer Cooperation In Decisions Requiring Common Sense. IRE Transactions of the Professional Group on Human Factors in Electronics, 20-26.