AN EMPIRICAL ANALYSIS OF THE EFFECTS OF DECISION TYPE AND CONTROL OVER DATA ACCESS AND MODEL ACCESS ON USER PREFERENCE FOR MODELING ENVIRONMENTS

by

Margaret Dawson

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Management
Graduate School of Business Administration

1988

ABSTRACT

AN EMPIRICAL ANALYSIS OF THE EFFECTS OF DECISION TYPE AND CONTROL OVER DATA ACCESS AND MODEL ACCESS ON USER PREFERENCE FOR MODELING ENVIRONMENTS

by Margaret Dawson

Model management systems have been presented as a solution to problems of errors, redundancy of effort and model proliferation. An MMS can provide a means of reducing the problem of errors by limiting the ability to create and modify models. Models written by specialists and stored in a protected environment can eliminate errors in logic and errors in content. Errors in data can be reduced by limiting data access to approved sources. Redundancy of effort and model proliferation can be reduced by making models available for shared use and limiting the ability to create new models. No prior research has been completed to examine users' reactions to limitations on a previously unlimited resource, or the relative importance to the user of access to data and the ability to create and modify models.
Little is understood as to how the decision being made affects the users' reactions to the limitations. This study considered three levels of data control, three levels of model control and Anthony's categories of decision type (strategic, managerial and operational). Conjoint analysis was used to collect users' preferences for the combinations of data control and model control under varying decision types. Preferences were scored between 0 and 100. Practicing managers in an advanced management program at a major university were used as subjects. Analysis of variance was the primary statistical tool used to address the research hypotheses.

Significant interactions were found between (1) data control and model control, (2) data control and decision type, and (3) model control and decision type. Higher levels of either control were associated with declines in user preference, with the effect lessened in the presence of high control of the other control variable. The decrease in preference was greatest when strategic decisions were considered, less for managerial decisions and least for operational decisions.

© Margaret Leigh Dawson
All rights reserved

"There are many without whom this book would have been impossible. There are many others without whom it would have been a heck of a lot easier." [Boynton 1982]

I would like to express my appreciation for the efforts of my committee. Dean Philip Carter took this project on, despite his already-heavy load. I value his judgment and appreciate his leadership.
Professor Dale Duhan's guidance in the research design and statistical analysis was invaluable. Professor Gary Ragatz was a constant source of encouragement and guidance. Last, but certainly not least, is my appreciation for Professor David Dilts, now at the University of Waterloo, Waterloo, Ontario. Dr. Dilts led me through my program and constantly challenged me to do more and be more than I might have been. He has been a teacher and mentor, but best of all, he has been a friend.

The ladies of the management department provided friendship, support, and numerous handouts for the poor, starving doctoral students.

Always, there was my husband and son, who have put up with the long hours, irritations, and frustrations. They believed in me, which helped me believe in myself.

- Thanks to all -

TABLE OF CONTENTS

Acknowledgement
List of Figures
List of Tables
List of Exhibits

Chapter

1. Introduction

2. Decision Support Systems, Model Management Systems and the Research Framework
      Introduction
      Characteristics of Decision Support Systems
      Databases and Database Management Systems
      Models
         Model Definition
         Examples
      Model Base
      Model Management System Characteristics
      Research Framework
      Summary

3. Decision Types and Control
      Decision Types
      Control
         System Rejection
      Implications for MMS
      Control in an MMS Environment
      Examples of Control Needs in Models
      Implications
      Summary
4. Variables Definition, Research Hypotheses and Methodology
      Variables
         Decision Type
         Controls
            Model Control
            Data Control
         User Preference
      Variable Relationships and Research Hypotheses
         Hypothesis 1
         Hypothesis 2
         Hypothesis 3
         Hypothesis 4
         Hypothesis 5
      Data Collection
      Analysis
         Factor Description
         Storage
         Pretest
      Data Collection Instrument
         Procedure
         Subjects
         Typical Subject
         Special Concerns
      Summary

5. Results of Data Analysis
      Effects
         Three-way Interaction
         Two-way Interactions
            Data Control by Decision Type, Model Control by Decision Type
            Data Control by Model Control
         Main Effects
            Data Control and Model Control
            Decision Type
      Validity Test
      Summary
6. Discussion of Analysis Results
      Significance of Statistical Results
         Main Effects
         Interaction Effects
         Linearity
         Interpretation of the Interaction between Data Control and Model Control
         Three-way Interaction
      Implications
         Computing Environments
            Dumb Terminal
            Networked Micro
            Stand-alone Micro
         Implications for Builders of MMS
      Limitations
      Future Research

Appendix
   I    Data Collection Material
   II   Questionnaire Responses
   III  Pearson Correlations

References
Additional References not Cited

LIST OF FIGURES

Figure
1.1  Research Framework
1.2  Research Framework (expanded)
2.1  DSS Components
2.2  Components of a Model
2.3  The Jenkins Model for MIS Research
2.4  Research Scope Intersection with the Jenkins Model for MIS Research
1.2  Research Framework
3.1  Anthony's Framework
3.2  Information Requirements by Decision Type
2.2  Components of a Model
1.2  Research Framework
2.2  Components of a Model Showing Research Scope
4.1  Research Variables
4.2  Preference by Decision Type
4.3  User Preference by Control Type
4.4  Difference in Preference by Control Level and Decision Type
4.5  Interaction between Data Control and Model Control
4.6  Interaction between Model Control, Data Control and Decision Type
5.1  Preference by Decision Type
5.2  Difference Scores
5.3  Full Data Set
5.4  Preference by Control Level
6.1  Interaction Effects
6.2  Interaction: Decision Type and Data Control
6.3  Interaction: Decision Type and Model Control
6.4  Linearity

LIST OF TABLES

Table
2.1  Planned Cost of Goods Manufactured
2.2  Model Management System Characteristics
3.1  Computer Use
4.1  Sorted Order by Decision Type
4.2  Model Creation and Storage
5.1  Mean User Preference and Standard Deviations
5.2  ANOVA Results
5.3  ANOVA by Decision Type
5.4  Cell Means of Difference Between Operational and Strategic Scores with Standard Deviations and T-values
5.5  ANOVA on Difference Scores
5.6  Subject 23 Self-reported and Actual
5.7  Pearson Correlations

LIST OF EXHIBITS

Exhibit
1  One Page Description Received by Subjects

Chapter 1. Introduction

Purpose - A short introduction to the topic and research question.
This research is designed to study how users of decision support systems will react to the introduction of a model management system. This chapter gives a brief description of the issues involving model management systems and states the research question that will be addressed by this research.

Decision support systems (DSS1) are defined as "interactive computer systems which help a decision maker utilize data and models to solve unstructured problems" (emphasis added) [Sprague 1980, p 86]. Models are defined in this research as mathematical representations of business situations. Examples of models include budgets, project analyses, simulations and forecasts. Models, as they are used here, do not include the normal business transaction processing and reporting cycle. Many authors suggest that DSS should contain a model management system with functions similar to a database management system for data [Sprague and Carlson 1982; Bonczek, Holsapple, and Whinston 1980, 1984].

1 The abbreviation DSS is used in both the singular and plural sense in the literature. Therefore, phrases such as "a DSS is" and "many DSS are" can occur.

Many authors see the development of model management systems (MMS) as the next evolutionary phase in computer systems development, with strong parallels to the development of database management systems (DBMS) [Dolk and Konsynski 1985; Sprague 1980; Sprague and Carlson 1982]. All of these authors draw upon the analogy of DBMS and state that problems of redundancy and inconsistency exist in modeling and can be solved the same way redundancy and inconsistency were solved with database management systems.
The research dealing with the theory of creating a model management system with capabilities similar to a database management system either assumes the desirability of an MMS or claims that problems exist that can be solved with an MMS [Applegate et al 1985; Blanning 1984; Dolk & Konsynski 1984, 1985; Stohr and Tanniru 1980]. A claim (typical of many) from Dolk and Konsynski [1985, p 35] is:

   ... the rampant proliferation of spreadsheets has caused major headaches for management, especially data processing management. The sudden introduction and unchecked growth of this new information resource has left many organizations uncertain (some even paralyzed) about how to minimize and control this phenomenon.

The analogy to a database management system can serve as a useful guide to the design of a model management system; however, the analogy does not transfer well from the user's perspective. Prior to the introduction of database management systems, the departments or persons served by the Information Systems Department had little direct interaction with the computer. Data was prepared and reports were received, but any alteration to the processing was handled by the professional staff of the data processing (DP) department. The introduction of a database and database management system meant a change from a centralized file-oriented system to a centralized database system. Users were still dealing with a DP staff to effect changes to processing.

The computing environment used most frequently by today's users tends to be more decentralized than previously, when the only computer contact was with the corporate computer [Rockart and Flannery 1983].
This decentralization can take the form of terminals or microcomputers networked to the mainframe or minicomputers, or stand-alone microcomputers. An increasing number of departments within organizations have purchased microcomputers. Part of their rationale for this action was the long delays in requests put through the DP channels [Benson 1983; Couger 1986; Keen and Woodman 1984]. Users have become accustomed to making their own changes to micro-oriented applications and analyses. The introduction of an MMS could imply a change for users from a local microcomputer to a centralized computer environment.

This research is designed to study user preference for computing environments, in particular the environment associated with the data and modeling aspects of decision support systems. The framework for this research is presented in Figure 1.1. This framework is part of the larger Jenkins model [Jenkins 1983]. The framework will be explained as part of the Jenkins model in Chapter 2.

     decision <---------> DSS
       type
         |                 |
         v                 v
          User preference

Figure 1.1 Research Framework

The framework encompasses decision making with the aid of a DSS. This research is concerned with the modeling environment as it is affected by the introduction of controls through implementation of a model management system. Therefore, the scope of concern is the modeling environment within the context of a DSS. Users' reactions to controls in a DSS environment have not been addressed by prior research.

DSS are designed to support decision making for a variety of decision types; i.e.
strategic, managerial and operational [Gorry and Scott Morton 1971]. The type of decision being made, in conjunction with the characteristics of the modeling environment, is expected to impact the user's preference for a computer modeling environment.

This research is designed to answer the question:

   How do the controls introduced with model management systems affect users' preference for the modeling environment provided by the DSS under varying decision types?

The controls considered in this research reflect the two major functions of DSS: data access and model access. Figure 1.2 indicates the three independent variables addressed by this research. These variables are expected to affect the dependent variable (preference for the modeling environment).

                             DSS
     decision <-------> --------------
       type              data   model
         |              access access
         |                 |      |
         v                 v      v
            User preference

Figure 1.2 Research Framework (expanded)

This general research question will be addressed through five hypotheses:

H1:   Change from strategic to managerial to operational decisions will be associated with monotonically increasing user preference for the modeling environment.

H2.1: Increasing data control will be associated with monotonically decreasing preference for the modeling environment.

H2.2: Increasing model control will be associated with monotonically decreasing preference for the modeling environment.

H3.1: The difference between user preference for the modeling environment under operational decisions and user preference for the modeling environment under strategic decisions will be greater for high data control environments than for low data control environments.

H3.2: The difference between user preference for the modeling environment under operational decisions and user preference for the modeling environment under strategic decisions will be greater for high model control environments than for low model control environments.

H4:   There will be an interaction between data control and model control.
H5:   The strength of the relationship between data control and user preference will not be equal to the strength of the relationship between model control and user preference, AND the direction of the inequality will differ across the decision types.

The results of this research will provide a better means of predicting the reception model management systems will receive when they are introduced. The implications go further than just model management systems, since the control-oriented portions of application systems affect the user's environment in a similar fashion.

The remainder of this report is organized as follows: Chapter 2 presents the literature describing the decision support systems within which model management systems will reside. Characteristics of model management systems are clarified and the research framework is presented as a part of the Jenkins model. Chapter 3 presents the literature regarding decision characteristics and controls. Chapter 4 describes the research variables, presents the hypotheses for statistical testing, and describes the data collection methodology and the subject selection process. Chapter 5 presents the results of the statistical analysis. Chapter 6 discusses the implications of the statistical results for design of model management systems, summarizes the research and discusses implications for future research.

Chapter 2.
Decision Support Systems, Model Management Systems and the Research Framework

Purpose - To describe model management systems as a part of decision support systems. To explain the components of a DSS in order to provide the background understanding necessary for appreciating the environment in which MMS will be located, and to define terms as they are used in the literature and new terms as they will be used in this research. To present the research framework as a part of the Jenkins model.

Introduction

This research focuses on the use and control of models in a decision support system (DSS) environment. Model management systems (MMS) are part of decision support systems (DSS). It is important to appreciate the whole (DSS) in order to understand the purpose of the part (MMS). This chapter will describe decision support systems and define the terms used in the literature. The well-known paradigm of Sprague and Carlson will be used to explain the relationships between the various components [Sprague and Carlson 1982, p 33]. (See Figure 2.1.)

A DSS has two major functional components: the database and DBMS for selection of data, and the model base and MMS for manipulating or processing the data. The dialog system is the part of the software which is the interface between the user and the remainder of the DSS. The dialog system may take many forms, including menus or a command level language such as Microsoft's Disk Operating System (MS-DOS). A large body of research dealing with user interfaces exists, which is outside the scope of this research. This research will focus on the other components of DSS.
     Database <-----------> Model base
         ^                      ^
         |                      |
        DBMS                   MMS
         ^                      ^
         |                      |
         +---> Dialog system <--+
                     ^
                     |
                     v
                   User

Figure 2.1 DSS Components

This section will discuss the remaining components in Figure 2.1 in the following order: characteristics of DSS, databases and database management systems, models, the model base, and model management system characteristics.

Characteristics of Decision Support Systems

DSS are defined as "interactive computer systems which help a decision maker utilize data and models to solve unstructured problems" (emphasis added) [Sprague 1980, p 86]. The term "model" refers to a mathematical representation of a business situation. The use of the term should be understood to mean the processing of information in a manner that would not be considered the transaction processing which makes up the normal business cycle. Models will be defined more fully in a later section.

The characteristics of DSS differ slightly from author to author but have a strong common core. The prevalent theme is that DSS are user controlled. This user control is achieved through (a) interactive interfaces that are easy to use; (b) user-initiated interaction; (c) user selection of operations; and (d) user selection and alteration of the models. These characteristics are described by many authors [Barbosa and Hirko 1980; Bonczek, Holsapple and Whinston 1980, p 340; Sprague and Watson 1975b; Sprague 1980].

Databases and Database Management Systems

The existence of a database and its associated database management system (DBMS) is assumed in any discussion of decision support systems. All of the authors in the model management literature have drawn an analogy between model management and data management. Therefore, it is imperative for an understanding of model management systems (MMS) that one see the relationships between a database and its DBMS. This section provides a very brief description of databases. More extensive descriptions can be found in well-established sources such as Date [1982] or Ullman [1982].
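Although the dissertation predates such code, the routing in Figure 2.1 can be sketched as a minimal skeleton. The sketch below is purely illustrative: every class, method and data name is invented here and comes from none of the cited sources. The DBMS mediates all data access, the MMS mediates all model access, and the dialog system sits between the user and both.

```python
# Illustrative sketch only of the Sprague-Carlson DSS arrangement
# (Figure 2.1).  All names here are invented for illustration.

class DBMS:
    """Stands between the database and everything else (data access)."""
    def __init__(self, database):
        self.database = database              # e.g. {"sales_1987": 1200.0}

    def read(self, item):
        return self.database[item]

class MMS:
    """Stands between the model base and everything else (model access)."""
    def __init__(self, model_base):
        self.model_base = model_base          # maps model name -> callable

    def run(self, name, **inputs):
        return self.model_base[name](**inputs)

class DialogSystem:
    """The interface between the user and the remainder of the DSS."""
    def __init__(self, dbms, mms):
        self.dbms, self.mms = dbms, mms

    def request(self, model, data_item, **parameters):
        # User-initiated interaction: the dialog fetches the retrieved
        # data through the DBMS, then asks the MMS to run the model.
        retrieved = self.dbms.read(data_item)
        return self.mms.run(model, retrieved=retrieved, **parameters)

# A one-item, one-model DSS.
dbms = DBMS({"sales_1987": 1200.0})
mms = MMS({"grow": lambda retrieved, rate: retrieved * (1 + rate)})
dss = DialogSystem(dbms, mms)
print(dss.request("grow", "sales_1987", rate=0.05))   # 1260.0
```

The point of the skeleton is the routing: the user never touches the database or the model base directly, which is what makes centralized access control possible.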
Therefore, it is imperative for an understanding of model management system (MMS) that one see the relationships between a database and its DBMS. This section provides a very brief description of databases. More extensive descriptions can be found in well-established sources such as Date [1982] or Ullman [1982]. A data unavailable between fiI organizatic data files duplicating marketing a form of a c data reorgg Vhich Houlc Potential“ t0 a new fj sorted diff include ne'. (Such as CL Accounts Re Red“ Customers ‘ Inconsister Accounts Re St°red f0r Databi probl‘éms 01 based sYSte the naeds C CDIIQQted a 11 A database is a means of introducing capabilities unavailable in file-oriented systems. The major difference between file-oriented systems and database systems is the organization of the data. In a file-oriented system, new data files may be created for a new application, often duplicating data stored in other files. For example, a new marketing analysis program may be supported by a reorganized form of a customer file used for Accounts Receivable. The data reorganization is accomplished by writing a program which would read in the Accounts Receivable file (and potentially additional data from another file) and write it to a new file in the reorganized form. The file may be sorted differently (perhaps by geographic region) and may include new data not present in the Accounts Receivable file (such as customer's age and sex) and may exclude data in the Accounts Receivable file (such as account balance). Redundancy is introduced by storing the same data about customers (such as name and address) in more than one place. Inconsistency occurs when updates (changes) are made to the Accounts Receivable file but are not reflected in the data stored for the marketing program. Database systems were presented as solutions to the problems of redundancy and inconsistency prevalent in file- based systems [Nolan 1973]. 
A database is designed to fill the needs of the organization so that the correct data is collected and stored. When designing a database, redundancy is reduce items can I data. The that use t." will use ex reorganizat; (08345} is th database and database. I} knows exactly Person Seek: r it, and lDSte A datab. SYStem proyi ““399 of 1 mailIidual attess Setn programs c data being policies c new data i 12 is reduced by defining data needs so that one copy of data items can be accessed by all the programs that need the data. The data is said to be independent of the programs that use the data. New applications can be written that will use existing data without any new data collection, reorganization, or recoding. The database management system (DBMS) is the software that acts as an interface between the database and the programs written to operate on the database. The DBMS is like a very efficient file clerk who knows exactly where everything is stored. As a result, the person seeking the data does not need to know how to find it, and instead uses the DBMS to find the data. A database and its associated database management system provide a means of control by centralizing the storage of the data [Date 1982]. All access (both individual and program) must pass through the DBMS, so access security can be enforced. Therefore, individuals and programs can be checked for access clearance prior to the data being made available. Furthermore, data integrity policies can be enforced to reduce the number of errors when new data is input or updates are made. The type of access granted individuals and programs can vary. Some may be granted READ-ONLY privileges: i.e. the data values can be read but may not be altered in any way. Some (such as data entry programs) may permit entry of new data records without allowing update (change) privilege to 13 the data. A special level of access is usually required to update or delete records. 
The granting of access is normally controlled by a person within the data processing area, such as the database administrator (DBA) or a security specialist. Access may be granted by department (all people in the Personnel Department may be given READ-ONLY access to the employee history file), by individual (the manager of Personnel has clearance to process corrections (update) to the history file) or by program (the daily Sales program may READ current account receivable balances and WRITE (update) new balances.) The database drawn upon by a DSS may be the corporate database, or it may be a special database designed for the DSS - The specialized database may contain organizational data extracted from the corporate database and stored in a centralized environment or downloaded (with or without reOrganization or aggregation) to a mini- or micro— °°mPuter. It may also include data external to the °rganization (such as economic indicators) which is placed into the DSS database. Models The model base and MMS make up the second major portion °f the DSS (See Figure 2.1) . Alter made clear the 1mPOEtance of models as a feature of decision support systems in a seminal article in which he developed a taxOnomy of DSS that categorized existing DSS as model 14 oriented or data oriented [Alter 1977]. Even the systems Alter called "data oriented" used models (as they are defined here) for data analysis. This research deals with mathematical models used to assist business decisions. Some examples of models are sales forecasting models, scheduling models, EOQ inventory models and financial spreadsheets. The computation required by the models can be done manually, as they were prior to the introduction of computers. A model is an attempt to represent a portion of reality. For example, a model airplane is a smaller (scaled down) version of the actual object that is able to represent some of the real object's characteristics. 
The model can be used to study how the actual object reacts to its environment. For example, the model airplane can be used in a wind tunnel to study various wing designs at different wind velocities. Any model is limited, however, in how much of reality can be accurately represented. For example, the model airplane would not be useful for determining the interior sound level experienced by passengers at the air speeds of the wind tunnel. Mathematical models are another way to represent reality [Baker and Kropp 1985]. An equation such as E a mo2 allows a person to predict the reaction of E to a change in m. Like scale models, mathematical models are able to capture just a portion of reality regardless of the model size. For example, the Brookings model of the United States 15 economy incorporates over 150 equations. A single module of the Brookings model, used to forecast financial behavior, uses nineteen equations and forty-two variables [Duessenberry et al 1965]. Yet even a model of the size of the Brookings model is not able to capture all the forces that influence the national economy. The increased use of computers has allowed firms to build more models of a larger and more complex nature. Anecdotal literature demonstrates that benefits can be achieved through modeling in an organization. Four articles appear in a single issue of Interface: (15:1 1985) that present case studies where quantifiable savings are realized through introduction of models to coordinate activities. Savings are realized through reduced inventories, reduced borrowing, reduction of lost sales and reduced waste. The models allow management to identify better ways to schedule operations, plan sales promotions and order materials. Model Definition The term ”model" is used by most authors without definition. For the purposes of clarity, and for a common basis of understanding, the following definition of models and their components will be used in this research. 
The terminology is not taken specifically from other sources. Differentiation of INPUT into retrieved data and parameters, and OUTPUT into output data and reports, is the major departure from the INPUT-PROCESS-OUTPUT description provided by discussions of computer programs [Berlinger 1986, p. 2; Pearson 1986, p. 4] and computer systems [Davis and Olson 1986]. Generally all models are made up of just five components: retrieved data, parameters, process, output data and reports (see Figure 2.2). Retrieved data is data which may be available from a database. The retrieved data may be transferred from the database electronically or input manually by the model user. Parameters are values that may be chosen by the model user rather than drawn from the database. The major difference between retrieved data and parameters is the source; i.e., the source of retrieved data is commonly the database while the source of parameters is commonly the user. Process is the mathematical operations that take place on the values of the input (retrieved data and parameters). Output data is a result of the model that may be stored and used as data in other contexts. Reports differ from data in that they are printed output used to present results and are not available electronically for further use. The major difference between output data and reports concerns their intended later use. Output data may be stored for later use as retrieved data; reports are in printed form and are used as a presentation vehicle for the output.

Retrieved data ------->           -------> Output data
                         Process
Parameter values ----->           -------> Reports

Figure 2.2  Components of a Model

Examples

Some examples will demonstrate the differences between the components.

Example 1.
(Illustrates retrieved data, parameters, process and output data)

A simple forecasting model:

    St+k = 1.03 St + .3 k

Variable definition:
    St   = Sales in period t
    St+k = Forecasted sales in period t+k (k periods after t)
    k    = the number of periods into the future

Retrieved data: St (the observed Sales for time t; could be retrieved from the database).
Parameters: 1.03 and .3, which are either chosen by the user or found by other analysis; k (the distance into the future).
Process: the mathematical computation of the value of the right hand side (1.03 St + .3 k).
Output data: the value of St+k.
Report: there is no report defined for this equation.

Example 2. (Illustrates output data and report)

Creation of a sales forecast report for the next twelve months. The forecast equation is:

    St+1 = 1.03 * St * at+1

Retrieved data: St, the most recently observed sales volume. For forecasts after t+1, the St value will be the forecasted value for that time period; i.e., the forecast for January will use December's actual sales volume as St. The forecast for February will use January's forecasted value as St.
Parameters: 1.03, selected by the user; at+1, a seasonal adjustment parameter selected by the user or developed from prior analysis.
Process: the computation of St+1 for each month.
Output data: the value of St+1, which is retained for use in computing St+2 (and potentially available electronically for use by other functional areas as the Sales forecast used for planning and scheduling).
Report: a printed report of the series of forecasts for the twelve months presented in a style suitable for communication with superiors and subordinates.

Example 3.
(Illustrates the components of a typical model)

A spreadsheet for budgeting cost of goods manufactured and sold (see Table 2.1):

Table 2.1  Planned Cost of Goods Manufactured

                                    March 31     June 30
Beg Goods Inv                       $130,000    $180,000
Cost of Goods Mfg:
  Number of units                     25,000      25,000
  Raw material @ $2                   50,000      50,000
  Direct labor @ $6                  150,000     150,000
  Variable OH @ $1                    25,000      25,000
  Fixed OH                            75,000      75,000
  Cost of goods mfg 25,000 @ $12     300,000     300,000
Cost of Goods avail for sale        $430,000    $480,000
Ending finished goods inv:
  15,000 @ $12 (FIFO)                180,000
   5,000 @ $12 (FIFO)                             60,000
Cost of Goods Sold                  $250,000    $420,000

[Adapted from Salmonson et al. 1981]

Retrieved data and parameters: In this example, there is a very thin line between input data and parameters. The values of the non-computed amounts (such as number of units and unit cost of raw material) are the input values. These values may be available from a database or may be chosen by the user. The values can be obtained from the database when the spreadsheet represents actual performance. The values may be chosen by the user when planning for future periods. This gives the user the ability to ask "What if" questions such as, "What if the cost of raw materials went up to $6.25?"

Process: The process is all the computations done in the cells of the spreadsheet. For example, the March 31 cost of raw material is computed by a formula (number of units x cost per unit) and the user sees the value 50,000. The user could have developed the model for this one use or could have obtained the skeleton or outline of the budget from a collection of predefined "templates." The template would contain all the words and formulae so the user only "fills in the blanks." The use of a template would require the user to input just required data values (such as beginning finished goods inventory and the current costs of raw material, labor, variable overhead and fixed overhead).
Output data: The output data is the cost of goods manufactured figures for March 31 and June 30.

Report: A report can be created by printing the entire spreadsheet.

These examples were presented to clarify the meanings of the terms retrieved data, parameters, process, output data, and reports. They were designed for illustrative purposes and are not intended to be a complete set of all models used by organizations. The two major functional components of DSS are illustrated by these examples: the use of a database and DBMS to extract data from the database for use as retrieved data in models, and the use of models or a modeling language as the process for manipulating the retrieved data and the parameters to create the output data and reports.

Model Base

In a DSS, the models themselves are organized into a model base to facilitate access to the models. The model base is a library of models consisting of permanent models, ad hoc models, user-built models, and "canned" models [Sprague and Carlson 1982; Dolk and Konsynski 1984, 1985; Ariav and Ginzberg 1985]. Permanent models are expected to be used periodically, in contrast to ad hoc models, which are written for once-only use. Ad hoc models may prove to be useful enough to warrant refining them and creating a permanent model. "Canned" models are those purchased from an outside source and used "out of the can," i.e., without modification. The terms permanent, ad hoc, and user-built are not mutually exclusive. A user may build an ad hoc model and store it for later use. Should it prove to be useful, it may be refined or expanded to create a permanent model (either by the original builder or by a specialist in the organization). Canned models are those which are purchased from software developers.
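A permanent model such as the twelve-month forecast of Example 2 could be stored in such a model base as a small program. The following is a minimal sketch in Python (the language, the function name and the sample values are hypothetical illustrations, not part of the original study):

```python
def forecast_year(s_t, seasonal, growth=1.03):
    """Roll the Example 2 forecast, St+1 = growth * St * at+1, forward.

    s_t      -- retrieved data: the most recently observed sales volume
    seasonal -- parameters: twelve seasonal adjustment factors (at+1 ... at+12)
    growth   -- parameter: the trend factor chosen by the user (1.03 above)
    """
    forecasts = []
    current = s_t
    for a in seasonal:
        current = growth * current * a   # process: compute St+1
        forecasts.append(current)        # output data: retained to compute St+2
    return forecasts

if __name__ == "__main__":
    # Report: a printed presentation of the output data (sample values).
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    series = forecast_year(100_000, [1.0] * 12)
    for month, s in zip(months, series):
        print(f"{month}: {s:,.0f}")
```

Each of the five model components appears: the argument s_t is the retrieved data, the seasonal and growth values are parameters, the loop body is the process, the returned list is the output data (available electronically for other models), and the printed listing is the report.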
Besides models which are self-contained (such as those in examples 1, 2, and 3), the model base may contain model modules which provide subfunctions (such as computing a geometric mean). These modules may be pieced together by the user to provide the series of processes desired. The model base may also contain a modeling language (such as FORTRAN) or a package (such as IFPS or Lotus 1-2-3) which facilitates the ability of users to create models personally [Sprague and Carlson 1982; Bonczek, Holsapple, and Whinston 1984]. Just as the conceptual design of a database can be implemented as a network, hierarchical or relational database, a model base also can be implemented in more than one way [see Date 1982 or Ullman 1982 for database design options]. Blanning's [1982, 1984] work has focused on a relational implementation of models, with a JOIN of models on input and output allowing for integration of models. Stohr and Tanniru [1980] present a network (CODASYL) approach and Dolk and Konsynski [1985, p. 625] present a conceptual model using the artificial intelligence approach of frames to represent the models. A prototype version for linear programming models has been implemented in Fortran with a CODASYL-compatible ADBMS database management system [Dolk and Konsynski 1985]. The issues of implementing an MMS are outside the scope of this research. The issue of concern to this research is the effects the introduction of an MMS may bring to user preference for the modeling environment.

The use of models in DSS has not lived up to its full potential due in part to a lack of integration with the information system. Users are experiencing problems with traditional modeling environments. Specifically:

1. Frequently the necessary input data or parameters are not available or are very difficult to generate.

2.
The output from the model is often difficult to use.

3. For complex, multifaceted problems, large comprehensive models have proven difficult to build and maintain.

4. Big, complex models are difficult for managers to understand; therefore they are not trusted.

5. When utilizing a library of small models, the user must enter the data manually. This is necessary since each model uses data in its own format. Output used as input must be reentered in the format required by the other models.

6. Generally a minimum of interaction occurs between the decision maker and the model. [Sprague and Carlson 1982, pp 259-266]

Sprague [1980] was among the first to suggest that many of these problems can be addressed by a model base and a model management system (MMS) with characteristics and functions parallel to those of a database management system. (Sprague and Carlson use the terminology "model base management system." More recent literature uses "model management system." For consistency, this research will use "model management system" exclusively.)

Model Management System Characteristics

Many authors present the characteristics of a model management system [Applegate et al. 1985; Ariav and Ginzberg 1985; Blanning 1982, 1985; Dolk and Konsynski 1985; Klein et al. 1985; Sprague and Carlson 1982; Stohr and Tanniru 1980]. Following is a typical description of an MMS:

a software system which provides for the creation, manipulation and access of models. The ability to store and manipulate models is a critical function in the design and implementation of an MMS. Model storage functions include model representation, model abstraction, physical model storage and logical model storage. Model manipulation functions include model instantiation, model selection and model synthesis. The MMS accesses the centralized database of the organization and external databases to obtain the data necessary to solve a given problem.
The capability for interactive input and correction is also provided. Two objectives for model management have been identified: (1) to expand and enhance decision support system (DSS) capabilities by improving the modeling component of the system; and (2) to enable organizations to centralize model management functions and insure the integrity, consistency, currency and security of model bases. [Applegate et al. 1985, p. 1]

While the characteristics of a model management system (MMS) do vary from author to author, a strong similarity emerges across the characteristics presented by the main researchers in the field. Table 2.2 presents some of the characteristics specifically mentioned by each author. (The absence of the characteristic means that the author did not make a specific reference to the characteristic.)

Table 2.2  Model Management System Characteristics

Characteristics                             Authors (1-11)
 (1) Model directory or encyclopedia        X X
 (2) Manipulation of Models                 X X X X
 (3) Interactive Input                      X
 (4) Update Parameters                      X
 (5) Interactive Correction                 X
 (6) Update Models and/or Model base        X X X X X
 (7) Model Integration                      X X X X X
 (8) Report Generation
 (9) Access to Models                       X X X X X X X X
(10) Centralized modeling                   X X X
(11) Storage of Models                      X X X X X X X X X
(12) Model Administrator                    X
(13) Access to Database                     X X X X X X
(14) Model Generation                       X X X
(15) Output to Database

Authors:
 1. Applegate, Klein, Konsynski, and Nunamaker 1985
 2. Applegate, Konsynski, and Nunamaker 1986
 3. Ariav and Ginzberg 1985
 4. Blanning 1982, 1985
 5. Bonczek, Holsapple and Whinston 1980, 1984
 6. Dolk and Konsynski 1984, 1985
 7. Elam and Konsynski 1987
 8. Klein, Konsynski and Becker 1985
 9. Miller and Katz 1986
10. Sprague and Carlson 1982
11. Stohr and Tanniru 1980

A discussion of the characteristics given in Table 2.2 follows. The parenthetical numbers refer to the number associated with the characteristics in Table 2.2.
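Several of these characteristics — the model directory (1), controlled access to models (9), centralized storage (11) and an administrator's grant of access (12) — can be suggested with a short sketch. This is a hypothetical Python illustration (all names invented; no MMS surveyed here is implemented this way):

```python
from dataclasses import dataclass, field

@dataclass
class ModelEntry:
    """One entry in the model directory or encyclopedia (characteristic 1)."""
    name: str
    kind: str                                   # "permanent", "ad hoc" or "canned"
    readers: set = field(default_factory=set)   # users granted access (9)
    updaters: set = field(default_factory=set)  # users allowed to update it (6)

class ModelBase:
    """Centralized storage of models (11) under one administrator (12)."""

    def __init__(self):
        self._directory = {}

    def store(self, entry):
        # The administrator decides which models are made available.
        self._directory[entry.name] = entry

    def access(self, name, user):
        # Access to the shared models is controlled centrally.
        entry = self._directory[name]
        if user not in entry.readers:
            raise PermissionError(f"{user} may not access model {name}")
        return entry

# The administrator stores a shared model and grants access to one user.
mb = ModelBase()
mb.store(ModelEntry("sales_forecast", "permanent", readers={"planner"}))
```

Under this sketch, mb.access("sales_forecast", "planner") succeeds while any other user raises PermissionError — the model-base analogue of the DBA granting READ-ONLY database access described earlier.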
Model base: Sprague and Carlson refer to the collection of models as a model base [Sprague and Carlson 1982]. Ariav and Ginzberg [1985] and Dolk and Konsynski [1984, 1985] refer to the model "directory" or "encyclopedia" (1). This is just a terminology difference. Just as a database management system has an associated database, so a model management system has some collection of models, whether that collection is described as a model base, a library of models or an encyclopedia of models.

User Flexibility: Users are able to manipulate models (2). This manipulation includes the ability to alter input values (3), parameter values (4) and the model itself (5, 6). A very important characteristic of an MMS is the ability to integrate models (7). This means that the output from one model can be used as input for other models. Report generation (8) is also associated with user flexibility.

Interactive User Access: The user has the ability to access models (9) through the MMS. Coupled with the user flexibility need, interactive access is a reasonable assumption.

Centrally Administered: All the authors in Table 2.2 draw an analogy between a database management system (DBMS) and an MMS. While only a few authors specify centralized control (10), they all refer to users accessing a common model base (11). Therefore, it is not unreasonable to expect a centralized administration of the model base. Centralized administration means that one person or group makes the decisions regarding which models to make available and who is granted access to the models (12).

Centralized Access: Models are stored centrally. Central storage (10) means that access can be controlled centrally.

Database Access by the Models: Direct access to the database (13) by the MMS is given by most authors.
Whether it is the corporate database or a database designed for the MMS is not answered in the literature, though both are suggested. It will be assumed that the MMS accesses a database through a database management system.

Storage of User-altered Models: Manipulation of models (6) and model generation (14) are strongly supported by these authors. Users will have some means of storing the models they create or alter from existing models.

Output Storage: Sprague and Carlson [1982] argue strongly for storage of output in the database (15) as a means of integrating models. Stohr and Tanniru [1980] are the only other authors that specifically refer to storage of output in the database. Widely held support (five of the eight author sets) for the ability to integrate models implies that some means will be found for retaining output for use as input, though the authors do not specify that the data be stored in the database. This research will assume that the generated output will be available for input to other models.

The purpose of Table 2.2 and the subsequent discussion was to summarize the characteristics of MMS as they are given in the literature. In summary, MMS should provide a centralized model base with links to a database. Users should be able to access models in the model base, alter the model if desired, or create new models using existing modules or writing the model with a modeling language. The user should be able to select data and parameters for input and retain the output for later use. The user should also be able to retain altered or new models for later use. The characteristics of MMS present some possible degree of contradiction. For example, will it be possible to retain the degree of user flexibility desired in the presence of a centralized model management system?

Research Framework

Researchers in MIS have struggled to provide a framework or theory for research.
Among the attempts is the Jenkins model, which presents a framework particularly for user-system interface research [Jenkins 1983]. (See Figure 2.3.) No one study attempts to encompass the entire framework, but rather each study looks at some combination of the possible variables reflecting the model's parts. The following discussion describes the framework components, gives examples of variables which have been used for each component and explains how this research will address each component. The Minnesota Experiments are used as a source for the illustrations of the model components [Dickson et al. 1977]. A complete discussion of the research variables used in this study is given in Chapter 4.

Figure 2.3  The Jenkins Model for MIS Research
(The human decision maker, the task, and the information system jointly determine performance.)

The characteristics of the human decision maker have been considered as a research variable in a number of empirical research projects. Characteristics measured have included demographics (age, sex, education, job experience), psychological (intelligence, risk taking, cognitive style), and managerial style (autocratic vs democratic, Theory X vs Theory Y). This research will not include human decision maker variables. An attempt to control for variability will be made by drawing upon subjects with common education backgrounds.

The task has varied along a number of lines: function (production, finance, accounting, marketing), level (strategic, managerial, operational) and character (level of difficulty, phase, structure). The majority of studies fix the task and manipulate the system variables. This research will vary the task.

The information system deals with the characteristics of the system used in the study. In DSS terms, this includes the data, model and output.
Data has been varied along a volume dimension (adequate vs overload) and a content dimension (raw vs summarized, raw vs statistical). No studies have addressed the issue of access to data at varying levels of user control. This research will manipulate the user's ability to select the data.

Models have been studied in a present vs absent sense; i.e., the presence or absence of a decision aid. The studies tend to fix the model, if one is used. The ability of the users to select and alter models has not been addressed by prior research. This research will manipulate the user's ability to access models, alter existing models, and create new models.

The features of data and model are frequently collapsed into just the issue of output. Output has been varied along the dimensions of media (paper vs CRT), volume (adequate vs overload) and style (tabular vs graphical, color vs monochrome). This research will not specifically address output except as an extension of the use of models.

Performance is the dependent variable in most of the DSS-related research. Performance can be associated with specific tasks (such as production schedules) or it can be measured in terms of the decision made (quality of the decision, cost or profitability associated with the decision, time to make the decision, learning over a number of similar decisions, accuracy of the decision). Another subset of performance measures includes use of the system (number of reports requested, key stroke monitoring, assessment of system quality). A third subset of performance measures addresses human responses (attitudes, satisfaction with the system, confidence in the decision, anticipated performance). This research addresses the attitudes measure expressed in terms of user preference.
The research framework presented in Figure 1.2 (repeated here for convenience) can now be described in terms of the Jenkins model. The scope of the research is presented as a subset of the Jenkins model in Figure 2.4. Characteristics of the human decision maker are not included in the research design; therefore, the human component does not appear in the framework. The task will vary along the dimension called decision type. The information system will be varied by manipulating the modeling environment provided by the DSS. Performance will be measured as preference for the modeling environment.

Figure 2.4  Research Scope Intersection with the Jenkins Model for MIS Research
(The research scope covers the task and the information system and their effect on performance; the human decision maker lies outside the scope.)

Figure 1.2  Research Framework
(Decision type and the DSS modeling environment — data access and model access — determine user preference.)

Summary

This chapter presents an explanation of the environment provided by a decision support system (see Figure 1.2). Model management systems are characterized as a synthesis from the MMS literature. The MMS characteristics outlined are useful for understanding the objectives of those working to develop MMS. The research framework is presented as a subset of the Jenkins model. The next chapter will describe the other major component of the research framework, the decision type. It also will provide the background literature for hypothesizing the effect of introducing controls with an MMS.

Chapter 3. Decision Types and Control

Purpose - To present the decision environment and control issues that are relevant to this research. To discuss implications for model management systems.
Decision Types

Decision support systems (DSS) are designed to support decisions made by managers [Davis and Olson 1986]. Managers make many types of decisions, and it is hypothesized by this research that the characteristics of the decision will have an impact on a user's preference for the modeling environment provided by the DSS (see Figure 1.1). The types of decisions made in an organization can be categorized using Anthony's framework for managerial activity [Anthony 1965]. This framework uses three categories: strategic planning, managerial control and operational control [Anthony 1965, p 22]. (See Figure 3.1.)

Strategic Planning
        |
        v
Management Control
        |
        v
Operational Control

Figure 3.1  Anthony's Framework

Strategic planning is defined as the process of determining the organization's goals and objectives. Strategic planning deals with "macro" issues concerning the entire organization rather than the "micro" issues of how to implement the goals. Examples of strategic planning would be acquisition of a competitor, expansion into a new product line and selection of a marketing strategy [Anthony 1965]. Management control is the intermediate level between strategy and operations. Management control is defined as "the process by which managers assure that resources are obtained and used effectively and efficiently in the accomplishment of the organization's objectives" [Anthony 1965, p 17]. Examples of management control activities include working capital planning, planning staff levels and formulating decision rules for operational control [Anthony 1965, p 19]. Operational control deals with the tasks that need to be performed to fulfill the goals of the organization (identified through strategic planning and specified through managerial control).
Examples of operational control activities are controlling credit extension; scheduling production; and measuring, appraising and improving workers' efficiency [Anthony 1965, p 19].

The information requirements of the three decision types vary along a number of dimensions. Strategic planning is concerned with broad issues involving organizational policies and goals. Information is usually aggregated and more external data is used than for other decisions. The scope and variety are large and the process tends to be non-routine. Operational control uses well-defined information with narrow scope. The information may be quite detailed and almost entirely from internal sources. The analysis applied tends to be routine. Managerial control falls between the two on each characteristic. These and other characteristics are presented in Figure 3.2 (reproduced from Gorry and Scott Morton 1971).

Characteristics      Operational         Managerial     Strategic
of information       control             control        planning
Source               Internal        <------------->    External
Scope                Narrow          <------------->    Very wide
Aggregation          Detailed        <------------->    Aggregate
Time horizon         Historical      <------------->    Future
Currency             Highly current  <------------->    Quite old
Required accuracy    High            <------------->    Low
Frequency of use     Frequent        <------------->    Infrequent

Figure 3.2  Information Requirements by Decision Type

Another characteristic of decisions is the amount of "structure"[1] present [Simon 1960]. Structured problems are those which are easily represented with mathematical relationships, such as budgets and profit/loss analysis. An example of an unstructured problem would be selection of a new factory site. Unstructured problems are those for which the decision maker has no ready means of analyzing the problem with the tools at hand. This may be due to a lack of experience (a specialist or more experienced person may

[1] Simon used the terminology "programmed" and "unprogrammed" but Gorry and Scott Morton chose to use "structured" and "unstructured" to represent the same characteristics.
See Gorry and Scott Morton, 1979, for details.

see an underlying structure) as well as the nature of the problem.

Well-structured decisions and routine tasks are the first to be supported by computer applications. Common examples are the well-defined accounting transaction processing such as accounts receivable and payroll. Routine reporting from the transaction database is also well defined. This is in part because they are more routine, but also due to the fact that their structure simplifies the design of computer support. Reporting for decision making has moved beyond periodic reporting to include ad hoc reporting capabilities. The increased power of present day computers at much reduced costs allows organizations the luxury of providing computer support for the less well-structured problems. Decision support systems (DSS) are intended to lend assistance to problems of an ill-structured nature. The decision maker uses a DSS for the more structured portions of the problem or to impose a structure on the problem or a part of the problem.

Gorry and Scott Morton [1979] combine the managerial activity categories of Anthony with the structured/unstructured terminology of Simon [1960]. They state that each managerial activity type includes decisions that are structured, unstructured and semi-structured (where parts of the decision process are structured and parts unstructured).

DSS-type capabilities are being used in all three decision types, as reported by Rockart and Flannery [1983] from a survey of 200 end users. Table 3.1, from Rockart and Flannery, shows the percentage of total computer use for each activity. It is clear that the capabilities listed in Table 3.1 are used to some small or large extent for each decision type.
For example, the report generation classification is a characteristic which would be appropriate to operational, managerial and strategic uses of a DSS.

Table 3.1  Computer Use

Use                        Percent
Operational                    9
Report generation             14
Inquiry/simple analysis       21
Complex analysis              50
Miscellaneous                  6
                             ---
                             100

This section presented a classification scheme for decisions that is common to the MIS literature. The classification scheme of strategic, managerial and operational will be used in this research. Strategic, managerial and operational decisions vary along a number of dimensions, including structure. The amount (or lack) of structure in a decision has implications for the ability to use a DSS to support the decision making process. While unstructured decisions can occur at all three levels, a
The need for control stems from the need for organizational direction, the need to motivate employees to work for organization goals rather than personal goals and the need to overcome the personal limitations of the employees. "Control is seen as having one basic function: to help ensure the proper behaviors of the people in the organization.” [Merchant 1985, p 4] Among the various types of controls that should or could be used to encourage desired behavior are personnel controls and action controls. Personnel controls are 40 designed to encourage desired behavior through the individual's self-control and by fostering social control. Personnel control is implemented by (1) carefully selecting and placing individuals, (2) training (for formal and informal mentoring), to instill expectations and standards, (3) maintaining a culture that encourages desirable behavior, (4) designing group-based rewards that encourage performance beneficial to the organization rather than performance only of a self-beneficial nature, and (5) provision of the necessary resources such as information, supplies, staff support and/or decision aids (emphasis added) [Merchant 1985, p 40]. Action controls are those controls that "are used to ensure that individuals perform (or do not perform) certain actions that are known to be beneficial (or harmful) to the organization” [Merchant 1985, p. 29]. The action controls generally applied to computer systems are physical constraints such as door locks and computer passwords. The use of physical constraints over the behavior of employees is a way to enforce effective and efficient use of resources [Merchant 1985]. The need for ”controls” in a computerized environment is accepted by many disciplines, but the meaning of controls is not uniform across the interest areas. For example: (1) A firm's accountants have a highly developed sense of control (security) over physical assets. 
The procedures for handling cash and valuable inventory are well developed and monitored. Policies such as the separation of duties to increase security over the assets have been transplanted into the computerized accounting environment [Arens and Loebbecke 1980]. (2) EDP auditors, whether internal or external, apply policies of control, in part through audit trails, to the computer system bringing assurance of, among other features, transaction logs [Weber 1988; Gallegos et al 1987]. (3) Data processing personnel's concern for control is from two perspectives: control over the hardware and control over software (including magnetically stored data as well as programs). Hardware is protected by various devices: placing locks on microcomputer hard disks to limit access to key holders; bolting equipment (micros, terminals, printers) to the work surface; installing door locks on the computer room and requiring employee badges in order to restrict access to sensitive areas. Control over software (including data and programs as well as software packages) can be exerted by limiting access (through passwords, retina mapping, palm prints, and having librarians check out software, data and programs to authorized individuals) and by maintaining a policy of creating backup copies stored in a secure location. A plan for recovery from disaster (flood, fire, bomb, disk crash) is included in the backup category [Brill 1983]. Data integrity is valued and protected by policies for data entry, update and deletion. Additionally, DP has adopted the accountants' policy of separation of duties for protection of computer resources. For example, programmers are prevented from alteration of the production copy of a program or accessing live data. All changes must be made and tested in separate environments and scrutinized before the copy becomes the production model [Date 1982].
(4) Management specialists and accountants use rewards and bonuses as a means of controlling (guiding, influencing) others' behavior [Lewellen and Huntsman 1970; Murphy 1985]. This research is not concerned with the design of a control system for DSS. Rather, it is concerned with the effect of the controls once they are introduced. While the literature in the area of the design of control systems for computer environments is extensive, studies on the users' reaction to the controls are difficult to find. The need for a sense of personal control is well documented in the psychology literature. The conclusion that humans (and other life forms such as dogs, cats, rats, monkeys and paramecia) prefer more control over the things making up their environment is frequently drawn from laboratory experiments dealing with electric shock [Averill 1973]. Given a choice between administering or receiving electric shocks, human subjects generally choose to administer. Given the choice between self-administered electric shock and electric shock administered by someone else, most human subjects prefer self-administration. In other experiments, subjects were less stressed by having the choice of order of tests versus no choice [Stotland and Blumenthal 1964]. The call for introducing controls into DSS-type environments has come from many quarters, not just the advocates of model management systems (MMS). Benson's research in end user computing pointed to increased use of microcomputers for data analysis. Support for introducing limitations on personal modeling comes from the IS professionals who expressed concern over the lack of documentation, data backup and security. Keen and Woodman [1984] reported increasing concern over incompatible machines, proliferation of personal databases and the hidden costs of 'low cost' microcomputers slipped in under 'other expenses' on departmental budgets.
Support for introducing standardized data downloading policies comes from Couger [1986], who suggests that downloading data at different points in time is the contributing factor in the production of conflicting reports. Each author presents suggestions for exerting control over microcomputers. An MMS is one of the solutions provided [Applegate et al 1985; Dolk and Konsynski 1984, 1985]. The introduction of action controls to guide behavior increases the predictability of employees' actions, but does not guarantee that errors will not take place. This is intuitively obvious when considering organizational needs such as establishing procedures for approving customer credit, for release of inventory, and rules for handling petty cash. The more routine the task, the more easily it can be "programmed" to increase consistency. Creating standards requires an excellent knowledge of what actions are desirable and the span of variations from routine that are likely. This knowledge exists for routine jobs, but unfortunately it is not available for more unstructured problems. The disadvantage of transferring the approach to less structured problems is that only equally or more highly skilled people are in a position to establish the standards. This means that the policies and procedures an employee is directed to follow should be established by someone with a good understanding of the decisions that the employee has to make. This becomes very difficult to do when specialist positions are considered. It also becomes very costly to use the time of one high-level executive to establish the procedures that another executive should use [Merchant 1985, pp 127-129]. A second disadvantage to action controls is that "most forms of action control discourage creativity, innovation, and adaption" [Merchant 1985, p 128].
Fisher [1978] reports that workers feel increased intrinsic motivation when they feel they have control over their own performance. Merchant also states: "Some, perhaps most, people are not happy operating under them (action controls). Some people, especially the more independent, creative people, may leave to find other jobs that allow more opportunity for achievement and self-actualization." [Merchant 1985, p 128]

System Rejection

While there is no research that addresses the question of user reaction to controls, the literature does support the view that fewer or lower controls will be preferred to higher or more controls [Merchant 1985; Fisher 1942]. The reduction in preference may be great enough to result in resistance to using the new system or complete rejection of the system [Lucas 1975; Markus 1983]. Lucas [1975] has written extensively on the topic of system failure. The core of his finding is that systems are rejected by users for two major reasons: the system is technically poor or the users were not involved in the design process. User resistance can take many forms, varying from a lack of cooperation to actual sabotage of the system. If the system use is optional, such as for the use of a DSS, resistance can take the form of complete nonuse of the system. Failure due to technical faults is not surprising. If a system does not perform correctly, if the hardware fails or if the software is full of errors, the users will be dissatisfied and will resist using the system. This suggests that new systems should be well developed, tested and debugged prior to implementation. Technical faults found after implementation should be remedied. If this is done, then user resistance should be eliminated or reduced. User resistance or rejection due to a lack of user involvement in the system design cannot be easily remedied after implementation.
User involvement has become a watchword in system analysis and design methodologies [Gane and Sarson 1979; DeMarco 1978; Eliason 1987]. Soliciting the user's local knowledge has been credited with improved design and reduction in user resistance. The issue of user resistance can also be important when there is a shift in power brought about by introduction of the system [Markus 1983]. Those individuals who lose power are more likely to resist the change than those who gain power.

Implications for MMS

Applying these findings to MMS development, current thinking in system development would support user involvement in the design of MMS. This policy might be ignored if the introduction of an MMS were seen as a technological change that is transparent (invisible) to the user. The risk is that the shift in power is not recognized. Users may be given less local control over the selection and creation of models and the selection and aggregation of data. This control would be passed over to modeling specialists or a centralized model and data management group. Since DSS are characterized as tools for upper level management where creativity and innovation are desirable, an MMS that restricts the user may be resisted.

Control in an MMS Environment

An MMS, as a component of a DSS, is assumed to be embedded in a computerized system. It will be assumed that a reasonable level of physical security is achieved through the use of disaster prevention procedures and recovery procedures as well as implementation of procedures for limitations to physical access to the facilities. Additionally, it will be assumed that a reasonable level of data security is achieved through passwords or other means to limit access to the data and programs. Therefore, all reference to data will be based on the assumption that the user has been given authorization to access a subset of the corporate database.
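The assumed baseline of data security — passwords plus per-user authorization to a subset of the corporate database — can be sketched in a few lines. This is a minimal illustration only; the user names and dataset names are hypothetical and do not come from the study.

```python
# Minimal sketch of the assumed baseline data-security control:
# each user is authorized for only a subset of the corporate
# database.  All names here are hypothetical illustrations.

AUTHORIZED_SUBSETS = {
    "ar_clerk":   {"accounts_receivable"},
    "budget_mgr": {"sales_history", "expense_history"},
}

def can_access(user: str, dataset: str) -> bool:
    """Return True only if the user is authorized for the dataset."""
    return dataset in AUTHORIZED_SUBSETS.get(user, set())

print(can_access("ar_clerk", "accounts_receivable"))  # True
print(can_access("ar_clerk", "payroll"))              # False
```

All references to data in the chapters that follow presume a check of this kind has already succeeded.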
Beyond these assumed controls, control in the context of MMS can take on a number of different meanings. As defined in chapter 2, a model is made up of five components: retrieved data, parameters, process, output data and reports (see Figure 2.2, repeated here for convenience).

    Retrieved data   ----->  +---------+  ----->  Output data
                             | Process |
    Parameter values ----->  +---------+  ----->  Reports

    Figure 2.2 Components of a Model

Control of each component could be implemented at any point along the spectrum from a complete absence of control to very high or tight control. This research will consider three points along the spectrum which will be called low, mid and high. The three points will represent values between the extremes so that low control is not a complete absence of control and high control is not the most extreme level possible. How the controls and these points will be operationalized is discussed in detail in the chapter on research variables and methodology (chapter 4). An organization could choose to introduce control over any or all of these components by a number of means. Considering just the three levels of control (low, mid and high) for the five components, there are 243 (3^5) possible variations of control in Figure 2.2. Some examples will illustrate how portions of the model can be highly controlled while other components are less controlled.

Example 1. High control over the entire modeling environment could take place by predefining the processes, the retrieved data sources, parameters and reports. The output data and reports could be restricted to authorized users. In an environment with high controls over all of the components, the user would activate the model and receive the report. A routine running of a sales forecast would be an example of this type of situation. Control could be loosened on portions of the model. For example:

Example 2. Users could be allowed to alter parameter values and select input data and alter the processing.
Report and output data could still be highly controlled.

Example 3. Users could be permitted to alter report format without being able to alter the processing which creates the report or who has access to the output and report.

Example 4. The organization could permit users access to a subset of the modeling software (the ability to define the processing) and still retain high control over the data accessible to the model or user. For example, many users in the organization may have access to a modeling package such as IFPS, but only some users are authorized to use predefined models which import data from the financial database into the models. The users, the models, or both, may have an authorization level for access to data.

Example 5. Different users could be afforded different capabilities. User A could be allowed to run model X without alteration and also be allowed to access and alter parameters for model Y. User B could be allowed to alter the input source for model X but not access or run model Y at all.

These examples demonstrate that "control over modeling" can be implemented in a variety of ways. Therefore, any study of users' reactions to the controls will have to specify the types of controls being considered.

Examples of Control Needs in Financial Models

This section illustrates the differences in how controls can be applied to types of models used to support various decisions. The following examples are designed to illustrate how the control level (required to ensure the security of the data and results) is related to the sensitivity of the data and the characteristics of the decision. Each example will present a decision to be made and the implications for the model components. This research will study whether the user's reactions to controls are as predicted in these examples.

Operational decision: Analysis of overdue accounts

Retrieved data is drawn from the corporate database. Individual customer accounts need to be accessible.
Therefore, direct database access is advantageous. Due to the security needs for Accounts Receivable, the file should be accessed by password and be "read only" so that the user is not able to alter account balances. Because of the sensitivity of the data, the data should not be downloaded where DBMS control is lost. This suggests that processing be done in the corporate computer environment rather than in a microcomputer. Parameters will be stable due to the routine nature of the problem. Parameter values can be programmed into the model or read in from a file. As part of the periodic review of the Allowance for Doubtful Accounts, the accountant responsible for Accounts Receivable will review the model results in order to determine the need for adjusting the parameters. The amount of the adjustment will be determined by the accountant and the change implemented by modeling specialists (such as management scientists or operations researchers) within the organization. Process will be stable due to the routine nature of the decision. Changes to the model will be done only when or if there appears to be a need to update the model. Once again, changes will be done by the modeling specialist. Output data: Due to the sensitive nature of the input data, output data should be controlled tightly by limiting access to the output data. The same criteria that were applied to grant access to the data itself should be applied to the output data. Reports: The report generated has the same sensitivity as the output data and should be controlled in a similar fashion.

Managerial decision: Budget preparation

Input data is drawn from aggregations of historical data. If the data is not available from existing sources (file or report), then the user will need the ability to generate the values from the corporate database. Forecasting models may be used to generate predictions (other input values) for future periods.
Parameters: If the resulting budget is to be aggregated at a higher corporate level, some of the parameters (such as interest rate) may be dictated outside the user's department. Other parameters may be given values by the user. Process: If the budgets are to be aggregated, the format and variable definition may be dictated from a higher organizational level. Internal budgeting should allow the user to design the format to fit the budget needs (including more or fewer details) and give the manager the ability to perform "what if" analysis. Output data: Due to the proprietary nature of the output, the budget should be released to authorized personnel only. Control procedures can be used to assure that variable definition and parameter values are communicated as well as the fact that the output represents projected results from a particular forecast method. Report: As with the output data, the report should be released only to authorized personnel.

Strategic decision: Debt or equity funding

Input data will be drawn from internal as well as external sources. Since the internal data will be highly aggregated, access to the corporate database will not be useful. A specialized database with appropriate aggregations could be helpful. External data sources may be electronically accessible to aid data input. Parameters: Due to the speculative nature of interest rate projections, the user will need the ability to do "what if" analysis. Therefore parameters should be input by the decision maker and that person should have the ability to rerun the model with altered parameters. Process: Some standard computations may be preprogrammed for use by the decision maker. However, due to the non-routine nature of the decision, the programs are not likely to exist. Analysis of creative financing combinations implies that the decision maker will need the ability to easily create models to study new financing combinations.
Output data and reports: The output will probably be used only by the decision maker. Therefore, the output should not be accessible to other personnel.

Implications

The examples given illustrate that controls can be placed on the data and modeling aspects of a DSS in various ways for various reasons. To summarize: As the examples illustrate, high control over modeling is best suited for operational level decisions where frequently or routinely run models are supported by database access. The frequent or routine nature of the decisions makes access to predefined models or standard modules cost effective. The employees running the models do not have a need to make alterations to the model, so the ability to alter models is withheld. High control over modeling is least suited to the decision needs of strategic problems where data tends to be highly aggregated and external. Due to the non-routine nature of strategic decisions, limiting modeling to predefined models would require the ability to determine the decision maker's needs prior to the occurrence of the problem. Users will be unlikely to accept limitations on their ability to create or alter models to suit the decision. Managerial decisions fall between operational and strategic decisions. For this reason, the decision maker's reactions are likely to fall between those for operational and strategic decisions. High control is appropriate for sensitive data across the decision types. Managers making operational decisions will not be inconvenienced by high controls if the means of selecting the data is available. Routine uses of the data will probably be well supported. Strategic decisions that require new data selections falling outside predefined data extractions will probably not be as well supported. As with controls over models, decision makers are less likely to be inconvenienced or hindered by limitations on the ability to select data for operational decisions than they are for strategic decisions.
This is based on the assumption that operational decisions will be well supported. Acceptance of restrictions for managerial decisions is expected to fall between the levels of acceptance for operational and strategic decisions.

Summary

This chapter provides the background for use of the Anthony categorization of managerial tasks: i.e. operational, managerial and strategic. The characteristics of the decision types are discussed. Combinations of controls over DSS environments are presented. Increasing controls (limitations) over the components of the model are expected to be associated with decline in preference (or acceptance). The next chapter will describe how controls will be addressed by the research methodology.

Chapter 4. Variables Definition, Research Hypotheses and Methodology

Purpose - To identify variables, state the research hypotheses and present the methodology

This research used a combination of questionnaire and data collection through conjoint analysis implemented with card sorting [Green and Tull 1978; Green and Srinivasan 1978]. This chapter discusses the variables that are considered in the research, the relationship between the variables, the research hypotheses and the methodology for studying them.

Variables

Model management systems (MMS) are as yet theoretical and not available for study. Therefore, it is not possible to determine the acceptance or rejection of an implemented MMS. It is possible, however, to study the users' stated preference for a modeling environment that may result from the introduction of an MMS. This research is designed to answer the question: How do the controls introduced with model management systems affect users' preference for the modeling environment provided by the DSS under varying decision types? This research studied the effect of the modeling environment (as described by the controls) and the decision type on the user's preference for the modeling environment.
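The factorial structure implied by the design — three decision types crossed with three levels of data control and three levels of model control — can be enumerated directly; each tuple corresponds to one combination a subject might rate. This is an illustrative sketch of the design space, not the conjoint instrument actually used.

```python
from itertools import product

decision_types = ["strategic", "managerial", "operational"]
control_levels = ["low", "mid", "high"]

# Each profile pairs a decision type with one level of data control
# and one level of model control.
profiles = list(product(decision_types, control_levels, control_levels))
print(len(profiles))  # 27: 3 decision types x 3 data levels x 3 model levels
```

Conjoint analysis then recovers the relative contribution of each factor level from subjects' preference scores over these combinations.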
This research will consider four variables, three independent and one dependent. The independent variables are decision type, data control and model control. The dependent variable is user preference for modeling environments. The relationship between the variables is presented in Figure 1.2, repeated here for convenience.

    DSS (Data Access, Model Access)  <---->  Decision Type
                    |
                    v
             User Preference

    Figure 1.2 Research Framework

Each of these variables will now be discussed in full.

Decision Type

As was shown in the literature review, the task at hand is a common independent variable. In this research, the task is represented by the decision type. It has been found that "behavior appears to be (to a large extent) determined by the characteristics of the task..." [Chervany and Dickson 1974]. In this research the task will be manipulated using the categories given by Anthony [1965]: i.e. strategic, managerial and operational. The operationalization of decision type will be presented in the section on research methodology.

Controls

An MMS will exist as a component of a DSS; therefore, the decision maker will have a modeling environment largely described by the capabilities of the DSS. Other factors affecting the larger modeling environment, such as organizational reward structure and peer support, are outside the scope of this research. The nature of the modeling environment provided by the DSS was manipulated by varying the data control and model control levels. A model management system is presented as a means of controlling modeling. As previously presented, control can be exerted over some or all of the model components (see Figure 2.2, repeated here).
    Research scope: retrieved data and process

    Retrieved data   ----->  +---------+  ----->  Output data
                             | Process |
    Parameter values ----->  +---------+  ----->  Reports

    Figure 2.2 Components of a Model Showing Research Scope

This research is designed to focus on those parts of the model which will be most directly affected by an MMS, as indicated by the research scope in Figure 2.2 above. The focus is on control over the access to data (retrieved data) and control over access to models, alteration of models and retention of personalized models for later use (process). These factors reflect the two major functions of a DSS: selection of data and manipulation of data through modeling. The importance of these two functions is reflected in the problems in DSS as presented by Sprague and Carlson [1982], and the problems associated with the lack of an MMS, as presented by the literature [Applegate et al. 1985; Dolk and Konsynski 1984, 1985]. Further, the MMS literature explains that problems exist with user-developed modeling because of (a) out-of-date or faulty data (due to errors in manual reentry) and (b) errors in user-defined models [Sprague and Carlson 1982]. The other components (parameter values, output data and reports) have not been featured in the MMS literature except from the perspective of error-laden reports and output passed on to colleagues who are unaware of the source. The recognition that these problems occur increases the probability that management will choose to implement controls over the use of data and the creation of models.

Model Control

For this research, control over models and modeling is called MODEL CONTROL. Model control was manipulated by varying the amount of control (or limitations) on how models may be created or altered. Model control was presented to the subjects at three levels, described as low, mid and high model control. Low model control is not a complete absence of control but rather an environment with few limitations.
Likewise, high model control does not represent the most restrictive modeling environment possible, but rather a modeling environment with many restrictions. Mid level model control is a modeling environment with more limitations or restrictions than low model control but with fewer limitations than high model control. Operationalization of model control is presented in detail in the section on the research methodology.

Data Control

The controls over data put limitations on the ability to specify which data can be extracted from the database. For this research, the ability to select the data used in terms of a data extraction is called DATA CONTROL. Data control was manipulated by varying the amount of control (or limitations) on how data may be selected or extracted. Data control was presented to the subjects at three levels, described as low, mid and high data control. Low data control does not imply a complete absence of control but rather an environment with few limitations. Likewise, high data control does not represent the most restrictive data access possible, but rather an environment with many restrictions. Mid level data control is an environment with more limitations or restrictions than low data control but with fewer limitations than high data control. Operationalization of data control is presented in detail in the section on the research methodology. The strengths of the effects being tested will certainly be influenced by the richness of the model base, the modeling tools and the data extraction capabilities. A rich environment should provide (1) a model base with models designed to address the needs of the managers with predefined models, and modeling languages and modules to assist the managers in designing personal models; and (2) predefined data extractions to meet most needs with a data extraction language to provide for needs beyond those provided by the predefined extractions.
A poor model base and data extraction environment could receive such low preference that declines in preference due to introduction of controls would be difficult to identify. To increase the possibility of the effects being significant, this research will assume a rich environment. This quality (richness of modeling resources) is controlled for in the methodology by describing it as a rich environment.

User Preference

The ideal dependent variable would be one which measures the impact of decision type and modeling environment on the decision-making process. This suggests measures of decision effectiveness (the quality of the decision) or efficiency (use of resources such as human and computer time), as suggested by the Jenkins model [Jenkins 1983]. These measures should be taken, but they will not be possible until working MMS or pseudo-MMS are available. In the absence of the ability to measure the impact on decision making directly, the dependent variable will be user preference for the modeling environment. For simplicity, the variable will be referred to as USER PREFERENCE. User preference is within the subset of human attitudes and opinions given by the Jenkins model as dependent variables [Jenkins 1983]. The use of preference as a dependent variable is not without precedent. Consumer preference is the normal dependent variable in marketing research when conjoint analysis is used to study product characteristics [Green and Srinivasan 1978]. Within the MIS research, Alavi solicited managers' preferences for characteristics of a DSS [Alavi 1982]. As given by the research framework, it is expected that user preference for the modeling environment will be affected by the independent variables. User preference for the modeling environment is an important variable for research in MMS. User preference has not been established as a calibrated measure; at best it is a relational measure, a person preferring A to B.
Some measure of low preference could be equated with unacceptable performance, so a person can prefer A to B, B to C and also state that C is unacceptable. The ultimate success or failure of computer systems hinges on user acceptance, particularly for systems where use is optional [Lucas 1975]. This research does not address when the modeling environment is unacceptable, but the ultimate result of low preference is a rejected system.

Variable Relationships and Research Hypotheses

The research design can be represented by a cube (see Figure 4.1). The research hypotheses address each of the main effects and the two- and three-way interactions.

    [Figure 4.1 Research Variables: a 3 x 3 x 3 cube with axes data control (low, mid, high), model control (low, mid, high) and decision type (strategic, managerial, operational)]

Hypothesis 1

It is hypothesized that the relationship between decision type and user preference (the main effect of decision type) is monotonically increasing; i.e. as decision type changes from strategic to managerial to operational, user preference for the modeling environment will increase monotonically (see Figure 4.2).

H1: Change from strategic to managerial to operational decisions will be associated with monotonically increasing user preference for the modeling environment.

    [Figure 4.2 Preference by Decision Type: user preference rising from strategic to managerial to operational decisions]

This hypothesis has not been addressed by prior literature and theoretical support for H1 is tenuous. The hypothesized positive relationship is due to the less-routine nature of strategic decisions. The routine nature of operational decisions increases the likelihood of the existence of software packages and experience with in-house programming to support the model development and the data extraction processes.
The ad hoc nature of strategic decisions means there is less experiential support, fewer software packages and fewer predefined data extraction processes in place, which would help in creating and using models to assist in the decision process [Gorry and Scott Morton 1971]. Recall that the dependent variable is user preference for the modeling environment; it is not a measure of preference for the decision type. Therefore, arguments that the challenging nature of strategic decisions with firm-wide implications will be associated with greater preference cannot be used to argue for a negative relationship. Note also that H1 does not address the linearity or nonlinearity of the relationship. It only addresses the direction of the relationship. The alternative hypothesis can be expressed in terms amenable to statistical testing:

H1: u_strategic < u_managerial < u_operational, where u_i = mean preference for decision type i

The null hypothesis for statistical testing is:

H1_0: u_strategic >= u_managerial >= u_operational, where u_i = mean preference for decision type i

Hypothesis 2

The relationship between each control type and user preference (main effects due to data control and model control) is hypothesized as monotonically decreasing. This means that as controls are increased, users will show less preference for the modeling environment. See Figure 4.3.

H2.1: Increasing data control will be associated with monotonically decreasing preference for the modeling environment.

H2.2: Increasing model control will be associated with monotonically decreasing preference for the modeling environment.

    [Figure 4.3 User Preference by Control Type: preference declining from low to mid to high levels of data control and of model control]

Support for this hypothesis comes from non-MIS research [Merchant 1985; Fisher 1942], but is intuitively acceptable for MIS research; i.e. people prefer fewer constraints or limitations to more constraints.
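H2.1 and H2.2 each assert a monotone ordering of mean preference across control levels. With invented preference scores (the 0-100 scale is the study's; the numbers are not), the ordering check amounts to comparing level means:

```python
# Illustrative check of the monotone ordering in H2.1/H2.2.
# The preference scores below are invented for demonstration only.
scores_by_level = {
    "low":  [80, 75, 90],
    "mid":  [60, 55, 70],
    "high": [40, 35, 50],
}

means = {lvl: sum(v) / len(v) for lvl, v in scores_by_level.items()}

# Alternative hypothesis: mu_low > mu_mid > mu_high
monotone_decreasing = means["low"] > means["mid"] > means["high"]
print(monotone_decreasing)  # True for these illustrative data
```

In the actual analysis the ordering is evaluated inferentially (analysis of variance on subjects' scores), not by a simple comparison of sample means as sketched here.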
Both of the controls considered in this research are expected to affect preference in a similar fashion. (Once again, linearity is not hypothesized, but the direction of the relationship is.) These hypotheses will be statistically tested by considering the alternative hypotheses:

H2.1: u(low) > u(mid) > u(high)
    where u(i) = mean preference for level i of data control

H2.2: u(low) > u(mid) > u(high)
    where u(j) = mean preference for level j of model control

These hypotheses will be accepted by rejecting the null of each hypothesis.

H2.1(0): u(low) <= u(mid) <= u(high)
    where u(i) = mean preference for level i of data control

H2.2(0): u(low) <= u(mid) <= u(high)
    where u(j) = mean preference for level j of model control

Hypothesis 3

Hypotheses 1, 2.1 and 2.2 deal with the main effects due to decision type and control level. The next hypotheses address the interactions between decision type and control.

H3.1: The difference between user preference for the modeling environment under operational decisions and user preference for the modeling environment under strategic decisions will be greater for high data control environments than for low data control environments.

H3.2: The difference between user preference for the modeling environment under operational decisions and user preference for the modeling environment under strategic decisions will be greater for high model control environments than for low model control environments.

Interaction between the decision type and the controls is expected to occur. Strategic decisions in high data control environments are hypothesized to be associated with lower user preference for the modeling environment than user preference for the same modeling environment for operational decisions. In low control environments, the operational decisions will still be preferred, but the difference will be less. (See Figure 4.4 a.) To understand this effect, consider the counteracting influences of the two main effects.
Increased data control is expected to cause a decrease in user preference (H2.1) while a change in decision type from strategic to managerial to operational is expected to increase user preference (H1). In the absence of interaction between decision type and data control, the pure additive effect of the two main effects would create lines which are parallel, with the line for operational decisions above the line for strategic decisions. (See Figure 4.4 b.) H3.1 states that the interaction between data control and decision type will be such that when data control is increased, user preference for the modeling environment will decrease at a greater rate for strategic decisions than the rate of decrease in user preference for the modeling environment under operational decisions.

The hypothesized interaction will be tested by generating a difference score. Each subject gives each level of data control a preference score under each decision type. The difference scores will be computed as the preference score for the modeling environment for operational decisions minus the preference score for strategic decisions. Similarly, interaction between decision type and model control is expected to be similar to that between decision type and data control. (See Figure 4.4 c and d.)

[Figure 4.4: Difference in Preference by Control Level and Decision Type. Panels a and c plot preference against control level for operational and strategic decisions; panels b and d plot the difference in preference against control level. Panels: a. Alternative H3.1 (data control); b. Null H3.1 (data control); c. Alternative H3.2 (model control); d. Null H3.2 (model control).]
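The difference-score construction just described can be sketched as follows; the ratings are invented for illustration, not data from this study.

```python
# Hypothetical sketch of the difference score described above: for one
# subject, preference under operational decisions minus preference under
# strategic decisions at each data-control level. Ratings are invented.
preference = {
    "strategic":   {"low": 80, "mid": 60, "high": 30},   # 0-100 ratings
    "operational": {"low": 90, "mid": 85, "high": 75},
}

difference = {
    level: preference["operational"][level] - preference["strategic"][level]
    for level in ("low", "mid", "high")
}
print(difference)

# H3.1 predicts the gap widens as data control rises: D(low) < D(mid) < D(high)
assert difference["low"] < difference["mid"] < difference["high"]
```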
The alternative hypotheses for statistical testing are:

H3.1: D(low) < D(mid) < D(high)
    where D(i) is the difference in preference for data control at level i

H3.2: D(low) < D(mid) < D(high)
    where D(j) is the difference in preference for model control at level j

The null hypotheses are:

H3.1(0): D(low) >= D(mid) >= D(high)
    where D(i) is the difference in preference for data control at level i

H3.2(0): D(low) >= D(mid) >= D(high)
    where D(j) is the difference in preference for model control at level j

Hypothesis 4

H4 addresses the two-way interaction between data control and model control.

H4: There will be an interaction between data control and model control.

The nature of the interaction cannot be argued from prior research. However, an interaction is expected because of the interdependent nature of data access and model access. Limitations on one may diminish the value of the other. For example, in the presence of high control over model access, the value of data access may be diminished; i.e., the data has less value without models for data manipulation. Therefore, introducing higher data control would result in less decline in preference for the modeling environment than if model control were at a lower level. Figure 4.5a presents the absence of an interaction between data control and model control. Figure 4.5b indicates the presence of an interaction.

[Figure 4.5: Interaction between Data Control and Model Control, plotting preference against control level for low and high model control. Panel a shows the null of H4 (parallel lines); panel b shows the alternative (nonparallel lines).]

Hypothesis 5

Hypothesis 5 addresses the three-way interaction between decision type, data control and model control. The relationship between decision type, control and user preference should apply for each control type.
H5: The strength of the relationship between data control and user preference will not be equal to the strength of the relationship between model control and user preference, AND the direction of the inequality will differ across the decision types.

The directions of the relationships between data control and user preference and between model control and user preference have been hypothesized in H3.1 and H3.2. The strength of the relationship is simply the magnitude of the reduction in preference for a given increase in data control or model control. In a linear relationship this would be expressed in terms of slope. There is no prior research to suggest the strength of the relationships should be the same for data control and model control. For example, one could argue that the high aggregation level of data for strategic decisions suggests that the decision maker has alternative sources of data in aggregate form. For this reason, data extraction restrictions will have less impact on preference for the modeling environment than the same increased restrictions for managerial and operational decisions. Likewise, it could be argued that the need to create or alter models is higher for strategic decisions. Therefore, the impact on preference caused by introducing limitations on model access should be greater for strategic decisions than for operational decisions (which are well-served by existing models).

Assuming the main effects for decision type, data control and model control as well as the two-way interactions between decision type and control all hold, and, for convenience, assuming linearity of the relationships, the absence of three-way interaction means that the collection of points for strategic decisions will form a plane with a greater decline in preference for equal increases in data control and/or model control. (See Figure 4.6 a.)
The presence of a three-way interaction might cause the collection of points for strategic decisions to resemble the non-planar shape in Figure 4.6 b.

[Figure 4.6: Interaction between Model Control, Data Control and Decision Type, plotting preference against data control at each level of model control. Panel a shows the null of H5 (a plane); panel b shows the alternative (a non-planar surface).]

The statistical test for H5 will be the three-way interaction term in ANOVA. If the interaction term is significant, the nature of the interaction will be studied by looking at the rating given the profile with low data control and mid model control compared to the rating given the profile with mid data control and low model control. The following is the rationale for this comparison.

Suppose a person were asked to rank the three combinations of equal control level; i.e., LL (low data control and low model control), MM (mid data control and mid model control), and HH (high data control and high model control). Presuming that the main effects for data and model control are in the hypothesized direction, a ranking of equal control levels for data and model control profiles should be:

LL
MM
HH

If data control and model control have equal impact on user preference for the modeling environment, then the change from low to mid-level data control will have the same impact on user preference as the change from low to mid-level model control. Then LM (low data control, mid-level model control) and ML (mid-level data control, low model control) should be equally preferred. Similarly, this relationship should hold for MH and HM. Excluding the extreme pairs, LH and HL (because there is no rationale for predicting their location), the ranking of the other seven profiles from most preferred to least preferred, and showing equality on the same level, should be:

LL
LM  ML
MM
MH  HM
HH

The nature of the interaction may be highlighted by the order in which (1) LM and ML and (2) MH and HM are rated across decision types.
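The additive reasoning behind this ranking can be sketched with invented part-worth penalties; the point is only that equal impacts force the LM/ML and MH/HM ties, while unequal impacts break them.

```python
# Sketch of the additive logic above. 'penalty' holds invented preference
# losses for each control level; none of these numbers come from the study.
penalty = {"L": 0, "M": 10, "H": 25}

def utility(profile, weight_data=1.0, weight_model=1.0):
    """Additive utility of a two-letter profile (data level, model level)."""
    data_level, model_level = profile
    return 100 - weight_data * penalty[data_level] - weight_model * penalty[model_level]

# Equal impact of the two controls produces the predicted ties.
assert utility("LM") == utility("ML")
assert utility("MH") == utility("HM")

# If data control matters more, ML drops below LM, reordering the profiles.
assert utility("ML", weight_data=1.5) < utility("LM", weight_data=1.5)
```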
For example, the order may be as in Table 4.1. Note that LM and ML are ranked in reverse order between strategic and operational decisions, as are MH and HM.

Table 4.1 Sorted Order by Decision Type

Strategic   Managerial   Operational
LL          LL           LL
LM          ML           ML
ML          LM           LM
MM          MM           MM
MH          HM           HM
HM          MH           MH
HH          HH           HH

Data Collection

The data collection process used two approaches. First, conjoint analysis was used to collect the data for testing the research hypotheses. Second, a questionnaire was used to collect demographic information, computer experience and the controls in place in the subjects' workplace.

As a data collection and analysis tool, conjoint analysis provides a means of studying the trade-off decisions common to so much of MIS empirical research. Conjoint analysis is widely used in marketing research for soliciting consumers' preferences for product characteristics [Green and Wind 1975; Jain et al. 1979; Green and DeVita 1975; Rao 1977]. Also, it has been used in an accounting dissertation [Wilson 1984] and has appeared in accounting journals [Siers and Blyskal 1987]. Conjoint analysis is a data collection method, so called because the data is collected in one process (conjointly) rather than in separate collection processes for each item. Conjoint analysis is used for studying the trade-off decisions made between two alternatives, neither of which represents the best of all factors. For example, suppose that in the course of developing a new air freshening product, it has been found that consumers prefer floral scent to pine and prefer cylindrical containers to rectangular. If the product development group is considering combinations of characteristics, they would like to know which consumers would prefer more: a cylindrical container with pine scent or a rectangular container with floral scent. The advantage of conjoint analysis is the ability to measure the relative importance of the features when trade-offs are made; i.e.,
is scent more or less important than container shape? This provides a better analysis method than examining the characteristics singly. Conjoint analysis is also able to reveal preference for intermediate values as well as extreme points. For example, beverages are preferred at the extremes (hot or cold) rather than at a mid-way, lukewarm temperature. Saltiness or spiciness is preferred at an intermediate value more than at either extreme of very spicy or bland.

A number of factors, each of which occurs at multiple levels, can be built into one data collection process. Conjoint analysis provides the means of testing combinations of these factors for consideration of the trade-off decisions that are made by a subject when considering profiles (combinations of factor levels) that do not represent the subject's optimal combination. For this research, the factors of interest are the research variables: data control, model control and decision type.

Conjoint analysis has been implemented in two basic ways: paired comparisons and card sorting. Paired comparison is done by presenting pairs of factors and requesting that subjects rank all the matrix entries. The number of paired comparisons for an experiment with n factors is n(n-1)/2. Paired comparisons are suitable for questionnaire data collection because the directions are easy to follow. One disadvantage of paired comparisons is that as subjects become fatigued with the task, the quality of their responses declines [Green and Srinivasan 1978]. They tend to use convenient patterns to complete the task rather than thoughtfully indicating real preferences.

Card sorting has been found to shorten task time and thus avoid some of the response degradation due to fatigue. In card sorting, profiles of the product (in this case, the modeling environment), made up of the combinations of factors at various levels, are presented on individual cards. Subjects are then asked to sort the cards in order of preference.
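The relative scale of the two collection methods can be illustrated with a short sketch. This enumeration is hypothetical (it applies the n(n-1)/2 formula to the nine profiles of one sort, not to the factors), and is not taken from the instrument itself.

```python
# Enumerating the nine profiles in one sort (two three-level factors) and
# the n(n-1)/2 paired comparisons that ranking all pairs of those profiles
# would require. Illustrative only.
from itertools import combinations, product

levels = ("low", "mid", "high")
profiles = list(product(levels, levels))   # (data control, model control)
print(len(profiles))                       # 9 cards per decision-type sort

n = len(profiles)
pairs = n * (n - 1) // 2
print(pairs)                               # 36 pairwise judgments
assert len(list(combinations(profiles, 2))) == pairs
```

Sorting nine cards is plainly a lighter task than 36 pairwise judgments, which is the fatigue argument made above.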
Card sorting is best administered by a researcher rather than through self-administration [Green and Srinivasan 1978]. The method used in this research is card sorting, because the research setting makes it feasible and the multiple sorts are less tedious for the subjects than paired comparisons.

Analysis

Data collected using conjoint analysis can be analyzed with either an individual or a group focus. A focus on the individual means that each subject's data is analyzed individually [Darmon 1979]. Since the number of subjects is one, no statistical results are computed. The output is descriptive measures for each subject. This approach is taken when individual results are important. Alternatively, the data can be analyzed as a group. This approach allows for statistical tests [Green and Srinivasan 1978]. This research treats the subjects as a group. This approach is used when the research is less interested in individuals and more concerned with group tendencies.

Data collected in this way is analyzed by two main approaches: ordinary least squares (OLS) regression and monotonic analysis of variance (MONANOVA). MONANOVA is frequently the procedure of choice when subjects are treated as individuals. MONANOVA is a methodology and also the name of a monotonic analysis of variance computer program(1) developed by Kruskal [1964] that uses the methodology presented by Luce and Tukey [1964]. MONANOVA is appropriate when the dependent variable is ordinal and analysis of variance is excluded. This is commonly the case when cards are sorted and not rated. The major advantage of MONANOVA is the rich set of interpretations that can be obtained from the results of the program. There are two major disadvantages of MONANOVA. (1) The descriptive measures produced by MONANOVA can be misleading when subjects do lexicographical sorts. It does well, though, when the sorts are compensatory. (2) An assumption of MONANOVA is that there is no interaction between the factors.
Since no prior research has used the combination of variables described for this research, a false assumption of a lack of interactions could make MONANOVA results invalid. This research specifically hypothesizes the presence of interactions, so MONANOVA is not the appropriate analysis tool.

A more recent trend in conjoint analysis is toward the use of OLS regression to analyze the data [Cattin 1982]. This is possible when the cards are rated so that the dependent variable is ratio rather than ordinal data. (The dependent variable in this research was given a score which provided ratio values.) When the subjects are treated as a group, OLS provides the statistical tests not available with MONANOVA. Since the independent variables in this research are categorical, the form of regression used will be analysis of variance (ANOVA). The analysis tool will be the within-subject design available as an option of the MANOVA subroutine in the Statistical Package for the Social Sciences (SPSS) [SPSS 1986, pp. 486-567].

(1) When reference is made to the computer program, the name will be underlined.

Factor Description

Combinations of data and modeling capabilities and controls were used to describe the modeling environment provided by the DSS. To that end, it was assumed that the DSS has a database and associated DBMS and that some form of modeling capabilities exists. The modeling environment was manipulated by introducing controls (or limitations) on the data and modeling aspects of the DSS.

Data control was manipulated by varying the user's ability to extract data in a form that can be used directly as input. Data control could be introduced along the entire spectrum from very high data control to very low. The levels were created by introducing greater limitations (higher controls) on the base level condition.
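Since ANOVA here amounts to OLS on dummy-coded categorical factors, the coding step can be sketched as follows. The level names come from this study; the coding scheme, baseline choice and example profiles are otherwise invented for illustration.

```python
# Sketch of dummy coding categorical control levels so that OLS regression
# on the resulting design matrix is equivalent to ANOVA. The choice of
# "low" as the baseline level is an arbitrary illustration.
def dummy_code(level, levels=("low", "mid", "high")):
    """Indicator columns for a level, with the first level as baseline."""
    return [1 if level == ref else 0 for ref in levels[1:]]

# Two example profiles: (data control, model control)
profiles = [("low", "mid"), ("high", "high")]
design_rows = [[1] + dummy_code(d) + dummy_code(m) for d, m in profiles]
print(design_rows)  # intercept + 2 data-control + 2 model-control indicators
```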
This research used three levels of data control, which are defined as follows:

low: Users have access to predefined data extractions and may request new data extractions be written for them by the programming staff. They also have the tools (such as retrieval languages) to define new data extractions; i.e., the base level condition with no control (limitations) imposed.

mid: Users may call predefined data extractions and may also request that new data extractions be written. (Control is introduced by removing the ability to define new extractions personally.)

high: Predefined data extractions are available and they are the only data extractions permitted. (Control is increased by removing the ability to request new extractions from the programming staff.)

Manual data entry was considered possible at all levels of control as an alternative to using a data extraction as input.

Similarly, model control was studied by varying the control over model access, alteration and creation of new models. Control over models could be implemented along the entire spectrum from very high to very low control. As with data control, the mid and high levels were created by adding limitations to the base level capabilities. This research used three levels of model control, as defined below:

low: Users have access to the model library (of predefined models) and may request new or altered models which will be written by modeling specialists. Also, they will have the tools (such as a modeling language or model modules) to alter existing models or create new models. (This is the base level condition without limitations.)

mid: Users have access to all the models available in the library and also may request alterations of existing models or new models which will be written by modeling specialists. (Control is introduced by removing the ability to personally create new models or alter existing models.)

high: A library of models is available for use.
Users may select the model but may not alter it. (The added limitation is removal of the ability to request new or altered models from the specialists.)

Storage

A secondary issue associated with control over process is control over the retention of models that users have created or altered from existing models in the model base. The characteristics of MMS suggest that users be permitted to create new models and retain personalized (created or altered) models. An organization could choose to introduce an MMS (to overcome model proliferation) that will restrict the ability to retain any new or altered models. In particular, the organization may prevent any models from entering the central model base except those tested and approved by specialists. These organizations would establish formal procedures for the development and approval of new models. As with the other controls, control over model storage could also be implemented at various levels. This research defines three such levels:

low: New or altered models may be retained for personal use (as in mid) and may also be made available to other users through the central computing environment.

mid: New or altered models may be retained for personal use only.

high: New or altered models may not be saved (stored or retained) by any electronic means (such as floppy disk or hard disk), inside or outside the central computing environment.

Pretest

An early pretest using the three controls (data, process and storage) showed that a full factorial design is not possible because some of the combinations result in nonsensical situations. (See Table 4.2.) In particular, when model control is high, alteration of existing models and creation of new models is not permitted. Low and mid-level storage control permits retention of models. It does not make sense to present a profile where models cannot be created or altered but new or altered models can be retained.
Another nonsense combination occurs when mid-level model control, which permits new models to be written by specialists, is paired with high control over storage, which prohibits storage. Exclusion of the nonsense combinations leaves a fractional factorial design. The problems associated with this fractional factorial design are the inability to examine interactions between model control and storage control and the loss of ANOVA as an analysis tool.

Table 4.2 Model Creation and Storage

                        Storage
                 low        mid        high
Model     low
Creation  mid                          nonsense
          high   nonsense   nonsense

The pretest of the data collection instrument demonstrated that user reaction to storage of new and altered models was highly associated with model control. Because of the close association between the need to create models and retain them for further use, the storage variable is incorporated in the model control variable. This was considered a better approach than eliminating storage of new and altered models from the profiles. It was felt that storage should be specified rather than leaving it to the subjects to use personal assumptions. The combined variable will be referred to simply as MODEL CONTROL. The levels of model control are created by pairing like levels of model and storage control. Stated briefly:

Level   Model                              Storage
high    Only predefined models             No storage
mid     Predefined or request new          Personal use only
low     Predefined, request, create        Personal or shared use
        new or alter

Decision type could be incorporated as a third factor, but this would increase the size of the card sorting task to 27 (3^3) cards. Instead, decision type is introduced by requesting three separate sorts, one for each decision type. One advantage of this approach is that it presents the subjects with a more manageable task. Secondly, it allows the design to incorporate descriptions of the decision types.
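The screening of nonsense combinations described above can be sketched as a filter over the full factorial; the predicate simply encodes the two rules behind Table 4.2.

```python
# Sketch of excluding the nonsense model-control x storage-control
# combinations noted above, leaving the fractional factorial design.
from itertools import product

levels = ("low", "mid", "high")

def nonsense(model_control, storage_control):
    # High model control forbids creating or altering models, so retaining
    # them (low or mid storage control) is contradictory.
    if model_control == "high" and storage_control != "high":
        return True
    # Mid model control lets specialists write new models, but high storage
    # control forbids retaining any new or altered model.
    if model_control == "mid" and storage_control == "high":
        return True
    return False

valid = [(m, s) for m, s in product(levels, levels)
         if not nonsense(m, s)]
print(len(valid))   # 6 of the 9 cells survive
```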
This allows the subjects to focus on one decision type per sort rather than trying to compare across decision types in one sorting task.

Data Collection Instrument

The data collection instrument included: (1) a three-page written description of the task and background information, (2) a one-page description of the card contents, (3) for each decision setting (Strategic, Managerial, and Operational), (a) a one-page written description of the implications of the decision setting attached by paper clip to (b) a packet of nine cards to be sorted and rated, and (4) a questionnaire for demographic information and model use on the job. The three-page description, the decision settings and card contents are given in the Appendix. The one-page description of the card contents and a sample card are given in Exhibit 1.

The order of the tasks was randomized. (Three of the six possible orders were used. Each decision type occurred as the first, second and third sort.) A slip of paper was attached to all three sets of tasks indicating the order in which the tasks were to be performed. The cards were attached to the decision setting description with paper clips. To further assist in identification of the three sorting tasks, the description sheets, cards and questionnaires were color coded. The Strategic material was yellow, the Managerial was blue and the Operational was green. There was no procedure used to test for or assist color-blind subjects.

The data collection instrument was tested first by administering the three-page written description and the card sorting tasks to two classes of MBA students. Based on the results of the pretest, the card sort was modified, as described, by collapsing model control and storage control into one variable. Minor wording changes were made to the written description. A second pretest of the revised task was done using MIS doctoral students.
For both pretests, the correlations between the card sorting and the self-reported ranking from the questionnaires provided a satisfactory measure of validity for the sorting tasks.

The subjects were given an opportunity to read the three-page description and ask questions prior to beginning the sorting tasks. Few questions were raised. The one-page description was given to the subjects so they could see what would appear on the cards. The control levels were created by adding capabilities above the high control level; i.e., high data control profiles had USE EXISTING, mid-level data control profiles had USE EXISTING and REQUEST NEW, and low data control profiles had USE EXISTING, REQUEST NEW, and DEFINE NEW. Thus, the profiles were made up of some or all of the capabilities. The one-page description permitted the subjects an easy means of reminding themselves what each term represented. Model storage only appeared on the cards when the profile included the ability to create new models or alter existing models. The subjects were told they could ask questions at any point during the exercise. There were few requests for assistance.

Exhibit 1
One-Page Description Received by Subjects

Data extraction will appear on the cards as SOME or ALL of:

DEFINE NEW   - You may define new data extractions.
REQUEST NEW  - You may request new data extractions which must be
               written by programmers.
USE EXISTING - Only existing data extractions are available.

Model access will appear on the cards as SOME or ALL of:

CREATE NEW   - You may write new models through the use of modeling
               packages, or alter existing models.
REQUEST NEW  - You may request new models which must be written by the
               modeling specialists.
USE EXISTING - Models may be used as is.

Model storage will appear on the cards when CREATE NEW and/or REQUEST NEW
occur as:

PERSONAL USE - New or altered models may be stored for personal use.
SHARED USE   - New or altered models may be stored for shared use.
You may also access models stored by other users.

An example of one of the cards is given below. The letter M is for record-keeping purposes and should not be used to assist you in sorting the cards.

M
Data extraction:  REQUEST NEW
                  USE EXISTING
Model access:     CREATE NEW
                  REQUEST NEW
                  USE EXISTING
Storage:          PERSONAL USE
                  SHARED USE

Procedure

In order to familiarize the subjects with the sorting task, they participated in a practice sort using eight cards with three factors at two levels each. Each card described a restaurant. The factors were distance (5 min or 15 min walk), food quality (excellent or fair) and price ($4.95 or $6.50). The subjects were led through the recommended procedure of sorting the cards into three or four stacks according to preference, and then sorting each sub-stack. Finally, the stacks were combined. The subjects were asked to rate each restaurant between 100 and 0. A score of 100 was described as an ideal restaurant and one with a score of 0 as absolutely unacceptable. The subjects were told they did not have to assign scores of either 100 or 0 and that ties were permitted. An opportunity was given to ask questions.

As part of the pretests, time was measured in order to test whether the order in which the three sorting tasks were completed influenced the total time needed to complete the tasks. Each of the three task orders was used by approximately one third of the subjects. The subjects were also told to perform the tasks in the order given on the time slip. The subjects were asked to record the beginning and ending time for each task. The time recorded was to reflect the total time from the beginning of the decision setting until they were completed with the entire decision setting; i.e., the time recorded included reading, sorting, and responding to the questionnaire. The conclusions that were drawn from analysis of the times measured during the pretest are:

1.
There is a learning effect that reduces the time required for each subsequent task.

2. The order of the tasks does not appear to affect the total time taken, except perhaps when there is a natural ordering to the tasks.

3. The times taken to complete the three sorting tasks were not significantly different.

The implications from the pretests are that the tasks can be completed in various orders without severely affecting the total time required to complete the task. Minor refinements in the task description and the layouts on the cards were made in response to comments from the pretest subjects. The first pretest was analyzed using MONANOVA on individuals. The second pretest was analyzed with ANOVA on the group data. Both pretests provided a measure of support for the research hypotheses.

Subjects

Ideal subjects for this study would be familiar with DSS environments. These subjects would be practicing managers, preferably in upper-level management positions. Unlike surveys that can be completed without a researcher present, it is best to explain and demonstrate the sorting tasks for conjoint analysis to the subjects. Fortunately, a subject group existed that did not require compromising the subjects' business experience for the sake of group administration.

The subjects for this study were drawn from a pool of executive MBA students enrolled in the Advanced Management Program at Michigan State University. They were chosen in part for convenience in data collection, but also because they are practicing managers. This provided for the collection of a large quantity of data in a short period of time without sacrificing the preference for actual managers as subjects.

The data collection procedure differed slightly for the two classes. The difference in data collection process was necessitated by the differing requirements of the class instructors. For both groups, the practice sort with restaurant data was used to illustrate the sorting procedure.
One class (the second year students) received the questionnaire and task description information and was asked to complete the questionnaire and read the task description prior to class time the next week. During class time the second week, they were given the card sorting task instructions. They completed the card sorting during class time. The second class (the first year students) received all the material in one evening and were given both verbal and written instructions. The second class was asked to complete the questionnaire and card sorting and return all the material the next week.

There were advantages and disadvantages to each method. An advantage of completing the card sort during class time was the greater return rate. Twenty-eight subjects out of thirty-nine participants (72%) provided usable responses to the card sorting. The disadvantage of this method was that the subjects were highly motivated to minimize time spent on the task rather than maximizing the value of their responses. (Both groups were in evening classes and the subjects were all employed full time. They were permitted to leave when the task was finished.) Many of the discarded responses were sorted without the rating marked on the cards or without all of the tasks completed. If a subject neglected to rate one of the three sets, the subject's entire response set was omitted from the data analysis computations.

An advantage of completing the material outside of class (first year students) was that the subjects could choose when to do the exercise, and hopefully avoid the time pressures present in the class setting. This was expected to provide more thoughtful responses to the card sorting. The disadvantage of the method was the poor response rate. The material was given to the subjects near the end of the academic year, when course requirements and final examinations competed for the subjects' attention. Eleven
Eleven sets of material were returned out of the fifty distributed, a return rate of 22%. A follow-up letter with a mailing label and return postage was sent to the non-respondents to encourage completion of the task. One additional set was returned.

Typical Subject

It may be useful to present a "typical" subject based on the questionnaire responses. No statistical tests were made on the data collected from the demographic questionnaire; these comments are descriptive in nature, and phrases like "most likely" indicate modality of response rather than statistical significance. The questionnaire and details of the responses are in the Appendix. For pronoun convenience, the subject will be presumed to be male. Although the subject's sex was not a questionnaire item, the majority of subjects present during the data collection process were male. The subject could come from any functional area of the firm, but was more likely to be in an engineering-related position than any other. He has held his current position for about three years. He uses a microcomputer in his job from four to six hours per week and uses a mainframe or minicomputer three hours per week or less. This usage rate for the micro is the same as or just slightly more than one year ago, and somewhat less than a year ago for mini- and mainframe computers. The computer software package most frequently used on his microcomputer is for word processing or spreadsheets. The mini or mainframe software is most likely to be used for data retrieval or word processing. The subject is likely to be involved with strategic, managerial and operational decisions. He is not provided with a set of approved models to draw upon to assist in decision making. If models are used, he is just as likely to be involved in the development of the model as to use a model developed by a specialist. He is more likely to alter the model personally than to delegate the task to an assistant.
Manual input of data is the norm, so selection of data for input is not an issue. While output may be stored for later use, it will probably be reentered manually into the next model. The subject is moderately confident in the model results and has used the model twelve to twenty-seven times in the past six months. Furthermore, he is very likely to use the model again. The subject has stored over twenty-five models that were personally created and another dozen created by others. Lotus 1-2-3 is the predominant choice of modeling software.

Special Concerns

The subject characteristics just outlined and the difference in the data collection methods raise a number of concerns that need to be addressed prior to the data analysis. The first concern is that subjects had diverse experience with computers, and this experience could influence the responses given. To address the issue, SPSSx ANOVA runs were done using EXPERIENCE as an independent variable. Results showed that experience was not significant as a main effect or in any interaction. When subjects were considered by group (high experience vs. low experience), the significance patterns for the research variables were identical. Therefore, no reason was found to treat the subjects as though they were drawn from different populations, and experience is not included as a research variable. A second concern was the order in which the tasks were performed. An ANOVA using ORDER as a variable did not yield any significant main or interaction effects involving order. When subjects were considered by group (i.e., order of the tasks), the significance patterns for the research variables were identical. Therefore, no basis was found for concluding that the order of the tasks affected the results, and task order is not included as a research variable. The third concern was the data administration methodology.
An ANOVA using METHOD as a variable did yield one significant interaction, between method and data control. On closer examination, it was determined that the difference between the two methods was in the strength of the relationship rather than the direction. When separate analyses were done, the pattern of significant effects was constant across the two methods. Therefore, it was concluded that there was no basis for treating the classes as being drawn from separate populations.

Summary

This research uses three independent variables (decision type, data control and model control) and one dependent variable (user preference for the modeling environment). The hypotheses address the three main effects, the three two-way interactions, and the three-way interaction implied by the 3x3x3 design. Data were collected using conjoint analysis implemented with three card sorting tasks, one for each decision type. The subjects were drawn from an executive MBA program. A profile of a "typical" subject was drawn from the questionnaire. The next chapter presents the results of the data analysis.

Chapter 5. Results of Data Analysis

Purpose: To present the results of the data analyses and the validity test. Interpretations of these results will be made in Chapter 6.

This chapter presents the results of the statistical analysis used to test the research hypotheses. The hypotheses are repeated here for convenience.

H1: Change from strategic to managerial to operational decisions will be associated with monotonically increasing user preference for the modeling environment.

H2.1: Increasing data control will be associated with monotonically decreasing preference for the modeling environment.

H2.2: Increasing model control will be associated with monotonically decreasing preference for the modeling environment.
H3.1: The difference between user preference for the modeling environment under operational decisions and user preference under strategic decisions will be greater for high data control environments than for low data control environments.

H3.2: The difference between user preference for the modeling environment under operational decisions and user preference under strategic decisions will be greater for high model control environments than for low model control environments.

H4: There will be an interaction between data control and model control.

H5: The strength of the relationship between data control and user preference will not be equal to the strength of the relationship between model control and user preference, AND the direction of the inequality will differ across the decision types.

The statistical method is analysis of variance (ANOVA) implemented with the Statistical Package for the Social Sciences [SPSSx 1986]. A repeated measures design was used, so called because each subject provided a score for each cell of the research design. This necessitated implementing ANOVA within the MANOVA subroutine of SPSSx using the "within subjects" design. In the remainder of this chapter, references to ANOVA should be understood to mean ANOVA implemented with the MANOVA subroutine and the "within subjects" design. Thirty-eight subjects provided complete data sets; therefore n = 38 for all statistical tests unless specified otherwise. Significance is determined using alpha = .05 for all significance tests. The 3x3x3 design results in twenty-seven cells. The cell means representing user preference for the modeling environment are presented in Table 5.1, with standard deviations noted parenthetically.
Table 5.1
Mean User Preference (Standard Deviations in Parentheses)

Operational:
                          Model Control
                   Low             Mid             High
Data     Low    72.45 (32.12)   68.16 (23.57)   57.61 (24.86)
Control  Mid    66.18 (24.97)   67.21 (19.14)   57.32 (23.32)
         High   63.00 (25.34)   61.66 (28.15)   56.84 (35.74)

Managerial:
                          Model Control
                   Low             Mid             High
Data     Low    82.05 (26.83)   72.58 (23.56)   48.68 (21.80)
Control  Mid    70.95 (23.10)   63.71 (19.61)   37.42 (19.92)
         High   54.32 (21.29)   45.58 (25.37)   25.68 (26.05)

Strategic:
                          Model Control
                   Low             Mid             High
Data     Low    85.71 (24.18)   79.32 (18.17)   44.47 (24.07)
Control  Mid    70.24 (21.31)   63.45 (17.86)   29.89 (22.77)
         High   47.05 (23.43)   38.82 (25.67)   20.32 (27.64)

The effects reported with ANOVA should be addressed in decreasing order of interaction, so the research hypotheses will be discussed in the order H5 (three-way interaction), H3.1, H3.2 and H4 (two-way interactions), finishing with H1, H2.1 and H2.2 (main effects).

Effects

Three-way interaction

H5 addresses the three-way interaction. The ANOVA results are used to address the general question of whether the three-way interaction was significant. The results of the ANOVA are presented in Table 5.2. Significance is based on the averaged-F statistic for repeated measures available within ANOVA [SPSS 1981, p. 66].

Table 5.2
ANOVA Results

Effect                                          Significance of F
Data Control                                    .000*
Model Control                                   .000*
Decision Type                                   .000*
Data Control x Model Control                    .000*
Data Control x Decision Type                    .000*
Model Control x Decision Type                   .000* (p+ < .01)
Data Control x Model Control x Decision Type    .103

* Significant at alpha = .05
+ Geisser-Greenhouse correction (see footnote 1)

The three-way interaction is not significant. Therefore, the null hypothesis cannot be rejected in favor of H5.

Two-way interactions

Data Control by Decision Type, Model Control by Decision Type (see footnote 1)

1 The F-Max statistic available through MANOVA was used to consider violation of the assumption of equal cell variance.
When the assumption is violated, the critical F-value from the table is too small, resulting in significance values which are too low. The Geisser-Greenhouse correction adjusts the degrees of freedom used to test for significance [Keppel, 1982, pp. 469-471]. The Geisser-Greenhouse correction over-corrects, so the true significance falls between that provided by the averaged-F and that found using the correction. Both significance values are presented for those effects where the F-max statistic indicates the correction is necessary.

H3.1 and H3.2 are designed to address two of the two-way interactions, the interactions between the controls and decision type. The third two-way interaction (between model control and data control) is reflected in H4. The ANOVA results in Table 5.2 indicate that all three two-way interactions are significant. Two approaches were used to understand the nature of the interactions. First, additional ANOVAs were used to consider whether the pattern of effect significance was constant across decision types. Second, a difference score was computed to represent the difference in preference for each modeling environment under strategic and operational decisions. Table 5.3 summarizes the results of the ANOVAs by decision type.

Table 5.3
ANOVA by Decision Type

Effect               Strategic    Managerial    Operational
Data Control         .000*        .000*         .475
Model Control        .000*        .000*         .029* (.05 < p+ < .10)

Contrasts            Strategic    Managerial    Operational
Data:  Low to Mid    .000*        .000*         -
       Mid to High   .000*        .000*         -
Model: Low to Mid    .000*        .000*         .161
       Mid to High   .000*        .000*         .009*

* Significant at alpha = .05
+ Geisser-Greenhouse correction

The individual ANOVAs by decision type can be used to examine the interaction between decision type and each control type by considering the main effects of each control and the interaction between data control and model control within each decision type.
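The differing steepness of the preference decline across decision types can also be seen with simple arithmetic on the cell means in Table 5.1. The following Python sketch (illustrative only; it is not part of the original SPSS-X analysis) recomputes the marginal mean for each data control level, averaging over model control:

```python
# Cell means from Table 5.1; rows are data control (low, mid, high),
# columns are model control (low, mid, high).
operational = [
    [72.45, 68.16, 57.61],
    [66.18, 67.21, 57.32],
    [63.00, 61.66, 56.84],
]
strategic = [
    [85.71, 79.32, 44.47],
    [70.24, 63.45, 29.89],
    [47.05, 38.82, 20.32],
]

def data_control_marginals(cells):
    """Mean preference at each data control level, averaged over model control."""
    return [round(sum(row) / len(row), 2) for row in cells]

print(data_control_marginals(operational))  # [66.07, 63.57, 60.5]
print(data_control_marginals(strategic))    # [69.83, 54.53, 35.4]
```

Moving from low to high data control, marginal preference falls by roughly 5.6 points for operational decisions but by roughly 34.4 points for strategic decisions, consistent with the Data Control x Decision Type interaction.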
The pattern of effect significance is not constant across the decision types, as can be seen from Table 5.3. While the main effects for data control and model control are significant for strategic and managerial decisions, only model control is possibly significant (see footnote 2) under operational decisions. The nature of the interactions between each control and decision type (Data Control x Decision Type and Model Control x Decision Type) can be illustrated by plotting cell means by decision type. Figures 5.1 (a, b and c) illustrate the nine cell means under each decision type. The nature of the interaction between the controls and decision type lies in the strength of the relationship rather than the direction; i.e., increased control (data or model) is associated with decreased preference, but the decrease in preference is much greater for strategic decisions than for managerial, and less again for operational, to the point where the slope is not significantly different from zero. Trend analysis was done in ANOVA with the CONTRAST option. Both strategic and managerial decisions show significant differences between low and mid control and also between mid and high control, for both data and model control.

2 It is only "possibly" significant because the Geisser-Greenhouse correction indicates a significance level between .05 and .10. The "true" significance level lies between that indicated by the ANOVA (p = .029) and that computed with the Geisser-Greenhouse correction (.05 < p < .10).

[Figure 5.1 (a): Operational; (b): Managerial; (c): Strategic. User preference (0-100) plotted against data control (low, mid, high), with separate lines for low, mid and high model control.]

The trend analysis indicates that for operational decisions, only the change in model control from mid to high is significant.
The second approach to the two-way interactions is suggested by the wording of H3.1 and H3.2, which hypothesize the magnitude of the difference in preference across decision types. Hypotheses 3.1 and 3.2 are restated here:

H3.1: The difference between user preference for the modeling environment under operational decisions and user preference under strategic decisions will be greater for high data control environments than for low data control environments.

H3.2: The difference between user preference for the modeling environment under operational decisions and user preference under strategic decisions will be greater for high model control environments than for low model control environments.

To address H3.1 and H3.2, a difference score was computed as the score for a profile (a combination of data control and model control) under operational decisions minus the score for the same profile under strategic decisions. Each subject provided nine difference scores, one for each combination of control levels. The cell means and standard deviations of the difference scores are presented in Table 5.4. Six of the nine cells have positive means, indicating that the profile was more preferred under operational decisions than under strategic decisions. To determine the significance of the differences, Student's t-test was applied to the operational and strategic cell means in Table 5.1. The t-value is indicated parenthetically after the standard deviation. Five of the nine cells have significantly higher ratings under operational than under strategic decisions. Two (LL and LM) show significantly higher ratings under strategic decisions than under operational decisions.
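The t-values in Table 5.4 can be reproduced from the cell means and standard deviations in Table 5.1. The sketch below (Python, illustrative only; the original analysis used SPSS-X) treats the operational and strategic scores for a profile as two samples of 38 and computes an unpooled two-sample t statistic, which matches the tabled values:

```python
import math

def t_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """Unpooled two-sample t statistic computed from summary statistics."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Low data / low model cell: operational 72.45 (32.12), strategic 85.71 (24.18).
t_ll = t_from_summary(72.45, 32.12, 38, 85.71, 24.18, 38)
# Low data / mid model cell: operational 68.16 (23.57), strategic 79.32 (18.17).
t_lm = t_from_summary(68.16, 23.57, 38, 79.32, 18.17, 38)

print(round(t_ll, 2), round(t_lm, 2))  # -2.03 -2.31, as in Table 5.4
```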
Table 5.4
Cell Means, Standard Deviations and T-values of Differences Between Operational and Strategic Scores

                              Model Control
                    Low               Mid               High
Data     Low    -13.26 (35.31)    -11.16 (22.93)     13.14 (31.78)
Control          (t = -2.03*)      (t = -2.31*)      (t =  2.34*)
         Mid     -4.05 (27.53)      3.76 (21.93)     27.42 (30.40)
                 (t = -0.76)       (t =  0.89)       (t =  4.06*)
         High    15.95 (31.05)     22.84 (33.62)     36.53 (41.57)
                 (t =  2.85*)      (t =  3.70*)      (t =  4.98*)

* Significant at alpha = .05

H3.1 and H3.2 state that the amount by which the operational decision scores exceed the strategic decision scores will be larger for high control environments than for low control environments. This tendency can be seen in the larger means found in cells with both data control and model control at the mid level, or with either data control or model control at the high level. Figure 5.2 illustrates the cell means from Table 5.4. To test the significance of the relationship, ANOVA was performed on the difference scores. The results are presented in Table 5.5.

Table 5.5
ANOVA on Difference Scores

Effect                          Significance
Data Control                    .000*
Model Control                   .000* (p+ < .01)
Data Control x Model Control    .054

Contrasts          Data     Model
Low to Mid         .000*    .005*
Mid to High        .001*    .000*

* Significant at alpha = .05
+ Geisser-Greenhouse correction

The CONTRASTS option was used to test the significance of the differences between the levels of the data and model control factors because the hypotheses were directional (difference "larger" rather than not "equal"); see Table 5.5. Significant differences were found for all four contrasts, supporting the differences in the levels of the factors. The direction is shown in Figures 5.1 (a, b and c). Therefore, the null hypothesis can be rejected in favor of the alternate hypotheses, H3.1 and H3.2.

Data Control by Model Control

The interaction between data control and model control is statistically significant (see Table 5.2).
[Figure 5.2: Difference scores (operational minus strategic) plotted against data control, with separate lines for low, mid and high model control.]

[Figure 5.3: Full data set; user preference cell means for the data control x model control combinations.]

The nature of the interaction can be seen in Figure 5.3. In the presence of high model control, increasing data control has less negative impact on user preference than in the presence of low or mid-level model control. Similarly, in the presence of high data control, increasing model control has less negative impact on user preference than in the presence of low or mid-level data control.

Main effects

Hypotheses 1, 2.1 and 2.2 deal with the main effects for decision type and the controls, restated here:

H1: Change from strategic to managerial to operational decisions will be associated with monotonically increasing user preference for the modeling environment.

H2.1: Increasing data control will be associated with monotonically decreasing preference for the modeling environment.

H2.2: Increasing model control will be associated with monotonically decreasing preference for the modeling environment.

It is inappropriate to use the significance tests for main effects from ANOVA in the presence of significant interactions. Therefore, to address these hypotheses, we refer back to illustrations already provided.

Data Control and Model Control

Increased controls were hypothesized to be associated with decreased preference. The significant two-way interactions between the two control types and between each control type and decision type make any main effect of one control type conditional on the level of the other control type and the decision type. While statistical significance cannot be argued, there is no evidence to suggest that the direction of the relationships is positive rather than negative. The direction of the relationships is shown in Figures 5.1 (a, b and c) and by consideration of the cell means in Table 5.1.
Decision Type

Change from strategic to managerial to operational decisions was hypothesized as being associated with increasing preference for the modeling environment. Due to the presence of significant interactions, statistical significance cannot be argued. Figures 5.4 (a) through (i) present plots of the cell means across the decision types. Five of the nine graphs illustrate the direction hypothesized; the hypothesized direction is supported in the presence of high data control and/or high model control. While there is some support for the direction hypothesized, further research is required to address this question statistically.

[Figures 5.4 (a)-(i): user preference (0-100) plotted against decision type (strategic, managerial, operational), one panel for each combination of low, mid and high data control with low, mid and high model control.]

Validity Test

The data collection process was designed to measure the validity of the sorting process. Each subject was asked to complete a questionnaire soliciting the relative importance of data and model control (a score from 0 to 10) and the relative importance of each level of the factors (a score from 0 to 10). This allowed computation of a self-reported score which can be compared with the score the subject provided for the profiles in the sorting tasks. The highest possible score from the questionnaire is 100 (10 for the factor x 10 for the level); the lowest possible score is 0 (0 for both factor and level). This computation is compensatory in nature rather than lexicographic. Pearson's correlation was calculated between the self-reported scores from the questionnaire associated with each decision type and the ratings the subject placed on the cards. For example, subject number 23 responded to the managerial decision setting questionnaire with the values:

Data Extraction = 10:  low = 10, mid = 8, high = 5
Model Creation = 5:    low = 7, mid = 10, high = 0

The self-reported score for each profile is computed from the appropriate questionnaire responses. For example, the self-reported score for the mid-low profile (mid level on data control, low level on model control) is 115 (10 x 8 + 5 x 7). This subject's self-reported and on-card ratings are presented in Table 5.6; the correlation between them is .500.

Table 5.6
Subject 23 Self-reported and On-card Ratings

Profile (data, model)    Self-reported    On-card
low, low                 135              70
low, mid                 150              30
low, high                100              40
mid, low                 115               5
mid, mid                 130              50
mid, high                 80              40
high, low                 85              50
high, mid                100              70
high, high                50               0

Table 5.7 presents the statistics for the Pearson correlations for those subjects who were included in the hypothesis-testing data analysis. The number of subjects is less than the number included in the SPSS computations for three reasons: (1) the subject did not complete the portion of the questionnaire with the factor weights; (2) the subject gave all cards the same score; or (3) the subject's questionnaire responses gave equal self-reported scores to all profiles. (The Pearson correlation is computed as the covariance of the two variables divided by the product of their standard deviations; if either variance is zero, the correlation cannot be computed.)

Table 5.7
Pearson Correlations

           Strategic    Managerial    Operational
n             35            35            35
Mean        0.758         0.709         0.609
Std dev     0.300         0.312         0.385
Range       1.565         1.342         1.404

Two thirds of the correlations computed meet or exceed the critical value of 0.727 (see footnote 3). If the level of alpha is relaxed to .1, then seventy-five percent of the correlations exceed the critical value. This lends confidence to the sorting and rating method, at least to the extent that it is fairly consistent with the self-reported measures.

Summary

In summary, the research hypotheses were supported or not supported as follows:

H1 (main effect for Decision Type): Statistical significance cannot be assessed because of the presence of significant interactions. The direction hypothesized appears to be supported in the presence of high model control and/or high data control.

H2.1 (main effect for Data Control) and H2.2 (main effect for Model Control): Statistical significance cannot be assessed because of the presence of significant interactions. The direction hypothesized appears to be supported.

3 The test for significance of the Pearson correlation coefficient was made using the formula t = r * sqrt[(N - 2)/(1 - r^2)] [SPSS 1975, p.
290]. This statistic is distributed as t. The critical value of r (the correlation coefficient) can be obtained by setting the formula equal to the value obtained from the t-table. Using a two-tailed test with alpha = .05 and 7 (= 9 - 2) degrees of freedom, the critical t-value is 2.365. Solving for r, the critical value of r which indicates significant correlation is 0.727.

H3.1 (interaction between Data Control and Decision Type) and H3.2 (interaction between Model Control and Decision Type) were accepted on the basis of the statistical analysis.

H4 (interaction between Data Control and Model Control) was accepted on the basis of the statistical analysis.

H5 (interaction between Data Control, Model Control and Decision Type) was not supported.

The interpretation of these results as they relate to the research purpose is presented in Chapter 6.

Chapter 6. Discussion of Analysis Results

Purpose: To interpret the results of the statistical analyses.

This chapter discusses the implications of the statistical analyses. The research hypotheses are repeated here for convenience.

H1: Change from strategic to managerial to operational decisions will be associated with monotonically increasing user preference for the modeling environment.

H2.1: Increasing data control will be associated with monotonically decreasing preference for the modeling environment.

H2.2: Increasing model control will be associated with monotonically decreasing preference for the modeling environment.

H3.1: The difference between user preference for the modeling environment under operational decisions and user preference under strategic decisions will be greater for high data control environments than for low data control environments.
H3.2: The difference between user preference for the modeling environment under operational decisions and user preference under strategic decisions will be greater for high model control environments than for low model control environments.

H4: There will be an interaction between data control and model control.

H5: The strength of the relationship between data control and user preference will not be equal to the strength of the relationship between model control and user preference, AND the direction of the inequality will differ across the decision types.

Significance of Statistical Results

Main Effects

Hypothesis 1 addresses the main effect due to decision type. The hypothesized effect was that, for a fixed modeling environment, there would be a higher preference for the modeling environment when making operational decisions than when making managerial decisions, and, similarly, a higher preference when making managerial decisions than when making strategic decisions. Because of the presence of significant interactions, the strength of these effects is conditional on the levels of data control and model control; the tendency is more strongly supported in the presence of mid- or high-level control over data or models. However, the direction of the effect was as hypothesized. The significance of these results arises when managers share a single DSS. For managers with tasks of more than one decision type (strategic, managerial or operational), their preference for the DSS will be based on the tasks assisted by the DSS. A manager using the DSS for mostly operational decisions will have a higher preference (opinion) for the DSS than a manager making mostly managerial or strategic decisions. This effect needs to be considered in the design of controls for a DSS. The effect of the controls will vary according to the decisions being supported by the DSS.
The impact of imposing uniform control across the organization will not result in uniform reactions to the controls.

Hypotheses 2.1 and 2.2 address the main effects due to data control and model control. H2.1 means that if decision type and model control are fixed and data control is allowed to vary, users will have lower preference for modeling environments with high data control than for modeling environments with mid or low data control. Similarly, H2.2 means that if decision type and data control are fixed and model control is allowed to vary, users will have lower preference for modeling environments with high model control than for modeling environments with mid or low model control. Because of the presence of significant interactions, the strength of these effects is conditional on the level of the other control variable and the decision type.

Interaction Effects

Hypothesis 3.1 addresses the interaction between data control and decision type. This hypothesis was supported by the research data, as was Hypothesis 3.2, dealing with the interaction between model control and decision type. In the absence of interactions, we would expect a simple additive effect. For example, in the absence of interactions, and assuming linearity, the graphs of the nine cell means (see footnote 1) for the three decision types might look like Figures 6.1 (a), (b) and (c). The main effect for decision type is shown by the higher preference for each modeling environment under operational decisions than under managerial or strategic decisions. The additive nature of the effects can be seen by plotting low model control for all three decision types on one graph.

1 Representing all the possible combinations of low, mid and high control over data and models.
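The additive, no-interaction pattern can also be sketched numerically. In the fragment below (Python), the base preferences and control effects are invented solely to illustrate additivity; they are not estimates from the research data:

```python
# Hypothetical additive (no-interaction) model:
# preference = base(decision type) + effect(data control) + effect(model control).
decision_base = {"strategic": 60.0, "managerial": 70.0, "operational": 80.0}
data_effect = {"low": 0.0, "mid": -10.0, "high": -20.0}
model_effect = {"low": 0.0, "mid": -10.0, "high": -20.0}

def additive_preference(decision, data, model):
    return decision_base[decision] + data_effect[data] + model_effect[model]

# Under additivity, the drop from low to high data control is identical for
# every decision type, so the lines in a plot like Figure 6.1 (d) stay parallel.
for d in ("strategic", "managerial", "operational"):
    drop = additive_preference(d, "low", "low") - additive_preference(d, "high", "low")
    print(d, drop)  # each decision type shows the same drop of 20.0
```

The research data depart from this pattern: the slopes differ by decision type, which is precisely the hypothesized interaction.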
(See figure 6.1 d) The hypothesized interaction is that the slopes of the preference lines will differ across the decision types such that the slope of the preference line for a given level of model control for strategic decisions will be greater (more negative) than the slope of the preference line for the same level of model control under operational decisions. (Contrast Figures 6.1 d and e) Managerial decisions are expected to fall between operational and strategic. These hypotheses were supported by the ANOVA results and can be seen in Figures 6.2 (a,b,c) (which holds model control level constant across decision type.) The interaction between model control and decision type can be seen in Figures 6.3 (a,b,c) which reverses the roles of the control types in the diagram. The importance of the interaction between data control and decision type is that the presence of managerial or strategic decisions has an effect on the strength of the decrease in user preference when tighter controls are introduced. Preference for the DSS will decline much more with the introduction of tighter data control if the user Prefemoe 125 Figure 6.1 (a) Strategic 100 j - w ..................... I LOW Modd w ..................... I ___*__ .................. Mid Model 60 ..................... I Hid) m w . \ .............. I 40 . ,‘\\.\ ........ I 30 ........ \\- ..... I 20 ..... \ \.\.\* ...... 1O .............. \.\‘ ...... 0 T I T Low Mid Hugo Doto Control Figure 6.1 (b) Managerial 100 “I ...-.... m ..................... I Lu W m ..................... ‘, + 70 ' Mid Modd ...—..— m Hid! m 50 4O 30 20 10 fi—;‘ 126 Figure 6.1 (c) Operational Lowkbdd MidModd HigIModd Figure 6.1 ((1) Low Model Control Across Dec. Type 1m + 90 LowModI m + 70 mm _.__ 60 WM 50 I 40 30 20 1O 127 Figure 6.1 (e) Low Model Control Across Dec. Type 1CD __-._ 90 Low Med-I m +- 70 Mid Modd —+— 6° my. 
[Figure 6.2 (a): Low model control. Preference versus data control, one line per decision type (strategic, managerial, operational).]
[Figure 6.2 (b): Mid model control. Same axes and lines as (a).]
[Figure 6.2 (c): High model control. Same axes and lines as (a).]
[Figure 6.3 (a): Low data control. Preference versus model control, one line per decision type.]
[Figure 6.3 (b): Mid data control. Same axes and lines as (a).]
[Figure 6.3 (c): High data control. Same axes and lines as (a).]

makes managerial or strategic decisions than if the user makes only operational decisions. Similarly, user preference declines more rapidly for high model control if managerial or strategic decisions are made than if only operational decisions are made.

The interaction between model control and data control was significant. As can be seen in Figures 5.1 (a), (b) and (c), the nature of the interaction is a difference in the slopes of the preference lines. They are not differences in direction, but rather differences in strength. This means that the change from mid to high data control does not have the same effect on preference for all levels of model control. Similarly, the change from mid to high model control does not have the same effect on preference for all levels of data control.

Linearity

Before discussing the interaction between data control and model control, the question of linearity of the relationships needs to be addressed. Presuming for a moment that the relationship between preference and data control is linear and negative, the solid line in Figure 6.4 represents the "true" relationship between data control and preference. This research used a value between high and low control that was intended to be the mid-point of the spectrum.
The subjects may have interpreted the mid-level of data control to be closer to low (or high) control rather than the mid-point.

[Figure 6.4 (a): Actual versus observed relationship between data control and preference.]
[Figure 6.4 (b) and (c): Expected patterns of the cell means in the absence of interaction, with lines for low, mid and high model control.]

In Figure 6.4 (a), if the subjects interpreted mid-level control to be at M' (i.e., closer to High than to Low), then their preference for that control level would be at x. Plotting the three preferences as though the mid-level was the midpoint on the horizontal axis produces the broken line in Figure 6.4 (a). This pattern is similar to the plots seen of the research data. Treating the broken line as the line for low model control, and in the absence of interaction, the lines for mid and high model control should be piece-wise parallel to that for low data control (see Figure 6.4 (b)).

This same argument for the mid-level of model control may explain why the lines are not equidistant in Figures 5.1 (a, b, c). Combining the measurement for the two controls and assuming that the description of mid-level was interpreted as being closer to low than to high control for both controls, in the absence of interaction, the nine cell means should resemble Figure 6.4 (c).

Interpretation of the interaction between data control and model control

The preceding argument could be used as an explanation for the shape of the individual lines, but the lines are not piece-wise parallel. As can be seen in Figures 5.1 (b, c), the slope of the line for high model control is less than that for mid and low model control. Therefore, the interaction between data and model control can be expressed as follows: In the presence of high model control, increased data control has less negative impact on preference than a similar increase in data control in the presence of mid or low model control. Similarly: In the presence of high data control, increased model control has less negative impact on preference than a similar increase in model control in the presence of mid or low data control.
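Stated with numbers, these two statements describe a difference in slopes rather than a difference in direction. The cell means below are invented solely to exhibit the reported pattern; the observed means appear in Chapter 5.

```python
# Hypothetical 3x3 table of mean preferences showing the data-control x
# model-control interaction pattern (values invented, not the research data).
# Keys: model control level -> {data control level: mean preference}.
means = {
    "low model":  {"low": 90, "mid": 75, "high": 40},
    "mid model":  {"low": 80, "mid": 65, "high": 35},
    "high model": {"low": 50, "mid": 42, "high": 30},
}

# Preference declines with data control at every model-control level
# (same direction), but the mid-to-high penalty shrinks when model
# control is already high (different strength):
for model_level, row in means.items():
    penalty = row["mid"] - row["high"]
    print(model_level, penalty)
```

In this illustration the mid-to-high data-control penalty is 35 points under low model control but only 12 under high model control: once one resource is tightly restricted, further restricting the other costs less preference, matching the interpretation above.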
Defining data and model control for a DSS cannot be done in isolation. The restrictions placed on access to data will affect users' reactions to restrictions on model access. Where there is high control over one, preference does not decline as rapidly if control is increased on the other. This may be because the value of the second is decreased by the restrictions on the first. For example, if the data cannot be extracted in the form required, restrictions on modeling have less impact. The user will not need to modify a model if the data is not available. Likewise, if the user cannot write or modify a model, then restrictions on the ability to extract the data have less negative impact.

Three-way interaction

Hypothesis 4 addresses the three-way interaction between data control, model control and decision type. This interaction was not supported by the data. The lack of statistical significance may be due to the small sample size. Further research with a larger sample is required to determine whether a three-way interaction exists.

Implications

The research hypotheses have implications for the introduction of control into a DSS environment. The hypotheses demonstrate that the presence of controls or limitations on data or models will be associated with a decline in preference for the modeling environment comprised of the DSS, particularly when managerial or strategic decisions are being considered. The implication is that increased controls have the greatest impact on user preference for those decisions which require the greatest flexibility and are frequently associated with the broadest impact on the organization. The importance of these results is that organizations may choose to apply controls uniformly across and through the organization. If an organization chooses to implement tighter controls, the effect on users' preference will be stronger for users making managerial and strategic decisions than for those users making operational decisions only.
These users may become dissatisfied to the extent that they may avoid using the system.

The statistically nonsignificant results for operational decisions found in this research are probably the result of the rich data extraction and model base environment described. Operational decisions were well provided for even with the highest control levels. If the research were replicated with a less rich data extraction and model base (and there is no basis to assume the environments are rich in most organizations), then a significant effect would be expected for operational decisions also. The conclusion of this argument is: the less an organization provides its users in terms of model base and data extractions, the more sensitive user preference will be to limitations. Restrictions on the ability to specify data extractions will have a greater negative impact on user preference for the modeling environment in organizations that do not supply a wealth of data extractions. Likewise, restrictions on the ability to alter existing models and create new models will have a greater negative impact on user preference for the modeling environment in organizations that do not supply a wealth of predefined models.

This research has established that users have a higher preference for low-control environments. This result is important when an organization is considering changes in controls. These results are even more important when the computing environment is considered. It is important to consider the computing environment available because the type of computing environment has implications for the options users have for responding to increases in controls.

Computing Environments

Three types of computing environments will be described by their hardware: dumb terminals connected to a central computer, microcomputers networked to a mini or mainframe computer, and microcomputers as stand-alone devices.
The hardware making up the computing environment is important for two reasons: (1) the hardware configuration affects the ability to implement and enforce controls of the type studied in this research, and (2) the hardware configuration affects the possible actions a user could take in response to the introduced controls.

Dumb Terminal

When the DSS is resident on a centrally located computer and users access it through a dumb terminal, central administration and control of the data can follow well-established procedures for data access. Likewise, the model library can be centrally administered. Access to the models, update procedures for the models, and creation of new models can be monitored by a central administration. Data control and model control can be enforced with a minimum of difficulty. In the absence of alternative computing resources, users have little choice except to accept and use the DSS or reject it completely.

Networked Micro

When the DSS is resident centrally and users access it through networked microcomputers, centralized control (as given in the centralized environment previously described) can be implemented. Users now have choices beyond the "take it or leave it" condition given previously. The users have a device which is capable of use as a DSS itself. This means that rejecting the centralized DSS does not leave the users unsupported. Additionally, attempts to restrict data use to the central computer can be circumvented. If the file can be read, it can be downloaded. Secondly, there is software available which allows screen displays to be captured and written to a file2. Once the data is stored locally, the control and security of the data is in the hands of the users. Likewise, if access to centralized models is considered restrictive, the local computer can be used as an alternative for doing modeling.

Stand-alone Micro

Stand-alone microcomputers provide the least amount of scope for centralized control.
Data stored centrally must be either downloaded and transferred to the microcomputer (for instance, by writing the data onto a floppy disk for transfer to the micro) or reentered manually. Restricting data extractions, or failing to facilitate data extractions for transfer to the micro, would increase the likelihood that users will resort to manual reentry. (It is referred to as "reentry" because the data already exists in magnetic form.) Manual reentry increases the probability of introducing errors into the data. Either procedure encourages infrequent updates, so the currency of the data, as well as the accuracy, becomes suspect. Modeling is likewise done either at the local level or using a model library provided, for example, on floppy disks. As with the networked micro, the micro can be used to sidestep or avoid the centralized restrictions to create personal models.

Whenever a microcomputer is available, users may use the power of the micro to provide capabilities denied them centrally. Users will find the tools to accomplish the tasks they wish to perform. If the modeling packages or languages are not provided by the IS department, individuals or departments may choose to purchase software out of their discretionary budgets, or users may apply inappropriate (but convenient) tools to the task at hand3.

If an organization chooses to implement high controls over centralized data access and modeling, and further places restrictions or limitations on the accessibility or support for micro use, modeling may be driven underground. People are likely to continue to use what has been useful before, or may search out alternate support if the DSS is restrictive.

2 FANSI-CONSOLE by Hersey Micro-Consulting, Inc., Ann Arbor, MI.
3 Of people surveyed, many used Lotus 1-2-3 to do word processing.
Underground modeling using personal models will merely duplicate the current uncontrolled modeling that proponents of MMS wish to overcome.

The interaction between the control types has additional implications for the introduction of controls. The importance of this result for the introduction of controls through model management systems is that introducing high control over even one aspect of the modeling environment may reduce users' preference below a level they would consider minimal for system acceptance, regardless of the other controls present. The MIS literature has shown that user satisfaction has implications for user acceptance [Lucas 1975]. Dissatisfaction can lead to disuse of the system, particularly when use is voluntary, as is the case with DSS.

The interaction between each control and decision type suggests that special attention must be given to the type of decision assisted by the DSS. Users will be much more sensitive to restrictions on access to data and restrictions on the ability to alter existing models and create new models when strategic decisions are being made than for managerial and operational decisions. These results have implications for the introduction of controls into the modeling environment:

Careful consideration should be given to the data access needs of the affected users prior to the introduction of controls which would limit data access. This research has shown that the more restrictive the data controls, the less users will prefer the resulting modeling environment.

Careful consideration should be given to the model creation and alteration needs of the affected users prior to the introduction of controls which would limit model creation and alteration. This research has shown that the more restrictive the model controls, the less users will prefer the resulting modeling environment.

The full span of decision responsibilities should be considered.
Controls applied to users whose primary responsibilities are operational in nature will affect those users differently than other users with managerial or strategic decision responsibilities. This research has shown that users will react negatively (i.e., with a reduction in preference for the modeling environment) to the introduction of data controls or model controls. This reaction will be strongest for strategic decisions, less strong for managerial decisions, and less again for operational decisions. Some users will make decisions falling into two or three of these categories, so considering just their main responsibility will be misleading when predicting the user's reaction to the introduction of limitations or controls.

Implications for builders of MMS

If an MMS is designed to provide a measure of control in the modeling environment, the designer needs to be aware of the users' potential reactions to the introduced limitations. Just as it was found that one computer system could not meet all of an organization's needs, so a single MMS is unlikely to meet all the modeling needs of an organization. A careful matching of users' modeling and data needs will have to be weighed against the control requirements. DSS which are designed to meet operational needs can offer acceptable modeling environments with high data and model control. DSS designed to meet managerial and strategic decision needs will have to provide the data access and modeling flexibility required to make the DSS acceptable to the users. Since users may make decisions in more than one category, data control and model control may have to be adjusted to the needs rather than making blanket rulings by job classification or function.

The MMS literature has stressed the problems present in the modeling environments currently available in DSS. The focus of MMS needs to be on the introduction of greater capabilities rather than the need to introduce controls.
The problem of old data and error-laden data can be eased by facilitating data extraction and summarization. This can be done within the policies of the organization for data control. Reluctance to download data needs to be balanced with the need for access. Providing centralized modeling capabilities reduces the need to download sensitive data. Providing DSS users with education on the need to maintain local control can also reduce risks. The work done to date to implement MMS has had limited success for small modeling environments. The very nature of dynamic business problems demands that the introduction of capabilities through MMS not be accompanied by limitations on the user's ability to alter models or create new models.

Limitations

This study used user preference as a dependent variable. A person's prediction of preference is certainly not as strong a dependent variable as a measure of preference based on use of an actual modeling environment. This limitation was recognized at the outset of this study. In the absence of the resources to implement a variety of control environments, it was determined to be the best alternative for a dependent measure.

The sample is small. The thirty-eight subjects each provided twenty-seven values, for a total of 1026 data points. This data set has provided some preliminary support for the research hypotheses, but the small sample size does cause some reservations. Further research into this area will provide confirmation or expansion of the results presented here.

Future Research

Laboratory research on actual modeling environments would allow for the collection of a wider variety of dependent measures, such as user satisfaction, accuracy of decision, time required to make the decision, and error rate of system usage. In a micro environment, it would also be possible to monitor subjects side-stepping the limitations.
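The arithmetic behind the 1026 data points follows from the design: each subject rated the nine card environments under each of the three decision settings. A minimal sketch, using only the standard library:

```python
from itertools import product

# Full factorial design behind the 27 judgments per subject:
# 3 decision types x 3 data-control levels x 3 model-control levels.
decision_types = ("strategic", "managerial", "operational")
data_control_levels = ("low", "mid", "high")
model_control_levels = ("low", "mid", "high")

profiles = list(product(decision_types, data_control_levels, model_control_levels))
print(len(profiles))             # 27 judgments per subject

subjects = 38
print(subjects * len(profiles))  # 1026 preference scores in total
```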
Field research should be developed to study current attempts to introduce controls, to study user reactions to the controls, and to study the effectiveness of the controls. These research paths focus on user-oriented variables. There is still major technological development required to obtain a working MMS capable of meeting the theoretical definitions of an MMS. Current research combining artificial intelligence with DSS may provide a means of overcoming some of the present shortcomings of DSS [Goul, Shane, and Tonge 1988].

APPENDIX I

Data Collection Materials

Letter to subjects:

Dear Study Participant:

This research is designed to collect information regarding the use of computers to aid in making decisions. Attached is a form asking for your consent to take part in this research, a questionnaire, and a three-page description of terms that you will need for the second part of the research. The questionnaire asks questions about your use of models, particularly computerized models. Models are quite similar to other computer programs except that they are beyond the routine transaction processing and report generation cycles. Some examples of models are: sales forecasts, budgets, production scheduling, and project analysis. You are building and using computerized models if you are using a software package (like Lotus 1-2-3 or VisiCalc) to consider the present value of projects or to analyze productivity. You may create the models personally, use models created for you by specialists, or use models that a colleague with a similar task has developed and shared with you. Please complete the questionnaire and bring it to class next week. The three-page write-up will be of maximum use if you read it shortly before class time next week. Your responses will be confidential. If you are willing to participate in this research, please sign the consent form attached.
If you would like to receive a report of the results of the study, please provide your mailing address on the consent form. If you have any questions regarding the survey or the study in general, please contact me.

Thank you,
Margaret Dawson
Doctoral Candidate
Department of Management
232 Eppley Center
Michigan State University
East Lansing, MI 48824

CONSENT FORM

This research is designed to explore your reactions to the type of computer support you prefer to assist in making decisions of various types. You will be asked to read some material, sort cards and rate them in order of preference, and respond to three short questionnaires and one longer questionnaire. Your responses will be anonymous, and you are under no obligation to complete the task and may stop at any time.

(Please sign to indicate your agreement to participate in this research)

If you would like to receive a report of the research results, please indicate your mailing address:

This research is being conducted as part of the requirements for my doctoral dissertation at Michigan State University. If you have any questions, please direct them to:

Margaret Dawson
Department of Management
232 Eppley Center
Michigan State University
East Lansing, MI 48824

TASK DESCRIPTION

This exercise in which you will participate involves sorting cards in order of preference. Each card will describe a computer modeling environment in which (1) the data extractions and (2) the model access and storage capabilities vary. Each of these two environmental situations is described below. This is followed by a description of the tasks you will be asked to do.

Models

A model is a representation of a business situation that is used to assist decision making. Some examples of models are spreadsheets for budgets, regression models for forecasts, and present value formulas for investment analysis. While modeling can be done manually using paper and pencil, many models use a computer.
Modeling can take place on microcomputers, minicomputers, or mainframe computers. Computer languages such as FORTRAN, BASIC and COBOL are used to represent the mathematical relationships in the model. Modeling "packages" such as Lotus 1-2-3, IFPS, or SPSS (a statistical package) make it much easier for decision makers to create their own models.

Data Extraction

Models frequently use data that is stored in the company's database. For example, forecasting models use actual sales from previous time periods. Data from the database can be made available by a computer program so that the person using the model does not need to enter the numbers manually. These programs will be referred to as data extractions.

Model Access and Storage

Models may be stored in a central model library for shared use. Models in the model library are accessed through a central computer. Storage devices such as a hard disk or floppy disk may be available for personal models which are not shared with others. You may use any models you have stored outside the central model library, and in addition you may be authorized to use some or all of the models in the central model library. This would permit you to use the same model again or perhaps to modify the model before using it again. For example, you may choose to change the formula for creating a sales forecast, or you may change the projected interest rate used in a model of next year's capital expenditures.

The Task

When completing this task, assume that you have a microcomputer which is connected to the central computer of the business. This will provide you with access to the model library in the central computer and data extractions from the central database(s). Also, assume that you have a position within a firm that requires you to make decisions of various types as a normal part of your job. You will be given three packets of cards. Each packet of cards is accompanied by a "decision setting description" and a short questionnaire.
The decision setting describes the type of decision being made and the implications that decision has on the use of models to assist in making the decision. Each of the cards contains the description of a modeling environment, including the descriptions of data extractions and model access and storage.

Three different data extraction environments are described. In the first type, the only data extraction programs you may use are those already written by the programming staff. In the second type, you may use the existing data extraction programs or request new data extraction programs, which will be written by the programming staff. In the third type, you may use existing data extractions, request new data extractions from the programming staff, or write new data extractions yourself. Manual input of data is permitted under all three environments.

Similarly, there are three model access and storage environments described. Your ability to alter existing models or design new models will vary in the environments described. In the first type, you may design new models using modeling packages available through the central computer, or alter existing models, or request new models which will be developed by modeling specialists. In the second type, you may use existing models, but the only means of obtaining new models is to request them through the modeling specialists. In the third type of environment, the only models available are those already defined; i.e., there is no provision for introducing new or altered models.

For each model access type there is an associated model storage type. When models are written for you by specialists, they will be stored for your personal use. When you are permitted to create new models or alter existing models, they can be stored for your personal use and you can permit others to use them. Likewise, you will have access to models your colleagues have created.
You will be given a page that presents the information in a form that will be useful to you during the sorting of the cards. You are asked to read the decision setting description and then sort the cards to indicate your preference for a modeling environment for that decision setting. The cards should be sorted in order of preference so that the top card is the most preferred modeling environment for that decision setting. After sorting the cards in order of preference, score each card according to the following criteria. A score of 100 represents a modeling environment you would consider ideal FOR THAT DECISION TYPE. A score of 0 represents a modeling environment you would consider completely unacceptable FOR THAT DECISION TYPE. You do not need to assign scores of either 0 or 100, and you may have more than one card with the same score; i.e., ties are permitted. Write the score for each card on the line provided in the upper right hand corner of the card. After you have finished scoring the cards, complete the questionnaire associated with that decision setting. This task will be repeated three times, once for each of three decision settings. The order in which the decision settings should be considered is indicated. PLEASE DO THE TASKS IN THAT ORDER.

Strategic Planning Setting

Strategic planning is the process of determining the organization's goals and objectives. Strategic decisions are characterized as having long-range implications for the firm. Examples of strategic decisions include analysis of merger opportunities, introduction of new product lines, and issuing bonds or new stock. When a computer model is used to assist the decision maker, the person using the model is likely to be the person responsible for the decision. Due to the nature of strategic decisions, it is highly unlikely that a model will exist in the model library which can be used as it is defined to assist the decision process.
If there is not a model which can be used exactly as it is, then there is a 20 percent chance that a model will exist which could be altered to suit the decision. It is highly unlikely that the data needed for the model can be obtained by using an existing data extraction.

Managerial Decision Setting

Managerial decisions are characterized as the process of obtaining and using resources effectively and efficiently in the accomplishment of the organization's objectives. Examples of managerial decisions include selection of suppliers, analysis of past operations, and planning for human resources. The results obtained from the models may be used in preparing recommendations and reports to supervisors as well as assisting the user. Due to the nature of managerial decisions, there is a 50/50 chance that a model will exist in the model library which can be used as it is defined to assist the decision process. If there is not a model in the library that can be used exactly as it is, then there is a 50/50 chance that a model will exist which could be altered to suit the decision. There is a 50/50 chance that the data can be extracted by an existing data extraction.

Operational Decision Setting

Operational decisions are characterized as implementation of strategic plans through allocation of resources. Examples of operational decisions include scheduling the week's production, hiring new employees, collection of data and preparation of routine reports, and placement of advertisements in local publications. The results of models used for operational decisions are frequently used as a guide for implementing decisions. The results may be used by the decision makers as well as being used to report to supervisors. Due to the nature of operational decisions, it is highly likely that a model will exist in the model library which can be used as it is defined to assist the decision process.
Likewise, it is highly likely that a model will exist which could be altered to suit the decision. It is almost certain that the data can be extracted by an existing data extraction.

Card Descriptions

This is presented to show card contents. The actual cards were larger than the space allotted here. (The original two-column card layout is flattened into one list here; each card also carried a score line in its upper right corner.)

Card A. Data extraction: DEFINE NEW, REQUEST NEW, USE EXISTING. Model access: REQUEST NEW, USE EXISTING. Storage: PERSONAL USE.
Card B. Data extraction: REQUEST NEW, USE EXISTING. Model access: CREATE NEW, REQUEST NEW, USE EXISTING. Storage: PERSONAL USE, SHARED USE.
Card C. Data extraction: USE EXISTING. Model access and storage: USE EXISTING.
Card D. Data extraction: DEFINE NEW, REQUEST NEW, USE EXISTING. Model access: CREATE NEW, REQUEST NEW, USE EXISTING. Storage: PERSONAL USE, SHARED USE.
Card E. Data extraction: USE EXISTING. Model access: CREATE NEW, REQUEST NEW, USE EXISTING. Storage: PERSONAL USE, SHARED USE.
Card F. Data extraction: REQUEST NEW, USE EXISTING. Model access: REQUEST NEW, USE EXISTING. Storage: PERSONAL USE.
Card G. Data extraction: DEFINE NEW, REQUEST NEW, USE EXISTING. Model access and storage: USE EXISTING.
Card H. Data extraction: USE EXISTING. Model access: REQUEST NEW, USE EXISTING. Storage: PERSONAL USE.
Card I. Data extraction: REQUEST NEW, USE EXISTING. Model access and storage: USE EXISTING.

(Similar questionnaires were used for the Managerial and Operational settings. The originals were printed on one page.)

TO BE COMPLETED AFTER SORTING STRATEGIC SETTING

After sorting the cards, please indicate the relative importance you placed on data extraction and model access and storage. Score the more critical factor as 10. Give the other factor a score from 1 to 10 to indicate its relative importance. For example, if "Data extraction" is the most important factor, it is scored as 10. If "Model access and storage" is half as important, it would receive a score of 5.

Data extraction _____
Model access and storage _____

Please indicate the relative importance you placed on the value of each of the factor values.
Write a 10 by the factor value for data extraction which is most preferred. Give the other two data extraction values a score from 1 to 10 to indicate their relative importance. For example, if "Only existing data extractions" is the most preferred value for data extraction, give it a score of 10. If the ability to define new extractions is half as important, it would receive a score of 5.

Data extraction:
_____ You may define new data extractions. You may also use existing extractions plus request new extractions through programmers.
_____ You may request new data extractions, which must be written by programmers. You may also use existing extractions.
_____ Only existing data extractions available.

Write a 10 by the factor value for model access and storage which is most preferred. Give the other two values a score from 1 to 10 to indicate their relative importance.

Model Access and storage:
_____ You may write new models through the use of modeling packages, use models as is, or request new models from the modeling specialists. Models may be stored for either personal or shared use.
_____ You may request new models, which must be written by the modeling specialists. You may also use existing models as is. Models written for you by specialists will be stored for your personal use.
_____ Models must be used as is.

Are there factors you consider AS important or MORE important than the three used in this task? If yes, please indicate what those factors are and their importance in comparison with data extraction and model access and storage.

(Reduced to comply with margin requirements)

Functional area: _____
Job description or major responsibilities: _____
How long in this position: _____

PART I. This section concerns your use of computers as a part of your professional life. In the past three months, how much time did you use a computer? Please place one mark (X) in each column.

                              Microcomputer    Mini or Mainframe
Not at all                         __                 __
0-3 hours per week                 __                 __
4-6 hours per week                 __                 __
7-9 hours per week                 __                 __
10-12 hours per week               __                 __
more than 12 hours per week        __                 __

How does this amount compare with your use of computers?
Please circle the number that best describes your response.

    Mini or Mainframe:   1   2   3   4   5   6   7
    Microcomputer:       1   2   3   4   5   6   7

What types of software do you use? Please check all that apply according to whether you use them FREQUENTLY (at least once a week) or OCCASIONALLY (less than once a week).

                        Microcomputer             Mini or Mainframe
                   OCCASIONALLY  FREQUENTLY   OCCASIONALLY  FREQUENTLY
Word processing        ____         ____          ____         ____
Spreadsheets           ____         ____          ____         ____
Programming            ____         ____          ____         ____
Data retrieval         ____         ____          ____         ____
Statistical            ____         ____          ____         ____
Electronic mail        ____         ____          ____         ____

PART II

Decisions can be classified as strategic, managerial and operational. Strategic decisions involve developing the organization's goals and objectives. Examples of strategic decisions would include consideration of a new product line, determination of a new stock or bond issue, and selection of a location for a new manufacturing division.

Identify a decision you are familiar with, make or have significant influence over that you would classify as strategic in nature, preferably one that is aided by computer models. Use this decision as a basis for your answers to the following questions.

Regarding the computerized model or models you use for the strategic decision you described, mark (X) all that apply:
1. Do not use a model for this decision
2. Use a model developed by a specialist tailored to my specifications
3. Use a model developed by someone else for their use
4. Use a model developed by someone else and altered for my use
5. Use a model developed personally with assistance from a specialist
6. Use a model developed personally without assistance
7. Changes to the model are made by me
8. Changes to the model are made by an assistant at my direction

Regarding the data used in the model or models:
9. Data is input manually
10. Data is transferred directly from the corporate database without manual input

11. If you have used the computerized model more than once, how often have you used the model in the past six months?

12. Circle the number which best describes your confidence in the model results:

    not                                            very
    confident              neutral              confident
        1      2      3       4      5      6       7

13. Circle the number which best indicates how likely you are to use the model again for the same or a similar decision:

    very                                            very
    unlikely   unlikely   neutral    likely       likely
        1      2      3       4      5      6       7

    Why are you likely or unlikely to use the model again?

14. Can you select and/or vary the data used in the model? (Yes or No)

15. Can the output generated by the model be stored for later use?
(Yes or No)

    If yes, when you want to use the output in another model, mark (X) which applies:
    ____ it must be re-entered manually.
    ____ it can be used directly.

Managerial decisions include resource acquisition and allocation within the goals of the organization. Examples of managerial decisions would include selection of a sales promotion, selection of new equipment, and selection of a major vendor.

Identify a decision you are familiar with, make or have significant influence over that you would classify as managerial in nature, preferably one that is aided by computer models. Use this decision as a basis for your answers to the following questions.

Describe the decision briefly:

Was a manual model used prior to the use of a computer model? If yes, please describe the model:

If a model (computerized or manual) is not presently used to support the decision, is there some reason to prevent the development or introduction of a model?

Regarding the computerized model or models you use for the managerial decision you described, mark (X) all that apply:
1. Do not use a model for this decision
2. Use a model developed by a specialist tailored to my specifications
3. Use a model developed by someone else for their use
4. Use a model developed by someone else and altered for my use
5. Use a model developed personally with assistance from a specialist
6. Use a model developed personally without assistance
7. Changes to the model are made by me
8. Changes to the model are made by an assistant at my direction

Regarding the data used in the model or models:
9. Data is input manually
10. Data is transferred directly from the corporate database without manual input

11. If you have used the computerized model more than once, how often have you used the model in the past six months?

12. Circle the number which best describes your confidence in the model results:

    not                                            very
    confident              neutral              confident
        1      2      3       4      5      6       7

13. Circle the number which best indicates how likely you are to use the model again for the same or a similar decision:

    very                                            very
    unlikely   unlikely   neutral    likely       likely
        1      2      3       4      5      6       7

    Why are you likely or unlikely to use the model again?

14.
Can you select and/or vary the data used in the model? (Yes or No)

15. Can the output generated by the model be stored for later use? (Yes or No)

    If yes, when you want to use the output in another model, mark (X) which applies:
    ____ it must be re-entered manually.
    ____ it can be used directly.

Operational decisions include the implementation of the plans and monitoring of the resources at the day-to-day level. Examples of operational decisions would include assigning jobs to employees, production scheduling, and ordering inventory.

Identify a decision you are familiar with, make or have significant influence over that you would classify as operational in nature, preferably one that is aided by computer models. Use this decision as a basis for your answers to the following questions.

Describe the decision briefly:

Was a manual model used prior to the use of a computer model? If yes, please describe the model:

If a model (computerized or manual) is not presently used to support the decision, is there some reason to prevent the development or introduction of a model?

Regarding the computerized model or models you use for the operational decision you described, mark (X) all that apply:
1. Do not use a model for this decision
2. Use a model developed by a specialist tailored to my specifications
3. Use a model developed by someone else for their use
4. Use a model developed by someone else and altered for my use
5. Use a model developed personally with assistance from a specialist
6. Use a model developed personally without assistance
7. Changes to the model are made by me
8. Changes to the model are made by an assistant at my direction

Regarding the data used in the model or models:
9. Data is input manually
10. Data is transferred directly from the corporate database without manual input

11. If you have used the computerized model more than once, how often have you used the model in the past six months?

12. Circle the number which best describes your confidence in the model results:

    not                                            very
    confident              neutral              confident
        1      2      3       4      5      6       7

13. Circle the number which best indicates how likely you are to use the model again for the same or a similar decision:

    very                                            very
    unlikely   unlikely   neutral    likely       likely
        1      2      3       4      5      6       7

    Why are you likely or unlikely to use the model again?

14. Can you select and/or vary the data used in the model?
(Yes or No)

15. Can the output generated by the model be stored for later use? (Yes or No)

    If yes, when you want to use the output in another model, mark (X) which applies:
    ____ it must be re-entered manually.
    ____ it can be used directly.

PART III

The following questions apply to the technical assistance available to you for the purchase and use of computers in general and modeling in particular. Please mark (X) all that apply. For example, if technical assistance is available for the selection of software and you have used that service, then you should mark as in the example.

Technical Assistance                                    Use   Do not use
Example: software selection                              X
Selection and purchase of new software                  ____     ____
Selection of existing software for use                  ____     ____
Selection and purchase of microcomputer                 ____     ____
Model development on mini or mainframe computer         ____     ____
Model development on microcomputer                      ____     ____
Data extraction from a corporate database for use
  in models                                             ____     ____

Circle the number that best indicates your general level of satisfaction with the technical assistance available to you in developing and using models.

    very                                                 very
    unsatisfied  unsatisfied  neutral  satisfied    satisfied
        1      2      3       4      5      6       7

The following questions apply to the training available to you for computers in general and modeling in particular. Please mark (X) all that apply. For example, if training in a given area is not available, then you should mark the line as in the example.

Training                                                Use   Do not use
General use of mini or mainframe computer               ____     ____
General use of microcomputer                            ____     ____
Programming languages such as BASIC or Pascal           ____     ____
Model development on mini or mainframe computer         ____     ____
Model development on microcomputer                      ____     ____
Other                                                   ____     ____

Circle the number that best indicates your general level of satisfaction with the training available to you in developing and using models.

    very                                                 very
    unsatisfied  unsatisfied  neutral  satisfied    satisfied
        1      2      3       4      5      6       7

PART IV

This section deals with the controls that may be used in your organization. A control may assist you for some decisions and hinder you for others. If the control is not used in your organization, mark X under "Not Used." For each control used in your organization, indicate the effect of the control on the ease of use of the models.
You may use the strategic, managerial and operational decisions from Part II of this survey as a reference point for answering these questions. For example, if your organization does not allow you to take floppy disks home and that hinders your ability to work on strategic decisions but does not hinder your managerial and operational decisions, you would mark the lines as in the example.

                  hinders        does not hinder        assists
                  greatly           or assist           greatly
Use the scale:       1      2      3      4      5      6      7

Controls                            Not Used  Strategic  Managerial  Operational
Example: I may not take floppy
  disks home                          ____       2          4           4
I may not take floppy disks home      ____      ____       ____        ____
I must use a password for access
  to the corporate computer           ____      ____       ____        ____
I must be given permission to
  access the data I need              ____      ____       ____        ____
I must ask programmers to write
  data extractions                    ____      ____       ____        ____
I cannot put data into the
  central database                    ____      ____       ____        ____
There is a technical review of
  models I create                     ____      ____       ____        ____
I am required to write
  documentation for models I create   ____      ____       ____        ____
I can use models from a set of
  approved models                     ____      ____       ____        ____
My department is charged for my
  central computer use                ____      ____       ____        ____
I am not permitted entry to the
  central computer room               ____      ____       ____        ____
I must get permission to access
  modeling software                   ____      ____       ____        ____
Other controls:

PART V

This section deals with collecting a count of models. For each modeling package or language you use, indicate the approximate number of models you have stored, whether on floppy disk or hard disk on a microcomputer, or on file on a mini or mainframe computer. For example, if you have 25 Lotus 1-2-3 spreadsheets on a microcomputer, some of which were created by a colleague, you would have a line like the example.

Package or                       Mini or       Number created   Number created
language               Micro     Mainframe         by you          by others
Example: Lotus 1-2-3     X         ____              19                6

(Continue on the back if necessary)

Circle the number which best indicates how the total number of models created by you in the last month compares with:

                   much fewer      about the same       many more
one month ago          1      2      3      4      5      6      7
six months ago         1      2      3      4      5      6      7

Circle the number which best indicates how the total number of models USED BY YOU compares with:

                   much fewer      about the same       many more
one month ago          1      2      3      4      5      6      7
six months ago         1      2      3      4      5      6      7

What percentage of your models will you probably use again without change?  ____ %
What percentage of your models will you probably NOT use again?  ____ %
APPENDIX II

Questionnaire Responses

The subjects responded to a series of questions on how models were built, maintained and used for the decision types (strategic, managerial and operational). Table A presents the results. Of the forty-one questionnaires returned, five subjects indicated that they did not make strategic decisions (or left the page blank). The numbers for managerial and operational decisions are five and eleven respectively. The frequencies are expressed parenthetically as the percent of respondents. The use of models is greater for managerial (61%) and operational (70%) decisions than for strategic decisions (47%). Those who used models were just as likely to develop the model personally for strategic decisions as for managerial and operational decisions. The pattern of who created and altered the models appears to hold across the decision types. Operational models were more likely to be used frequently than managerial and strategic models. Confidence is fairly high for all three decision types.

Table A
Use of Modeling by Decision Type

Frequency of response (percent of respondents)

                                        Strategic  Managerial  Operational
Decision type not applicable
  (or blank)                                5          5           11
DOES NOT USE A MODEL                      19 (53)    14 (39)      9 (30)
Models:
  DEVELOPED BY SPECIALIST                  4 (11)    10 (28)      8 (27)
  DEVELOPED BY SOMEONE ELSE                3 ( 8)     4 (11)      3 (10)
  DEVELOPED BY SOMEONE ELSE AND
    ALTERED FOR OWN USE                    5 (14)     4 (11)      3 (10)
  DEVELOPED PERSONALLY WITH
    ASSISTANCE OF SPECIALIST               2 ( 5)     0 ( 0)      3 (10)
  DEVELOPED PERSONALLY WITHOUT
    ASSISTANCE                             7 (19)     6 (17)      5 (17)
Changes to the models:
  MADE BY USER                             8 (22)     8 (22)     10 (33)
  MADE BY ASSISTANT                        4 (11)     5 (14)      6 (20)
Data input:
  MANUAL                                  18 (50)    22 (61)     18 (60)
  DIRECT TRANSFER                          5 (14)     3 ( 8)      6 (20)
  SELECT OR ALTER DATA                    16 (44)    22 (61)     19 (63)
Data output:
  STORED FOR LATER USE                    16 (44)    20 (56)     19 (63)
  USE OUTPUT DIRECTLY WITHOUT
    MANUAL REENTRY                         8 (22)     9 (25)     11 (37)

Frequency of use in last 6 months:
  n                                       14         21          18
  mean                                    12.64      21.55       27.63
  median                                   6          6           6
  std dev                                 25.36      48.63       44.96
  range                                   98         200         124

Confidence in model results:
  Response scale: 1 = not confident, 4 = neutral, 7 = very confident

Likelihood of using again:
  Response scale: 1 = very unlikely, 4 = neutral, 7 = very likely

Table B (a) presents the responses to the questions regarding the technical assistance available. Table B (b) presents the subjects' reported satisfaction with the technical assistance available. Eighteen of forty subjects were satisfied to some degree with the technical assistance available.

Table B

(a) Technical assistance

                                           Available         Not
                                        Use   Do not use   available
SELECTION OF SOFTWARE FOR PURCHASE       20       12           9
SELECTION OF SOFTWARE FOR USE            25       10           6
SELECTION OF MICROCOMPUTER - PURCHASE    21        8          11
MODEL DEVELOPMENT ON MINI OR MAINFRAME   16       13           9
MODEL DEVELOPMENT ON MICRO               15        9          15
DATA EXTRACTION                          15       17           8

(b) Satisfaction with technical assistance

Frequency   Response scale
    3       1  Very unsatisfied
    2       2
    7       3
   10       4  Neutral
    6       5
    8       6
    4       7  Very satisfied

n = 40   mean = 4.35   median = 4   std dev = 1.688   range = 6

The training available was taken advantage of by the subjects for the use of micro, mini and mainframe computers. There was less training available for programming and model development, and when it was available the subjects were less likely to make use of it.
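The percentages reported in Tables A and B are simple percent-of-respondents figures, with subjects for whom a decision type did not apply excluded from the base. A minimal sketch of that calculation (the function name is illustrative, not from the original):

```python
def percent_of_respondents(count, returned, not_applicable):
    """Percent of subjects for whom the decision type applied.

    Table A uses the applicable respondents as the base, not all
    forty-one returned questionnaires.
    """
    applicable = returned - not_applicable
    return round(100 * count / applicable)

# Managerial column: 41 returned, 5 not applicable, 14 report no model use.
print(percent_of_respondents(14, 41, 5))    # -> 39, as in Table A
# Operational column: 11 not applicable, 11 use output directly.
print(percent_of_respondents(11, 41, 11))   # -> 37
```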
The responses to the question about satisfaction with the training provided are spread over the full range of the scale. The mean and median fall near or on the middle of the scale, indicating a neutral response: neither satisfied nor dissatisfied with the training available. The results of the training questions are presented in Table C.

Table C

(a) Training

                                           Available         Not
                                        Use   Do not use   available
USE OF MINI OR MAINFRAME                 16       13          12
USE OF MICROCOMPUTER                     24        8           9
PROGRAMMING                               7        2          20
MODEL DEVELOPMENT ON MINI OR MAINFRAME    8       12          20
MODEL DEVELOPMENT ON MICROCOMPUTER       12       12          17

(b) Satisfaction with training

Frequency   Response scale
    3       1  Very unsatisfied
    5       2
    2       3
   13       4  Neutral
    7       5
    7       6
    4       7  Very satisfied

n = 41   mean = 4.293   median = 4   std dev = 1.707   range = 6

The next portion of the questionnaire dealt with the controls used in organizations. For those controls which were in place, the subjects were asked to indicate the impact on the ease of use of models for each decision type: strategic, managerial and operational. Table D presents the responses for each control. These results should be interpreted carefully: subjects may have responded that a control was not used when they were simply not aware of its existence. An example would be the controls dealing with access (3) and input (5) of data; nineteen subjects responded that the controls limiting access and update of data were not used. Based on this sample, organizations have not introduced limitations on modeling which are apparent to the users. The controls (6, 7, 8, 11) dealing directly with model creation and use are not generally in place in organizations. For those subjects who report the use of these controls, the responses are generally that the controls assisted rather than hindered the use of models. In particular, an approved set of models (8) is reported as assisting the modeling process.
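The summary lines reported with Tables B through D (n, mean, median, standard deviation, range) follow directly from the frequency distributions of the 7-point responses. A minimal sketch using the Table C (b) satisfaction frequencies (the function name is illustrative, not from the original):

```python
import statistics

def summarize(freqs):
    """Summary statistics for a 7-point response scale.

    freqs[i] is the number of subjects who circled response i + 1.
    """
    responses = [scale for scale, f in enumerate(freqs, start=1)
                 for _ in range(f)]
    return {
        "n": len(responses),
        "mean": round(statistics.mean(responses), 3),
        "median": statistics.median(responses),
        "std_dev": round(statistics.stdev(responses), 3),  # sample std dev
        "range": max(responses) - min(responses),
    }

# Satisfaction with training, Table C (b):
print(summarize([3, 5, 2, 13, 7, 7, 4]))
# -> {'n': 41, 'mean': 4.293, 'median': 4, 'std_dev': 1.707, 'range': 6}
```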
Table D
Controls
(Frequency by decision type)

(1) FLOPPY DISKS MAY NOT BE TAKEN HOME (Not Used: 29)

                               Strat    Mgr     Op
1 Hinders greatly                0       0      0
2                                0       0      0
3                                0       0      0
4 Does not hinder or assist      4       4      4
5                                1       1      1
6                                0       1      0
7 Assists greatly                1       2      2
n                                6       8      7
mean                           4.67    5.13     5
median                           4      4.5     4
std dev                       1.211   1.356  1.414
range                            3       3      3

(2) PASSWORDS REQUIRED FOR ACCESS TO CORPORATE COMPUTER (Not Used: 8)

                               Strat    Mgr     Op
n                               25      26     25
mean                          3.64    3.71   3.44
median                           4       4      4
std dev                       0.952   1.013  0.874
range                            4       5      3

(3) MUST BE GIVEN PERMISSION TO ACCESS DATA (Not Used: 19)

                               Strat    Mgr     Op
1 Hinders greatly                2       2      1
2                                2       2      1
3                                2       3      4
4 Does not hinder or assist      9       9     10
5                                0       0      0
6                                0       0      0
7 Assists greatly                0       0      0
n                               15      16     16
mean                           3.2    3.19   3.44
median                           4       4      4
std dev                       1.146   1.109  0.892
range                            3       3      3

(4) MUST ASK PROGRAMMERS TO WRITE DATA EXTRACTIONS (Not Used: 20)

                               Strat    Mgr     Op
1 Hinders greatly                1       1      1
2                                2       3      4
3                                2       4      2
4 Does not hinder or assist      6       5      4
5                                1       0      1
6                                2       2      2
7 Assists greatly                0       0      0
n                               14      15     14
mean                          3.71     3.4   3.43
median                           4       4    3.5
std dev                       1.437   1.404  1.555
range                            5       5      5

(5) CANNOT PUT DATA INTO CORPORATE DATABASE (Not Used: 19)

                               Strat    Mgr     Op
1 Hinders greatly                1       1      1
2                                0       0      0
3                                1       2      2
4 Does not hinder or assist     11      11      9
5                                0       1      1
6                                0       0      0
7 Assists greatly                0       0      0
n                               13      15     13
mean                          3.69    3.73   3.69
median                           4       4      4
std dev                       0.855   0.884  0.947
range                            3       4      4

(6) TECHNICAL REVIEW OF MODELS CREATED PERSONALLY (Not Used: 32)

                               Strat    Mgr     Op
1 Hinders greatly                0       0      0
2                                0       0      0
3                                0       0      0
4 Does not hinder or assist      1       1      1
5                                1       1      2
6                                1       0      0
7 Assists greatly                1       1      1
n                                4       3      4
mean                           5.5    5.33   5.25
median                         5.5       5      5
std dev                       1.291   1.528  1.258
range                            3       3      3

(7) DOCUMENTATION REQUIRED FOR PERSONALLY CREATED MODELS (Not Used: 32)

                               Strat    Mgr     Op
1 Hinders greatly                0       0      0
2                                0       0      0
3                                1       1      2
4 Does not hinder or assist      0       0      0
5                                1       2      1
6                                0       0      0
7 Assists greatly                0       0      0
n                                2       3      3
mean                             4    4.33   3.67
median                           4       5      3
std dev                       1.414   1.155  1.155
range                            2       2      2

(8) APPROVED SET OF MODELS AVAILABLE (Not Used: 26)

                               Strat    Mgr     Op
1 Hinders greatly                0       0      0
2                                0       0      0
3                                0       0      0
4 Does not hinder or assist      5       6      6
5                                1       1      1
6                                2       1      1
7 Assists greatly                2       2      2
n                               10      10     10
mean                           5.1     4.9    4.9
median                         4.5       4      4
std dev                       1.287   1.287  1.287
range                            3       3      3

(9) DEPARTMENT CHARGED FOR COMPUTER USE (Not Used: 18)

                               Strat    Mgr     Op
1 Hinders greatly                1       1      1
2                                1       1      1
3                                2       2      2
4 Does not hinder or assist     12      14     12
5                                0       0      0
6                                0       0      0
7 Assists greatly                0       0      0
n                               16      18     16
mean                          3.56    3.61   3.56
median                           4       4      4
std dev                       0.892    0.85  0.892
range                            3       3      3

(10) ENTRY TO CENTRAL COMPUTER ROOM RESTRICTED (Not Used: 23)

                               Strat    Mgr     Op
1 Hinders greatly                0       0      0
2                                2       2      2
3                                0       0      0
4 Does not hinder or assist     10      12     11
5                                0       0      0
6                                0       0      0
7 Assists greatly                0       0      0
n                               12      14     13
mean                          3.67    3.71   3.69
median                           4       4      4
std dev                       0.778   0.726  0.751
range                            2       2      2

(11) PERMISSION REQUIRED TO ACCESS MODELING SOFTWARE (Not Used: 33)

                               Strat    Mgr     Op
1 Hinders greatly                0       0      0
2                                0       0      0
3                                0       0      0
4 Does not hinder or assist      2       2      2
5                                1       1      1
6                                0       0      0
7 Assists greatly                0       0      0
n                                3       3      3
mean                          4.33    4.33   4.33
median                           4       4      4
std dev                       0.577   0.577  0.577
range                            1       1      1

The last section of the questionnaire asked the subjects to estimate the number of models they had stored on micro, mini or mainframe computers. They were asked to report the number by package or language and by computer type (micro, mini or mainframe), and to break down their response into the number created personally and the number created by others. Table E (a) presents the statistics for the responses and Table E (b) presents the distribution of packages or languages used. As was mentioned earlier, the predominant use was micro-based. The questionnaire requested just models stored, but the responses to the question of what software was used included a number of word processing packages. Lotus 1-2-3 was the most widely reported package, included by 68.5% (24 of 35) of the subjects.

Table E

(a) Number of Models Stored

Computer type:        Micro              Mini or Mainframe
                  Self     Others         Self     Others
n                  35        35            33        33
mean             26.06     12.46          2.21      1.81
median             15         1             0         0
std dev          28.26    33.176         8.088      7.97
range             112       188            40        45

(b) Packages or Languages Reported

                    Number of responses       Mean number
Software            Micro     Mini,Mf       Self     Other
Lotus 1-2-3           24         1          19.67     6.67
Symphony               3         0          17.67     6.00
Multiplan              8         0          11.125    2.00
Storm                  5         0           7.4      3.00
IFPS                   6         2           1.57     2.14
DBase III              2         0           0.00    10.50
Project Mgr            2         0          12.50     2.50
Anova (Stat.)          1         0           4        8
Basic                  0         1           5       10
380                    0         1          25       30
CAD                    1         0          10      100
Cash Mgt.              1         0           0        1
C-Calc                 0         1          15       15
Data Pro               1         0           1        3
DBMS                   1         0          35        2
EASYTRIEVE             0         1          25        2
Enable WP/DBMS         1         0          25       75
Focus                  0         1           0        2
Forecasting            1         0           0        2
Fortune - word         1         0          50        0
Graphwriter            1         0          10        0
Harvard TPM            1         0           2        1
Multimate              1         1         100        0
Multics           (not reported)             5       10
Scripsit               1         0           6        0
Spellbinder            1         0           4        0
Writing Ass't          1         0          20        0
Other (unspecified)    1         0          50        6

Table F presents the responses to questions asking for a comparison of the number of models created personally and by others in the last month with one month and six months previously. The majority of subjects reported the number to be about the same as in previous time periods.

Table F
Comparison of Models by Person Creating and Storage Location

             One month ago     Six months ago
             Self    Others    Self    Others    Response scale
               3       2         4       2       1 Much fewer
               2       0         4       0       2
               1       0         5       1       3
              28      19        18      16       4 About the same
               3      10         4       5       5
               0       0         2       6       6
               0       0         0       1       7 Many more
n             37      31        37      31
mean        3.70    4.13      3.54    4.42
median         4       4         4       4
std dev     1.00    0.96      1.30    1.31
range          4       4         5       6

Table G presents the responses to questions asking what percentage of the models will be used again without change and what percentage of the models will not be used again. The subjects reported that about half of their models will be used again without change and that about a third of their stored models will not be used again.
Table G

(a) Percentage of models will probably use again without change

n = 32   mean = 49.47   median = 50   std dev = 29.144   range = 100

(b) Percentage of models will probably not use again

n = 32   mean = 32.34   median = 20   std dev = 30.506   range = 100

APPENDIX III

Pearson Correlations

Subject #   Strategic
    4         0.820*
    8         0.847*
    9         0.951*
   10         0.908*
   12           --
   13         0.834*
   14         0.852*
   15         0.964*
   16         0.876*
   17        -0.573
   18         0.974*
   19         0.992*
   21         0.845*
   22         0.223
   23         0.850*
   24         0.617
   25         0.880*
   26           --
   27         0.502
   28         0.663
   29         0.787*
   30         0.839*
   31         0.812*
   32         0.695
   33         0.930*
   34         0.801*
   35         0.933*
   37         0.958*
   38         0.921*
   40         0.848*
   41         0.806*
   42         0.811*
   43         0.789*
   44         0.674
   45         0.050
   46         0.938*
   47           --
   50         0.915*

Managerial (in subject order; subject 12 returned no questionnaire):
0.609, 0.856*, 0.921*, 0.913*, -0.386, 0.956*, 0.675, -0.018, 0.817*, 0.932*,
0.838*, 0.611, 0.500, 0.587, 0.686, 0.911*, 0.834*, 0.770*, 0.639, 0.939*,
0.895*, 0.907*, -0.231, 0.638, 0.910*, 0.564, 0.942*, 0.876*, 0.775*, 0.645,
0.853*, 0.904*, 0.938*, 0.761*, 0.834*

Operational (in subject order):
0.644, 0.207, 0.908*, 0.366, 0.943*, -0.428*, 0.958, 0.512, 0.825*, 0.885*,
-0.100, 0.699, -0.324, -0.273, 0.587, 0.948*, 0.793*, 0.950*, 0.266, 0.477,
0.976*, 0.905*, 0.810*, 0.518, 0.945*, 0.929*, 0.459, 0.838*, 0.725, 0.749*,
0.730*, 0.780*, 0.938*, 0.844*, 0.310

Mean (in subject order):
0.77, 0.78, 0.96, 0.84, 0.60, 0.68, 0.92, 0.59, 0.13, 0.94, 0.76, 0.88, 0.17,
0.52, 0.60, 0.90, 0.90, 0.82, 0.74, 0.78, 0.95, 0.93, 0.90, 0.63, 0.79, 0.96,
0.87, 0.96, 0.84, 0.88, 0.85, 0.84, 0.85, 0.68, 0.96, 0.87, 0.81

Summary:        Strat     Mgr      Op      Mean
mean            0.758    0.709    0.609    0.759
std             0.300    0.312    0.385    0.225
count             35       35       35       37
max             0.992    0.956    0.976    0.963
min            -0.573   -0.386   -0.428    0.000
range           1.565    1.342    1.404    0.963

Notes: rated cards equally; equal weight on questionnaire; rated cards equally;
equal weights on questionnaire; questionnaire not complete; questionnaire not
complete; subject 12 returned no questionnaire.
R-N- Elannin9_and_99ntr91_§xstems. Harvard University Press, Boston, 1965. Applegate, L.M., Klein, G., Konsynski, B.R., and Nunamaker, J.F., "Model Management Systems: Proposed Model Representations and Future Designs,” Sixth_1nternatignal C9nferense_9n_1nformation_sxsten§. 1985. pp- 1-16- Applegate, L. M., Klein, G., Konsynski, B. R., and Nunamaker, J. F., ”Model Management Systems: Design for Decision Support ” Decision_§uppert_fixstens Number 2 1986. pp 81- 91. Arena. A-A- and Loebbecke. J-K- Auditin91_6n_1ntesrated Apprgagh, Prentice-Hall, New Jersey, 1980. Ariav, G. and Ginzberg, M.J. ”DSS Design: A Systematic View of Decision Support.” Qonnunisations_of_the_asn. Vol. 28 No. 10, October, 1985, pp.1045-1052. Averill, J.R. "Personal Control over Aversive Stimuli and its Relationship to stress," £§Y2h21991231_59119tin. Volume 80, Number 4, 1973, pp. 286-303. Barbosa, L.C. and Birko, R.G. "Integration of Algorithmic Aids into Decision Support Systems,” M1§_Qngztg:1y, March 1980, pp. 1-8. Benson, D.B. ”A Field Study of End User Computing: Findings and Issues," u1§_ggaxtg;1y, Vol. 7 No. 4, 1983, pp.35-45. Berlinger. E. Siristlx_§trusture9_hasic. West Publishing C0-. St. Paul, MN, 1986. Blanning, R.W. "A Relational Framework for Model Bank Organization." IEEE_£9rksh9p_in_Lansuase_AEtonation. 1984. pp. 148-154. 178 179 Blanning, R. W. "A Relational Framework for Model Management in Decision Support SystemS." n§§:§2_1ransactions ed Gary W. Dickson, 1982, pp. 16- 28. Bonczek, R.B., Holsapple, C.W. and Whinston, A.B. ”Developments in Decision Support Systems" in Adygnggg , ed. Marshall C. Yovits, Volume 23, Academic Press, 1984, pp. 141-175. Bonczek, R. H., Holsapple, C. W. and Shinston, A.B. ”The Evolving Roles of Models in Decision Support Systems," Dggi§193_§gigngg, Volume 11, 1980, pp. 337- 356. Boynton. s. Qb2s91ats1_The_Qonsunins_£assion. Workman Publishing, New York, 1982. Brill. 4.3. Buildins_C9ntr91s_into_§trustured_fixstens. Yourdon Press, New York, NY, 1983. 
Cattin, P. ”Some Findings on the Estimation of Continuous Utility Functions in Conjoint Analysis," Adyangg§_1n Mark_ting_Besearsh 1982. pp- 367 372- Cattin, P. and Bliemel, F. "Metric vs. Nonmetric Procedures for Multiattribute Modeling: Some simulation results" Decision_fioienses. Vol. 9. 1978. pp-472-480- Chervany, N.L. and Dickson, C.W. ”An Experimental Evaluation of Information Overload in a Production Environment," Mgngggmgnt_§gigngg§, Volume 20, Number 10, June 1974, pp. 1335-1344. Couger. J.D- "E Pluribus Computum.” Barxard_nusinsss_aexiew. September-October, 1986, pp.87-91. ' Darmon, R.Y. ”Setting Sales Quotas with Conjoint Analysis," 12urnal_of_harketins_8esearsh. Vol XVI. February. 1979. pp. 133-140. Date. C-J- An_1ntrodustion_to_natabase_fixsten§. Addison-Wesley, 1982. Davie. G B- and Olson. H-H- uana9emsnt_1nformation_§xstensi -1_-° 0 1°. 01 “ 1° h: ‘ oou:1 1° Editign, McCraw-Bill, New York, 1985. DeMarco, T. 5trustured_AnalYsis_and_§Y§ten_§peCifisation. Yourdon Inc., 1978. 180 Dickson, C.W., Senn, J.A. and Chervany, N.L. "Research in Management Information Systems: the Minnesota ExperimentS.” Management_asienoe. Volume 23. Number 5. May 1977, pp. 913-923. Dolk, D. R. and Konsynski, B. R. "Model Management in OrganizationS." Information_i_nanasenent. North-Holland, Volume 9, 1985, pp. 35-47. Dolk, D.R. and Konsynski, B.R. "Knowledge Representation for Model Management SystemS." IEEE_Tran§astion§_on ngtwggg, Volume SE10, Number 6, November 1984, pp. 619-628. Duessenbery 3.8- et a1. The_Br29kinss_QuarterlY_Eson9metris M9del_of_the_UEited_States. Rand-McNallY. Chicago. 1965. Eliason, A.L. v ° Implementatign, Little, Brown and Company, Boston, 1987. Elam, J.J. and Konsynski, B. ”Using Artificial Intelligence Techniques to Enhance the Capabilities of Model Management Systems,” pg91§19n_§giengg§, Volume 18, 1987, pp. 487-502. Fisher, R.A. 
"The Theory of Confounding in Factorial Experiments in Relation to Theory of Groups," Annals of Eugenics, 11, 1942, pp.341-353. Gallegos, F., Richardson, D. R., Borthick, A. F., Audit_§ng C9ntr9l.91.1nfornation.§xstsns. South-Western Publishing, West Chicago, IL, 1987. Gene. C- and Sarson. T- Strustursd_sxstem_AEalxsis. Prentice-Hall, 1979. Sorry G.A. and Scott Morton, M.S. "A Framework for Management Information Systems," Slgan_uanaggmgnt Bgyigg, Fall, 1971, pp.55-70. Goule, M, Shane, B and Tonge, F.M. ”Using a Knowledge-Based Decision Support System in Strategic Planning Decisions: An Empirical Study," Journal of Management Information Systems, Volume II, No. 4, Sprint, 1986, pp 70-84. Green, P.E. and DeVita, M.T. ”An Interactive Model of Consumer Utility-” l9urnal_of_£9nsunsr_flessarsh. Vol. 2, 1975, pp.146-153. 181 Green, P.E. and Srinivasan, V. "Conjoint Analysis in Consumer Research: Issues and Outlook." Jguzng1_gfi ggngnmg1_3g§ggzgh Vol. 5, September 1978, pp.103-123. Green, P.E. and Tull, D.S. o Dggigigng, Prentice-Hall, Inc. 4th ed. 1978, pp.477- 495. Green, P.E. and Wind, Y. ”New Way to Measure Consumers' Judgggntié' Barxard_nusines§_8exiey. July-August 1975. pp. - . Intgligggfi, V01. 15 NO. 1, 1985. Jain, A.B., Acito, F., Malhotra, N. and Mahajan, V. ”A Comparison of the Internal Validity of Alternative Parameter Estimation Methods in Decomposition Multiattribute Preference Models." lgnzng1_91_ngzkgting Eggggxgh‘L, 1979, Vol. 16, pp.313-323. JenkinS. A-M- MIS_Des1gn_Yariablss_and_nesision_hakins o ' . UMI Research Press, 1983, Ann Arbor, MI, pp. 7-48. Keen, P.G.W. and Woodman, L.A. ”What to do with all those micrOS.” Barxard_Business_BexisR. September-October 1984, pp.142-150. .....-- Keppel, G. -‘ - 1 -.1°_ 1. 8 s 1 ‘ : a1°.°°° , 2nd Ed. Prentice-Hall, Englewood Cliffs, NJ, 19 82. Klein, G., Konsynski, B.R. and Becker, P. "A Linear Representation for Model Management in a DSS," Jpgxnal 9f_Manasment_1nfornatien_fixstems. Fall. 1985. Volume II, No. 2, pp. 
40-54. Kruskal, J.B. "Analysis of Factorial Experiments by Estimating Monotone Transformation of the Data," , Series B, 27, February 1964, pp.251-263. Lewellen, W.G. and Huntsman, B. ”Managerial Pay and Corporate Performance." Anerisan.£29nonis_fiexiew. Volume 60, September, 1970, pp. 710-720. LucaS. H-C- Jr. flhx_Information.§xstems_£ail. Columbia University Press, New York, 1975. Luce, R.D. and Tukey, J.W. "Simultaneous Conjoint Measurement: A New Type of Fundamental Measurement." 19urnal_9f_Mathsmatioal_£sxsholosx. Vol. 1. 1964. pp-l- 27. 411 182 Markus, M.L. ”Power, Politics, MIS Implementation," Comnunisation§_of_ths_agu. Volume 26. Number 6. June 1983, pp. 430-444. Merchant. R.A. Contr2l_in_Business_Qrsanizations. Pitman. Boston, 1985. Miller, L.W. and Katz, N. "A Model Management System to Support Policy Analysis.” Decision_§upport_sxstens. Number 2, 1986, pp. 55—63. Murphy, M.J. "Corporate Performance and Managerial Remuneration.” 19urnal_of_Accountins_and_fison2niss. Volume 7, 1985, pp. 11-42. Nolan, R. L. ”Computer Data Bases: The Future is Now," Barxard_8usiness_3exiew. September-October 1973. pp 98- 114. Pearson. O-R. 2r9srannins_with_Basisi_a_£raCtisal_aabreast. McGraw-Bill, New York, NY, 1986. Rao, V. R. "Conjoint Measurement in marketing Analysis,” in 9 tv: z,’ r‘ '3 O 9". ’ 1'. 1 2 !, ed. J. N. Sheth, Chicago: American Marketing Association, 1977, pp. 257-286. Rockart J. F. and Flannery, L.S. "The Management of End User Computing." Cammunisations_of_tbs_agu. Vol. 26 No. 10, October 1983, pp.776-784. Salmonson, R.F., Hermansen, R.B., and Edwards, J.D. A_§szgy of.Basis.Assountin9. Irwin. Homewood. IL. 1981. Siers, H.L. and Blyskal, J.K. ”Risk Management of the Internal Audit Function.” Management_8229untins. February 1987, pp. 29-35. Simon. R.A. The_NeE_Ssience_of_nanasement_nesisien. Harper 8 Brothers, New York, 1960. Sprague, R. H. "A Framework for the Development of Decision Support Systems,” (MIS Quarterly, Number 4,1980, pp. 
1- 26) Reprinted in Decision_§uppert_sxstsms. (ed) William C. House, Petrocelli, 1983, pp. 85-124. Sprague. R-H- and Carlson. R.D. Buildins.£ffectixe_nesision Suppgzt_§yfitem§, Prentice-Hall, 1982. Sprague, R.H. and Watson, H. "MIS Concepts - Part II," 19urnal_of_§Y§tsn_Manasenent. Volume 26. Number 2. 1975b, pp. 35-40. 183 Edition, nasraw-Hill, NY, 1975. §2§§_deatg_1;2, McGraw-Hill, New York, NY, 1981. SESSx_n§er;§_§nidg, 2nd Edition, SPSS Inc., Chicago, IL, 1986. Stohr, E A., and Tanniru, M.R. "A Data Base for Operations Research Models,” Working Paper CRIS #3, GBA #80 102(CR), New York University, also in Internatignal I Volume 4, No. 1, Dec. 1980. Stotland, E. and Blumental, A.L. "The Reduction of Anxiety as a Result of the Expectation of Making a Choice," ' Volume 18, I Number 2, 1964, pp. 139-145. Ullman. J.D. 2rin2iples_of_natabase_fixstems. Second Edition, Computer Science Press, 1982. Weber, R. second edition, McGraw-Bill, New York, 1988. Wilson, P. R. Yalu__of_lnfornation. unpublished doctoral dissertation, University of North Carolina at Chapel Hill, 1984. ADDITIONAL REFERENCES NOT CITED Acito, F. ”An Investigation of Some Data Collection Issues in Conjoint Measurement. " in Contempo__rx_nark_tins IDQQQDI. Educators Conference Proceedings, B. A. and D.N. Bellenger, eds. Chicago: American Marketing Association. 1977 Acito, F. and Jain, A.B. "Evaluation of Conjoint Analysis Results: A Comparison of'Methods.” Besesreh Vol. XVII, February 1980, pp.106-112. Adams, D. R., Wagner, G. E., and Boyer, T. J. Qsmpgrer , South-Western, 1983. D . e e ‘ O. O O ‘ ‘ ‘ ‘ O 0_ e ‘ L, s ' ! .”. I ‘ I I 1!..'_ .‘ i - I AmeriCan Institute of Certified Public Accountants, Commerce Clearing House, Chicago, 1980. Baker. K R- and Kr0pp. D H. An_Introdustion_to_ths_nss_ef Qesisien_nege1s, John Wiley & Sons, 1985. Barron, F.B. ”Axiomatic Conjoint Measurement,” Desisisn Sciences. Vol- 8. pp-48-59- Bettman, J.R. and zins, M.A. 
"Information Format and Choice Task Effects in Decision Making." isgrnsl_sr_gsnssmer Resesrsh, Vol. 6, September 1979, pp.141-153. Blanning, R.W. ”What is Happening in DSS?" Inserrsses, Volume 13, Number 5, October 1985, pp. 71-80. Bohl. H- Essentia1s_of_Information_£resessins. SRA. 1986- Bonczek, R.B., Holsapple, C.W. and Whinston, A.B. F9nnda1i9ns_9f_Decision_§uppert_fixstsms. Academic Press, 1981. Bopp, A. E. "On Combining Forecasts: Some Extensions and Results," Msnsgemenr_§sienee, Volume 31, Number 12, December 1985, pp. 1492-1498. Cardenas. A-F. Data_Bass_hanasenent_sxstsns. Allyn and Bacon, 1985. Carmone, F.J., Green, P.E. and Jain, A.B. "Robustness of Conjoint Analysis: Some Monte Carlo Results." legrnsl 9f_narketins_nesearsh. Vol- XV. Hay 1978. pp-300-303- Cattin, P. and Wittink, D.R. "Further Beyond Conjoint Measurement: Toward a Comparison of Methods," agysnses in_harketins. Vol 4. 1977. pp-41-45- 184 185 Cattin, P. and Wittink, D.R. ”A Monte Carlo Study of Metric and Non-Metric Estimation Methods for Multiattribute Models,” Research Paper No. 341, Graduate School of Business, Stanford University. 1976. Cochran, W.G. and Cox, G.M. Experimenrs1_nesigns, second edition. New York: John Wiley 8 Sons, 1966. Ditlea, S., "Spreadsheets can be Hazardous to your Health," Personal_semputins. Vol- 11 No. 1. January. 1987. ppe 60-69 e Fenwick, I. "A User's Guide to Conjoint Measurement in Harketin9-" l9urnal_of_gonsunsr_3esearsh. Vol- 12. 1978, pp.203-211. Green, P.E. "On the Design of Choice Experiments Involving Multifactor Alternatives." l9urnal_of_ansumer_Besearch Vol. 1, September 1974, pp.61-68. Green, P.E., Carroll, J.D., and Carmone, F.J. "Some New Types of Fractional Factorial Designed for Marketing Experiments, " in J.N. Sheth, ed., Besesreh_1n Mergering, Vol. 1. Greenwich, Connecticut: JAI Press, 1978. Green, P. E., Carroll, J. D., and Carmone, F. J. ”Design Considerations in Attitude Measurements," in MQYIDQ_A Bead_Eith_Attitude_Besearsh. eds. Y. 
Wind and M.G. Greenberg, Chicago: American Marketing Association, pp. 9-18.

Green, P.E. and Rao, V.R. "Conjoint Measurement for Quantifying Judgmental Data." Journal of Marketing Research, Vol. VIII, August 1971, pp. 355-363.

Green, P.E. and Wind, Y. Multiattribute Decisions in Marketing: A Measurement Approach. Hinsdale, IL: Dryden Press, 1973.

Green, P.E., Wind, Y. and Jain, A.K. "Preference Measurement of Item Collections." Journal of Marketing Research, Vol. IX, November 1972, pp. 371-377.

Henry, W.A. "The Effect of Information-Processing Ability on Processing Accuracy." Journal of Consumer Research, Vol. 7, June 1980, pp. 42-48.

Holland, C.W. and Cravens, D.W. "Fractional Factorial Experimental Designs in Marketing Research." Journal of Marketing Research, Vol. X, August 1973, pp. 270-276.

Keen, P.G.W. "Decision Support Systems: A Research Perspective," in Decision Support Systems: Issues and Challenges, Pergamon Press, Oxford, England, 1981.

Keen, P.G. and Wagner, G.R. "DSS: An Executive Mind-Support System," Datamation, November 1979, pp. 117-122.

Klingler, D.E. "Rapid Prototyping Revisited," Datamation, October 15, 1986, pp. 131-132.

McCarthy, W.E. "On the Future of Knowledge-based Systems in Accounting." To appear in DR Scott Memorial Lecture Series, University of Missouri-Columbia, Vol. XV, 1986.

McCarthy, W.E. "The REA Accounting Model: A Generalized Framework for Accounting Systems in a Shared Data Environment." The Accounting Review, July 1982, pp. 554-578.

McCarthy, W.E. "An Entity-Relationship View of Accounting Models." The Accounting Review, October 1979, pp. 667-686.

Merten, A.G. and Severance, D.G. "Data Processing Control: A State-of-the-Art Survey of Attitudes and Concerns of DP Executives," MIS Quarterly, Volume 5, Number 3, June 1981, pp. 11-32.

Montgomery, D.B., Wittink, D.R., and Glaze, T. "A Predictive Test of Individual Level Concept Evaluation and Trade-off Analysis," Research Paper No. 415, Graduate School of Business, Stanford University, 1977.

Olshavsky, R.W. and Acito, F.
"An Information Processing Probe into Conjoint Analysis." Desisisn_§eienees, Vol. 11, 1980, pp.451-470. Sandler, L. "Wall Street is Finding its Trusty Computers have their Dark Side," fls11_srreer_lenrns1, December 4, 1984. SempreviVO. P-C- SYstsms_8nalysisi_nefinition. ProceSS. and Design, SRA, 1982. Sprague, R.H. and Watson, H. ”MIS Concepts - Part I," I9urnal_of_SYsten_Manassment. Volume 26. Number 1. 1975a, pp. 34-37. Tversky, A. "Additivity, Utility and Subjective Probability." 1ourna1_of_Mathematisal_zsysbolosy. Vol 4, February 1967, pp.1-20. 187 Ullrich, J. R. and Painter, J. R. "A Conjoint Measurement Analysis of Human Judgment. " MW Wee. Vol- 12. pp-50-6l- Winer, B. J. e a Second Edition, New York: McGraw-Bill Book Co. 1973. Zahedi, F. ”A Database View for Simulations to Unify Simulation and Database Analysis," £resee§ings_sf_the 12§§_AID§_QQD£§£§DQ§. PP- 359'351- Zinkhan, Joachimsthaler, and Kinnear, IQBED§1_QI_M§IKEELDQ Beseersh, 1987, ppset up for new chapter