This is to certify that the dissertation entitled

THE BEHAVIORAL EFFECTS OF CONTENT RATING INFORMATION ON KNOWLEDGE SYSTEM CONTENT USE

presented by

ROBIN SUZANNE POSTON

has been accepted towards fulfillment of the requirements for the Ph.D. degree in Management Information Systems.

Major Professor's Signature
Date

THE BEHAVIORAL EFFECTS OF CONTENT RATING INFORMATION ON KNOWLEDGE SYSTEM CONTENT USE

By

Robin Suzanne Poston

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Accounting and Information Systems

2003

ABSTRACT

THE BEHAVIORAL EFFECTS OF CONTENT RATING INFORMATION ON KNOWLEDGE SYSTEM CONTENT USE

By Robin Suzanne Poston

Knowledge management system content reuse is critical to leveraging the intellectual capital within a firm, especially when high quality content is reused. While following the recommendations of others (i.e., ratings) as to what is high quality content can be a good strategy, it is possible that these recommendations are intentionally or unintentionally biased, leading to poor recommendations and inappropriate reuse of content. To address this, knowledge systems offer indicators of credibility in content ratings and content recommendations to better direct knowledge workers to high quality content. Individual psychology theory suggests inaccurate ratings may trigger individuals to use credibility indicators and content recommendations. In this study, two different credibility indicators (sample size and source expertise) and one content recommendation characteristic (filter sophistication) are examined to see if individuals can use this information to overcome inaccurate content ratings. Four laboratory experiments provide evidence that ratings have a strong influence on content usage decisions regardless of rating accuracy, that the moderating effect of source expertise matters, and that the effects of sample size as an indicator of rating credibility and of filter sophistication in content recommendations do not.

Copyright by
ROBIN SUZANNE POSTON
2003

ACKNOWLEDGEMENTS

There are many people responsible for helping me achieve my goal of completing this dissertation and this degree. The debt I owe those listed below and others is too great to ever be fully repaid, but this acknowledgement is my attempt at letting them know how important they were and continue to be in my life.
I am grateful for the tireless and patient help and support of my dissertation committee: Cheri Speier (chair), Joan Luft, Roger Calantone and V. Sambamurthy. Cheri played many roles in my academic life over the last five years. She has been a peer, mentor, counselor, friend, and coach, playing each role with patience, understanding and unswerving confidence in me. Joan joined my committee early and spent many hours explaining concepts and encouraging my progress while always keeping the expectations high. Roger and Samba became committee members with energy and force, pushing me to be deep in my understanding. My committee was the "dream team", each with superior strengths driving my work to be its best.

I appreciate the entire faculty in the Department of Accounting and Information Systems at MSU. I am grateful for the interest and enthusiasm the faculty has for the development of doctoral students. The faculty treat doctoral students as colleagues, working together to move research and teaching forward. Many faculty members have taken the time to talk about research, teaching and career issues with me. Many faculty members have inquired about and encouraged my progress, and a few went even further and deserve specific thanks: Bill McCarthy, Sev Grabski, and Nancy Gustafson.

I also want to thank my doctoral student colleagues in the Department of Accounting and Information Systems, who have always been willing to discuss challenging conceptual issues, provide constructive feedback, and help with practical problems. Although few had interests in my research area, my peers offered and followed through with suggestions and ideas that improved my work substantially. Specifically, I want to thank Terence Pitre, Roger Stace, and Christian Mastalik, who did not wait for me to ask but came to me asking how they could help.

I could not be here without my parents and my family. My mother, who answered my calls without objection to short turnaround times, edited all the rough drafts of this work. My parents both provided financial and emotional support throughout my entire doctoral program. Most encouraging was that my father received his doctorate in engineering from The University of Texas at Austin, which was a constant reminder that if he could do it, so could I. (By the way, my mother also edited his dissertation.) I especially want to thank my brother Regan, who works in consulting and spent many hours sharing ideas about research, which kept the research business oriented.

Finally, without the diversions and encouragement provided by my friends, it would have taken a lot longer to get where I am. My tennis friends kept me exercising, my golf friends kept me outside in the sunshine, my girl friends kept me busy dancing on Saturday nights, my boyfriend kept me eating healthy meals, and my roommate listened to my roadblocks and successes no matter the hour. Thanks to all of you!!

TABLE OF CONTENTS

1. INTRODUCTION AND RESEARCH QUESTIONS .......... 1
1.1 INTRODUCTION AND RESEARCH QUESTIONS .......... 1
1.2 IMPORTANCE OF TOPIC .......... 4
1.2.1 Practical Importance .......... 4
1.2.2 Theoretical Importance .......... 6
2. PRIOR RESEARCH .......... 7
2.1 KNOWLEDGE MANAGEMENT .......... 7
2.2 INFORMATION RETRIEVAL .......... 11
2.3 INFORMATION SEARCH STRATEGIES .......... 12
2.4 INFORMATION USAGE .......... 14
2.4.1 Content Ratings .......... 14
2.4.2 Credibility Indicators .......... 16
2.4.3 Content Recommendations .......... 18
3. THEORY DEVELOPMENT .......... 20
3.1 DEFINITION OF CONTENT QUALITY .......... 20
3.2 CONTENT RATINGS, CREDIBILITY INDICATORS, AND CONTENT RECOMMENDATIONS .......... 21
3.3 THE EFFECT OF CONTENT RATINGS ON THE KNOWLEDGE CONTENT SELECTION AND USE PROCESS .......... 25
3.4 THE EFFECT OF CREDIBILITY INDICATORS AND CONTENT RECOMMENDATIONS ON KNOWLEDGE CONTENT SELECTION AND USE .......... 25
3.4.1 Credibility Indicators and Content Recommendations Influence on Rating Judgment .......... 26
3.4.2 Rater Sample Size .......... 27
3.4.3 Rater Expertise .......... 32
3.4.4 Filter Sophistication .......... 33
4. RESEARCH METHODOLOGY .......... 35
4.1 PARTICIPANTS AND POWER ANALYSIS .......... 35
4.2 EXPERIMENTAL MATERIALS .......... 36
4.2.1 Computerized Consulting Cases .......... 36
4.2.2 Knowledge System Work Plans .......... 38
4.2.3 Description of Task Type .......... 39
4.3 EXPERIMENTAL PROCEDURES .......... 40
4.4 DESIGN AND MEASURES .......... 40
4.4.1 Design and Independent Variables .......... 40
4.4.2 Dependent Variables .......... 44
4.4.3 Process Variables .......... 46
4.4.4 Controls and Other Manipulation Checks .......... 46
5. ANALYTICAL PROCEDURES AND RESULTS .......... 48
5.1 TESTS OF ORDER EFFECTS .......... 48
5.2 DESCRIPTIVE DATA ABOUT EXPERIMENTAL SUBJECTS .......... 51
5.3 STATISTICAL METHOD .......... 53
5.3.1 Covariate Measures .......... 54
5.3.2 Confirmatory Factor Analysis of Covariate Measures .......... 55
5.3.3 Effect of Covariate Measures on Dependent Variable .......... 57
5.3.4 Assumptions Underlying Statistical Analyses .......... 57
5.4 MANIPULATION CHECKS .......... 59
5.5 HYPOTHESIS TESTING .......... 60
5.5.1 Hypothesis Testing Results .......... 60
5.5.2 Baseline Condition .......... 64
5.5.3 Rater Sample Size (H1) .......... 64
5.5.4 Rater Expertise (H2) .......... 65
5.5.5 Filter Sophistication (H3) .......... 65
5.5.6 Summary of Hypothesis Testing .......... 66
5.6 SUMMARY OF RESULTS OF POST HOC ANALYSIS ON INFORMATION SEARCH DATA .......... 66
5.6.1 Answers to Post-Task Questions .......... 67
5.6.2 Subjects Who Knew Their Performance Level .......... 68
5.6.3 Information Search Process Measures .......... 71
5.6.4 Initial Information Search Strategy .......... 73
5.6.5 Post Hoc Analysis Summary .......... 75
6. DISCUSSION OF RESULTS .......... 76
6.1 INTERPRETATION OF THE RESEARCH RESULTS .......... 76
6.1.1 Influence of Content Ratings on Task Performance .......... 77
6.1.2 Moderating Influence of Credibility Indicators and Content Recommendations .......... 78
6.2 OVERALL CONCLUSIONS FROM THE RESEARCH STUDY .......... 82
6.3 IMPLICATIONS OF THE RESEARCH RESULTS .......... 82
6.3.1 Theory .......... 82
6.3.2 Practice .......... 89
6.4 CHAPTER SUMMARY .......... 90
7. LIMITATIONS AND FUTURE DIRECTIONS .......... 91
7.1 STRENGTHS AND LIMITATIONS .......... 91
7.2 FUTURE RESEARCH DIRECTIONS .......... 94
8. REFERENCES .......... 100
9. APPENDIX .......... 115
APPENDIX A: EXPERIMENTAL CELLS .......... 115
APPENDIX B: SCREEN PRINTS OF EXPERIMENTAL MATERIALS .......... 116
APPENDIX C: MANIPULATION SCREENS .......... 128
APPENDIX D: 100% QUALITY WORK PLANS .......... 142
APPENDIX E: SCREEN PRINTS OF ALL WORK PLANS .......... 144
APPENDIX F: ADMINISTRATION OF EXPERIMENT MATERIALS .......... 172
F.1 List of Work Plans for each of the Four Work Plan Order Scenarios .......... 172
F.2 First Page of Sign Up Sheet for Study Participation .......... 173
F.3 Tutorial Protocol .......... 174
F.4 Hand Out to Subjects with Login IDs .......... 175
F.5 Session Control Log .......... 176
APPENDIX G: COUNTS OF SUBJECT CHARACTERISTICS PER TREATMENT CONDITION .......... 177
APPENDIX H: CORRELATION TABLE .......... 180
APPENDIX I: WORK PLAN ANSWER MEAN MEASURES BY TREATMENT CONDITION .......... 181
APPENDIX J: DISCUSSION OF POST HOC ANALYSIS DETAILS .......... 189
J.1 Results of Post Hoc Statistical Analysis on Information Search Data .......... 189
J.2 Subjects Who Knew Their Performance Level .......... 193
J.3 Information Search Process Measures .......... 196
J.4 Measures for Initial Information Search Strategy .......... 204

LIST OF TABLES

Table 3-1 Characteristics of Content Ratings, Credibility Indicators and Content Recommendations .......... 22
Table 4-1 Variables and Operationalization .......... 41
Table 5-1 Mean Decision Quality and Decision Time by Session .......... 49
Table 5-2 Mean Decision Quality and Decision Time by Work Plan Orders .......... 50
Table 5-3 Chi-Squared Statistics for Subject Year in School by Treatment .......... 52
Table 5-4 Chi-Squared Statistics for Subject Age by Treatment .......... 53
Table 5-5 Chi-Squared Statistics for Subject Gender by Treatment .......... 53
Table 5-6 Chi-Squared Statistics for Subject Experience by Treatment .......... 53
Table 5-7 Covariates and Post Hoc Analysis Constructs .......... 54
Table 5-8 Results for the Extraction of Component Factors .......... 56
Table 5-9 Principal Components Analysis Factor Matrix .......... 56
Table 5-10 Regression of Decision Quality on Control Variables Post Factor Analysis .......... 57
Table 5-11 Results of the Kolmogorov-Smirnov (K-S) Goodness of Fit Test .......... 58
Table 5-12 Results of the Levene Test of Homogeneity of Variance .......... 59
Table 5-13 Results of Manipulation Checks for Treatment Conditions .......... 60
Table 5-14 Summary of Means and Standard Deviations by Treatment Condition .......... 61
Table 5-15 Summary of F-Statistics and p-Values for each Hypothesis .......... 62
Table 5-16 Summary of t-Statistics and p-Values for Control Variables by Hypothesis .......... 62
Table 5-17 Post Hoc Analysis Constructs and Measures .......... 69
Table 5-18 Knowing Rating Accuracy and Decision Performance .......... 70
Table 6-1 Counts of Initial Search Strategy by Credibility Indicator .......... 85

LIST OF FIGURES

Figure 3-1 Knowledge System Content Rating Research Model .......... 26
Figure 3-2 Interaction Effect of Credibility Indicators/Content Recommendations and Content Rating Accuracy on Task Decision Quality .......... 28
Figure 5-1 Means Plots for Decision Quality in Rater Sample Size Experiment .......... 63
Figure 5-2 Means Plots for Decision Quality in Rater Expertise Experiment .......... 63
Figure 5-3 Means Plots for Decision Quality in Collaborative Filter Experiment .......... 63
Figure 6-1 Modified Decision Model .......... 84
Figure 6-2 Initial Search Strategy Mean Plots for the Rater Sample Size Experiment .......... 85
Figure 6-3 Initial Search Strategy Mean Plots for the Rater Expertise Experiment .......... 86
Figure 6-4 Initial Search Strategy Mean Plots for the Collaborative Filter Experiment .......... 86

CHAPTER 1

1. INTRODUCTION AND RESEARCH QUESTIONS

1.1 Introduction and Research Questions

Professional services firms, such as Ernst & Young, Accenture, and PricewaterhouseCoopers, were some of the earliest adopters of electronic knowledge systems, and they continue to promote the use of knowledge repositories to capture knowledge gained from providing client services (Orlikowski 1993, 2000; O'Leary 2001a, 2001b). These firms maintain large electronic repositories of work outcomes that are accessed by all employees (a.k.a. intra-organizational knowledge management systems) (O'Dell and Grayson 1998; Davenport and Hansen 1999). Items stored in repositories are usually submitted by anyone in the firm and include deliverables to clients, work plans, budgets, lessons learned, and anything that someone thinks might have future value to others in the firm (Hansen, Nohria and Tierney 1999; Davenport and Hansen 1999). Companies support these systems so that their employees can access and re-use old work products when doing new work, which should increase overall firm productivity (Orlikowski 2000; DeTienne and Jackson 2001).

Users typically perform a keyword search to find system content for a current task by specifying industry, revenue size, job type, etc. If the search algorithm is sufficient, a long list of the system contents that are relevant to the current task is generated (Balabanovic and Shoham 1997; Ansari, Essegaier and Kohli 2000). Finding relevant content is not always the issue, because contents can vary in quality and the user's goal is to find the most relevant, highest quality (i.e., most reliable) content as input for the current task (Sarvary 1999; Thomas, Sussman and Henderson 2001). Contents vary in quality because employees are free to contribute whatever they want to electronic knowledge systems, which means knowledge of high quality along with less than high quality could be put into the system due to differing motivations, knowledge levels and skill sets (Connolly and Porter 1990; Constant, Kiesler and Sproull 1994; Hansen and Haas 2001). The focus of this study is on how users find high quality content (see Chapter 3 for a definition of content quality).

Firms cannot delete all the low quality items or ensure only high quality items are submitted originally, because manually monitoring all content is costly (Shon and Musen 1999). As a result, knowledge systems help users find high quality content by maintaining a user feedback scheme, where content is rated as it gets used, for example, on a scale of one, meaning worthless, to five, meaning excellent (Standifird 2001; Wathen and Burkell 2002). Normally, when using the system, users will rank-order search results by ratings to help in selecting what content to use first. But this low cost solution of sharing user opinions on system content may do more harm than good because content is subjectively evaluated and ratings are voluntarily given (Jadad and Gagliardi 1998).
Also, ratings can be either intentionally or unintentionally incorrect (i.e., the rating level does not accurately reflect the actual content quality level) because those supplying the ratings may: manipulate them for self-serving purposes, not have the ability to recognize content quality, hold incorrect assumptions of what is salient to others, not foresee how others will be using the same content, be influenced by already published ratings, or use a different context when assessing content quality (Davenport and Hansen 1999; Falconer 1999; Cramton 2001; Cosley, Lam, Albert, Konstan and Riedl 2003). Much of the inaccuracy in ratings cannot be eliminated by more accurate future ratings, because current ratings influence future ratings, which reinforces the inaccuracy (Cosley, Lam, Albert, Konstan and Riedl 2003). Thus, electronic content may be ineffectively re-used, where highly rated but low quality content is used or low rated but high quality content is ignored, leading to inferior task performance.

To help users determine when ratings are incorrect, the knowledge system provides additional information such as indicators of rating credibility and content recommendations (Balabanovic and Shoham 1997; Im and Hars 2001). Content ratings are often reported as an average of the ratings supplied, along with credibility indicators such as the number of raters supplying ratings or the level of rater expertise, and content recommendations with varying levels of sophistication in the recommendation algorithms (Cosley, Lam, Albert, Konstan and Riedl 2003; Balabanovic and Shoham 1997). While not necessarily the reason they are provided, highly sophisticated filters supporting content recommendations may suggest to users a certain level of credibility in ratings. That is, because a highly sophisticated filter recommends certain items, this may suggest to users that high ratings associated with these items are accurate while low ratings are inaccurate. Decision theory research that examines the effectiveness of this type of information in decision settings has provided mixed results and has suggested features of the decision process may determine when this type of information gets used (Tversky and Kahneman 1974; Sedlmeier and Gigerenzer 1997, 2000; Stiff 1994).

This study investigates the conditions under which system-provided ratings, credibility indicators and content recommendations are used in making system content use judgments. The following research questions are addressed: How can credibility indicators help people determine the level of accuracy in ratings of knowledge system content? How can content recommendations help people determine the level of accuracy in ratings of knowledge system content? An important goal of this research is to strengthen our theoretical understanding of how system supported features influence how knowledge system users select and use system contents.

1.2 Importance of Topic

This research is important from both theoretical and practical perspectives. Theoretical importance focuses on the way in which this research builds and tests decision theory related to how individuals use rating information. Practical importance focuses on the relevance of this research to practitioners in designing and using knowledge systems with rating schemes provided.

1.2.1 Practical Importance

In many companies, system users need to screen content to find what is most appropriate (including highest quality) for their specific tasks (Davenport, DeLong and Beers 1998).
It would be beneficial for people to use their own judgment rather than just blindly following ratings, and to find the highest quality content in the quickest manner. However, it is time consuming for system users to judge all content individually themselves. If users are highly expert in the subject matter, search results may be rank-ordered by ratings and incorrect ratings are overcome by personal judgments of the content, meaning incorrect ratings get ignored. However, a more efficient solution might be to provide credibility indicators of ratings and/or content recommendations that direct users to accurate ratings, helping them more quickly find high quality content.

Also, many knowledge system users are searching for solutions to tasks where subject matter experience will be low, causing them to be uncertain about which content is high quality (Nonaka and Takeuchi 1995, 1996; Brajnik, Mizzaro, Tasso and Venuti 2002). Highly expert senior employees typically delegate the cumbersome process of searching the knowledge system to those more junior (Orlikowski 1993, 2000). Junior employees will be experienced enough to have a belief about content quality but need reassurance in determining what content is the highest quality. Ratings along with credibility indicators and/or content recommendations may provide this reassurance.

Informing users and designers on how content ratings, credibility indicators, and content recommendations help screen knowledge system content is an important system usage issue (Davenport and Hansen 1999). Users with low subject matter expertise need help deciding when ratings are incorrect, while users with more experience need help quickly assessing when ratings are incorrect in order to efficiently and effectively use knowledge system contents. Knowledge systems provide help through rating credibility indicators and content recommendations, but this help does not always work as intended. The research is designed to determine whether providing additional help in the form of system supported indicators of rating credibility and system generated content recommendations can prompt users to: 1. not use incorrect ratings but evaluate content quality personally, and 2. use correct ratings to screen content quality in order to achieve the highest level of task performance. Results from the research may influence the design of knowledge system rating schemes by suggesting improvements in rating information disclosures. Results may also provide guidance to system users in understanding the implications of rating information characteristics.

1.2.2 Theoretical Importance

Two rating credibility indicators (i.e., sample size and source expertise) and one content recommendation characteristic (i.e., filter sophistication) are examined as influences on deciding whether to rely on ratings or not in assessing content quality. To examine the two rating credibility indicators, research dealing with how humans use statistical sample size and source credibility information guides predictions (Tversky and Kahneman 1971, 1974; Sedlmeier and Gigerenzer 1997, 2000; Hovland and Weiss 1951). While research results are mixed, one dominant theme indicates specific features of the decision process prompt the use of this information. The main contribution of this study is in identifying an important new setting and application for investigating the use of system provided rating information in decision-making.
To examine content recommendations, an exploratory approach is followed in studying when and how humans use system-generated recommendations to determine whether to rely or not on ratings (Ansari, Essegaier and Kohli 2000; Balabanovic and Shoham 1997). System-generated content recommendations are a recent phenomenon and their influences on human decision-making are not widely understood. Finally, the influence of rating information on task decision quality (i.e., decision effectiveness) is the focus of this study, although task decision time is also measured and analyzed given the trade-offs between quality and time.

CHAPTER 2

2. PRIOR RESEARCH

The primary research in the literature covering knowledge management and information usage topics is summarized in this section.

2.1 Knowledge Management

Definitions of knowledge have been consistent in the literature, where knowledge is unlike data or information. Data are raw or unabridged descriptions of observations about the past, present or future world, and information is a collection of facts or data. Knowledge is the product of human reflection and experience, dependent on context and located in the individual(s) or embedded in routines or processes (DeLong and Fahey 2000; Alavi and Leidner 2001). Knowledge is more unstructured than data or information, and little research exists about how to codify it (Roos and Von Krogh 1996; Zack 1999). Many studies have been focused on developing taxonomies of knowledge (for a list of these studies see Holsapple and Joshi 2001). Taxonomies generally classify knowledge as tacit, explicit, individual, social, declarative, procedural, causal, conditional, relational, and/or pragmatic (Alavi and Leidner 2001). Also, there are four knowledge processes that are generally discussed: creation/construction, storage/retrieval, transfer and application of knowledge (Holzner and Marx 1979; Nonaka 1994; Pentland 1995).

Many studies have discussed that knowledge management is essential to the competitive advantage of the corporation in general (Riesenberg 1998; Argote and Ingram 2000; Teece 2000; DeTienne and Jackson 2001) and consulting firms specifically (Teece 1995; O'Dell and Grayson 1998; Hansen, Nohria and Tierney 1999). Competitive advantage comes from converting intangible knowledge into a product or service for which customers will pay (Edvinsson and Sullivan 1996). Competitive advantage also results because knowledge is an asset that complements production and is difficult for competitors to imitate (Grant 1996; Rivkin 2000). The need for knowledge management to sustain competitive advantage was spawned by the exodus of middle managers during the downsizing of the late eighties and early nineties. Organizations discovered that institutional memory and unique knowledge were leaving with exiting employees (Erickson and Rothberg 2000; Shah 2000). Thus, knowledge management became a more important concept to corporate leaders.

Measuring and understanding changes in knowledge has been widely investigated (Pirolli and Wilson 1998). New knowledge is created by the exchange and combination of information and data (Nahapiet and Ghoshal 1998). Without knowledge transferring tools, people are central to the flow of information, share social relationships with knowledge sources and connect with those who have information for knowledge creation (Floyd and Woolridge 1999; Hansen and Morten 1999).
Trust, certainty, information transfer, speed and co-specialization all determine how social networks of information transfer are built (Rangan 2000; Mehra, Kilduff and Brass 2001). Knowledge management systems are tools for building social networks and fostering knowledge creation (Hackbarth and Grover 1999; Tiwana 2000). Different types of knowledge system practices are evident, such as performing formal training, adopting knowledge repositories, holding knowledge fairs, building communities of practice, maintaining expertise yellow pages, and supporting talk/chat rooms (Gray 2001). The most common applications of knowledge systems are coding and sharing best practices in a repository, creating corporate knowledge directories, and creating knowledge networks (Alavi and Leidner 2001). Another term used for these systems is organization memory information systems (Stein and Zwass 1995; Wijnhoven 1999; Olivera 2000). Expert systems, artificial neural networks and artificial intelligence are specialized tools that can be embedded into knowledge systems to assist users in decision-making and are not the subject of this study.

Various case studies of how firms have deployed knowledge management systems in corporations have been performed, highlighting that successful knowledge systems: are expensive, require solutions of people and technology, recognize the politics involved, require knowledge managers, achieve benefits more from knowledge markets than hierarchies, acknowledge that sharing and using knowledge are unnatural acts, improve work processes, and require a knowledge contract (Graham and Pizzo 1996; Mullin 1996; Davenport and Prusak 1998; Pan and Scarbrough 1999; Zack 1999). Additional case studies of knowledge systems have focused on consulting firms such as Accenture (a.k.a. Andersen Consulting), Ernst & Young, PricewaterhouseCoopers, and KPMG (Quinn 1992; Wijnhoven 1999; Davenport and Hansen 1999; O'Leary 1998, 2001a, 2001b). Lessons learned from these studies include the need to foster cooperation and mutual trust among employees (Orlikowski 1993; Nelson and Cooprider 1996; Falconer 1999). Also, not all consulting firms adopt the same type of knowledge system, due to differing business models. Smaller boutique firms like McKinsey use their knowledge system to connect people more efficiently rather than to codify all available knowledge (e.g., they adopt knowledge yellow pages). Meanwhile, the Big-4 consulting firms like Ernst & Young take all available consultant experiences and categorize and codify them with formal methods (e.g., they adopt knowledge repositories full of work outcomes) (Maister 1993; Kubr 1996; Sarvary 1999).

Given all the potential benefits, however, additional case studies have shown knowledge system projects can fail (Davenport, DeLong and Beers 1998). With over 50% of knowledge management projects failing based on corporate surveys, studies have tried to measure the return on knowledge to the company (Ambrosio 2000; Housel, El Sawy, Zhong and Rodgers 2001). Studies have also tried to measure the perceived output quality of knowledge systems in focus groups of CIOs (Kankanhalli, Tan and Wei 2001) and perceived knowledge management effectiveness via academic surveys (Khalifa, Lam and Lee 2001). Additional research has examined the factors affecting knowledge system adoption based on innovation diffusion theory (Ryan and Prybutok 2001).
Individual and organizational barriers to knowledge sharing make managing this process difficult (DeLong and Fahey 2000; Chow, Deng and Ho 2000). Those who possess knowledge are reluctant to share that knowledge because they feel it would threaten their status in the firm (Orlikowski and Hofman 1997; Orlikowski 2000). As a result, free rider problems ensue as individuals may refuse to contribute to the creation of knowledge while accessing and using knowledge that others have contributed (Ba, Stallaert and Whinston 2001). Knowledge asymmetries between employees can lead to differences in organizational performance and reduced firm productivity (Thomas, Sussman and Henderson 2001). Even if knowledge is fully shared, people have limited attentional capacity and cannot absorb all the information provided to them (Greco 1999). It is unclear exactly how these inhibitors affect knowledge sharing, and the inability of individuals to effectively and efficiently use knowledge system content has not been fully examined. Little significant research addresses how system supported features (i.e., information about knowledge content such as ratings or indicators of rating credibility and content recommendations) of knowledge management systems affect knowledge system content use in decision tasks. The next section will address the psychology literature related to how decision makers use certain types of information.

2.2 Information Retrieval

The information retrieval literature examines how individuals seek out, retrieve, and determine the relevance of documents (Maglaughlin and Sonnenwald 2002; Brajnik, Mizzaro, Tasso and Venuti 2002). While various system-oriented relevance definitions exist, user-oriented relevance is defined as whatever content the information seeker says is useful to his/her purpose (Park 1994; Howard 1994). Studies indicate information seekers make judgments regarding what information to select based on their specific task, with a primary criterion being content reliability (Spink and Greisdorf 2001; Maglaughlin and Sonnenwald 2002). Studies also suggest individuals place authority and confidence in documents based on author competence and trustworthiness, content reliability, and institution affiliations (Fritch and Cromwell 2001; Sundar 1998, 1999). In electronic environments, however, some of the traditional indicators (e.g., author background, qualifications, and credentials) of document reliability are absent, making judgments less straightforward (Fritch and Cromwell 2001; Tate and Alexander 1996). People fail to properly evaluate electronic information, driving a need for independent verification, identification and validation of information sources (Fritch and Cromwell 2001; Lynch 2001). While recent studies have suggested improving how information systems are designed to optimize information retrieval given the criterion of relevance, most system features support user feedback on reliability through user assessments (Hjorland 2001; Brajnik, Mizzaro, Tasso and Venuti 2002). But findings show electronic searchers are not comfortable with advanced search features, make little use of feedback when available, and typically do not scan results beyond the first page of hits (Jansen, Spink and Saracevic 2000).

2.3 Information Search Strategies

Acknowledging that different strategies of information searching exist and examining the search patterns of individuals may provide insights into how rating information is utilized in knowledge system content usage decisions.
While the immediate research does not fully analyze search strategies, post hoc analysis may benefit from a discussion of prior research in information search strategies, and future research is needed to more fully examine these issues. Before discussing search strategies, however, an understanding of the dimensions of information processed in using knowledge system search results is helpful. One dimension is the number of search result items listed (i.e., old work plans) and the other dimension is the number of lines in each search result item (i.e., project steps). These dimensions are called "search results complexity" in this study and are consistent with the natural format of knowledge system search results and with the model which consumer and cognitive psychologists use to study how people process/search information (Payne 1976; Svenson 1979).

Cognitive psychologists believe that when individuals use a particular decision process, they will tend to search and acquire information in a manner consistent with the information needs of the decision process. The needs of the decision process guide the search process, reducing the demands of cognitive load. Models of search behavior have been described based on information inputs and do not require the performance of complex arithmetic calculations as suggested by the models (Payne 1976; Montgomery and Svenson 1976). This is important because individuals appear to process information using heuristic methods, not arithmetic expressions (Slovic and Lichtenstein 1971; Newell and Simon 1972; Svenson 1979). These heuristic methods appear to be associated with patterns of information processing/search. How individuals search in knowledge systems is an open question, and future research is needed to better understand how people search knowledge system contents, especially determining when their selection strategy is based on judging search result items as an entire unit (i.e., entire old work plans) versus comparing parts of content across search result items (i.e., by project steps) (Tversky 1969; Einhorn 1971).

When individuals perform complex tasks, they use search patterns (or decision models or heuristics) to keep the information processing requirements of the task within the limits of their cognitive processing capabilities. They possess many search patterns that are systematically used in different task situations and by different individuals (Montgomery and Svenson 1976; Newell and Simon 1972; Payne 1976; Svenson 1979; Tversky and Kahneman 1974). However, in general, individuals try to match the search pattern and the task in order to keep within their cognitive limits or reduce their cognitive strain. Future investigations are needed to determine how search patterns and the use of additional information provided along with search results interact to influence search strategies. Using additional information provided along with search results may offer help and is the topic of discussion in the next section.

2.4 Information Usage

To help users retrieve system content given the complexities of the environment, knowledge systems offer content ratings, credibility indicators and content recommendation schemes. Prior research on how people use this information is covered in this section.
2.4.1 Content Ratings

While little has been investigated about knowledge management systems and content rating schemes, research examining other rating schemes has found that negative ratings of sellers are highly influential and detrimental to the final bid price in eBay auctions (Standifird 2001). Knowledge system ratings and other information about system content are cues that persuade users to select and use certain content. To better understand how knowledge system content ratings might influence decisions to select and use content, the literature on the persuasive effects of information is explored next.

A theoretical model often used in the persuasion literature is the elaboration-likelihood model (ELM), which says the amount of thought the message receiver devotes to a message (e.g., in this study, a message is an item listed in the search results along with its rating and other information) is the primary determinant of which specific message cues (e.g., own judgment of item quality and rating value) drive attitude change (e.g., selection and use of an item from the list) and what processes cause cues to influence this change. The high end of the elaboration continuum is based upon diligent consideration of relevant information and corresponds to the central route to persuasion. The low end is based on the receiver associating an attitude with some positive or negative cue and represents the peripheral route to persuasion. Another model used is the heuristic-systematic model (HSM), which says systematic processing, where the message receiver accesses and scrutinizes all available information relevant to the judgment task (e.g., considers own judgment of content quality and rating value), is different from heuristic processing, where the message receiver only uses a subset of the available information and then applies basic inferences (e.g., follow the advice of experts and do not rely on own judgment of quality) to complete the judgment task. While not explicitly tested in this study, ELM and HSM1 suggest individuals process ratings using diligent consideration through the central route and process credibility indicators and content recommendations using an attitude association through the peripheral route of persuasion.

Research using ELM and HSM has specifically tried to determine what heuristics people employ when not diligently processing information. With little thought to the main message content (e.g., own judgment of content quality), group opinions operate as simple cues, where the group, not the quality of the message, influences people, and heuristics such as "consensus implies correctness" are used (Maheswaran and Chaiken 1991). However, with the opportunity to think about the main message content, people who are presented with group opinions generate explanations as to why those opinions were expressed, causing them to focus only on supporting evidence and to change their own attitudes to agree with the group (Petty and Cacioppo 1981).
1 ELM and HSM have been used to examine the persuasive effects of numerous communication variables, including: source credibility (Ratneshwar and Chaiken 1992), source attractiveness (Petty, Cacioppo and Schumann 1983), rhetorical questions versus direct statements (Munch and Swasy 1988), implied versus stated conclusions (Kardes 1988), multiple versus single message execution (Schumann, Petty and Clemons 1990), visual message elements (Miniard, Bhatla, Lord, Dickson and Unnava 1991), message repetition (Batra and Ray 1986), and comparative versus non-comparative message claims (Droge 1989).

These findings suggest ratings may not be processed diligently, but could be viewed as group opinions and processed using the heuristic that consensus implies correctness. Limited research has addressed the persuasive effects of specific knowledge system rating information or the heuristics that users utilize when selecting and using knowledge system content. Nonetheless, for this study, ELM and HSM suggest differences exist in how rating information, as opposed to credibility indicator and content recommendation information, may influence user decisions of knowledge system content usage.

2.4.2 Credibility Indicators

This section discusses the literature on decision theory, which is focused on how people use information in decision-making. First is a discussion of sample size, then source expertise.

2.4.2.1 Sample Size

Empirical studies have mixed findings about whether people can adequately use credibility indicators like the number of raters submitting ratings aggregated into the reported rating level within knowledge systems (i.e., called sample size, where a larger sample size suggests higher credibility). Studies show sample size is usually ignored in decision-making (Nelson, Bloomfield, Hales and Libby 2001; Griffin and Tversky 1992; Tversky and Kahneman 1974). Griffin and Tversky (1992) distinguish information according to two characteristics: strength and weight (i.e., sample size). In their terminology, the strength of information is the degree to which it appears favorable or unfavorable. The weight of evidence is its statistical reliability. They provide evidence that people tend to pay too much attention to strength and not enough to weight (i.e., sample size). While these and other studies cover contexts that include narrow and simple domains (e.g., information about coin flips), they provide insights on how information is used in decision-making that can guide predictions for this study.

Other relevant studies have illustrated that making use of sample size is conditional on the setting examined. Settings where people used sample size in decision-making include when the decision made involved determining how often something happened versus what was the average outcome of a situation (Sedlmeier and Gigerenzer 1997, 2000; Keren and Lewis 2000; Sedlmeier 1998) and when the tasks involved determining a cause-effect relationship (Van Overwalle and Van Rooy 2001). These studies suggest a trigger in the task setting causes the decision maker to use additional information (i.e., sample size). Limited empirical evidence exists examining whether and how knowledge system users utilize sample size (i.e., the number of raters) when deciding whether to rely or not on ratings of knowledge system content.
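As a purely illustrative aside (not part of the original argument), the normative link between rater sample size and the weight a rating deserves can be expressed as the statistical reliability of the reported average: if individual ratings vary around the true quality level with an assumed standard deviation of sigma, the standard error of the mean rating shrinks with the square root of the number of raters n.

$$
\mathrm{SE}(\bar{r}) = \frac{\sigma}{\sqrt{n}},
\qquad \text{e.g., with } \sigma = 1.0:\quad
\mathrm{SE}_{n=3} \approx 0.58, \qquad \mathrm{SE}_{n=100} = 0.10 .
$$

Under these assumptions, a rating averaged over 100 raters is a much less noisy signal of content quality than one averaged over 3 raters, which is the sense in which a larger rater sample size should lend the reported rating level greater credibility.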
2.4.2.2 Source Expertise

In addition to rater sample size, rater expertise (i.e., the percentage of raters designated as experts in the content topic) presented by the knowledge system aids users in determining rating credibility by providing insight into the raters' authority, competence, and reliability (Fritch and Cromwell 2001; Flanagin and Metzger 2000). Typical demographic data provided by a knowledge system about raters includes length of membership in the electronic community, education credentials, hierarchical position, or the business subunit the rater is assigned to in the firm (Thompson, Levine and Messick 1999; Davenport and Prusak 1998). System users may rely more on ratings provided by those considered experts versus non-experts in the topic (Strasser, Stewart and Wittenbaum 1995; Stewart and Strasser 1993). However, people are known to assess the expertise of themselves or others inaccurately (Kennedy and Peecher 1997; Koriat 1993), leaving the question of whether they will accept and use a reported level of expertise of a group of raters. Also, users might rely on ratings when judging content quality even when raters mis-rate the content.

The source credibility literature provides additional background into people's perceptions of the expertise of raters. Source credibility is defined either as beliefs about the source's character (i.e., perceived social status) or about the source's competence (i.e., perceived expertise) and is shown to have an impact on the receiver (Ilgen, Fisher and Taylor 1979; Coleman and Irving 1997). Source credibility has been studied in many information environments, including commercial lending (Beaulieu 1994), earnings forecasts (Hirst, Koonce and Miller 1999), auditing (Beaulieu 2001), on-line support groups (Wright 2000), advertising (Settle and Golden 1974), and employer feedback (Levy, Albright, Cawley and Williams 1995). In the knowledge system context, these findings suggest users will utilize ratings more when high source expertise is present than when low source expertise is present. Nonetheless, limited research exists examining whether and how knowledge system users utilize source expertise (i.e., the percentage of raters who are experts) when deciding whether to rely or not on ratings of knowledge system content.

2.4.3 Content Recommendations

Collaborative filters are computer algorithms that recommend content for users to select by identifying users whose choices of content are similar to those of a given individual and recommending content those similar users have selected (Ansari, Essegaier and Kohli 2000; Balabanovic and Shoham 1997). Collaborative filtering could be used to better support the search for high quality knowledge content (Ansari, Essegaier and Kohli 2000; Balabanovic and Shoham 1997). Although collaborative filtering cannot recommend entirely new content, it does incorporate user preference similarities across individuals (Ansari, Essegaier and Kohli 2000). For example, collaborative filter recommendations are used by amazon.com and barnesandnoble.com to recommend books, CDs and movies on the basis of the preferences of their other customers (Ansari, Essegaier and Kohli 2000). Little research has been performed on the behavioral effects of collaborative filter recommendations on decision-making.

CHAPTER 3

3. THEORY DEVELOPMENT

This section begins with a definition of content quality, followed by a description of content ratings, credibility indicators and content recommendations in the context of knowledge systems.
Subsequently examined is how content ratings are expected to affect the decision process of selecting and using knowledge content. Finally, a research model is developed with specific hypotheses to be tested regarding how credibility indicators and content recommendations help or mislead users in the decision process of selecting and using knowledge content.

3.1 Definition of Content Quality

High quality in knowledge system content can be defined as work products that are informative, helpful, useful, desirable, meaningful, good, or significant. When content varies along these dimensions, using the content with the highest quality is desired. These characteristics have been examined as dimensions of information in the information systems (Gallagher 1974; Swanson 1974; Zmud 1978), consumer research (Wilton and Myers 1986), and management literatures (Moenaert, Deschoolmeester, Meyer and Souder 1992). More specifically, from the information systems perspective, Zmud (1978), drawing on Swanson (1974) and Gallagher (1974), defined four dimensions of information: 1. significance, usefulness or helpfulness; 2. accuracy, factualness, and timeliness; 3. quality of format or physical presentation and readability; and 4. meaningfulness or reasonableness.2

2 Swanson (1974) identified the following items related to an evaluation of information received by a system user: timely, relevant, unique, accurate, instructive, concise, unambiguous and readable. Gallagher (1974) used a scale of whether information is: informative, helpful, useful, desirable, meaningful, good, relevant, important, valuable, applicable, necessary, material, responsive, effective, and successful. Also, Larcker and Lessig (1980) summarize these measures into perceived importance and perceived usableness of information.

When knowledge system users perform searches, all the characteristics listed above are pertinent in judging system content. System users utilize ratings as a cue regarding the level at which system content was informative, helpful, useful, desirable, meaningful, good, and significant (Gallagher 1974; Zmud 1978), which, for purposes of this study, is how content quality is defined. In this study, content quality is examined while all other information characteristics were held constant. For example, the timeliness of system content was held constant by dating every item within the last year, so ratings should not have been perceived as cues about whether contents were timely. Another example is that all system content was related to the task subject matter and hence relevant to the task, so ratings should not have been perceived as cues about whether contents were relevant.

3.2 Content Ratings, Credibility Indicators, and Content Recommendations

Often, those who have utilized knowledge system content for a particular task are asked to evaluate and provide a rating of that content. The system then aggregates and reports the average of all submitted ratings for that content. Content ratings reported in knowledge systems have several inherent characteristics, as shown in Table 3.1. Ratings can be described by their level, which should indicate the level of content quality (e.g., 1 = worthless, 3 = moderately useful, through 5 = highly useful), strength or extremeness (e.g., considered strong if the rating is at the scale ends [rating = 1 or 5] versus weak if the rating is in the middle [rating = 3]), and scale type, which can be continuous or dichotomous. The last two characteristics are not examined in this study.
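To make the reporting mechanics concrete, the following minimal sketch (illustrative only; the names and data structure are assumptions, not the system used in this study) shows how a knowledge system might compute the reported rating level together with the two credibility indicators examined here, rater sample size and rater expertise:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    value: int        # 1 (worthless) through 5 (highly useful)
    is_expert: bool   # rater classified as an expert in the content topic

def rating_summary(ratings: list[Rating]) -> dict:
    """Aggregate individual ratings into the level and credibility
    indicators a knowledge system might display next to a content item."""
    if not ratings:
        return {"level": None, "sample_size": 0, "pct_experts": 0.0}
    level = sum(r.value for r in ratings) / len(ratings)            # reported rating level
    pct_experts = 100.0 * sum(r.is_expert for r in ratings) / len(ratings)
    return {
        "level": round(level, 1),           # e.g., 4.0 on the 1-5 scale
        "sample_size": len(ratings),        # credibility indicator: number of raters
        "pct_experts": round(pct_experts),  # credibility indicator: rater expertise (%)
    }

# Example: three raters, one of whom is an expert.
print(rating_summary([Rating(5, True), Rating(4, False), Rating(3, False)]))
# {'level': 4.0, 'sample_size': 3, 'pct_experts': 33}
```

Under such a scheme, search results can then be rank-ordered by the reported level, with the rater count and expert percentage displayed alongside as cues to how much weight that level deserves.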
Another characteristic of ratings is that ratings can be either intentionally or unintentionally Larcker and Lessig (1980) summarize these measures into perceived importance and perceived usableness of information. 21 incorrect because those supplying the ratings may manipulate them or use a different context when assessing the rating level. Thus, while not reported by knowledge systems, ratings may be on a continuum, which is the degree of accuracy in reflecting actual content quality (e. g., if they are accurate, ratings = 5 and content is of high quality versus if they are inaccurate, ratings = l and content is of high quality). Table 3.1. Characteristics of Content Ratings, Credibility Indicators and Content Recommendations (shaded rows refer to characteristics covered in this study) Characteristic Description Predicted Behavior Effect Content Ratings Level Reflects level of content quality (i.e., l = worthless, 3 = moderately useful, through .5 = highly usefuig If an item is rated a l (5), then it will be ignored (used). Degree of accuracy Degree to which rating level accurately reflects the content quality level. When ratings are accurate (i.e., rating is 5 and content is of high quality), decision-making performance should be higher. Strength (not examined in this study) Reflects the strength (i.e., extremeness) of content quality. Strong if rating is at scale ends (rating = l or 5) versus weak if rating is in the middle irating = 3). Strong ratings will be more quickly Judged as use or ignore while weak ratings will take some effort to determine whether to use or ignore. Scale type (not examined in this study) Continuous versus dichotomous categorical. Research suggests for evaluation scales should be continuous in order to capture weak assessment levels. Continuous scales will be more trusted than dichotomous categorical due to their granularity and assumed geater precision. Credibility Indicator Rater sample size (i.e., number Discloses the number of users providing More (fewer) raters that rated the of users providing ratings) ratings that were aggregated into final content, the more (less) credible the rating level provided. Could be high rating level is assumed to be. (i.e., 100 users) or low (i.e., 3 users). Rater expertise Percentage of raters providing ratings More reliance should be placed on considered experts in content topic. ratings provided by experts than non-experts. A Text explanations (not examined in this study) Raters provide explanations to substantiate the rating level they chose. Explanations provide a rationale for rating levels chosen and should shed light on the appropriate rating level for the content. Consistency (not examined in this study) Whether the aggregated rating level provided comprises the average of all the same level or a wide dispersion of levels Greater variance in ratings reported should reduce the reliance placed on the rating values reported. 3 A low rating (rating = 1) means others found content a waste of time. This could be because the information contained in the content is erroneous or misleading but it could also be because the information was useless, basic or too general to be useful. 
Content Recommendations

Collaborative filter sophistication. Description: Highly sophisticated collaborative filters refer someone selecting a high (low) quality item to other high (low) quality content; low sophistication collaborative filters refer someone selecting a high (low) quality item to other content that is low (high) quality. Predicted behavior effect: If high (low) quality content is selected in the first place, better filters direct system users to other high (low) quality content, supporting them in getting their task done more (less) effectively and efficiently; if high (low) quality content is selected in the first place, worse filters direct users to low (high) quality content.

To provide additional insight into the credibility of rating levels, the underlying characteristics of rating credibility can also be reported, as shown in Table 3.1. Examples of credibility indicators include: rater sample size, which discloses the number of raters providing the ratings that were aggregated into the final rating levels provided; rater expertise, which provides the percentage of those submitting ratings who are classified within the firm as experts in the content topic; text explanations,4 which substantiate the reasoning behind the rating levels chosen; and consistency, which reflects the degree of rating dispersion or variance around the aggregated rating value (i.e., if the rating provided = 3, does it comprise all 3's or does it average equal numbers of 1's and 5's). The last two rating credibility characteristics are not covered in this study.

While not intended to directly deliver insights into the credibility of ratings, system-generated content recommendations, also shown in Table 3.1, are provided to help users identify quality content and may suggest whether or not to rely on ratings in deciding content quality. Content recommendation algorithms recommend content by identifying users whose choices of content are similar to those of another user and recommending content the other user has selected (Balabanovic and Shoham 1997; Ansari, Essegaier and Kohli 2000). One limitation of examining content recommendations is that their main purpose is to help users find additional relevant system content, not necessarily to help them find the highest quality content, which is the focus of this study. Nonetheless, while providing content recommendations is not a widely applied concept in knowledge systems, large professional services firms are considering using them for their intra-organizational knowledge systems, and they are a highly accepted search tool on the Internet (Balabanovic and Shoham 1997; Ansari, Essegaier and Kohli 2000). This research is an initial attempt to better understand how content recommendations help users in the knowledge system environment.

4 See research by Gregor and Benbasat 1999.

The collaborative filter algorithms can vary in sophistication: highly sophisticated collaborative filters consistently refer someone selecting content of a certain quality level to other content of like quality based on other similar users' selections. Low sophistication collaborative filters, however, are not as refined and are less able to develop a strong linkage in recommending content. This causes less sophisticated collaborative filters to be inconsistent in matching the quality levels of original and recommended content, which, in turn, causes less consistency in decision-making.
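The user-based recommendation logic described above can be sketched in a few lines. The fragment below is illustrative only: the toy selection matrix, the cosine-similarity measure, and the parameter choices are assumptions for exposition, not the algorithm of any system examined in this study.

```python
import numpy as np

# Rows = users, columns = knowledge items; 1 = the user selected that item.
# Toy data only; a real system would use logged selection histories.
selections = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0],
])

def recommend(user, k=2, n_items=2):
    """Recommend items chosen by the k users most similar to `user`."""
    norms = np.linalg.norm(selections, axis=1)
    sims = selections @ selections[user] / (norms * norms[user] + 1e-9)
    sims[user] = -1  # exclude the user being served
    neighbors = np.argsort(sims)[::-1][:k]
    # Score items by how often the similar users selected them,
    # then drop items this user has already selected.
    scores = selections[neighbors].sum(axis=0).astype(float)
    scores[selections[user] == 1] = -1
    return np.argsort(scores)[::-1][:n_items]

print(recommend(user=0))  # items favored by the users most like user 0
```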
Thus, less sophisticated collaborative filter could refer someone selecting high quality content to content that is lower in quality even though it is still based on other similar users’ selections. Nonetheless, it is likely that system users will learn through experience whether filter sophistication is high or low based on evaluating the quality of content recommended. Filtering systems typically do not inform users about underlying algorithms or sophistication levels (Ansari, Essegaier and Kohli 2000). 24 3.3 The Effect of Content Ratings on the Knowledge Content Selection and Use Process When using the knowledge system, users first perform a content-based search using keywords of the task topic, and the knowledge system returns a list of content matching the keywords as search results (Brajnik, Mizzaro, Tasso and Venuti 2002). This can be a long list, given the large amount of content that may be stored in the knowledge system (Davenport and Prusak 1998). System users hold prior beliefs that screening content based on ratings reduces the amount of searching and increases the chance of finding high quality contents. Relying on prior beliefs to follow ratings is beneficial when ratings are highly accurate because ratings will guide users to high quality content. However, following less accurate ratings causes system users to select and evaluate highly rated but low quality content, increasing the chances of low task performance outcomes. This proposition is straightforward and will be examined as a baseline condition. Knowledge system users will typically have low subject matter . experience, which reduces the level of certainty about what is high quality content, and causes them to rely more on ratings, even when they should not. Thus, to help people decide whether to rely on ratings or not in content quality decisions, knowledge systems offer credibility indicators and/or content recommendations, and their influence on decisions is further discussed in the next section. 3.4 The Effect of Credibility Indicators and Content Recommendations on Knowledge - Content Selection and Use Credibility indicators purport to signify and content recommendations may imply how believable content ratings are and suggest whether users should rely on or discount the use of ratings. For example, if there are many (few) raters or experts who submitted 5 Based on interviews with those using knowledge systems in large consulting firms. 25 ratings, this should indicate the rating level is more (less) credible. Another example is that users may believe ratings associated with content that is recommended by filters with highly (low) sophisticated algorithms are more (less) credible. Thus, knowledge system users may use credibility indicators and content recommendations as input to making a decision on whether to rely on rating levels or not. Then, reliance on rating levels affects judgments on content quality, which determines what content is reviewed and selected for use in the task. The research model being examined is found in Figure 3.1. Figure 3.1. 
Knowledge System Content Rating Research Model

[Figure 3.1 depicts the research model: the level of rating accuracy influences task decision quality, and that relationship is moderated by the credibility indicators and the content recommendation characteristic examined here, namely rater sample size (H1a, b & c), rater expertise (H2a, b & c), and filter sophistication (H3a, b & c).]

3.4.1 Credibility Indicators and Content Recommendations Influence on Rating Judgment

The two salient credibility indicators potentially used by system users that will be examined are rater sample size and rater expertise, and the one content recommendation dimension that will be examined is filter sophistication. First, the proposed logical cognitive process of how system users utilize rater sample size in decision-making is discussed. Then, since this process is assumed to be similar regardless of which credibility indicator or content recommendation item is provided, only the unique qualities of rater expertise and filter sophistication, and not the entire process, are discussed. At the end of the discussion for each item are the formal hypotheses.

3.4.2 Rater Sample Size

The proposed process of how system users utilize rater sample size in decision-making contains two main arguments. The first is that inaccurate ratings, more than accurate ratings, should prompt system users to attend to rater sample size, and when rater sample size is small, this may cause users to rely less on ratings. Thus, the effect of rater sample size on decision-making should be greater when ratings are inaccurate than when they are accurate (i.e., in Figure 3.2 this is represented as a comparison of magnitudes: |u1 - u2| < |u3 - u4|). The second argument is that less reliance on ratings is expected to reduce decision quality when ratings are accurate and to improve it when ratings are inaccurate (i.e., in Figure 3.2 this is represented as a comparison of directions: [u1 - u2] > 0 and [u3 - u4] < 0). Next is a more detailed discussion of the first argument, followed by a more detailed discussion of the second argument, then the formal hypotheses.

Figure 3.2. Interaction Effect of Credibility Indicators/Content Recommendations and Content Rating Accuracy on Task Decision Quality

[Figure 3.2 plots task decision quality (low to high) against whether content ratings accurately or inaccurately reflect content quality. The line for high rater sample size, rater expertise, and filter sophistication runs from u1 (accurate ratings) to u3 (inaccurate ratings); the line for low rater sample size, rater expertise, and filter sophistication runs from u2 (accurate ratings) to u4 (inaccurate ratings).]

H1a, H2a, H3a: (u4 - u3) - (u1 - u2) > 0
H1b, H2b, H3b: u4 - u3 > 0
H1c, H2c, H3c: u1 - u2 > 0

The first argument suggests the effect of rater sample size on decision-making should be greater when ratings are inaccurate than when they are accurate. Knowledge system users initially review highly rated content. When the ratings are highly inaccurate, ratings direct system users to highly rated but low quality content first. They are expected to review the content, question its quality, and try to determine whether the ratings are credible. Unexpected inaccuracies in ratings, and uncertainty in judgments caused by low subject matter experience, should prompt a search for reasons why an inaccuracy might happen (Wong and Weiner 1981; Weiner 1985).
In other settings, the on a facet of the process may have prompted the use of sample size information; for example, when determining how often something happened versus what the average outcome of a situation was, when the tasks involved determining a cause in a cause-effect relationship, when the decision was based on rational not intuitive thought processing, and when the context was familiar versus unfamiliar (Kunda and Nisbett 1986; Denes- 28 Raj and Epstein 1994; Epstein 1994; Gigerenzer and Todd 1999). Consistent with these studies, the unexpected inaccuracy in ratings in the current study’s setting may cause knowledge system users to turn to rater sample size for help in determining whether ratings are credible and in explaining why they might not be (Rhine and Kaplan 1972; Stiff 1994; Sedlrneier and Gigerenzer 1997, 2000; Van Overwalle and Van Rooy 2001). Although rater sample sizes are always normatively relevant, users are more likely to use them when prompted by the conflict between ratings and the user’s initial assessment of content quality they purport to suggest. Inconsistencies in how many users provide ratings (i.e., rater sample size) for different content results because the typical knowledge systems allow anyone in the company using the content to rate it. More users submitting ratings about content should indicate the ratings are more credible and should be relied on in making content selection decisions. When rater sample size is a high value provided along with either a high or low rating level, this should suggest more people agree on the rating level and believe the rating indicates the content quality. With low subject matter experience, knowledge system users may rely on the judgment of others and rely more on ratings when many other users agree on that rating. High rater sample size should promote reliance on ratings and not on own judgments. High rated but low quality content could be accepted or may not be evaluated thoroughly; as a result, searching for a better answer is discontinued. However, when rater sample size is low, knowledge system users may discount rating levels and decide the quality of content by reviewing search results individually. When ratings are inaccurate, discounting ratings may be beneficial. The unexpected inaccuracy should prompt using the low credibility indicators, which suggest discounting ratings, so 29 knowledge system users should search and evaluate more content until finding a higher quality solution. Reducing reliance on inaccurate rating levels improves the chances that ratings will not influence what content is used in task solutions. However, studies indicate people get frustrated before reviewing all content and low subject matter experience users cannot always judge content correctly, so while performance quality improves it may not reach the highest possible level (Jansen, Spink and Saracevic 2000; Ford, Miller and Moss 2001) When ratings accurately reflect content quality, knowledge system users select highly rated, high quality content first and evaluate the content as high quality. Since ratings are accurate, knowledge system users are not expected to turn to rater sample size for causal explanations. Given uncertainty in judgments due to low subject matter experience, high rater sample size may reinforce beliefs of high content quality. In this case, knowledge system users are not expected to question the ratings and should review . 
the highest rated content first then select and use high quality content to solve the task. . Low rater sample size should suggest the rating level is less credible because less input is available most likely causing knowledge system users to determine content quality using their own judgment When rater sample size is low and ratings are accurate, knowledge system users may start with highly rated, high quality content first and evaluate the content as high quality. Some users may believe low rater sample size indicated non-credible ratings (i.e., inaccurate ratings) and may perform additional selection and evaluation of search results. However, many knowledge system users should realize the first content reviewed was rated highly and was high quality or they may never pay attention to sample size since they were not prompted to do so and in 30 either case they may ignore rater sample size. Thus, when ratings are accurate, knowledge system users are expected to have slightly lower decision performance on average when rater sample size is low since some users will tend to rely less on ratings. The second argument suggests that less reliance on ratings is expected to reduce decision quality when ratings are accurate and to improve it when ratings are inaccurate. This argument suggests, given uncertainty in judgments, the perceived conflicts in rating levels and personal judgments of content quality should prompt knowledge system users to use rater sample size for help in determining why ratings may not be credible when ratings inaccurately more than when ratings accurately reflect content quality (Stiff 1994; Sedlrneier and Gigerenzer 1997, 2000). Thus, in the inaccurate ratings case users may be more likely to attend to rater sample size, while in the accurate ratings case they might not be. In the inaccurate ratings case, attending to low rater sample size may cause reliance on personal judgment resulting in improved decision quality. Meanwhile, since there is less conflict, doubt, and uncertainty when ratings are accurate, decision performance differences are expected to be smaller across rater sample size levels than when ratings are inaccurate. The formal hypotheses for rater sample size interactions and planned contrast predictions are (see Figure 3.2 for a graphical representation of this set of hypotheses): o Hla: The difference in decision quality between being provided high and low rater sample size will be greater when the content ratings are inaccurate than when the content ratings are accurate. 0 Hlb: Given low subject matter experience, decision quality is higher when the rater sample size is low than when the rater sample size is high when content ratings are inaccurate. 31 o ch: Given low subject matter experience, decision quality is higher when the rater sample size is high than when the rater sample size is low when content ratings are accurate. 3.4.3 Rater Expertise Another indicator of rating credibility is the percentage of raters deemed experts in that content’s topic. If knowledge system users perceive the users who are submitting ratings about content to be more expert than their own expertise level, they are more likely to accept and rely on the ratings (i.e., ratings are considered more credible and should be relied on in judgments) (Wegner 1986; Thompson, Levine and Messick 1999). Information thought to come from experts should have a greater impact on decisions because it is thought to be more authoritative (Slater and Rouner 1992). 
Evidence indicates that expertise of the source is important to perceptions of the credibility of information (Hovland and Weiss 1951; Bimbaum, Wong and Wong 197 6; Olson and Cal 1984) However, when recipients disagree with a statement from a highly credible expert, they may reduce their respect for the source, downgrade the importance of the statement, rationalize the disagreement with excuses for the source or change their own beliefs to agree with the source (Rhine and Kaplan 1972). Studies have found these different reactions to expertise disclosures depend on features of the decision process that encourage the use of information. To understand when recipients changed their own beliefs to agree with the source, a closer look at the decision process is needed. In each study where an expert source changed the person’s beliefs, a facet of the process prompted the desire to change one’s own beliefs to match the experts. These facets include a low level of personal expertise, highly relevant information from the source for 32 the decision task, an expert taking a position opposed to his/her own best interest, and a large amount of disagreement between the expert’s statement and the recipients’ beliefs about the statement (Walster, Aronson and Abraharns 1966; Beach, Mitchell, Deaton and Prothero 1978; Slater and Rouner 1992; Stiff 1994). The specific facets of the decision process discussed previously in this study expected to influence whether people use rater expertise are inaccurate ratings. The formal hypotheses for rater expertise interactions and planned contrast predictions are (see Figure 3.2 for a graphical representation of this set of hypotheses): o H2a: The difference in decision quality between being provided high and low rater expertise will be greater when the content ratings are inaccurate than when the content ratings are accurate. 0 H2b: Given low subject matter experience, decision quality is higher when rater expertise is low than when rater expertise is high when content ratings are inaccurate. o ' H2c: Given low subject matter experience, decision quality is higher when rater expertise is high than when rater expertise is low when content ratings are accurate. 3.4.4 Filter Sophistication Collaborative filters objectively determine what content to recommend based on data sets of users’ preferences. Their recommendations attempt to guide knowledge system users in managing the long list of search results from their content-based keyword query of the knowledge system (Brajnik, Mizzaro, Tasso and Venuti 2002). Thus, given low subject matter experience, knowledge system users may seek additional objectively derived guidance in finding high quality content for use in their task (Yao 1995). However, collaborative filters do not disclose how much uncertainty is involved, the 33 reasons for their recommendations, or the level of sophistication in algorithms used (Ansari, Essegaier and Kohli 2000). While system users prefer high quality content, those whose preferences are input into the algorithms may have different understandings of what is high quality content. If knowledge system users do not understand how recommendations are derived, they may ignore content recommendations. Once again the specific facets of the decision process discussed previously in this study expected to influence whether people use filter sophistication are inaccurate ratings. 
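Before turning to the formal hypotheses, the sophistication manipulation can be made concrete with a small sketch. This is a simplified illustration under an assumed two-level quality coding (the function and argument names are hypothetical): a high-sophistication filter recommends an item of matching quality, while a low-sophistication filter recommends the opposite.

```python
def recommend_quality(original_quality, sophistication):
    """Return the quality level of the recommended item.

    original_quality: 'high' or 'low' for the item the user selected.
    sophistication:   'high' filters match quality; 'low' filters reverse it.
    """
    flip = {"high": "low", "low": "high"}
    return original_quality if sophistication == "high" else flip[original_quality]

print(recommend_quality("high", "high"))  # 'high' -- like-for-like referral
print(recommend_quality("high", "low"))   # 'low'  -- inconsistent referral
```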
The formal hypotheses for filter sophistication interactions and planned contrast predictions are (see Figure 3.2 for a graphical representation of this set of hypotheses): o H3a: The difference in decision quality between being provided recommendations from a collaborative filter that is low and high in sophistication will be greater when the content ratings are inaccurate than when the content ratings are accurate. 0 H3b: Given low subject matter experience, decision quality is higher when the collaborative filter sophistication is low than when the collaborative filter sophistication is high when content ratings are inaccurate. o H3c: Given low subject matter experience, decision quality is higher when the collaborative filter sophistication is high than when the collaborative filter sophistication is low when content ratings are accurate. 34 CHAPTER 4 4. RESEARCH METHODOLOGY To test the hypotheses, four inter-related experiments were conducted. The first experiment tested the baseline condition for whether content ratings without credibility indicators influenced content use in the task solution. The subsequent three experiments tested whether providing credibility indicators impacted how ratings affected content use. Thus, the second experiment investigated the first set of hypotheses (Hla-ch) and studied whether providing sample size along with ratings was important. The third experiment looked at the second set of hypotheses (H2a-H2c) to find out if providing the percentage of raters who were experts in the content along with ratings mattered. Finally, the fourth experiment tested the third set of hypotheses (H3a-H3c) and considered the influence of collaborative filter recommendations, along with ratings, on content use. The following is a description of the materials, participants and power analysis, and procedures. The section concludes with a separate discussion of the research design and measures that are the same across and unique to each experiment. 4.1 Participants and Power Analysis Participants for the study were undergraduate students taking a business information systems and technology course open only to juniors or seniors in a large Midwestern university. Several steps were taken to ensure the participants selected were representative of the population of interest—juniors and seniors performing a first year consultant level task, verbal tutorial consistent with first year consultant training, and selection of a work plan topic (i.e., data modeling and database design) covered in their current coursework. During pilot tests of the experiment, changes were made to 35 experimental materials and the tutorial to improve subjects’ understanding of the task involved. Subjects were randomly assigned to treatment conditions to help alleviate the possibility of individual characteristics affecting the results; however, specific individual characteristics thought to influence task performance were controlled (see Controls and other manipulations checks section below). Subjects received course credit (1.5%) for their participation. In order to ensure best efforts, incentive pay was provided based on both task performance quality and efficiency. A power analysis was conducted to determine the sample size needed to detect significant effects in the population, given the experimental design. Based on an estimated population medium effect size ofR2=0. 
15, for power of 90%, twenty-one participants were needed for each of the fourteen separate experimental cells (294 in total) (Cohen and Cohen 1983) (see Appendix A for experimental cells). Based on this sample size, the chance of detecting a significant effect of the experimental manipulations when one exists is approximately 50%. 4.2 Experimental Materials The following section describes the experimental materials used in this study which comprise the computerized consulting cases, knowledge system work plans, and a description of the task type. 4.2.1 Computerized Consulting Cases With external validity to the consulting industry in mind, a simulated knowledge system was designed for subjects to perform a consulting related task. Since the subject population was junior and senior undergraduates, the experimental task was designed as a 36 typical exercise that a consultant might perform during his/her first year in a firm. The experimental task was to select and use old work plan line items from a knowledge system to construct a new work plan to build a data model and design a database for a new client. Building a data model and designing a database were topics the students covered in their current classes increasing familiarity with terminology and work plan line items. Thus, subjects have some, but limited experience with the appropriate steps to follow in a data modeling and database design project. All subjects across experimental conditions were provided a verbal ten-minute tutorial on building work plans and on using the computerized introduction materials, consulting case, knowledge system search results, answer spaces, and post-task questionnaires. The tutorial emphasized the layout of the work plans, the difference in work plan quality levels, and how to combine work plans. The introduction materials provided a review of data modeling and database design, the constructiOn of work plans for client jobs, and the layout of the knowledge system including the ability to pull up sample work plans not used in the consulting case. After reading the introduction materials, all participants were provided the consulting case. Then they were instructed to access and review knowledge system search results provided and to select line items of their choice to be transferred into an answer space to build, edit, and submit their answer. Subjects were told their manager asked them to build a work plan by re-using old knowledge system work plans and that the characteristics of a “good” work plan have the following: supervisor hours for all important tasks, consultant level(s) assigned to all project steps and informative/non- vague project steps (see Appendix B for screen prints of on-line experimental materials). 37 Rating accuracy was operationalized as ratings that either accurate or inaccurate (i.e., matched or mismatched with actual content quality). The manipulations for each experiment occur only in the knowledge system search results screen as ratings that are accurate or inaccurate and credibility indicators or content recommendations differ between subjects (see Appendix C for screen prints of manipulation screens). Subjects were not told explicitly whether ratings are accurate or inaccurate, while they were told whether credibility indicators or filter sophistication is high or low. 4.2.2 Knowledge System Work Plans Knowledge systems work plans were designed to represent hypothetical work plans fiom work performed by colleagues employed by the subjects’ hypothetical firm. 
These items were created using identical fonts, layouts, and lengths (i.e., work plan all had six steps), and were based on business world knowledge system work plans provided by practicing consultants. All work plans listed project steps and consultant rank and » varied in the level of quality. The highest quality items (i.e., 100% quality) were designed as follows (see Appendix D for work plans): 0 project steps were based on the steps identified in an undergraduate information systems text book (Whitten, Bentley and Dittrnan 2000) for building a work plan for data modeling and database design tasks, and o consultant ranks for each project step were set based on feedback from practicing consultants. Lower quality content items were created by changing the highest quality items in three ways (referred to below as the “three quality characteristics”): (1) deleting supervisor hours for many tasks needing supervision, (2) eliminating the assignment of any consultant level to a project step and (3) replacing project steps with uninformative/vague ones (see Rosenau 1998 and Murch 2001 for work plan design 38 guidance). These three changes were the characteristics highlighted to subjects to guide them in their selection and use of knowledge system items. Pilot tests of work plans with practicing consultants suggested these three criteria were sufficient to accurately drive quality judgments as each consultant was able to identify the highest quality work plan. Additional pilot tests with undergraduates suggested less consistency, but a high capability to identify high quality work plans. Fourteen work plans, which became the list of knowledge system search result items, were produced for how to do a data modeling project. These work plans varied in quality by changing the contents across the three quality characteristics: 1 item had none - of the characteristics, 6 items only included one of the characteristics, 6 items had combinations of two of the characteristics and 1 item included all three characteristics of , quality. Another fourteen items were produced of a database design work plan with the same quality distribution as the data modeling work plans. There were tWenty—eight items in total (see Appendix E for screen prints of all work plans). Four different orders of items to be listed as knowledge system search results were randomly generated. All participants across treatment conditions accessed the same set of twenty-eight items, which were provided in one of the four orders to preclude an order effect. 4.2.3 Description of Task Type McGrath (1984) defines three types of task for groups: idea-geneartaion, intellective and judgment. Based on McGrath’s (1984) definitions, idea generation is a collaborative task where individuals add ideas, intellective is a coordination task where individuals are trying to solve problems with correct answers, and judgment is a conflict resolution task where no correct answer exists and group consensus is necessary. While 39 the three types of tasks were originally defined for group interactions, they have been used in computer-human interaction settings, which the current task entails (Straus and McGrath 1994). In the current study, a correct (i.e., best) answer from the search results exists for building a new work plan for a data modeling and database design project. Thus, the current task most closely resembles the definition of an “intellective” task. 
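Referring back to the power analysis in Section 4.1, a calculation of this kind can be approximated with standard tools. The sketch below is illustrative only: it assumes the reported R-squared of .15 is converted to Cohen's f via f = sqrt(R2 / (1 - R2)) and uses the statsmodels power routines; it is not the exact tabled procedure from Cohen and Cohen (1983), so the resulting cell size may differ from the figure of twenty-one per cell reported above.

```python
from math import sqrt
from statsmodels.stats.power import FTestAnovaPower

r_squared = 0.15                               # assumed medium population effect size
effect_f = sqrt(r_squared / (1 - r_squared))   # convert R^2 to Cohen's f

# Solve for the total N needed for 90% power across 14 cells at alpha = .05.
analysis = FTestAnovaPower()
total_n = analysis.solve_power(effect_size=effect_f, alpha=0.05,
                               power=0.90, k_groups=14)
# Differences from the dissertation's figure can arise from the effect-size
# convention and the degrees of freedom assumed by this routine.
print(round(total_n), "participants in total across the 14 cells")
```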
4.3 Experimental Procedures

Experimental sessions were held at a pre-set location and at pre-set times in order to monitor participation. A ten-minute tutorial on the task of building work plans was administered. The experimental materials were programmed in HTML, ASP, and MS Office products and placed on a host computer so that subjects participated in the study via the Internet. Subjects were provided an individual identification number upon arrival at their experimental session. Controls were built into the program such that each identification number was granted one-time authorization to the cases, and once answers to each case and each screen of the questionnaire were submitted they could not be changed. The program allowed participants to return to and review the introduction materials while performing the case (see Appendix F for copies of administrative materials).

4.4 Design and Measures

This section discusses the experimental design, independent variables, dependent variables, process variables, and controls and other manipulation checks.

4.4.1 Design and Independent Variables

To check the baseline condition of the effect of rating accuracy on decision performance, the first experiment employed a two-level (content rating and content quality: accurate and inaccurate) between-subjects randomized design. Content ratings6 were operationalized as a reported rating value equal to five indicating the item is "highly valuable", four indicating the item is "somewhat valuable", two indicating the item is "somewhat worthless", or one indicating the item is "worthless" (a list of variables and their operationalizations is found in Table 4.1). A content rating value equal to three was not included, in order to improve the strength of the rating accuracy manipulation. Subjects viewed a list of items from the knowledge system where each item had accurate or inaccurate ratings.

6 The following is evidence that the scale used is consistent with the natural setting. One system explained the assessment process as "casting a vote...is entirely optional, if you think that the [item] is superb, you might rate it as a five star..., or if you think that it's unspeakably dismal, you might choose to rate [it] a single star" (http://www.allforums.net/forums/). Also, one large consulting firm asks "How would you rate this ...item? Best Item (5), Very Useful (4), Useful (3), Less Useful (2), and No Longer Useful (1)."

Table 4.1. Variables and Operationalization

Independent variables for H1-H3:
High rating accuracy. Accurate ratings, where work plan contents include the following number of quality characteristics: all three and rating = 5 (1 knowledge system item); two and rating = 4 (6 knowledge system items); one and rating = 2 (6 knowledge system items); and none and rating = 1 (1 knowledge system item).
Low rating accuracy. Inaccurate ratings, where work plan contents include the following number of quality characteristics: all three and rating = 1 (1 knowledge system item); two and rating = 2 (6 knowledge system items); one and rating = 4 (6 knowledge system items); and none and rating = 5 (1 knowledge system item).

Specific independent variables for H1. Subjects were told: "Across the system, Number of Raters is 3 to 97 depending on the item. The Number of Raters in your search results is LOW [HIGH] compared to the average for an item of 50."
Rater sample size high. The number of raters randomly assigned to a work plan item ranged from 93 to 97.
Rater sample size low. The number of raters randomly assigned to a work plan item ranged from 3 to 7.

Specific independent variables for H2.
Subjects were told: "Across the system, % of Raters Who are Experts is 4% to 92% depending on the item. The % of Raters Who are Experts for your search results is LOW [HIGH] compared to the average for an item of 48%."
Source expertise high. The percentage of raters who are experts, randomly assigned to each work plan item, ranged from 88% to 92%.
Source expertise low. The percentage of raters who are experts, randomly assigned to each work plan item, ranged from 4% to 8%.

Specific independent variables for H3. Subjects were told: "Recommendations from the system can exactly or not exactly match the quality of the original item. Recommendations from the system in your search results are known to [NOT] EXACTLY match in quality between items recommended and the original item."
Collaborative filter sophistication high. Recommendations of work plan items were provided by recommending an item that had the same level of quality.
Collaborative filter sophistication low. Recommendations of work plan items were provided by recommending an item that had the reverse level of quality.

To test H1a-H1c, the second experiment employed a two (content rating and content quality: accurate and inaccurate) by two (rater sample size: low and high) between-subjects7 randomized design. Rater sample size was operationalized as the "number of raters", where each knowledge system item was randomly assigned a value from 93 to 97 (3 to 7) for the high (low) condition. Sample size was allowed to vary slightly to maintain external validity. Even though the sample size ranges were kept narrow, the highest and lowest quality items were assigned the mid-point value for sample size of 95 (5) for the high (low) condition in order to prevent subjects from using extreme values to direct inferences of rating credibility. Narrow ranges for sample size are important because this study examines between-subjects treatment conditions of how the number of raters affects perceptions of rating credibility, not how the variance in the number of raters influences these perceptions. Subjects were told "across the system, Number of Raters is 3 to 97 depending on the item. The Number of Raters in your search results is LOW [HIGH] compared to the average for an item of 50." Subjects viewed a list of knowledge system items with accurate or inaccurate ratings and either 93 to 97 or 3 to 7 raters.

7 A between-subjects design is consistent with the natural setting, where people will have ratings from only a large or a small number of raters (experts), such as between departments or subject areas within a firm or between firms. A hypothetical example: when more work is being performed on a topic, it may be accessed and rated by more raters, while other content, related to work performed less often, will be used and rated less.

To test H2a-H2c, the third experiment employed a two (content rating and content quality: accurate and inaccurate) by two (percentage of raters who are experts: low and high) between-subjects randomized design. The percentage of raters who are experts was operationalized as "% raters experts", where each knowledge system item was randomly assigned a value from 88% to 92% (4% to 8%) for the high (low) condition. The highest and lowest quality items were assigned the mid-point value of 90% (6%) for the high (low) condition to neutralize the effect of the variance of the percentage of raters on inferences of rating credibility. Subjects were told the balance of raters were not experts in the item's topic.
Subjects were “across the system, % of Raters Who are Experts is 4% to 92% depending on item. The % of Raters Who are Experts for your search results is LOW [HIGH] compared to the average for an item of 48%.” Subjects viewed a list of knowledge system items with accurate and inaccurate ratings and between 88% to 92% or 4% to 8% raters who are experts. To test H3a-H3c, the fourth experiment employed a two (content rating and content quality: accurate and inaccurate) by two (collaborative filtering sophistication: low and high) between-subj ects randomized design. Collaborative filtering was operationalized by providing subjects referrals to other items under the heading “recommend also”. High (low) filtering sophistication was operationalized as a referral to another item of equal (unequal) quality, regardless of rating level. Subjects were told “Recommendations from the system can exactly or not exactly match the quality of the original item. Recommendations in your search results are known to [NOT] EXACTLY 43 match in quality between items recommended and the original item.” Subjects viewed a list of knowledge system items with accurate or inaccurate ratings and a note that the helpfulness of recommendations from the system in their search results do [NOT] EXACTLY match. All subjects received the same information at the beginning of the experiment. After reading the introduction materials and consulting case (i.e., task instructions), subjects activated the knowledge system search results screen. At this point, each subject viewed a different manipulated independent factor operationalized as discussed on the knowledge system search results screen depending on the case to which they were randomly assigned before the experiment began. 4.4.2 Dependent Variables Knowledge systems should support the joint objectives of a decision-maker to maximize decision quality and minimize effort (Todd and Benbasat 1992). Thus, for all four experiments, the dependent variable was a measure of task performance quality based on using the highest quality items. The study also measured task performance time as a potential control being examined during data analysis for its correlation with performance quality. Since a subject could trade off task decision quality for time, experimental performance incentives rewarded participants for both decision quality and time efficiency: 0 Task decision quality— The “best” answer is defined as a work plan submitted where its contents matched the contents of combining the two 100% quality items. Each subject’s score was calculated as the number of line items in the subject’s answer matching the line items in the “best” answer divided by the total number of line items in the “best” answer minus 75%8 of the number of line items included in the subject’s answer that were not found in the “best” answer, and 8 See further discussion in section 4.4.2.1. 0 Task decision time—measured as duration of time from when the participant accesses the case screen to when the participant submits an answer. 4.4.2.1 Scoring Procedures for Decision Quality The decision task was to create the best work plan for a new client given a list of old work plans in a search result from the company knowledge system. Subjects were told to develop the best work plan, based on criteria provided by their manager, they could as quickly as possible. 
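A minimal sketch of the task decision quality score is given below, following the per-line matching rule and the three-quarters-of-a-point penalty for lines beyond the 36-line benchmark that are elaborated in the remainder of Section 4.4.2.1. The function and variable names are hypothetical, and the sketch assumes line items can be compared as exact strings.

```python
def decision_quality(answer_lines, best_lines, penalty=0.75):
    """Score an answer against the 36-line 'best' work plan.

    One point per submitted line that appears in the best answer;
    if the answer is longer than the best answer, subtract `penalty`
    points for every line beyond that length.
    """
    best = set(best_lines)
    points = sum(1 for line in answer_lines if line in best)
    excess = max(0, len(answer_lines) - len(best_lines))
    return points - penalty * excess

# Hypothetical example: 30 of 34 submitted lines match the 36-line benchmark,
# so the score is 30 (no penalty, since 34 <= 36).
```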
Work plans varied in quality fi'om the most reliable and accurate (i.e., high quality content) to similar versions but lacking informative steps, personnel assignments, or enough senior time allocated (i.e., lower quality content). Thus, the highest quality work plan became the benchmark for scoring subjects’ answers to the task. There were 36 line items in the highest quality work plan, including 19 for data modeling combined with 17 for database design. Subjects could not add or delete text, but only choose line itemsfrom the work plans provided. The line items of each subject’s answer were compared to the 36 line items of the highest quality work plan. For every line item matching a line item in the highest quality work plan, subjects received one point. Subjects were told the best answer had between 26-50 line items. As long as the answer had less than 36 lines (i.e., the number of lines in the best answer), the final score was calculated as the total number of points earned by including line items that matched the highest quality work plan. However, if the subject’s answer had more than 36 lines, his/her final score was calculated as the total number of points earned minus three-fourths of a point for each line over 36. 45 A penalty was used to penalize those who “dumped” content into their answer without careful selection. However, including extra line items is not as egregious as leaving out important content as managers can always prune subordinates work easier than figuring out what is missing from it; thus, a penalty of <100% was used. A 75% penalty was selected because errors of commission affect work efficiency but not effectiveness (i.e., time is lost by the senior who must sift through work plan lines provided by the junior to determine what to use in the final work plan). Identical procedures were followed in scoring all subject’s answer regardless of treatment condition. The objectivity of the scoring procedure was enhanced by scoring answers without any indication of subject’s treatment condition. 4.4.3 Process Variables Content ratings, credibility indicators, and content recommendations were expected to influence what items subjects select as well as their judgments of work plan quality. Thus, in an exploratory nature, to understand item selection behaviors better, this study captured the “click stream” or item selection pattern of participants. The data was used to find patterns across experimental conditions for item clicked first, item clicked most often, number of items selected, and items used most often in answers. 4.4.4 Controls and Other Manipulation Checks The computer screens, settings, information, procedures and incentives were the same for all subjects, except for information related to manipulated independent variables. Thus, the environment and motivational influences were held constant across all subjects. While individual differences between subjects should be controlled by random assignment of subjects, some individual differences were deemed important to 46 control. Important individual differences in information processing in decision-making were shown to exist for gender and experience (N ewell and Simon 1972). Thus, gender was captured as a self-reported value and one-item measures regarding prior task and system use were included as a measure of prior experience for work plan design and knowledge system usage. 
Because subjects think the task involved using information provided by others, how much someone relies on the input from others to manage their actions may be important. Accordingly, six measures capturing propensity for self-monitoring were used (Snyder 1974; Snyder and Gangestad 1986). Finally, a person’s inherent trust in documented information on a computer screen could influence judgments and was measured based on modified versions of validated items for trust in on-line shopping (Borchers 2001; Cheung and Lee 2000) (a list of items measuring each control construct are in Table 5.7). To reduce order effects (i.e., order of knowledge system item presentation), items were randomized in four different sets of orders; however, order effects were also be tested. Manipulation checks include measures to determine whether content ratings, credibility indicators and content recommendations were attended to based on the different treatment conditions. 47 CHAPTER 5 5. ANALYTICAL PROCEDURES AND RESULTS The results of statistical analyses of data gathered during experimental sessions are presented in this chapter. The experimental subjects are described first, followed by the statistical methods used to analyze the data. The assumptions related to statistical techniques and results of appropriate manipulation checks are then presented. 5.] Tests of Order Effects Experiments are designed to achieve internal validity by eliminating biases that could cause the results instead of the intended manipulations predicted to cause the results. To increase internal validity, the experiment is designed to hold constant all influences on the results except the ones under systematic study. Important variables that are not controlled in this manner, or which are not sufficiently important to control, are allowed to vary randomly across treatment conditions (Keppel 1973). However, due to design limitations, some experimental factors may threaten internal validity. To check whether these factors affected internal validity, several order effects tests were performed on potentially non-random influences on task performance: session order and work plan order. Subjects signed up for one of thirty lab experiment session times. Session times were limited to twenty students because the lab used for students to receive the oral tutorial and to access experimental materials only had twenty-four computers. Since decision time could be traded against decision quality, both variables are included in analyses. AN OVA results indicate there were no significant differences in decision quality across sessions (F=1.l61, p=.263) as expected, but there were significant 48 differences in decision time across sessions (F =2.575, p<.000) which was not expected. Mean decision quality and times by session are listed in Table 5.1. 
Table 5.1. Mean Decision Quality and Decision Time by Session
(Key: mean, with standard deviation in parentheses; n = number of participants.)

Session            1       2       3       4       5       6       7       8       9       10
Decision quality   24.6    19.3    17.1    15.3    14.1    22.0    19.6    18.1    20.0    14.3
                   (9.3)   (10.6)  (10.9)  (9.9)   (9.1)   (9.1)   (8.9)   (11.2)  (13.1)  (2.0)
Decision time      30.8    30.6    28.0    35.3    29.4    29.4    40.3    32.0    22.5    33.0
                   (6.1)   (16.5)  (11.4)  (10.1)  (8.5)   (9.9)   (9.2)   (14.4)  (8.3)   (7.5)
n                  6       8       19      12      13      15      11      6       6       3

Session            11      12      13      14      15      16      17      18      19      20
Decision quality   17.7    16.0    23.3    18.5    19.0    15.6    23.6    17.1    16.3    15.5
                   (13.1)  (10.0)  (3.4)   (11.2)  (9.1)   (12.6)  (6.4)   (10.7)  (13.8)  (4.9)
Decision time      29.8    31.6    40.3    26.0    36.1    27.2    26.7    30.6    30.0    51.5
                   (13.3)  (7.2)   (11.9)  (6.2)   (7.5)   (9.5)   (10.9)  (12.8)  (9.4)   (2.1)
n                  12      14      3       19      18      18      7       10      15      2

Session            21      22      23      24      25      26      27      28      29      30
Decision quality   13.9    21.3    20.5    7.0     19.3    14.3    15.9    11.3    24.3    17.7
                   (10.3)  (12.4)  (10.6)  (-)     (14.7)  (12.2)  (9.5)   (9.5)   (11.2)  (11.0)
Decision time      31.6    33.3    38.7    44.0    24.9    30.3    25.6    30.6    34.9    27.0
                   (10.0)  (10.1)  (11.9)  (-)     (7.7)   (10.3)  (8.8)   (7.5)   (10.0)  (9.1)
n                  21      19      15      1       18      18      14      20      16      20

The number of participants in a session could be driving the time differences. Having a large number of participants in a session increases the chance that different treatment conditions and diverse task performance strategies among subjects will influence other subjects through social pressures. Thus, mean decision time per session was regressed on the number of participants and the standard deviation of decision time for each of the thirty sessions. Results indicate that more participants in a session resulted in less time spent on the task (t = -2.139, p = .042), while a greater standard deviation in time per session is not related to the average decision time per session (t = -1.237, p = .227).

Accordingly, since small and large sessions may provide different environments, data from the four smallest sessions (i.e., sessions 10, 13, 20, and 24 in Table 5.1) were eliminated. Mean decision time was again regressed on the number of participants and the standard deviation of decision time for the remaining twenty-six sessions. As anticipated, results indicate no relationship between task time and the number of participants (t = .361, p = .721) or the standard deviation in time (t = .919, p = .368). The four small sessions eliminated appear to have created a different environment for subjects than the remaining twenty-six larger sessions. Therefore, to maintain environmental homogeneity, the data from these sessions are eliminated, bringing the total number of subjects included in the analysis to three hundred seventy (370).

Subjects were randomly assigned to one of four sets of experimental materials in which the sequence of work plans presented as search results was reordered, regardless of the treatment condition used. However, the highest and lowest rated work plans were always located in positions 5 to 10 among the fourteen work plans listed for both data modeling and database design on the search results screen. This was done to reduce the chances of work plan position influencing decision performance. As expected, ANOVA results indicate there were no significant differences in decision quality (F = .199, p = .897) or decision time (F = 1.093, p = .352) across the different work plan orders. The means of decision quality and decision time by work plan order are listed in Table 5.2.
Table 5.2 Mean Decision Quality and Decision Time by Work Plan Orders Work Plan Order 1 2 3 4 Decision Quality 18.2 (11.8) 17.8 (11.2) 17.0 (10.8) 16.8 (10.3) Decision Time 29.6 (9.1) 30.6 (10.2) 31.7 (10.9) 29.5 (10.9) n = 52 192 73 53 50 5.2 Descriptive Data About Experimental Subjects Subjects in the experiment were students enrolled in an Introduction to Management Information Systems course during the Fall 2002 semester at a large public university in the Midwestern US. The course is a required component of a Business major at this university and is typically taken in a student’s junior or senior year. Four hundred ten students participated in one of the four inter-related experiments. The data from nine students were removed because each subject indicated participation in previous pilots of the same experiment. The data from twelve more students who indicated English was not their native language were removed because pilot studies indicated that these subjects found it difficult to read experimental instructions and complete the experimental task timely. Additionally, the data was excluded for five subjects because of an insufficient attempt to complete the task or because they included almost all of the content choices into their answer without deciding what to use. After eliminating the nine subjects in the smallest sessions, data from the remaining 370 participants is analyzed belowg. The experiment was held in the same week in the semester in order for students to have adequate and comparable exposure to the course content, which provided the necessary background to perform the experimental task. Additionally, 98% of students were business majors and 97% of the students in the subject pool were in their junior or senior year. These characteristics of the subject pool suggest a fairly homogeneous sample with respect to background, experience levels, skills, and knowledge of 9 When eliminated data are included in ANCOVAs, main effects of rating accuracy on decision quality remain significant while interactions for all three experiments are insignificant. However, the eliminated data were removed to ensure homogeneity of subject pool based on objective criteria as noted above, not based on their contribution to statistical significance. 51 computing and creating work plans. However, to further check whether the sample was homogeneous, demographic factors were captured and analyzed for variance across treatment conditions: year in school, age, gender and experience with knowledge management systems (see Appendix G which summarizes sample sizes by treatment for year in school in Table G], for age in Table G.2, for gender in Table G3 and for experience in Table G.4). Random assignment of subjects to treatment conditions are expected to eliminate any systematic differences among the treatment conditions due to additional demographic factors. Chi-square tests were conducted on year in school, age, gender and experience to check for possible differences across treatments within each of the four inter-related experiments. The chi-square test is a non-paramet1ic test with no assumptions regarding the underlying distribution of the data. The test does assume a random sample and expected fi'equencies should be at least one with no more than twenty percent of the categories being less than five. The data analyzed here meets these requirements. 
The chi-square statistics indicate no significant differences for year in school, age, gender, or experience across treatments in any of the experiments (for chi-square statistics for year in school see Table 5.3, for age see Table 5.4, for gender see Table 5.5, and for experience see Table 5.6).

Table 5.3. Chi-Squared Statistics for Subject Year in School by Treatment
Experiment:   Baseline                 Rater Sample Size         Rater Expertise           Filter Sophistication
Chi-square:   .340 (d.f.=2, p=.844)    3.445 (d.f.=6, p=.751)    6.719 (d.f.=6, p=.348)    5.055 (d.f.=6, p=.537)

Table 5.4. Chi-Squared Statistics for Subject Age by Treatment
Experiment:   Baseline                 Rater Sample Size         Rater Expertise           Filter Sophistication
Chi-square:   1.525 (d.f.=4, p=.822)   21.929 (d.f.=15, p=.110)  17.903 (d.f.=15, p=.268)  16.206 (d.f.=18, p=.578)

Table 5.5. Chi-Squared Statistics for Subject Gender by Treatment
Experiment:   Baseline                 Rater Sample Size         Rater Expertise           Filter Sophistication
Chi-square:   1.574 (d.f.=1, p=.210)   .559 (d.f.=3, p=.906)     2.984 (d.f.=3, p=.394)    3.099 (d.f.=3, p=.263)

Table 5.6. Chi-Squared Statistics for Subject Experience by Treatment
Experiment:   Baseline                 Rater Sample Size         Rater Expertise           Filter Sophistication
Chi-square:   .219 (d.f.=3, p=.974)    7.448 (d.f.=9, p=.591)    11.166 (d.f.=9, p=.265)   5.438 (d.f.=12, p=.942)

5.3 Statistical Method

The analytical techniques used to evaluate the experimental data, the control variables, and the assumptions underlying the use of the statistical tests are presented in this section. Analysis of variance (ANOVA) models are intended for applications where the effects of one or more independent variables (i.e., classification or experimental factors) on the dependent variable are of interest (Neter, Kutner, Nachtsheim and Wasserman 1996). ANOVA, ANCOVA, and post-hoc planned comparisons were used to analyze the data. With more than one dependent variable, MANOVA or MANCOVA is needed to maintain control over the experiment-wide error rate and is used when there is some degree of inter-correlation among the dependent variables (Kerlinger 1986). The purpose of this study is to understand how each manipulation affects decision quality; however, decision quality could be traded for decision time. Since higher quality decisions can be achieved through longer decision time, the relationship between decision quality and time was evaluated. The inter-correlation between decision quality and time is not significant in this study (r = .001, p = .978); thus, MANOVA/MANCOVA was not used and decision time will not be considered further.

5.3.1 Covariate Measures

The covariates examined and the measures used in the experiment are presented in Table 5.7. To remove extraneous influences from the dependent variable that increase the within-group variance, specific individual characteristics (gender, domain expertise, distrust, and self-monitoring) were examined as potential covariates that may influence task decision quality. (See Chapter 4 for details regarding the necessity of controlling for these characteristics.) While the intent of random assignment is to eliminate systematic differences among the treatment conditions, some individual characteristics may be deemed too important not to control. The use of ANCOVA is recommended when the covariates under examination are highly correlated with the dependent variables but not with the independent variables (Hair, Anderson, Tatham and Black 1998).
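As an illustration of the ANCOVA step just described, the fragment below uses the statsmodels formula interface on a hypothetical subject-level data frame. The file name and the column names (decision_quality, rating_accuracy, sample_size, expertise_score) are placeholders, not the study's actual variable labels.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per subject, with the columns named below (hypothetical file).
df = pd.read_csv("experiment2_subjects.csv")

# Decision quality modeled on rating accuracy, rater sample size, their
# interaction, and a covariate (domain expertise), i.e., a 2x2 ANCOVA.
model = smf.ols(
    "decision_quality ~ C(rating_accuracy) * C(sample_size) + expertise_score",
    data=df,
).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares for the ANCOVA table
```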
Table 5.7 Covariates and Post Hoc Analysis (all measures self-reported unless indicated otherwise)

Gender (potential covariate): Check box for Female or Male (female = 1, male = 0).
Expertise (potential covariate): On a scale of 1 (know nothing) to 5 (am an expert), how would you rate your knowledge about knowledge management systems?
The following items use a 10-point scale from 1 = Strongly agree to 10 = Strongly disagree.
Distrust (potential covariate; Wrightsman 1991): Relying on "ratings" of Search Result items is risky. / The "rating" provided for a Search Result item cannot be trusted.
Self-Monitoring (potential covariate; Snyder 1974; Snyder and Gangestad 1986): I can only argue for ideas which I already believe. (reverse) / I guess I put on a show to impress or entertain others. / I would probably make a good actor. / In a group of people I am rarely the center of attention. (reverse) / I have considered being an entertainer. / At a party I let others keep the jokes and stories going. (reverse)
Confidence (post hoc construct): I would like to run another search to look at more work plans, then possibly revise the work plan I submitted. / I do not want to give the plan of work that I submitted to my manager. / There are better answers than the one I submitted. / I am confident my choices were the best ones possible. (reverse)

5.3.2 Confirmatory Factor Analysis of Covariate Measures

To maximize the explanation of the entire set of covariates and make the data analysis more parsimonious, confirmatory factor analysis was used to assess discriminant validity for the covariate constructs of distrust and self-monitoring and the post hoc construct of confidence. Factor analysis is an interdependence technique in which all variables are simultaneously considered, each related to all the others. With twelve measures and 340 in the smallest sample size for the measures, there is a 28-to-1 ratio of observations to variables, which is greater than necessary, and there appears to be adequate sample size for calculating the correlations between measures (Hair, Anderson, Tatham and Black 1998). Factor analysis was performed using the scores for all the measures related to confidence, distrust, and self-monitoring. To examine the factorability of the correlation matrix, some degree of multicollinearity is needed, since the objective of factor analysis is to identify interrelated sets of variables. Thus, the bi-variate correlations among the original measures are shown in Appendix H. Inspecting the correlations reveals many correlations above .30; the Bartlett Test of Sphericity = 766.9 (p=.000), although this test is sensitive to large sample sizes; and the Kaiser-Meyer-Olkin Measure of Sampling Adequacy = .648, indicating factor analysis is appropriate (Hair, Anderson, Tatham and Black 1998). Each item with low loadings on the factor it was purported to measure and high loadings on other factors was eliminated. Only one measure was removed, which was the first measure for self-monitoring. The result is four measures for confidence, two for distrust, and five for self-monitoring. The remaining measures were factor analyzed together, providing the eigenvalues for the three factors shown in Table 5.8. The three factors represent 55 percent of the variance of the eleven measures. The VARIMAX rotation component analysis matrix is shown in Table 5.9.
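The factor-analytic steps reported above could be reproduced along the lines of the sketch below, which relies on the third-party factor_analyzer package rather than the statistics package used for this study; the DataFrame items, assumed to hold the eleven retained confidence, distrust, and self-monitoring measures, is hypothetical.

```python
# Hypothetical sketch of the factorability checks and varimax-rotated principal
# components analysis described above; not the original analysis.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def rotated_factor_scores(items: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    chi2, p = calculate_bartlett_sphericity(items)   # Bartlett Test of Sphericity
    _, kmo = calculate_kmo(items)                    # Kaiser-Meyer-Olkin measure
    print(f"Bartlett chi-square = {chi2:.1f} (p = {p:.3f}), KMO = {kmo:.3f}")
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(items)
    # Orthogonal factor scores, used later as ANCOVA covariates.
    scores = fa.transform(items)
    return pd.DataFrame(scores, columns=[f"factor_{i + 1}" for i in range(n_factors)])
```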
Table 5.8 Results for the Extraction of Component Factors

Label    Eigenvalue    Percent of Variance    Cumulative Percent of Variance
1        2.556         23.2                   23.2
2        1.869         17.0                   40.2
3        1.579         14.4                   54.6

Table 5.9 Principal Components Analysis Factor Matrix (with coefficients below 0.2 suppressed, highest loadings are italicized)

Measure    Factor 1    Factor 2    Factor 3    Communality
Conf1      .28640      .43779                  .29171
Conf2      .32741      .63175      -.21114     .55089
Conf3      .34799      .74624                  .70003
Conf4      .24399      .55745                  .40212
Dist1                  .24643      .85891      .79847
Dist2                  .28430      .83939      .79426
Self2      .65982                              .46501
Self3      .74689      -.29119                 .66024
Self4      .70443      -.25695                 .56233
Self5      .64000      -.20289                 .45137
Self6      .52721      -.21873                 .32665

Factor scores were generated and used in the ANCOVAs because they are orthogonal, while summated scores are not.

5.3.3 Effect of Covariate Measures on the Dependent Variable

Before including potential covariates in the remaining analysis, decision quality was regressed on each variable alone for each of the four inter-related experiments separately. This determines whether the potential covariates provide explanatory power (see t-statistics in Table 5.10). Regression results indicate a significant relationship between decision quality and domain expertise, depending on the experiment.

Table 5.10 Regression of Decision Quality on Control Variables Post Factor Analysis, t-statistic (p-value)

Experiment         Baseline        Rater Sample Size    Rater Expertise    Filter Sophistication
Gender             .048 (.962)     -1.602 (.112)        -.558 (.578)       1.445 (.152)
Expertise          -.158 (.875)    2.867 (.005)         -2.626 (.010)      -2.680 (.009)
Distrust           .395 (.695)     .692 (.490)          .749 (.456)        -.367 (.714)
Self-Monitoring    .168 (.867)     .022 (.983)          .146 (.884)        1.225 (.224)

The effect of the control variables on the dependent variable is more random than anticipated. Domain expertise, and no other control variable, appears to matter consistently across experiments. Thus, variables with significant relationships with the dependent variable were included as covariates in the ANCOVA model only in the experiment for which the significant relationship occurred.

5.3.4 Assumptions Underlying Statistical Analyses

The univariate test procedures of ANOVA are valid when the dependent variable is normally distributed and variances are equal for all treatment groups. Evidence indicates that when sample sizes are equivalent and relatively large, F tests in ANOVA are robust with regard to these assumptions except in extreme cases (Hair, Anderson, Tatham and Black 1998). All of the tests conducted in this study are made between cells with fairly large, equal sample sizes; however, this study examines these assumptions regardless of whether the F tests are robust. While the assumption of independence among observations applies only to MANOVA-type tests, this assumption is also examined in this study. First, Kolmogorov-Smirnov tests were conducted to assess the distribution of each of the dependent variables in each treatment condition. As expected, none of the tests of normality within each cell were significant for either dependent variable (see test results in Table 5.11). Thus, the null hypothesis of each test, stating that a normal distribution fits the data, cannot be rejected, and the assumption of normal distribution within treatments was satisfied.
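A rough way to reproduce this per-cell normality screening is sketched below; the original tests were run in a statistics package, and the DataFrame with columns 'cell' and 'decision_quality' is hypothetical.

```python
# Hypothetical sketch of the Kolmogorov-Smirnov screening within each treatment cell.
import pandas as pd
from scipy.stats import kstest

def ks_by_cell(df: pd.DataFrame, dv: str = "decision_quality") -> dict:
    results = {}
    for cell, scores in df.groupby("cell")[dv]:
        z = (scores - scores.mean()) / scores.std(ddof=1)  # standardize within the cell
        stat, p = kstest(z, "norm")  # p > .05: normality is not rejected
        results[cell] = (stat, p)
    return results
```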
Table 5.11 Results of the Kolmogorov-Smirnov (K-S) Goodness of Fit Test

Decision Quality
Experiment              Manipulation                       K-S Z    p-value
Baseline Condition      Accurate Ratings                   .735     .652
                        Inaccurate Ratings                 .414     .995
# of Raters             Accurate X Low Sample Size         .837     .486
                        Inaccurate X Low Sample Size       .991     .280
                        Accurate X High Sample Size        .743     .640
                        Inaccurate X High Sample Size      1.068    .204
Rater Expertise         Accurate X Low % Experts           .714     .687
                        Inaccurate X Low % Experts         .825     .503
                        Accurate X High % Experts          .636     .814
                        Inaccurate X High % Experts        .846     .471
Filter Sophistication   Accurate X Low Sophistication      .519     .950
                        Inaccurate X Low Sophistication    .887     .411
                        Accurate X High Sophistication     .990     .281
                        Inaccurate X High Sophistication   .757     .615

Second, the Levene test was used to assess the homogeneity of variances across treatment conditions within each experiment. The Levene test is computed by performing a 1-way ANOVA on the absolute difference of each case from its group mean. The Levene test was not significant for either dependent variable in all treatments, except for the treatments related to the percentage of raters who are experts (see test results in Table 5.12). Thus, the null hypothesis of each test, stating that variances are equal across groups, cannot be rejected, and the assumption of equal variances within treatments was satisfied for all treatments (see Table 5.14 for a summary of means and standard deviations per treatment condition).

Table 5.12 Results of the Levene Test of Homogeneity of Variance

Decision Quality
Experiment    Baseline    Rater Sample Size    Rater Expertise    Filter Sophistication
Levene        1.579       2.063                .751               .237
p-value       .215        .110                 .524               .870

Third, random assignment of subjects to treatments was used to ensure the independence among observations in all treatment conditions.

5.4 Manipulation Checks

Data collected in the post-experiment questionnaire were used to perform manipulation checks to assess the adequacy of the experimental manipulations in all four experiments. Subjects were asked only those questions related to the manipulations of the experiment to which they were assigned. Across all experiments, the ratings provided to subjects either accurately or inaccurately reflected the actual content quality [i.e., highly rated items were actually high (low) quality content in the accurate (inaccurate) conditions]. Thus, all subjects in each experiment were asked about this manipulation. Then, in the experiments that also provided the number of raters, the percentage of raters who were experts, or the collaborative filter recommendation sophistication level, those subjects were asked only about the specific information they received. F-tests from ANOVAs were used to compare subjects' answers to these questions between the associated treatment conditions (see Table 5.13 for results). As expected, all the test statistics are significant, including the marginally significant one for collaborative filter sophistication; thus, subjects in different treatment conditions perceived the differences between their conditions, and the manipulations appear to be working as anticipated.

Table 5.13 Results of Manipulation Checks for Treatment Conditions

Treatment    Means    Direction Expected    F-test    p-value
Baseline     Accurate = 3.59 ...
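Both the Levene procedure and the manipulation-check F-tests described in this chapter amount to one-way ANOVAs, which could be sketched as follows; this is an illustration only, and the column names are assumptions rather than the study's actual variable names.

```python
# Hypothetical sketch: Levene's test computed as a one-way ANOVA on absolute
# deviations from each group mean, and a manipulation-check F-test.
import pandas as pd
from scipy.stats import f_oneway

def levene_via_anova(groups):
    """One-way ANOVA on |x - group mean|; equivalent to scipy's levene(center='mean')."""
    return f_oneway(*[abs(g - g.mean()) for g in groups])

def manipulation_check(df: pd.DataFrame, item: str, condition: str = "condition"):
    """Compare a post-task questionnaire item across the associated treatment cells."""
    return f_oneway(*[g[item] for _, g in df.groupby(condition)])
```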
Decision Quality Calibration: Calculated as the value of the Confidence factor if the quality score is 18 or above and the value of the reversed-scale Confidence factor if the score is below 18, where 18 is the midpoint of the possible quality score.
Rating Condition Calibration: Calculated as the value of the Manipulation Check for Rating if assigned to the accurate rating condition and the value of the reversed-scale Manipulation Check for Rating if assigned to the inaccurate rating condition.
Manipulation Check for Rating: I felt the "ratings" provided were actually consistent with the overall quality of their associated work plan.
Confidence: I would like to run another search to look at more work plans, then possibly revise the work plan I submitted. / I do not want to give the plan of work that I submitted to my manager. / There are better answers than the one I submitted. / I am confident my choices were the best ones possible. (reverse)

Unexpectedly, t-tests indicate that across all experiments no difference exists between treatments for Decision Quality Calibration, but for Rating Condition Calibration, subjects in the accurate ratings conditions were better calibrated than those in the inaccurate ratings conditions. This suggests subjects tended to believe ratings reflected content quality regardless of whether the ratings were accurate or inaccurate. This could mean subjects knew ratings were accurate when ratings were accurate but did not know they were inaccurate when they were inaccurate. Alternatively, it could mean subjects tended to assume ratings are accurate regardless of reality.

To further examine how Rating Condition Calibration may influence decision performance, Table 5.18 illustrates the decision performance differences between those who knew versus did not know when ratings were accurate. Independent-samples t-tests indicate that when ratings were accurate, those who knew this achieved a higher quality score, and when ratings were inaccurate, those who did not know this achieved a higher quality score. With respect to time taken on the task, in all cases those who did not know their rating accuracy level took more time than those who did know their rating accuracy level; however, this difference is statistically insignificant. Also, regression results indicate those correctly knowing how well they performed and knowing the actual rating accuracy level performed better than those who did not know how well they performed or the actual rating accuracy level. This means those performing badly who knew it performed better than those who did not know how badly they performed.

Table 5.18 Knowing Rating Accuracy and Decision Performance, Mean (standard deviation)

Panel A: Independent Samples t-tests Between Knowing/Not Knowing*

Rating Accuracy   Knew/Didn't Know   Quality Score       Time in Mins.
All Subjects      Knew               17.3 (11.9)         29.8 (10.4)
                  Didn't Know        15.7 (10.0)         31.6 (10.1)
                  t-test             t=1.312, p=.190     t=1.560, p=.120
Accurate          Knew               24.9 (8.7)          29.8 (11.4)
                  Didn't Know        21.8 (8.1)          30.2 (9.1)
                  t-test             t=2.022, p=.045**   t=.251, p=.802
Inaccurate        Knew               6.7 (6.4)           29.9 (8.8)
                  Didn't Know        12.6 (9.4)          32.3 (10.6)
                  t-test             t=5.009, p=.000**   t=1.630, p=.105

Panel B: Regressing Decision Performance on Knowing/Not Knowing

Dependent Variable   Knew/Didn't Know Variable   t-statistic
Quality              Rating Condit'n Cal.        t=12.93, p=.00**
                     Decision Qual. Cal.         t=2.43, p=.00**
Time                 Rating Condit'n Cal.        t=-.37, p=.71
                     Decision Qual. Cal.         t=-.13, p=.90

Key: ** p<.05. * Results reported for Rating Condition Calibration are calculated as a dichotomous value: = 1 if the subject was in the accurate (inaccurate) rating condition and selected a value of <= 4 (>= 7) on the Manipulation Check for Rating above; otherwise = 0.
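A hypothetical sketch of the Panel A comparison follows; the column names ('rating_accurate', 'manip_check_rating', 'quality_score') and the 10-point agree-disagree coding are assumptions made to match the key to Table 5.18, not the study's actual data file.

```python
# Hypothetical sketch of the "knew / didn't know" coding and t-test behind Table 5.18.
import pandas as pd
from scipy.stats import ttest_ind

def knew_rating_accuracy(row) -> int:
    """1 if the manipulation-check answer matched the subject's true rating condition."""
    if row["rating_accurate"] == 1:
        return int(row["manip_check_rating"] <= 4)  # agreed ratings matched quality
    return int(row["manip_check_rating"] >= 7)      # disagreed, i.e., sensed inaccuracy

def compare_quality_by_knowing(df: pd.DataFrame, dv: str = "quality_score"):
    knew = df.apply(knew_rating_accuracy, axis=1)
    return ttest_ind(df.loc[knew == 1, dv], df.loc[knew == 0, dv])
```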
It was expected that inaccurate ratings would trigger subjects to use credibility indicators or content recommendations, and that low credibility indicators or filter sophistication would then suggest inaccurate ratings. Unexpectedly, subjects with low credibility indicators or filter sophistication appear to know their rating condition least. Thus, inaccurate ratings may not be triggering the use of additional rating information as expected, and future research is needed to explain this finding.

5.6.3 Information Search Process Measures

Information search measures were also dynamically collected, reflecting the behaviors subjects followed regarding the selection and use of search result items. Information search measures have been widely used as a process tracing technique (Payne 1976; Svenson 1979). The measures come from two sources consistent with these techniques: the actual usage of search results in the work plan answer created and the click streams each subject followed while performing the task.

5.6.3.1 Work Plan Answer Measures

As expected, examining the source of the lines used to create work plan answers shows that subjects with accurate ratings expended less effort, choosing to build a task answer out of fewer work plans. Further examination of the items included in work plan answers indicates that, in all cases, subjects with accurate ratings expended less effort, choosing more often to build a task answer from the first work plan opened, and used more highly rated content than those with inaccurate ratings. This suggests subjects in the accurate ratings condition opened the highest rated work plans first and used them in their answers more often than subjects in the inaccurate ratings condition.

Interestingly, there is a significant difference in the percentage of lines in the answer taken from work plans rated highest (i.e., 5) between those in the high versus low rater expertise treatments. Consistent with predictions, subjects with high rater expertise chose to include more lines in their answer from work plans rated highest than those with low rater expertise. This indicates rater expertise may influence whether individuals include highly rated content in their answer. Further evidence indicates this finding does not hold when data from the treatments with high and low rater sample size or filter sophistication are examined.

5.6.3.2 Click Stream Measures

As expected, investigating the total number of clicks as an indication of the amount of effort expended on the task shows that subjects with inaccurate ratings expended more effort by clicking on and looking at more work plan items than those with accurate ratings. Further examination of click stream patterns indicates that, while not significantly different but consistent with expectations, subjects with accurate ratings selected higher rated items more than those with inaccurate ratings. Interestingly, there is a significant difference in the percentage of clicks on work plans rated high (i.e., 4 or 5) between those in the high versus low rater expertise conditions. Consistent with predictions, subjects with high rater expertise selected more highly rated work plans than those with low rater expertise. This indicates rater expertise may influence whether individuals select highly rated content to review. Meanwhile, this finding does not hold when data from the treatments with high and low rater sample size or filter sophistication are examined.
Finally, as expected, subjects with accurate ratings expended less effort by selecting fewer work plans than those with inaccurate ratings. In summary, the information search measures analyzed above suggest those in the accurate ratings condition used higher rated work plan items more and expended less effort than those in the inaccurate ratings condition. Also, the analysis suggests rater expertise may influence whether individuals select highly rated content for review and include it in their answer, while rater sample size and filter sophistication do not.

5.6.3.3 Correlations Between Click Stream and Work Plan Answer Measures

Many of the associations between click stream and work plan answer measures are as expected (e.g., whether ratings accurately or inaccurately reflected content quality, opening more work plans is positively associated with more total clicks on work plans). The associations suggest subjects selected and used highly rated work plans when ratings were accurate but selected and then did not use them when ratings were inaccurate. Also, when ratings were inaccurate, subjects demonstrating more effort were able to achieve a higher quality decision (i.e., task answer).

5.6.4 Initial Information Search Strategy

The information search process of each subject was objectively coded using click stream data (i.e., the pattern of clicks used to open work plans). The coding reflects whether the first click of the click stream followed the highest rated items first or followed a more sequential or random strategy. As expected, based on correlations between strategy and performance, when ratings were accurate, reviewing highly (non-highly) rated items first is associated with improved (worse) decision performance. Unexpectedly, when ratings were inaccurate, no strategy is associated with decision performance.

Individuals should have no indication of whether ratings were accurate until opening and reviewing a work plan; thus, the predictions suggest subjects should always open the highest rated item first. Consistent with expectations, most subjects in the accurate ratings condition did open the highest rated item first; however, surprisingly, those in the inaccurate ratings condition opened a highly rated or a non-highly rated work plan first equally often. As expected, in almost all treatment conditions, subjects chose to review the highest rated work plan first. Unexpectedly, subjects did not choose to review highly rated work plans first in three conditions: the accurate ratings baseline, accurate ratings with low rater sample size, and accurate ratings with low filter sophistication. This finding may indicate subjects thought the low rater sample size or filter sophistication suggested a lack of rating credibility, and ratings were discounted during initial work plan selection. As expected, the most popular search strategy was to review the highest rated work plan first, while the second most popular was to select the first work plan listed. ANOVA results indicate no differences in decision time across treatment conditions in any of the four experiments for either initial search strategy measure. ANOVA results also indicate no differences in decision quality across treatment conditions in any of the four experiments for subjects following an initial search strategy of reviewing non-highly rated work plans first. However, ANOVA results do indicate that those reviewing highly rated work plans first do better when ratings were accurate than when ratings were inaccurate.
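The coding of initial search strategy described in Section 5.6.4 could be reproduced from the click-stream logs along the following lines; the data structure (the rating of the first work plan opened plus the ratings shown on the results screen) is hypothetical.

```python
# Hypothetical sketch of coding a subject's first click as rating-driven or not.
from typing import List

def code_initial_strategy(first_click_rating: int, displayed_ratings: List[int]) -> str:
    """Label the first work plan opened as 'highest_rated_first' when it carries the
    top rating shown in the search results, otherwise as 'sequential_or_random'."""
    return ("highest_rated_first" if first_click_rating == max(displayed_ratings)
            else "sequential_or_random")

# Example: code_initial_strategy(5, [5, 4, 3, 3, 2]) returns 'highest_rated_first'.
```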
Finally, decision quality was regressed on initial search strategy, controlling for treatment condition. Results suggest that reviewing highly rated work plans first improves decision quality only when ratings were accurate and when rater sample size or rater expertise information is provided.

5.6.5 Post Hoc Analysis Summary

In summary, the post hoc analyses suggest individuals typically select the highest rated content to review first, may understand when ratings are inaccurate, but may not be able to overcome this inaccuracy unless rater expertise is low, suggesting the ratings should be discounted.

CHAPTER 6

6. DISCUSSION OF RESULTS

The questions addressed by this research examine the influence of credibility indicators and content recommendations on the usage of content ratings supplied by other users in decisions regarding the use of knowledge system content. The decision-making model described the moderating influence of credibility indicators and content recommendations on the persuasiveness of ratings in content usage decisions. Two research questions were addressed in this study: 1. How can credibility indicators help people determine the level of accuracy in ratings of knowledge system content? and 2. How can content recommendations help people determine the level of accuracy in ratings of knowledge system content?

Expectations from the developed model posited that credibility indicators and content recommendations would influence content rating usage based on cognitive psychology and decision theory. This influence was expected to be greater when ratings inaccurately rather than accurately reflected actual content quality. Inaccurate ratings were expected to trigger the use of credibility indicators and content recommendations more than accurate ratings. Hypotheses were derived from the research model and were tested using four inter-related laboratory experiments. The next section interprets the results presented in Chapter 5 and is followed by a discussion of the implications of the findings for both theory and practice.

6.1 Interpretation of the Research Results

The findings based on the statistical analyses performed in Chapter 5, including the supporting post hoc statistical analyses of information search data, are integrated and discussed in this section. First, the influence of content ratings directly on task performance is presented. Second, the influence of the credibility indicators and content recommendations (i.e., sample size, source expertise, and filter sophistication) on rating usage is considered.

6.1.1 Influence of Content Ratings on Task Performance

The baseline condition suggested a direct influence of content ratings on task performance. Statistical analyses illustrated a strong main effect of the degree of content rating accuracy on decision performance, not only for the baseline condition but also for the other three experiments. Individuals use content ratings to decide what knowledge system content to use in their task solution. Hypothesized predictions were based on individuals selecting and reviewing the highest rated content first; however, post hoc analyses of information search data suggest this did not always happen. The following individuals, as a majority, did not select and review highly rated content first: those in the baseline condition, those in either the high or low filter sophistication conditions, and those with a low number of raters and a high degree of rating accuracy.
Thus, something is causing individuals not to follow ratings before they could possibly assess the degree of accuracy between ratings and content quality, which could only happen after reviewing content. Additional post hoc analysis examined how well individuals knew the degree of rating accuracy for their given treatment condition. On average, across all experiments, subjects knew when the degree of accuracy was high, but they did not know as well when the degree of accuracy was low. Thus, people appear to be better at determining when ratings are helpful than when they are not. Decision theory suggests people form a hypothesis about information they receive (e.g., an a priori belief that ratings are helpful), and then additional data are evaluated as either confirming, disconfirming, or noncontributory, with disconfirming evidence underweighted or ignored (Wallsten 1980). In this study, it may be the case that individuals underweight low credibility indicators or low filter sophistication, especially when ratings are not helpful.

Post hoc analyses of information search data also suggest that when individuals could recognize that a low degree of rating accuracy existed, they suffered from a lack of improvement in task performance, meaning they could not overcome the misleading ratings. With accurate ratings, users can efficiently and effectively utilize the ratings and associated content to solve the task. To overcome inaccurate ratings, users must rely on their own judgment and persistence in finding the highest quality content to solve the task (Feather 1962; Sandelands, Brockner and Glynn 1988). Individuals may see task success as the result of effort devoted to the task (i.e., persistence) or as the result of sudden insights into the task (Sandelands, Brockner and Glynn 1988). Future research is needed to better understand how the tradeoff between misleading content ratings and the level of content quality influences persistence behaviors in task performance. Meanwhile, evidence indicates that certain credibility indicators (i.e., source expertise) may help users overcome misleading ratings, which is discussed in the next sections.

6.1.2 Moderating Influence of Credibility Indicators and Content Recommendations

Rater sample size, source expertise, and content recommendations were examined and are discussed separately below.

6.1.2.1 Rater Sample Size

Hypotheses H1a-c examined the influence of sample size on the use of content ratings in decision performance. Decision theory was the basis for these hypotheses, offering normative models in which larger sample sizes indicate higher credibility. While several studies suggest individuals do not use sample size in decisions (Tversky and Kahneman 1974), other studies suggest aspects of the decision setting trigger its use (Sedlmeier and Gigerenzer 2000). In this study, inaccurate ratings were predicted to trigger the use of sample size information. However, statistical analyses illustrated that individuals did not use sample size information when making decisions about whether to rely on or discount content ratings. Post hoc analyses suggest sample size values did not influence what content was used in work plan answers, but they did influence search patterns. For those with low sample size values, the majority of individuals used an initial search strategy that did not include selecting the highest rated items to review first.
Post-task questions revealed that individuals thought the number of raters was not an objectively derived value but was based on subjective sources. This could mean they believed rater sample size was prone to manipulation and not based on an objective criterion separate from the ratings. If individuals believe rater sample size is prone to manipulation, then they may discount the information and not use it as a credibility indicator of rating trustworthiness.

Individuals may not have used sample size information in determining rating credibility because they were novices. As novices, they may believe other consultants who have been at the firm long enough to enter ratings must be more expert than themselves. Thus, to novices, a low number of other consultants entering ratings (i.e., a low sample size) might suggest ratings are more credible even when it should not. Having a low number of raters is better than having no raters, since novices may assume any raters are more expert than themselves. Novices appear to assign the same level of credibility to ratings whether there is a high or a low number of raters (i.e., sample size). According to this study, in knowledge system rating schemes the expertise level of raters may matter more than the number of raters.

6.1.2.2 Source Expertise

Hypotheses H2a-c examined the influence of source expertise on the use of content ratings in decision performance. Cognitive psychology theory on source credibility was the basis for these hypotheses, offering normative models in which greater source expertise indicates higher perceived credibility. Studies suggest individuals use perceived expertise to evaluate a source (Ilgen, Fisher and Taylor 1979), even changing their own beliefs to agree with the source (Rhine and Kaplan 1972). Once again, inaccurate ratings were predicted to trigger the use of source expertise information. Low source expertise was expected to cause individuals to discount ratings more when ratings were inaccurate than when ratings were accurate. Statistical analyses support these predictions and illustrate that individuals did use source expertise information when making decisions about whether to rely on or discount content ratings. Additional evidence from post hoc analyses indicates that those with low rater expertise used ratings less, included fewer lines in their answer from highly rated work plans, and selected fewer highly rated work plans to review.

Post-task questions revealed that individuals thought rater expertise was not an objectively derived value but was based on subjective sources. This could mean they believed rater expertise was based on a subjective criterion, such as the correctness of the ratings, and not on an objective criterion separate from the ratings. If individuals believe rater expertise is not separate from ratings, then they may rely on the information more and use it as a credibility indicator of rating trustworthiness. Individuals may realize, even as novices, that their own judgment of quality is better than relying on ratings when the source of those ratings is not expert in the topic domain (i.e., has low expertise). Post hoc information search data indicate that with high source expertise, higher rated content was reviewed first more often than with low source expertise present. Thus, individuals may believe low source expertise is associated with low rating credibility.
This low rating credibility helped individuals overcome inaccurate ratings by suggesting to them to use their own judgment of content quality.

6.1.2.3 Content Recommendations

Hypotheses H3a-c examined the influence of content recommendations on the use of content ratings in decision performance. Exploratory arguments on content recommendation usage were the basis for these hypotheses, indicating that higher filter sophistication in recommendation algorithms may suggest higher perceived credibility in ratings. Once again, inaccurate ratings were predicted to trigger the use of content recommendation information. Low filter sophistication was expected to cause individuals to discount ratings more when ratings were inaccurate than when they were accurate. However, statistical analyses illustrated that individuals did not use filter sophistication information when making decisions about whether to rely on or discount content ratings. Surprisingly, additional post hoc analyses provide little new insight into the behavioral impacts of providing collaborative filter information to knowledge system users. As the system feature of providing collaborative filter recommendations grows in popularity and is increasingly adopted by knowledge systems, it becomes even more important for future research to determine the influences of this information on decision-making. Given the exploratory nature of this experiment and the lack of significant statistical results, future research is needed to better understand how people use content recommendations along with ratings when deciding what system content to use.

6.2 Overall Conclusions from the Research Study

In general, this study demonstrated that ratings have a strong influence on how individuals use knowledge system content even when the ratings are misleading. When content ratings are inaccurate, this study indicated that individuals might realize this is happening but lack the ability to overcome the influence of the ratings. Even providing individuals with indicators of rating credibility does not always help. Disclosing the number of raters or the level of filter sophistication in content recommendations does not influence decisions to use or discount ratings. However, disclosing the level of source expertise behind ratings does influence decisions to use or discount ratings, which helps determine knowledge system content usage. This study illustrated that individuals believed rating credibility was low when source expertise was low, which may have helped them overcome rating inaccuracy.

6.3 Implications of the Research Results

The results of the four inter-related experiments have important implications for both theory and practice.

6.3.1 Theory

Previous research examining individual decision-making has paid limited attention to the influences of subjectively sourced information rating schemes. This research has demonstrated the importance of including indicators of rating credibility in order to help decision makers in their task performance. Also demonstrated is the importance of selecting an influential indicator of rating credibility, since indicators vary in their level of influence on decisions. The research results suggest that source expertise generally influences decision performance on intellective tasks. Another result of this study is the extension of the theoretical understanding of the influence of content ratings, credibility indicators, and decision performance. The next section considers improvements to the theoretical model.
First is a discussion of the deficiencies of the previous model. Next is an explanation of the new and improved decision-making model of knowledge system content usage. Finally, search pattern data are used to provide initial support for the new model.

6.3.1.1 Modified Theoretical Model

Subjects do not appear to be using inaccurate ratings as a trigger that prompts the use of additional information about the credibility of ratings, as predicted. Also, due to the exploratory nature and the lack of significant results, content recommendations' influence on knowledge system usage is not included in the model. Thus, modifications of the theoretical model are appropriate (see Figure 6.1).

Figure 6.1 Modified Decision Model
(The figure depicts two phases: 1. Initial Search Strategy and 2. Solving the Task. High credibility indicators lead to following the ratings and using highly rated content; low credibility indicators lead to not following the ratings and not using highly rated content.)

The original model predicted by the study did not incorporate a role for credibility indicators in the initial search strategy, nor did it include the concept of checking the level of source expertise behind the ratings when solving the task. The following section first discusses and then provides evidence to support the two phases of the updated model: 1) the initial search strategy and 2) solving the task.

6.3.1.1.1 Initial Search Strategy

The model predicts that high credibility indicators will cause users to select highly rated items first, while low credibility indicators will cause users to select items first without regard for ratings. Counts of initial search strategy by treatment condition illustrate that more subjects in the high credibility indicator conditions chose to follow ratings (63% for number of raters and 73% for percentage of raters who are experts), while those in the low credibility indicator conditions chose a more random strategy (50% for number of raters and 50% for percentage of raters who are experts). As further support of the model, the filter recommendation experiment, which does not contain credibility indicators, does not follow the same pattern (see Table 6.1 for counts of initial search strategy).

Table 6.1 Counts of Initial Search Strategy by Credibility Indicator

                              Number of Raters       % Raters Experts       Filter Sophistication
First work plan opened        Low        High        Low        High        Low        High
1st Listed & Rating is 5      23 (50%)   26 (63%)    20 (50%)   33 (73%)    22 (49%)   18 (45%)
Random & 1st Work Plan        23 (50%)   15 (37%)    19 (50%)   12 (27%)    22 (51%)   22 (55%)
Total                         46         41          39         45          44         40

Figures 6.2, 6.3, and 6.4 illustrate the mean strategy followed by those in the high versus low credibility indicator conditions. The main effect of credibility indicators is not significant for number of raters (F=1.741, p=.191) or filter recommendations (F=.322, p=.572), but it is significant for percentage of raters who are experts (F=4.631, p=.034). Also, opposite to expectations, in the filter recommendations experiment those in the high treatment condition appear not to follow ratings compared to those in the low treatment condition.

Figure 6.2 Initial Search Strategy Mean Plots for the Rater Sample Size Experiment
(Initial search strategy, from Follow Ratings to Not Follow Ratings, plotted for Match versus Mismatch conditions, with separate lines for low and high number of raters.)
Figure 6.3 Initial Search Strategy Mean Plots for the Rater Expertise Experiment
(Initial search strategy, from Follow Ratings to Not Follow Ratings, plotted for Match versus Mismatch conditions, with separate lines for low and high percentages of raters who are experts.)

Figure 6.4 Initial Search Strategy Mean Plots for the Collaborative Filter Experiment
(Initial search strategy, from Follow Ratings to Not Follow Ratings, plotted for Match versus Mismatch conditions, with separate lines for low and high filter sophistication.)

6.3.1.1.2 Solving the Task

The model predicts that once users have begun the initial search process, they will review content and decide whether it is helpful or wrong for the task. They could do this by using an additive linear search strategy, in which work plans are traded off against each other, or an elimination-by-aspects search strategy, in which the project steps of each work plan are traded off against each other (Payne 1976). To make this decision, users will use the rating, but first check whether the level of expertise of those providing input to the rating is higher than their own expertise. In the case of sample size, users, being novices, may assume average raters are more expert than themselves. In the case of source expertise, users may realize that high (low) expertise suggests the rating source is more (less) expert than they are.

To examine this process, ANOVA tests were performed with the dependent variables of percentage of clicks on items rated high (i.e., 4 or 5) and percentage of lines in the answer from items rated high (i.e., 4 or 5). For means and ANOVA results, see Table I.6, Percentage of Clicks on Work Plans Rated High (4 or 5), and Table I.4, Percentage of Lines in Answer from Work Plans Rated High (4 or 5), in Appendix I. In support of the modified model of Figure 6.1, Table I.6 illustrates that between high and low sample size (F=1.108, p=.295) and between high and low content recommendations (F=.007, p=.934), there is no difference in the percentage of clicks on work plans rated high (i.e., 4 or 5). Thus, sample size and content recommendations appear not to influence the decision to click on highly rated work plans. Meanwhile, a significant difference is found between high and low source expertise (F=3.334, p=.071). Perhaps highly rated content is selected more often when the source expertise of those providing the ratings is higher than the user's own expertise. However, in Table I.4, the same differences were not significant for the percentage of lines used in the answer from highly rated work plans (i.e., 4 or 5) for any of the experiments. But mean comparisons indicate that those with high source expertise (97% with accurate ratings and 79% with inaccurate ratings) included more lines from highly rated work plans on average than those with low source expertise (88% with accurate ratings and 77% with inaccurate ratings). Meanwhile, those with high sample size (27% with accurate ratings and 26% with inaccurate ratings) did not always include more lines from highly rated work plans on average than those with low sample size (5% with accurate ratings and 30% with inaccurate ratings). Also, for content recommendations, those with high filter sophistication (15% with accurate ratings and 41% with inaccurate ratings) did not always include more lines from highly rated work plans on average than those with low filter sophistication (18% with accurate ratings and 35% with inaccurate ratings).
This suggests subjects use ratings more to decide what work plans to use in their answer when the level of expertise of those providing the ratings is higher than their own expertise level.

6.3.1.2 Contribution to Decision Theory

The findings of this study support the decision theory view that a trigger may exist in the decision setting for using information in decision-making when solving the task (Sedlmeier and Gigerenzer 1997, 2000; Ilgen, Fisher and Taylor 1979). However, this trigger creates a specific need to know the level of expertise of the source of the information (i.e., rater expertise). A low degree of rating accuracy can trigger the use of source expertise information to help determine whether to use or ignore rating values in decisions regarding solving the task. Additionally, the outcome of this study provides a new setting indicating the importance of source credibility in rating information (Ilgen, Fisher and Taylor 1979). Given that novices are involved, the type of information conveyed by credibility indicators may matter more than statistically valid credibility indicators like rater sample size. Novices may be more interested in a source's expertise than in the number of sources providing input on solving the task. Finally, the content recommendation analysis does not provide additional contributions to decision theory, and more research is needed to understand its use in decision settings.

6.3.2 Practice

Practitioners struggle with how to sort, screen, and select items from the long lists of system content comprising search results. While much attention has been paid to developing better search algorithms to find more relevant and higher quality system content, little attention has been paid to what information will help users select from a list of search results (Ansari, Essegaier and Kohli 2000; Balabanovic and Shoham 1997). The knowledge task, search results list, ratings, and credibility indicators examined in this study are consistent with the knowledge systems and tasks of novice employees in consulting firms. Not all ratings, credibility indicators, or content recommendations help users to effectively utilize knowledge systems. Managers, knowledge system trainers, and consultants need to be made aware of the negative influence of inaccurate ratings and of the influence of credibility indicators and content recommendations. Understanding this, consultants can better manage their own search processes when trying to locate and reuse knowledge within the knowledge system (Orlikowski 1993, 2000). Additionally, managers may be more aware of the need to direct novice consultants to higher quality content, knowing that misleading ratings could impede their success.

Because ratings that inaccurately reflect content quality are difficult to overcome, firms may decide to allocate more resources to ensuring that ratings accurately reflect content quality when ratings are submitted to the system. Firms have been struggling to decide the best strategy for maintaining knowledge repositories that include only high quality content (Davenport and Hansen 1999). This study suggests maintaining correct ratings is more important than disclosing credibility indicators or filter sophistication, since rating correctness always impacts user behaviors and system usage outcomes. Only experts could be allowed to rate knowledge system content, or experts could verify submitted ratings before they are published on the system.
Finally, system designers can learn from this research to find better ways to incorporate more useful metrics into search result feedback and rating schemes. By knowing which credibility indicators and content recommendations influence decisions, system resources can be focused on counting, storing, and accumulating the information that matters most to decision makers. Since not every metric can be reported, system designers can use the limited space on search results screens to disclose only the most useful information to users and help them overcome inaccuracies in rating schemes. This study examined only a few of the many characteristics of rating information that could be built into system features. More research is needed to determine how the strength and scale type of content ratings, as well as text explanations and the consistency of credibility indicators, influence rating usage in decision-making.

6.4 Chapter Summary

The results of the statistical analyses presented in Chapter 5 were interpreted in this chapter. This interpretation considered the task performance measures, which were of primary interest in this study, as well as the information search data measures, which elaborate on apparent relationships in the data. Source expertise, and not sample size or content recommendations, was found to influence decision performance. The research model was modified to reflect the lessons learned from the theory discussed and the data analyzed.

CHAPTER 7

7. LIMITATIONS AND FUTURE DIRECTIONS

The strengths and important limitations of interpreting the research results are discussed in this chapter. The chapter concludes with directions for future research.

7.1 Strengths and Limitations

In order to ensure internal validity, the experimental design emphasized strong controls, which were a trade-off against external validity. The use of a controlled laboratory experiment was a strength of this research, as it controlled for intervening influences that could threaten the experimental manipulation or provide an alternative explanation of the results. Possible influential factors that were controlled include the use of a single source for research subjects, a single technology, a common physical environment, structured instrumentation, transparent collection of decision time and information search data, scripted experimental instructions, and a single researcher conducting the experimental sessions (see Appendix F for the administrative documents of the experiment). Student subjects, a controlled knowledge system simulation built for the experiment, a limited set of tasks, and the operationalization of the credibility indicators and content recommendations all reduce the direct generalizability of the results. This is, however, necessary to guarantee a valid test of the theories. Learning during task performance is another potential problem, which was minimized by assigning subjects to only one treatment condition. Many of the control variables used are measured with multi-item self-reported measures.
Second, the subjects were attending an undergraduate information systems class and had covered the domain of data modeling and database design. Third, a ten-minute instructional tutorial was verbally administered at the beginning of each experimental session. Also, instructional screens were added to the introduction material to refresh subjects’ memories of the domain and to explain how work plans are built and combined from knowledge system content. Finally, students were provided an incentive to participate in the study and post experimental interviews indicated that subjects found the experiment to be interesting and informative about the consulting job experience. Based on the decision performance, student subjects proved to be adequate decision makers to investigate the research questions, however prior to generalizing the results to other populations, possible differences between the decision-making abilities of business students and junior consultants should be considered. The task involved selecting line items from work plan examples provided to build a new work plan answer. The generalizabiltiy of these findings may be limited to comparable tasks. However, in general, when selecting from search results, users are free to use entire items or parts of items when creating new documents of any kind. The information processing required by this task is comparable to tasks across a range of 92 domains where old documents are re-used to build new ones, which is consistent with knowledge system usage behaviors. Mentioned previously, another limitation involves examining content recommendations and the level of filter sophistication in the context of finding reliable or high quality system content. One of the main purposes of content recommendations is to help users find additional relevant system content and not necessarily help find the highest quality content; which is the focus of this study. However, in an exploratory nature, it was predicted that content recommendation and filter sophistication levels might have some influence on the judgment of ratings and content quality. Future research is needed to determine more specifically how content recommendations influence system content selection and use judgments. Although the operationalizations of the credibility indicators and content recommendations were considered a strength due to the tight controls used, the between- subject design meant credibility indicators were always high or always low for any one subject. This is consistent with scenarios of between firm or between unit comparisons where some content domains are highly used and rated and others are not. However, the lack of variance of credibility indicator or content recommendations within a treatment condition may result in reduced generalizability to search results where this variance is high. A final limitation of this research is the one-time nature of the experimental session. Possibly, experience, both in processing similar tasks and in processing similar information, would change the effects of the content ratings and interactions in these results. 93 7.2 Future Research Directions The results of this research suggest that ratings influence decision-making performance and the source expertise of those ratings matters. The findings from these experiments provide an initial understanding of the relationship between content ratings and intellective tasks. 
Additional research should examine a broader range of credibility indicators and focus mostly on how to help individuals overcome misleading ratings.

First, inaccurate ratings did not appear to be a strong trigger of the use of rating credibility indicators. Researchers should examine more salient and motivating factors in the decision process to see if they prompt attention to credibility indicators and content recommendations. Additional research would be needed to determine what these salient and motivating factors are in the knowledge system environment.

Second, future research is needed to determine why credibility indicators do not always affect decisions about whether or not to rely on ratings. The Elaboration Likelihood Model (ELM) suggests the content itself and its rating could be part of the "central route" by which the knowledge system user judges the content's quality level. However, ELM also suggests credibility indicators and content recommendations could be part of the "central route" if they are actively attended to and have an effect on decisions, or part of the "peripheral route" if they are available but not consciously included in decisions about rating credibility and content quality (Petty and Cacioppo 1981; Stiff 1994). If credibility indicators and content recommendations are processed as part of the "peripheral route", studies have shown they are probably used to rationalize decisions instead of influencing them as in the "central route" (Areni, Ferrell and Wilcox 2000).

In conjunction with ELM, research needs to determine whether individuals believe content ratings are "group opinions" or just a quality metric. If users think of ratings as "group opinions", then ELM may imply the use of the heuristic that consensus implies correctness. More studies are needed to determine the exact nature of what people think of rating values. Another reason credibility indicators may not always influence knowledge system users' decisions is that processing all the information is costly (i.e., it takes time and attention to consider all the factors in determining whether ratings are credible). Research has shown humans under-use helpful information in decision tasks (Connolly and Thorn 1987) because of the declining payoff of looking at one more piece of information and the complexity of combining all the information reviewed (Connolly and Thorn 1987). Thus, knowledge system users may not be able to trade off the costs of using all the information provided (the content itself, content ratings, and credibility indicators or content recommendations) against the benefits of reducing uncertainty about rating credibility and content quality. Future research should examine the tradeoff between ratings, credibility indicators, effort, and persistence in solving the task.

Also, knowledge system users may miscalibrate how well they are doing in the decision task and think they are performing well without using the credibility indicator and content recommendation information (Phillips 1973; Yates 1990). While calibration was used to analyze decision performance, it was not the focus of this study. This study indicates those who knew whether ratings were helpful or knew their performance level were able to perform more effectively, but not faster or slower. Future research should
Based on the information search strategy literature, the results of this study indicate systematic patterns of search could be associated with different rating information. More research is needed to determine what types of information systems and rating information are associated with additive linear, additive difference, conjunctive and elimination-by-aspects patterns of searching (Payne 1976). Since there is a connection between search patterns and use of information, determining how information searches takes place could inform what credibility indicators and content recommendations information to make available to system users. Post hoc analyses on search pattern data suggested individuals do not always look at the highest rated items first in a list of search results as a prior expected. Prior research suggests people do not scan beyond the first page of search results (Jansen, Spink and Saracevic 2000). The modified decision model of this study suggests rating credibility indicators may have a role. However, little is known about how users manage using a long list of search results and future research is needed to explain how people determine what to select and review in this context. Future research is needed to provide insights regarding the use of collaborative filter recommendations. As collaborative filter algorithms become more widely used in knowledge systems, and other systems (i.e., Internet shopping), understanding the influence of this information on decision-making becomes more important. Future work is needed to understand whether and how recommendations influence beliefs about rating correctness or content quality. People may discount recommendations immediately 96 because algorithm assumptions or degree of fit with others’ preferences are not disclosed. However, people may rely on and use recommendations believing system generated information is better than other information. Given low task experience and high uncertainty involved with judging what content is high quality, knowledge system users could be using credibility indicators and content recommendations as self-monitoring feedback. High self-monitors seek and use information from others (i.e., credibility indicators and content recommendations) to manage their behavior, while low self-monitors are not so concerned and do not pay attention to the information from others (Snyder 1974, 1987). While this was captured as a control variable, future research should examine this and other individual differences and how they influence the use of information in the knowledge system content usage environment. Based on Table 3.1, the characteristics of content ratings, credibility indicators and content recommendations, studies are needed to determine how rating strength or scale type as well as text explanations and rating consistency influence rating usage. Understanding how different types of information about rating influence whether individuals use or discount ratings will better prepare users and system designers in ways to improve the effective usage of knowledge system content. Finally, while the context of this study was knowledge system repositories usage, the use of rating information extends to other contexts such as Internet shopping or bulletin board information sharing. The theoretical discussions of this study could apply to these other contexts where individuals’ a prior belief structures may vary based on context. 
Given the low task experience and high uncertainty involved in judging what content is high quality, knowledge system users could be using credibility indicators and content recommendations as self-monitoring feedback. High self-monitors seek and use information from others (i.e., credibility indicators and content recommendations) to manage their behavior, while low self-monitors are less concerned and do not pay attention to information from others (Snyder 1974, 1987). While self-monitoring was captured as a control variable, future research should examine this and other individual differences and how they influence the use of information in the knowledge system content usage environment.

Based on Table 3.1, which summarizes the characteristics of content ratings, credibility indicators, and content recommendations, studies are needed to determine how rating strength or scale type, as well as text explanations and rating consistency, influence rating usage. Understanding how different types of information about ratings influence whether individuals use or discount ratings will better prepare users and system designers to improve the effective usage of knowledge system content.

Finally, while the context of this study was knowledge system repository usage, the use of rating information extends to other contexts such as Internet shopping or bulletin board information sharing. The theoretical discussions of this study could apply to these other contexts, where individuals' a priori belief structures may vary based on context. Shopping for a book on the Internet is different from using old work products from a repository to create new work products. When shopping on the Internet, people may believe a priori that ratings involve a higher degree of intentional inaccuracy and may be more skeptical of rating values than when using a knowledge repository. Future work is needed to understand how the results of this study change based on different system contexts.

8. REFERENCES

Alavi, M. and Leidner, D.E. "Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues," MIS Quarterly (25:1), March 2001, pp. 107-136.

Ambrosio, J. "Knowledge Management Mistakes," Computerworld (July 3), 2000, p. 44.

Ansari, A., Essegaier, S. and Kohli, R. "Internet Recommendation Systems," Journal of Marketing Research (August), 2000, pp. 363-375.

Areni, C.S., Ferrell, M.E., and Wilcox, J.B. "The Persuasive Impact of Reported Group Opinions on Individuals Low vs. High in Need for Cognition: Rationalization vs. Biased Elaboration?" Psychology and Marketing (17:10), 2000, pp. 855-875.

Argote, L. and Ingram, P. "Knowledge Transfer: A Basis for Competitive Advantage in Firms," Organizational Behavior and Human Decision Processes (82:1), May 2000, pp. 150-169.

Ba, S., Stallaert, J. and Whinston, A.B. "Optimal Investment in Knowledge Within a Firm Using a Market Mechanism," Management Science (47:9), 2001, pp. 1203-1219.

Balabanovic, M. and Shoham, Y. "Fab: Content-based, Collaborative Recommendation," Communications of the ACM (40:3), 1997, pp. 66-72.

Batra, R. and Ray, M.L. "Situational Effects of Advertising Repetition: The Moderating Influence of Motivation, Ability, and Opportunity to Respond," Journal of Consumer Research (12), 1986, pp. 432-445.

Beach, L.R., Mitchell, T.R., Deaton, M.D., and Prothero, J. "Information Relevance, Content and Source Credibility in the Revision of Opinions," Organizational Behavior and Human Performance (21), 1978, pp. 1-16.

Beaulieu, P. "Commercial Lenders' Use of Accounting Information in Interaction With Source Credibility," Contemporary Accounting Research (Spring), 1994, pp. 557-585.

Beaulieu, P. "The Effects of Judgments of New Clients' Integrity Upon Risk Judgments, Audit Evidence, and Fees," Auditing: A Journal of Practice and Theory (20:2), 2001, pp. 85-99.

Birnbaum, M.H., Wong, R. and Wong, L.K. "Combining Information From Sources That Vary in Credibility," Memory & Cognition (4:3), 1976, pp. 330-336.

Borchers, A. "Trust in Internet Shopping: A Test of Measurement Instrument," Proceedings of the Seventh Americas Conference on Information Systems (August), 2001, pp. 799-802.

Brajnik, G., Mizzaro, S., Tasso, C. and Venuti "Strategic Help in User Interfaces for Information Retrieval," Journal of the American Society for Information Science and Technology (53:5), 2002, pp. 343-358.

Cheung, C. and Lee, M. "Trust in Internet Shopping: A Proposed Model and Measurement Instrument," Proceedings of the Sixth Americas Conference on Information Systems (August), 2000, pp. 681-689.

Chow, C.W., Deng, J.F. and Ho, J.L. "The Openness of Knowledge Sharing within Organizations: A Comparative Study in the United States and People's Republic of China," Journal of Management Accounting Research (12), 2000, pp. 65-95.

Cohen, J. and Cohen, P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1983.

Coleman, D. and Irving, G.
“The Influence of Source Credibility Attributions on Expectancy Theory Predictions of Organizational Choice,” Canadian Journal of ’ Behavioral Science (29), 1997, p. 122-131. Cormolly, T. and Porter, A. “Discretionary Databases in Forecasting,” Journal of Forecasting (9), 1990, pp. 1-12. Connolly, B.K. and Thorn, T. “Pre-decisional Information Acquisition: Effects of Task Variables on Sub-optimal Search Strategies,” Organizational Behavior and Human Decision Processes (39:3), 1987, pp. 397-417. Constant, D., Kiesler, S. and Sproull, L. “What's Mine is Ours, or Is it? A Study of Attitudes about Information Sharing,” Information Systems Research (5:4), 1994, pp. 400-421. Cosley, D., Lam, S.K., Albert, 1., Konstan, J .A. and Riedl, J. “Is Seeing Believing? How Recomender Interfaces Affect Users’ Opinions,” Proceedings for CH1 2003 (5:1), April 5-10, 2003, pp. 585-592. Cramton, CD. “The Mutual Knowledge Problem and the Consequences for Dispersed Collaboration,” Organization Science (12:3), 2001, pp. 346-371. Davenport, T.H., DeLong, D.W. and Beers, M.C. “Successful Knowledge Management Projects,” Sloan Management Review (39:2), 1998, pp. 43-57. Davenport, TH. and Hansen, M.T. “Knowledge Management at Andersen Consulting,” Harvard Business School Case (9-499-032) July 7, 1999. Davenport, TH. and Prusak, L. Working Knowledge: How Organizations Manage What Hey Know. Boston, Massachusetts: Harvard Business School Press, 1998. DeLong, D.W. and Fahey, L. “Diagnosing Cultural Barriers to Knowledge Management,” Academy of Management Executive (14:4), 2000, pp. 113-127. 101 Denes-Raj, V. and Epstein, S. “Conflict Between Intuitive and Rational Processing: When People Behave Against Their better Judgment,” Journal of Personality and Social Psychology (66:5), 1994, pp. 819-829. DeTienne, KB, and Jackson, L.A. “Knowledge Management: Understanding Theory and Developing Strategy,” Competitiveness Review (11:1), 2001, pp. 1-11. Droge, C. “Shaping the Route to Attitude Change: Central Versus Peripheral Processing Through Comparative Versus Non-comparative Advertising, " Journal of Marketing Research (26), 1989, pp. 377-388. Edvinsson, L. and Sullivan, P. “Developing a Model For Managing Intellectual Capital,” European Management Journal (14:4), August 1996, p. 356. Einhom, H.J. “Use of Nonlinear, Non-compensatory Models as a Function of Task and Amount of Informaiton,” Organizational Behavior and Human Decision Processes (9:1), 1971, pp. 1-27. Epstein, S. “Integration of the Cognitive and the Psychodynamic Unconscious,” American Psychologist (49:8), 1994, pp. 709-724. Erickson, GS. and Rothberg, H.N. "Intellectual Capital And Competitiveness: Guidelines For Policy,” Competitiveness Review (10:2), 2000, pp. 192-198. Falconer, J. “Implementing a Dynamic Corpus Management System Within a Global Consulting Practice,” International Journal of Technology Management (18), 1999, pp. 520-534. Feather, N.T. “The Study of Persistence,” Psychological Bulletin (59:2), 1962, pp. 94- l 15. F lanagin, A.J. and Metzger, M.J. “Perceptions of Internet Information Credibility,” Journalism and Mass Communication Quarterly (77:3), 2000, pp. 515-540. Floyd, SW. and Woolridge, B. “Knowledge Creation And Social Networks In Corporate Entrepreneurship--The Renewal Of Organizational Capability,” Entrepreneurship Theory And Practice (23:3), Spring 1999, pp. 123-143. F ritch, J .W. and Cromwell, R.L. 
“Evaluating Internet Resources: Identity, Affiliation and Cognitive Authority in a Networked World,” Journal of the American Society for Information Science and Technology (52:6), 2001, pp. 499-507. Ford, N., Miller, D. and Moss, N. “The Role of Individual Differences in Internet Searching: An Empirical Study,” Journal of the American Society for Information science and Technology (52:12), 2001, pp. 1049-1066. Gallagher, C.A. “Perceptions of the Value of a Management Information System,” Academy of Management Journal (17:1), March 1974, p. 46-55. 102 Gigerenzer, G. and Todd, P.M. Simple Heuristics That Make Us Smart. New York, New York: Oxford University Press, 1999. Graham, AB. and Pizzo, V.G. “A Question of Balance: Case Studies in Strategy Knowledge Management,” European Management Journal (14:4), August 1996, pp. 338- 347. Grant, R.M. “Prospering in Dynamically-Competitive Environments: Organization Capability As Knowledge Integration,” Organization Science (7:4) J uly-August, 1996, pp. 375-387. Gray, P.H. “A Problem Solving Perspective On Knowledge Management Practices,” Decision Support Systems (31:1), 2001, pp.87-102. Greco, J. “Knowledge is Power,” Journal of Business Strategy (20:2), March/April 1999, pp. 18-22. Gregor, S. and Benbasat, I. “Explanations From Intelligent Systems: Theoretical Foundations And Implications For Practice,” MIS Quarterly (23:4), 1999, pp. 497-530. Griffin, D. and Tversky, A. “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology (24), 1992, pp. 411-435. Hackbarth, G. and Grover, V. “The Knowledge Repository: Organizational Memory Information System,” Information Systems Management (Summer), 1999, pp. 21-30. Hair, J .F ., Anderson, R.E., Tatham, R.L. and Black Multivariate Data Analysis. New York, N.Y.: Collier Macmillan, 1998. Hansen, M. and Haas, M. “Competing for Attention in Knowledge Markets: Electronic Document Dissemination in a Management Consulting Company,” Administrative Science Quarterly (46), 2001, pp. 1-28. Hansen, M. and Morten, T. “The Search-Transfer Problem: The Role Of Weak Ties In Sharing Knowledge Across Organization Subunits,” Administrative Science Quarterly (44:1), 1999, pp. 82-111. Hansen, M., Nohria, N. and Tierney, T. “What’s Your Strategy for Managing Knowledge?” Harvard Business Review (March-April), 1999, pp. 106-116. Hill, W., Stead, L., Rosenstein, M. and F urnas, G. “Recommending and Evaluating Choices in a Virtual Community of Use,” Proceedings of the CHI’ 95 Mosaic of Creativity (Denver, Colorado), May 7-11, 1995, pp.l94-201. Hirst, D.E., Koonce, L. and Miller, J. “The Joint Effect of Management’s Forecast Accuracy and the Form of its Financial Forecasts on Investor Judgment,” Journal of Accounting Research (37 Supplement), 1999, pp. 101-123. 103 Hjorland, B. “Towards a theory of aboutness, subject, topicality, theme, domain, field, content... and relevance,” Journal of the American Society for Information Science and Technology (52:9), 2001, pp. 774-778. Holsapple, CW. and J oshi, K.D. “Organizational knowledge resources,” Decision Support Systems (31), 2001, pp. 39-54. Holzner, B. and Marx, J .H. “Some Historical Notes 0 the Sociology of Knowledge,” In Knowledge Application: The Knowledge System in Society. Boston, Mass: Allyn and Bacon, 1979, pp. 43-76. Housel, T.J., El Sawy, O.A., Zhong, J. and Rodgers, W. 
“Measuring The Return On Knowledge Embedded In Information Technology,” Proceedings of the T wenty-Second Annual International Conference on Information Systems (New Orleans, Louisiana), December 16-19, 2001. Hovland, CI. and Weiss, W. “The Influence of Source Credibility on Communication Effectiveness,” Public Opinion Quarterly (15), 1951, pp. 635-650. Howard, D.L. “Pertinence as Reflected in Personal Constructs,” Journal of the American Society for Information Science (45:3), 1994, pp. 602-615. Ilgen, D.R., Fisher, CD, and Taylor, M.S. “Consequences of Individual Feedback on Behavior in Organizations,” Journal of Applied Psychology (64), 1979, pp. 349-371. Irn, I. and Hars, A. “Finding Information Just For You: Knowledge Reuse Using Collaborative Filtering Systems” Proceedings of the T weary-Second Annual International Conference on Information Systems, New Orleans (Louisiana), December 16-19, 2001. J adad, AR. and Gagliardi, A. “Rating Health Information on the Internet: Navigating to Knowledge or to Babel?” The Journal of the American Medical Association (279:8), 1998, pp. 611-614. ' Jansen, J.J., Spink, A. and Saracevic, T. “Real Life, Real Users, And Real Needs: A Study And Analysis Of User Queries On The Web,” Information Processing and Management (36:2), 2000, pp. 207-227. Kankanhalli, M.S., Tan, B.C.Y. and Wei, K.K. “Seeking Knowledge In Electronic Knowledge Repositories: An Exploratory Study,” Proceedings of the T wenty-Second Annual International Conference on Information Systems, New Orleans (Louisiana), December 16-19, 2001, pp. 123-133. Kardes, F.R. “Spontaneous Inference Processes in Advertising: The Effects of Conclusion Omission and Involvement on Persuasion,” Journal of Consumer Research (15), 1988, pp. 225-233. Kennedy, J. and Peecher, M.E. “Judging Auditors’ Technical Knowledge,” Journal of Accounting Research (35:2), 1997, pp. 279-293. 104 Keppel, G. Design and Analysis: A Researcher ’s Handbook. Engelwood Cliffs, N.J.: Prentice-Hall, 1973. Keren, G. and Lewis, C. “Even Bernoulli Might Have Been Wrong: A Comment on Intuitions About Sample Size,” Journal of Behavioral Decision Making (13), 2000, pp. 125-132. . Kerlinger, F.N. Foundations of Behavioral Research. Orlando, Florida: Harcourt Brace and Company, 1986. . Khalifa, M., Lam, R. and Lee, M. “An Integrative Framework For Knowledge Management Effectiveness,” Proceedings of the T wenty-Second Annual International Conference on Information Systems, New Orleans (Louisiana), December 16-19, 2001. Koriat, A. “How Do We Know That We Know? The Accessibility Model of the Feeling of Knowing,” Psychological Review (100:4), 1993, pp. 609-639. Kubr, M. Management Consulting: A Guide to the Profession, Geneva, Switzerland: International Labor Office, 1996. Kunda, Z., and Nisbett, RE. “The Psychometrics of Everyday Life,” Cognitive Psychology (18), 1986, pp. 195-224. . Larcker, DE and Lessig, V.P. “Perceived Usefulness of Information: A Psychometric Examination,” Decision Sciences (11), 1980, pp. 121. Levy, P. E., Albright, M. D., Cawley, B. D. and Williams, J. R. “Situational And Individual Determinants Of Feedback Seeking: A Closer Look At The Process,” Organizational Behavior and Human Decision Processes (62:1), 1995 pp. 23-38. Lynch, C.A. “When Documents Deceive: Trust And Provenance As New Factors For Information Retrieval In A Tangled Web,” Journal Of The American Society For Information Science And Technology (52:1), 2001, pp. 12-17. Maglaughlin, KL. and Sonnenwald, D.H. 
“User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments,” Journal of the American Society for Information Science and Technology (53:5), 2002, pp. 327- 342. Maheswaran, D. and Chaiken, S. “Promoting Systematic Processing in low-Motivation Settings: Effect of Incongruent Information on Processing and Judgment, ” Journal of Personality and Social Psychology (61), 1991, pp. 13-25. Maister, D. Managing the Professional Service Firm. New York, New York: The Free Press, 1993. McGrath, J. E. Groups: Interaction and Performance. Englewood Cliffs, New Jersey: Prentice Hall, 1984. 105 Mehra, A., Kilduff, M. and Brass, DJ. “The Social Networks of High and Low Self- Monitors—Implications for Workplace Performance,” Administrative Science Quarterly (46:1), 2001, pp. 121-146. Miniard, P.W., Bhatla, S., Lord, K.R., Dickson, PR. and Unnava, H.R. “Picture-Based Persuasion Processes and the Moderating Role of Involvement,” Journal of Consumer Research (18), 1991, pp. 92-107. Moenaert, R.K., Deschoolmeester, D., Meyer, A. and Souder, W.E. “Information Styles of Marketing and R&D Personnel During Technological Product Innovation Projects,” R&D Management (22:1), 1992, pp. 21 -40. Montgomery, H. and Svenson, 0. “On Decision Rules and Information Processing Strategies for Choices Among Multi-attribute Alternatives,” Scandinavian Journal of Psychology (17), 1976, pp. 283-291. Mullin, R. “Knowledge Management: A Cultural Evolution,” Journal of Business Strategy (September/October), 1996, pp. 56-59. Munch, J.M. and Swasy, J .L. “Rhetorical Questions, Summarization Frequency, and Argument Strength Effects on Recall,” Journal of Consumer Research (15), 1988, pp. 69- 76. Murch, R. Project Management: Best Practice for IT Professionals. Upper Saddle River, NJ: Prentice Hall, 2001. Nahapiet, J. and Ghoshal, S. “Social Capital, Intellectual Capital, And The Organizational Advantage,” Academy of Management Review (23:2), 1998, pp. 222-266. Nelson, KM. and Cooprider, J .G. “The Contribution Of Shared Knowledge To IS Group Performance,” MIS Quarterly (20:4), December 1996, pp. 409. Nelson, M.W., Bloomfield, R., Hales, J .W. and Libby, R. “The Effect of Information Strength and Weight on Behavior in Financial Markets,” Organizational Behavior and Human Decision Processes (86:2), 2001, pp. 168-196. Neter, J ., Kutner, J ., Nachtsheim, Wasserman, W. Applied Linear Regression Models. Homewood, 11].: RD. Irwin, 1996. Newell, A. and Simon, H.A. Human Problem Solving. Englewood Cliffs, N.J.: Prentice- Hall, 1972. Nonaka, I. “The Dynamic Theory of Organizational Knowledge Creation,” Organization Science (5:1), 1994, pp.l4-38. Nonaka, I. and Takeuchi, H. The Knowledge-Creating Company: How Japanese Companies Create The Dynamics Of Innovation. New York, NY: Oxford University Press, 1995. 106 Nonaka, I. and Takeuchi, H. “A Theory Of Organizational Knowledge Creation,” International Journal of Technology Management: Special Publication on Unlearning and Learning (11:7/8), 1996, pp. 833-847. O’Dell, C. and Grayson, C.J. “If Only We Knew What We Know: Identification and Transfer Of Internal Best Practices”, California Management Review (40: 3), Spring 1998, pp. 154-174. O'Leary, D. “Using AI in Knowledge Management: Knowledge Bases And Ontology,” IEEE Intelligent Systems 1998. O’Leary, D. “KMPG Knowledge Management I: From Shadow Partner to K-Man to K- Web to K—World to Cering,” Working Paper (December), 2001a. O’Leary, D. 
“KPMG Knowledge Management 11: Innovation Diffusion,” Working Paper (December), 2001b. Olivera, F. “Memory Systems In Organizations: An Empirical Investigation Of Mechanisms For Knowledge Collection, Storage And Access,” Journal of Management Studies (37:6), September 2000, pp. 811-832. Olson, J .M. and Cal, A.V. “Source Credibility, Attitudes, and the Recall of Past Behaviors,” European Journal of Social Psychology (14), 1984, pp. 203-210. Orlikowski, W. “Learning From Notes: Organizational Issues in Groupware Implementation,” The Information Society (9), 1993, pp. 237-250. Orlikowski, W. “Using Technology And Constituting Structures: A Practice Lens For Studying Technology In Organizations,” Organization Science (11:4), 2000, pp. 404-428. Orlikowski, W.J. and Hofrnan, J .D. “An Irnprovisational Model For Change Management: The Case Of Groupware Technologies,” Sloan Management Review (38:2), Winter 1997, pp. 1 1-21. Pan, S.L. and Scarbrough, H. “Knowledge Management in Practice: An Exploratory Case Study,” Technology Analysis & Strategic Management (11:3), 1999, pp. 359-374. Park, T.K. “Toward a Theory of User-Based Relevance: A Call for a New Paradigm of Inquiry,” Journal of the American Society for Information Science (45:3), 1994, pp. 135- 141. Payne, J. W. ‘”Task Complexity And Contingent Processing In Decision-Making - An Information Search And Protocol Analysis,” Organizational Behavior and Human Performance (16:2), 1976, pp. 366-387. Pentland, B. T. “Information Systems And Organizational Learning: The Social Epistemology Of Organizational Knowledge Systems,” Accounting, Management & Technology (5:1), 1995, pp. l-21. 107 Petty, RE. and Cacioppo, J .T. Attitudes and Persuasion: Classic and Contemporary Approaches. Dubuque, Iowa: Wm. C. Brown, 198 l . Petty, R.E., Cacioppo, J .T. and Schumann, D. “Central and Peripheral Routes to Advertising Effectiveness: The Moderating Role of Involvement, ”Journal of Consumer Research (10), 1983, pp. 134-148. Phillips, L.D. Bayesian Statistics for Social Sciences. London: Nelson, 1973. Pirolli, P. and Wilson, M. “A Theory of the Measurement of Knowledge Content, Access, and Learning”, Psychological Review (105:1), 1998, pp. 58-82. Quinn, J .3. Intelligent Enterprise. New York: The Free Press, 1992. Rangan, S. “The Problem Of Search And Deliberation In Economic Action: When Social Networks Really Matter,” Academy of Management Review (25:4), 2000, pp. 813-828. Ratneshwar, S., and Chaiken, S. “Comprehension’s Role in Persuasion: The Case of its Moderating Effect on the Persuasive Impact of Source Cues,” Journal of Consumer Research (18), 1992, pp. 52-62. Rhine, R]. and Kaplan, RM. “The Effect of Incredulity Upon Evaluation of the Source of a Communication,” The Journal of Social Psychology (88), 1972, pp. 255-266. Riesenberger, J. R. “Executive Insights: Knowledge--the Source Of Sustainable Competitive Advantages,” Journal of international Marketing (6:3), 1998, pp. 94-107. Rivkin, J .W. “Imitation of Complex Strategies,” Management Science (46:6), June 2000, pp. 824-844. Roos, J. and Von Krogh, G. “The Epistemological Challenge: Managing Knowledge and Intellectual Capital,” Editorial and Overview in European Management Journal (14:4), August 1996, pp. 333-338. Rosenau, M.D. Successful Project Management: A Step-by-Step Approach with Practical Examples. New York, NY: John Wiley and Sons, 1998. Ryan, SD. and Prybutok, V.R. “Factors Affecting The Adoption Of Knowledge Management Technologies: A Discriminative Approach,” 2001, ????. 
Sandelands, L.E., Brockner, J. and Glynn, M.A. “If At First You Don’t Succeed, Try, Try Again: Effects of Persistence-Performance Contingencies, Ego Involvement, and Self- Esteem on Task Persistence,” Journal of Applied Psychology (73:2), 1988, pp. 208-216. Sarvary, M. “Knowledge Management and Competition in the Consulting Industry,” California Management Review (41 :2), Winter 1999, pp. 95-107. 108 Schumann, D.W., Petty, RE, and Clemons, D.S. “Predicting the Effectiveness of Different Strategies of Advertising Variation: A Test of the Repetition-Variation Hypothesis,” Journal of Consumer Research (17), 1990, pp. 192—202. Sedlrneier, P. "The Distribution Matters: Two Types of Sarnple-Size Tasks,” Journal of Behavioral Decision Making (1 1), 1998, pp. 281-301. Sedlrneier, P. and Gigerenzer, G. “Intuitions About Sample Size: The Empirical Law of Large Numbers,” Journal of Behavioral Decision Making (10), 1997, pp. 33-51. Sedlrneier, P. and Gigerenzer, G. “Was Bernoulli Wrong? On Intuitions About Sample Size,” Journal of Behavioral Decision Making (1 3), 2000, pp. 133-139. Settle, R. B. and Golden, L. L. “Attribution Theory and Advertiser Credibility,” Journal of Marketing Research (1 1), May 1974, pp. 181-185. Shah, P. “Network Destruction The Structural Implications Of Downsizing,” Academy Of Management Journal (43:1), 2000, pp. 101-112. Shon, J. and Musen, MA. “The Low Availability of Metadata Elements For Evaluating The Quality of Medical Information on the World Wide Web,” Proceedings of American Medical Informatics Association Symposium, 1999, pp. 945-949. Simonson, I., Huber, J. and Payne, J. “The Relationship Between Prior Brand Knowledge and Information Acquisition Order,” Journal of Consumer Research (14), March 1988, pp. 566- 578, Slater, MD. and Rouner, D. “How Message Evaluation and Source Attributes May Influence Credibility Assessment and Belief Change,” Journalism and Mass Communication Quarterly (73:4), 1992, pp. 974-991. Slovic, P. and Lichtenstein, S. “Comparison of Bayesian ad Regression Approaches to the Study of Information Processing in Judgment,” Organizational Behavior and Human Performance (6), 1971, pp. 649-744. Snyder, M. “The Self-Monitoring of Expressive Behavior,” Journal of Personality and Social Psychology (30), 1974, pp. 526-537. Snyder, M. Private Appearances/Pubic Realities: The Psychology of Self-Monitoring. New York: Freeman, 1987. Snyder, M. and Gangestad, S. “On the Nature of Self-Monitoring: Matters of Assessment, Matters of Validity,” Journal of Personality and Social Psychology (51:1), 1986, pp. 125-139. Spink, A. and Greisdorf, H. “Regions and Levels: Measuring and Mapping Users' Relevance Judgments,” Journal of the American Society for Information Science and Technology (52:2), 2001, pp. 161-173. 109 Standifird, S.S. “Reputation And E-Commerce: Ebay Auctions And The Asymmetrical Impact Of Positive And Negative Ratings,” Journal of Management (27), 2001, pp. 279- 295. Stein and Zwass “Actualizing Organizational Memory with Information Systems,” Information Systems Research, (6:2), June 1995, pp. 85-117. Stewart, DD. and Strasser, G. “Information Sampling n Collective Recall groups Versus decision Making Groups,” Poster presented at the 65” Annual Meeting of the Midwestern Psychological Association, Chicago, Illinois, 1993. Stiff, J .B. Persuasive Communication. New York: The Guilford Press, 1994. Strasser, G., Stewart, D., and Wittenbaum, G.M. 
“Expert Roles and Information exchange During Discussion: The Importance of Knowing Who Knows What,” Journal of Experimental Social Psychology (31), 1995, pp. 244-265. Straus, S.G. and McGrath, J .E. “Does the Medium Matter? The Interaction of Task Type and Technology on Group Performance and Member Reactions,” Journal of Applied Psychology (79:1), 1994, pp. 87-99. Sundnar, S. S. “Effect of Source Attribution on Perception of Online News Stories,” Journalism and Mass Communication Quarterly (75:1), 1998, pp. 55-68. ' Sundnar, S. S. “Exploring Receivers' Criteria for Perception of Print and Online News,” Journalism and Mass Communication Quarterly (76:2), 1999, pp. 373-3 86. Svenson, 0. “Process Descriptions of Decision Making,” Organizational Behavior and Human Performance (23:1), 1979; pp. 86-112. Swanson, E.B. “Management Information Systems: Appreciation and Involvement,” Management Science (21:2), October, 1974, pp. 178-188. Tate, M. and Alexander, J. “Teaching Critical Evaluation Skills For World Wide Web Resources,” Computers in Libraries (16:10), 1996, pp. 49-55. Teece, D. “Capturing Value from Knowledge Assets: The New Economy Markets For Know-How, And Intangible Assets” California Management Review (40:3), Spring, 1995,pp.55. Teece, D. “Strategies for Managing Knowledge Assets: The Role Of Firm Structure And Industrial Context,” Long Range Planning (33), 2000, pp. 33-54. Tiwana, A The Knowledge Management Toolkit: Practical Techniques for Building a Knowledge Management System. Upper Saddle River, N.J.: Prentic Hall, 2000. 110 Thomas, J .B., Sussman, SW. and Henderson, J .C. “Understanding “Strategic Learning”: Linking Organizational Learning, Knowledge Management, And Sense Making,” Organization Science (12:3), 2001, pp. 331-345. Thompson, L.L., Levine, J .M. and Messick, D.M. Shared Cognition In Organizations: The Management Of Knowledge. Mahwah, N.J.: L. Erlbaum, 1999. Todd, P. and Benbasat, I. “The Use Of Information In Decision Making: An Experimental Investigation of the Impact of Computer-Based Decision Aids,” MIS Quarterly (16:3), 1992, pp. 373-393. Tversky, A. “Intransitivity of Preferences,” Psychological Review (January), 1969 pp. 31- 48. Tversky, A. and Kahneman D. “Belief in the Law of Small Numbers,” Psychological Bulletin (76), 1971, pp. 105-110. Tversky, A. and Kahneman D. “Judgment Under Uncertainty: Heuristics and Biases,” Science (185), 1974, pp. 1124-1131. . Van Overwalle, F. and Van Rooy, D. “When More Observations are Better Than Less: A Connectionist Account of the Acquisition of Causal Strength,” European Journal of Social Psychology (31), 2001, pp. 155-175. .Wallsten, T.S. Cognitive Processes in Choice and Decision Behavior. Hillsdale, N.J.: L. Erlbaum, 1980. - Walster, E., Aronson, E. and Abrahams, D. “On Increasing the Persuasiveness of a Low Prestige CoMunicator,” Journal of Experimental Social Psychology (2), 1966, pp. 325- 342. Wathen, C.N. and Burkell, J. “Believe It or Not: Factors Influencing Credibility on the Web,” Journal of the American Society for Information Science and Technology (53:2), 2002, pp. 134-144. Wegner, D.M. “Transactive Memory: A Contemporary Analysis of the Group Mind,” In‘B. Mullen and GR. Goethals (Eds.) Theories of Group Behavior, New York: Springer-Verlag, 1986. Weiner, B. “’Spontaneous’ Causal Thinking,” Psychological Bulletin (97:1), 1985, pp. 74- 84. Whitten, J .L., Bentley, L.D., and Dittman, K.C. Systems Analysis and Design Methods, Boston, Massachusetts: McGraw-Hill Higher Education, 2000. Wijnhoven, F. 
"Development Scenarios For Organizational Memory Information Systems," Journal of Management Information Systems (16:1), Summer 1999, pp. 121-146.

Wilton, P.C. and Myers, J.G. "Task, Expectancy and Information Assessment Effects in Information Utilization Processes," Journal of Consumer Research (12), March 1986, p. 469.

Wong, P.T.P. and Weiner, B. "When People Ask 'Why' Questions and the Heuristics of Attributional Search," Journal of Personality and Social Psychology (40), 1981, pp. 650-663.

Wright, K. "Perceptions of On-Line Support Providers: An Examination of Homophily, Source Communication and Social Support Within On-line Groups," Communication Quarterly (48:1), Winter 2000, pp. 44-59.

Wrightsman, L.S. "Interpersonal Trust and Attitudes Toward Human Nature," in Robinson, J.P., Shaver, P.R., and Wrightsman, L.S. (Eds.) Measures of Personality and Social Psychological Attitudes, Volume I in Measures of Social Psychological Attitudes Series. San Diego, CA: Academic Press, 1991.

Yao, Y.Y. "Measuring Retrieval Effectiveness Based on User Preferences of Documents," Journal of the American Society for Information Science (46:2), 1995, pp. 133-145.

Yates, J.F. Judgment and Decision Making. Englewood Cliffs, N.J.: Prentice-Hall, 1990.

Zack, M. "Managing Codified Knowledge," Sloan Management Review (Summer), 1999, pp. 45-58.

Zmud, R.W. "An Empirical Investigation of the Dimensionality of the Concept of Information," Decision Sciences (9), 1978, p. 187.

9. APPENDIX

Appendix A: Experimental Cells

Providing Content Ratings (baseline condition)

                                                     Cell
  Rating Level and          Accurate                   1
  Content Quality           Inaccurate                 2

Providing Rater Sample Size

                                        Rater Sample Size (number of raters)
                                             Low            High
  Rating Level and          Accurate          3               5
  Content Quality           Inaccurate        4               6

Providing Rater Expertise

                                        Rater Expertise (% of raters who are experts)
                                             Low            High
  Rating Level and          Accurate          7               9
  Content Quality           Inaccurate        8              10

Providing Collaborative Filtering

                                        Collaborative Filtering (degree of sophistication)
                                             Low            High
  Rating Level and          Accurate         11              13
  Content Quality           Inaccurate       12              14

Appendix B: Screen Prints of Experimental Materials

[Screen print: 1. Overview - explains that consultants deliver client services using large electronic Knowledge Systems containing work materials from prior client jobs, which are used to find materials to re-use (i.e., to not "re-invent the wheel") when performing specific tasks such as building an initial list of steps for a new client job; that consultants cannot usually know for certain which work plans in the Knowledge System are best suited as input, since some will be better than others; and that the participant, as a first-year consultant, must identify old work plans in the Knowledge System, select parts of them to build the best new work plan for a new client job, and afterwards answer some questions about their beliefs.]
[Screen print: 2. Pay Scheme - explains that there is a "best answer": a work plan covering the key characteristics of a good plan designed the way the manager wants it, and that old work plans in the Knowledge System match the manager's criteria to varying degrees. Pay depends on how well parts of old work plans are selected and combined: $5 for carefully completing the new work plan and the questions at the end, an additional $4 for picking the best items (with a penalty for including extra, unnecessary steps), and an additional $4 for being among the top 15% quickest while also producing the best answer, for a maximum of $5 + $4 + $4 = $13.]

[Screen print: 3. Data Modeling and Database Design (DMDD) - background: in May 2002 the participant joined the Detroit office of the A-1 Consulting Firm as a member of the Data Modeling and Database Design (DMDD) Division; the manager asks for help creating work plans for a new client, using the firm's electronic Knowledge System to find work plans from similar jobs as a starting point. The screen reviews terminology: entity-relationship diagrams (entities, relationships, and attributes), logical schemas (similar to entity-relationship diagrams but constrained by the actual database system being used), and databases made up of linked tables, with a figure captioned "Database Design Entity-Relationship Diagram and Logical Schema".]

[Screen print: 4. Work Plan Description - defines the columns of a work plan: Project Step (a specific task performed by one or more consultants) and Consultant Rank (the level of the consultant performing the step; consultants are junior, 1-3 years with the firm, or senior, 4-8 years, with rank determined by step difficulty, and both ranks listed when a junior performs a step under close supervision of a senior). An example work plan is shown.]
[Screen print: 5. Combining Work Plans - explains that if a client hires the firm to perform two separate projects, the participant searches the Knowledge System for each project, selects the best work plan from each set of search results, and then combines the two work plans. The example combines the best work plan found for the keywords "Feasibility Study" with the best work plan found for the keywords "User Interface" into a single new work plan.]

[Screen print: 6. Knowledge System Description (Example Search Results) - shows an example of search results. Participants do not run the search; they are given the results, which resemble typical search engine results (a list of items deemed to match the search string) and come from other client jobs completed by the firm. All materials can be assumed to be current (less than one year old) and from the same job type and industry as the current client. Column headings: Item # (click to see the contents) and Rating (the average of what other consultants in the firm have rated the item based on their using it, from 5 = "highly valuable" through 1 = "worthless"). A note states that rating values are submitted by a variety of other consultants, expert or not, in the participant's division or not, who may do different work. An example item illustrates how to select line items and send them to the Work Plan Answer.]

[Screen print: 7. Decision For You to Make - the client is asking the firm to perform two separate projects, data modeling and database design; search results for data modeling and for database design are provided separately but on the same screen. The manager provides criteria for selecting the work plans, and the participant must select the best work plans based on those criteria and combine them. The introduction pages remain available at any time, the efficiency clock starts when START THE CASE is clicked, and the confirmation prompt that follows should be answered YES.]
[Screen print: Case Instructions - the client has hired the firm to create a data model and design a database for their company. The manager asks for help creating a work plan of the tasks to be done and the consultant rank for doing each task, and lists the most important characteristics the plan must cover: (1) seniors assigned to important steps for supervision of juniors' work, (2) informative/non-vague project step descriptions, and (3) consultant rank assigned to project steps (except headings, which do not need ranks). The participant views the KS SEARCH RESULTS, which come from other jobs completed by the firm, selects line items by clicking the check box for each item (items transfer automatically into the WORK PLAN ANSWER file), and builds the best plan of work given the three characteristics above. Experience shows the best work plan covering both data modeling and database design has between 26 and 50 line items including headings. The answer can be edited via WORK PLAN ANSWER; clicking FINISHED CASE leads to the questions, after which the answer can no longer be changed. The screen repeats the note that a rating reflects the average of other consultants' ratings, from 5 = "highly valuable" through 1 = "worthless", submitted by a variety of consultants, expert or not. Search results are then shown for the keywords "DATA MODELING" and "DATABASE DESIGN".]
[Screen print: Knowledge System Search Results for the keywords "DATABASE DESIGN" - a numbered list of items, each with an Open link, a rating value, and a check box for selecting it.]

[Screen print: Work Plan Answer - the working answer file. Line items can be removed by de-selecting the box under Include, reordered by changing the numbers under Step Order (decimals such as 3.1 and 3.5 are allowed), refreshed with UPDATE WORK PLAN ANSWER, and supplemented by returning to the KS Search Results window to add more line items.]

[Screen print: Knowledge System Study questionnaire - items rated from 1 (strongly agree) to 10 (strongly disagree), with N for no opinion: (1) I would like to run another search to look at more work plans, then possibly revise the work plan I submitted; (2) I do not want to give the plan of work that I submitted to my manager; (3) There are better answers than the one I submitted; (4) I am confident my choices were the best ones possible.]
[Screen print: Beliefs About Using the Knowledge System - items rated from 1 (strongly agree) to 10 (strongly disagree), with N for no opinion: (1) The work plans I used in my answer were chosen because there was a HIGH number of raters rating them; (2) The work plans I used in my answer were chosen because ALL the raters were experts; (3) The Search Results differed in how well they followed the important characteristics of a work plan as outlined by my manager; (4) I used work plans in my answer because they were "rated" high (like 5 and/or 4); (5) A high "rating" meant the item MET the important characteristics of a work plan as outlined by my manager.]

[Screen print: self-monitoring and demographic items on the same response scale, including: I can only argue for ideas which I already believe; I guess I put on a show to impress or entertain others; I would probably make a good actor; At a party I let others keep the jokes and stories going; I am a female.]

[Screen print: Task Completed - thanks the participant for taking part in the Knowledge System Study and notes that, after all experimental sessions are completed, a discussion of the study and the correct solution will be announced and posted to the BUS309 Blackboard site.]

Appendix C: Manipulation Screens

[Screen prints: the Knowledge System Search Results screens as displayed under each experimental condition. In the rater sample size conditions, each rating is accompanied by the number of raters who submitted it (small numbers of raters in the low condition, large numbers in the high condition). In the rater expertise conditions, each rating is accompanied by the percentage of raters who are experts in the topic of that item (low versus high percentages). In the collaborative filtering conditions, the search results additionally recommend items judged useful with, or similar to, the original item, at a low or high degree of filter sophistication.]
Appendix D: 100% Quality Work Plans

DATA MODELING PROJECT

Project Step                                                               Consultant Rank
1. Understand business model                                               Junior and Senior
2. Identify entities
   a. Interview system owners and users to identify things they would
      like to capture, store, and produce information                      Junior and Senior
   b. Study the forms and files                                            Junior
   c. Review program data, file, and database structures                   Junior and Senior
   d. Check that entities have many occurrences and name them              Junior
   e. Define unique identifiers for each entity                            Junior and Senior
3. Draw a rough draft of entity relationship diagram
   a. Brainstorm relationships between entities                            Junior and Senior
   b. Normalize to minimize redundancy and maximize flexibility            Junior and Senior
   c. Draw entity relationship diagram                                     Junior and Senior
4. Identify data attributes
   a. Brainstorm on characteristics describing each entity                 Junior and Senior
   b. Review forms, documents, printouts of stored data                    Junior and Senior
   c. Circle each unique item on the form                                  Junior and Senior
   d. Exclude items that are extraneous or are constant                    Senior
   e. Name attributes and verify attributes with end-users                 Junior and Senior
5. Map data attributes to entities
   a. For each entity, find forms, file printouts, reports, etc. whose
      data describes the entity and record the attributes                  Junior and Senior
   b. Interview end-users to identify data attributes                      Senior
6. Partner review and walk through with client                             Junior, Senior, Partner

DATABASE DESIGN PROJECT

Project Step                                                               Consultant Rank
1. Understand business model                                               Junior and Senior
2. Review database requirements
   a. Review the entity relationship diagram                               Junior and Senior
   b. Identify the entities to be designed                                 Junior and Senior
   c. Identify associations to be designed                                 Junior and Senior
   d. Determine data distribution and access rights for employees          Junior
3. Design the logical schema for the database
   a. Review the logical schema which reflects the database management
      system chosen                                                        Junior and Senior
   b. With the client's database administrator and staff update the
      schema design based on the specific technology chosen                Junior and Senior
   c. With the client's database administrator and staff update the
      repository specifications based on the specific technology chosen
      for implementation                                                   Junior and Senior
4. Build physical database structures
   a. Convert each entity in entity relationship diagram as a relational
      table                                                                Junior and Senior
   b. Convert each relationship in entity relationship diagram as a link
      between relational tables                                            Junior and Senior
5. Prototype the database
   a. Gather and load with test data                                       Junior and Senior
   b. Test outputs, inputs, screens and other components                   Senior
   c. Adjust database based on testing results and re-run tests            Senior
   d. With the client's database administrator and staff review test
      results                                                              Junior and Senior
6. Partner review and walk through with client                             Junior, Senior, Partner
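Step 4 of the database design plan (convert each entity in the entity relationship diagram to a relational table, and each relationship to a link between tables) can be illustrated with a short sketch. The following uses Python's built-in sqlite3 module; the CLIENT and PROJECT entities, their attributes, and the relationship between them are hypothetical examples chosen for illustration and are not part of the experimental materials.

    import sqlite3

    # Illustrative sketch only: mapping two hypothetical entities (client, project) and the
    # one-to-many relationship between them into relational tables, as in step 4 of the
    # database design work plan.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Each entity in the entity-relationship diagram becomes a relational table.
        CREATE TABLE client (
            client_id   INTEGER PRIMARY KEY,
            client_name TEXT NOT NULL
        );
        -- The relationship between the entities becomes a link (foreign key) between tables.
        CREATE TABLE project (
            project_id  INTEGER PRIMARY KEY,
            client_id   INTEGER NOT NULL REFERENCES client(client_id),
            description TEXT
        );
    """)
    # Load a little test data, mirroring step 5a (gather and load with test data).
    conn.execute("INSERT INTO client VALUES (1, 'A-1 Consulting client')")
    conn.execute("INSERT INTO project VALUES (1, 1, 'Data modeling engagement')")
    print(conn.execute(
        "SELECT c.client_name, p.description "
        "FROM client c JOIN project p ON p.client_id = c.client_id"
    ).fetchall())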
Appendix E: Screen Prints of All Work Plans

[Screen print: Item 1 - a candidate work plan returned by the Knowledge System search, shown with a check box for selecting each line item, a SEND TO WORK PLAN ANSWER control, and a note that sending selections refreshes the screen and places the items in the Work Plan Answer.]
[Screen print: Item 2 - a candidate work plan including a section for a rough draft of an entity relationship diagram.]
[Screen print: Item 3 - a candidate data modeling work plan whose project steps parallel the 100% quality plan in Appendix D (identify entities, draw a rough draft of the entity relationship diagram, identify data attributes, map data attributes to entities), with consultant ranks listed for most steps.]
[Screen print: Item 4 - another candidate data modeling work plan with similar project steps and consultant ranks listed for most steps.]
Interview end-users to identify data attributes ‘lflTl‘l'I‘l'W'l'IW'I'l'lW w -'I new: it Ian-M1 ferns Am "Ir-it ‘3‘ ,Oleer - :‘jffgéia R"W@.§§"mf“ I ‘ fun—Pres— .1 3 s i‘ WM 77 ; mall ate-W IEM'WWJ 6K5 MBM‘JIQWE‘EE'fi-u nwwiwflim 147 3 liIHMllVHII HI: III ml! lnlI-Irurl l xplumr Item 5 . To select a line item click on the box under Select 5. e To send selections to your answer click on SEND TO WORK PLAN ANSWER below - To edit your answer close this window. go back to the Case Instructions window and select § Work Plan Answer g NOTE Upon slicing send. this screen will rated: and dear lbs check mks, but the items you selected were phced in :- your mt. When done, cbse lltl winsbw. _E E. “‘“II mm H mm r' "WORK PLAN FOR DATA MODELING Jfig‘m‘ °° “‘1‘ a F “I IDENTIFYENTITIES Inhmmm" I‘ ' needed ' > '— ~1aLookforitemstocaptwe.stora. andproduceinfonnationfortheclientg’ventheir Junior :; usiness '_ [— --lb.Studythe forms andfiles I? I'- ~-lc. Review program data. file, and database structures Junior i; - I'- uld. Check on entities that are a part ofthe system l V I'— -—l e. Define identifiers that are a part ofthe system Junior ‘ t' 2‘ BUY SUPPLII-‘S IN ORDER To WORK [233:5 “° "n" if I— --2a Bra'mstorui with project team lbenior 2] EM “C ' ' "’ Vfgemw‘ Me“.-- 5 7E mgjacw jgwmusm uléjlsm‘m' Mfinommfinwe WW' 1e. Define identifiers that are a part ofthe system BUY SUPPLIES IN ORDER TO WORK Brainstorm with project team Normalize items tofitthemodel 2c. Make drawrngs onpaper V BRING IN SENIOR TO GET WORK DONE 3a. Brainstorm with project team .. m. ,_ innamr‘r‘ Iniilum'm.mira.1. as! “a, , . . 3b. Review forms, documents, prirnouts 3c. Circle items that are on the forms. documents, pn'ntouts 3d. Exclude items that are on the forms, documents, printouts 3e. Name attributes on the forms, documents, printouts BRING SENIOR BACK TO GET MORE WORK DONE For each entityrecord the attributes after talking to senior and getting input on how is done .Talk more to the client I" I" l— I— I" I- I" I- I" I' l'" l" I" I" sgmwmegmgnj 7“”? m ‘ ‘ , an 3 L EWJW-w l $3“an 5mm} EEG SWM-fllabluwun attic. ,mflme 712m: 148 . 3IIMMIIVII Mn In .ult lrutr-Inr-t [' XIIIIHCI — -:--;:.sq;-5_-:.-.m4..m "FIE ; ‘ a l' I I" Item6 e To select a line item click on the box under Select a To send selections to your answer click on SEND TO WORK PLAN ANSWER below a To edit your answer close this window, go back to the Case Instructions window and select Work Plan Answer . . “I NOTE Upon clthirgsend, th’u screen willrglrssh Meteor theehsckmks, but the items yoosehctad were phcedin ‘1 youranswer. When done. class this wiisbw. I. Select 7 i. H MS» I emu-mm I. r "WORK PLAN FOR DATA MODHJNO Ext? “° “n" f u l' “I IDENTIFY ENTITIES I??? °° "n" I' ee e r- - l a. Look for items to capture. store. and produce 'mformation for the chant gven their unior usiness r b-letudytheforms andfiles "Junior I I" --l c. Review progmn data, file. and database structures unior l! I" hld. Check on entities that are a part ofthe system Junior I, 7 I" --le. Define identifiers that are a part of the system Junior I I . i“ r‘ 2. BUY SUPPLIES IN ORDER TO WORK 3;? ”° Mk I I" --2a Brurutonn with project team Junior and Senior ZI Dene *’ * ' ’ "“ :_ T": 0 rm /. Ml EDI-semen I @armm...] _QKSSeacthauls-..I QWYE‘FIT. le. Define identifiers that are a part of the system BUY SUPPLIES IN ORDER TO WORK 2a. Brainstorm with project team .Normal‘me items to fitthe model Make drawings on paper . BRING IN SENIOR TO GET WORK DONE 3a Brainstorm with project team .4.— .! _._. 3b. 
Review forms, documents, printouts 3c. Circle items that are on the forms, docinnents. printouts 1°.WHQI‘ 'l ‘I 3d. Exclude items that are on the forms, documents, printouts 3e. Name attributes on the forms. documents, printouts BRING SENIOR BACK TO GET MORE WORK DONE ”’1 ll its. ‘1 I‘ldu I " I For each entity record the attributes after talking to senior and getting input on how is done Tall: more to the client tt' r. r r. I. r I. I. r r. r. I. r I. I. ‘ Serra ioWork" ‘ 333666? fl] Fife—we 0m 4“"- IItBJPflImB 713PM 149 3 I‘M'illv'l /‘ HHWU‘UII Irrlr-Inr-I I )(lelItfl Item 7 e To select a fine item click on the box under Select a To send selections to you answer click on SEND TO WORK PLAN ANSWER below e To edit your answer close this window, go back to the Case Instructions window and select Work Plan Answer NOTE Upon clichirgserd, this screen willrsfissh anddsar the check marks, but the item you sehcted were pked in your answer. Winn done, cbse the winrbw. .WT’T’ITI‘TI’I‘T‘I‘I’I T11"??? III‘Z‘WTI‘Q‘TF . I‘ "WORK PLAN FOR DATA MODELING $3335 0° "n" El - l‘ “I IDENTIFY ENTITIES Iii-[3:28, no rank i; '— Ititmlfi? for Items to capture, store, and produce mfonnation for the client given their "Junior andSenior I . r .-Ib. Study the forms and files “In“ I" —-l c. Review program data. file, and database structures ”Junior and Senior I" «I d, Check on entities that are a part ofthe system “Junior I; I" ~-l e. Define identifiers that are a part of the system ”Junior and Senior I I l' 2.EUYSUPPuI-S IN ORDER To WORK I 33'? “or“ H' I" 2a Brainstorm with project team 'uniur and Senior i; Elfin 7' 7 ’ [fir—... Inland ,2 _ x .. sale-3'. 1'!" I Is: “ ' l.!'r (11-0!Qfllnld a I ‘ A a :22 ME} I gnaw... I mmmwl Q‘s meilgfigngflfi‘: 1&1“in I'- - g Q -i I" w-l s. Define identifiers that are a part of the system Junior and Senior 1" p. BUY SUPPUB IN ORDER To WORK 3:3" mm" I" ~2e Brainstorm with project team Ipunior and Senior I" we. N ormalise items to fit the model [punior and Senior I" P-Zc. Make drawings on paper ”Junior and 30010! I" 3, BRING IN SENIOR TO GET WORK DONE E3233 "° "“" I" --3a Brainstorm with project team "Junior and Senior I" --3b. Review forms, documents, printouts “Junior and Senior I" «3C. Circle items that are on the forms, documents, printouts ”Innior and Senior I" -3d. Exclude items that are on the forms, documents, printouts Iberian I" ~30. Name attributes on the forms, documents, printouts ”Junior and Senior r 4. BRING SENIOR BACK TOGET MORE WORK DONE Egg? “m" I- Jfiztiizogw entityrecordthe attributes attestalldngto senior and gettinginput on how ”Junior andSenior I" "—43. Talk more to the client lbenior :1 on” A“ ‘ f: .7 f Sendgflofl‘hin " W Le] b” fl PH 5.- I. m. ,p ,/ 735‘ij ng-...,Iflwmmsam..,j gins swam-Illéflstim-un., JWQIQQQ 7zt3PM ; 150 I .I‘ I Item 8 - To select a line item click on the box under Select . To send selections to your answer click on SB‘TD TO WORK PLAN ANSWER below - To edit your answer close this window, 30 back to the Case Instructions window and select Work Plan Answer NOTE" uL-_ Jl‘ “Al 1. ALL LLl' . r "mu-snub, your mweri When‘airn, chee (lib winrbw' Projects” ma database structures A ROUGH DRAFT 0? mm RHATIONSHIP DIAGRAM between entities A mUOH DRAFT OF ENTITY REIATIONSHIP DIAGRAM 151 3 DHHIILH.’ Mu Ill .IIII Inn-Incl prlmm Item 9 I. . To select a line item click on the box under Select 3 e To send selections to your answer click on SEND TO WORK PLAN ANSWER below e To edit your answer close this window. 
go back to the Case Instructions window and select Work Plan Answer I NOTE Upon clickiig semi. thy screen will refiesh and dear the check mks, but the item you eehcted were phced in } your answer. When dire, cbse thn window. : 5".“ Project Si. N Car-halt“ if . 5' r' WORK PLAN FOR DATA MODEIING Ext? m m" I L r I. IDENTIFY EMT!" Ito'm‘ °° “a" j , I— «la Interview system owners and users to identify things they would like to capture, store, II, . ends - II and produce information I" nlb. Study the forms and files II I" «to. Review progam data. file. and database structures “Junior and Senior -; I" «ld Check that entities have many occurrences and name them II ; I" «l e. Define unique identifiers for each entity “Junior and Senior I r b. DRAW A ROUGH DRAFT 0F EN'ITI'Y RELATIONSHIP DIAGRAM [figfig "° "n" 1 ' I . I— --2a. Bramstonn relationships between entities II 2] EW- . " ’ " 773m“ 4‘" E EMIIECe-om . new MEI.EIssmm-Mesmegti@fiemogafl I" --le. Define unique identifiers for each entity “Junior and Senior II r a. DRAW A ROUGH DRAPT OP EN'ITI'Y RELATIONSHIP DIAGRAM Eggs “° W“ I g I I" «la. Brainstorm relationships between entities II I I" «2b. Normalize to We redundancy and maximize flexibility "Junior andSenior I" «2c. Drew entity relationship diagam II ' I' 3. IDENTIFY DATA ATTRIBUTES IEmS M m" If! I" «3a. Brainstorm on characteristics describing each entity “Junior and Senior I I I" ~3b. Renew fort-as, documents. printouts of stored data J timer and Senior 1: I" ~3c. Circle each unique item on the form Junior and Senior :1: I" «3d. Exclude items that are extraneous or are constant 2. I" ~-3e. Name attributes and verify attributes with end-users E; r a. MAP DATA ATTRIEUTES To ENTITIES [Em ”° "n“ '— :ndgrl: cc); 3313:3313??? for-ins, tile printouts, reports, etc. whose data describes the entity IIIW an d Senior :- I" ”db. Interview end-users to identify data attributes Ibenior .: "" ““S'endto Whenfiimr j ;_ L EDEN" F ’ i ”- -’ ' rfituim .: 2.4sz (Li £192" 'W~=;J @W" waml £3“ WHMJEIEPWQIE SIQESSLEEEEM 152 T) IIM‘IIWNI Mu In “It lnh-Inrt I aplmeu 1“ Item 10 g - To select a line item click on the box under Select L e To send selections to your answer click on SEND TO WORK PLAN ANSWER below . To edit your answer close this window. go back to the Case Instructions window md select : Work Plan Answer ; NOTE; Upon cltkirg semi, this screen will refiuh and clear the check mks, but tle itenu you selected were phoed in F your amwer. When done, cbee thn windiw. ‘3‘ F “WORK PLAN FOR DATA MODELING fig °° m“ 3.... «an; no rank I '— III'IDmm I mum” IIlr‘Ileeded II,- '— bluLookforitemsto capture, store.andproduceinformationfortheclientg'ventheir JuniorandSenior Ill I mess I" ”Ibo Study the forms and files I" p-l c. Review program data file, and database structtxes Junior and Senior II ; I. I" H d. Check on entities that are a part ofthe system I" ml e. Define identifiers that are a part ofthe system unior and Senior I I l , I I I ' r 2. BUY SUPPLIES IN ORDER TO WORK 3;? m "n“ . I . I" «2a. Brainstorm with project team II fl Esme f "" ’ I w ’ Trium w E' Eastman-sum -_ -.,.IgumsEm-ulEIKSMM'JIQIIEMEPH-- ”$345595” ”1‘13 I" --l e. Define identifiers that are a part of the system Junior and Senior II I r 2. BUY SUPPLIES IN ORDER To WORK n::::’;‘ “° “a" I I I" o-2a. Brainstorm withproIect team I] I I" p-Zb. Normalize items to lit the model Junior and Senior I I" --2c. Make drawmgs on paper I r' B. BRING IN SENIOR To GET WORK DONE ‘13:;3‘ °° "“k I I I" p-3a Brainstorm with project team I unior and Senior I I" 3b. 
Review forms, documents. printouts Junior and Senior III I" r-3c, Circle items that are on the forms, doctnnents, printouts unior and Senior 5 I" b-3d. Exclude items that are on the forms, documents, printouts 2‘ I" «3e. Name attributes on the forms, documents, pr'mtouts r 4. BRING SENIOR BACK TOGET MORE WORK DONE E::m‘n°‘“‘k I" lgtiftzoenaoch entity record the attributes alter talking to senior and getting input on how "me and Senior : I" ”—Ab Talk more to the client lbenior :1 3 one. ~ * ‘ - - ‘ ~ ”“0 mm ~ 153 3 DHHIII NI Hu‘mwtt Internet Explorer «rt-:13: Ema; Item 11 a To select a line item click on the box under Select a To send selections to your answer click on SEND TO WORK PLAN ANSWER below - To edit your answer close this window, go back to the Case Instructions window and select Work Plan Answer NOTE: Upon darling send, this screen will retain and dear the check mks, but the items you selected were placed in your answer. When dine, close this winibw. .'... ETIT".1"V‘" M"? '3? U‘T‘YT. ETTICH'JAI‘} “'11 ""i’i’i'FHTfiT". Elli E 5““ hams-y ll Can-mm l' WORK PLAN FOR- DATA MODEuNG Egg?- ”° "“3 r' l . IDEN‘I'IFY m E3335 “° W '— y—l a Inter-new system owners and users to identify things they would like to capture, store, . i ‘ and produce information I" bulb. Study the forms andfiles H I, I" --lc Review program data file, and database structures ”Junior I ' l" «M. Check that entities have many occurrences and name them II I" E-le Define unique identifiers for each entity “Junior I I r' 2. DRAW A ROUGH DRAFT OF EN‘ITI'Y RELATIONSHIP DIAGRAM lt‘fiff “° “n“ i i I" «2a Brainstorm relationships between entities lbenror I1] Done ' ’ ' ’— —; . tntamet ,E Hum £16592, “Algorithm-Ti E335“ Owing»; le Define unique identifiers for each entity DRAW A ROUGH DRAFT OF ENTITY RHATIONSHIP DIAGRAM 2a, Brainstorm relationships between entities 2b. Normalize to rniriirriize redundancy «id maximize flexibility , Draw entity relationship diagram . IDENTIFY DATA ATTRIBUTES 3a. Brainstorm on characteristics describing each entity 3b Renew forms, documents, printouts of stored data 3c. Circle each unique item on the form 3d Exclude items that are extraneous or are constant 3e. Name attributes and verify attributes with endusers MAP DATA ATTRIBUTEB T0 BUTT!- For each entity, find forms, file printouts, reports, etc. whose data descnbes the entity record the attributes Interview endusers to identify data attributes 1111171111111j f: L I: g I i; .-,. ...,Ih. I I $33.le 41‘ ___- Sand 6Wort"i_f "PEI: 3:55? “:1 ...J 2.1 "rig Immet m1 gunman Igwamnmmj gjrssmnem JIgouuucm Niel..,w_1fl~£-§@fl mm 154 JIIMHIHII' Mulmtult lnlmnctExpluim 32:21:11.4” HEB: Item 12 g E e To select e line item click on the box under Select g a To send selections to your answer click on SEND TO WORK PLAN ANSWER below ‘ e - - , ' g - To edit your answer close this wmdow. go back. to the Case Instructions Window and select :' Work Plan Answer 3 NOTE: Upon cl'rkixg send, ths screen wfll "flesh and clear the check mks, but the item you selected were pbed in : your enswsr. When done. close this window. § 5"“ Inject sup N Con-hum r' WORK PLAN FOR- DATA MODELING ltfi‘g‘ °° m" g [H 5.: r l. IDENTIFY ENTI'I'II-B “’m‘ “° "n" l i needed «1e Interview system owners end users to identify things they would like to cepture. store. A l! l- . 
Juneor H endgroduce informetion [- b-lb Study the forms end files Junior I i [- --lc.Rev1ew progresn dete file, end detebese structures Junior HI 9' [— --1d Check that enttties heve many occurrences end nerne them Junior I : i l- «1e. Define unique identifiers for eech entity "Junior I r 2. DRAW A ROUGH DRAFT OF ENTITY RELATIONSHIP DIAGRAM fist? “° W“ , ‘ I I I'— --2ee Brainstorm reletionships between entities Junior end Senior 2] Done *— F” 5‘. Internet fir, :7 _‘ A ‘ . I - . i'TH' Wc—M—‘H—"T! .55—J ”lac.” ,gg, ,; 1E2“ ling; ”_'_"‘J Eng” ,L -flgwmfi2%9flefimfldfl I I s s x [- --le. Define unique Identifiers for eech entity Junior H I r' a. DRAW A ROUGH DRAFT or ENTITY RELATIONSHIP DIAGRAM :fig °° "n" } r I'— n-2e. Bremetonn teletionships between entities “Junior end Senior :1 t r «2b. Nomehze to m redxmdency end men'rnize flexibility "Junior 1. . [- «Zc Drew entity reletionship diegrern "Junior 1 : F 3. IDENTIFY DATA ATTRIBUTES £3335 “° "n“ [- --3e Bremetonn on cherecten'stics describing eech entity “Judo: . [- --3b Renew forms, documents, printouts of stored dete “Junior l1 ‘ i [- ~-3c. Circle eech unique den on the form ”lunar : [— --3d. Exclude items thet ere extremoue or ere constent emor g“ [- »3e. Neme ettnbutes end verify ettn'butes with endusers unior 5 r' 4. MAP DATA ATI'RIBUTES To 5mm. “33:5 m M" ‘ [— -.4e For eech entity. find toms, file printouts, reports, etc. whose dete descnbes the entity . : end record the ettnbutes Z I" «41:. Intemew endusers to identify dete ettnbutes lbenior ~ :‘sfifwag i. den": Q 3 Send IoWongIen'Apgaier‘ A e V goon : ; I’m new“ gunmen..." [@yamuSam...| gins smm-..flgwngu_§,t. Emma?» : 155 I .z 4" . .o Z) l‘IHMIIVI I MI. ...-mu lth-Im-I I KNOW! Item 13 ii I To select a fine item click on the box under Select - To send selections to your answer click on SEND TO WORK PLAN ANSWER below . To edit your answer close an: window, 30 back to the Case Instructions window and select Work Plan Answer T‘WlTTI‘T "7.. NOTE: Upon clrh'ig serd. thn screen will refissh and dear the check mks, but tln items you selected were placed in yonrenswer. When done, close this winde. lin'I'JT {THEM CL . a. r' Rx PLAN FOR; DATA MODELING ““5 “° M" *‘3 eeded L r' I IDENTIFY ENTITIEs «an; “° W" il 3 eeded . l— I -l e Look for items to capture, store, and produce mfonnetion for the client gven their "Jun: or 5 names I" b-lb. Study the forms end files "Junior I 1 I" p-l c. Review pro gram dete, file, and detebese structures unior ‘ ' I" uld. Check on entities that are epert ofthe system Junior 1‘ I" L-l e. Define identifiers thet ere e pert of the system unior I' b. BUY SUPPLIES IN ORDER To WORK Mgr:- 0° W H‘- g I" I-2e. Brainstorm with project teem Junior and Senior :2] Done 7 i— _ :0 um ,; E {‘53-th Beam... I QM!“ M_SM:-I a“ MM'AIQWW‘ "E-tiifMfiN‘DG‘JEEL"; le. Define identifiers thet ere e pert of the system BUY SUPPLIES IN ORDER TO WORK Brainstorm wrth prolect team I Normalize items to fit the model .Melre drawings on peper I BRING IN SENIOR TO GET WORK DONE 3e Breinstonn with project teem 3b. Review forms. documents, pr'mtouts 3c. Circle Items that are on the forms, documents, printouts 3d Exclude items that are on the forms, documents, printouts 3e. Name ettributes onthe forms. documents, printouts BRING SENIOR BACK TO GET MORE WORK DONE For eech entity record the ettributes efier telldng to senior end getting input on how is done , Tetk more to the client I" I" I" I" I" I" l" r. 
l— l- I" I" I" I" m3“ ; meant} : 'iéeidn‘figrkienenaéfi J :1 friilm“ ' and: Elam» Ifll‘flfimsmj mmwdmgumxfinfineMQjfl 156 2} I'M‘LHV'NI Mu Imutt Inn-Incl I xplnn-I Item 14 e To select e line item click on the box under Select a To send selections to you answer click on SEND TO WORK PLAN AN SWER below - To edit your enswer close this window. 30 back to the Case Instructions window end select Work Plan Answer NOTE; Upon clxkirg semi. thn screen Will refine}: and clear the check marks, but tie item you eebcted were placed in your answer. When done, cbee thn window. lest-n u Mu, .u ..pu,‘ ..— ..“- “Auto-e . n E ‘ a s “'“ll Wm» ll WM l' "WORK PLAN FOR DATA MODELING @332“ m M" r‘ l IDENTIFY ENTI‘I’IES «dine “° W" ‘ eeded '— -l e Look for items to cepture. store, and produce infometion for the client y'ven their Illunior usiness I" :-lb. Study the forms end files I" L—l c. Review pro grem dete file. end detebese structures Junior and Senior I" nld. Check on entities thet ere e pen of the system I" --le. Define identifiers that ere e pert ofthe system Junior end Senior r' 2 BUY SUPPLIES IN ORDER To WORK h «dine “° "n" eeded I" t-Ze. Breinstonnwithproject teem 5m" ‘ I" I7. m“ M3 Bib-W» IEWMWJ Bmsmmdlwfiflwafififinbcfiflwfl Dane I" n-le. Define identifiers thet ere e pert of the system Junior end Senior r' 2 BUY SUPPLIES IN ORDER To WORK 3:? “° m" I" --2e. Breinstonnwithproject teem I" r-Zb. Nonneline items to firth. model unior and Senior I" --2c. Meke drewings on peper l' 3. BRING IN SENIOR To GET WORK DONE “3&5 °° "“k I" "Be. Breinstonn with project team Junior end 3930! I" ~3b. Review forms, docmnents, printouts J uniux end Senior I" "3c. Circle items thet are on the forms, documents, printouts unior end Senior I" -3d. Exclude Items thet ere on the forms, documents, printouts I" --3e. Nune ettributes on the forms. documents, printouts [- 4. BRING SENIOR BACK To GET MORE WORK DONE 3:“? “° m" I— ] limzch entity record the ettributes after telhng to senior end getting input on how Junior endSenior I" ">4b Ten: more to the client I enior 731633? W§end B'Wdrlifim‘fi’fiar I *“* “Twriaa - 1 ,r ...6 E Wham-om» lmwwnwgeonJ flxsswnespglawsmwew Iiiiifleflibcfié‘fim 1 157 hr 3 lili'JW‘Nl Mu In: all! Ink-Incl prlourl “_-.. #1 Item 15 f . i e To select a line item click on the box under Select 3 . To send selections to your answer click on SEND TO WORK PLAN ANSWER below 3 - To edit your answer close this window, 30 back to the Case Instructions mew and select 3; Work Plan Answer i NOTE: Upon cltkng send. this screen will rsfissh and clear the check mks, but t}. items you selected were phced in :' your answer. When done, cbee thn wtndow 1 sun: ”has” ll W3“; :- WORK PLAN FOR: DATABASE DBIGN Egg? “° "a“ 5 l' 5‘ MEET WITH CLIENT T0 00 OVER PROJECT “3:3? “° W" ‘l‘ [- --5a. GO to the library and research the client's employees Junior and Senior H [— »Sb. Identify the entrties ofthe client I I [— b-5c. ldentrfy assocxatsons of the client Junior and Senior H I'- o5d Determine which employees to include in the database _- r I . PURCHASE COMPUTER EQUIPMENT To SUPPORT PROJECT TEAM [1:13:35 “ M" I ‘ i} l— t6:&’Revtew the technical aspects ofthe database to ensue rt will work for the client's "Junior andSenior ” i l - [— H» -.6b Withthe client update- technical aspects ofthe database to ensue it will work for the II E] 1. . ...-l . .. ..I . T “ ' ‘ ' ‘ ‘ F’V‘WW‘ '6 ,; I" w PIT] £511 Cum.» , elkflmeSm lanes-mum lgoemvm an}: '39:!“ QQE2'231PMH [— --5d. Determine which employees to include in the database I I l' . 
PURCHASE COMPUTER EQUIPMENT To SUPPORT PROJECT TEAM “3:35 °° Mk I I l '— ~~¢A5Aajl3eview the technical aspects ofthe database to ensure it will work for the client‘s Junior andS . H I.— ~6b, Wxth the client update technical aspects ofthe database to ensure it will work for the l: client‘s needs ‘ l— ~-Oc. With the client update repository aspects ofthe database to ensure it will work for the Junior andSenior H client‘s needs r' 7. BRING SENIOR IN TO BUIID PROJECT tfi‘ “ “n“ E? '— gloadfgnvest diagrams and tables for the database being built for the client as long as llJunior and Senior :1 I- p-7b Ask senior about work progess "Junior and Senior 1 l‘ ,ASSEMBLE Pm OF THE PROJET ”mm ‘3' “° W" i needed 3 '- L-Sa. Gather and loadwrth date Junior and Senior ; I'— +3b Test allthe components ofthe database 2 I I'- -8c. Adjust the project to ensure it works E [— l»8d Make sure all the prayect preces are assembled 1: may f, Efiwfij if 0 erndfiwofip‘ I. A TE I - . p,,..:r were: L :J Dene fifl’fllrtemu £9.11;fiCaolnetnnfiana;JQ1WakleSaeenm‘£lKSSeachReuls- Jammie» Mice... fifigkflpflfl 735m 158 5 “BMW HI Mu Imult lrlltfll'ltfl l xplorel £- i a To select a line item click on the box under Select 1 e To send selections to your answer click on SEND TO WORK PLAN ANSWER below a To edit your answer close this window, 30 back to the Case Instructions window and select Work Plan Answer 3 NOTfizUponcltkhgsedthisscreenwfllrsfisshmddmdischsckmbfiuttheitemsyouselectedwerepbedin your answer. We dam, cbse thn w'mbw. : ~ 5"" Project so, 11 emu l' WORK PLAN FOR- DATABASE DESIGN Ext? M ““k r 5. REVIEWDATABASE REQUIREMENTS 1:33? °° "“k 3‘ I— --5a Review the entity relationship diagarn unior 1 1 ‘ I" --5b Identify the entities to be desigied 5: I" «5c. Identify associations to be desigied Junior 1: I'— --5d. Determine data detribution and access rights for employees I' "6 DESIGN THE LOGICAL SCHEMA FOR THE DATABASE “33‘ “° Mk 1 F1.|-.6a Review the logical scheme which reflects the database management system chosen Junior and Senior l- 6b. With the client' s database admim'strator and stafi'update the schema desigi based onthe ;' ecific technology chosen 2.] ED” ' ' " Ti”— 0 unmet .3; El amlm- ..1.E]wmuu5m .1flmswgm11goeujtcm itllei- 11133393114 QQQ_ 735.34 1.: 3 [JIlMRlLNli Muzmsolt Inn-tile: lfplplel I'— I -5c. Identify assOCiations to be designed “--5d. Determine data distribution and access rims for employees --8c. Adjust database based on testing results and recruit tests ‘ 116 DESIGN THE LOGICAL SCHEMA FOR THE DATABASE 3:35 ” “a“ 3 '- F 1|--6a. Review the logcal scheme which reflects the database management system chosen "Junior and Sern'or '— --6b. With the client's database administrator and stafi‘update the schema design based on the specific technolog chosen .— -6c. With the client's database administrator and stafl’update the repository specifications 1} based on the specrfic technoloQr chosen for implementation F b. BUILD PHYSICAL DATABASE STRUCTURE 113;? “° "a“ 1; EC [— «7a. Convert each entity in entity relationship diagram as a relational table Junior i [— ~7b. Convert each relationship in entity relationship diagarn as a link between relational tables I unior ; r' "8. PROTOTYPETHE DATABASE 11"“"18 9" "a“ i I needed 1 I t' «8a Gather md load with test an. 1mm and Senior 5 I— «811. Test outputs, inputs, screens and other components r I" L-Sd With the client's database administrator and staff review test results m :3 Raw ” SendioWorkPlanAnswer ... 1 ’4 'rt.-r L: rape- .. MT ’7 , 1 . . " . 1 hter-m M1 gnaw-... 
1menu5geganl 1:116 managers -..1|g)osuncm -_u_ae.: 313g fithQQJs mi"; 159 3 ”HM!!! .Nl Muimult IDIITlnt31IX|1IOIIfI >3 2*. Item 17 I; a To select a line item click on the box under Select 13 a To send selections to you answer click on SEND TO WORK PLAN ANSWER below L“ a To edit you answer Close this window, 30 back to the Case Instructions windOw and select 5; Work Plan Answer I: 11 NOTE: Upon cltkng seal. the screen will rifles}: and dear firs check mks, but tb Items you selected were plwaed a ;: you answer. When dine, close this wincbw. .1 I' WORK PLAN FOR DATABASE DESIGN E135? °° “n“ f;- r 5. REVIEW DATABASE REQUIREMENTS E333?- “° "9“ a!“ I" ~5a Review the entity relationship diagram unior 1; I" «5b. Identify the entities to be desigied I" -5c. Identify associations to be designed unior I" «5d. Determine data distribution and access rights for employees r' "5. DESIGN THE LOGICAL SCHEMA EORTHEDATABASE 3335“ "n" -: I" "--6a Review the logcal scheme which reflects the database management system chosen unior and Senior 1 I" -~6b. With the client's database administrator and stefl'update the schema dang: based on the I specific technology chosen :1 Dane ' ' _ ,7" I31. lrlamst {3. . r , r n ' ' o ‘ I?"- u . .1221! ties-e W Ll Emcee-u Seen-"18$ tea-gagesllgwuum -- mu.- 1mm: mesons; )Imwu NI Mu Insult lnlclnul prIOIBI I" |--5c. Identify associations to be desigied unior I" |[--5d. Determine data distribution and access rights for employees r' “6 DESIGN THELOGICAL SCHEMA PORTHEDATABASE 333- ‘” m" I" "uda Revrew the logcal scheme which reflects the database management system chosen ”Junior and Senior '— 6b With the client's database administrator aid stafl'update the schema design based on the specific technology chosen I" 6c. With the client's database adrmnistrator and stafi' update the repository specifications \nior based on the specific technolog chosen for implementation 1' 7. BUILD PHYSICAL DATABASE STRUCTURE: 11:3?“ “‘1‘ I" -.7a Convert each entity in entity relationship diagram as a relational table Junior I" «7b. Convert each relationship in entity relationship diagram as a link between relational tables Junior r' Is PROTOTYPETHEDATABASE ““3 m “n" . needed I" ~-8a Gather and load with test data Junior and Senior I" "8b Test outputs, inputs, screens and other components I" ~8c. Adjust database based ontesting results andre-run tests I" ~8d. With the client's database administrator and staff review test results Eugenia- " "“"ééifib'wm‘" Wei—gum”: r1] «.... “-fizntrcci‘ ‘m . ZI"II.’F"".i'li1’"”T.*i§ T")! «hr cs .- s 9.... ...... I r ; Done ‘r-m 1 -. 7r fi— ——-r y _. ~.-w— fry-Ii fi— will i354“ Emm'dfitwm5m.«l§_1xss-a~m-Ameeyucug;use". Lifieflifiéfi'zsf 160 a DBSRIZN? Mimosnlt Inlemcl l, xplorel Item 18 To select a line item click on the box under Select To send selections to you answer click on SEND TO WORK PLAN ANSWER below To edit your answer close this window, go back to the Case Instructions window and select Work Plan Answer NOTE: Upon clickirg send this screen will rgfissh and clear the check mks, but the items you selected were placed in you answer. When done, cbse th'n window. 8“." Project St. 11 Cousin-r Rank r' WORK PLAN POR DATABASE DESIGN Emmi“ °° “0" 3 r' 5. REVIEW DATABASE REQUIREMENTS W123? “ ““k r- I" ~5a Revrew the entity relationship magam Junior and Senior ' E I" --5b Identify the entities to be designed I" P-Sc. Identify associations to be desigied Junior and Senior I" «5d. 
Determine data distribution and access rights for employees I- “6 DESIGN THE LOGICAL SCHEMA POR THE DATABASE 333‘ “° “9“ I" "--6a. Review the logical scheme which reflects the database management system chosen “Junior «id Senior I" -6b With the client' s database administrator and stafl'update the schema design based on the 3 specific technolog chosen :1 ' T—. I" 1’. m .2 what-am 19amm1é1tssmm Algernon lilies... $315M QQQ _aasPM 3 Ull'fillLN) Mmmsull lnlmnul I xplmei ~8b. Test outputs, inputs, screens and other components ~8c. Adjust database based on testing results and re-run tests I" 1--5c. Identify associations to be designed Junior and Senior I" lk5d. Determine data distribution and access gym for employees r' "6 DESIGN THE LOGICALSCHEMA PORTHEDATABASE 3:35 “m“ I" "~6a Review the logcal schema which reflects the database management system chosen ”Junior and Senior '— 6b. With the client's database administrator and stafi' update the schema desigi based on the specific technology chosen «do With the client's database administrator and statfupdate the repository specifications . . I— based on the specrfic technolog chosen for implementation unior and S r "7 BUILD PHYSICAL DATABASE STRUCTURES 13:? °° "a“ I" ”—‘h Convert each entity in enh'ty relationship diagram as a relational table ”Junior and Senior I" “~71: Convert each relationship in entity relationship diagram as a link between relational tables "Junior and Senior I‘ I18 PROTOTYPETHEDATABASE ”1""‘m5 °° Mk needed I" --8a. Gather and load with test data Junior and Senior I" I" I" 8d With the client's database administrator and staff review test results SendtoWodthgi‘A‘Bswer 1 2.“: . Evens V, __ 1 wr—fi I'lntemat L1 wm-n jawmngfiqgg firsSggdiBmJLgopsnm: 11sec: ..t33fliéggfl 2.35PM 161 a I‘IIMI IN I HI: Instill InIttIm-I I prOlct Item 19 e To select a line item click on the box under Select Work Plan Answer you answer. When dine, cbee tlm wmibw. e To send selections to you answer click on SEND TO WORK PLAN ANSWER below I To edit you answer close this window, go back to the Case Instructions windbw and select NOTE: Upon clszking send, this screen will rgfrssh and clear the check marks, but the item you selected were plated at l' WORK PLAN FOR DATABASE DmION 12:35 m M" l' 13 Man WITH CLIENT To GO OVER PROJECT 1133:? °° Mk I" P-Sa Go to the library and research the client's employees Junior I" --5b Identify the entities of the client Junior I" P-5C. Identify associations of the client Junior I" ~5d Determine which employees to include in the database Junior [- PURCHASE COMPUTER EQUIPMENT To SUPPORT PROJECT TEAM :3? “° W" ‘— Toni-Review the technical aspects ofthe database to ensue itwill work for the client‘s “:Iunior and S '— P-6b. With the client update technical aspects ofthe database to ensure it will work for the "Junior , -——— ‘f v 1 --—-- - , - rig—*— I ||_| i‘F‘H‘ rflr L I .. .. A . ..- I “I I i I i II E f C m; gamma-L I @wnum...| glee smm-Jlgjoiiiimr . Magi... jfajpfin—bQQ 333» .4, ,I. VAX/1“ . . I" --5d Determine which employees to include in the database Junior I‘ PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM 11:33 “° m“ [— ;::dl:eview the technical aspects ofthe database to ensue it will work for the client‘s Junior andSenior Nib. With the client update techrncal aspects ofthe database to ensue it wrll work for the . I" . Junior client's needs --6c. With the client update repository aspects ofthe database to ensue it Will work for the . I" 1 Junior client‘s needs I‘ 7. 
BRING SENIOR IN TO BUILD PROJECT “33‘ “° W" I" --7a Convert daguns and tables for the database being bult for the client as long as . unior needed I" «7b. Ask senior about work progess "Junior l‘ . ASSEMBLE PIECES OP THE PROJECT £11335 “° “9“ r ~38. Gather and load With an. "Junior and Senior I" —-8b Test allthe components ofthe database IFemor I" ~8c. Adjust the proIect to ensure it works lbenior I" »-8d Make sure allthe proIect pieces are assembled ”Junior racism 1 ‘ genggwawwfifi; ’ ‘1 ED.” I "7"" 7"" I— “and ‘— £1 .r av. mug; game» - .--J ”was s<=~»-~-l 5:3“ “EM“:IIQI’WWFL‘ 5...... géagfltfmfliim 162 a DBMHII ? mesull Inn-Intel Explolct Item 20 ~. I To select a line item click on the box under Select 3 a To send selections to your answer click on SEND TO WORK PLAN ANSWER below . To edit your answer close this window, go back to the Case Instructions window and select Work Plan Answer NOTE: Upon cl'tking send. the screen will rsfissh and clear the check marks, but the items you selected were plated in _ your answer. When dine, cbse this winibw. 1 5'5“ Project Sin, H Comm r' WORK PLAN POR- DATABASE DESIGN £335 °° “‘9‘ l‘ 5. REVIEW DATABASE REQUIREMENTS 1E3? “° ““1" 1i- I" p-Sa Renew the entity relationship diagam Junior I" ~5b. Identify the entities to be designed unior E: I" «5c. Identify associations to be designed Uunior 1 : I" «5d. Determine data distribution and access rights for employees Junior l' "6 DESIGN THEIDOICALSCHEMA PORTHEDATABASE 333% °° ""k I" "--6a Review the log‘cal scheme which reflects the database management system chosen ”Junior . I" -6b. With the client’s database administrator and stafi'update the schema design based on the . 7 ; specific technolog chosen 1:] _5 3 Club“. //, Done . Mimmflm smm,m...-Immw - “in: _ gramwoon W 3 IIHMHI l.) Miizimult Internet I xplorer I" I"5c‘ Identify assOCIations to be designed unior I" ”u5d. Determine data distribution and access rights for employees Junior ' i ' r “is DBIGN THE LOGICAL SCHEMA PORTHEDATABASE 33:3 ”° W“ I" "—6a Review the logical schema which reflects the database management system chosen “Junior E f '— «6b. With the client's database administrator and staff update the schema desigi based on the uni or I § I specific technolog chosen I ' I" -6c. With the client’s database administrator and staff update the repository specrfications unior 11 based on the specific technolog chosen for implementation 1 —— . i : l‘ ”7. BUILD PHYSICAL DATABASE STRUCTURES E1335 °° ““3 ‘ ‘ i" I" "--7a Convert each entity in entity relationship diagram as a relational table "Junior 2 I" ”--7b. Convert each relationship in entity relationship diagam as a link between relational tables "Junior r' ”a PROTOTYPBS THE DATABASE Eggs 0° N“ i _ J- -.8i Gather and load with test data qunior and Senior 9 I" --8b Test outputs, inputs, screens and other components lFenior : I" b-Sc. Adjust database based on testing results and re-run tests enior I" --8d With the client's database administrator and staff review test results Junior "a“; An]- - i -- “a :SEhFfiWJfiFAan‘? "1 ”I one" _- U 1 " " '7 ‘1 A T—«P—"m r ’ r ..4 E ; £1th .WI flWakHunSmaj gains smm-ellfioeunm . N59," giggfllfSCjQ—ENRI 163 3 IlllMllVNll Hiumutt Internet I xplotet »' I t. Item 21 II a To select a line item click on the box under Select . To send selections to your answer click on SEND TO WORK PLAN ANSWER below r a To edit your answer close this wmdow. go back to the Case Instructions window and select 5,. Work Plan Answer f NOTE: Upon clicking send. 
this screen will rsfiuh and clear the check mks, but the item you sehcted were placed in :- your arswer. When done, chase the window. ‘I‘ I" IIWORK PLAN FOR DATABASE DESIGN mg? 9° Mk I- r' .MEETWI'THCLIENTTOGOOVERPROJECT 1 :me-m‘“ f" I" 5a. 00 to the library and research the client‘s employees unior II. . I" ~-5b Identify the entities of the client I - I" -~5c. Identify associations ofthe client ‘ Junior 1 I" --5d. Determine which employees to include in the database I- "a. PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM Hm °° "“k t l" Etienne the technical aspects ofthe database to ensure it will work for the client‘s 1P . andS . l , . I" III -. b IIIIWrIthItIlIiIe client update technical aspects ofthe database to ensure it will work for the II :I I " " " ” ‘ ‘ m" r ' Eil"'"f"0m” 4 LII ace-9m .. Ifld‘msfimmIaxssmao-m .Ilgj_oeuuvuo Lie-"I £3,311“ QIQGII 7‘37P“~ I" --5d. Determine which employees to include in the database H I . I . l I F , PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM Egg”? °° Mk I I l" 3:211:er the technical aspects ofthe database to ensue it Will work for the client's I] . and 8 . I l' ..ob. With the client update technical aspects ofthe database to ensure it will work for the ; ‘ client‘s needs ‘ '— --6c. With the client update repository aspects ofthe database to ensueitwillwork for the . iii I ‘ l m l 1 client s needs HJ I. t' "7 BRING SENIOR IN To BUUD PROJECT lt'fi‘ °° “n“ 5' 7a. Convert aagams mdtables forthe database beingbia'ltforthe clientaslongas H '- eeded g I" ”--7b. Ask senior about work progress "Junior if; r‘ I . ASSEMBLE PIECES OF THEPROJECT M“: ‘3 °° m" I: ' use d E r' -8& Gather and load with data Junior and Senior 1; P I" --8b. Test all the components ofthe database I3; I: I" -.8c.Adjust the projectto ensure itworks I: I" L8d. Make sure allthe prolect pieces are assembled I mos: Lie 2. goo. ' * ' ' , ““I“. W“ 164 a DBMHVI l Mumsnll Inlmnet Explorer -5!_."'l_é:‘i'.’.3'r’-‘_' ”Fl [3 . i? Item 22 .5 i - To select a line item click on the box under Select e To send selectionsteyonranswerclickonSENDTOWORK PLANANSWER below gI a To edit your answer close this window, go back to the Case Instructions window and select 5 Work Plan Answer i NOTE: Upon clickirg send this screen will Potash and clear the duct marks, but the items you sebcted were placed In : your answer. When done. cbee the window. 3 W Project so, II Corinna-mink i r' WORK PLAN POR DATABASE DESIGN Egg? “° M" I [- LSMEE‘TMTHCLIENTTOCDOVERPROJHTT 13:35:10!“ II" I" Eda Go to the library and research the client‘s employees Junior 5% I" --5b. Identify the entities ofthe client Junior II I" -5c. Identify associations ofthe client Junior I; I" h—Sd. Deterimne which employees to include in the database Junior II I- . PURCHASE COMPUTER EQUIPMENT To SUPPORT PROJECT TEAM :3? °° "a“ i a I" II firafeview the technical aspects ofthe database to ensue it will work for the client's "J . and Senior II '— IbeIWithItlIie client update technical aspects of the database to ensue it will work for the "Junor . PJ DO"! 3_ i I“ Ti m .. '_ I suiIi I amputation-... I Qummm...| firs Sogdifteeus- ..I gmumi Juana... manna . 22,319»; I" lI-Sd. Determine which employees to include in the database Junior I I? r' “is PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM . 332:5 “° "a" i l r- . :eadl‘teview the technical aspects ofthe database to ensue it will work for the client‘s Junior and Senior II ~6b. With the client update technical aspects ofthe database to ensue it will work for the i I. I" . , Junior . . 
client s needs I . ~6c. With the chent update repository aspects ofthe database to ensure it will work for the . I" . , Junior I1 client s needs I l‘ 7, BRING SENIOR IN To BUILD PROJECT E33 ”° m" I i '— p-7a Convert diagams and tables for the database being built fosthe client as long as , is} unior - I needed 1 I" ..Tb. Ask senior about work progess “Junior : l‘ "8. ASSEMBLE PIECE OP THE PROJECT Egg? “° “‘"‘ 3; l' --8a. Gather and load with data ”Two: and Senior 31 I" ”Sb. Test all the components ofthe database lbenror I ' I" ~-8c. Adjust the prolect to ensue it works [bearer I" ~8d Make sue all the project pieces are assembled "Junior 1' ”‘9 Digerati :m_§e—ndtoWO_rt5* Plan nigger :I] . . . in: *i‘hbr —--n— ——— _..j—‘rr_ " ' i LOU—I'm: w 4 . ' Mamet . i f ,/ Done , a so; I} anaemia..- IQyokRnSaoooiJ fjxs mmm-JIQPERELBJIBPPMEQB mat—an 165 3 lili'JlVl .’ Mu mgr-It Illltflntfl I xpluiel Item 23 e To select a line item click on the box under Select Work Plan Answer your auwer. When this, cbse this winibw. e To send selections to your answer click on SEND TO WORK PLAN ANSWER below . To edit your answer close this window, go back to the Case Instructions Window and select NOTE Upon clkkirg send this screen will rgl’ssh and dear the check mks. but the items you selected were plwed in r WORK PIAN EOR DATABASE DESIGN E3333 “" m“ I‘ 5. MEETWITHCIJENTTOGOOVERPROJECT Wigwam“ I" r-5d. Go to the library and research the client‘s employees Junior and Senior I" ~-5b, Identify the entities of the client Junior and Senior I" “Sc. Identify assocrations ofthe client Junior and Senior I" b-Sd Determine which employees to include in the database Junior [- . PURCHASE COMPUTER EQUIPMENT To SUPPORT PROJECT TEAM 3:35 “° m“ I" [Igtd‘Review the technical aspects ofthe database to ensue itwill workfor the client‘s lkudm andSenior l' ”157°; Imam. client update mama aspects ofthe database to ensue a will work for m. ”mm‘ m 3m“ 0m " ' E" 5" i3 Meme E11: _1 Baum-... [gunman-u! BISW-Jaoywreeygiifiéfikiib©fijgw --8a Gather and loadwith data “Junior and Senior >~8b Test allthe components ofthe database 1PM L-8c. Adjust the prOject to ensure it works lbenior I—Sd. Make sure allthe prOject pieces are assembled I" p-5d. Determine which employees to include in the database Junior l' . PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM ne‘m‘ “° m“ I" :oadl’leview the technical aspects ofthe database to ensue it will work for the client's J . and Senior I" nob. INith the chant update technical aspects ofthe database to ensue itwillwork forthe J . andSenior client s needs I" nee. With the client update repository aspects ofthe database to ensure it willwork for the Juiior ands . client's needs r‘ 7 BRING SENIOR IN To BUIID PROJECT Egg? “° W“ I" ;::d(.3:nvert diagrams and tables for the database being built for the client as long as ”J . andSenior I" ~7b Ask senior about work progess "Junior and Senior r‘ "8. ASSEMBLE PIECES OP THE PROJECT IE3? “° Mk I" '— r I" ”Junor and Sem‘or mire; __ ' "993m ' PIanAnswerj 'rt. ... run hilhflilul uzlsl mil-mie-thqsg-paa-unnrusm" ‘ I I 2 Esmt; flCaselnstmctiom - ...j gwmnnwm...| ens Search Boas: - ...Il’gjossnvu - Mics... 166 Frfimw :afihflifififfififii ; s . a". ..l 3 Uli'IHll : Hummll Irm-Im-t l xploml Item 24 e To select a line item click on the box under Select {a e To send selections to your answer click on SEND TO WORK PLAN ANSWER below ,;~ . 
To edit your answer close this window, go back to the Case Instructions window and select if Work Plan Answer F; NOTE: Uponclthnssnd.thisscreenwfllrefisshanddsarthscbsckmks,bmthe1temsyouselectedweie phcedm E your answer. When skins. chase the wiiabw C: r WORK PLAN FOR DATABASE DESIGN It??? m m“ ’2: r' 5. REVIEWDATABASE REQUIREMENTS Egmg mm“ I‘- I" --5a Renew the entity relationship diagam ”Juliet and Senior 1'3 I" "5b. Identify the entities to be desigied "Junior and Senior I" --5c. Identify associations to be designed “Junior and Senior ‘I I" --5d. Determine data distribution and access rights for employees "Junior } r "6. DESIGN THEDOGICAL SCHEMA EORTHEDATABASE IE3? “m" f I" "~6a Review the logcd scheme which reflects the database management system chosen "Juiior and Senior ”- ' I" -6b. ‘With the client's database administrator and stafi' update the schema desigi based on the unior and Senior 1 s ecific technolog chosen ZI r"; i Int-w J: ' on I? 'dcmsamalfltsmm-Jgjoggogijgu $23333: 3699, 7-39_PM 3 m: ..‘m I 3 Mn Insult Inn-Incl l xplmm ”-——__———_——_ ~5c Identify associations to be designed lpuiior and Senior III-Tm I" FJI- 5d Determine data distribution and access rigits for employees "Junior r' "6 DESIGN THE LOGICAL SCHEMA PORTHEDATABASE $333890!“ 1* I" "~6a. Renew the logical schema which reflects the database management system chosen "lunar and Senior I . I" b. With the client's database adm'mistrator and stafi update the schema desist based on the unior and S . : specrfic technolog chosen I. ~6c. With the client’s database administrator and stafi' update the repository specifications . . I I" based on the specific technology chosen for iirqilernentation “3 .' I" "7 BUILD PHYSICAL DATABASE STRUCTURES waif-mm“ 1' 7 IE I" ”--7a Convert each entity in entity relationship diagram as a relational table unior and Senior E I" “-711 Convert each relationship in entity relationship diagram as a link between relational tables Junior and Senior :3? r “s. PROTOTYPETHEDATABASE "“8- “°““" g eeded 13 I" --8a Gather and load with test data "Junior and Senior 3:: I" “Sb. Test outputs, inputs, screens and other components “Senior I" --8c. Adjust database based on testing results and re-run tests lbenior I" --8d. With the client's database administrator and stalfreview test results ”Jutior and Senior l('3”” “I“?iu‘rmfi I'mrzmwrr“ 167 3 III:'.|II NH Hu Insult Inn-Ino-I l xplorer Item 25 l - To select a line item click on the box under Select E e TosendselectionstoyoormmrclickonSENDTOWORKPLANANSWERbelow ; . To edit your answer close this window, go back to the Case Instructions window and select 3 Work Plan Answer i. NOTE Upon chum; send. this screen Wm refissh and clear as check mks. but the item you sebcted were pine is your am. When done, cbse the washer. j; i» r' WORK PLAN FOR: DATABASE DEIGN I353? ”Mk . r 5. REVIEW DATABASE mum Ito‘m‘ ”Mk 11:. I" «5a Renew the entity relationship diagram IIlunior and Senior I I I" «5b Identify the entities to be desigied II I" «Sc. Identify associations to be desigied unior andSenior I" --5d. Determine data distribution and access agate for employees I' ”5. DESIGN nlElOGlCALSCHmA FORTHEDATABASE 33‘ “m“ 9' I" II—da Renew the logical scheme which reflects the database management system chosen unior and Senior Q 3 l— 6b With the client's database admrnistrator and stafi'update the schema desigi based on the f 7 specific technology chosen H can 7"""—— i"— {— .0 Int-ms 4 i331} ace-mg @1me Sam-«l Basso-om-eflgqg'sgcaz you; . itafiiktflmlaeggj ] liltiilll N.) 
Mum-mull Internet Explorer I" I --5c. Identify associations to be desigied Junior and Senior I" II--5d Determine data distribution and access rights for employees r‘ IIe DESIGN THELDGICAL scar-3m FORTHEDATABASE 3:? “Mk I" II—6a Renew the logical scheme which reflects the database management system chosen “Junior and Senior I— --6b. With the client's database administrator mid stafi'update the schema desigi based on the specrfic technolog chosen do With the client's database administrator and staff update the repository specifications . . I— Ibased on the specific technology chosen for implementation unior and Senior I- II: BUILD PHYSICAL DATABASE STRUCTURES £33323" 9" "“3 I" II—fla Convert each entity in entity relationship diagram as a relational table IIJunior and Senior g I" ”—71: Convert each relationship in entity relationship diagram as a link between relational tables ”Junior and Senior i t , l l" ”s PROTOTYPBmBDATABASE I133?" ”m" i 1 I" --8a Gather and load With test data "Junior and Senior i a I" ~81). Test outputs, inputs, screens and other components II E I" --8c. Adjust database based on testing results and re-run tests II I" ~8d With the client’s database administrator and staff review test results II ,1 r" Saleefitj Emmet} * ‘5' San—635mm" ""'fi‘iinmfie§§ei""::] ' 2:. » a- _ —-- ,___-n_—..-(. ‘.--— ( LA 1 Done "_‘T .. " wflrfiwm’r ,g- _.1 4. u .I 35311 Elm-9W» Ewen-"Sand flmmmllaoew?fiy-L@fl46®fi§~ 168 3 DR‘iIIVNl Mannsnlt Internet Fxploter Item 26 a To select a line item click on the box under Select 0 To send selections to you- answer click on SEND TO WORK PLAN ANSWER below a To edit your answer close this window, 30 back to the Case Instructions window and select Work Plan Answer NOTE: Uponcltkngseniths screenwillrsflssh and clmfluchkaknbm the item you sebctedwereplacedtn youranswer. Whendom, cbsethnwmbw I .. null! lMs‘tr‘Isunluflluellflallp ; i i r ‘ WORK PLAN FOR: DATABASE omen '23? a M "n" 5 1 r' 5‘ umwrmcum TOOOOVER PROJECT “Exfd‘gmmk VF : [- »-5a 00 to the library and research the client's employees Junior and Senses ii .i [- --5ba Identtfy the entities of the client l; I" ”Sc. Identify associattons of the client Jum‘or and Senior | i [— ~-5d, Detenmne winch employees to zncluda in the database it r , PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM E1360“? “° "n" ; g r. gfzseview the technical aspects ofthe database to ensure it will work for the client's "J - : r and Slnior H '- ~-6b. With the client update technical aspects ofthe database to enstxe it mll work for the II :2] Done r—r‘ e m V; Miamm_ _JQEOIRM_SM“JQKSSMM,_HBD_IM_ ua_1,ta.&fl¢~g©fi_ns [— --5d Detenmne whrch employees to :nclude tn the database H “Heading, no rank H I" . PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM needed . l '— :jeadlsleview the techmcal aspects ofthe database to ensure it will work for the client‘s Junior and Senior ' ti l- ~-6b Wxth the chant update techntcal aspects ofthe database to ensure tt ml] work for the client's needs : , '— ~-‘6c.:Ntth the clientupdate repository aspects ofthe databaseto smeltmllwortforthe m“ ands . ll cuent 3 needs I‘ 7, BRING SENIOR IN TO BUILD PROJECT gift? °° "“k 1 .1 '— -7a Convert disgusts andtablesforthsdatabase beingbuiltforthe clientaslongas "J . andS . ‘} needed 3 [- --7b. Ask senior about work progess "Junior and Senior 4 r' us assmaua PIECES OF THE mower Im“““5 “° “n" 5 needed 2- [— »8a Gather and load wrth data Jumor and Senior l [- -—8b Test all the components of the database 1 l I— ~~8c. 
Adjust the projectto enme itworks 3 I“ b-8d Make sure allthe project pieces are assembled F SelectAflfl. E. cam Alti- 3 :u Sand 6% Plan Answer? u” j ' 2i bai'" T“ fl “‘ w‘ 7 WT” Imam-n" M 1 accumulate-:4 @Wadtfime 53m... I guts Seachfieals -el|g]oas£u_1__- 514;}.35flgé'fl— 7:533”? 169 3 llll‘.ll‘Jl .’ Mu Imull lnlvlnd l’xplolm Item 27 e To select a line item click on the box under Select Work PlanAnswer your answer When done, cbse this window, I To send selections to your answer click on SEND TO WORK PLAN ANSWER below a To edit your answer close this window, go back to the Case Instructions window and select NOTE: Upon cl'xh’mg send. this screen will refissh and clear the check mks, but the items you sebcted were placed in L45. Review the technical aspects ofthe database to ensure it will work for the client‘s eeds "finder and Senior l|>-6b. With the client update technical aspects ofthe database to ensure it will work for the .Q2. .al ..... O . We" 3““ PrejeetSq ll Con-hum r WORK PLAN FOR DATABASE 131-SIGN [£3333 “° N" F 5, MEET WITH CLIENT To 00 OVER PROJECT "2332‘? “° W" I" P-Sa Go to the library and research the client's employees Junior and Senior I" b-Sb. Identify the entities of the client Junior and Senior I" ”Sc. Identify associations of the client Junior and Senior I" --5d. Determine which employees to include in the database Junior :- . PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM 333‘ “° m" l" r "Junior and Senior , r“ rum-Hf 'fii ' film El 85.-awe lilwwmeBmwm-Jlevmw"EL-- fifwfiwwauwaéj ' Soiqmflififlme " l I" L-Sd Determine which employees to include in the database Junior r . PURCHASE COMPUTER EQUIPMENT TO SUPPORT PROJECT TEAM 0:335 “° "n" r— ;feadlfeview the technical aspects ofthe database to ensure it will work for the client‘s Junior and Senior I- 2.3311339: client update technical aspects ofthe database to ensue it will work for the Junior and Sem'or '— 23:“333: client update repository aspects ofthe database to ensure it will work for the Jum'or and Senior I' 7. BRING SENIOR IN TO BUILD PROJECT '13:? “° m“ I— £35330?“ ckagarns and tables for the database being built for the client as long as “Junior andSenior I" ~7b. Ask senior about work progess ”Jumor and Senior l' I ASSEMELE PIECES OF THE PROJECT 1:333 n° "n“ I" -—8a. Gather and load with data ”Junior and Senior I" ~8b. Test allthe components ofthe database Ibemor I" ~-8c. Adiust the prqect to ensure it works "benim I" -8d. Make sure allthe project pieces are assembled ”Junior and Senior $313M 2‘?er _... ..l.l.l . In an umletmdlt‘I‘I- al- .“...I I‘ll _ns-I '5one "f—T’imw I" o £191 ; $ng §]WorkP|amSeIeep...I guts smawmoasm - mu... ,gmmggfigigm 170 >2 —. -. 3 ItltMtll l 3 Mn [0‘ (IN lntr-rno I I xplori-I Item 28 e To select a line item click on the box under Select a To send selections to your answer click on SEND TO WORK PLAN ANSWER below a To edit your answer close this window, go back to the Case Instructions window and select Work Plan Answer . ii ire-a. _ 1 NOTE: Upon clicking send. the screen will rapes}: and clear are check marks, but tle Item you selected were placed in your answer. When doze, chase the window. 5““ Project Si... N Can-MM r‘ WORK PLAN POR: DATABASE DESIGN IE3? ”° “n“ r' b. REVIEW DATABASE REQUIREMENTS Kim" °° Mk I" ~5a Renew the entity relationship diagram “Junior i ‘ [- --5b Identify the entities to be designed ”Junior I" w5c. 
Identify assocrations to be desigied unior I" «5d Determine data distribution and access agate for employees Junior r' DESIGN THEIDCICALSCHEMA FOR THE DATABASE 343‘ mm“ I" “--6a Review the logical scheme which reflects the database management system chosen Junior and Senior ; l" ”—6b. With the client's database administrator and staff update the schema design based onthe . i ‘ I inner specrfic technology chosen Done H ' T“'— T" t“ :0 Im Ml Etc-OWL] @Wuknumsl figmm-..I[goaiiqm. “a... _;lE3®fiH5§)®B—7§0 a lJthUlll? Microsutt Internet Explorer I" |--5c. Identify associations to be desigied Junior I" “--5d. Determine data distribution and access rights for employees Junior 1' 1 r' “is DESIGN THELOGICALSCHEMA PORTHEDATABASE 3:35 °° “n" E. I" “-6a Review the logcal schema which reflects the database management system chosen "Junor and Senior I" «6b. With the client's database administrator and stafi'update the schema design based on the . specific technolog chosen I" «6c, With the client's database administrator and staff update the repository specifications unior based on the specrfic technolog chosen for Implementation t' "7 BUILD PHYSICAL DATABASE STRUCTURES thmg °° “‘1‘ I" “--7a Convert each entity in entity relationship diagram as a relational table unior : I" Jln'lb. Convert each relationship in entity relationship diagram as a link between relational tables Junior n. I‘ I8 PROTOTYPE THE DATABASE 3:35 “° W“ E I" “So Gather and load wrth test data ”Junior and Senior I" --8b Test outputs, inputs, screens and other components lbenior I" «8c, Adgust database based on testing results and re-run tests lbenior I" -8d. With the client's database administrator and staff review test results ”Junior Miami 1 ' ' ‘ ”'S'anri 56Wbik'Ptan Anewer' " W"J _ V ED“ "— P "P I— ? Internet ,2; ' w. a trail ,5 éjCaeelnstructiom-UJ @wmmsam...I €1KSSeachRanh-..fl§jp§!|lt:l:2:memifiu"b®fl RIDER . 
Appendix F: Administration of Experiment Materials

F.1 List of Work Plans for each of the Four Work Plan Order Scenarios

ONE ORDER
Data Model KS Items (Item #: file name): 1: dmsrvt2, 2: dmsrcn2, 3: dmmrcn1, 4: dmmrcl2, 5: dmmuvno, 6: dmmrvII, 7: dmsuvt2, 8: dmsrcl3, 9: dmsucn2, 10: dmsrvn1, 11: dmmucn1, 12: dmmucl2, 13: dmmuvt1, 14: dmsuvn1
Database KS Items (Item #: file name): 15: dbsrvn1, 16: dbmrcnI, 17: dbmucn1, 18: dbsrcn2, 19: dbmuvl1, 20: dbmrcl2, 21: dbmuvno, 22: dbmrvII, 23: dbsrvt2, 24: dbsrcl3, 25: dbsucn2, 26: dbsuvn1, 27: dbsuvl2, 28: dbmucI2

TWO ORDER
Data Model KS Items: 1: dmmuvl1, 2: dmmucn1, 3: dmsrvn1, 4: dmmrvt1, 5: dmmuvnO, 6: dmmucl2, 7: dmsucn2, 8: dmsuvl2, 9: dmsrvl2, 10: dmsrcl3, 11: dmsuvn1, 12: dmsrcn2, 13: dmmrcn1, 14: dmmrcl2
Database KS Items: 15: dbmrvI1, 16: dbsucn2, 17: dbmrcn1, 18: dbmucI2, 19: dbmuvnO, 20: dbmuvl1, 21: dbsuth, 22: dbsrcI3, 23: dbsuvn1, 24: dbmrcl2, 25: dbmucn1, 26: dbsrcn2, 27: dbsrvn1, 28: dbsrvl2

THREE ORDER
Data Model KS Items: 1: dmsuvn1, 2: dmmuvl1, 3: dmmucl2, 4: dmmucn1, 5: dmsrvn1, 6: dmsucn2, 7: dmsrcl3, 8: dmsuvl2, 9: dmmrvII, 10: dmmuvno, 11: dmmrcl2, 12: dmmrcn1, 13: dmsrcn2, 14: dmsrvtz
Database KS Items: 15: dbmuct2, 16: dbsuvl2, 17: dbsuvn1, 18: dbsucn2, 19: dbsrcl3, 20: dbsrvl2, 21: dbmrvl1, 22: dbmuvnO, 23: dbmrcl2, 24: dbmuvl1, 25: dbsrcn2, 26: dbmucn1, 27: dbmrcn1, 28: dbsrvn1

FOUR ORDER
Data Model KS Items: 1: dmmrcl2, 2: dmmrcn1, 3: dmsrcn2, 4: dmsuvn1, 5: dmsrcl3, 6: dmsrvt2, 7: dmsuvt2, 8: dmsucn2, 9: dmmucl2, 10: dmmuvnO, 11: dmmrvII, 12: dmsrvn1, 13: dmmucn1, 14: dmmuvt1
Database KS Items: 15: dbsrvl2, 16: dbsrvn1, 17: dbsrcn2, 18: dbmucn1, 19: dbmrcl2, 20: dbsuvn1, 21: dbsrcl3, 22: dbsuvt2, 23: dbmuvl1, 24: dbmuvnO, 25: dbmucl2, 26: dbmrcn1, 27: dbsucn2, 28: dbmrvII

F.2 First Page of Sign Up Sheet for Study Participation

Study Participation Sign Up

This is to sign up for:
- The chance for extra credit in class (15 points),
- Earning a few bucks (a potential for $13 for about an hour's time), and
- Learning about what it is like to be a management consultant.

All for participating in a research study on improving how people search knowledge management systems. The study will be held on computers in Room 105 in the Epply Building in the Business School. The study should last about 60-75 minutes. Please print your name and email in the time slot that fits your schedule (max. 20 / slot):

WEDNESDAY (November 6)
[Each time slot (10:15-11:45 a.m., 1:00-2:30 p.m., and 4:00-5:30 p.m.) is followed by twenty numbered blank lines for name and email.]

F.3 Tutorial Protocol

Knowledge System Study Tutorial Protocol (November 2002)

Maximize the Screen
0. Login
   a. Use your pilot ID
   b. Use the Login ID that was given to you
1. Overview
   a. Left hand side: always available to return
Reference only—means you cannot navigate to the next/previous page
2. Pay Scheme
   a. Paid based on the quality of your answer AND how quickly you finish
   b. The clock starts when you start the case
3. Data Modeling and Database Design—just a reminder of terms (ignore ???)
4. Work Plan Description
   a. You will be creating a work plan for a new client by re-using old ones
   b. A work plan consists of Project Steps and Consultant Ranks
5. Combining Work Plans
   a. You will need to combine pieces or whole work plans to create your new one
   b. You decide which and what to use
6. KS Description with Example Search Results (which are old Work Plans)
   a. A chance to look at and get familiar with work plans without the clock running
   b. Functionality has been disabled because this is just an example
7. Decision for You to Make
   a. This is where you will start the case (and start the clock)
   b. Click "YES" (there is a reminder message)
Maximize the Screen
8. Case Instructions
   a. You need to read in detail—there are 3 characteristics of a good work plan
   b. Search Results
      i. Have been run for you
      ii. Will get—Item and Rating
      iii. Will get one of the following—# of raters, % raters experts, or recommend also
      iv. Read what these are
   c. Will get Data Modeling work plans, then Database Design work plans
   d. You will need to:
      i. Figure out which one to open and which line items to use or not
      ii. Go to see/edit answer
         1. Re-order (don't worry about step order #'s)
         2. Delete (show a few, then all)
      iii. Go back to Search Results to select more
      iv. Go to see/edit answer
      v. Finished (are you sure?)
      vi. 4 questions
      vii. Belief questions (very important)
      viii. Thank you

F.4 Hand Out to Subjects with Login IDs

Hello and Welcome to the test of the Knowledge System Study
Thank you for agreeing to participate. To begin, just follow these instructions and the instructions on the screen. Have fun!

TO START THE PROGRAM:
1. Open Microsoft Explorer (do not use Netscape)
2. Enter the URL: http://mebulabusmsu.edu/knowledgesystems/
3. Enter your last name spelled as:
4. Enter your ID:

TO GET YOUR MONEY:
If you completed the entire exercise, including the survey questions at the end, you have earned money and extra credit points. To pick up your money, bring this sheet and stop by my office (Robin in N241 on the 2nd floor of the North Business Complex) at one of the following times:
Monday, November 18: 10 a.m. to 5 p.m.
Thursday, November 21: 10 a.m. to 1:30 p.m.
Or you can make an appointment by emailing me at postonrl@msu.edu. Thank you.

F.5 Session Control Log

Knowledge System Study Session Control Log—November 2002
Date:                              Time:
Computer Problems:
Login ID Problems:
Comments:
Saved Data From Database:
Number of Participants:            Number of No Shows:

Date:                              Time:
Computer Problems:
Login ID Problems:
Comments:
Number of Participants:            Number of No Shows:
Appendix G: Counts of Study Data by Treatment Condition

[Tables G.1 through G.6 in the source report counts of subject characteristics (e.g., gender, age, college year, and prior experience) and of rating condition calibration and quality performance calibration, broken out by treatment condition within each of the four experiments, together with a key to the treatment abbreviations (Match/Mism; low/high number of raters; low/high percentage of raters who are experts; low/high degree of filter sophistication). The scanned tables are not legible and their values are not reproduced here.]

Appendix H: Correlations Among Control Measures

[Table H.1 in the source reports the bivariate correlations among the control measures, with two-tailed significance flags. The scanned correlation matrix is not legible and is not reproduced here.]
20:28.60 NEE xEEEoNEEE< 180 Appendix I: Work Plan Answer Mean Measures by Treatment Condition Table 1.1 Number of Work Plans Used in Answer mean [standard deviation] Number of Work Plans Used in Answer Providing Content Ratings (baseline condition) Rating Level and Content Match 6.7 [2.78] Quality Mismatch 7.9 [3.86] Match/Mismatch F (p-value) 1.65 (.205) Providing Rater Sample Size Rater Sample Size (number of raters) Low High Rating Level and Content Match 4.6 [2.69] 5.4 [2.59] Quality Mismatch 8.8 [3.25] 9.0 [3.85] Match/Mismatch F (p-value) 42.053 (.000) Rater Sample Size F (p-value) .707 (.402) Match * Rater Sample Size F (p-value) .225 (.636) Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 6.6 [3.34] 4.9 [2.66] Quality Mismatch 7.7 [4.82] 8.0 [3.76] Match/Mismatch F (p—value) 8.627 (.004) % Raters Who are Experts F (p-value) .803 (.372) Match "‘ % Raters Experts F (p-value) 2.027 (.158) Providing Collaborative Filtering Collaborative Filtering Mme of sophistication) Low High Rating Level and Content Match 5.0 [2.56] 5.2 [2.31] Quality Mismatch 7.4 [4.17] 6.9 [5.07] Match/Mismatch F (p-value) 7.325 (.008) Filter Sophistication F (p—value) .038 (.845) Match * Filter Sophistication F (p-value) .195 (.659) 181 Table 1.2 Percentage of Lines in Answer From First Work Plan Accessed mean [standard deviation] % Lines in Answer From lst Work Plan Accessed Providing Content Ratings (baseline condition) Rating Level and Content Match 37% [25%] Quality Mismatch 23% [27%] Match/Mismatch F (p-value) 3.302 (.075) Providing Rater Sample Size Rater Sample Size (number of raters) Low flgh Rating Level and Content Match 44% [32%] M%i32%] Quality Mismatch 21% [21%] 22% [19%] Match/Mismatch F (p-value) 18.996 (.000) Rater Sample Size F (pwalue) .021 (.885) Match "' Rater Sample Size F (p-value) .019 (.889) Providing Rater Expertise Rater Expertise (% Raters Who are Experts). Low High Rating Level and Content Match 48% [31%] 56%[35%] Quality Mismatch 31% [25%] 28% [23%] Match/Mismatch F (p-value) 15.521 (.000) % Raters Who are Experts F (p-value) .260 (.611) Match * % Raters Experts F (p-value) 1.099 (.297) Providing Collaborative Filtering Collaborative Filterin (Egree of sophistication) Low High Rating. 
Level and Content Match 40% [31%] 40% [35%] Quality Mismatch 32% [27%] 26% [25%] Match/Mismatch F (p-value) 3.437 (.067) Filter Sophistication F (p-value) .241 (.624) Match "' Filter Sophistication F (p-value) .242 (.624) 182 Table 1.3 Percentage of Lines in Answer From Work Plan Rated Highest (5) mean [standard deviation] % Lines in Answer From Work Plan Rated 5 Providing Content Ratings (baseline condition) Rating Level and Content Match 41% [3 3] Quality Mismatch 14% [3%] Match/Mismatch F (p-value) 12.655 (.001) Providing Rater Sample Size Rater Sample Size (number of raters) Low High Rating Level and Content Match 71% [29%] 54% [33%] Quality Mismatch 17% [19%] 26% [23%] Match/Mismatch F (p—value) 63.673 (.000) Rater Sample Size F (p-value) .630 (.429) Match "' Rater Sample Size F (p-value) 6.1 16 (.015) Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 50%J36%] 64% [31%] Quality Mismatch 10% [14%] 26% [28%] Match/Mismatch F (p-value) 49.319 (.000) % Raters Who are Experts F (p-value) 7.763 (.006) Match * % Raters Experts F (p—value) .027 (.869) Providing Collaborative Filtering Collaborative Filtering (Egree of sophistication) Low High Rating Level and Content Match 55% [32%] 58% [34%] Quality Mismatch 20% [27%] 20% [30%] Match/Mismatch F (p—value) 37.208 (.000) Filter Sophistication F (p-value) .081 (.777) Match * Filter Spphistication F (p-value) .041 (.840) 183 Table 1.4 Percentage of Lines in Answer From Work Plan Rated High (4 or 5) mean [standard deviation] % Lines in Answer From Work Plan Rated 4 or 5 ProvidinLContent Ratings (baseline condition) Rating Level and Content Match 85% [23%] Quality Mismatch 68% [33%] Match/Mismatch F (p-value) 4.237 (.045) Providing Rater Sample Size Rater Sarmfle Size (number of raters) Low w Rating Level and Content Match 98% [5%] 87m27%] Quality Mismatch 74% [30%] 73M26%] Match/Mismatch F (p-value) 16.071 £000) Rater Sample Size F (p-value) 1.598 (.209) Match * Rater Sample Size F (p-value) 1.170 (.282) Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 88% [18%] 97% [8%] Quality Mismatch 77% [29%] 79% [24%] Match/Mismatch F (p-value) 12.746 (.001) % Raters Who are Experts F (p—value) 1.729 (.191) Match "' % Raters Experts F (p-value) .683 (.410) Providing Collaborative Filtering Collaborative Filtering (we of sophistication) Low High Rating Level and Content Match 89% [18%] 94% [15%L Quality Mismatch 72% [35%] 65% [41%L Match/Mismatch F (p—value) 16.230 (.000) Filter Sophistication F (p-value) .032 (.859) Match * Filter Sophistication F (p—value) 1.138 (.289) 184 Table 1.5 Number of Clicks mean [standard deviation] Number of Clicks Providing Content Ratings @seline condition) Rating Level and Content Match 42 [30] Quality Mismatch 48 [30] Match/Mismatch F (p-value) .649 (424) Providing Rater Sample Size Rater Sample Size (number of raters) Low Hfih Rating Level and Content Match 30 [19] 45 [42] Quality Mismatch 59 [39] 63 [49] Match/Mismatch F (p-value) 9.631 (.002) Rater Sample Size F (p-value) 1.513 (.222) Match * Rater Sample Size F (p-value) .506 (.479) Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 38 [21] 30 [22] Quality Mismatch 44 [31] 56 [41] Match/Mismatch F (p-value) 7.280 (.008) % Raters Who are Experts F (p-value) .171 (.680) Match * % Raters Experts F (p-value) 2.909 (.091) Providing Collaborative Filtering Collaborative Filteripg(_d_e_gree of 
sophistication) Low High Rating Level and Content Match 35 [24] 46 [32] Quality Mismatch 58 [63] 48 [39] Match/Mismatch F (p-value) 2.512 (.1 16) Filter Sophistication F (p-value) .000 (.987) Match * Filter Sophistication F (p-value) 1.614 (.207) 185 Table 1.6 Percentage of Clicks on Work Plans Rated High (4 or 5) mean [standard deviation] % of Clicks on Work Plans Rated 4 or 5 Providing Content Ratings (baseline condition) Rating Level and Content Match 78% [17%] Quality Mismatch 74% [21%] Match/Mismatch F (p-value) .484 (.490) Providing Rater Sample Size Rater Sarmle Size (number of raters) Low High Rating Level and Content Match 83% [12%] 78% [26%] Quality Mismatch 76% [18%] 74% fl8%] Match/Mismatch F (p-value) 2.428 (.122) Rater Sample Size F (p-value) 1.108 @295) Match " Rater Sample Size F (p-value) .139 (.710) Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low for» Rating Level and Content Match 76% [18%] 89%L12%] Quality Mismatch 78% [19%] 77% [16%] [Match/Mismatch F (p-value) 2.676 (.105) i % Raters Who are Egerts F (p-value) 3.334 (.071) Match * % Raters Experts F (p-value) 4.754 (.031) Providing Collaborative Filtering Collaborative FilterEngLeme of sophistication) Low Hfih Rating Level and Content Match 79% [17%] 77% [20%] Quality Mismatch 73% [26%] 74% [22%] Match/Migmatch F (p—value) .934 (.336) Filter Sophistication F (p—value) .007 (.934) Match "‘ Filter Sophistication F (p-value) .170 (.681) 186 Table 1.7 Number of Work Plans Opened mean [standard deviation] Number of Work Plans Opened Providing Content Ratings (baseline condition) Rating Level and Content Match 14.8 [7.00] Quality Mismatch 18.9 [7.79] Match/Mismatch F (p-value) 3.859 (.055) Providing Rater Sample Size Rater Sample Size (number of raterfl Low High Rating Level and Content Match 12.6 [5.39] 14.8]§.79] Quality Mismatch 17.9 [6.64] 18.5]§.13] Match/Mismatch F (p-value) 14.072 (.000) Rater Sample Size F (p-value) 1.340 (.250) Match "' Rater Sample Size F (p—value) .426 (.519) Providing Rater Expertise Rater Expertise 1% Raters Who are ExJLerts) Low Hfih Rating Level and Content Match 18.2 [15.49] 12.U§.62] Quality Mismatch 16.4 [7.41] 16.9 [6.77] Match/Mismatch F (p-value) .426 £515) % Raters Who are Experts F (p-value) 1.845 (.177) Match * % Raters Experts F (p-value) 2.755 (.100) Providing Collaborative Filtering Collaborative Filteflg (dggree of sophistication) Low High Rating Level and Content Match 13.2 [5.29] 15.9 [7.52] 7 Quality Mismatch 16.3 [7.18] 17.6 [7.25] Match/Mismatch F (p-value) 3.099 (.081) Filter Sophistication F (p-value) 2.127 (.148) Match * Filter Sophistication F (p-value) .294 (.589) 187 Table 1.8 Number of Work Plans Rated High (4 or 5) Opened mean [standard deviation] Number of Work Plans Rated 4 or 5 Opened Providing Content Ratings (baseline condition) Rating Level and Content Match 9.7 [3.78] Quality Mismatch 12.0 [2.88] Match/Mismatch F (p-value) 6.031 (.OISL Providing Rater Sample Size Rater Sample Size (number of raters) Low High Rating Level and Content Match 9.2 [3.30] 10.0 [3.95] Quality Mismatch 11.4 [2.52] 11.8 [2.67] Match/Mismatch F (p-value) 11.132 (.001) Rater Sample Size F (p-value) .936 L336) [Match * Rater Sarrnale Size F (p-value) .083 (.774) Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low HiLh Rating Level and Content Match 9.9 [3.31] 9.6 [3.34] Quality Mismatch 11.0 [3.19] 11.0 [2.8fl Match/Mismatch F (p-value) 16.071 (.000) % Raters Who are Experts F (p-value) 1.298 (.209) Match * % Raters Experts F (p-value) 1.170 
(.282) [Providing Collaborative Filtering Collaborative Filtering (dggree of sophistication) Low High Rating Level and Content Match 9.1 [3.00] 10.2 [4.09] Quality Mismatch 10.2 [3.35] 11.0 [2.87] Watch/Mismatch F (p-value) 1.969 (.164) [ Filter Sophistication F (p-value) 2.044 (.156) LMateh * Filter Sophistication F (p-value) .044 (.835) 188 Appendix J: Discussion Of Post Hoc Analysis Details J .1 Results of Post Hoc Statistical Analysis on Information Search Data Additional post hoc analyses were performed to investigate search process behaviors of subjects’ task performance. Post hoc analyses include 1) examining answers to post-task questions regarding beliefs, 2) investigating if subjects who knew their performance level is related to outcomes, 3) exploring information search processes based on both click stream and work plan answer data, and 4) studying initial search strategy effects on task performance. Each section contains a description of the data and a discussion of significant findings. J .1 .1 Answers to Post-Task Questions Afier subjects completed the experimental task, they were asked several questions regarding their beliefs, control measures and manipulations checks. The computerized experimental web pages presented the appropriate questions based on the treatment condition to which each subject was assigned. Programming errors cause twenty-six out of four hundred ten subjects to receive questions related to manipulation they were not exposed to during the experiment. To correct this problem, answers to these questions were removed prior to analysis. However, every subject but one had entered the value of “not applicable” for these questions indicating subjects were conscientiously answering them. This section analyzes the questions regarding subject beliefs, lending insight to search behaviors (see Table J .1 for answer to post-task questions regarding beliefs). 189 Table J .1 Answers to Post-Task Questions Regarding Beliefs (Following use a 10-point scale fi'om l = Strongly agree to 10=Strong1y disagree) Experi- Post Task Questions Mean Answers by Comparison of Mean Compare to meat (l=Stroneg Agree and Treatment Answers Hyp. And 10=Strong1y Disagree) Condition Normative Prediction* Base- I used the ratings Matched ratings Subjects in matched As line provided for each Search (2.4) ratings condition had predicted. Result item to decide Mismatched ratings stronger beliefs about what items to look at. (3.6) using ratings as input in deciding which work plans to look at than those in the mismatched condition (t=4.800, p=.000). I used the ratings Matched ratings Subjects in matched As provided for each Search (2.9) ratings condition had predicted. Result item to decide Mismatched ratings stronger beliefs about what to use in building (4.1) using ratings as input in my work plan answer. deciding which work plans to take line items from to use into their answer than those in the mismatched condition (t=4.674, p=.000). The ratings were based on Matched ratings Subjects in both the As the opinions of other (3.0) matched and predicted. consultants in the firm Mismatched ratings mismatched ratings (3.0) condition believed ratings were opinions of other and their was no difference in their beliefs (tr-.108, p=.914). Rater I used the ratings Match—Low # There was no difference Unexpected Sample provided for each Search Raters (1.88) between subjects in the —-those with Size Result item to decide Match—High # low (1.9) versus high low should what items to look at. 
Raters (2.96) (4.0) number of raters not use Mismatch—Low # conditions in their ratings as Raters (3.69) beliefs about using the much as Mismatch—High # ratings to select work those with a Raters (3.96) plans to look at high # of (t=2.655, p=.106). raters. The number of raters Match-Low # There was no difference Unexpected value provided in my Raters (2.21) between subjects in the —number of Search Results was based Match—High # matched (5.7) versus raters should on the opinions of other Raters (3.19) mismatch (5.3) ratings be consultants in the firm Mismatch—Low # conditions, they both considered Raters (2.38) believed the number of an objective Mismatch—High # raters was a value by Raters (2.75) subjectively determined both groups. value (t=.620,p=.536). Rater I used the ratings Match-Low % There was no difference As Exper— provided for each Search Experts (3.48) between subjects in the predicted, 190 ise Result item to decide Match—High % low (3.8) versus high but not what items to look at. Experts (2.76) (2.8) rater expertise significant. Mismatch-Low % conditions in their Experts (3.77) beliefs about using the Mismatch—High % ratings to select work Experts (3.53) plans to look at (t=1.146, p=.287). The level of rater Match—Low % There was no difference Unexpected expertise value provided Experts (3.59) between subjects in the —% raters in my Search Results was Match-High % matched (4.7) versus experts based on the opinions of Experts (2.76) mismatched (5.0) should be other consultants in the Mismatch-Low % ratings conditions, they considered firm. Experts (3.54) both believed the % of an objective Mismatch-High % raters experts was a value by Experts (3.39) subjectively determined both groups. value (t=.559,p=.578). Recom- I used the ratings Match—Low ‘ There was no difference Unexpected mend provided for each Search Sophist. (2.65) between subjects in the —those with Also Result item to decide Match—High low (2.7) versus high a low should what items to look at. Sophist. (2.78) (4.7) filter not use Mismatch—Low sophistication ratings as Sophist. (4.42) conditions in their much as Mismatch—High beliefs about using the those with a Sophist. (3.48) ratings to select work high filter plans to look at (t=.221, sophisticatio . p=. 639). 11. Trust Relying on ratings of the Matched ratings Subjects in mismatched As Ratings Search Results items was (4.7) ratings condition predicted. risky. Mismatched ratings believed relying on (4.2) ratings was more risky than those in the matched condition (t=2.117, p=.035). The ratings provided for Matched ratings Subjects in mismatched As Search Result items could (5.7) ratings condition predicted. not be trusted. Mismatched ratings believed ratings could (6.4) not be trusted more than those in the matched condition E3133IDOQ Ante- If the ratings provided Matched ratings There was no difference No cedents were inaccurate, it is (8.0) between subjects in the prediction. to because others in my firm Mismatched ratings matched versus Ratings were intentionally trying (7.8) mismatched ratings to mislead me. conditions, they both believed the if ratings were wrong, it was not because others in the company were intentionally trying to be misleading (t=.845, f3fl. If the ratings provided Matched ratings Subjects in the matched No were inaccurate, it is (6.9) ratings condition prediction. because others in my firm Mismatched ratin s believed if ratings were 191 did not know what the (6.0) wrong, it was not true ratings should be. 
because others in the company did not [wow what the true ratings should be more than those in the mismatched ratings conditions (t=3.497, p=.003. If the ratings provided Matched ratings Subjects in the matched No were inaccurate, it is (6.4) ratings condition prediction. because others in my firm Mismatched ratings believed if ratings were just do not know what (5.7) wrong, it was not Knowledge System items because others in the will be helpful to me. company did not know what items would be helpfitl to me more than those in the mismatched ratings conditions (t=3.497, p=.001). * See Chapter 4 for normative predictions leading to hypotheses. While answers to questions about ratings were consistent with expectations, answers to questions about other information provided were not consistent. Unexpected answers to post-task questions reveal that subjects did not believe the rater sample size and rater expertise were from objective sources as they were intended to convey. This is evidence that subjects may not fully understand the intended source of information provided. Subjects may believe the rater sample size is prone to manipulation and rater expertise is based on a subjective (i.e., from the correctness of ratings) instead of objective criterion arrived at separately from ratings. Additionally, mismatched ratings were expected to trigger the use of credibility indicators or content recommendations and when these are low, ratings should not be used, which does not appear to be the case. In fact, the opposite was found for the rater sample size and content recommendations experiments. Those in the matched ratings, low rate sample size or low filter sophistication conditions indicated they used ratings the most (mean = 1.88 and 2.65) while those in the mismatched ratings, high rater sample size or high filter SOphistication conditions indicated they used ratings the least (3.96 and 192 4.70). These differences between indicated rating usage are not significant between low and high rater sample size or filter sophistication (t=2.655, p=.106 and t=.221, p=.639). Meanwhile, rating usage does appear to be consistent with predictions in the rater expertise experiment. Those in the mismatched ratings, low rater expertise conditions indicated they used ratings the least (3.77) and those in the matched ratings, high rater expertise conditions indicated they used ratings the most (2.76). However, differences between indicated rating usage are not significant between low and high rater expertise (t=1.146, p=.287). This is evidence in support of the theory guiding hypotheses for rater expertise but may indicate the rater sample size and filter sophistication are overpowered by ratings values. Finally, subjects in the matched and mismatched conditions differed on what the reason for a mismatch might be. Differences between subjects are not surprising, as subjects in the matched condition may not have occasion to think about why ratings . might be wrong while those in the mismatched condition did because their ratings were mismatched. J .2 Subjects Who Knew Their Performance Level This section examines whether subjects who know how well or badly they did or how useful or unuseful ratings were do better than those who did not know. Subjects were asked whether their ratings matched content quality as well as how confident they were with their task answer. 
The following use a 10-point scale from 1 = Strongly agree to 10 = Strongly disagree (also found in Table 5.7 above):

• Self Calibration: I felt the "ratings" provided were actually consistent with the overall quality of their associated work plan.
• Confidence: I would like to run another search to look at more work plans, then possibly revise the work plan I submitted; I do not want to give the plan of work that I submitted to my manager; There are better answers than the one I submitted; I am confident my choices were the best ones possible (reverse coded).

Using this information, together with each subject's treatment condition and decision quality score, the following two dummy variables were created:

• Rating Condition Calibration: coded 1 if the subject was in the matched rating condition and selected a value of <= 4 on the Self Calibration scale above, or was in the mismatched rating condition and selected a value of >= 7; otherwise 0.
• Quality Performance Calibration: coded 1 if the subject scored >= 25 (= 36 x 70%) on decision quality and selected a value of >= 7 on the Confidence scale above, or scored <= 11 (= 36 x 30%) and selected a value of <= 4; otherwise 0.

Rating Condition Calibration was positively correlated with decision quality (r = .407, p = .000). Quality Performance Calibration was marginally significantly correlated with decision quality (r = .101, p = .056). When subjects knew whether the ratings were helpful or knew their performance level, they performed more effectively, but not faster or slower. The lack of correlation with decision time is not surprising, as the faster times expected in the matched ratings condition should offset the slower times expected in the mismatched ratings condition.

Counts by treatment for Rating Condition Calibration are found in Table G.5 and for Quality Performance Calibration in Table G.6 in Appendix G. Chi-square tests were conducted on Rating Condition Calibration and Quality Performance Calibration to check for possible differences across treatments within each of the four inter-related experiments. As expected, the chi-square statistics for Rating Condition Calibration indicate significant differences for all experiments (only marginally so for the rater sample size experiment), suggesting the manipulations may induce different uses of the information provided (see Table J.2 and Table J.3 for the chi-square statistics). Also as expected, the chi-square statistics for Quality Performance Calibration indicate no significant differences for any experiment, suggesting subjects knew when ratings were not matched with content quality even if they could not overcome it. Since differences between treatments for subjects who knew how well they performed were not significant, no further analysis of that data is presented.

Table J.2 Chi-Squared Statistics for Rating Condition Calibration by Treatment

Exprmt        Baseline              Rater Sample Size      Rater Expertise        Filter Sophistication
Chi-squared   7.567                 6.182                  9.121                  8.628
              (d.f. = 1, p = .006)  (d.f. = 3, p = .103)   (d.f. = 3, p = .028)   (d.f. = 3, p = .035)

Table J.3 Chi-Squared Statistics for Quality Performance Calibration by Treatment

Exprmt        Baseline              Rater Sample Size      Rater Expertise        Filter Sophistication
Chi-squared   .684                  .828                   1.274                  4.528
              (d.f. = 1, p = .408)  (d.f. = 3, p = .843)   (d.f. = 3, p = .735)   (d.f. = 3, p = .210)
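For concreteness, the following is a minimal sketch of how the two calibration dummies and the accompanying checks could be computed. It assumes a hypothetical per-subject data frame with columns condition ('match'/'mismatch'), self_calibration, confidence, decision_quality, and treatment; it illustrates the coding rules above rather than reproducing the study's actual analysis scripts.

```python
# Hypothetical column names; the thresholds follow the coding rules described above.
import pandas as pd
from scipy import stats


def add_calibration_dummies(df: pd.DataFrame) -> pd.DataFrame:
    """Add the two 0/1 calibration dummies (1 = calibrated, 0 = not)."""
    out = df.copy()

    # Rating Condition Calibration: matched subjects who agreed the ratings fit
    # content quality (<= 4 on the 10-point scale) or mismatched subjects who
    # disagreed (>= 7).
    matched = (out["condition"] == "match") & (out["self_calibration"] <= 4)
    mismatched = (out["condition"] == "mismatch") & (out["self_calibration"] >= 7)
    out["rating_condition_calibration"] = (matched | mismatched).astype(int)

    # Quality Performance Calibration: high scorers (>= 25 of 36) who were
    # confident (>= 7) or low scorers (<= 11 of 36) who were not (<= 4).
    high = (out["decision_quality"] >= 25) & (out["confidence"] >= 7)
    low = (out["decision_quality"] <= 11) & (out["confidence"] <= 4)
    out["quality_performance_calibration"] = (high | low).astype(int)
    return out


def calibration_checks(df: pd.DataFrame) -> None:
    """Correlate each dummy with decision quality and test counts across treatments."""
    df = add_calibration_dummies(df)
    for dummy in ("rating_condition_calibration", "quality_performance_calibration"):
        r, p = stats.pearsonr(df[dummy], df["decision_quality"])
        counts = pd.crosstab(df["treatment"], df[dummy])
        chi2, chi_p, dof, _ = stats.chi2_contingency(counts)
        print(f"{dummy}: r = {r:.3f} (p = {p:.3f}); "
              f"chi-square = {chi2:.3f}, d.f. = {dof}, p = {chi_p:.3f}")
```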
Thus, more subjects in these treatment conditions should have higher Rating Condition Calibration than those in other treatment conditions. The percentages of subjects who knew their correct ratings condition is shown in Table J .4. As expected, subjects with high credibility indicators and filter sophistication and rating matched with content quality exhibit the highest percentages of those who know their rating condition (i.e., for rater sample size 71%, rater expertise 83% and filter sophistication 83%). Surprisingly, subjects with low credibility indicators and filter 195 sophistication appear to know their rating condition least (i.e., for rater sample size 48%, rater expertise 42% and filter sophistication 52%). Thus, while high credibility indicators and filter sophistication appear to be informing subjects of ratings matched with content quality, low credibility indicators and filter sophistication do not appear to be informing subjects of ratings mismatched with content quality. Table J .4 Percentage of Subjects by Treatment Condition Who Knew their Rating Condition Providing Content Ratings (baseline condition) Rating Level and Content Match 71% Quality Mismatch 33% Providing Rater Sample Size Rater Sarrmle Size (number of raters) Low High Rating Level and Content Match 76% 71% Quality Mismatch 48% 54% Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 61% 83% Quality Mismatch 42% 67% Providing Collaborative Filtering Collaborative Filtering (d_egree of sophistication) Low High Rating Level and Content Match 7 5% 83% Quality Mismatch 52% 56% J .3 Information Search Process Measures Information search measures were also dynamically collected reflecting behaviors subjects followed regarding the selection and use of search result items. Information search measure have been widely used as a process tracing technique (Payne 1976; Svenson 1979). The measures come from two sources: the click streams each subject I96 followed while performing the task and the actual usage of search results in the work plan answer created. The measures captured from each source are listed in Table J .5. Measures were also gathered and analyzed separately for the data modeling and database design portion of the task with similar results as measures analyzed for both portions combined. Accordingly, only the combined measures that reflect behavior processes across both portions of the task are analyzed. Table J .5 Process Data Measures Data Source Click Stream Measures Work Plan Answer Measures Total number of clicks made Number of different work plans used in answer (maximum is 14) Percentage of total number of clicks made on Percentage of the lines in answer from work Ms rated a 4 or 5 the work plan first clicked on were Number of the available work plans opened Percentage of the lines in answer that (maximum is 14) were from the work plan rated a 5 Number of the available work plans rated a 4 Percentage of the lines in answer that or 5 opened (maximum is 7) from work plans rated a 4 or 5 Additional Information: Number of position in list of first work plan Total number of lines in answer clicked on Strategy included clicked on 1. first work plan rated 4 in the list of work plans, 2. work plan rated 5, 3. first work plan in the list, 4. a random work plan. Hypothesized predictions in Chapter 4 were based on several expectations of human behavior including: 1. 
Hypothesized predictions in Chapter 4 were based on several expectations of human behavior, including:

1. Regardless of treatment condition, knowledge seekers should select the highest rated content first and then move to the next highest rated content.
2. Higher rated content should be used more in the task when ratings matched content quality than when ratings mismatched.
3. Those in the mismatched treatment condition should expend more time and effort, indicated by selecting more work plans.
4. Those with low credibility indicators or filter sophistication should discount ratings and expend more time and effort, indicated by selecting more work plans.
5. Thus, those with mismatched ratings and low credibility indicators or filter sophistication should expend the most time and effort, while those with matched ratings and high credibility indicators or recommendation sophistication should expend the least.

The next sections examine whether the information search measures captured during the experimental trials support these expected behaviors.

J.3.1 Work Plan Answer Measures

Examining the source of the lines used to create the work plan answers, subjects in the matched ratings condition used fewer different work plan items in their answer than those in the mismatched ratings condition across all treatment conditions, and this difference was statistically significant in all cases except the baseline condition (see the ANOVA F-statistics for the main effect of match/mismatch ratings in Appendix I, Table I.1). Consistent with expectations, subjects with ratings matching content quality expended less effort, choosing to build their task answer from fewer work plans.

Further examination of the items included in the work plan answers indicates that, in all cases, subjects in the matched ratings condition used more lines in their answer coming from the first work plan they opened, from work plans rated highest (i.e., 5), and from work plans rated high (i.e., both 4 and 5) than subjects in the mismatched ratings condition. This difference was significant for all measures in all treatment conditions (see the ANOVA F-statistics for the main effect of match/mismatch ratings in Appendix I, Tables I.2, I.3, and I.4). Consistent with expectations, subjects with ratings matching content quality expended less effort, building their task answer more often from the first work plan opened, and used more highly rated content than those with ratings mismatching quality. This also suggests subjects in the matched condition opened the highest rated work plans first and used them in their answer. Thus, the work plan answer measures provide evidence that individuals may recognize but not overcome rating deficiencies.

Interestingly, there is a significant difference in the percentage of lines in the answer from work plans rated highest (i.e., 5) between those in the high versus low rater expertise treatments. Consistent with predictions, subjects with high rater expertise chose to include more lines in their answer from work plans rated highest than those with low rater expertise. This indicates rater expertise may influence whether individuals include highly rated content in their answer. Meanwhile, this finding does not hold when high and low rater sample size or filter sophistication is provided.
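The F-statistics cited in this section and reported in Appendix I come from two-way ANOVAs of each process measure on the match/mismatch factor and the credibility-indicator (or filter sophistication) factor. A minimal sketch of that style of test follows; the data frame and its columns (measure, match, indicator_level) are hypothetical, and this is not the study's actual analysis script.

```python
# Hypothetical data frame: one row per subject with the process measure, the
# match/mismatch factor, and the low/high indicator factor for that experiment.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols


def two_way_anova(df: pd.DataFrame, measure: str) -> pd.DataFrame:
    """Main effects of match and indicator level plus their interaction."""
    model = ols(f"{measure} ~ C(match) * C(indicator_level)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)   # Type II sums of squares


# Example (hypothetical): F-tests for the number of work plans used in the answer.
# print(two_way_anova(df, "plans_used_in_answer"))
```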
J.3.2 Click Stream Measures

Investigating the total number of clicks as an indication of the amount of effort expended on the task, subjects in the matched ratings condition clicked on fewer work plan items than those in the mismatched ratings condition in all cases. This difference was significant for the number of raters and the percentage of raters who are experts experiments (see the ANOVA F-statistics for the main effect of match/mismatch ratings in Appendix I, Table I.5). Consistent with expectations, subjects with ratings mismatching content quality expended more effort, clicking on and looking at more work plan items than those with ratings matching quality.

Further examination of click stream patterns indicates subjects in the matched ratings condition clicked on an item rated 4 or 5 more often than subjects in the mismatched ratings condition in all treatment conditions except when rater expertise was low; however, this difference was not significant in any treatment condition (see the ANOVA F-statistics for the main effect of match/mismatch ratings in Appendix I, Table I.6). While not significantly different, and consistent with expectations, subjects with ratings matching content quality selected higher rated items more often than those with ratings mismatching quality.

Interestingly, there is a significant difference in the percentage of clicks on work plans rated high (i.e., 4 or 5) between those in the high versus low rater expertise conditions. Consistent with predictions, subjects with high rater expertise selected more highly rated work plans than those with low rater expertise. This indicates rater expertise may influence whether individuals select highly rated content to review. Meanwhile, this finding does not hold when high and low rater sample size or filter sophistication is provided.

Finally, subjects in the matched ratings condition opened fewer work plans in total than subjects in the mismatched ratings condition in all cases except when rater expertise is low. This difference was significant in all but the rater expertise experiment. Also, subjects in the matched ratings condition opened fewer work plans rated high (i.e., 4 or 5) than subjects in the mismatched ratings condition in all cases. This difference was significant in all but the collaborative filter experiment (see the ANOVA F-statistics for the main effect of match/mismatch ratings in Appendix I, Tables I.7 and I.8). Consistent with expectations, subjects with ratings matching content quality expended less effort, selecting fewer work plans than those with ratings mismatching quality.

In summary, the information search measures analyzed above suggest those in the matched ratings condition used higher rated work plan items more and expended less effort than those in the mismatched ratings condition. Also, the measures suggest rater expertise may influence whether individuals select for review, and include in their answer, highly rated content, while rater sample size and filter sophistication do not. Thus, individuals may realize ratings are not accurate, but only with the help of rater expertise can they overcome the inappropriate ratings.

J.3.3 Correlations Between Click Stream and Work Plan Answer Measures

The correlations between the information search measures are provided in Table J.6, separately for matched and mismatched ratings. Many of the relationships between measures are as expected (e.g., whether ratings are matched or mismatched with content quality, opening more work plans is positively associated with more total clicks on work plans; r = .615 and .672). Some of the more noteworthy associations are discussed below:
1. The percentage of lines in the answer from work plans rated high (i.e., 4 or 5) is positively associated with the number of work plans opened that were rated high (i.e., 4 or 5) when ratings match, and negatively associated when ratings mismatch content quality.
2. The percentage of lines from the first work plan opened, the percentage of lines from work plans rated highest (i.e., 5), and the percentage of lines from work plans rated high (i.e., 4 or 5) used in the answer are all positively associated with decision quality when ratings match content quality but negatively associated when ratings mismatch content quality.
3. The number of total clicks on work plans, the number of work plans opened, and the number of work plans opened rated high (i.e., 4 or 5) are all positively associated with decision quality when ratings mismatch content quality.

These associations suggest subjects selected and used highly rated work plans when ratings matched content quality, but selected and then did not use them when ratings mismatched content quality. Also, when ratings mismatched content quality, subjects demonstrating more effort were able to achieve a higher quality decision (i.e., task answer). In summary, subjects may realize ratings match or mismatch with content quality, but may have difficulty overcoming a mismatch.

Table J.6 Bivariate Correlations Among Process Data Measures
[The two correlation matrices reported here in the source, one for the matched ratings condition and one for the mismatched ratings condition, with significance flagged at * = (p < .05) and ** = (p < .001), are not legible in the scan and are not reproduced.]

J.4 Measures for Initial Information Search Strategy

The information search process of each subject was objectively coded using click stream data (i.e., the pattern of clicks used to open work plans). The coding reflects whether the first click of the subject's click stream was: 1. on the first work plan rated 4 in the list provided, 2. on the one work plan rated 5, 3. on a random work plan from the list, or 4. on the first work plan listed. In addition, the "first rated 4 listed" and "rating is 5" codes were combined, since both involve following the highest rated items first, while the random and sequential ("first work plan listed") codes were combined, since they do not. However, if the treatment condition called for a list of work plans in which "the first work plan listed" was also "rated 4" and the subject selected the first work plan, it is ambiguous whether the subject selected that work plan because it was "the first work plan rated 4 in the list provided" or because it was "the first work plan listed". Because of this situation, seventy-seven subjects could not be coded. The remaining subjects' strategies are analyzed next.

As expected, based on the correlations shown in Table J.7, when ratings match content quality, reviewing highly (non-highly) rated items first is associated with improved (worse) decision performance. Unexpectedly, however, when ratings mismatch content quality, no initial search strategy is associated with decision performance.

Table J.7 Correlations of Strategy and Decision Quality and Decision Time

                     1st Four   Rating    Random      1st Work   1st Four Listed     Random &
                     Listed     is Five   Work Plan   Plan       & Rating is Five    1st Work Plan
Matched Ratings
  Decision Quality   .022       .302**    -.031       -.297**    .309**              -.309**
  Decision Time      -.051      -.072     -.126       .193*      -.099               .099
Mismatched Ratings
  Decision Quality   .027       -.083     -.023       .114       -.070               .070
  Decision Time      .003       .112      -.099       -.043      .116                -.116
[* = (p < .05), ** = (p < .001)]
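A minimal sketch of the coding rule above follows. The inputs (a subject's first_click, the displayed plan_order, and a ratings lookup) are hypothetical stand-ins for the study's click-stream format; the ambiguous case is returned as None, mirroring the seventy-seven subjects who could not be coded.

```python
from typing import Optional


def code_initial_strategy(first_click: str, plan_order: list,
                          ratings: dict) -> Optional[str]:
    """Classify the first work plan a subject opened into one of four strategies."""
    first_rated_4 = next((p for p in plan_order if ratings[p] == 4), None)
    rated_5 = next((p for p in plan_order if ratings[p] == 5), None)

    if first_click == plan_order[0] and first_click == first_rated_4:
        return None                       # ambiguous: first plan listed is also rated 4
    if first_click == first_rated_4:
        return "first rated 4 listed"
    if first_click == rated_5:
        return "rating is five"
    if first_click == plan_order[0]:
        return "first work plan listed"
    return "random work plan"


# The two combined categories used in the strategy tables (hypothetical helper):
def combined_strategy(strategy: Optional[str]) -> Optional[str]:
    if strategy is None:
        return None
    if strategy in ("first rated 4 listed", "rating is five"):
        return "highly rated first"
    return "non-highly rated first"
```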
However, if the treatment condition called for a list of work plans where the order involved “the first work plan listed” also being “rated 4” and the subject selected to look at the first work plan, it is ambiguous whether the subject selected the work plan because it was “the first work plan rated 4 in the list provided” or because it was “the first work plan listed”. Because of this situation, seventy-seven subjects could not be coded. The remaining subjects strategies are analyzed next. As expected, based on correlations shown in Table J .7, when ratings match content quality reviewing highly (non-highly) rated items first is associated with improved (worse) decision performance. Unexpectedly, however, when ratings mismatch content quality no initial search strategy followed is associated with decision performance. Table J .7 Correlations of Strategy and Decision Quality and Decision Time Strategy 1" Four Rating Random lSt 1‘” Four Random Listed is Five Work Listed & & 1" Plan Rating Work is Five Plan Matched Ratings Decision Quality .022 302'” -.031 -.297** .309** -.309** Decision Time -.051 -.072 -. 126 .193* -.099 .099 Mismatched Ratings Qecision Quality .027 -.083 -.023 .114 -.070 .070 Lecision Time .003 .112 -.099 -.043 .1 16 -.1 16 204 [* = (p<.05), ** = (p<.001)] Chi-square tests indicate subjects followed different strategies across match/mismatch ratings conditions (Chi-square statistic -= 15.759, d.f. = 3, p = .001), across treatment conditions (69.050, d.f. = 39, p = .002), but not across the four experiments (Chi-square statistic = 6.955, d.f. = 9, p. = .642). Prior to opening work plans, subjects should not know whether ratings were matched or mismatched with content quality, thus predictions suggest they should always open the highest rated item first. Consistent with expectations, most subjects in the matched ratings condition did Open the highest rated item first. Surprisingly, however, those in the mismatched ratings condition opened the highly rated and non-highly rated work plan first equally often. Counts of strategies by matched or mismatched ratings are shown in Table J .8. Table I .8 Strategy Counts by Match/Mismatch Ratings Quality Condition 1St Four Strategy 1st Four Rating Random 1st Work Random _ Listed is Five Plan Listed & & l8t Rating Work is Five Plan Match 10 71 27 23 81 50 Mismatch 13 66 23 6O 79 83 Totals 23 137 50 83 160 133 Strategy counts by treatment condition are shown in Table I .9. As expected, in almost all treatment conditions, subjects chose to review the highest rated work plan first. However, unexpectedly, subjects did not chose to review highly rated work plans first in three conditions: the match ratings baseline, matched ratings and low rater sample size, and matched ratings and low filter sophistication. The low rater sample size or filter sophistication may have suggested to subjects a lack of rating credibility and ratings were 205 discounted during initial work plan selection. Once again, subjects may have realized ratings were not accurate, but could not find a way to overcome the inaccuracy since decision performance did not improve. Strategy counts by experiment are shown in Table J .10. As expected, the most popular search strategy was for subjects to choSe to review the highest rated work plan first, while the second most popular was to select the first work plan listed. 
Table J .9 Strategy Counts by Treatment Condition Strategy 1‘t Rating Random 1’t 1‘t Random Four is Five Work Four & 1’t Listed Plan Listed Work & Plan Rating is Five Match Baseline l 6 5 8 7 l3 Mismatch Baseline 1 10 1 6 11 7 Match and Low # Raters l 8 3 10 9 l3 Mismatch and Low # 2 12 7 3 14 10 Raters ' Match and High # Raters 3 10 3 6 l3 9 Mismatch and High # l 12 3 3 l3 6 Raters Match and Low % Rater O 11 2 8 11 10 Expertise Mismatch and Low % 3 6 5 4 9 9 Rater Expertise Match and High % Rater 5 15 5 l 20 6 Expertise Mismatch and High % 1 12 l 5 l3 6 Rater Expertise Match and Low Filter 2 9 2 15 ll 17 Sophistication Mismatch and Low Filter 0 11 5 O 11 5 Sophistic’n Match and High Filter 1 7 3 12 8 15 Sophistication Mismatch and High Filter 2 8 5 2 10 7 Sophistic’n Totals 23 137 50 83 160 133 206 Table I .10 Strategy Counts by Experiment Strategy l“t Four Rating Random 1St lSt Four Random Listed is Five Work Listed & & 1" Plan Rating Work is Five Plan Baseline 2 l6 7 14 l 8 20 Rater Sample Size 7 42 16 22 49 38 Rater Expertise 9 44 13 19 53 31 Collaborative Filter 5 36 15 31 4O 44 Sophistic’n Totals 23 138 51 86 160 133 To better understand the effect of strategy on task performance, initial search strategy measures for the combined strategies of first found listed and rating is five (i.e., follow highly rated work plans) as well as random and first work plan listed (i.e., follow . non-highly rated work plans) were analyzed. ANOVA results indicate no differences across decision time for any treatment condition in all four experiments for either initial search strategy measure, thus decision time will not be discussed further. AN OVA results also indicate no differences across decision quality for any treatment condition in ' all four experiment for subjects following an initial search strategy of reviewing non- highly rated work plans first. However, ANOVA results do indicate significant differences across decision quality for both the rater sample size (F=4.101, p=.049) and filter sophistication (F = 9.742, p=.OO3). As expected, those reviewing highly rated work plans first do better when ratings match than when ratings mismatch with content quality. The means of decision quality by treatment condition for initial strategy to review highly rated work plans first is found in Table J .1 l and to review non-highly rated work plans first is found in Table J .12. 
207 Table J.11 Mean Decision Quality by Treatment Condition for Initial Strategy to Review Highly Rated Work Plans Mean [standard deviation] and n=sample size Providing Content Ratings (baseline condition) Rating Level and Content Match 23.3 [6.79] n=7 Quality Mismatch 10.7 [8.01] n=1 1 Providing Rater Sample Size Rater Sample Size (number of raters) Low High Rating Level and Content Match 28.5 [8.14] n=9 28.4 [7.30] n=13 Quality Mismatch 8.6 [8.15] n=14 7.2 [6.67] n=13 Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 28.3 [4.95] n=11 27.7 [6.46] n=20 Quality Mismatch 11.5 [10.85] n=9 6.5 [6.63] n=13 Providing Collaborative Filtering Collaborative Filtering (dtflee of sophistication) Low High Rating Level and Content Match 25.9 [9.02] n=ll 26.3 [9.98] n=8 Quality Mismatch 7.5 [7.85] n=11 10.8 [8.84] n=10 Table I .12 Mean Decision Quality by Treatment Condition for Initial Strategy to Review Non-Highly Rated Work Plans Mean [standard deviation] and n=sample size Providing Content Ratings (baseline condition) Rating Level and Content Match 17.9 [10.32] n=13 Quality Mismatch 13.1 [9.98] n=7 Providing Rater Sample Size Rater Sample Size (number of raters) Low High Rating Level and Content Match 27.7 [5.21] n=13 16.6 [11.40] n=9 Quality Mismatch 9.0 [9.52] n=10 12.5 [10.99] n=6 Providing Rater Expertise Rater Expertise (% Raters Who are Experts) Low High Rating Level and Content Match 19.9 [9.23] n=10 23.7 [7.23] n=6 Quality Mismatch 9.14 [5.61] n=9 5.9 [2.95] n=6 208 Providing Collaborative Filtering Collaborative Filtering (degree of sophistication) Low High Rating Level and Content Match 21.1 [8.84] n=17 24.8 [7.44] n=15 Quality Mismatch 8.9 [6.79] n=5 11.2 [10.34] n=7 Next, decision quality was regressed on initial search strategy controlling for treatment condition. Results suggest only when ratings match content quality does reviewing highly rated work plans first improve decision quality when rater sample size is provided (t=2.73, p=.018) and when rater expertise is provided (t=2.870, p=.006). 209