This is to certify that the dissertation entitled TOWARD A REPRESENTATION OF AUDITOR KNOWLEDGE: EVIDENCE AGGREGATION AND EVALUATION presented by ERIC LEROY DENNA has been accepted towards fulfillment of the requirements for the Ph.D. degree in Accounting.

Major professor: William E. McCarthy
Date: August 3, 1989

TOWARD A REPRESENTATION OF AUDITOR KNOWLEDGE: EVIDENCE AGGREGATION AND EVALUATION

By

Eric LeRoy Denna

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Accounting

July 1989

ABSTRACT

TOWARD A REPRESENTATION OF AUDITOR KNOWLEDGE: EVIDENCE AGGREGATION AND EVALUATION

By Eric LeRoy Denna

The process by which financial auditors determine the extent and timing of audit procedures is relatively unknown. Many have provided hypotheses regarding auditor judgment behavior, but there are few studies which describe the judgment process itself (see Felix and Kinney [1982]). This study uses the computational modeling approach to develop and evaluate two models of complex audit judgment. The first model represents the reasoning process of an expert reviewing an audit engagement. Although the expert demonstrated complex reasoning while performing the review, we were unable to represent the complex reasoning at the primitive level. A second model represents an expert's reasoning process while assessing the likelihood of material error while planning an engagement. In this model we were successful in both identifying and representing complex auditor judgment at the primitive task level.

The second model focuses on representing auditor knowledge, which includes a representation of the auditor's understanding of a client's operations, a client's personnel responsibilities, and a client's finances. The computational model uses frames, rules, and special purpose functions to represent the auditor knowledge in computer program form. The model is evaluated in three ways. First, we demonstrate the model's ability to assess the likelihood of material error for two audit concerns of an actual client engagement. Second, we demonstrate the model's ability to explain its knowledge of the client. Third, we demonstrate the model's ability to reason through the effect of two environmental events which were new to both the client and the auditor. Conclusions and extensions are offered regarding the impact of the research on developing an understanding of the audit judgment process and knowledge, on the training of inexperienced auditors, and on the development of more useful audit information systems.

Copyright by ERIC LEROY DENNA
May 1989

Although many have been extremely helpful throughout the process of completing this dissertation, none is more deserving of my heartfelt thanks than my beautiful wife. Therefore, this dissertation is lovingly dedicated to my sweetheart, Lyn.

ACKNOWLEDGMENTS

Above all else, I am grateful to a loving God who has provided me with an intellect capable of learning and with wonderful people to provide guidance and encouragement during my efforts to learn.
Of all those who have guided my learning, I am profoundly indebted to Bill McCarthy for his passion for excellence and sincere interest in doing quality research. Any degree to which this study meets a measure of excellence is due primarily to Bill's influence. Severin Grabski and Dewey Ward provided helpful and insightful comments during this research. As well, two experts gave freely of their time during the development and testing of the model. This study would not have been possible without their contribution.

TABLE OF CONTENTS

CHAPTER I - INTRODUCTION ..... 1
CHAPTER II - AUDIT JUDGMENT, EVIDENCE, RISK, AND THE AUDIT PROCESS ..... 11
THE AUDIT RISK MODEL ..... 11
Audit Risk Research ..... 13
Weaknesses of the Current Audit Risk Model ..... 15
Weaknesses of the Current Research Approach ..... 15
AUDIT EVIDENCE ..... 16
A Measurement Based Approach to The Evaluation of Evidence ..... 20
THE AUDIT PROCESS ..... 23
Audit Planning ..... 25
Evaluating Audit Evidence ..... 26
The Audit Review ..... 27
CHAPTER III - THE COGNITIVE MODELING APPROACH ..... 29
WHAT IS COGNITIVE MODELING? ..... 29
DEVELOPMENT OF THE COGNITIVE APPROACH ..... 29
ANALYSIS OF DIFFERENCES ..... 31
DIFFICULTIES IN COGNITIVE MODELING ..... 32
EXPERT SYSTEMS AS COGNITIVE THEORIES ..... 34
Level of The Expert Theory - Deep Knowledge ..... 35
Proper Use of The Expert System Methodology ..... 35
Past Expert System Research in Accounting ..... 36
KNOWLEDGE ACQUISITION ..... 39
KNOWLEDGE REPRESENTATION ..... 40
Knowledge Representation Using Logic ..... 40
Knowledge Representation Using Rules ..... 41
Knowledge Representation Using Frames ..... 42
Aggregation ..... 42
Generalization ..... 44
Sequencing ..... 44
A Semantic View of a Frame ..... 46
THE COMPUTATIONAL MODEL ..... 46
Programming Languages ..... 46
Expert-System Shells ..... 48
CHAPTER IV - STUDYING THE AUDIT EVALUATION PROCESS ..... 49
SELECTING THE STUDY EXPERT AND INDUSTRY ..... 49
PREPARING TO STUDY THE AUDIT REVIEW TASK ..... 50
Becoming Familiar With the Client ..... 50
Reviewing Matters for the Engagement Partner ..... 51
Reviewing the Annual Report ..... 51
Resolving the Review Exceptions ..... 52
AUDIT MANUAL GUIDELINES AND TRAINING ..... 52
THE MODEL DEVELOPMENT PROCESS ..... 53
A MODEL OF THE REVIEWER THOUGHT PROCESS ..... 55
Is the Assertion Troublesome? ..... 55
Analytical Procedures Results ..... 55
Materiality ..... 59
Environmental Factors ..... 59
What Are the Probable Causes of the Problem? ..... 60
Does Such a Cause Exist? ..... 61
Is There Sufficient Justification for the Troublesome Assertion? ..... 61
Does the Assertion Presentation Adhere to the Applicable Standards? ..... 61
Effect of the Assertion on the Review? ..... 62
FRED - A RESEARCH PROTOTYPE EXPERT SYSTEM ..... 62
CONTRIBUTIONS OF STUDYING THE REVIEW PROCESS ..... 65
Speed of Reasoning and Lack of Documentation ..... 65
Nature of the Review Task ..... 66
Individuality of the Review Process ..... 66
CHARACTERISTICS OF THE AUDIT PLANNING PROCESS ..... 67
Speed of Reasoning and Lack of Documentation ..... 67
Nature of the Planning Task ..... 67
Individuality of the Planning Process ..... 68
CHAPTER V - A MODEL OF AUDIT PLANNING JUDGMENT ..... 69
THE RESEARCH PROCESS ..... 72
AN OVERVIEW OF THE MODEL DEVELOPMENT PROCESS ..... 75
A RECONSTRUCTIVE MODEL OF LME REASONING ..... 77
THE LME JUDGMENT PROCESS ..... 80
Assess Changes in Overall Inherent Risk ..... 80
Perform Analytical Procedures ..... 82
Assess the Inherent Risk of Each Audit Concern ..... 83
Assess Control Risk for the Audit Concern ..... 85
Assess Likelihood of Material Error ..... 86
A MODEL OF AUDITOR CLIENT KNOWLEDGE ..... 86
Knowledge of Client Operations and its Environment ..... 88
Knowledge of Client Personnel and Responsibilities ..... 90
Knowledge of Client Finances and Financial Reasoning ..... 90
APE REPRESENTATION METHODS ..... 93
Representation of the APE LME Control Structure ..... 93
Representation of the APE LME Hook to Client Knowledge ..... 95
Representing APE Knowledge of Client Operations ..... 96
APE MODEL EVALUATION ..... 97
APE Explanation Capabilities ..... 98
APE Deformation Reasoning ..... 100
CHAPTER VI - SUMMARY AND EXTENSIONS ..... 104
RESEARCH EXTENSIONS ..... 110
Testing the Model's Generalizability ..... 110
Pedagogical Use of a Computational Model ..... 111
Designing More Useful Audit Information Systems ..... 112
APPENDIX A - DETAILED DESCRIPTIONS OF APE EVENTS ..... 113
APPENDIX B - APE DIALOGUE TRANSCRIPTS ..... 124
APPENDIX C - APE ARCHITECTURE ..... 131
LIST OF REFERENCES ..... 136

LIST OF TABLES

TABLE 1 - TABLE OF PROPOSITIONS AND COROLLARIES ABOUT PROFESSIONAL JUDGMENT IN PUBLIC ACCOUNTING (PJPA) ..... 3
TABLE 2 - PREVIOUS ES WORK IN ACCOUNTING ..... 37
TABLE 3 - FRED - DOMAIN KNOWLEDGE DETAILS ..... 57

LIST OF FIGURES

FIGURE 1 - THE EXPERT SYSTEM RESEARCH ENVIRONMENT ..... 7
FIGURE 2 - DETERMINING THE AMOUNT OF EVIDENCE TO COLLECT ..... 12
FIGURE 3 - THE ROLE OF EVIDENCE IN THE AUDIT PROCESS ..... 17
FIGURE 4 - AUDIT EVIDENCE EVALUATIVE FACTORS CONTAINED IN SECTION 326 ..... 19
FIGURE 5 - THE MEASUREMENT BASED APPROACH ..... 22
FIGURE 6 - THE AUDIT OPINION FORMULATION PROCESS ..... 24
FIGURE 7 - AGGREGATION IN THE PHYSICAL WORLD ..... 43
FIGURE 8 - AGGREGATION IN ACCOUNTING DATA ..... 45
FIGURE 9 - AN EVENT HIERARCHY USING FRAMES ..... 47
FIGURE 10 - KNOWLEDGE ACQUISITION METHODS AND RESULTS (FRED) ..... 54
FIGURE 11 - FRED - REASONING STRATEGY ..... 56
FIGURE 12 - FRED - INFERENCE NET REVIEW CONCLUSIONS ..... 63
FIGURE 13 - FRED - INFERENCE NET TROUBLESOME ASSERTION CONCLUSIONS ..... 64
FIGURE 14 - DECOMPOSITION OF AUDIT PLANNING JUDGMENT ..... 71
FIGURE 15 - AUDIT PLANNING PROCESS ..... 73
FIGURE 16 - KNOWLEDGE ACQUISITION METHODS AND RESULTS (APE) ..... 76
FIGURE 17 - LME REASONING PROCESS ..... 81
FIGURE 18 - HOOK TO CLIENT KNOWLEDGE DURING LME JUDGMENT ..... 84
FIGURE 19 - LME INFERENCE NET ..... 87
FIGURE 20 - APE MODEL OF OPERATIONS ..... 89
FIGURE 21 - TOP APE INVENTORY SCRIPT ..... 91
FIGURE 22 - ORDER INVENTORY DETAILS ..... 92
FIGURE 23 - APE FINANCIAL REASONING KNOWLEDGE ..... 94
FIGURE 24 - LOW PRICE LEADER REASONING ..... 101
FIGURE 25 - TRUCKERS' STRIKE REASONING ..... 103
FIGURE 26 - AUDIT COMPUTATIONAL MODELING RESEARCH ..... 109
FIGURE 27 - APE INVENTORY SCRIPT ..... 114
FIGURE 28 - APE - ORDER INVENTORY DETAIL ..... 115
FIGURE 29 - APE - RECEIVE WAREHOUSE INVENTORY DETAIL ..... 116
FIGURE 30 - APE - PRICE WAREHOUSE INVENTORY DETAIL ..... 117
FIGURE 31 - APE - SHIP WAREHOUSE INVENTORY ..... 118
FIGURE 32 - APE - ORDER STORE DETAIL ..... 119
FIGURE 33 - APE - RECEIVE STORE INVENTORY DETAIL ..... 120
FIGURE 34 - APE - DISPLAY STORE INVENTORY DETAIL ..... 121
FIGURE 35 - APE - SELL STORE INVENTORY DETAIL ..... 122
FIGURE 36 - APE - RECEIVE STORE DELIVERY DETAIL ..... 123
FIGURE 37 - AUDITOR CLIENT KNOWLEDGE ..... 132
FIGURE 38 - PHYSICAL SCHEMA OF LME ASSESSMENT ..... 133
FIGURE 39 - PHYSICAL SCHEMA OF LME ASSESSMENT (CONT) ..... 134
FIGURE 40 - APE-EXPLANATION/DEFORMATION LISP CODE STRUCTURE ..... 135

CHAPTER I
INTRODUCTION

For years researchers have been intrigued by the ability of an auditor to determine whether a particular piece of information can be considered sufficient competent audit evidence (see Mautz [1958, p. 40]). The mental process of aggregating and evaluating evidence lies at the very center of auditing (Mock and Wright [1980], p. 61). Evidence evaluation in auditing is performed by an individual who can effectively utilize large and complex stores of data dealing with a client and its environment. However, little is known about the mental processes involved when an auditor evaluates evidence. As a result, the audit profession is left to hope that audit experience and some supplementary firm training will be sufficient to help the auditor acquire the ability to evaluate audit evidence.
Auditing standards give a definitive description of the normative guidelines for evidence evaluation:

Sufficient competent evidential matter is to be obtained through inspection, observation, inquiries, and confirmations to afford a reasonable basis for an opinion regarding the financial statements under examination (AICPA, Section 326.01).

However, "the nature, timing, and extent of the procedures to be applied on a particular engagement are a matter of professional judgment" (para. 12). Although standards are given regarding the available methods for gathering evidence, very little is available to guide the professional in performing the evaluation process itself.

Felix and Kinney [1982], Mock and Wright [1980], Mautz [1958], and others have recognized the need for research to give guidance in the area of evidence evaluation. But, as Mock and Wright note, "... an overall approach for evaluating evidence has not been presented" (p. 61). Additionally, Felix and Kinney [1982] state:

We know of no comprehensive survey of audit evidence aggregation for even single accounts... State descriptions of this complete process as a series of auditor choices or decisions do not exist (p. 266).

Additionally, Gibbins [1984] observes:

We do not yet have a good understanding of what happens when experienced people, such as public accountants, use their judgment to make decisions that matter, amid the pressures, constraints, dangers, and opportunities of their everyday environment (p. 103).

Gibbins' observation is made notwithstanding the tremendous number of studies of judgment in accounting and auditing, of which Ashton [1982, 1983], Joyce and Libby [1982], Libby [1981], and Waller and Felix [1983] are only a few. Motivated by the findings of many researchers who question conventional assumptions about the study of judgment (e.g., Abelson [1981], Bosk [1979], Kahneman and Tversky [1982], Nisbett and Ross [1980], Tversky and Kahneman [1981], and especially Hogarth [1981]), Gibbins [1984] presents a summary of audit judgment research by identifying 21 propositions and corollaries regarding professional judgment in public accounting. The propositions and corollaries proposed by Gibbins (see Table 1) are presented "to work toward an empirical (scientific) theory of the natural, everyday process of professional judgment in public accounting" (p. 104). The following subset of phenomena is especially important to this study:

1. audit experience and training produces judgment templates (frames for guiding the auditor in future decisions) (P1)
2. templates are stored in long-term memory (P2)
3. template attributes are shaped by the environment (P3)
4. judgment begins with a search for a template (P9)
5. templates specify conscious response preferences (P14)
6. conscious judgment strategies are involved in guiding judgments (P20).

TABLE 1
TABLE OF PROPOSITIONS AND COROLLARIES ABOUT PROFESSIONAL JUDGMENT IN PUBLIC ACCOUNTING (PJPA)

1. The judge's experience (accumulated learning)
P1: Experience produces structured judgment guides ("Templates")
C1(i) Templates exist prior to an event which triggers its use
C1(ii) Greater experience results in more efficient use of memory
C1(iii) Templates are more complete for more common tasks
P2: The templates are maintained in long-term memory
P3: Template attributes are shaped by the environment
C3(i) Some templates are more ready for use than others
2. The triggering event (stimulus)
P4: The environment is subjectively perceived
C4(i) Factors limiting perception also limit judgment
P5: Templates are continuously updated

3. The judgment process
P6: Judgment is a continuous process
P7: Judgment is an incremental process
C7(i) Routine judgments respond to the short term
C7(ii) Routine judgments avoid limits on future responses
P8: Judgment is a conditional process
P9: Judgment begins with a search for a template
P10: Template selection depends on circumstantial fit
C10(i) Routine template selection is based on past learning
C10(ii) Perception and search continue until a template is found
P11: Routine judgment is not, and need not be, conscious
C11(i) Explaining one's judgment involves rationalization
C11(ii) One's explanations correlate with common templates
P12: The judgment environment is incompletely perceived
P13: Personal characteristics affect template selection

4. The decision/action (response)
P14: Templates specify conscious response preferences
C14(i) As outputs, preferences are subject to imperfections
C14(ii) Preferences are based on past actions and learning
P15: Preferences and actions are consciously bridged
P16: The bridging process is instrumental, not probabilistic
C16(i) Preferences and consequences are instrumentally connected
P17: The decision/action must be justifiable
C17(i) Some information is to justify choice, not make it
C17(ii) Justification includes some rationalization
P18: Bridging evaluations tend to emphasize the downside

5. Propositions about non-routine PJPA
P19: Conscious judgment is a response to the circumstances
P20: Conscious judgment strategies also guide intervention
C20(i) Mental "red flags" prompt conscious intervention
C20(ii) Complex responses need conscious implementation
P21: Fully conscious professional judgment is infrequent

(source: Gibbins [1984, p. 121])

Interestingly, although these phenomena have been assumed to exist, little is known as to how such mental phenomena actually occur. Our intent is to provide detailed descriptions of the judgment process and auditor knowledge Gibbins refers to using available computer technology. The expected benefits of this approach will be discussed at the end of this section.

The complex characteristics of human information processing have been largely ignored by traditional stimulus/response research designs. Rather than incorporate the complexity, the complexity often is assumed away or considered insignificant in order to fit a research effort to the constraints of a particular methodology for which the researcher has an affinity. Therefore, most studies of auditor judgment have focused on auditor behavior. If, however, the goal of audit judgment research is to understand the audit judgment process rather than simply the results of auditor judgment (auditor behavior), then researchers should be developing an understanding of the process itself. Without a detailed description of auditor judgment, any "theory" testing will simply be logical "guesses" as to the nature of auditor judgment, with chance guiding our investigation rather than reason.

Given this challenging environment, studying mental processing has been greatly enhanced by the advent of the computer. Nearly all researchers have benefited from the computational power of the computer when generating numeric data. However, numeric computation is only an indirect benefit of the computer for the cognitive scientist.
Of far greater value is the development of symbolic processing by which low level human reasoning can be represented. For researchers interested in studying human reasoning (cognition), the "... use of computers in helping us understand cognition" involves using the computer as "a testbed for our ideas about what the mind does" (Schank and Hunter [1985], p. 143).

For years researchers have used the idea of simulating human information processing in the computer.[1] Each of these projects involved academic researchers attempting to simulate valuable information processing tasks performed by human experts. This approach of simulating the intelligent mental processes of human experts has been labeled the expert system (ES) approach (Waterman [1986], pp. 4-6). The following are examples of this type of research:

1. PUFF - At Stanford University, researchers developed a model of expertise used to diagnose the presence and severity of lung disease in a patient given data from respiratory tests. The system was implemented using the EMYCIN ES shell, which represents the expertise using rules combined with goal directed inferencing (see Aikins et al. [1983]).

2. ExperTAX(SM) - Recently, the accounting firm of Coopers & Lybrand captured the expertise of firm experts in the area of corporate tax accrual and planning using the ES approach. This system is viewed as the first successful commercial application of ES technology within the accounting profession (see Shpilberg and Graham [1986]).

3. AUDITPLANNER - The ES methodology has also been used recently to enhance our understanding of the audit judgment process. Steinbart [1984; 1987] used the ES approach to develop a model of the decision processes of an expert while setting materiality during the planning of an audit. This research differs from both PUFF and ExperTAX(SM) in that the focus was on developing a macro level decision model of a single expert rather than on gathering the expertise of a number of individuals to provide a commercial product.

4. GCX - Most recently, Selfridge and Biggs [1988] have studied the reasoning strategy and domain knowledge of an audit expert performing a going concern judgment during the audit engagement. The study is the first attempt to provide a model of auditor knowledge at its deepest levels of reasoning. Rather than being restricted to the use of rules, GCX utilizes frames, rules, procedures, and other current representation methods to model the auditor's reasoning.

[1] An exhaustive review of efforts to use computer technology to simulate human cognition can be found in Hayes-Roth, Waterman, and Lenat [1983], and Waterman [1986].

The intent of this study is quite similar to that of Selfridge and Biggs. Specifically, this study will use the ES research approach to represent the mental processes of an auditor while planning or reviewing an audit engagement. Figure 1 illustrates the research setting for this study. The researcher's (knowledge engineer) role is to elicit from an expert (an expert auditor) the knowledge used to perform a decision task involving expertise (audit planning and reviewing). The researcher then represents the knowledge in computational form (a computer program). Once the expert knowledge is captured in computational form it is known as a knowledge base representing the auditor's expertise.

[Figure 1 - The Expert System Research Environment: the knowledge engineer elicits knowledge from the expert and encodes it in a knowledge base interpreted by an ES shell (adapted from McCarthy [1987])]
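To make the notion of a knowledge base concrete, the following sketch is a minimal, invented illustration of the kind of artifact a knowledge engineer produces: a frame standing in for one of Gibbins' judgment templates, a handful of if-then rules, and a goal-directed (backward-chaining) interpreter in the spirit of the EMYCIN-style shells mentioned above. The slot names, rules, and facts are assumptions made only for illustration; they were not elicited from this study's experts and are not taken from the FRED or APE models developed in later chapters.

```python
# A miniature "knowledge base" of the kind Figure 1 describes.  All content
# below is illustrative only (not from the study's experts, FRED, or APE).

# A frame standing in for one of Gibbins' "judgment templates": slots for the
# cues that trigger it and the responses it makes available (P1, P9, P14).
INVENTORY_TEMPLATE = {
    "name": "slow-moving inventory",
    "trigger_cues": {"inventory turnover down", "gross margin down"},
    "response_preferences": ["expand price tests", "review obsolescence reserve"],
}

# Rules of the form (conclusion, conditions that must all hold).
RULES = [
    ("inherent risk is high", ["inventory turnover down", "industry is volatile"]),
    ("likelihood of material error is high", ["inherent risk is high", "controls are weak"]),
]


def prove(goal, facts, rules):
    """Backward chaining: a goal holds if it is a known fact or a rule derives it."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(prove(c, facts, rules) for c in conditions)
               for conclusion, conditions in rules)


if __name__ == "__main__":
    facts = {"inventory turnover down", "industry is volatile", "controls are weak"}
    if INVENTORY_TEMPLATE["trigger_cues"] & facts:  # template selection by cue overlap (P9, P10)
        print("template:", INVENTORY_TEMPLATE["name"])
        print("responses:", INVENTORY_TEMPLATE["response_preferences"])
    print("material error likely:", prove("likelihood of material error is high", facts, RULES))
```

In an actual shell the interpreter, explanation facilities, and knowledge-editing tools are supplied by the shell itself; only the domain knowledge changes from one application to the next.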
The primary motivation for this study is to provide a description of an auditor's judgment process and knowledge involved in a specific audit judgment task. This approach differs from the more traditional behavioral studies which simply identify the statistical association between various cues and auditor behavior. The need for descriptive research in auditing has been emphasized by Felix and Kinney [1982], wherein they identified the lack of descriptive audit judgment research in three of the four phases of the audit process (see pp. 252-253, 262, 266).

In addition, Felix and Kinney also note that there are even fewer audit studies which test hypotheses (p. 267). Why is there so little hypothesis testing in audit research? A possible answer might be found in the Libby and Lewis [1977] observation that often, simply borrowing psychological theories for audit research is inappropriate due to (1) the complexity of the audit environment and setting, (2) the professional nature of the process, and (3) the lack of outcome feedback. To date, however, nearly every study, including Libby and Fredrick [1988], has done just that -- borrowed a psychological theory and tested its adequacy in an auditing context. It is our contention that more descriptive studies of auditor judgment would result in more usable models and theories. With a larger number of carefully developed models and theories of auditor judgment we would have more substantive hypothesis-testing studies.

The ES approach provides the researcher with a useful representation and testing environment to develop detailed descriptions of the auditor decision process. Using the ES approach, auditor judgment descriptions are represented explicitly in computational form (i.e., computer program code). Critics may confuse the development of a computer program with the real issue of developing a theory of human cognition. Although computer programming will be involved, "the real results will be a new kind of understanding of ourselves, an understanding that is ultimately much more valuable than any program" (Schank and Hunter [1985], p. 155). Or, as Davis and Lenat [1982] explain:

The aim here is thus not simply to build a program that exhibits a certain specific behavior, but to use the program construction process itself as a way of explicating knowledge in the field, and to use the program text as a means of expression of many forms of knowledge about the task and its solution (p. 471).

Like the econometric model, the computer program is an artifact which represents some portion of reality. The difference lies in the intent of the models. The econometric model makes no attempt to represent the reasoning process of the individual decision maker. Rather, the model's performance is considered acceptable when economic characteristics are predicted within some limit of tolerable error. However, the simulation of decision making using a computer concentrates on representing the knowledge and reasoning which link the receipt of stimuli to a particular response. Simply knowing that a specific stimulus has some measurable effect on a particular output really tells us little until we can answer the questions "how?" and "why?" Accordingly, the ES approach is satisfied when the model not only predicts the expert decision result, but when it explicitly represents a model of the expert's decision processes which enables us to describe how and why the judgment occurred.

A second reason for this study is to work toward an improved means of training auditors to perform audit judgment tasks.
By having a representation of the decision processes of one who is viewed as an expert, a basis is provided for the training of others involved in learning how to perform the independent review task. Finally, a third motivation for this study is to provide a contribution toward the design of a more useful audit information system. By improving our understanding of the auditor judgment process, we can better identify the information needs of one performing an evidence evaluation task. As a result, uncertainty regarding the information requirements is reduced. In turn, this reduction increases the likelihood of developing a useful system to assist the decision maker. Therefore, the intent of this study is to make a contribution toward enhancing the quality and usefulness of audit research, auditor training, and audit information systems.

DISSERTATION ORGANIZATION

This dissertation is divided into six parts. First, we review issues relating to the study of audit judgment, focusing on the planning and evaluation of audit evidence. Second, we introduce the idea and approach of cognitive modeling. After discussing the link between the expert system approach and cognitive modeling, we present a brief review of knowledge acquisition, knowledge representation, and computational modeling. Third, we present a description of the research methodology to be used for studying both the audit planning and evaluation tasks. Fourth, we analyze results from the study of the audit evaluation task. Fifth, we present the results and analysis of studying audit planning. Sixth, we will conclude with a brief summary of this research study, discuss its implications, and provide some suggested extensions.

CHAPTER II
AUDIT JUDGMENT, EVIDENCE, RISK, AND THE AUDIT PROCESS

The purpose of this section is to discuss the issues surrounding the planning and evaluation of audit evidence. To do so we first discuss the relationship between risk and audit evidence and the related research. We then review and evaluate models of audit evidence. Lastly, we discuss the audit tasks involving auditor judgment of risk and how a descriptive study of these tasks could enhance efforts to understand audit evidence evaluation tasks.

THE AUDIT RISK MODEL

The auditor's focus is to plan a means of collecting sufficient evidence to evaluate the fairness of a set of client assertions (the financial statements). In addition, the auditor evaluates the audit plan itself and the evidence once it is collected. The planning and evaluation of evidence is performed by all members of the audit staff throughout the audit process. However, the engagement partner remains responsible for judging whether sufficient competent evidence is planned for and collected to support or refute the assertions of the financial statements. Additionally, at the close of the audit, the engagement partner must decide if the overall level of evidence collected is still sufficient and properly presented to adhere to reporting standards.

With each engagement the auditor faces an environment with a particular amount of available evidence, as shown in Figure 2.[2] The diagonal line represents the level of risk the auditor is willing to assume regarding the audit. Once set, the level of risk for the audit is fixed.

[2] My thanks to Dewey Ward for allowing me to use his conceptualization of the audit risk model as shown in Figure 2.
[Figure 2 - Audit Evidence and Risk: available evidence (control evaluation evidence, tests of detail, analytical procedures, and other sundry tests) plotted against the fixed level of audit risk, from high to low]

The higher the level of risk associated with the engagement, the greater the amount of evidence the auditor will collect. With the level of risk fixed, the remaining judgment on the part of the auditor is to decide which methods to use to collect the desired evidence (assurance). The arrows on the figure illustrate how the various methods of collecting evidence are inversely related. Overall, the auditor chooses among the alternatives of testing the internal structure, directly testing transaction details, using analytical procedures, or using other alternative investigative methods.

Once the auditor has developed a plan for gathering evidence, the auditor's attention turns to collecting and then evaluating the evidence. After the evidence is collected, the auditor reviews the plan and the resulting evidence to determine whether the audit plan was adequate, whether the plan was adhered to properly, whether any deviations from the plan were needed, and whether the deviations from the plan were appropriate. Both planning and evaluation deal with evidence; only the perspective of each is different.

Audit Risk Research

SAS 39 and 47 formalize the identification of audit issues requiring special consideration, and the methods of producing evidence to evaluate these issues, as assessments of risk associated with a specific audit engagement. The two standards adopt the concepts of inherent risk (IR), control risk (CR), and detection risk (DR). The product of the three risks is defined as audit risk (AR) and represented as:

AR = IR x CR x DR.

This model is referred to as the audit risk model. Inherent risk deals with the "susceptibility of an account balance or class of transactions to error that could be material, when aggregated with error in other balances or classes, assuming that there were no related internal accounting controls" (Section 312.20a). Inherent risk in the audit risk model is specific to a particular account or type of transaction and requires a great deal of auditor judgment.

Control risk simply represents the risk that "error that could occur in an account balance or class of transactions and that could be material, when aggregated with error in other balances or classes, will not be prevented or detected on a timely basis by the system of internal accounting control" (Section 312.20b). The assessment of control risk is operationalized by the auditor identifying the strengths and weaknesses of the system and the degree to which the auditor will rely on the controls.

Inherent risk and control risk "exist independently of the audit" (Section 312.21) and are, therefore, beyond the control of the auditor. Nonetheless, the auditor makes an assessment as to the risk of each. By doing so, the auditor provides a basis for determining the level of the third component of the audit risk model, the detection risk. Detection risk is strictly a function of the effectiveness of the auditing procedures designed for a particular part of the audit and their implementation. The risk is "that an auditor's procedure will lead him to conclude that error in an account balance or class of transactions that could be material, when aggregated with error in other balances or classes, does not exist when in fact such error does exist" (Section 312.20c).
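Rearranging the model shows the planning implication directly: for a chosen target level of audit risk, the detection risk the auditor can accept is DR = AR / (IR x CR). The short sketch below is only an illustrative calculation under assumed risk assessments; it is not a procedure taken from the standards or from this study.

```python
# Illustrative arithmetic for the audit risk model AR = IR x CR x DR, solved
# for the detection risk the auditor can tolerate.  The assessed values below
# are assumptions chosen for the example, not levels prescribed by SAS 39 or 47.

def allowable_detection_risk(audit_risk, inherent_risk, control_risk):
    """DR = AR / (IR x CR); a lower DR calls for more extensive substantive procedures."""
    return audit_risk / (inherent_risk * control_risk)


if __name__ == "__main__":
    target_ar = 0.05
    # As assessed inherent and control risk rise, the allowable detection risk
    # falls, so the auditor must rely more heavily on the audit procedures.
    for ir, cr in [(0.5, 0.3), (1.0, 0.8)]:
        dr = allowable_detection_risk(target_ar, ir, cr)
        print(f"IR={ir:.1f}  CR={cr:.1f}  ->  allowable DR={dr:.2f}")
```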
Unlike inherent or control risk, the auditor has direct control over the level of detection risk by the manner in which the nature, extent, and timing of audit procedures are planned and applied by the auditor. Note that according to the audit risk model, inherent risk and control risk are inversely related to detection risk. As the level of inherent and control risk increases, the auditor must place greater reliance on the audit procedures to gain the desired level of assurance regarding a particular account or transaction. Combined, these factors provide the basis for developing an audit program. In a very real sense, the audit program represents the auditor’s perception of what constitutes sufficient competent evidence. Therefore, for the researcher, the study of audit evidence is the study of the audit judgment involved in developing and evaluating the audit program. 15 Although the audit risk model appears to be well defined, as Felix and Kinney [1982] have already noted, evidence of its actually being a representation of how an auditor reasons is currently not available. The following is a review of the literature relating to the audit risk model. Weaknesses of the Current Audit Risk Model Research to date on audit risk has concentrated on testing the adequacy of the models proposed in the auditing standards. The various definitions and combinations of the components of risk as defined in SAS 39 and 47 have been the sole domain of these projects (e.g., Cushing and Loebbecke [1983], Holstrum and Kirtland [1982], and Libby, Artman, and Willingham [1985]). None of this research has attempted to test, let alone describe, a theory of audit evidence which was formulated first by observation of auditor evidence judgment. Weaknesses of the Current Research Approach The lack of observation guided research has led Kaplan [1985] to conclude that "auditors’ application of professional judgment is a complex process and goes beyond the summing of individual attributes" as in the audit risk model proposed by audit standards. This same observation by Waller and Jiambalvo [1984] prompted them to call for more behavioral research focusing on what they term "predecisional methods” to develop better descriptions of the process by which auditors assess risk. Williams [1987] was the first to answer the criticisms of Felix and Kinney [1982], Kaplan [1985], and Waller and J iambalvo [1984] by attempting to provide a better description of audit risk assessment. Her research concentrated on tracing the acquisition and use of data during the audit planning process. Although Williams has provided some descriptive information regarding the judgment process, the need for even more descriptive research as noted by Waller and Jiambalvo is still unanswered. 16 In summary, at least part of the difficulty in audit risk research appears to be due in part to useful descriptions of the process by which auditors both assess the level of various risk components, and the translation of the risk into an audit program. Our contention is that we need more reliable descriptions of auditor reasoning to guide the development of useful models. Otherwise, we will continue to simply test any new theory proposed by psychologists and wonder why our results are not conclusive or interesting to practitioners and academics. The research of this dissertation intends to provide a detailed description of the evidence planning and evaluation process by an expert auditor as has been called for by those we have just cited. 
As a result, we should have additional insight as to the means by which sufficient competent evidence is identified by auditors while planning and evaluating an audit. However, it is difficult to make sure we understand just what constitutes audit evidence.

AUDIT EVIDENCE

Mock and Wright [1980] (hereafter MW) provide a summary of the current research regarding audit evidence and its role in auditing.[3] MW view the information environment of collecting and evaluating evidence as that shown in Figure 3. The major components of the MW view consist of:

1. The Accounting Information System (AIS) - the major outputs of this system are considered to be the financial statements and other financial data. The financial information from the AIS is a collection of assertions which the auditor must test to determine the fairness of the financial statements.

2. Audit Information System (ADIS) - the primary goal of the ADIS is to collect evidence using various procedures designed to test for completeness, sufficiency, timeliness, etc. This evidence is then subjected to the judgment of the auditor.

3. Evaluative Factors - to assist in proper judgment, a number of factors must be considered when evaluating evidence to support the various assertions of the AIS. These factors consist of those proposed by the profession in terms of professional standards and audit objectives (see AICPA, Section 326) as well as firm-specific guidelines.

[3] Much of the following logic is due to the efforts of Mock and Wright [1980].

[Figure 3 - The Role of Evidence in the Audit Process: the accounting information system (data base and financial statement assertions), the audit information system (audit procedures and controls), audit evidence, and audit judgment, together with the evaluative factors - professional standards, audit objectives, and audit firm criteria - that bear on designing and executing the audit]

Although the model gives an idea of the role of evidence in evaluating financial assertions, it does not provide a theory for understanding the actual evidence planning and evaluation processes. The foundation for any evaluation framework will likely be contained within the professional standards (AICPA, Section 326) which set forth the normative guidelines of evidence evaluation. Figure 3 shows the basic evaluative factors which are summarized in the third standard of field work:

Sufficient competent evidential matter is to be obtained through inspection, observation, inquiries, and confirmations to afford a reasonable basis for an opinion regarding the financial statements under examination (Section 326.01).

To guide the professional's judgment, the standards focus on two criteria used in assessing the quality of audit evidence: competency and sufficiency.
MW extended the idea of competence to include the characteristic of validity (see Figure 4). Even though standards are helpful in establishing a frame of reference, MW point out various shortcomings which include:

1. the concepts proposed are vague and not operational,
2. measurement criteria are not offered (i.e., what is the means for measuring evidence quality?), and
3. a scientific and systematic approach is not presented (p. 64).

Although the MW criticisms may be outside the intent of the professional standards, they nonetheless identify what researchers can and should provide audit practitioners, regulators, and academicians.

[Figure 4 - Audit Evidence Evaluative Factors Contained in Section 326: competence (a matter of judgment, affected by pertinence, timeliness, existence of corroborating evidence, relevance, and validity), validity (related to reliability, varying with source independence, quality control system, and directness of knowledge), sufficiency (a matter of judgment, considering the nature of the item, risk, susceptibility of the item to misstatement, and competence), and other considerations such as the relationship between cost and usefulness, relative risk, and certain statistical criteria, leading to the audit judgments "Assertions warranted?" and "Evidence cost beneficial?" (source: Mock and Wright [1980, p. 72])]

One of the earliest surveys of evidence planning and evaluation is that of Mautz and Sharaf [1964]. Mautz and Sharaf presented a philosophy to guide the study of auditing. Many of the formal concepts and terminology found in auditing practice and research can be traced to this seminal work. Albeit insightful in developing a normative description of evidence, a philosophy only provides guidance similar to the professional standards and suffers from the same weaknesses noted by MW.

Toba [1975] was the first to attempt a more specific, measurable view of the nature of audit evidence and its evaluation through the use of predicate logic and probabilities. While Toba offered some insightful additions to the general theory of evidence, the logic based representation still left the concept of evidence inoperable. After recognizing the limitations of a purely logical or probabilistic approach to representing evidence evaluation, Toba concluded that auditing is more a heuristic convincing process of persuasion than one of investigative learning which would lend itself to a purely logical or probabilistic type of representation. Kissinger [1977] attempted to correct some of the deficiencies of Toba's propositions but added little to the development of an operative, testable concept of evidence. Sneed [1978] and Schandl [1978] furnished additional philosophy regarding evidence but, again, failed to provide the needed model which can be subjected to scientific validation. Hence, MW emphasize the need for a scientific approach to add "rigor, precision, and greater reliability to" evidence evaluation research.

A Measurement Based Approach to The Evaluation of Evidence

Having identified the weaknesses of previous research, MW proposed a "Measurement Based Approach to the Evaluation of Audit Evidence." In essence, MW operationalize the concepts explained in Section 326 of the auditing standards (see Figure 5). Competence is evaluated by asking "Is the evidence useful?"; validity by asking "Are the assertions correctly drawn?"; and sufficiency by asking "Are the attributes correctly measured?"
Competence is evaluated by asking "Is the evidence useful?"; validity by asking "Are the 21 assertions correctly drawn?"; and sufficiency by asking ”Are the attributes correctly measured?” This approach involves the following steps: 1. Identify the financial assertions to be evaluated. 2. Identify the financial statement attributes which need to be investigated. 3 Identify alternative audit tests which may provide evidence on the appropriate attributes. Analyze each alternative in terms of the validity and reliability of the evidence and the meaningfulness of possible audit assertions. 4. Analyze the usefulness of each alternative in terms of relevancy, cost, audit risk, and behavioral constraints (Mock and Wright [1980], p. 72-73). Commenting on this approach, Ward [1980] agreed with MW as to how the scientific method can be used to improve the quality of evidence evaluation. However, Ward considered the MW framework too simplistic and narrow. Like Waller and Jiambalvo [1984], Ward called for the development of a richer model of evidence by focusing on descriptive research in order to identify the candidate procedures of the audit evidence evaluation processes. Once described, the processes can be refined to enhance the effectiveness of the audit. Again, other than Williams [1987], the call for descriptive audit judgment research has gone unheeded. Documenting a method of evidence planning and evaluation seems quite elusive for at least two reasons: 1. Lack of concentration on an actual audit judgment - as Ward [1980] noted, a focus on the candidate procedures (or fundamental decision steps) of the auditor may well be the key to revealing the general evaluative processes of the auditor. Once the decision processes of a number of tasks are understood we might be able to begin to generalize the theories to account for many different evidence evaluation settings across industry, client, and engagement types. 2. Lack of investigative tools - tools are needed to the specific processes used by the auditor to evaluate evidence within a rich information environment. Although theories of human cognition have existed for some time, the major stumbling block has been the inability of the researcher to explicitly represent such theories for testing. These two challenges identify a great deal of the difficulty of many cognitive research efforts using the traditional stimulus/response research paradigm. As mentioned Financial Statement Assertion I Audit Objective I .— Audit Procedures I + Audit Evidence 22 T— PURPOSNE VIEW: Useful audit Is the evidence assertions, tests, useful? and evidence Behavioral constraints Relevance Cost and risk FACTUAL VIEW: Audit , §—— Meaninglulness assertions Are the assertions correctly drawn? (processing) Audit Tests Are the attributes correctly measured? Reliability Scale type Valid Representation I AUDIT .iuoceme'ms Assertions Warranted? > Evidence Cost Justified? (Source: Mock and Wright [1980. p.72) THE MEASUREMENT BASED APPROACH FIGURE 5 23 earlier, advances in computer technology and psychological theory have provided investigative tools which target the representation of cognitive processes. Currently, these tools are best suited to dealing with a Specific decision or knowledge domain which must be identified before investigation can proceed. . The next section discusses two specific audit tasks which deal with the planning and evaluation of evidence. 
Following an explanation of the audit tasks to be investigated, the approach used in cognitive modeling will be discussed.

THE AUDIT PROCESS

Earlier we noted that a working definition of audit evidence is the audit program. As the auditor assesses the risks associated with an engagement and its parts, a plan is developed. If followed, the plan results in "sufficient competent evidence" by which the auditor analyzes the fairness of the client's assertions. Therefore, the study of the planning and evaluation of audit evidence is actually the study of the steps performed during the audit engagement which relate to assessing the types of evidence needed to evaluate management's assertions.

Felix and Kinney [1982] describe the audit process as the steps shown in Figure 6. Basically, these steps show the detail of performing an audit, which can be summarized as: (1) planning and design of the audit approach, (2) tests of transactions, (3) direct tests of balances, and (4) completing the audit. Some steps of the audit can be performed more than once depending on the results of each audit step. Steps two and three simply identify the two basic means of actually gathering evidence: compliance testing and direct tests of balances. Only steps one and four actually involve auditor judgment in assessing the sufficiency of audit evidence.

[Figure 6 - The Audit Opinion Formulation Process: preliminary orientation, evaluation of pertinent controls, tactical planning of audit activities, compliance tests of accounting controls, evaluation of internal accounting controls, substantive tests of transactions and balances, aggregation of results, and forming of opinion and report, with reevaluation where an assertion is not supported (adapted from Felix and Kinney [1982], p. 246)]

Audit Planning

The audit risk model proposes that planning the audit involves the assessment of the inherent and control risk associated with the engagement as a whole and for each of the individual assertions presented by client management on the financial statements. This planning process has been characterized by Srinidhi and Vasarhelyi [1986] to consist of three stages:

1. Identification and evaluation: Identify and evaluate the factors considered relevant to the planning of audit procedures.
2. Components integration: Decompose factors considered relevant to the planning of the audit into various specific components which serve as the means for assessing the state of a particular factor considered relevant to the audit.
3. Factors integration: Integrate the set of relevant factors to provide some basis for determining the extent of audit tests to perform.

Although Srinidhi and Vasarhelyi's outline provides some guidance for studying the audit planning process, we lack even a minimal amount of detail regarding the judgment processes of each of the phases. A number of researchers have studied the audit planning process (e.g., Gaumnitz, Nunamaker, Surdick, and Thomas [1982]; Kaplan [1985]; and Srinidhi and Vasarhelyi [1986]). Nonetheless, we are still left without even a simple description of an auditor's judgments during the audit planning process. That which Felix and Kinney [1982] pointed out some time ago still holds true:

"State-descriptive research on the auditor's initial planning process is nonexistent.
Such research might collect evidence on the physical steps and methods which auditors use in planning audits, or it might describe, classify, or measure the mental (or internalized) processes which an auditor uses to plan the audit and to develop "priors," or subjective beliefs, as to the state of the auditee's affairs. Evidence on the physical steps which an auditor takes in planning may be easier to design and execute than is the description, classification or measurement of mental processes. Mail surveys, field studies (interviews), and the examination of working-paper documentation are all methods that the researcher might use to obtain such evidence." (p. 252).

Notice that although Felix and Kinney recognize the need for the research, they doubt the ability of the researcher to be able to represent the mental processes of the auditor due to the lack of an adequate representation method. We will next discuss the audit tasks dealing with the evaluation of audit evidence, after which we will discuss a method by which mental processes can be represented.

Evaluating Audit Evidence

At the close of an audit, the auditor reviews contingent liabilities, contingencies, commitments, etc., as well as events subsequent to year end which might affect the valuation or disclosure of the statements being audited. Once these tasks are completed, the evidence resulting from executing the audit program is evaluated to determine its sufficiency and competence. Typically, this phase of the audit involves the following activities (see Arens and Loebbecke [1985], p. 734):

1. evaluate the sufficiency of evidence collected,
2. review financial statement disclosures,
3. obtain a client representation letter,
4. evaluate whether the evidence supports the auditor's opinion,
5. read other information in the annual reports,
6. review working papers, and
7. obtain an independent review of the audit work performed by the engagement team.

Each activity routinely involves the manager or partner responsible for the audit. Most of these evaluation activities require competent audit judgment, extensive familiarity with the audit process, and the ability to summarize and coordinate large amounts of evidence supporting a particular assertion. Overall, steps one through six can be summarized as (1) determining whether sufficient competent evidence has been gathered, (2) insuring adherence to firm and professional standards, and (3) adhering to presentation requirements.

The Audit Review

Of the tasks performed to evaluate an audit, only the independent review involves someone who has not been a part of the audit engagement team. Arens and Loebbecke (p. 738) state:

At the completion of larger audits, it is common to have the financial statements and the entire set of working papers reviewed by a completely independent reviewer who has had no experience on the engagement. This reviewer frequently takes an adversary position to make sure the conduct of the audit was adequate. The audit team must be able to justify the evidence they have accumulated and the conclusions they reached on the basis of the unique circumstances of the engagement.
The thrust of this review is to evaluate each management assertion on the financial statements to determine (a) whether sufficient competent evidence has been gathered, and (b) whether the financial statements are properly presented.[4] The task facing the reviewer is to determine whether a sufficient amount of evidence has been gathered, per the audit plan, to support the conclusions of the audit team. Invariably, some additional work is performed to satisfy the reviewer's concerns. However, the amount of additional work requested by the reviewer can vary greatly.

As with the planning process, Felix and Kinney [1982] note, "We know of no comprehensive survey of audit evidence aggregation for even single accounts...State descriptions of this complete process as a series of auditor choices or decisions do not exist" (p. 266). Therefore, in answer to the number of researchers who have noted the lack of descriptive audit research, the intent of this research is to provide descriptive models of both the planning and review tasks using the computational modeling approach in an effort to better understand the nature of audit evidence. The next section introduces the concept of computational modeling and the tools available for developing computational models.

[4] Research On The Effectiveness of The Review Process. The need and value of the review function in auditing has been noted in the professional standards. Specifically, auditing standards emphasize that the "exercise of due care requires a critical review at every level of supervision of the work done and the judgment exercised by those assisting in the examination" (AICPA, Section 230.02). The importance of the review function has been underscored in practice by recent SEC action censuring a firm for non-compliance with the review function requirements in the standards (see SEC vs. Seidman and Seidman). Addressing the question of whether the review function is useful at various stages of the audit, Trotman and Yetton [1985] sought to establish empirically whether the "review process reduced judgment variance, and, if so, whether it was a relatively efficient way of doing so" (p. 257). They found the review process was effective in reducing judgment variance, although similar results were obtained when two competent audit seniors were making judgments interactively. They conclude, "a second opinion, regardless of its form, outperforms individual judgments" (p. 265). Although Trotman and Yetton have confirmed the value of the review function, we are still left without an understanding of the actual judgment process involved. This is true for routine reviews during audit field work and the review of an entire audit by a partner not involved in the audit engagement.

CHAPTER III
THE COGNITIVE MODELING APPROACH

WHAT IS COGNITIVE MODELING?

Howard [1983] summarized three characteristics of the cognitive modeling approach to understanding human intelligence:

1. focus on knowing rather than responding,
2. emphasize mental structure or organization,
3. view the individual as being active, constructive, and planful, rather than as being the passive recipient of environmental stimulation - a rich environment is integrated rather than ignored.
These characteristics have led Gardner [1985] to View the primary theme of the cognitive approach as: the clear demonstration of the validity of positing a level of mental representation: a set of constructs that can be invoked for the explanation of cognitive phenomena, ranging from visual perception to story comprehension. Where forty years ago, at the height of the behaviorist era, few scientists dared to speak of schemas, images, rules, transformations, and other mental structures and operations, these representational assumptions and concepts are now taken for granted and permeate the cognitive science (p. 383). Additionally, Gardner proposes the "triumph of cognitivism" as placing: talk of representation on essentially equal footing with [other] entrenched modes of discourse - with the neuronal level, on the one hand, and with the sociocultural level, on the other (p. 383). The thrust of cognitive modeling is to provide a representation of the thought process. DEVELOPMENT OF THE COGNITIVE APPROACH Quillian [1968] was the first to attempt a semantic representation of memory which he later tested through computer simulation. Quillian concentrated on the memory which would be needed to understand language. The computer simulation of the model was named Teachable Language Comprehender (TLC). Although the model was found to be 29 30 very robust in representing a great deal of knowledge, subsequent research showed that Quillian’s model was unable to account for all instances of language use. Since the initial attempts of Quillian, researchers interested in developing a semantic representation of human memory have divided themselves into two camps. The groups are divided over issues surrounding research objectives and approaches. Miller [1981] has labeled the two groups demonstration theorists and development theorists. Theory demonstration is characterized by a focus on highly constrained issues or tasks involving cognition in an effort to identify a unique theory which will account for the observed phenomena. After proposing a wide variety of theories which might account for a specific instance of intelligent behavior, attention turns to experimental testing to determine the theory which best explains the behavior. The expectation is that once each instance of intelligent behavior has been investigated, the various theories will be pieced together to provide a general theory of human cognition. Theory demonstration encompasses the majority of work performed by experimental psychologists. In contrast, the theory development camp concentrates on developing models which attempt to explain complex instances of cognition. Rather than restrict the study environment, theory development researchers emphasize the need for theory sufficiency when simulating the cognitive processes of interest. The concept of theory sufficiency focuses on the need to provide detailed descriptions of a theory based on observation rather than the loosely structured, arm-chair theories so prevalent in behavioral research. Traditionally, theory development has concentrated on language comprehension, expert decision making, etc. Once a number of competing sufficient theories have been identified, then the researcher begins subjecting the theories to experimental validation. Hence, the concentration on developing a unique theory of human cognition is not essential. AS Sowa [1984, p. 
23] notes, theory development researchers view "intelligence as a kludge: people 31 have so many ad hoc approaches to so many different activities that no universal principles can be found." Both the theory demonstration and theory development approaches have common characteristics in that they stress the need for, and rigorous testing of, theories. The difference between the two is in the methodologies used for testing and in the standard for defining a theory. Both have ample support in terms of prominent proponents and credible logical arguments. Nonetheless, until one approach is proven more effective than the other, selection of which orientation to hold to is a matter of researcher preference. This project will follow the theory development orientation by attempting to develop a cognitive model which accounts for a particular instance of human cognition: the planning and review of audit evidence by an expert auditor. ANALYSIS OF DIFFERENCES Supporting the study of a single expert’s thought processes and knowledge is the need for a taxonomy of models which provide a sufficient description of human intelligence. Benefits of this approach are twofold. First, pedagogical efforts could be better guided by understanding how an expert’s knowledge is organized and what types of mental processing occurs. This is of interest to anyone involved in educating audit professionals. Until a reliable method for resolving judgmental conflict between two recognized experts is available, we are forced to work with Single experts while building these complex models of human reasoning and knowledge. Second, is the idea expressed by Simon [1980] Since intelligent systems are programmable [i.e., they adapt to the demands of the task environment], we must expect to find different systems (even of the same Species) using quite different strategies to perform the same task. I am not aware that any theorems have been proved about the uniqueness of good, or even, best, strategies. Thus, we must expect to find strategy differences not only between systems at different Skill levels, but even between experts. Hence, research on the performance of adaptive systems must take on a taxonomic, even a sociological aspect. We have a great deal to learn 32 about the variety of strategies, and we should neither disdain nor shrink the painstaking, sometimes pedestrian, tasks of describing that variety. That substrate is as necessary to us as the taxonomic substrate has been to modern biology (p. 42). This same argument for studying individual decision making is summarized by Dukes [1965]: N=1 studies cannot be dismissed as inconsequential. A brief scanning of general and historical accounts of psychology will dispel any doubts about their importance, revealing, as it does, many instances of pivotal research in which the observations were confined to the behavior of only one person or animal. Foremost among N = 1 studies is Ebbinghaus’ (1885) investigation of memory. Called by some authorities "a landmark in the history of psychology a model which will repay careful study (McGeoch and Irion [1952, p.1])...The researcher who fails to see that important generalizations from research on a Single case can ever be acceptable is on a par with the experimentalist who fails to appreciate the fact that some problems can never be solved without resort to numbers (p. 74) The benefit of having better descriptions of decision making is its effect on model development and testing. 
In the physical sciences, tremendous time and effort are dedicated to developing detailed descriptions of physical phenomena. These observations spawn a number of potential explanations for the phenomena which result in well developed hypotheses and tests. In hopes of encouraging a similar progression of knowledge in understanding audit judgment, this study attempts to develop a detailed description of an audit judgment being performed by an audit expert.

DIFFICULTIES IN COGNITIVE MODELING

Although the focus and benefits of the cognitive approach may seem clear and reasonable, cognitive modeling is not without its own difficulties. The following is a discussion of three of the more perplexing issues facing the cognitive approach:

1. Tedious and Time Consuming - noting that the effort to document, represent, and test individual models of mental processing will involve tremendous amounts of time and patience seems an understatement. However, an approach which requires a comparatively greater amount of human resources should only limit the number of those willing to be involved and not the value of the effort itself.

2. Self Introspection - because mental phenomena are not directly observable (we cannot see a thought as we would a cell under a microscope), there is a dependence to some degree on an individual's ability to describe, at least partially, one's conscious thought. Much of the debate regarding the pros and cons of self introspection is summarized by Nisbett and Wilson [1977] (who question its usefulness) and Ericsson and Simon [1980] (who defend its use in particular situations). Basically, the use of introspection is criticized because (1) subjects may not be able to articulate their decision processes, and (2) the act of verbalizing may change subjects' decision processes. However, Ericsson and Simon point out that such criticisms can be minimized or even overcome when verbal protocols are properly used. Ericsson and Simon restrict the use of introspection to when introspection occurs concurrently (while a task is being done), and when the desired information is that which is normally attended to consciously. Using introspection to unlock compiled knowledge (automatic responses) is doubtful.

3. Computational Paradox - possibly the most significant issue, yet least addressable at this time, is the idea that the current tool for representing and testing cognitive theories (the von Neumann digital computer) might well be far afield from the neuronal level processing of the human. As Gardner [1985] points out:

We must face the alternative that humans may be an amalgam of several kinds of computers, or computer models, or may deviate from any kind of computer yet described (p. 387).

Neuronal level processing, however, is an issue dealing with the physical, not conceptual, representation of knowledge. As well, Miller [1981] notes:

If a computer were used to model the weather, no one would fear that a cyclone might destroy the computer center. [However, often a] computer that models an intelligent brain is expected to be a brain, to display actual intelligence (p. 220).

The creation of an intelligent entity is not the intent of this study. Rather, it is the simulation of one performing an intelligent task. Nonetheless, as Gardner [1985] continues, "Computers will be pivotal in helping us determine how computerlike we are but the ultimate verdict may be 'Not very much'" (p. 387).
The clean, straightforward processing of today’s computer may simply be a tool for investigation and hypotheses testing of human mental processes rather than the environment for a more natural representation of human thought. EXPERT SYSTEMS AS COGNITIVE THEORIES The close link between the science of cognition and the computer is the result of the need for a medium for representing and testing cognitive models. As has been noted by Bailey et al. [1987], "Identification and representation of the domain knowledge, both the knowledge states and related procedural knowledge, lie at the heart of human information processing and expert systems work" (p. 2). Hence: The primary focus of an expert systems research project is the creation of a theory of a single expert’s decision making processes (Newell et al. [1958], p.151). Auditors are appropriate subjects for expert judgment research. They exhibit many of the characteristics normally associated with domain expertise...(Bailey et al. [1987] p. 5). 35 Level of The Expert Theory - Deep Knowledge Currently, ES research is pushing beyond the identification of a "formalizable catalog of concepts, relations, facts, and principles," to the identification of what Sowa [1984] terms the deep structure of understanding: ”the type labels, canonical graphs, schemata, and laws of the world that define some body of knowledge or domain of discourse” (p. 294). The focus on deep knowledge extends the concept of the typical computer program (which concentrates on remembering) to that of developing a knowledge-based system (which concentrates on knowing). Sowa [1984] further explains the idea of a knowledge-based system in that: 1. Knowledge is more active than rote memory. 2. Knowledge does not depend on a fixed model, but can be applied in new ways to novel situations. 3. A teacher may be necessary to impart knowledge, but the knower should be able to use it without external guidance (p. 277). Focusing on the fundamental structure of knowledge is in essence an effort to bridge the gap between the theory development and theory demonstration approaches mentioned earlier. Emphasis on knowledge structures is the distinguishing factor between the development of commercial computer programs and the development of cognitive models for research in human information processing. Commercial programs concentrate on task performance efficiency while cognitive models concentrate on task performance 22M!- The formation of commercial systems and cognitive theories both use the ES methodology for development. Proper Use of The Expert System Methodology Although Bailey et al. [1987] have concluded that "auditors are appropriate subjects for expert judgments," care is still needed to determine whether a specific audit task can be studied using the ES approach. Viewing audit judgment as an area for the development of knowledge based systems can easily be supported using the criteria of Waterman [1986] and Bobrow et al. [1986]. AS Bobrow et al. explain, "One can build expert systems for 36 appropriate problems - ones that are valued, bounded, routine, and knowledge intensive" (p. 893). A careful review of the criteria of Waterman [1986] and Bobrow et al. [1986] gives ample support for the study of evidence evaluation. As with any research tool, expert systems are not appropriate for many research questions. 
However, when properly utilized, the ES approach has been used to provide previously unavailable descriptions of human cognition in many fields, including that of auditing judgment.

Past Expert System Research in Accounting

Previous research using the ES methodology can be divided among three categories: task automation, cognitive exploration, and view modeling (Table 2 lists previous uses of the ES approach in accounting research).

TABLE 2
PREVIOUS ES WORK IN ACCOUNTING

DOMAIN OF INTEREST                  REFERENCE

TASK AUTOMATION
Estate planning                     Michaelson [1982]
Auditing EDP systems                Hansen and Messier [1984]
Allowance for bad debts             Dungan [1983]
Choice of audit opinions            Mock and Vertinsky [1984]
Collectability of bank loans        Willingham and Wright [1985]
Opinion formulation                 Dillard and Mutchler [1986]

COGNITIVE MODELING
Materiality judgments               Steinbart [1984], [1987]
Internal control evaluations        Gal [1985]
Internal control evaluations        Meservy [1986]
Going concern judgment              Selfridge and Biggs [1988]

DATA MODELING VIEWS
Views of internal control systems   Gal [1985]

1. Task automation - involves the development of computer software for tasks which previously have not been automated. While some argue whether such efforts are the appropriate domain of academics (McCarthy [1984], and Danes [1986]), Bailey et al. [1987] believe that regardless of whether such efforts are appropriate for academicians, academe does not have a comparative advantage in the arena. Bailey et al. point to the success of Coopers & Lybrand in developing ExperTAX(SM) (see Shpilberg and Graham [1986]) as evidence. This view, however, has little support when considering the efforts of researchers at Stanford, Carnegie-Mellon, and many other schools in developing commercial ESs. Many projects have allowed cognitive theorists to test their ideas while providing a product to an interested supporter. The ability of science and industry to work hand in hand should not be considered a detriment to either. Cooperation between academe and practice has been encouraged in auditing research itself (see Felix and Kinney [1982], p. 268).

2. Cognitive exploration - proposes the ES methodology to be another tool for studying cognition (which has been distinguished as a "knowledge-base" when concentrating on understanding the deep structure of knowledge). Steinbart [1984] investigated the cognitive processes involved in the determination of audit planning materiality using heuristic representations. Selfridge and Biggs [1988] replace heuristic based representations by identifying the fundamental reasoning processes and knowledge of the auditor using more advanced knowledge representation methods. Meanwhile, Meservy [1985] has attempted to provide an understanding of the cognitive processes involved in the evaluation of internal controls. Each of these efforts has documented the processes and domains of knowledge needed to perform expert tasks in auditing and may well prove to be critical in understanding the audit judgment process.

3. Data modeling views - A natural outgrowth of understanding an individual's cognitive processes is identifying the data needed to properly model an information system to support the decision maker. Gal [1985] has proposed that once the decision task is documented, one can use the task description to define the type of information system which would best represent the data needed to assist in the decision process.
The contributions of the expert system approach to the study of human information processing are limited at this point. As Steinbart [1987] recently observed: The development of a taxonomy of judgment models should identify the commonalities and differences between auditors in their approach to making judgments. [Each] Study represents but one step toward developing such a taxonomy (p. 110). 39 Steinbart concluded that the expected benefit of this taxonomy is identifying how to formulate general guidelines for individuals involved in audit judgment tasks. KNOWLEDGE ACQUISITION After identifying a cognitive task and a cooperative expert, the researcher is faced with the process of eliciting the knowledge of the expert. Experts are known to bring far more to a problem solving session than basic problem solving understanding and generic cognitive skills. Feigenbaum [1977] notes: the problem solving power exhibited in an intelligent agent’s performance is primarily a consequence of the specialist’s knowledge employed by the agent, and only very secondarily related to the generality and power of the inference method employed. Our agents must be knowledge rich even if they are methods poor (p. 1016). Therefore, models of cognition must have a large store of well-organized domain specific facts and good problem solving heuristics. The difficulty facing the researcher is that "as individuals acquire the heuristics and facts that allow them to perform tasks at an expert level, they tend to lose the awareness of what they know" (Bailey et al. [1987] p. 8) (e.g., they compile the knowledge into paired associations rather than maintaining the reasoning details supporting the decision process). This is referred to as "the paradox of expertise" by Johnson [1983]. Three approaches are available for dealing with the challenge of eliciting expert knowledge: description, observation, and intuition. The descriptive method focuses on what Johnson [1983, p.82] defines as reconstructive methods of reasoning. The researcher attempts to reconstruct the knowledge of the expert through interviews, lectures, and written materials. The descriptive method helps to identify much of the domain knowledge and reasoning strategies of the expert which can then be refined through use of the observation method. The weakness of the descriptive method is its inability in unlocking the deep reasoning which lies at the center of expertise. 40 The observation method is also known as introspection or verbalizing knowledge. Rather than concentrating on first developing a complex system of verbal protocols, researchers today involve the expert in a problem solving task and capture the conscious thought process as it occurs. Description and observation are used in the traditional knowledge engineering approach, the intended approach in this project. The intuitive approach requires the researcher to become somewhat knowledgeable in the task. The researcher then attempts to assist in the development of the cognitive model through self introspection. Intuition has been proposed as an alternative method for knowledge acquisition by Johnson [1983, p.92]. However, due to the expected complexity Of the audit planning and evaluation process, the intuitive role is not expected to be useful in this study. KNOWLEDGE REPRESENTATION Three methods exist for representing knowledge: logic, rules, and frames (which include scripts). The following provides a description and critique of each representation method. 
Knowledge Representation Using Logic The first attempt by Newell and Simon [1961] to simulate human intelligence using the computer was based on predicate logic. When using logic structures, the knowledge base is a collection of expressions which provide a representation of something to be described (be it a concept, place, person, etc.). The idea is to capture the facts related to a particular domain of knowledge as well structured formulae. Logic structures provide the foundation for the representation language called PROLOG (an acronym for PROgramming in LOGic). Using an example of family relationships, Genesereth and Ginsberg [1985] demonstrate the use of logic representation as follows: 41 Given Representation Art is the father of Bob F(Art,Bob) Art is Bob’s parent if Art P(Art, Bob) :F(Art,Bob) is the father of Bob Art is Cap’s grandparent if G(Art,Cap):P(Art,Bob),P(Bob,Cap) Art is Bob’s parent and Bob is Cap’s parent Although a powerful tool for defining clean, simple, and provable constructs, logic structures have serious drawbacks when dealing with knowledge acquisition, beliefs, and defaults of human thought. Because human knowledge does not appear to be so neat and well packaged as logic structures, the use of a purely logical approach is considered limited.5 Knowledge Representation Using Rules Newell and Simon [1972] later proposed the use of a production system composed Of If-Then rules to simulate human knowledge. The rule-based approach continues to be one of the dominant forms of knowledge representation today. The basic idea of the method is to capture knowledge in the form of If-Then rules. Steinbart [1987] shows the use of rules in capturing auditor knowledge as follows: IF 1. It is likely that the client is a private entity, and 2A: The client is filing with a regulatory agency in preparation for the sale of its securities in a public market, or B: The client does intend to go public within the next two or three years. THEN The client is a public entity. The IF part of the rule is termed the "premise" While the THEN part is labeled the "conclusion." When the premise is determined to be correct, the rule is executed (commonly known as "firing"). As a result, the related conclusion becomes a fact which is known to the system. 5. Mylopolous and Levesque [1982 pp. 4-6] present a concise critique of the logic based approach of knowledge representation. 42 Capturing and linking a number of rules to describe a particular decision provides the foundation for developing inference paths or chains of reasoning. Two approaches are available for representing the reasoning process: forward and backward chaining. Given a set of data, the forward chaining approach begins to infer facts about the world represented by the set of data. When the goal is to infer one specific fact about a situation, the analysis of all known facts about a situation (forward chaining) is likely to waste valuable time and energy. For situations guided by a question (what is the likelihood of material error in a particular account) the backward chaining approach is much better suited for representing the reasoning process. Backward chaining begins with an assertion to be proven using only those rules which are needed to reach a conclusion. Backward chaining is also known as "goal-directed" reasoning and has been very useful in studying audit judgment as most audit decision tasks involve determining whether or not a certain assertion is true or false. 
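To make the contrast between forward and backward chaining concrete, the following is a minimal sketch, written here in Python rather than in any of the representation languages discussed in this chapter. The first rule is a simplified paraphrase of the Steinbart [1987] rule shown above; the second rule and all fact names are hypothetical illustrations, not rules taken from any actual audit system.

    # A minimal, hypothetical sketch of If-Then rules with forward and
    # backward chaining; the fact and rule names are illustrative only.

    RULES = [
        # (set of premises, conclusion)
        ({"likely-private-entity", "filing-or-intends-to-go-public"}, "public-entity"),
        ({"public-entity"}, "public-disclosure-rules-apply"),   # hypothetical follow-on rule
    ]

    def forward_chain(facts):
        """Data-driven reasoning: fire every rule whose premises hold until no new facts appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)      # the rule "fires"
                    changed = True
        return facts

    def backward_chain(goal, facts):
        """Goal-directed reasoning: prove one assertion using only rules that conclude it."""
        if goal in facts:
            return True
        return any(all(backward_chain(premise, facts) for premise in premises)
                   for premises, conclusion in RULES if conclusion == goal)

    known = {"likely-private-entity", "filing-or-intends-to-go-public"}
    print(forward_chain(known))                                     # derives both conclusions
    print(backward_chain("public-disclosure-rules-apply", known))   # True

The forward chainer derives every conclusion the known facts support, while the backward chainer touches only those rules needed to answer the single question posed, which is why goal-directed reasoning suits audit questions of the form "is this assertion likely to be materially in error?"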
Knowledge Representation Using Frames

The idea of a frame builds upon the concepts of aggregation, generalization, and event sequencing which are used to define the relationships between real world objects and concepts. The following is a brief review of each of these concepts.

Aggregation - the idea that every person, place, or thing is an aggregation of various parts is easily demonstrated. The difficulty lies in trying to represent the components of a person, place, or thing in such a way as to be epistemologically correct. Figure 7 illustrates the concept of aggregation in constructing two physical items. Notice, as the various disaggregated parts are joined together, they form the desired objects. Although an example of a physical object description, the same general idea of aggregation can be seen in the conceptualization of accounting data as well. Figure 8 shows how the data items could be aggregated to describe economic resources and events.6

FIGURE 7 - AGGREGATION IN THE PHYSICAL WORLD (Source: Howe [1983, p. 132])

FIGURE 8 - AGGREGATION OF ACCOUNTING DATA (Source: McCarthy [1987])

6. A complete explanation of the use of aggregation and generalization in conceptualizing an accounting database is found in McCarthy [1987].

Generalization - generalization differs from aggregation in that the idea of specialization is represented. In this case, each item is viewed as an example (token-type) or subclass (type-token) of another. To illustrate, the following list shows how the concept of generalization assists in identifying types of an animal.

Animal
is-a
Fish
is-a
Shark
is-a
Great White Shark
is-a
Jaws

Reading the list from bottom to top illustrates increased generalization from one item to the next. Specialization is illustrated as the list is read from top to bottom. Within business, the concept of generalization can be used to construct a list of objects such as:

An asset
is-a
Current Asset
is-a
Marketable Securities
is-a
100,000 Shares of IBM Common Stock

Together the concepts of aggregation and generalization enable us to classify most objects of interest as a component or a category of something else.

Sequencing - The idea of sequencing (or scripts) was first proposed by Schank and Abelson [1977] who were interested in showing how objects and concepts are related by the passage of time.7 Simply stated, the relationship between two items is labeled "is-followed-by." For example,

Item A        Relationship        Item B
birth         is-followed-by      death
practice      is-followed-by      game
sale          is-followed-by      cash receipt

7. The use of the sequencing relationship in conceptualizing accounting databases has recently been demonstrated by Gal and McCarthy [1986].

In summary, the concepts of aggregation, generalization, and sequencing provide the researcher with a means whereby many real-world relationships used to associate an object or concept to another can be represented.

A Semantic View of a Frame

Using the concepts of aggregation, generalization, and sequencing, a system of frames can be developed to gather information about items of interest. This is the same concept Gibbins [1984] discussed in terms of auditors using templates (or frames) of reference. Figure 9 illustrates how aggregation, generalization, and sequencing can be used to represent various real world events using a semantic network.
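As a rough illustration of how these three relationships might be carried into a frame representation, the following sketch is offered in Python. The Frame class, its slot names, and the particular frames are hypothetical; they only loosely follow the asset hierarchy shown above and the event labels in Figures 8 and 9, and they are not the structures used in this study's computational model.

    # A minimal, hypothetical frame sketch: is-a (generalization, with slot
    # inheritance), parts (aggregation), and is-followed-by (sequencing).

    class Frame:
        def __init__(self, name, isa=None, slots=None, parts=None, followed_by=None):
            self.name = name
            self.isa = isa                    # generalization link
            self.slots = slots or {}          # attribute values local to this frame
            self.parts = parts or []          # aggregation links
            self.followed_by = followed_by    # sequencing link

        def get(self, slot):
            """Look up a slot value, inheriting along the is-a chain when absent locally."""
            if slot in self.slots:
                return self.slots[slot]
            return self.isa.get(slot) if self.isa else None

    # Generalization: Asset <- Current Asset <- Marketable Securities <- a specific holding
    asset      = Frame("Asset", slots={"reported-on": "balance sheet"})
    current    = Frame("Current Asset", isa=asset)
    securities = Frame("Marketable Securities", isa=current)
    ibm_stock  = Frame("100,000 shares of IBM common stock", isa=securities,
                       slots={"quantity": 100000})

    # Aggregation and sequencing: a sale event followed by a cash receipt
    receipt = Frame("Cash Receipt", parts=["remittance #", "amount", "bank"])
    sale    = Frame("Sale", parts=["customer #", "stock #", "quantity", "invoice #"],
                    followed_by=receipt)

    print(ibm_stock.get("reported-on"))   # "balance sheet", inherited from Asset
    print(sale.followed_by.name)          # "Cash Receipt"

Slot inheritance down the is-a chain is what allows a template to supply default expectations for a specific account, while the part and sequence links tie an economic event to its components and to the events expected to follow it.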
Frames are a commonly used method whereby semantic models, such as in Figure 9, can be implemented in a computational model to test cognitive theory focusing on the causal reasoning of an expert.

FIGURE 9 - AN EVENT HIERARCHY USING FRAMES (Adapted from McCarthy [1987] and Winston [1984])

THE COMPUTATIONAL MODEL

The purpose of the computational model is to provide a direct means of testing a cognitive theory. Two basic tools are available for developing a computational model: programming languages, and expert-system shells. The cognitive task being studied determines the tool most appropriate for developing a computational model of cognition.

Programming Languages

Using a programming language requires the researcher to develop the entire computational model using a language such as LISP (an acronym for LISt Processing language), or PROLOG. Unless the researcher is a very experienced user of one of these two languages, significant amounts of time and effort may be spent programming rather than interacting with the individual performing the expert task.

Expert-System Shells

An alternative is the use of a "shell" or skeletal system which provides logic, rule, or frame based representational tools as well as useful editing and system management tools. ES shells have been used by a number of accounting researchers such as Steinbart [1984], [1987], Gal [1985], and Meservy [1985]. However, all of these projects have been somewhat restricted by using shells which provide only rule-based representation. Many other ES shells are available today which provide a much more complete set of representational methods. Because many researchers have already shown the existence of auditors using knowledge templates (most recently Gibbins [1984], Gal [1985], and Selfridge and Biggs [1988]) as well as a form of goal directed reasoning, we chose a development tool which has both rule and frame representation capabilities. The specific ES tool chosen is called GoldWorks(TM), which is produced by Gold Hill Computers. We chose GoldWorks because it does not have the restrictions commonly found in rule-based ES development environments. GoldWorks supports the use of frames and rules, has pattern matching capabilities, frame inheritance, forward and backward chaining, the use of rule sets, and many other useful knowledge representation alternatives. By having this rich environment, we are not as restricted in terms of alternative methods as is the case in most ES environments.

CHAPTER IV
STUDYING THE AUDIT EVALUATION PROCESS

This chapter describes the development of a computational model of the audit review process. The chapter begins by describing the preparation for studying the audit review process. Next, we describe the model development process and the resulting computational model of the audit review task. We then discuss the weaknesses of studying the audit review process. Lastly, the chapter discusses how studying the audit planning process might provide valuable information toward understanding evidence aggregation.
SELECTING THE STUDY EXPERT AND INDUSTRY

A problem encountered while identifying a task for studying audit evidence evaluation is that the task should be as free as possible from the engagement management process. Our interest is in the cognitive processes related directly to evidence evaluation, rather than decisions regarding budgets, staffing, etc. This study concentrates on understanding an evidence aggregation task commonly done by an audit expert. This avoids creating an artificial, lab setting type of task for studying audit evidence aggregation. As explained earlier, the audit review task is an appropriate evidence aggregation task.

After selecting the audit task, the process of choosing the study expert began. Identifying a study expert began by interviewing two members of the national audit directorate of a large CPA firm. The purpose of the interviews was to assess the possibility of either of the two being the expert for this project. Although both understood the professional and firm policies regarding the review process, it had been some time since either had conducted an audit review. Therefore, both partners felt uneasy about being the expert for this study. Each suggested identifying a partner in a practice office who reviewed audit engagements on a routine basis. The two audit directorate partners helped to identify a partner who performed the review function on a regular basis. The firm recognized this partner for his ability as an independent reviewer. Two interviews were conducted with the partner to explain the intent and process of the computational modeling approach.

Having successfully found an appropriate expert to work with, selecting the industry to study became a function of the study partner's area of expertise. In order to make the project size manageable, we originally intended to study the review of first year retail engagements. Because the expert had no experience with retail engagements, studying first year retail engagement reviews was infeasible. Instead, the expert was a well recognized authority in the telecommunications industry. The expert performed reviews of telecommunication engagements on a regular basis throughout the firm. Therefore, our research focused on observing reviews of five telecommunication clients.

PREPARING TO STUDY THE AUDIT REVIEW TASK

The first session with the expert concentrated on understanding the audit review task in relation to the entire audit. Before explaining the details of the financial statement analysis phase we will give an overview of all of the tasks of the audit review process. These tasks consist of the following:

1. client familiarity,
2. reviewing matters for the engagement partner,
3. review the annual report, and
4. resolve the review exceptions.

Each of these tasks will now be discussed.

Becoming Familiar With the Client

Generally, the independent partner takes some time to become familiar with the client. If the review involves an unfamiliar client, the reviewer can become more familiar by reading the previous annual reports, the SEC filings, the summary of operations by management, and the actual audit planning memo. The primary purpose of becoming familiar with client operations is to enable the auditor to understand the issues facing the client and the effects of these issues on the client's financial statements. Client familiarization results in the auditor beginning to identify issues which are particularly crucial to the client's success.
For example, the reviewer might identify certain transactions and related accounts which are particularly important for the client and make these the focus of the review. Having become familiar with the engagement client, the partner reviews the issues raised during the engagement which required partner attention. Reviewing Matters for the Engagement Partner During the audit field work, the audit team might encounter situations requiring a departure from the audit program (e.g., unforeseen accounting issues) and approval of the engagement partner. The engagement team notes each circumstance on a form entitled Matters for the Attention of the Partner (commonly known as the MAP). For each item on the MAP, the engagement team must note any action in response to the circumstance requiring partner attention. . While reviewing each item on the MAP, the reviewer makes two judgments. First, the partner determines whether the engagement team responded properly to the problem. Second, the reviewer determines whether the engagement team sufficiently documented their work. This process invariably results in a review of a portion of the audit workpapers and possibly a discussion with the engagement partner and manager. Reviewing the Annual Report The primary task facing the partner doing the review is to determine whether the annual report is ready for release. In the professional vernacular, the reviewer is to determine whether the ”statements make sense.” This involves a line by line evaluation of all material in the annual report. The partner reviews each assertion of the annual report 52 (financial Statement line items and footnotes) for two purposes. The reviewer first determines whether the assertion is reasonable and second whether the related disclosure has properly adhered to applicable regulations and standards. Resolving the Review Exceptions During the annual report review, the independent reviewer often raises issues which require discussions with the engagement team. These discussions generally focus on an account balance not being sufficiently well justified, an account balance not reconciling with a related balance or footnote, etc. Sometimes the review exceptions require additional audit work by the engagement staff although the intent is to avoid such work through adequate audit planning and thorough reviews among members of the engagement staff. Nonetheless, it is not uncommon for the reviewer to identify issues which result in additional audit field work by the engagement team. AUDIT MANUAL GUIDELINES AND TRAINING Audit manual guidelines regarding the audit review are fairly general in nature and do not provide much guidance for performing the review task. For the most part, the manual explains when the audit review is required, the responsibility for assigning the reviewer, and how to resolve differences of opinion between the engagement partner and the reviewer. The firm provides no formal training regarding either the overall engagement review or any of the other review functions of an audit. Because of the lack of training materials and guidelines, the study’s success became very dependent upon the results of interviews with the reviewer and observations of the review process itself. This dependence made it critical to adequately document the auditor’s thought processes while observing the review. 53 THE MODEL DEVELOPMENT PROCESS Figure 10 provides a summary of the model development process using the terminology offered by Hoffman [1987]. 
The first step in the process involved interviewing the expert and reading the firm's material regarding the review process. These interviews and the reading of the firm's literature resulted in the development of a reconstructive model of the review task. The reconstructive model served as a control structure of the reasoning process.

During the next phase of the model development process we focused on observing the expert while reviewing client engagements. During the early stages of this phase, it became apparent that two characteristics of the auditor's reasoning process prevented refining the initial model. First, the auditor performed the review of the individual financial statement items with great speed. Second, the auditor accessed a large amount of the information contained in the report during the review process.

In an attempt to slow down the auditor's reasoning process and to identify the specific types of information the auditor utilized, information on the annual report was masked. Masking involved covering the information in the annual report with labeled pieces of paper indicating what was being covered. When the reviewer needed the information being covered, the reviewer pulled back the labeled paper to reveal the concealed information. During the review, we asked the auditor to identify each piece of information as the mask was removed. The auditor reviewed two engagements using the masked reports.

Observing the auditor review the masked statements provided two critical pieces of information. First, the characteristics of the information and processes involved in the review became more certain. Second, the auditor demonstrated a detailed understanding of the client industry. While very helpful, the benefit of masking the reports was short-lived. The auditor quickly tired of having to pull back pieces of paper to obtain the needed information. The interviews using the masked statements confirmed the model developed during the earlier review sessions. Unfortunately, the sessions using the masked statements did not significantly refine the model. In addition, as summarized in Figure 10, concentrating on difficult engagements and statement items, or simulating various types of review situations, did nothing to further refine the model.

FIGURE 10 - KNOWLEDGE ACQUISITION METHODS AND RESULTS, AUDIT REVIEW PROCESS (FRED) (adapted from Hoffman [1987])

METHOD: METHOD OF USE AND RESULTS

OBSERVATION - Method: Initial interviews with expert and studying firm literature on audit review. Results: High level representation of the review reasoning process.

CONSTRAINED INFORMATION - Method: Masked annual report information to slow down reasoning and explicitly identify information being utilized by expert. Results: Identified need for and character of additional information used during the review. Expert became very frustrated.

SIMULATED SCENARIOS - Method: Observed additional reviews by expert. Results: Nothing significant.

DIFFICULT CASES - Method: Concentrated on difficult reviews and statement items. Results: Nothing significant.

A MODEL OF THE REVIEWER THOUGHT PROCESS

As a result of the model development process, we propose a model of the review process named FRED (Financial statement REview and Diagnosis). Figure 11 illustrates the reasoning strategy, and Table 3 provides details of the reasoning process. Each of the components of the reasoning process in Figure 11 will now be explained.

Is the Assertion Troublesome?
Task number one on Figure 11 is to determine if an assertion in the annual report is troublesome. For example, cash of $5,000,000 would be a single management assertion. Determining whether an assertion is troublesome involves three factors: analytical procedures results, the level of materiality, and various environmental factors.

Analytical Procedures Results

The focus of the analytical procedures is to use the interrelationships of the client financial information in order to identify unusual account behavior. There are four basic types of analytical procedures:

1. Trend analysis compares the current and prior balances to identify any significant trends in the account.
2. Ratio analysis tests the relationships among financial statement accounts or groups of accounts to highlight unusual activity.
3. Reasonableness tests involve any computations, or series of computations, to estimate an account balance by using relevant financial or operating (nonfinancial) data.
4. Consistency of the assertion with other related and affected assertions.

FIGURE 11 - FRED: THE REVIEW REASONING STRATEGY

TABLE 3
FRED - DOMAIN KNOWLEDGE DETAILS

Component: Definition

Adequate-Documentation: The conclusion the auditor reaches when there are no deficiencies in the financial statements.

Analytical-Review-Results: Ratios and differences computed using the financial statement balances to determine unusual account behavior.

Audit-Workpapers: Information which the auditor extracts from the audit workpapers should the need arise in order to determine whether a probable-cause exists.

Clean-Assertion: A financial statement item which does not have any apparent characteristic which would cause the auditor concern regarding the reliability of the financial statements.

Engagement-Staff-Answers: Answers which members of the engagement staff provide in response to the reviewer's questions.

Environmental-Factors: Characteristics of the client operating environment.

Existing-Probable-Cause: A specific cause which the auditor has found does exist concerning the current financial statement period under review.

Firm-Policies: A model of the auditor's understanding of the audit firm's reporting guidelines and regulations relating to the audit engagement under review.

Inadequate-Documentation: The conclusions the auditor reaches when there are deficiencies in the presentation of an assertion in the financial statements. These often deal with non-compliance with firm, regulatory, or industry standards and norms or with accounting problems in terms of reconciling the presentation of the financial information on the statements themselves.

Justified-Troublesome-Assertion: An assertion accepted by the auditor as having adequate justification for its troublesome characteristics.

TABLE 3 (Cont.)
FRED - DOMAIN KNOWLEDGE DETAILS

Management-Assertion: Any statement, written or spoken, contained in the annual report for which the auditor is responsible.

Materiality: An assertion specific dollar amount (typically a function of the statement) used to determine whether an assertion should be given specific attention or not.
Probable-Cause: A specific cause which the auditor has identified as possibly causing the troublesome characteristic of an assertion.

Regulatory-Guidelines-and-Standards: A model of the auditor's understanding of the reporting guidelines and regulations relating to the audit engagement under review.

Rejected-Probable-Cause: A specific cause which the auditor has found does not exist concerning the current financial statement period under review.

Review-Opinion: The final output of the review. This opinion can be to support the engagement team's audit opinion, question the opinion, or suspend the opinion until further information is acquired.

Troublesome-Assertion: A financial statement item which has some kind of troublesome characteristic (e.g., undesirable trend, related troublesome event condition) which causes the auditor concern regarding the reliability of the financial statements.

Unjustified-Troublesome-Assertion: An assertion rejected by the auditor as having adequate justification for its troublesome characteristics.

In and of itself, analytical review only reveals, or confirms, the need to further investigate. Analytical procedures are meaningless without some assessment of the account materiality, of prior account behavior, or of the assertion's risk.

Materiality

For the reviewer in this study, materiality tended to be a function of either Net Income or Total Assets. For example, unless an income statement account in question exceeded 5% of net income, the balance was immaterial. Each industry has a certain set of transactions and related accounts which are disturbing to the auditor because of their risk. Account specific risk might be a function of the critical nature of:

1. the related transaction to the success of the company,
2. the extent of judgment involved in calculating the account balance,
3. the inherent weaknesses in controls surrounding the transaction and its recording, or
4. the environmental events and conditions which commonly exist in the client operating environment.

To a large extent, the level of materiality is a quantification of at least part of the auditor's assessment of risk.

Environmental Factors

Events and conditions alone might highlight the need to review a particular assertion in the annual report. An example given by the partner during one of the review sessions is as follows. When an industry faces increased competition, a company might relax credit policies to encourage sales. The auditor would be concerned that allowances for uncollectible accounts are appropriate.

If a certain assertion is troublesome, the reviewer searches for causes. If not a troublesome assertion, the reviewer proceeds to the next assertion. During this study the auditor explained that the procedures applied to this point are similar to those used during the planning of the audit. However, during a review, the auditor is working under the assumption that the audit has been performed in good faith rather than attempting to plan the audit work to be performed. Therefore, the reviewer is interested only in significant issues which might present problems should the audit work be challenged.

What Are the Probable Causes of the Problem?

Task number two in Figure 11 involves the expert identifying probable causes for a troublesome assertion. Probable causes of a troublesome assertion can be of three types. First, an error might occur in computing accounting numbers. This might range from the improper application of an accounting technique or standard, to simple math errors. Second, a change in the operating environment might occur as a result of management action.
For example, the introduction of a new product or service might be a likely candidate for changes in sales, receivables, inventory, or operating cost account balances. Third, events might occur in the environment which are outside the client’s control. For example, a change in the availability of a particular resource (materials or labor) could affect costs of sales which in turn might increase the price of goods or services. This could greatly affect reported profits or the company’s ability to continue to exist. Identifying the most likely cause of a troublesome assertion is a function of auditor experience. The expert for this stUdy was able to retrieve several possible explanations for nearly any assertion anomaly. As well, the expert was able to rank the probable causes in terms of their likelihood. The expert considered an auditor’s inability to retrieve potential explanations for assertion anomalies to be a significant indicator of auditor inexperience. Does Such a Cause Exist? Process number three in Figure 11 involves identifying which, if any, of the probable causes exist. During the review session, if the auditor determined the more likely causes did not exist, the engagement team was asked to provide additional information. 61 Is There Sufficient J ustification for the Troublesome Assertion? If the auditor finds an existing probable cause, the auditor determines whether the cause adequately explains the troublesome nature of the assertion. This determination requires significant use of the auditor’s understanding of the client operating environment. At a minimum, this understanding consists of: a knowledge of the environmental events and conditions which affect the client, a Significant understanding of the client’s operations, and an understanding of the effect of the environment on the client’s operations and financial reporting. As a result of this reasoning, the auditor determines whether the assertion is justified. Does the Assertion Presentation Adhere to the Applicable Standards? During the fourth step of the reasoning process in Figure 11, the auditor must determine whether the workpapers provide adequate documentation of both the cause of the troublesome assertion and the work performed by the engagement team relative to the troublesome assertion. Basically, the reviewer must determine whether the engagement team adhered to regulatory and firm guidelines regarding workpaper documentation. In most low risk engagement reviews, the auditor mentioned that reviewing the annual report for compliance with disclosure requirements was the primary task of the independent reviewer. In limited cases, the reviewer is giving some assurance that the audit documentation would provide adequate support if challenged by an external review. Effect of the Assertion on the Review? The final question facing the reviewer is to determine the effect of the particular assertion on the entire review of the annual report. Does the assertion’s review strengthen or weaken the reviewer’s acceptance of the annual report? For the review results to alter the engagement partner’s opinion, the results must be very convincing. This is particularly true when the reviewer contests a clean opinion by the engagement partner. 62 FRED - A RESEARCH PROTOTYPE EXPERT SYSTEM At first glance, from the preceding narrative of the independent partner review process, it appears that the auditor’s reasoning during the review is substantial. 
However, attempting to represent the process as a computational model provides some valuable insights. At a general level, the reasoning strategy can be represented as a rather simple set of forward chaining production rules. These rules are supported by a very elaborate understanding of client industry operations. As a result, at lower levels of representation, we would expect the computational model of the review to be similar to the work of Selfridge and Biggs [1988]: a simple heuristic control structure using a rather complex understanding of client operations.

A prototype system representing the reasoning process of FRED was created using the rule based expert system tool named EXSYS. The system consists of a simple heuristic control structure to guide the reasoning process through the six phases of the review task. Figures 12 and 13 are diagrams showing some of the rule structures of the system. The values of various states are displayed on the left hand side of each figure. The lines connecting the states to the boxes labeled R1, R2, R3, etc., represent how the states are used by the rules in the knowledge base. For each combination of states, a rule produces a specific conclusion shown on the right hand side of the figures.

FIGURE 12 - FRED: INFERENCE NET, REVIEW CONCLUSIONS (states: Clean Assertion, Troublesome Assertion, No Existing Probable Cause, Existing Probable Cause, Justified Troublesome Assertion, Unjustified Troublesome Assertion, Adequate Documentation, Inadequate Documentation; rules R1-R6; conclusions: Support Opinion, Question Opinion, Qualify Opinion)

FIGURE 13 - FRED: INFERENCE NET, TROUBLESOME ASSERTION JUDGEMENT (states: Significant AR Results, Insignificant AR Results, Material Assertion, Immaterial Assertion, High Risk, Moderate Risk; rules R7-R11; conclusions: Troublesome Assertion, Clean Assertion)

For example, in Figure 12, the box R1 represents a rule which tests for the existence of "Clean Assertion" and "Adequate Documentation." If found, rule R1 concludes "Support Opinion." Figure 13 illustrates how the model concludes whether an assertion is clean or troublesome. The rules R7 through R11 test for various states shown on the left of Figure 13. For example, if analytical results are insignificant, and the amount of the assertion is immaterial, and the risk associated with the assertion is high, then rule R10 concludes "Clean Assertion." This conclusion would then be available for use by rules R1 through R6 in Figure 12 to determine whether to support, question, or qualify the engagement opinion.

CONTRIBUTIONS OF STUDYING THE REVIEW PROCESS

The motivation for studying the review process was to develop a computational model of an evidence aggregation judgment task. To a degree, these objectives were met: a model representing the audit review process has been developed. However, the study suffers from a lack of depth in not representing the auditor's rich understanding of the client. This lack of depth is primarily a function of the weaknesses inherent in the study environment surrounding the review task. The inherent weaknesses of studying the audit review process are threefold.

Speed of Reasoning and Lack of Documentation

As already mentioned, the auditor in this study assimilated and analyzed a tremendous amount of information very rapidly. The primary drawback from the auditor's speed of reasoning is the resulting lack of documentation for use in building the computational model.
We recognized this difficulty While meeting with the study expert and attempted to resolve the problem by masking the financial statements during the review. Masking the statements provided confirmation of the accuracy of earlier work but did not further refine the model. 66 Nature of the Review Task Although the review process involved audit evidence aggregation, the auditor tended to focus primarily on compliance issues and secondarily on determining whether sufficient competent evidence had been gathered. In terms of Figure 11, the auditor tended to emphasize one question above all others, "Does the assertion presentation adhere to the applicable standards?" Although we expected the auditor to assess adherence to documentation standards, we expected at least an equivalent level of effort to be dedicated directly to the evidence aggregation issues of each assertion. This expectation was unfulfilled. Individuality of the Review Process ’ The third difficulty with the review task is that each review is unique. The reviewer may pass through many assertions, quickly discounting any abnormalities for any number of reasons and then concentrate on only a few isolated assertions. There does not appear to be any consistency regarding the type of assertions which the reviewer tended to focus on, even within a single industry. Instead, the auditor explained that identifying accounts which were commonly troublesome required a detailed understanding of the client operating environment. The combined effect of the three weaknesses just discussed is that the review function shown in Figure 11 is able to reveal only a very general model of the evidence aggregation process. Although the study provided ample evidence to suggest the auditor has a detailed understanding of the nature of the client, the review task did not help to expose the characteristics of the auditor’s client knowledge. We concluded that the review process is a fairly inefficient means of studying the evidence aggregation judgment. 67 CHARACTERISTICS OF THE AUDIT PLANNING PROCESS While discussing with the review expert the weaknesses of the review task as a means of studying audit evidence aggregation judgment, the reviewer suggested the audit planning process might be a more appropriate study environment. The expert explained that audit planning is similar to the review in the initial stages but differed in the objective of the analysis. Although the planning process involves reasoning strategies similar to the review process, the planning process did not appear to suffer from the weaknesses identified in the review task. The following is an analysis of how the audit planning task compares with the weakness of the review task study. Speed of Reasoning and Lack of Documentation As noted earlier, the speed with which the reviewer analyzed financial information was a problem to the extent that we were unable to document the reviewer’s thought process beyond a high level sequence of heuristics. In contrast, the planning process gives particular attention to documenting issues dealing with audit program development. The auditor views the resulting audit program as the definition of sufficient competent evidence for the audit engagement. Unlike the study of the audit review process, documenting the auditor’s reasoning during the planning process is not a task imposed by the researcher. Instead, documenting the reasoning involved in audit planning is a natural part of the task. 
This documentation provides the means of guiding a more detailed study of the decision processes involved in audit planning.

Nature of the Planning Task

The primary purpose of audit planning is to develop an audit program. Like the review task, part of the audit program addresses issues regarding financial statement compliance with the applicable regulations and statutes. However, the audit program deals more with the confirmation of the financial data by specifying the needed audit evidence and how to get it. As a result, the audit program defines what constitutes sufficient, competent evidence for the engagement. Therefore, the planning process would seem to provide a much more efficient means of studying auditor evidence aggregation, thereby enabling one to develop a computational model of the same.

Individuality of the Planning Process

The planning task begins in a manner similar to the beginning of the review task by asking "Is the assertion troublesome?" The primary difference between the planning and review tasks is that the review is concerned with the account balance. During audit planning, the account represents a set of detailed audit concerns. As a result, the related audit concerns, not the account balance, are the focus of the planning effort. This results in a much more uniform analysis of each management assertion. Rather than quickly discounting a particular assertion as in the review process, the auditor carefully considers each assertion in terms of the related audit concerns and their likelihood of causing material error on the financial statements. Additionally, the auditor appears to use similar types of information during the planning and review processes; only the level of detail and uniformity of application is different.

As a result of this analysis, a detailed study of audit evidence aggregation judgment seems more attainable by focusing on the audit planning process. The next chapter provides the results of studying the evidence aggregation judgment processes demonstrated during audit planning.

CHAPTER V

A MODEL OF AUDIT PLANNING JUDGMENT

This chapter presents the results of modeling the judgment processes of an expert auditor planning an audit engagement. By way of introduction, we wish to first reiterate the purpose of this study. This is done to ensure the reader understands the impact of using the computational modeling approach to study auditor judgment. This study focuses on four tasks:

1. identify specific examples of complex auditor judgment during the audit planning process,
2. develop a model of an example of complex auditor judgment,
3. express the model in computational form, and
4. evaluate the adequacy of the computational model.

Prior studies of auditor reasoning using the computational modeling approach have tended to study less complex examples of auditor judgment or to study complex auditor reasoning superficially. Figure 14 illustrates the concept of pursuing a depth versus breadth approach when studying audit judgment. Assume a complete model of auditor reasoning is decomposed into three general tasks: engagement management, engagement planning, and engagement review. The researcher is faced with a choice of extending the breadth or depth of a model of auditor judgment. Extending the breadth of the model focuses on developing a model of all types of reasoning at one level of decomposition. In contrast, extending the depth of the model focuses on exploring one complex sub-process at more primitive levels of reasoning.
Concentrating on the primitive judgment processes differentiates the researcher from the practitioner. The researcher is interested in understanding and representing the expert's primitive reasoning processes. To do so, the researcher constrains the study environment to make concentrating on complexity more manageable. In contrast, the practitioner focuses on delivering a computer program for use on the job. Therefore, rather than continually constraining the modeling effort to pursue complexity, the practitioner attempts to deliver a computer program which has broad application by modeling complex reasoning only when absolutely necessary.

This study of the audit planning process restricts the engagement setting in order to model complex auditor judgment at its lowest level, as is illustrated in Figure 14. This is not an attempt to provide an exhaustive model of audit planning for use in a commercial setting. Focusing on judgment complexity is known as developing a "proof of concept." A proof of concept provides some assurance that the problem is representable, but makes no attempt to determine the feasibility of the model for commercial purposes. To develop a proof of concept of complex judgment, the scope of the judgment domain must be restricted to make the development process manageable. By restricting the judgment domain, the researcher is able to identify and concentrate on the most complex examples of judgment encountered. Unless the study domain is restricted, the researcher can quickly become consumed in modeling a host of less complex instances of expert judgment. As a result, the researcher is unable to determine whether the judgment process is representable at the primitive task level.

Therefore, this study differs from many prior studies using computational modeling in two ways. First, the model concentrates on representing reasoning at the primitive task level. Second, in order to work toward the primitive task level, the scope of the study environment is more restricted than in prior studies.

[Figure 14 - Decomposing the audit judgment process (adapted from McCarthy, Rockwell, and Wallingford [1989]): a fully specified model of auditor judgment decomposes into engagement management reasoning, engagement planning reasoning, and engagement review reasoning; engagement planning decomposes into audit planning subtasks A, B, and C, each of which decomposes further into primitive task descriptions. Pursuing the full depth of complexity means following one branch down to the primitive level.]

The remainder of this chapter describes the model development process and the resulting model of audit planning judgment. It also provides an evaluation of the model's adequacy.

THE RESEARCH PROCESS

The model development process began with a series of preliminary interviews with an expert auditor. The initial interviews concentrated on identifying a manageable study environment within the audit planning process. The expert explained the audit planning process used by the firm, helped select a specific complex judgment process, identified documentation and training materials provided by the firm, and helped select an engagement for the study.

From discussions with the expert, the planning process appears to involve the four judgment tasks illustrated in Figure 15. Working from left to right, the planning process proceeds as follows. For each financial statement assertion, the auditor generates a set of specific audit concerns.
For example, for the sales figure on a client's income statement, one of the concerns an auditor may have is whether the sales figure includes sales which occurred after the statement period. Figure 15 shows how each assertion can be related to one or more audit concerns. The expert explained that the national office had developed a set of audit concerns tailored to the various types of client industries. The firm required the auditor to analyze each of the audit concerns during the planning process.

The next step in the planning process is the auditor assessing the likelihood of material error (LME) associated with each audit concern. The LME is a function of the inherent risk and control risk associated with the specific audit concern. The auditor is responsible for determining whether the LME is "High," "Moderate," or "Low." After assessing the LME, the auditor develops a preliminary audit approach dealing with the specific audit concern. An LME of "High" results in more tasks in the preliminary audit procedures than a "Low" assessment. The final step in the planning process involves aggregating the preliminary audit procedures for each audit concern into an audit program for the entire engagement.

[Footnote 3: As a matter of convenience, a new expert was used in the study of audit planning.]

[Figure 15 - Audit planning judgment tasks: each financial statement assertion maps to one or more audit concerns; an LME assessment is made for each audit concern; each assessment leads to a preliminary audit approach; and the preliminary audit approaches are aggregated into the audit program.]

Of the planning tasks shown in Figure 15, the LME assessment was selected as the focus of this study for the following reasons:

1. Specifying audit concerns is not a judgment performed by the auditor. According to the expert, the audit concerns specified by the national office invariably defined the scope of the audit planning process. The expert stated that auditors rarely take exception with the set of audit concerns defined in the firm's audit approach. As a general rule, the expert was content to adhere to the audit concerns specified by the national office.

2. Developing the audit approach for both the specific audit concern and the overall engagement appears to be an exercise in operationalizing the result of the LME assessment. According to the expert, the primary concern is to identify the most efficient means of gathering the needed audit evidence while maximizing engagement profitability.

3. Assessing the likelihood of material error seems to be a significant step in the planning process and appears to involve complex auditor reasoning. In fact, the effort an auditor expends in gathering evidence is, to a large extent, a function of the LME assessment. Eventually, every auditor in the firm must develop an ability to assess the LME for each audit concern of an engagement.

[Footnote 9: The interaction between risk and evidence was discussed earlier in Chapter 2 and is illustrated in Figure 2.]

Focusing on the LME judgment process means that the other tasks of the audit planning process will not be included in this study. Furthermore, we chose to adopt the same approach as Selfridge and Biggs [1988] and concentrate on the judgment demonstrated by the expert during a single client engagement. The choice being made is to develop a proof of concept regarding an understanding of the details of the reasoning for a single engagement instead of developing a shallow model of the LME reasoning process for a wide variety of engagements.
By concentrating on a single engagement we can pursue examples of complex auditor judgment more freely than trying to study several different client engagements at the same time. This decision is consistent with the spirit of this study as explained in the beginning of this chapter.

Selecting an engagement for this study was a function of the expert's clientele, which consisted primarily of retail grocers. We chose one client with over 90 stores in five states. The expert had audited this client for nearly five years and was quite familiar with the client's personnel and operations. The expert provided copies of the planning documentation for the selected client engagement.

The last matter discussed in the preliminary interviews involved identifying the firm literature dealing with the LME judgment process. Fortunately, the firm has both an extensive set of literature explaining the philosophy of the LME judgment and training material for auditors to learn how to make the LME judgment. The literature consists of a handbook explaining the general idea of the LME judgment and a supplement for each major industry. The firm provided copies of both the general audit handbook and the supplement for the retail industry.

AN OVERVIEW OF THE MODEL DEVELOPMENT PROCESS

As with our discussion of FRED, we use Hoffman's [1987] terminology to describe the model development process. Figure 16 is a summary of the development process, which involved observation, simulated scenarios, and difficult cases as the primary means of developing a model of the LME process. This model is named APE (Auditor Planning and Evidence), and the remainder of this chapter provides the details of the development process and the APE model.

[Figure 16 - Knowledge acquisition methods and results for the audit planning process (APE), adapted from Hoffman [1987]. Observation: initial interviews with the expert and reading firm literature on audit planning; produced a high level representation of the LME judgment process. Constrained information: not utilized due to experience during the study of audit planning. Simulated scenarios: focused on the LME assessment for a number of different audit concerns; exposed the need to model auditor client knowledge and gave some indication of the types and structures of that knowledge. Difficult cases: focused on reasoning for two critical audit concerns; exposed the types and structures of the auditor client knowledge.]

A RECONSTRUCTIVE MODEL OF LME REASONING

The firm literature on audit planning is quite extensive. The literature explains how the auditor is to complete the planning documents provided by the national office while planning an engagement. The planning document provides a separate page for each of the audit concerns for the client engagement. For example, one audit concern for the client engagement in this study is "Merchandise is purchased only with proper authorization." The auditor uses the planning forms to document the LME judgment process for the specific audit concern. The auditor also indicates the appropriate preliminary audit procedures on the planning document. Documentation of the LME assessment process is divided into three parts:

1. Environmental Considerations -- the auditor uses this portion of the form to note any significant events or conditions which would affect the client in relation to a specific audit concern.
2. Observations from Analytical Procedures -- the auditor uses this portion of the form to note the results of analytical procedures which are specific to the audit concern.

3. System Evaluation -- the auditor uses this portion of the form to note specific characteristics of the client's system of internal controls relating to the specific audit concern.

After providing documentation regarding the types of information considered during the assessment of LME, the auditor must note on the planning document whether the LME is High, Moderate, or Low.

As was explained in Chapter 3, a reconstructive model of auditor reasoning concentrates on using literature describing the judgment, or on using interviews with the expert, to construct an initial representation of the judgment process. Developing a reconstructive model provides the expert with a model which can then be critiqued in order to expose the expert's own reasoning and knowledge. By using the firm literature we were able to construct a general model of the LME judgment process in the form of a decision flowchart. Using the flowchart we developed a rule-based representation of the reasoning process depicted in the flowchart. The model uses a set of 38 production rules.

The LME assessment begins by asking the auditor to state the level of the inherent risk for the specific audit concern. Given the auditor's assessment of the inherent risk, the model asks from two to nine additional questions dealing with the client's system of internal controls in order to assess the level of control risk for the specific audit concern. After completing the assessment of control risk, the model determines the LME for the specific audit concern using the answers to three questions:

a. Does the auditor intend to rely on the client's system of internal controls?
b. Does the system possess key characteristics upon which the auditor can rely?
c. For systems using computer technology, are the computer related controls reliable?

Given the answers to these three questions, the model is able to provide an assessment of the control risk for the audit concern. The model then combines the control risk and inherent risk assessments and determines the LME. As well, the model provides some suggestions regarding the extent of the audit procedures given the level of the LME.

After using the model to assess the LME, the expert made three observations about its performance. First, the model appeared to be very shallow. This reaction by the expert was not surprising given that the firm literature explained the LME judgment process as it would apply to all audit concerns for all engagements. Second, the model was very dependent on the auditor making at least two significant judgments: the level of inherent risk for the audit objective, and the level of control risk for the audit objective. Third, any effort to refine the reconstructive model required a scoping decision:

a. refine the model to handle all audit concerns for a given industry,
b. refine the model to handle all audit concerns for a given client,
c. refine the model to handle a subset of audit concerns for the entire industry, and
d. refine the model to handle a subset of audit concerns for a specific client.

Our assumption at this point, and from the beginning of the study, was that the development of a truly deep model of auditor reasoning for even a single audit judgment task would be a significant undertaking.
Both the effort to model the LME judgment process and the volume of literature provided by the firm indicated that the LME judgment process for even a single audit concern involved significant auditor judgment. Therefore, we chose to refine the model by studying the auditor judgment involved in a subset of audit concerns for one client engagement.

Having decided to concentrate on a specific subset of audit concerns, the expert was asked to identify two of the more significant and complex audit concerns for a retail grocery audit engagement. The expert responded that the key issue in any retail grocery operation is inventory. Specifically, the expert felt that purchasing and pricing inventory were the most critical concerns for retail grocery engagements. The expert explained that inventory pricing is significant because of its relationship to the client's revenue. Unless inventory is priced according to management policy, the revenues of the client could be incorrectly stated. Furthermore, mistakes in purchasing inventory could result in obsolete or slow moving inventory. This could also directly influence the profitability of the client and affect the inventory value stated on the balance sheet. The expert identified two audit concerns in the firm's approach which related directly to the purchasing and pricing of inventory:

1. inventory is purchased only with proper authorization, and
2. inventory is priced according to company policy.

This study focuses on developing a deep model of the auditor's assessment of LME for these two audit objectives. No further restrictions in terms of scope were needed to accomplish this desire. The remainder of this chapter provides the results of the study.

THE LME JUDGMENT PROCESS

As a result of additional sessions with the expert, we were able to refine the reconstructive model by identifying both the steps and sequence of the reasoning process peculiar to the expert. Figure 17 illustrates the macro level LME judgment process for the expert. The LME judgment process is preceded by assessing the overall level of inherent risk and is followed by developing a preliminary audit approach. Therefore, the auditor repeats the reasoning processes numbered 3 through 7 in Figure 17 for each audit concern. The overall assessment of inherent risk is done only once for each client engagement. Our efforts focused on the judgment processes in Figure 17 which are specific to the LME assessment (processes 2 through 6). A description of each of the processing steps follows.

[Figure 17 - APE reasoning strategy: identify client; assess changes in overall IR; perform analytical procedures; assess IR for each audit objective; assess likelihood of material error; develop preliminary audit approach.]

Assess Changes in Overall Inherent Risk

Because the selected engagement was not a first year engagement, the expert did not do an exhaustive assessment of inherent risk as with a first year engagement. Instead, the expert focused on changes in the client which could affect the auditor's prior assessment of inherent risk. The expert appeared to firmly anchor on the prior level of overall inherent risk. For this engagement, the overall inherent risk was low. The auditor focused on four aspects of the client to determine if an adjustment to the prior overall inherent risk was necessary (a brief sketch of this check follows the list):

1. significant changes in client management,
2. significant changes in client operations,
3. significant changes in client accounting, and
4. significant changes in client performance.
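The anchoring just described can be stated very compactly. The following Common Lisp sketch is ours, not firm guidance or APE source code; the aspect names repeat the four listed above, and the symbols used for risk levels are assumptions.

    ;; Anchor on the prior overall inherent risk unless one of the four client
    ;; aspects listed above has changed significantly.
    (defparameter *inherent-risk-aspects*
      '(client-management client-operations client-accounting client-performance))

    (defun assess-overall-inherent-risk (prior-level changed-aspects)
      ;; CHANGED-ASPECTS is the subset of the four aspects showing significant change.
      (if (intersection *inherent-risk-aspects* changed-aspects)
          'requires-reassessment    ; a significant change reopens the assessment
          prior-level))             ; otherwise retain the prior level

    ;; For the engagement studied, no aspect had changed:
    ;; (assess-overall-inherent-risk 'low '())  =>  LOW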
For the client engagement in this study, there were no changes in any of the four aspects used to assess the overall inherent risk. Therefore, the expert concluded that the overall level of inherent risk remained low.

Perform Analytical Procedures

Before beginning the assessment of inherent risk for any of the audit concerns, the senior on the engagement team computes several account balance changes and ratios using the client's financial statements. Analytical procedures involve the expert comparing the results of the various financial statement computations with the auditor's expectations. The expert characterized the current amount of the account balance or ratio as "sharply increased," "increased," "unchanged," "decreased," or "sharply decreased." Whenever the current value of a ratio or balance does not equal the expectations, the expert seeks an explanation for the difference. By default, the auditor expects statement balances to remain unchanged unless an adequate explanation for any change is identified.

While we observed the expert review the analytical procedures performed by the senior for the two audit concerns, the expert demonstrated an ability to develop an explanation for abnormal analytical review results which appeared to require some complex reasoning. This ability was underscored during one session in which the expert provided a reason for a fluctuation in an account balance which differed from that of the senior who had done the analytical procedures for the engagement.

In one case, for example, the expert rejected the senior's explanation of an unexplained account balance change while analyzing the warehouse inventory account for the current year. The senior correctly noted that the balance of the warehouse inventory account for the current year was substantially higher than the prior year. The senior then attributed the increase in warehouse inventory to seasonal buying by the client in anticipation of the approaching holiday season. While reviewing the senior's work, the expert amended the senior's explanation. The expert noted that the primary reason for the increase in warehouse inventory was the increased client purchases made in order to obtain quantity discounts from vendors. Figure 18 illustrates how the expert seemed to be able to link fluctuations in the client's resources (warehouse inventory) to specific events the auditor was aware of in the operating environment (increased purchase quantities). In addition, the auditor manifested an ability to reason through the effect of the fluctuation on related financial statement balances and ratios. This appears to be direct evidence of the expert using a detailed understanding of the engagement client. Therefore, in order to model the expert's ability to provide explanations for unexpected account behavior, we included in APE a detailed representation of the auditor's knowledge of the engagement client. Efforts to model the expert's knowledge of client operations and the expert's financial reasoning abilities will be discussed later in this chapter.

Assess the Inherent Risk of Each Audit Concern

The assessment of inherent risk focuses on two types of information: the results of the analytical procedures for the specific audit concern, and the existence of any client operating characteristics which would increase or decrease the inherent risk associated with the specific audit concern.
For example, while assessing the inherent risk for merchandise being purchased with proper authorization, the expert noted the existence of direct store deliveries in the client operations. Direct store deliveries are deliveries of merchandise, such as soda or potato chips, by a vendor representative.

[Figure 18 - Warehouse inventory hook to client knowledge during deep reasoning: the LME reasoning steps of Figure 17 linked to client operating events, such as increased purchase quantities, that explain the warehouse inventory fluctuation.]

The existence of direct store deliveries caused the expert to assess the inherent risk for this specific audit concern as high regardless of the results of analytical procedures. The expert explained that direct store deliveries require the client to depend on the vendor delivering the proper type and amount of merchandise to the right store, marked at the right price. Because these deliveries occur outside the direct control of the client, the expert considered it much more likely that merchandise could be purchased without proper authorization.

In this case, the expert used the client knowledge differently than while reviewing the analytical procedures. The expert appeared to search for the existence of direct store deliveries as a characteristic of the client's operations. If found, the expert automatically assessed the inherent risk of the audit concern as high regardless of the results of the analytical procedures. The expert's ability to link client operating knowledge with financial statement deviations, and to identify operating characteristics themselves, both became avenues for extending the depth of the model beyond the use of simple heuristics.

Assess Control Risk for the Audit Concern

As with the inherent risk, the assessment of control risk appeared to be guided by a simple set of paired associations drawing upon a detailed knowledge of client operations. The primary factor determining the control risk is the existence of what the expert called "key attributes" of the client system. Key attributes are system characteristics which, in the auditor's opinion, provide sufficient control of the client operating environment to ensure that operations proceed according to company policy. The expert appears to associate a set of key attributes with each specific audit concern. If the key attributes exist, the expert assesses control risk as low; if not, the control risk is high. As with inherent risk, the expert's assessment of control risk demonstrates an ability to relate client operating characteristics with a specific audit concern. This ability is included in APE.

Assess Likelihood of Material Error

After assessing the inherent risk and control risk, the auditor assesses the likelihood of material error using a simple set of inferences illustrated in Figure 19. This set of inferences is identical to the firm literature describing the LME judgment process.

[Figure 19 - LME inference net: rules R1 through R4 combine the inherent risk assessment (high or low) and the control risk assessment (high or low) into an LME level such as "Moderate."]

After working through the LME judgment process for the two audit concerns, we included in our model a representation of the control structure of the reasoning process according to the observations of the expert's reasoning during the interviews. The purpose of the control structure is to assure that the model performs the subtasks of the LME judgment process in the same manner as the expert.
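Read as a table, the inference net of Figure 19 can be sketched as a single Common Lisp function. The firm literature defines the actual mapping; the cells shown here, in particular the treatment of mixed assessments as a moderate LME, are our reading of the figure rather than the firm's wording.

    (defun assess-lme (inherent-risk control-risk)
      ;; Combine the inherent risk and control risk assessments (HIGH or LOW)
      ;; into a likelihood of material error, following Figure 19.
      (cond ((and (eq inherent-risk 'high) (eq control-risk 'high)) 'high)
            ((and (eq inherent-risk 'low)  (eq control-risk 'low))  'low)
            (t 'moderate)))         ; assumed reading: mixed assessments are moderate

    ;; (assess-lme 'high 'low)  =>  MODERATE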
Having developed a heuristic control structure, the focus of the study turned to including the model based reasoning capabilities demonstrated by the auditor during the LME judgment process. Efforts focused on the three types of knowledge used during the LME judgment process:

1. the expert's ability to identify and explain causes for fluctuations in the client financial statements,
2. the expert's ability to reason through the effect of changes in financial balances on related financial statement accounts, and
3. the expert's ability to identify certain operating characteristics in the client environment used to assess the control risk of a specific audit concern.

Without an understanding of client operations and their effect on client finances, the reconstructive model required the user to provide the results of the three types of reasoning listed above.

A MODEL OF AUDITOR CLIENT KNOWLEDGE

Because both of the audit concerns used in this study dealt with the same transaction cycle, modeling the expert's knowledge of the engagement client was much more manageable. From conversations with the expert and during the study of the LME judgment process, the expert appeared to have at least three types of knowledge about the client:

1. knowledge of client operations and its environment,
2. knowledge of client personnel and responsibilities, and
3. knowledge of client finances and financial reasoning.

To represent these three types of auditor knowledge, we propose a model which consists of a network of temporally related events, each having causal relationships with client resources, as shown in Figure 20.[10] Each node of the network represents an event related to other events, resources, and agents using the attributes noted in Figure 20. The following section provides a more detailed description of the model.

[Footnote 10: Those familiar with the work of Selfridge and Biggs [1988] on the GCX model will notice that APE has been heavily influenced by the work on GCX. Therefore, the use of an events based method of representing auditor client knowledge should be viewed as somewhat of a confirmation of the events based memory first introduced by Selfridge and Biggs and not as something original to this work.]

Knowledge of Client Operations and its Environment

The model represents the expert's knowledge of how the client purchases and sells inventory as a network of temporally related events similar to the GCX model offered by Selfridge and Biggs [1988]. The Predecessor-Event and Successor-Event attributes represent the temporal relationships between the events illustrated in Figure 20. The auditor uses this knowledge to do essential reasoning tasks during the LME judgment process. Also included in the model is a representation of the effect of events on client resources. For example, the auditor understands that as a result of receiving inventory at the warehouse, the amount of warehouse inventory increases. Conversely, the auditor also understands that when the client ships inventory from the warehouse to the stores, the amount of warehouse inventory decreases and the amount of store inventory increases. The Successor-Condition attribute shown in Figure 20 represents the effect of the event on client financial accounts.
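A minimal sketch of this portion of the model is shown below. The slot names follow the attributes of Figure 20, but the Common Lisp structure, the RESOURCE-EFFECTS slot, and the accessor are our reconstruction rather than the GoldWorks frames actually used in APE.

    (defstruct operation-event
      name internal-agent external-agent
      predecessor-event successor-event parent-event
      resource-effects)             ; list of (resource . direction) pairs

    (defparameter *receive-warehouse-inventory*
      (make-operation-event
       :name 'receive-warehouse-inventory
       :parent-event 'buy-inventory
       :resource-effects '((warehouse-inventory . increase))))

    (defparameter *ship-warehouse-inventory*
      (make-operation-event
       :name 'ship-warehouse-inventory
       :parent-event 'buy-inventory
       :resource-effects '((warehouse-inventory . decrease)
                           (store-inventory . increase))))

    (defun event-effect-on (event resource)
      ;; Direction in which EVENT moves RESOURCE, or NIL if it has no effect.
      (cdr (assoc resource (operation-event-resource-effects event))))

    ;; (event-effect-on *ship-warehouse-inventory* 'store-inventory)  =>  INCREASE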
[Figure 20 - APE model of auditor domain knowledge. Operation Event attributes: Date, Internal-Agent, External-Agent, Resource, Predecessor-Event, Predecessor-Condition, Successor-Event, Successor-Condition, Parent-Event. Resource attributes: Date, Trend, Current-Amount, Expected-Value, Mode.]

In addition to the temporally related network of events, the auditor demonstrated the ability to describe client operations at three levels of abstraction. At the top level, Figure 21 illustrates that the auditor's understanding of the entire client operation consists of three events: buy inventory, sell inventory, and receive cash. The first two events at the top level, buy inventory and sell inventory, are expanded in Figure 21 to provide a more detailed description regarding the purchase and sale of inventory. The second level representation decomposes the knowledge even further to a third level of events, shown in Figure 22. This third level provides even more detail regarding the purchase and sale of inventory by the client. The inclusion of different levels of abstraction of client knowledge is a contribution of the APE model not included in the GCX theory of Selfridge and Biggs [1988].

Figures 21 and 22 provide an illustration of the auditor's leveled understanding of client operations. Figure 21 provides an illustration of the first two levels of auditor knowledge. Figure 22 provides the details of the second level event "Order Inventory." Details of the remaining events depicted in the second level of Figure 21 can be found in Appendix A.

[Figure 21 - APE inventory script: the top level events Buy Inventory, Sell Inventory, and Receive Cash decompose into second level events such as Approve Merchandise Plan, Order Inventory, Receive Warehouse Inventory, Receive Direct Store Delivery, Price Warehouse Inventory, Ship Warehouse Inventory, Open New Store, Order Store Inventory, Receive Store Inventory, Display Store Inventory, and Sell Store Inventory, with temporal relationships among events and event/resource relationships to Warehouse Inventory, Store Inventory, Revenue, and Cost of Goods Sold.]

[Figure 22 - APE "Order Inventory" detail: Survey Inventory Status, Negotiate Purchase with Vendor, and Adopt New Inventory Item, related to the Warehouse Inventory resource.]

Knowledge of Client Personnel and Responsibilities

Besides understanding the operating environment of the client, the expert also demonstrated a working knowledge of the role of both client employees and agents external to the client (e.g., vendors and customers). The auditor's knowledge regarding the client personnel focused on the employees' responsibilities and competence. Therefore, the model of operating events includes an attribute for the position titles of client personnel and external agents, listed in Figure 20 as the Agent attribute.

Knowledge of Client Finances and Financial Reasoning

The expert also evidenced a knowledge of the client's financial statements and the relationships between the various accounts in order to make judgments regarding client performance and expected account balances. This knowledge consists primarily of data-driven reasoning in which the expert determines the effect of changes in account balances on other related accounts. Figure 23 illustrates this network of financial reasoning.

[Figure 23 - APE client financial reasoning knowledge: a network relating warehouse inventory, store inventory, inventory, current assets, long term assets, and total assets, and relating sales price, sales volume, revenues, cost of goods sold, gross margin, expenses, and net income.]

APE REPRESENTATION METHODS

During the early stages of the APE model development process, a simple rule based expert system shell was used to develop the reconstructive model of the LME reasoning process.
As was explained earlier, the model consisted of 38 production rules. The computational model was developed using the product VPExpert produced by Paperback Software. The system is very portable and allowed easy modification during the early knowledge acquisition sessions with the expert. This representation method proved to be adequate for representing the control structure of the LME judgment process in the final version of APE. Extending the model to include representations of the auditor client knowledge used during the LME judgment process requires more powerful representation methods. As was explained in Chapter 3, for the final version of the APE computational model, we used a product called GoldWorks produced by Gold Hill Computers. GoldWorks provides the needed methods to represent the auditor client knowledge.

Representation of the APE LME Control Structure

The control structure of APE consists of a set of forward chaining rules used to guide the LME reasoning process. For example, the following set of rules gives an idea of the concept of controlling a reasoning process using forward chaining rules:

    IF   The session has begun, and
         The overall inherent risk is unknown,
    THEN Assess the overall inherent risk for the engagement.

    IF   The overall inherent risk assessment is known, and
         The results of analytical procedures are unknown,
    THEN Perform the analytical procedures.

    IF   The results of analytical procedures are known, and
         The inherent risk for the audit concern is unknown,
    THEN Assess the inherent risk for the audit concern.

As explained in Chapter 3, when the premise of a rule is satisfied, the knowledge contained in the rule conclusion becomes known to the system. In this case, the rule conclusion either invokes other rules which perform specific LME reasoning, or invokes special purpose functions for utilizing the expert's client knowledge.

Representation of the APE LME Hook to Client Knowledge

As is illustrated in Figure 18, while reviewing the analytical procedures, the expert demonstrated an ability to use client knowledge in order to explain unexpected results. A combination of forward chaining rules and special purpose LISP functions represents this ability. For example, APE uses the following rule to call upon a special purpose function used to find explanations for unexpected analytical procedure results in the warehouse inventory balance:

    IF   The current balance of the warehouse inventory account is unexpected,
    THEN Use the special function FIND-EXPLANATION.

FIND-EXPLANATION is a special purpose LISP function which performs the reasoning needed to explain the unexpected results of the analytical procedures. This is done by searching the auditor's understanding of client operations in order to find a reason for the unexpected warehouse inventory balance. If a reason for the unexpected warehouse inventory balance is found, then the function adjusts the expected value of warehouse inventory. As a result of the auditor's expectation being equal to the current amount, the system then concludes the analytical procedure results are expected and the next phase of the LME judgment begins.
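The following sketch shows one way FIND-EXPLANATION could work over the event network; it is our reconstruction rather than the LISP source of APE. It reuses the EVENT-EFFECT-ON accessor sketched earlier, the hash table of expected values is an assumed bookkeeping device, and the event instance and table named in the usage comment are likewise hypothetical.

    (defun find-explanation (account direction events expectations)
      ;; Search EVENTS for one whose effect moves ACCOUNT in DIRECTION.  If such
      ;; an event is found, revise the expected value of ACCOUNT so that it agrees
      ;; with the current amount and return the explaining event; otherwise NIL.
      (dolist (event events nil)
        (when (eq (event-effect-on event account) direction)
          (setf (gethash account expectations) direction)
          (return event))))

    ;; Example: an unexpected increase in warehouse inventory is explained by the
    ;; client's increased purchase quantities (compare Figure 18):
    ;; (find-explanation 'warehouse-inventory 'increase
    ;;                   (list *increase-purchase-quantities*)
    ;;                   *expected-values*)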
Representing APE Knowledge of Client Operations

A frame named "Operation-Event" represents a generic operating event. Figure 20 lists the attributes of this frame. Instances of the Operation-Event frame represent each event node in the client operation illustrated in Figures 21 and 22 and in Appendix A. The following is an example of a frame instance which represents the "Order Inventory" event in Figure 21:

    Order Inventory:
        Date                   1988
        Internal-Agent         Buyer
        External-Agent         Vendor
        Predecessor-Event      Approve Merchandise Plan
        Predecessor-Condition  Need Inventory
        Successor-Event        Receive Warehouse Inventory, or Receive Direct Store Delivery
        Successor-Condition    Inventory Ordered
        Parent-Event           Buy Inventory

Instances of a frame named "Resource" represent the auditor's knowledge of client finances. The attributes of the Resource frame are listed in Figure 20. Lastly, a set of forward chaining rules similar to the following represents the auditor's knowledge of financial reasoning:

    IF   The expected value of sales is increased, and
         The expected value of the cost of goods is unchanged,
    THEN Expect the balance of gross margin to increase.

    IF   The expected value of gross margin is increased, and
         The expected value of operating expenses is unchanged,
    THEN Expect the balance of net income to increase.
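The two rules above translate directly into small Common Lisp functions. The sketch below encodes only the quoted rules; the function names and the UNKNOWN default for combinations the rules do not cover are our assumptions, and the remainder of the Figure 23 network would be written in the same fashion.

    (defun expected-gross-margin (sales cost-of-goods)
      ;; First rule above: sales up with cost of goods unchanged raises the
      ;; expected gross margin.
      (if (and (eq sales 'increased) (eq cost-of-goods 'unchanged))
          'increased
          'unknown))

    (defun expected-net-income (gross-margin operating-expenses)
      ;; Second rule above: gross margin up with expenses unchanged raises the
      ;; expected net income.
      (if (and (eq gross-margin 'increased) (eq operating-expenses 'unchanged))
          'increased
          'unknown))

    ;; Chaining the two expectations:
    ;; (expected-net-income (expected-gross-margin 'increased 'unchanged) 'unchanged)
    ;; => INCREASED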
Overall, APE involves 53 forward chaining rules, 18 special LISP functions, and over 100 instances of various frame storage structures to represent the knowledge the expert uses to perform the LME judgment for the two audit concerns. Appendix B contains further details regarding the physical operation of APE. The remainder of this chapter discusses the evaluation of the APE computational model.

APE MODEL EVALUATION

The primary intent of any effort to evaluate a computational model of expert reasoning is to assess whether the model truly possesses some degree of expertise. Previous efforts in modeling expert audit reasoning have commonly involved asking others to judge whether the model exhibits expertise. Therefore, following this traditional approach, we first demonstrated to others the model's ability to perform the LME judgment as did the expert. Appendix C contains the details of the dialogue with the system and comments explaining APE's LME judgment processes.

Although APE successfully demonstrated its ability to perform the LME judgment process, the demonstration focused on the heuristic reasoning capabilities of APE and not the model based reasoning. A common criticism of heuristic based computational models of expert reasoning is that they tend to be very brittle. By brittle we mean the model can only do the specific task for which it was designed. Trying to adapt the model to another judgment process quickly reveals the shallow nature of the heuristic model.

Schank and Childers [1984, pp. 170-187] provide some insight into the means by which model based theories of reasoning can be evaluated. They observe that knowledgeable people are able to do two things beyond making an expert decision. They can explain in detail what they know and can also use their knowledge to reason through new circumstances which have not been previously encountered. Therefore, if a computational model is an adequate representation of deep reasoning, it should demonstrate some ability to explain what it knows and to reason through novel situations, which we will call deformations. This evaluation approach is similar to a concept common in database design: once an enterprise is properly modeled, the data in the database can be used in a variety of ways. Schank's and Childers's argument uses a similar line of reasoning: if a computational model has truly captured the deep knowledge of an expert, adding additional uses of the knowledge is similar, in concept, to simply attaching needed procedures to provide additional uses of an enterprise's database. Therefore, to demonstrate the adequacy of APE's client knowledge, we developed procedures which use the same client knowledge involved in the LME reasoning both to explain what it knows of the client and to reason through deformations of the client operating environment.

APE Explanation Capabilities

While trying to identify ways of demonstrating the explanation and deformation capabilities of APE, we discussed the problem with the expert. We asked how the expert would determine whether a person truly possessed a knowledge of the client and the LME judgment process. The expert felt this knowledge could be evaluated using three questions:

1. What has to happen before a particular operating event can occur?
2. What is the result of a particular operating event?
3. Explain the details of specific client operations.

To be able to provide responses to the above questions, we developed LISP procedures which used APE's knowledge of client operations. The first two questions above were rather trivial uses of APE's client knowledge. When asked to describe what preceded or followed a specific operating event, the LISP procedures in APE first retrieve the description of the event. The procedure then parses the description to display information about the conditions and events preceding or following the particular event in question. The following is the English equivalent of the dialogue which occurs after asking APE to describe what immediately follows a particular event:

(USER) Tell me the effect of the client ordering inventory.

(APE) After the client places an order, inventory can be received at either the warehouse or directly at the stores through direct store delivery. As well, the status of inventory becomes "ordered."

A more interesting explanation capability of APE is demonstrated when asking the system to provide a more detailed description of a particular client operating event. The following is a sample dialogue which would occur after asking APE to describe how the client receives inventory at the warehouse. Details of the "Receive Warehouse Inventory" event are illustrated in Appendix A.

(USER) Tell me how inventory is received at the warehouse.

Comment: Using its understanding of client operations, APE is able to determine the level of abstraction at which the user is asking a question. This is done by simply looking for all events which have the name of the Receive Warehouse Inventory event as the value of their Parent-Event attribute. After finding the lower level event descriptions, APE identifies which of the events occurs first. APE then displays the name of the first event to occur when receiving warehouse inventory.

(APE) The first thing which must happen in order to receive warehouse inventory is the vendor receives a purchase order from the client.

Comment: APE now begins to chain the remaining child events of the Receive Warehouse Inventory event together to complete the description of how the client receives warehouse inventory.

The next thing to happen is the vendor ships the inventory.
The next thing to happen is the vendor truck arrives at the warehouse.
The next thing to happen is the vendor truck is unloaded.
The next thing to happen is the warehouse receiver signs for the inventory.
The last thing to happen is the inventory is placed on the warehouse shelves.
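The chaining behind this dialogue can be sketched as follows. For brevity the sketch stores events as property lists rather than frames; the procedure names and the example data are ours, not APE's LISP source, though the logic follows the comments above: collect the events whose Parent-Event names the queried event, find the child that no sibling names as its successor, and then walk the successor chain.

    (defun children-of (parent events)
      ;; Events whose PARENT-EVENT attribute names PARENT.
      (remove-if-not (lambda (e) (eq (getf e :parent-event) parent)) events))

    (defun first-child (children)
      ;; The child event that no sibling names as its successor occurs first.
      (find-if (lambda (c)
                 (notany (lambda (s) (eq (getf s :successor-event) (getf c :name)))
                         children))
               children))

    (defun describe-event-detail (parent events)
      ;; Walk the successor chain of PARENT's children, as in the dialogue above.
      (let ((children (children-of parent events)))
        (do ((step (first-child children)
                   (find (getf step :successor-event) children
                         :key (lambda (e) (getf e :name)))))
            ((null step))
          (format t "The next thing to happen is ~a.~%" (getf step :name)))))

    ;; Hypothetical data for part of the Receive Warehouse Inventory detail:
    ;; (describe-event-detail 'receive-warehouse-inventory
    ;;   '((:name vendor-receives-purchase-order :parent-event receive-warehouse-inventory
    ;;      :successor-event vendor-ships-inventory)
    ;;     (:name vendor-ships-inventory :parent-event receive-warehouse-inventory
    ;;      :successor-event truck-arrives-at-warehouse)
    ;;     (:name truck-arrives-at-warehouse :parent-event receive-warehouse-inventory
    ;;      :successor-event nil)))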
Demonstrating APE's explanation facility provides additional confirmation that APE possesses one of the attributes Schank and Childers [1984] noted in knowledgeable people.

APE Deformation Reasoning

In auditing, a useful definition of a deformation is either a significant change in the client operations or a previously unencountered event which affects the client. The expert identified two events which seemed to fit the definition of a deformation. The first deformation dealt with the client's efforts to become the discount price grocer in the area. Figure 24 illustrates the four events which led to the client becoming the low price leader and the effect of the events on financial statement items. By increasing warehouse capacity, the value of the warehouse increased, which in turn increased the value of the long term assets. Similarly, increasing purchase quantities caused warehouse inventory to increase, which also increased the value of total assets. Meanwhile, the remaining two events depicted in Figure 24, reducing the cost of inventory and reducing the sales price of inventory, affected the base components of net income.

[Figure 24 - APE low price leader reasoning: the four events leading to the low price leader move and their links to the financial statement accounts they affect.]

By adding a description of the four events in Figure 24, and by using some very simple LISP procedures, APE demonstrated an ability to describe both how the deformation of the client environment occurred and how the deformation affected the client's financial statements. The following is a sample of the dialogue which occurs after asking APE to consider the effect of the client's move to become the low price leader.

(USER) Tell me how the client became the low price leader and the effect this move has on the financial statements.

(APE) The client became the low price leader by significantly reducing the price of inventory. However, the client could not significantly reduce the price of inventory until the cost of inventory was significantly lowered.
Because the cost of inventory is lowest, expect the cost of goods sold per item to be lowest. Because the price of inventory is lowest, expect the volume of sales to sharply increase. Because the price of inventory is lowest, and volume of sales is sharply increased, expect the value of revenues to sharply increase. Because the expected value of revenues is sharply increased, expect the value of gross margin to increase. Because the expected value of gross margin is increased, expect the value of net income to increase. To further test the adequacy of the APE procedures to handle deformations, we added the events shown in Figure 25 describing a truckers’ strike to APE. Using the same reasoning procedures to analyze the low price leader deformation, APE analyzed the effect of a truckers’ strike on the client. The APE dialogue for the truckers’ strike analysis is in Appendix C. 103 TOTAL ASSETS NET INCOME + + + LONG TERM CURRENT ASSETS ASSETS EXPENSES GROSS MARGIN + + - + INVENTORY COST OF REVENUES GOODS SOLD + + /\ . . WAREHOUSE STORE- INVENTORY SALES INVENTORY WARB'DUSE INVENTORY INVENTORY . COST VOLUME PRICE Vendors Cannot Denver Trucks Not Moving Truckers O‘I Strike APE TRUCKERS' STRIKE REASONING FIGURE 25 CHAPTER VI SUMMARY AND EXTENSIONS The primary purpose of this study has been to develop a computational model of complex audit judgment dealing with audit evidence aggregation. Traditional methods of studying complex reasoning processes have proven ineffective in providing detailed descriptions of the evidence aggregation process. Relative to this primary purpose of describing complex auditor reasoning, we offer the following summary of this research. Two different audit judgment tasks dealing with audit evidence aggregation were studied during this research. The first audit task studied was the audit review performed by an independent partner. The audit review focuses on identifying troublesome assertions, analyzing whether there is sufficient justification for the troublesome assertions, and then assessing the impact of the analysis on the audit opinion. We were successful in modeling this reasoning process at a high level by representing the expert’s heuristics using a set of production rules. The resulting model was named FRED. During the review process the auditor demonstrated instances of complex judgment. However, we were unable to represent the complex judgment in detail due to the difficulty in exposing the expert’s knowledge Of the client. This difficulty limited our ability to model the judgment process at its primitive level of reasoning. Because of our interest in decomposing evidence aggregation judgment at the primitive task level, we turned to a study of the engagement planning process using a different expert and a different client engagement. A critical task in audit planning is assessing the likelihood of material error for the various audit concerns associated with the client financial statements. We successfully 104 105 decomposed the reasoning process into four subtasks: assessing the overall level of inherent risk for the engagement, performing the necessary analytical procedures, assessing the inherent risk for the specific audit concern, and assessing the control risk for the audit concern. A computational model named APE was developed which represented the expert’s reasoning at a more detailed level than most prior studies which have only used rule-based representations. 
The model uses a set of forward chaining rules to represent the control structure of the LME reasoning process. In addition, like the expert, the model possesses a detailed understanding of three types of knowledge about the engagement client: knowledge about client operations and their effect on the financial statements, knowledge about the responsibilities of client personnel, and knowledge about the client's finances. This knowledge consists of a network of temporally related event descriptions which have causal links to financial statement line items. This network is represented using frames, rules, and special purpose LISP procedures.

The following are observations about computational modeling, expert audit judgment, and evidence aggregation resulting from this study:

1. Auditor reasoning appears very complex -- unsurprisingly, while studying both the audit review and audit planning tasks, the experts appeared to perform complex reasoning. Not only did the two auditors involved in this study use heuristics during the reasoning process, both manifested a detailed understanding of the client and of financial reporting to complete the reasoning task. The auditor's knowledge of the client was a critical component of the knowledge utilized during both the audit review and audit planning processes.

2. Adequate computational models of audit expertise require more than rule-based representations -- because the auditors manifested a very complex domain of knowledge, this study provides additional evidence that auditor reasoning involves more than the use of heuristic reasoning. Selfridge and Biggs [1988] were the first audit judgment researchers to contend that simple rule-based models of auditor reasoning are inadequate in properly representing complex auditor judgment. This study provides additional support to Selfridge's and Biggs' call for more robust computational models. It is our contention that future efforts to develop computational models of complex auditor reasoning will require the use of more than rule-based representations. This is especially true when attempting to decompose the judgment process to model the primitive judgment tasks.

3. Evidence aggregation judgment requires extensive client knowledge -- while studying both the audit review and audit planning tasks, the two experts demonstrated a detailed understanding of client operations. We were unable to determine the nature of this client knowledge while studying the audit review task. However, while studying the audit planning process, we developed a model of client knowledge consisting of a knowledge of client operations, a knowledge of personnel responsibilities, and a knowledge of how client operations affect the financial statements. During the model development process, client knowledge proved to be a distinguishing factor in differentiating novice and expert audit reasoning. Therefore, it would appear that future studies of auditor reasoning using either traditional stimulus-response or computational modeling approaches should include the auditor's understanding of the client in a model of expert audit judgment.

4. Evidence aggregation judgment appears representable at the primitive task level -- focusing on two complex instances of expert reasoning during the audit planning process, we were able to decompose the judgment process to what might be considered a primitive task level.
To the extent that these two instances of expert judgment are representative of complex auditor reasoning, this study serves as a "proof of concept" that audit expertise is representable at a much more fundamental level of reasoning than the simple heuristic based models resulting from most prior computational modeling efforts.

5. Client knowledge appears to involve various levels of abstraction -- in addition to having an extensive understanding of the client, the auditor demonstrated an ability to reason through this understanding at various levels of abstraction. During the LME judgment process the auditor demonstrated an understanding of the client knowledge at three levels of abstraction. These three levels of abstraction were included in the model resulting from this study. As was mentioned earlier, the inclusion of leveled client knowledge is an extension of modeling auditor client knowledge beyond the work of Selfridge and Biggs [1988].

6. A computational model's adequacy can be assessed by demonstrating its explanation and deformation capabilities -- the model developed in this study appears to demonstrate at least limited examples of both explanation and deformation capabilities. This is a step beyond many earlier efforts which have concentrated on external evaluations of the computational model by other experts. In this case, by demonstrating the model's ability to both explain what it knows and reason through novel situations, the model is shown to represent capabilities which are known to exist in knowledgeable people such as auditors.

These observations lead us to conclude that researchers interested in utilizing the computational modeling approach have a contribution to make in studying auditor judgment. The contribution rests in the academic researcher's freedom to concentrate on developing useful descriptions of the low level reasoning involved in expert audit judgment. This research provides an additional description of complex auditor judgment toward the development of the taxonomy of descriptions called for by Simon [1980].

Overall, the contribution of this research can be characterized along the three dimensions illustrated in Figure 26. First, this research applies the computational modeling approach to the study of a new domain of auditor knowledge: the audit review and the audit planning tasks. Although many researchers have studied auditor behavior during the planning process, none have attempted to develop a detailed description of the process using the computational modeling approach. Second, the model follows on the suggestion of Selfridge and Biggs [1988] in modeling auditor client knowledge as a network of temporally related events and extends the representation to include various levels of abstraction. Third, the model provides some additional contribution toward improved methods of evaluating the adequacy of models of auditor knowledge. Previous studies of auditor judgment using the computational modeling approach have concentrated on evaluating the adequacy of heuristic representations of auditor reasoning. We demonstrate the usefulness of assessing the model's ability both to explain what the model knows and to reason through the effect of a deformation of the domain knowledge.

We cannot conclude without noting the influence of the Selfridge and Biggs [1988] work in developing the GCX theory. Selfridge and Biggs were the first to push beyond the modeling of shallow auditor reasoning by proposing a model of deep auditor knowledge. Our contribution in relation to the work of Selfridge and Biggs toward a more complete model of auditor domain knowledge is twofold:
Our contribution, relative to the work of Selfridge and Biggs, toward a more complete model of auditor domain knowledge is twofold:

1. We provide some confirmation of the appropriateness of modeling auditor knowledge of a client as a network of temporally related events.

2. We extend the idea of event based client knowledge to include various levels of abstraction.

[Figure 26 -- Audit Computational Modeling Research (adapted from March [1988]): prior audit computational modeling studies (Dungan, Gal, Steinbart, Meservy, GCX) and the present study arrayed by domain, representation, and research method (build, evaluate, prove).]

RESEARCH EXTENSIONS

We see three ways in which the results of this research could be extended to provide further insights into the nature of complex auditor reasoning, into the training of inexperienced auditors, and into the design of auditor decision support systems. The first extension discusses various ways of relaxing the constraints on the study environment imposed during this project, both as a further test of the adequacy of the APE model and as a means of extending its domain of representation. The latter two extensions propose studies which could explore byproduct benefits of the APE model.

Testing the Model's Generalizability

Having developed a proof of concept for a limited judgment domain, the next step could be to extend the model to include other examples of auditor reasoning. By this means the generalizability of the model could be tested. Extending the model would basically require that the constraints imposed during this study be relaxed. We see three ways in which the study environment could be relaxed:

1. Include more audit concerns for the study client.
2. Include more clients within the grocery industry.
3. Include non-grocery clients and their respective audit concerns.

In each of these proposed extensions we would expect the current model to change in two ways. First, additional descriptions of client operating events would be needed. Second, additional heuristics and special purpose functions would be needed to utilize the expanded client knowledge. In each of these three extensions, differences between expert auditors could also be studied. This could include studying differences in both heuristics and client knowledge between expert and novice auditors.

Pedagogical Use of a Computational Model

Having developed a means of describing expert auditor reasoning, an important use of this model could be to develop an improved means of training inexperienced auditors. As Ashton, Kleinmuntz, Sullivan, and Tomassini note,

Descriptive studies of judgment and choice behavior in audit settings focus on the actual behavior and thought processes of auditors. One important reason for emphasizing the descriptive approach is that, despite the proliferation of decision support and expert systems technologies, much of the auditor's effectiveness still rests on his/her ability to process information, select and interpret evidence, and draw appropriate conclusions. In part, this is because the vast majority of audit decisions are not yet automated -- auditors still are operating largely on the basis of their training and intuition. If we hope to understand and improve the quality of audit decision making, we need to understand both how the experienced auditor thinks and how to train the inexperienced auditor (p. 100).

An interesting extension of the use of the APE model would be to investigate the usefulness of the model in an educational setting. This usefulness could be tested in two ways.
First, by illustrating to novice auditors how an expert organizes client knowledge and how this knowledge relates to the financial statements being audited. A question to be answered would be whether such training is more effective or efficient than current methods of training novice auditors. A second, and much more ambitious, extension would involve developing additional procedures which utilize the current model knowledge to perform training exercises or other educational functions for inexperienced auditors. This avenue would explore the possibility of allowing auditors to become familiar with a client by interacting with a system such as APE rather than performing a number of engagements before obtaining an understanding of the client.

Designing More Useful Audit Information Systems

This third extension addresses McCarthy's [1987] call for the development of methods of linking the knowledge in a computational model of auditor reasoning with semantically modeled transaction databases. The usefulness of the APE model could be greatly improved by developing a means by which it could directly access transaction data to perform both analytical procedures and compliance evaluations of client controls. Linking APE to the transaction database would enable the system to perform the analytical procedures more reliably and to determine whether the client actually adheres to a claimed system of internal controls. This evaluation could be performed on a continuous basis and would thereby reduce the auditor's risk of making inaccurate evaluations of the effectiveness of the client's system of internal controls. This ability could greatly enhance the reliability of the auditor's LME assessment during the audit planning process. Rather than mistakenly relying on a client's system of internal controls, a link with transaction data would allow APE to perform an exhaustive analysis of client compliance by reviewing detailed descriptions of client operations.

We are encouraged by the results of this study, both in terms of the resulting description of an example of complex auditor reasoning and in terms of the potential extensions of this effort. We expect that continued efforts along these lines will prove beneficial both to audit practitioners interested in improving audit practice and to researchers interested in understanding the complexities of audit reasoning.

APPENDICES

APPENDIX A

DETAILED DESCRIPTIONS OF APE EVENTS

The following figures show the third level of decomposition of the auditor's knowledge of client operations. The first figure is a duplicate of an earlier figure showing the first and second level decomposition of the client knowledge. The remaining figures show the detail of those events marked with an "*" in the first figure.

[Figure 27 -- APE Inventory Script: the first and second level decomposition of the client knowledge (Buy Inventory, Sell Inventory, Receive Cash and their component warehouse and store inventory events), showing temporal relationships and event/resource relationships; the events marked "*" are decomposed in the figures that follow.]
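To make the figures easier to read, the following hypothetical sketch shows how a portion of this decomposition might be recorded in Lisp. The event names are taken from the figures of this appendix, but the list structure, the exact temporal ordering, and the accessor function are illustrative assumptions and are not the contents of the APE-SCRP.LSP file described in Appendix C.

    ;; A hypothetical sketch of how the decomposition in Figures 27 and 29
    ;; might be recorded.  The event names come from the figures; the list
    ;; structure, ordering, and accessor are assumptions for illustration.

    (defparameter *inventory-script*
      '((buy-inventory
         :sub-events (approve-merchandise-plan
                      order-warehouse-inventory
                      receive-warehouse-inventory
                      price-warehouse-inventory
                      ship-warehouse-inventory
                      receive-direct-store-delivery))
        (receive-warehouse-inventory          ; third-level detail, Figure 29
         :parent buy-inventory
         :sub-events (vendor-receives-po
                      vendor-ships-inventory
                      vendor-truck-arrives-at-warehouse
                      sign-for-vendor-inventory
                      unload-truck
                      put-inventory-in-warehouse))))

    (defun sub-events (event script)
      "Return the temporally ordered sub-events of EVENT, or NIL if none are recorded."
      (getf (cdr (assoc event script)) :sub-events))

    ;; Example: (sub-events 'receive-warehouse-inventory *inventory-script*)
    ;; => (VENDOR-RECEIVES-PO VENDOR-SHIPS-INVENTORY ...)

Each entry names an event, its parent at the next higher level of abstraction, and its temporally ordered sub-events, which is essentially the information conveyed by the detail figures that follow.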
[Figure 28 -- APE - Order Inventory Detail: sub-events include surveying inventory status, negotiating purchases with vendors, and adopting new inventory items.]

[Figure 29 -- APE - Receive Warehouse Inventory Detail: the vendor receives the PO, ships the inventory, and the vendor truck arrives at the warehouse; warehouse personnel sign for the vendor inventory, unload the truck, and put the inventory away.]

[Figure 30 -- APE - Price Warehouse Inventory Detail: buyers monitor inventory prices, set the inventory price, update the scanner file, update store shelf stickers, and verify scanner prices.]

[Figure 31 -- APE - Ship Warehouse Inventory Detail: the warehouse receives the store order, selects inventory for the order, the driver confirms the load, and the truck is driven to the store.]

[Figure 32 -- APE - Order Store Inventory Detail: store personnel notice low stock on the store shelf, prepare the store order, and send the order to the warehouse.]

[Figure 33 -- APE - Receive Store Inventory Detail: the warehouse truck arrives at the store, the truck is unloaded, and the store receiver signs for the inventory.]

[Figure 34 -- APE - Display Store Inventory Detail: inventory is taken from receiving and placed on the store shelf.]

[Figure 35 -- APE - Sell Store Inventory Detail: the customer enters the store, selects inventory, and takes the selection to the cashier; the cashier scans the selection, the system calculates the sale amount, and the customer pays the sales clerk.]

[Figure 36 -- APE - Receive Direct Store Delivery Detail: the vendor checks the store shelf, obtains store receiver approval, and places the inventory.]

APPENDIX B

APE DIALOGUE TRANSCRIPTS

The following is the English equivalent of a sample dialogue which would occur while using APE to assess the LME for an audit objective. The actual output of APE as it currently stands requires some understanding of LISP; therefore, for readability, we have provided the English equivalent. Dialogue produced by APE is preceded by (APE), while sample responses by a user are preceded by (USER). At times, to explain the operation of APE, comments have been added which are denoted by "Comment".

(APE) What is the name of the client you want to work on?
(USER) Dave's Grocery

Comment: At this point the system identifies the client industry type from its knowledge base.

(APE) What is the year of the engagement?
(USER) 1988

Comment: At this point the system begins to request the information needed to make an assessment of changes in overall inherent risk. The user is asked to respond to each question.

(APE) How would you describe the prior complexity of operations?
(USER) SIMPLE
(APE) Have there been any changes in operations?
(USER) NONE
(APE) How would you describe the prior complexity of accounting?
(USER) SIMPLE
(APE) Have there been any changes in accounting?
(USER) NONE
(APE) How would you characterize management competence?
(USER) HIGH
(APE) Have there been any significant changes in management?
(USER) NONE
(APE) Have there been any significant client events this past year?
(USER) NONE
(APE) How would you characterize client profit stability?
(USER) STEADY-INCREASE
(APE) How would you characterize current net income?
(USER) STRONG-POSITIVE

Comment: Having the necessary information, the system makes the following conclusions:

The complexity of the client operations is simple.
The complexity of client accounting is simple.
Management competence is high.
Client performance is strong.
Overall inherent risk is low.

The system now asks the user to select the audit concern to analyze.

(APE) What is the control objective you wish to analyze?
(USER) Merchandise is purchased only with proper authorization.

Comment: At this point the system begins its assessment of inherent risk for the audit concern. The assessment begins by determining whether the results of the analytical review process are as expected. The first analysis deals with the balance of the warehouse inventory account. The system recognizes that the current amount has increased, which was not expected. Our expert anchored on prior year balances and ratios; any deviation of more than 5 to 10% required an explanation. The explanation consisted of identifying an event in the client operating environment which caused the increase in the inventory balance. As a result of its search of the client knowledge, the system identified an event which caused warehouse inventory to increase. The following is the actual system dialogue.

(APE) The value of total warehouse inventory is unexpected. Therefore, I am looking for an explanation.

Comment: After a brief period of time the system continues.

(APE) I have determined that the reason for the current amount of warehouse inventory being increased is the occurrence of an increase in purchase quantities. Therefore, I am revising my expected value of warehouse inventory to be increased. This concludes my reasoning regarding the current amount of warehouse inventory.

Comment: The system continues checking various balances and ratios to complete its assessment of the results of the analytical review process. The system displays its findings after each analysis is completed.

(APE) The value of total warehouse inventory is as expected. The value of gross margin is as expected. The ratio of inventory to total assets is as expected.

Comment: The system now checks for the existence of direct store deliveries in the client operating environment. In this case the operating event is found. Therefore, the system concludes that inherent risk for the audit concern is high.

(APE) Inherent risk for the audit concern is high.

Comment: Having concluded that the inherent risk for the audit concern is high, the system now begins to assess the level of control risk for the audit concern. The system first concentrates on assessing the internal controls in the direct store delivery process.

(APE) Are direct store deliveries preauthorized?
(USER) YES
(APE) Are direct store deliveries prepriced?
(USER) YES
(APE) Does a store receiver verify all store inventory deliveries?
(USER) YES
(APE) Are vendors responsible for direct store delivery mistakes?
(USER) YES
(APE) Are exceptions and errors in direct store deliveries few in number and immediately resolved?
(USER) YES
(APE) Have there been any significant problems with direct store deliveries in the past?
(USER) NO

Comment: As a result of the preceding questions, the system determines that the internal controls for direct store deliveries are strong. The system now asks for information regarding the warehouse PO system in order to assess the adequacy of controls for purchasing inventory delivered to the warehouse.

(APE) Have there been any changes in the warehouse PO system?
(USER) NONE
(APE) Have there been any previous problems with the warehouse PO system?
(USER) NONE
(APE) Have there been any prior problems with the general controls of the purchasing process?
(USER) NONE

Comment: As a result of the responses to the preceding questions, the system determines that the warehouse PO controls are strong. To conclude its assessment of the control risk for the audit concern, the system asks for information about the computer operations of the client. The system knows the client utilizes a computer for pricing inventory and calculating sales at the store checkout stand. Therefore, the following information is requested.

(APE) Are new and revised programs authorized?
(USER) YES
(APE) Are new and revised programs tested?
(USER) YES
(APE) Are new and revised programs properly approved?
(USER) YES
(APE) Is authorization required to access data?
(USER) YES

Comment: As a result of the preceding responses, the system concludes that the computer related controls for the client are strong. Because the direct store delivery controls, the warehouse purchasing controls, and the computer related controls are all strong, the system concludes that the control risk for this audit concern is low. In addition, following the reasoning of our study expert, the system also resets the value of inherent risk to low because of the strength of the client's internal control system. This results in the system concluding its reasoning.

(APE) Likelihood of material error for the audit concern "Merchandise is purchased only with proper authorization" is low.

This concludes the system's LME assessment for the audit objective. A similar dialogue would occur for the audit concern "Merchandise is priced in accordance with company policy."

The following is a sample dialogue produced by APE when assessing the effect of a truckers' strike on the operations and financial statements of the client. The same conventions used during the LME assessment dialogue are utilized for this dialogue.

(USER) Tell me how a truckers' strike occurs and its effect on the client's financial statements.

(APE) Client inventory became unavailable because vendors are unable to deliver inventory. Vendors were unable to deliver inventory because vendor trucks are not moving. Vendor trucks were not moving because truckers are on strike. Truckers are on strike because they are disgruntled. The conditions resulting from the events involved in a truckers' strike are as follows: Because client inventory is unavailable, I expect the value of revenues to decline sharply. Because the expected value of revenues is a sharp decline, I expect the value of gross margin to decline sharply. Because the expected value of gross margin is a sharp decline, I expect the value of net income to decline sharply.

APPENDIX C

APE ARCHITECTURE

The code for APE is stored in five files. The following is a brief description of each file. For APE-SCRP.LSP, APE-LME.LSP, and APE-EXPL.LSP, a graphical description of the contents is provided in the figures that follow.

APE-SCRP.LSP -- Contains the structures representing the auditor's knowledge of the client. See Figure 37 for details regarding the contents of this file.

APE-LME.LSP -- Contains the instances and rules utilized to represent the auditor's reasoning process for assessing the likelihood of material error. See Figures 38 and 39 for details regarding the contents of this file.

APE-EXPL.LSP -- Contains the LISP functions used to provide the explanation and deformation capabilities of APE. See Figure 40 for details regarding the contents of this file.

APE-FINA.LSP -- Contains the forward chaining rules used to assess the impact of changes in financial statement account balances.

APE-CTRL.LSP -- Contains the rules and screen management instances necessary to start, manage, and close a session with APE.
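To suggest the flavor of the forward chaining rules contained in APE-FINA.LSP, the following is a hypothetical sketch written as ordinary Common Lisp rather than in the rule form APE itself uses. The propagation mirrors the truckers' strike dialogue in Appendix B, but the function names, data structures, and rule encoding are assumptions made for illustration only.

    ;; A hypothetical sketch of the kind of forward chaining rule that a file
    ;; such as APE-FINA.LSP contains: when an expectation is asserted for one
    ;; account, the rule propagates an expectation to a dependent account.
    ;; The names and data structures are assumptions, not the APE source.

    (defparameter *expectations* (make-hash-table)
      "Maps a financial statement account to its currently expected change.")

    (defparameter *propagation-rules*
      ;; (antecedent-account antecedent-change consequent-account consequent-change)
      '((revenues     sharp-decline gross-margin sharp-decline)
        (gross-margin sharp-decline net-income   sharp-decline)))

    (defun assert-expectation (account change)
      "Record an expectation for ACCOUNT and forward chain through the rules."
      (setf (gethash account *expectations*) change)
      (dolist (rule *propagation-rules*)
        (destructuring-bind (ante ante-change conse conse-change) rule
          (when (and (eq account ante)
                     (eq change ante-change)
                     (not (eq (gethash conse *expectations*) conse-change)))
            (format t "~&Because the expected value of ~(~a~) is a ~(~a~), ~
                       I expect the value of ~(~a~) to be a ~(~a~).~%"
                    ante ante-change conse conse-change)
            (assert-expectation conse conse-change)))))

    ;; Evaluating (assert-expectation 'revenues 'sharp-decline) would produce
    ;; the two expectation revisions reported at the end of Appendix B.

A set of rules of this kind, one for each relationship between accounts, is what allows the deformation reasoning to run forward from an operating event to its expected financial statement consequences.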
[Figure 37 -- Auditor Client Knowledge: the agents (internal personnel such as buyers, warehouse and store employees, cashiers, drivers, and senior management; external parties such as customers, vendors, and truckers), events, financial resources (for example net income, gross margin, total inventory, cost of goods sold, revenues), and physical resources (vendor trucks) contained in APE-SCRP.LSP. Each event is represented as an instance of a frame; see Figures 21 and 22 and Appendix A for details regarding the event names.]

[Figure 38 -- Physical Schema of LME Assessment: the forward chaining rules that start the overall inherent risk assessment, get the client industry, assess operations complexity, obtain the overall inherent risk data, assess objective significance, perform the analytical procedures, assess client performance, assess overall inherent risk, and assess the objective inherent risk, control risk, and LME. If any rule testing the various balances and ratios finds an unexpected result, the rule executes the LISP function FIND-AR-REASON, passing it the name of the account or ratio and its abnormal condition. The objective inherent risk rule looks primarily at the results of the analytical review and the characteristics of client operations; the objective control risk rule looks for the existence of certain key attributes of the client operations for the objective being assessed; the objective LME assessment is the set of four rules shown in Figure 19.]

[Figure 39 -- Physical Schema of LME Assessment (continued): the CLIENT, INDUSTRY, and CONTROL-OBJECTIVES objects (for example, the instance Daves-Grocery of the frame CLIENT, the industry Retail-Grocery, and the control objectives Merchandise-purchases-authorized and Merchandise-pricing-follows-policy) and the pop-up screens used during a session (Choose-Client-Menu, Choose-SCO-Menu, Overall-IR-Data-Menu, DSD-Control-Menu, CCO-Data-Menu, Warehouse-PO-Info-Menu, Buyer-Info-Menu, Engagement-Data-Menu). Forward chaining rules are used to control the execution of APE; each node in Figure 38 represents a forward chaining rule.]

[Figure 40 -- APE Explanation/Deformation LISP Code Structure: a structure chart of the LISP functions reached from the main explanation/deformation menu (event predecessor, event successor, expand parent event, low price leader, truckers' strike), including functions which find causes, find predecessor and successor events and conditions, assess impact, and print predecessor and successor events and conditions.]

LIST OF REFERENCES

Abelson, R.P., "Psychological Status of the Script Concept," American Psychologist (July, 1981), pp. 715-729.

Aikins, J.S., J.C. Kunz, and E.H. Shortliffe, "PUFF: An Expert System for Interpretation of Pulmonary Function Data," Computers and Biomedical Research (Volume 16, 1983), pp. 199-208.

AICPA, Professional Standards (Commerce Clearing House, 1982).

Arens, A.A. and J.K. Loebbecke, Auditing: An Integrated Approach (Prentice-Hall, 1985).
Ashton, R.H., Human Information Processing in Accounting: Accounting Research Study No. 17 (American Accounting Association, 1982).

Ashton, R.H., Research in Audit Decision Making: Rationale, Evidence, and Implications - Research Monograph No. 6 (Canadian Certified General Accountants Research Foundation, 1983).

Ashton, R.H., D.N. Kleinmuntz, J.B. Sullivan, and L.A. Tomassini, "Audit Decision Making," in Research Opportunities in Auditing, A.R. Abdel-khalik and I. Solomon, eds. (American Accounting Association, 1988).

Bailey, A.D. Jr., K. Hackenbrack, P. De, and J. Dillard, "Artificial Intelligence, Cognitive Science and Computational Modeling in Auditing Research: A Research Approach," The Journal of Information Systems (Spring, 1987), pp. 20-40.

Bobrow, D.G., S. Mittal, and M.J. Stefik, "Expert Systems: Perils and Promise," Communications of the ACM (September, 1986), pp. 880-894.

Bosk, C.L., Forgive and Remember: Managing Medical Failure (University of Chicago Press, 1979).

Cushing, B.E., and J.K. Loebbecke, "Analytical Approaches to Audit Risk: A Survey and Analysis," Auditing: A Journal of Practice and Theory (Fall, 1983), pp. 23-41.

Danos, P., D. Holt, and A.D. Bailey, Jr., "The Interaction of Science and Attestation Standard Formation," Working Paper (University of Minnesota, 1986).

Davis, R., and D. Lenat, Knowledge-Based Systems in Artificial Intelligence (McGraw-Hill, 1982).

Dillard, J., and J. Mutchler, "A Knowledge Based Expert System for the Auditor's Going Concern Decisions," Working Paper (The Ohio State University, 1986).

Dukes, W.F., "N=1," Psychological Bulletin (Volume 64, No. 1, 1965), pp. 74-79.

Dungan, C., "A Model of an Audit Judgment in the Form of an Expert System," Ph.D. Dissertation (University of Illinois, 1983).

Ericsson, K.A., and H.A. Simon, "Verbal Reports as Data," Psychological Review (1980), pp. 215-251.

Feigenbaum, E.A., "The Art of Artificial Intelligence," International Joint Conference on Artificial Intelligence (1977), pp. 1014-1029.

Felix, W.L. and W.R. Kinney, Jr., "Research in the Auditor's Opinion Formulation Process: State of the Art," The Accounting Review (April, 1982), pp. 245-271.

Gal, G., "Using Auditor Knowledge to Formulate Data Model Constraints: Expert Systems for Internal Control Evaluation," Unpublished Dissertation (Michigan State University, 1985).

Gal, G., and W.E. McCarthy, "Semantic Specification and Automated Enforcement of Internal Control Procedures Within Accounting Systems," Working Paper (Michigan State University, 1986).

Gardner, H., The Mind's New Science (Basic Books, 1985).

Gaumnitz, B., T. Nunamaker, J. Surdick, and M. Thomas, "Auditor Consensus in Internal Control Evaluation and Audit Program Planning," Journal of Accounting Research (Autumn, 1982), pp. 745-755.

Genesereth, M.R. and M.L. Ginsberg, "Logic Programming," Communications of the ACM (September, 1985), pp. 933-940.

Gibbins, M., "Propositions About the Psychology of Professional Judgment in Public Accounting," Journal of Accounting Research (Spring, 1984), pp. 103-125.

Hansen, J.V. and W.F. Messier, Jr., "Continued Development of a Knowledge-Based Expert System for Auditing Advanced Computer Systems," preliminary report submitted to the Peat, Marwick, Mitchell Foundation (1984).

Hayes-Roth, F., D.A. Waterman, and D.B. Lenat, eds., Building Expert Systems (Addison-Wesley Publishing Company, 1983).
Hoffman, R., "The Problem of Extracting the Knowledge of Experts From the Perspective of Experimental Psychology," AI Magazine (Summer, 1987), pp. 53-76.

Hogarth, R.M., Judgment and Choice: The Psychology of Decision (Wiley, 1981).

Holstrum, G. and L. Kirtland, "The Audit Risk Model: A Framework For Current Practice and Future Research," Symposium on Auditing Research (University of Illinois, 1982), pp. 267-310.

Howard, D., Cognitive Psychology (MacMillan Publishing Co., Inc., 1983).

Howe, D.R., Data Analysis for Data Base Design (Edward Arnold, Ltd., 1983).

Johnson, P., "What Kind of Expert Should a System Be?" The Journal of Medicine and Philosophy (Volume 8, No. 1, 1983), pp. 77-97.

Joyce, E.J. and R. Libby, "Behavioral Studies of Audit Decision Making," Journal of Accounting Literature (1982), pp. 103-121.

Kahneman, D., and A. Tversky, "The Psychology of Preferences," Scientific American (January, 1982), pp. 160-173.

Kaplan, S., "An Examination of the Effects of Environment and Explicit Internal Planning Process," Auditing: A Journal of Practice & Theory (Fall, 1985), pp. 1-19.

Kissinger, J., "A General Theory of Evidence as the Conceptual Foundation in Auditing Theory: Some Comments and Extensions," The Accounting Review (April, 1977), pp. 322-339.

Libby, R., and B.L. Lewis, "Human Information Processing Research in Accounting: The State of the Art," Accounting, Organizations and Society, Vol. 2, No. 3 (1977), pp. 245-268.

Libby, R., Accounting and Human Information Processing: Theory and Applications (Prentice-Hall, 1981).

Libby, R., J. Artman, and J. Willingham, "Process Susceptibility, Control Risk, and Audit Planning," The Accounting Review (April, 1985), pp. 212-230.

Libby, R., and D.M. Fredrick, "Expertise and the Ability to Explain Audit Findings," Working Paper (University of Michigan, 1988).

March, S., "Computer Science Research Methods in Information Systems," presentation at the International Conference on Information Systems, Minneapolis, Minnesota (December, 1988).

Mautz, R.K., "The Nature and Reliability of Audit Evidence," The Journal of Accountancy (May, 1958), pp. 40-47.

Mautz, R.K. and H.A. Sharaf, The Philosophy of Auditing (American Accounting Association, 1964).

McCarthy, J., "We Need Better Standards for AI Research," The AI Magazine (Fall, 1984), pp. 7-8.

McCarthy, W.E., "On the Future of Knowledge-Based Accounting Systems," in D.R. Scott Memorial Lectures in Accountancy, Vol. XIII, T.P. Howard and E.R. Wilson, eds. (School of Accountancy, University of Missouri-Columbia, 1987).

McCarthy, W.E., S.R. Rockwell, and E. Wallingford, "Design, Development, and Deployment of Expert Systems Within an Operational Accounting Environment," presented at the Workshop on Innovative Applications of Computers in Accounting Education, School of Management, The University of Lethbridge, Lethbridge, Alberta (May, 1989).

Meservy, R., "Auditing Internal Controls: A Computational Model of the Review Process," Unpublished Dissertation (University of Minnesota, 1985).

Michaelson, R., "An Expert System for Deferral Tax Planning," Working Paper (University of Nebraska, 1982).

Miller, L., "Has Artificial Intelligence Contributed to an Understanding of the Human Mind? A Critique of Arguments For and Against," Cognitive Science (1981), pp. 111-128.

Mock, T.J., and I. Vertinsky, "DSS - RRA: Design Highlights," paper presented at the Symposium on Decision Support Systems for Auditing, The University of Southern California (1984).

Mock, T.J. and A. Wright, "An Investigation of a Measurement Based Approach to the Evaluation of Audit Evidence," in Nichols, R. and H.F. Stettler, eds., Auditing Symposium V, Proceedings of the 1980 Touche Ross-University of Kansas Symposium on Auditing Problems (May 22-23, 1980), pp. 61-76.
Mylopoulos, J., and H.J. Levesque, "An Overview of Knowledge Representation," in Brodie, M.L., J. Mylopoulos, and J.W. Schmidt, eds., On Conceptual Modeling: Perspectives From Artificial Intelligence, Databases, and Programming Languages (Springer-Verlag, 1982).

Newell, A., J. Shaw, and H. Simon, "Elements of a Theory of Human Problem Solving," Psychological Review, 65 (1958), pp. 151-166.

Newell, A., and H.A. Simon, "GPS: A Program That Simulates Human Thought," in H. Billing, ed., Lernende Automaten (1961).

Newell, A. and H.A. Simon, Human Problem Solving (Prentice-Hall, 1972).

Nisbett, R.E., and L. Ross, Human Inference: Strategies and Shortcomings of Social Judgment (Prentice-Hall, 1980).

Nisbett, R.E., and T.D. Wilson, "Telling More Than We Can Know: Verbal Reports on Mental Processes," Psychological Review, 84 (1977), pp. 215-251.

Quillian, M.R., "Semantic Memory," in M. Minsky, ed., Semantic Information Processing (MIT Press, 1968).

Schandl, C., Theory of Auditing (Scholars Book Company, 1978).

Schank, R.C. and R.P. Abelson, Scripts, Plans, Goals, and Understanding (Lawrence Erlbaum, 1977).

Schank, R.C. and P. Childers, The Cognitive Computer (Addison-Wesley, 1984).

Schank, R.C. and L. Hunter, "The Quest to Understand Thinking," Byte (April, 1985), pp. 143-155.

Selfridge, M., and S. Biggs, "GCX, A Computational Model of the Auditor's Going-Concern Judgment," paper presented at the USC Symposium on Expert Systems and Audit Judgment (February, 1988).

Shpilberg, D. and L.E. Graham, "Developing ExperTAX: An Expert System for Corporate Tax Accrual and Planning," Auditing: A Journal of Practice and Theory (Fall, 1986), pp. 75-94.

Simon, H.A., "Cognitive Science: The Newest Science of the Artificial," Cognitive Science (January-March, 1980), pp. 33-46.

Sneed, F., Parallelism in Two Disciplines (Arno Press, 1978).

Sowa, J.F., Conceptual Structures: Information Processing in Mind and Machine (Addison-Wesley, 1984).

Srinidhi, B., and M. Vasarhelyi, "Auditor Judgment Concerning Establishment of Substantive Tests Based on Internal Control Reliability," Auditing: A Journal of Practice & Theory (Spring, 1986), pp. 64-76.

Steinbart, P., "The Construction of an Expert System to Make Materiality Judgments," Unpublished Dissertation (Michigan State University, 1984).

Steinbart, P., "The Construction of a Rule-Based Expert System as a Method for Studying Materiality Judgments," The Accounting Review (January, 1987), pp. 97-116.

Toba, R., "A General Theory of Evidence as the Conceptual Foundation in Auditing Theory," The Accounting Review (January, 1975), pp. 7-24.

Trotman, K.T., and P.W. Yetton, "The Effect of the Review Process on Auditor Judgments," Journal of Accounting Research (Spring, 1985), pp. 256-267.

Tversky, A., and D. Kahneman, "The Framing of Decisions and the Psychology of Choice," Science (January 30, 1981), pp. 453-458.

Ward, B.H., "Discussant's Response to 'An Investigation of a Measurement Based Approach to the Evaluation of Audit Evidence,'" in Nichols, R. and H.F. Stettler, eds., Auditing Symposium V, Proceedings of the 1980 Touche Ross-University of Kansas Symposium on Auditing Problems (May 22-23, 1980).

Waller, W.S., and W.L. Felix, Jr., "The Auditor and Learning From Experience: Some Conjectures," Accounting, Organizations and Society (1983).
Waller, W.S., and J. Jiambalvo, "The Use of Normative Models in Human Information Processing Research in Accounting," Journal of Accounting Literature, Vol. 3 (1984), pp. 201-223.

Waterman, D., A Guide to Expert Systems (Addison-Wesley, 1986).

Williams, J.D., "An Investigation of Auditors' Decision Processes in a Risk Assessment During the Planning Stage of an Audit," Unpublished Dissertation (Texas A&M University, 1987).

Willingham, J., and W. Wright, "Development of a Knowledge-Based System for Auditing the Collectability of a Commercial Loan," research proposal for the Peat, Marwick, Mitchell Foundation (1985).

Winston, P.H., Artificial Intelligence (2nd ed.) (Addison-Wesley, 1984).