THE INVESTMENT DECISION--AN ANALYSIS USING VERBAL PROTOCOLS

By

Matthew James Anderson

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Accounting

1982

ABSTRACT

THE INVESTMENT DECISION--AN ANALYSIS USING VERBAL PROTOCOLS

By Matthew J. Anderson

One of the major issues confronting accounting researchers, policymakers, and practitioners is the problem of information use. How information is to be used is one of the major criteria to be considered when the issues of what and how much information to produce are addressed. Knowledge of how information is used effectively circumscribes the feasible solutions to the above issues.

This research addresses the issue of information use using the methodology of verbal protocol analysis. The study involves the examination of a prospectus by four professional and four nonprofessional subjects. The objective is to allow subjects to do the analysis in a situation which stresses realism. The analysis involves using the verbal protocols to develop behavior graphs and elementary processing models.

Results of the study suggest that professionals as a group differ from nonprofessionals. This assessment is addressed in terms of strategies employed, operators and time used, information addressed, and processing behavior. However, the results suggest that intra-group differences are also large.

ACKNOWLEDGEMENTS

I would like to acknowledge my committee members' assistance and guidance to me during this project: Dr. Stephen Buzby, Dr. Sarah Sprafka, Dr. William McCarthy, and Dr. Carl V. Page. Without their invaluable assistance, this project could not have been completed.

I would be remiss if I did not acknowledge the kind assistance of Dr. Rene Bouwman, University of Oregon. Dr. Bouwman's work served as a springboard for much of the analysis. I also wish to thank the Accounting Department of Michigan State for its support, along with the Ernst and Whinney Foundation and the American Accounting Association.

To Ginger Noel, Marie Dumeney, and others, I owe many thanks for typing and editing assistance. Finally, I wish to thank my wife, Connie, for her patience during the last few months.
Her support made the task much easier.

TABLE OF CONTENTS

CHAPTER I: INTRODUCTION .................. 1
1.1 Implications for Accounting ............ 8
1.2 Scope and Limitations ............... 11
CHAPTER II: LITERATURE REVIEW ............... 14
2.0 Introduction ................... 14
2.1 Pertinent Literature ............... 14
CHAPTER III: METHODOLOGY--CONCEPTS AND CONSTRUCTS ..... 22
3.0 Introduction ................... 22
3.1 Problem Solving Approach ............. 22
3.2 The Human Information Processing System ...... 24
3.3 The Problem Space ................. 28
3.4 Task Analysis--The Conceptual Viewpoint ...... 31
3.5 General Study and Sample Demographic Issues .... 36
CHAPTER IV: EXPERIMENTAL RESULTS: DERIVATION OF MODELS .. 39
4.0 Introduction ................... 39
4.1 Subject Models .................. 42
4.2 Coding ...................... 46
4.3 Problem Space ................... 52
4.4 The Element Representation: Derivation of Operators 56
4.5 The Problem Behavior Graph (PBG) .........
4.6 Reliability and Validation Procedures ....... 64
CHAPTER V: ANALYSIS & REVIEW OF THE MODELS & PROCESSING BEHAVIOR 66
5.0 Introduction ................... 66
5.1 Hypothesis One: The Assessment of Strategies ... 66
5.2 Hypothesis Two: Information Use Assessment .... 75
5.3 Hypothesis Three: Operator Mix Analysis ...... 86
5.4 Hypothesis Four: Time Use Assessment ....... 90
5.5 Processing Behavior ................ 93
5.6 Model Reliability ................. 101
5.7 Inter-rater Reliability .............. 105
CHAPTER VI: SUMMARY AND FUTURE DIRECTIONS ......... 107
APPENDICES ......................... 114
Appendix I Subject Problem Behavior Graphs and Models 114
Appendix II Reliability Models of S4 ......... 186
BIBLIOGRAPHY ........................ 189

LIST OF TABLES

Table
1 Profile of Subjects Used ............. 40
2 List of Operators Used .............. 50
3 Percentage Agreement Among Topics Addressed Across Subjects 80
4 Time and Operators Employed by Subjects During Processing 87

LIST OF FIGURES

Figure
1 Illustration of a Problem Behavior Graph
2 Topic Segments from the Protocol of S4
3 Problem Behavior Graph of S1
4 Protocol Model of S1
5 Derived Model of S1
6 Problem Behavior Graph of S2
7 Protocol Model of S2
8 Derived Model of S2
9 Problem Behavior Graph of S3
10 Protocol Model of S3
11 Derived Model of S3
12 Problem Behavior Graph of S4
13 Protocol Model of S4
14 Derived Model of S4
15 Problem Behavior Graph of S5
16 Protocol Model of S5
17 Derived Model of S5
18 Problem Behavior Graph of S6
19 Protocol Model of S6
20 Derived Model of S6
21 Problem Behavior Graph of S7
22 Protocol Model of S7
23 Derived Model of S7
24 Problem Behavior Graph of S4-2
25 Protocol Model of S4-2
26 Derived Model of S4-2

CHAPTER I

INTRODUCTION

A significant amount of the recent accounting literature has been concerned with the users of accounting information.
Policy- making bodies, practitioners, and academics have all sought to define the users' role in the process of generating and disseminating accounting information. Most recently, the Financial Accounting Standards Board (FASB), in its conceptual framework project, has begun to address some of the relevant questions on this issue, in- cluding the following: l. What user group(s) ought to be the primary focus of general purpose financial statements? 2. What types of information do users require? 3. For what purpose(s) are financial statements to be used? Any responses to these questions are interrelated, as can be seen from the following quote from the Accounting Principles Board (APB) State- ment No. 4 [1970, para. 43-47}:1 Financial accounting information is used by a variety of groups and for diverse purposes. The needs... of users determines the type of information required.... Information prepared for a particular purpose cannot be expected to serve other needs well.... Improving financial accounting requires...research on...user needs, on the decision processes of users and on the information. 1The quoted language appears to have been taken from the American Accounting Association's, A Statement of Basic Accounting Theory, [1966. pp. 19-21, 63]. l The position taken by the FASB on these questions is outlined in Statement of Financial Accounting Concepts (SFAC) No. l [l978, para. 32]: The objectives... narrow (their) focus to investors' and creditors' primary interest in the prospect of receiving cash from their investment in or loans to business enterprises.... (They) finally focus on information about an enterprise's economic resources, the claims to those resources, and changes in them, . that is useful in assessing the enterprise's cash flow prospects. The preceding position is essentially similar to that adopted by the APB in its Statement No. 4. The FASB does substantially depart from its predecessor policy- making bodies in that it attempts to characterize the users in particular ways. This can be shown by the following quote from SFAC No. l Ipara. 36]: (Users) understand to varying degrees the business... environment...and related matters. Their understanding of financial information and the way and extent to which they use and rely on it...may vary greatly.... Its use can be learned, however, and financial reporting should provide information that can be used by all -- nonprofessionals as well as professionals -- who are willing to learn to use it properly. One possible interpretation of the above statement is that the FASB assumes that some users (presumably professionals) know how to use information properly and other users (nonprofessionals) do not. Such an interpretation is supported by the following quote from a recent literature survey by Dyckman, et al, [l978, p. 76] regarding the impact on users of alternative reporting methods: More sephisticated users...tend to rely more heavily on the accounting data supplied to them in financial reports.... Unsophisticated users...rely more on the nonaccounting data in the financial reports. Sophisticated users are more likely to be able to perceive economic realities underlying alternative reporting methods. One implication of the above is that different users will tend to use different processing models in evaluating financial information. The assumption of differential understanding and processing of information in the domain of the financial statement and investment analysis is addressed in this study. 
The objective of the research is to compare the problem-solving behavior of expert subjects (i.e., professional investors) with that of relatively naive subjects (i.e., nonprofessional investors). Prototypical subjects in each class of investors will be analyzed and models built on an individual basis. The models will be analyzed in terms of the following hypotheses:2 H1: Subject classes will differ in search strategies employed. Research by Bouwman [l978, l980] indicates that student subjects processed data in a sequential fashion. In contrast, professional subjects used a recursive processing strategy, moving back and forth in the data set. While the dichotomy between subject groups in this project is not as stark as that between Bouwman's subjects, the same general behavior differences are expected. This is primarily due to the fact that, presumably, professionals do analyses such as that in this study much more often. H2: Subject classes will differ in amount of information attended to during the analysis. Research employing both linear modeling and process tracing 2These hypotheses will not be tested in the usual sense and either accepted or rejected. They will serve as guides in the coding and modeling process. Conclusions drawn regarding them will be done on a counting basis only; e.g., this type behavior was observed n times. techniques suggests that the information used by professionals tends to differ from that used by nonprofessionals. Hofstedt [1972] found that professionals tended to use more quantitative data in general and less data as a whole than nonprofessionals (students). Ashton and Kramer [1980] found that cue weight magnitudes differed significantly between such groups, which implies different ranking schemes and information use in decision-making. Elstein, Shulman and Sprafka [1978] also found that experienced clinicians used different information than medical students or interns. H3: Subject classes will differ in operators employed during the analysis. Hypothesis 3 is tased on Bouwman's research [1980]. He found that several types of data manipulations were never engaged in by the nonprofessionals in his study. This would imply a different use of operators by the different classes of subjects. H4: Subject classes will differ in amount of time spent in analysis (both on specific items of information and overall time). The time dimension was tested by Hofstedt:[19721. He found that nonprofessionals use significantly more time than professionals in analysis of annual report data. As used here, a strategy implies "a pattern of decisions in the acquisition, retention, and utilization of information that serves to meet certain objectives" [Bruner, et al, p. 54]. Any differences found in the processing behavior between classes will be held to be tentative evidence that user sophistication is an important variable in the pro- duction of accounting information. The importance of understanding the decision behavior of the users of accounting information has been well documented in the literature. In a recent study, for example, Libby [1975, pp. 476- 477] developed a scenario demonstrating how an information set showing perfect predictability of (and therefore relevance to) an environmental event may not be useful. He points out that failure to consider the cognitive limitations of potential users may lead to ineffective use of the information due to information overload3 or other constraints of the human system. 
Other research has also generally supported the view that the individual user is an important consideration in any judgment drawn concerning the usefulness of information. McGhee, Shields, and Birnberg [1978, p. 682] hypothesized, based on research by San Miguel [1976] and others, that different users of accounting information may require particular information packages in order to demonstrate effective and efficient processing. Research by Payne [1976] (and replicated by Biggs [1979] in the accounting area) indicated that decision-makers tend to change their processing strategies as the information set changes. He found that increasing the environmental complexity by (1) increasing the number of choices available, or (2) increasing the number of information dimensions related to each choice, caused subjects to resort to simplifying heuristics to reduce cognitive load. It can be inferred from these findings that the usefulness of infor- mation cannot be defined without knowledge of the user and his decision process. 3The concept of information overload is generally linked to the theory found in H.M. Schroeder, M.J. Driver, and S. Streufert, Human Information Processing, [1967, pp. 37-40]. Most of the previous research in accounting investigating information user behavior has followed the Brunswik Lens or Bayesian paradigms4 (see Slovic and Lichtenstein [1971], Libby [1979a, 1979b], Ashton [1974, 1979], Swierenga, et. a1., [1978]). It is generally accepted that models developed under these paradigms are paramorphic [Hoffman,;19601 in nature. That is, they are very effective in an outcome or predictive sense, but they are not explanatory. Different underlying behaviors may be well fitted by the same surface model. In fact, Dawes and Corrigan [1974] demonstrate that linear models will be good predictors in any situation in which there is (l) a monotone relationship between the criterion and the predictors and (2) there is error in the measurement of the criterion or predictor variables or both. One proposed use of these models is as a normative guide for users in a given decision-making context. However, recent research has challenged this view in some cases. For example, in their review of the information processing literature, Slovic, Fischoff and Lichtenstein [1977, pp. 3, 8] cited findings demonstrating that (1) people probably do not make decisions in a Bayesian fashion, and (2) strategies other than linear additive ones are often employed. In addition, the research indicates that even teaching the users optimal information use is difficult and often the learned behavior does not generalize to new contexts [p.13]. One implication of these results 4For reviews of this literature, see Libby and Lewis [1977]; Dyckman, Gibbins and Swierenga [1978]; and Snowball [1980]. is that in order to better understand human behavior, other techniques must be employed. One such technique is protocol analysis,or process tracing, the proposed methodology of this study. The methodology involves having a subject verbalize or think aloud as a task is performed. These verbalizations or protocols5 are tape recorded and transcribed. The protocols are then analyzed in a manner consistent with the theory outlined in Newell and Simon [1972]. An ultimate objective of most process tracing research is a specific mechanistic model of the individual human information pro- cessor in specific task environments. 
This modeling is based upon the following line of reasoning: people as information processors have several characteristics which can be closely approximated by machines —- especially the computer. For example, people have memories (of varying capacities). They have the ability to react adaptively to changes in their environment. They are able to learn, to think, and to perform tasks in a systematic fashion. Most of these abilities and traits can be modeled or performed by the computer. The computer has a memory (limited capacity). It can be programmed to react adaptively (e.g., conditional actions). It can also be programmed to "think" and “learn" from its experiences. It is, without question, much faster, more efficient, and more reli- able than humans, both temporarily and across time. In this study, the methodology is used to assess the processing behavior of professional and nonprofessional subjects analyzing a 5A protocol is literally an original record or first draft of any output. prospectus. Subjects were allowed as much or as little time as they desired to solve the problem. They were instructed to do whatever they normally did when analyzing a potential investment, assuming they were interested in the one at hand. The process tracing approach to modeling behavior, though currently viewed as an exploratory procedure, is more and more being recognized as a potentially powerful analytical device. Though generally nonstatistical in nature, the methodology offers several validation procedures for modeling results. The most common of these can be interpreted as a type of face validity argument. That is, if such models do indeed represent what they purport to represent, then the observed processes of the models should approximate the processes of subjects doing the same tasks. This is the essential premise of what is referred to elsewhere in this paper as the Turing test [Clarkson, 1962]. 1.1 Implications for Accounting As stated earlier, research has amply demonstrated the need to consider users in describing the usefulness of accounting information. However, much of this research has been input-output analyses and either explicitly or implicitly assumed that decision-makers behaved in particular ways if observed decisions were adequately predicted by surrogate models. The question of whether the underlying behavior fit model assumptions generally was not addressed. Individual actions by subjects tended to be masked by averaging techniques used. The result of such procedures is that while assessments can be made of different weighting policies or perhaps of the extent of information use, the fundamental questions of how and why weighting policies differed across decision-makers remains unanswered. That is, one cannot really say how or why particular subjects arrived at their weighting policies. As a result, attempts to change or correct decision behaviors have been relatively unsuccessful [Slovic, Fischoff and Lichtenstein, 1977]. Newell and Simon [1972] have demonstrated that the task itself is a major determinant of observed behavior. In this study, observed behavior should be closely linked to the information set and task instructions. Therefore, findings of differences in information processing (assuming the differences are reliable and valid) would present tentative evidence relative to questions such as the following: 1. What is the nature of the divergence in behavior between the two subject classes? 2. When or where does the divergence in behavior tend to occur? 3. 
Can the divergence be related to the information set? 4. Assuming the professionals exhibit desirable behavior, how can nonprofessionals become better information processors? The responses to these questions may have implications for the formation of the information set itself. Previous research utilizing both linear models and process tracing methodologies has examined differences in information processing between groups possessing varying levels of expertise in particular domains. The general nature of these findings, as related to the above questions, is as follows: 1. Professionals (CPA's) may have greater self-insight, are more linearly predictable, and differ in terms of cue weight magnitudes when compared to nonprofessionals (students) [Ashton and Kramer, 1980]. 10 2. Professionals tend to use more quantitative data and spend less time in analysis than nonprofessionals [Hofstedt, 1972]. 3. Professionals tend to process information in a recursive fashion, often switching back and forth between cues in the data. They also seem to possess a type of mental checklist which they apply to the data. This is contrasted with non- professionals (students), who tend to work in a sequential, front-to-back fashion, without such a checklist (Bouwman [1980]; Stephens, Shank, and Bhaskar [1980]; Elstein, Shulman, and Sprafka [1978]). 4. While some variance is usually exhibited across professionals, that from professionals to nonprofessionals is usually far greater [Bouwman, 1980]. The above results serve as useful guides in evaluations of how information may be processed in several contexts. The aim of this study (as stated by the hypotheses) is to assess these issues in a financial context which emphasizes external validity. To the extent that these findings are relevant to (and replicated by) the analysis in this study, then such differences may imply several points at the individual level (as contrasted with the aggregate market) of information processing. For example, the order of present- ation may become increasingly important as expertise decreases. Where a piece of information is may determine whether or not it will be attended to. Additionally, it may be inferred from such studies that particular combinations of data may be more appropriate for efficient ll processing. Finally, there is the pedagogical concern with how to change behavior. More detailed knowledge of how processing is done should permit an enhanced ability to change it. Such knowledge may also be used to design more effective man-machine systems. A finding of no difference has fewer implications. Again, assuming the results are valid and reliable, it may be argued that such a finding is tentative evidence that the information set generally presented in such accounting contexts is not taxing the cognitive abilities of users. Another interpretation given the sample size is that the design simply did not allow investigation of a diverse enough population. 1.2 Scope and Limitations The research question in this study is addressed in the following way: given a fixed information set, it is hypothesized that differ- ential sophistication of users will result in differential perceptions of task complexity and, therefore, different processing models will be used. It is assumed that the task is complex enough to present a realistic and representative problem to the processors. The relative efficiency and efficacy of the processor's model are not addressed in this research. 
This study will not attempt to judge the information set itself. The information set chosen is assumed to be realistic and representa- tive. Other information sets could have been chosen. Additionally, there is no control over information use within the information set itself. In fact, one of the expected ways that programs may differ is in the amount of information used. 12 The concern here is with individual, not group, behavior. No attempt will be made to find average processes of the particular subject classes. While comparative statements may be made or particu- lar subjects used as prototypes, no attempt will be made to develop models of an "average professional" or "average nonprofessional" subject at this stage of the modeling process. Due to the limited sample sizes to be used, the generalizability of the results is limited. However, to the extent it can be demon- strated that some task characteristics are invariant, (i.e., that other processors will also have to deal with those characteristics to process similar problems), it is expected that observed behaviors should be repeated in such future contexts by processors possessing similar abilities to those in this study. It should be noted that recent research, especially that of Nisbett and Wilson [1977] has questioned the ability of subjects to relate, in a reliable manner, their thoughts relative to performance of a task. In fact, Nisbett and Wilson conclude that many of the mental processes are inaccessible to conscious control, hence cannot be reported on. While it is generally held that as particular behaviors become well learned they are less likely to be reported on in any intro- spective verbal report, Nisbett and Wilson's work has limited applic- ability when applied to process tracing work. First, Nisbett and Wilson reported on retrospective verbal reports. Concurrent verbal reporting is the mode used by current protocol researchers [Ericcson and Simon, 1979]. Secondly, none of the studies cited by Nisbett 13 and Wilson were specifically designed to test verbal reporting. Con- clusions drawn about verbalizations generally come from debriefing sessions, which generally lack the controls of the experimental session itself [Ericcson and Simon, 1979]. CHAPTER II LITERATURE REVIEW 2.0 Introduction The technique of process tracing has been concurrently developed in the disciplines of psychology and artificial intelligence/engin- eering systems. Though the research in each area has proceeded in different directions (behavior analysis versus simulation models, respectively), in terms of emphasis, this conjunction of knowledge for development purposes is logical if intelligence is defined in terms of human cognition. Specific applications of the technique for research purposes has covered areas as diverse as physics and medicine. This chapter is concerned with the use of process tracing in accounting contexts. The implications of prior research for this study is addressed. Finally, a synopsis of other applications of the methodology is provided. 2.l Pertinent Literature Most of the research in the process tracing area is based on the definitive theory outlined in Newell and Simon's work, Human Problem Solving [1972]. The methodology has been applied to a wide variety of what Ungson, Braunstein and Hall [1981] refer to as "ill-structured" problems, as well as to well-defined ones. Kleinmutz [1968] modeled the decision processes used by clinical psychologists in the course of evaluating patients. 
Montgomery [1976] used the methodology to test 14 15 Tversky's finding of intransitivity of preferences by subjects evaluating simple gambles. Newell and Simon [l972]described problem- solving behavior in logic, cryptarithmetic and chess. Payne, Carroll and Braunstein [1978] investigated the decision processes involved in apartment selection. Bhaskar and Simon [1978] used protocol analysis to assess problem-solving behavior in the domain of engineer- ing thermodynamics. There are numerous other works investigating such domains as algebra, consumer decision-making in various contexts, education and learning, and medical diagnosis of various types of disease entities [Simon, 1979]. The earliest work in the financial area was done by Clarkson [1962]. Clarkson studied the decision processes used by a bank trust investment officer in the course of selecting portfolios for clients. The task chosen was important in that the behavior of the officer was well constrained by both legal fiduciary and firm-specific considera- tions. As a result, the set of possible search lists that the officer might have used in portfolio selection was reduced to a fairly pre- dictable, probable set. The problem reduced to looking at the desired needs of the client, e.g., current income versus long-term growth, or some combination, over a set of firms surviving the screen formed by the legal and firm-specific constraints. Clarkson was able to produce a computer simulation model of the trust officer which was capable of selecting portfolios of five to eight stocks which, at most, 6 differed from those of the officer by two. The basic model consisted 6The computer also tended to differ from the officer in dollar amounts invested in each firm. The differences tended to be quite small, generally less than $100, out of portfolios of $5,000 or more. 16 of a discrimination net (decision tree) which looked for presence- absence of selected criteria in terms of building suitable lists of investment vehicles. From such lists, the model then matched the goals of the investor subject with the available investment instru- ments. Clarkson validated his model by means of several tests. Model output was compared to random number selection processes and to naive models. In addition, Clarkson used a modified Turing test, suggested by Newell and Simon. The test is based on the following premise: given that a model purports to be the representation of some process or processor, then the model should perform as the process or processor. A neutral observer should be unable to distinguish the model output and processes from those of the human counterpart, down to some pre- scribed level of detail. Clarkson notes that the model and human output need not be in the same order; merely the overall content should be equivalent. The model performed well in all of these procedures. However, Clarkson did not cross-validate his model, nor was its test- retest reliability assessed. Both of these criticisms have been applied to much of the work done in the process tracing area.7 Accounting researchers have used protocol analyses in two basic areas -- model building and hypothesis testing. Stephens [1978] did preliminary work which combined both areas in a study of decision processes used by bank loan officers making credit decisions. 
He found tentative evidence that bank loan officers tend to process financial information in the same ways across companies, regardless of changes in the lines of business or accounting techniques. That is, the officers tended to use the same general program for problem solution in spite of changes in accounting variables or industries.8 The study was designed in terms of specific hypotheses concerning the issues. The hypotheses were not accepted or rejected in the general statistical sense. Support for, or the lack of support of, each hypothesis was developed by demonstrating protocol evidence relative to each specific hypothesis.

7Much of this criticism stems from the fact that the methodology was developed primarily in the computer science/artificial intelligence area, not psychology. Validation procedures reflect this.

8This result would seem to conflict with Newell and Simon's contention that the program adopted is a function of the problem itself. However, Stephens' firms were very similar in size and industry. That is, inclusion of firm(s) from very different industries and/or of different size may have resulted in different processing approaches.

Bhaskar and Dillard [1979a, 1979b] examined two problem areas in the accounting domain. In their 1979a study, they investigated the concept of knowledge representation in the accounting domain. They sought to show that knowledge is organized by accountants in particular groupings based on relationships between assets and equities, in conjunction with certain revenue recognition schemes. They were able to demonstrate that their proposed representation was sufficient to delineate required knowledge and procedures at the level necessary to solve problems taken from an intermediate accounting text. They also demonstrated the use of a semi-automatic protocol scoring machine which improves the reliability of operator assignment across scorers.

In their 1979b study, Bhaskar and Dillard essentially extended their previous study. They addressed the problem of becoming skilled in a domain requiring non-trivial amounts of knowledge, or in their terminology, one that is semantically rich. Their preliminary finding indicates that skill acquisition may be a process of unit building. This is similar to the chunking hypothesis in memory research [Newell and Simon, p. 792]. That is, as an individual becomes more familiar with data it is remembered in larger and larger cohesive units. For example, a zip code is generally one piece or "chunk" of information, not five. Bhaskar [1978] has also developed a computer model which explores the decision processes observed in the solution of certain classes of problems in the cost accounting domain.

Bouwman [1978, 1980] has directly addressed the problem of financial statement analysis. Bouwman assumed that the decision process used in financial analysis tasks was similar to the type of analysis done by medical diagnosticians when confronted by a patient. He constructed case studies, each containing a particular "problem." Subjects were then given a fixed amount of time to find the hidden "ailment" in each case study firm. Using CPA's and students, he developed computer models of the financial diagnostic process. Bouwman validated his model by means of the comparison of model and subject output. Additionally, he demonstrated that the model approximated to a high degree the intermediate stages and processes that the subjects engaged in along the solution path.
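To illustrate what such a comparison of model and subject output involves, consider the following sketch, written in Python purely for illustration. The topic labels and the simple proportion-of-agreement score are invented for this example and are not Bouwman's actual procedure; they merely show one way the match between a model's sequence of intermediate conclusions and a subject's verbalized sequence could be scored.

def proportion_agreement(model_output, subject_output):
    # Share of positions at which the model and the subject produced the
    # same conclusion; unmatched tail positions count as disagreements.
    longest = max(len(model_output), len(subject_output))
    matches = sum(1 for m, s in zip(model_output, subject_output) if m == s)
    return matches / longest

# Hypothetical conclusion sequences for one case-study firm.
model_run   = ["liquidity weak", "margins falling", "inventory buildup", "reject"]
subject_run = ["liquidity weak", "inventory buildup", "margins falling", "reject"]
print(round(proportion_agreement(model_run, subject_run), 2))   # 0.5

More refined measures, for example ones that ignore order in the way Clarkson's Turing-test criterion permits, could be substituted without changing the basic idea.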
Bouwman effectively established that financial statement analysis is a semantically rich domain that is amenable to the process-tracing methodology. The task is sufficiently well-structured that strategies employed by processors can be delineated, analyzed, and modeled. Biggs [1979] and Biggs and Mock [1978] have used the technique for the purpose of hypothesis testing. In his 1979 study, Biggs sought to categorize the underlying process model of the decision processes utilized by financial analysts in the course of evaluating the earning 19 power of several companies. He sought to show that the processes could be categorized in terms of the conjunctive, additive difference, additive compensatory or elimination by aspects models, or some combination of them. His results, which were consistent with those reported by Payne [1976], indicated that the decision-makers could be classified in terms of the models or combinations of them. Biggs also tested the reliability of his protocol scoring technique across coders using simple proportion of agreement between scorers and also by using the Kappa Coefficient, which removes the effect of chance agreement between scorers. Biggs and Mock used the technique to analyze the decision processes used by auditors in the course of making audit program decisions. Using a task validated as to its realistic nature by Mock and Turner [1979], they examined the information search patterns employed by four senior auditors. They found that the auditors used two essential types of information search strategies -- systematic and directed (essentially a breadth-first versus depth-first analysis). Further, information used in the making of the task decision was quite similar if one controlled for the amggnt_of information used on a particular cue dimension. In a 1980 study, Shields used a process tracing technique to analyze the information search strategies employed by senior managers reading cost variance reports. The study utilized information boards and manipulated the complexity of the reports. The analysis was done relative to four hypotheses, and the search patterns were categorized using the same models as Payne [1976]and Biggs [1979]. The basic 20 hypothesis under test in the study was that processing would change as task complexity, as represented by the number of units presented and information per unit, increased. The null hypothesis of no effect was not rejected for any case, even though effects were in the appropriate direction. Shields posited that this finding may possibly have been due to the familiarity of his subjects with such reports. That is, they no longer needed to search to make decisions. Finally, it should be pointed out that some researchers have cautioned that the use of eXpli- cit search techniques such as information boards, by requiring an act that subjects normally would not undertake (walking back and forth), may themselves serve to alter decision processes [Olshavsky, 1979]. Process tracing research to date in accounting has generally tackled well-defined problems. The findings of these studies indicate that accounting data is handled in fairly predictable ways by processors. Essentially, they serve to establish that the use of process tracing techniques in accounting contexts is feasible. However, they do demon- strate that care must be taken since in general, statistical controls are absent. The usual solution to this difficulty itself involves the choice of issue to address. 
By tackling well-defined questions, routes to solution were easily assessed for veridicality. The problem itself served as a control for the behavior assessed. The problem addressed in this study is not generally well-defined. As Ungson, Braunstein, and Hall [1981] point out, this is generally the case in realistic settings. This study involves what can be characterized as a case study assessing a security investment decision. This approach is advocated by Payne [1980] and Ebbesen and Konecni [1980]. The objective is to stress the external validity of the study.

As alluded to earlier, most studies in the process-tracing area have been criticized for the lack of cross-validation and reliability procedures. One reason for this deficiency is the basic Freudian premise of the theory that human behavior at the individual level is somewhat deterministic [Newell and Simon, 1972, p. 10]. Another is the generally large time requirements for process-tracing research. There is also the previously cited belief that the validation procedures established at present, such as the Turing test, are adequate. This study attempts to provide partial cross-validation and retest reliability assessments.

CHAPTER III

METHODOLOGY--CONCEPTS AND CONSTRUCTS

3.0 Introduction

This chapter is concerned with the methodology as used in this study. The chapter is organized as follows. First, a general description of the technique and some associated problems is provided. The chapter then proceeds with a discussion of the human information processing system, a task analysis, and other issues related to the study, such as sample selection.

3.1 The Problem Solving Approach

The methodology proposed in this study is verbal protocol analysis. The technique involves having subjects think aloud as they perform a task. These "think aloud" protocols are recorded and the transcript coded. The coded protocols are used to develop a problem behavior graph (PBG) for the subject. The PBG is, in turn, used to develop the model of the decision-making process. The resultant model is proposed as explanatory regarding the process. To the extent that one is able to show the limits of the human processor, and to demonstrate that the behavior model conforms to the proposed limits, one can say that the model is sufficient in describing the behavior, given the limitations [Simon, 1979].

The PBG is a dynamic representation of the subject solving the problem. It consists of a series of knowledge states about the problem and operators which change a given knowledge state. This is depicted in Figure 1 [Newell and Simon, p. 173].

[Figure 1. Illustration of a Problem Behavior Graph: knowledge states (U1, U2, ...) linked by operator applications (O1, O2, ...).]

The result of applying an operator O is the new knowledge state U at the head of the arrow. Return to a knowledge state is depicted by a solid line from the particular state to a node below it. Time in the PBG runs from left to right, then down. Movement in this PBG is from U1 to U2 to U3, back to U2, and then to U4.

The coding process consists of two general phases: (1) breaking the transcript of protocols into short phrases and (2) encoding the phrases in terms of operators, knowledge states, etc. Each phrase represents "a naive assessment of what constitutes a single task assertion or reference" [Newell and Simon, p. 166]. The phrasing process is designed to facilitate later references. The extraction of meaning from the protocols depends on the task itself.
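To make the PBG construct concrete, the following minimal sketch (written in Python purely for illustration; the state labels and operator names simply restate Figure 1 and are not drawn from any subject's protocol) records each operator application as a move between knowledge states and replays the moves to recover the order in which states are attended to.

# Minimal, hypothetical encoding of the problem behavior graph in Figure 1.
# Each entry records the state acted upon, the operator applied, and the
# resulting state; a repeated "from" state marks a return to an earlier node.
pbg_moves = [
    {"from": "U1", "operator": "O1", "to": "U2"},   # U1 --O1--> U2
    {"from": "U2", "operator": "O2", "to": "U3"},   # U2 --O2--> U3
    {"from": "U2", "operator": "O3", "to": "U4"},   # return to U2, then --O3--> U4
]

def states_visited(moves):
    # Replay the moves in order (left to right, then down) and list the
    # knowledge states in the order they are attended to.
    visited = []
    for move in moves:
        if not visited or visited[-1] != move["from"]:
            visited.append(move["from"])   # a return to an earlier state
        visited.append(move["to"])
    return visited

print(states_visited(pbg_moves))   # ['U1', 'U2', 'U3', 'U2', 'U4']

Which states and operators the phrases of an actual transcript map onto is, again, dictated by the task.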
For this reason, it is important that the researcher has explicit knowledge of the task. The extraction of meaning from protocols and the coding process is not standardized. Few well-defined procedural rules have been developed. Since, in addition, there are generally few statistical controls applicable to the technique, researchers have been concerned 24 about the reliability of findings. However, in studies reporting reliability measures, intercoder reliabilities have been comparable to those reported in other types of behavioral research. Biggs [1979] reported intercoder reliabilities averaging .67 and .72. In their 1978 study, Payne, Braunstein, and Carroll cite two studies reporting reliabilities averaging .87 and .95. They conclude that additional research is needed in this area and propose that researchers try approaches such as the following: 1. Automating the coding process as much as possible. 2. Using different techniques to measure the same behavior such as protocols and eye movement data. 3.‘ Clearly stating hypotheses in conjunction with building PBG's. In this study, subjects were given unlimited time to study a prospectus and make a decision about whether they would or would not invest in the particular company. The only constraint was that the subjects had to verbalize the entire process. The objective was to make the decision process conform as closely as possible to the one that the subject normally uses. Work done by other researchers indicates that the decision process should take from one to five hours [Bouwman, 1979; Biggs, 1979]. This study employed two coders. 3.2 The Human Information Processing System This paper is concerned with problem-solving in the investment domain. As used here, a problem exists whenever an individual wants something and does not know immediately how to get it [Newell and 25 Simon, 1972, p. 72]. Problem-solving, then, is the directed activity related to resolution of the want expressed in the preceding definition. The theory proposed by Newell and Simon rests on four essential propositions [Newell and Simon, 1972, pp. 788-789], as follows: 1. A few, and only a few, gross characteristics of the human information processing system (IPS) are invariant over task and problem solver. 2. These characteristics are sufficient to determine that a task environment is represented (in the IPS) as a problem space, and that problem-solving takes place in a problem space. 3. The structures of the task environment determine the possible structures of the problem space. 4. The structure of the problem space determines the possible programs that can be used for problem-solving. Some of the points raised above can be illustrated by the following simplified example: Calculate ending inventory given the following: Beginning Inventory: $100 Purchases: $500 Cost of Sales: $400 To solve this problem requires the following: 1. Reading ability--both syntactic and semantic skills. 2. Knowledge of inventory and relationships: a method. 3. Mathematical ability--algebra. The subject must be able to read the information provided and under- stand the question asked. Knowledge of the relationship between 26 beginning inventory, purchases, cost of sales and ending inventory is essential for solution. The processor must be able to retain, either in short-term or external memory, the relationships above long enough to manipulate them to obtain a solution. While the cognitive limits of the individual is not addressed, the necessary elements for solution can be inferred. 
The above solution procedures (including substitutions, additions, etc.) together comprise the neces- sary components of a problem space. The invariant features of the human information processor (HIP) are the following [Newell and Simon, pp. 791-792]: 1. The size, access characteristics and read-and-write times of the various human memories -- long—term memory (LTM), short-term memory (STM) and external memory (EM). 2. The HIP system is essentially serial in nature, and processes information at fixed rates. 3. The organization of the problem-solving process (the program used) is production-like in character, and goal oriented. Functionally, STM consists of the set of symbols currently being attended to. Its capacity is about five to seven chunks of information. Its read or access time is virtually immediate. LTM has virtually infinite capacity and is generally held to be a node link structure with an index. Its character is associative. That is, structures are related in various ways. Remembering Dick may evoke memories of Jane and Sally and Spot...till the entire family structure is complete. The read time of LTM is on the order of a few hundred milliseconds. The write time is about 5K to 10K seconds per chunk, where K is the 27 number of familiar subpatterns in the new chunk. This write time (fixation in LTM) is the main reason that Newell and Simon [p. 794] hypothesize that little learning occurs in problem-solving. EM consists of those things in foveal view (being looked at) or that one has otherwise retained (such as a sheet of formulas) or written down. The write time of EM is about a second for overlearned symbols, but increases drastically as familiarity decreases. The access time depends on whether one knows exactly where what one is looking for is located. Its capacity is limited by the size of the instrument used, such as the size of a sheet of paper for taking notes during a lecture. People tend to process information one piece at a time, especial— ly as complexity increases (an exception is routinized processes). This is contrasted with parallel processing, in which two (or more) items may be simultaneously processed. Formally, a processor is serial in nature if the time to solution of problems is proportional to the number of problems. To test this assertion, one might simul- taneously attempt to multiply 293 by 37 while also multiplying 691 by 72. Additionally, the rate at which such processes can be carried out is invariant. A production is a system which consists of two types of state- ments: conditions and actions. If the conditions are satisfied, the action occurs. The production may be conjunctive or disjunctive in nature. In the conjunctive case, all conditions must be satisfied for the action to occur; in the disjunctive case, the action occurs if any one of the conditions is met. 28 Goal-directed behavior in the human IPS implies some type of test structure to determine if the goal is being met. Further, it implies that the behavior of the processor will have some rational relationship with the goal. Patterns should exist in the trace which are discernible and explainable by the data and goal structure implied. As used here, rational means that if the goal is accepted, relevant activity should be directed toward reducing differences between the present state and the goal state. In general, the theory posits that the human can be described as an adaptive production system whose problem—solving occurs in a problem space. 
The problem space is the internal representation by the problem solver of all relevant aspects of the problem. It includes not only the information heeded by the subject, but also all possible realistic solution paths the subject may think about, reject or otherwise have available. This implies that some general knowledge about the problem must be assumed. Thus, the problem space will consist of some set of methods, including operators and their related operands, or other heuristics designed to achieve the goal at hand. The formal definition of the problem space follows. 3.3 The Problem Space According to Newell and Simon [1972, p. 810], the problem space consists of: l. A set of elements, U, which are symbol structures, each representing a state of knowledge about the task. 2. A set of operators, 0, which are information processes, each producing new states of knowledge from existing states of knowledge. 29 3. An initial state of knowledge, u, which is the knowledge about the task that the problem-solver has at the start of problem-solving. 4. A problem, which is posed by specifying a set of final, desired states, G, to be reached by applying operators from O. 5. The total knowledge available to a problem-solver when he is in a given knowledge state, which includes (ordered from most transient to most stable): (6) (b) (e) (f) Temporary dynamic information created and used exclu- sively within a single knowledge state. The knowledge state itself--the dynamic information about the task. Access information to the additional symbol structures held in the LTM or EM (the extended knowledge state). Path information about how a given knowledge state was arrived at and what other actions were taken in this state if it has already been visited on prior occasions. Access information to other knowledge states that have been reached previously and are now held in LTM or EM. Reference information that is constant over the course of problem-solving, available to LTM or EM. The problem space, as applied to particular problems or tasks, is not unique. Different subjects may choose to represent the same facts or data structures in different ways. Nonetheless, Newell and Simon's research indicates that in most cases the number of potential problem 30 spaces is quite low. Their work indicates that the problem space selected is likely to be a function of the task itself, not the subject. As an illustration of the concept, one might consider a possible problem space in the domain of financial statement analysis. The elements are data structures attended to during the analysis. These include such things as management, net income, ratios, etc. Note that each datum potentially constitutes a knowledge state. The operators are allowable transformations or data manipulations, e.g., mathemati- cal operations such as addition, division, or comparisons or any act which causes a change in the knowledge state via additional data manipulation. These are likely to be repeated during the task. The extended knowledge state consists of all possible knowledge in the problem space which can be extracted simply by knowing what is in the current knowledge state in STM or foveal view. This knowledge must be accessible by the problem-solver. For example, one might apply an operator which evaluates management. Just knowing whether manage- ment is good or bad will have implications for many or perhaps all previously visited knowledge states. 
The knowledge state consists of what one knows about the task at any point in time, i.e., the contents of STM and what is in foveal view. The final state arrived at is the decision point. The solution path is the route taken to the decision point. Strictly speaking, it does not include wrong turns taken in the solution process. Perhaps a more intuitive example in accounting is the evaluation of accounts receivable as part of the overall assessment of internal control. Here, the invoices and company personnel are the elements. j. 31 Operators would be possible tests--i.e., inquiring, observing, scan- ning or walk through of documents, etc.--compliance tests. Task invariants are those things which any audit program evaluating this aspect of internal control would be expected to contain--accounts receivable, the audit tests, the planned level of assurance desired. There is also some initial knowledge state which depends on the past audit relationship and where the evaluation of accounts receivables fits in the current evaluation of internal control. The goal state is related to the judgment to either rely or not rely on the internal control system of the client, given the desired level of audit assur— ance. The above description contains the necessary components of a problem space. Note that problem-solving involves a series of knowledge states (i.e., elements being attended to). This result follows from a consideration of the access and write times of the various memories, especially LTM and EM. Since LTM has a write time of 5K and 10K seconds, and the empirical evidence indicates that problem-solvers do not spend a large amount of time fixating items in LTM, one should observe a series of knowledge states. The situa- tion is analogous in EM. STM itself lacks the capacity to hold a growing knowledge state [Newell and Simon, 1972]. 3.4 Task Analysis - The Conceptual Viewpoint The task analysis involves delineating the necessary moves or decisions one must make to perform the task. In this study, the relevant question for the task analysis is the following: how does 32 one assess risk and return, given an information set about a possible investment vehicle? The task in this study is the analysis of a prospectus9 in order to make an investment decision. The prospectus will be con- structed to reflect a "proven" company (i.e., one with a track record of five to six years) going public on the over-the-counter (OTC) market. This is done to ensure that the problem can be struc— tured fairly well while still retaining its realistic aspects. Additionally, in such a situation, one would expect the value of accounting information to be enhanced [Grant, 1980]. The choice of company context is also important for market implications. Research by Reilly and others [Reilly, 1979, pp. 173- 174] indicates that in the case of companies going public for the first time, the market is not likely to unambiguously settle on a stock price immediately. The findings imply that if the stock is bought at the offering price, it is possible to gain excess profits by superior forecasting and proper buy-sell decisions. Given the general lack of analySt attention to such companies, the accounting information is likely to be the source of many of the inputs for the price forecast models. Reilly's results indicated that relative imbalances in the price structure may persist for up to a week. 
Finally, work by Rosenberg and Guy [1976] and Bowman [1980] suggests that accounting data may be used to update estimates of, or to 9The data set also included some national market and industry data, based on the cues used in the study by Ebert and Kruse [1978]. 33 construct surrogates for, market beta. The relationship between market returns and accounting data is well laid out in the litera- ture [Reilly, 1979]. The problem addressed here is not portfolio selection, however, we are peripherally concerned with the question of present holdings of the subjects. Economic and behavioral theories indicate that the willingness of an investor to enter into any investment is likely to be a function of his present portfolio composition, his wealth, and risk preference structure [Reilly, 1979]. This point will be addressed via a demographic profile and sample selection. Partici- pants are assumed to desire a change in their present portfolio. Most studies employing the process-tracing paradigm have tackled what Newell and Simon call well-defined problems. That is, some test exists, performable by the information processor, which will tell him when or if a solution has been reached [Newell and simon, 1972]. This area is somewhat problematic in the present study. The correctness of the "solution", the decision to either invest or not invest is not verifiable in the usual sense. The solution test occurs only when the internal criteria of the problem- solver are met. This may occur at any point in the decision process. While one may, as an observer, make judgments relative to the route taken to the decision, observations about the correctness of the decision cannot be easily drawn without knowledge of the utility structure of the processor, and ex post data regarding the stock's performance. As a result, analysis of the problem-solver's behavior 34 will be somewhat incomplete in that it is virtually impossible to verify that observed behavior demonstrates goal acceptance.10 A focal issue in this study is representation. Few problems that we encounter are solved directly. In most cases, we find that directly attacking a problem is too costly, too time consuming, or otherwise too difficult. In general, we build some representation or model that is analogous to the problem in its essential aspects and "solve" the model first. For example, suppose that we have the problem of building a more fuel efficient car. One could proceed directly to an appropriate assembly line, and start to produce a car, correcting it as necessary to attain the objective of good fuel efficiency. In such a situation, one would expect problems to develop; stoppages and reassessments may be made continually. Ultimately, the product may even be no longer produced. Typically, such a scenario would never occur. Instead, careful blueprints are drawn up. Mathematical models of the proposed car are developed and tested. Prototypes of the car may be built and tested for performance and problems. After this process is completed, the car may or may not be built, depending on the outcome of the testing process. The situation posed in this study in the investment domain presents the same type of problem. The issue is whether or not to invest in a particular company. Ideally, the prospective investor would like to see what is being acquired. However, visual analysis is 1OIt is implicitly assumed in this research that the risk-return paradigm will be adopted by the subjects. Within this paradigm, e isodes of behavior can be assessed. 
An episode is a sequence of behav1or associated with attainment of a particular goal. It ends either when the goal is achieved or the processor faces a problem that he cannot solve. For example, the analysis of return on invest- ment would be an episode of behavior. 35 virtually impossible in the investment domain. The cost in time and money would be prohibitive. Instead, we tend to solve the problem by constructing some type of model of the firm using financial or other data. Using the techniques of fundamental or other appropriate analyses, we attempt in some way to assess either the value of the firm or the expected level of future returns to potential investors. There must also be some type of structure assumed involving the financial knowledge necessary. Firm risk level and expected returns must be assessed. The items associated with risk might include business risk, financial risk and liquidity risk, while the return category might include measures such as dividends or earnings.11 The information assumed in the structure follows the method of funda- mental analysis outlined by Reilly [1979, pp. 260-368]. In essence, then, the task analysis involves delineating an expected problem representation and solution paths. The researcher must assess qualitatively the semantic knowledge required for problem solution. In this study, subjects must know what a prospectus is and its purpose; some knowledge of algebra and statistics is necessary. Such an analysis allows the researcher to anticipate behavior and be prepared to analyze it. Further, there are other constraints embedded in the problem. One would not expect subjects to assess short-term dividend levels since the company, by definition, needs funds. The analysis should address either long- or short-term capital appreciation, given the context of the problem. For example, one might expect to see concerns 11For example, business risk might be measured in terms of operating leverage or sales volatility. 36 about such variables as the stability and growth of the firm, its profits, and cash flow. The liquidity position may be important, as well as risk measures; management should be assessed. Judgments that are made should be relevant to the issue of survivability of the firm, that is, the relevant questions should relate to present and future prOSpects for the firm. 3.5 General Study and Sample Demographic Issues The study was done in three stages: 1. The development of the task and the task analysis. 2. A pilot study. 3. The experiment. The pilot study and the sample sizes are both related to the fact that protocols produce a large amount of data. Researchers in the area generally feel that a pilot study is a prerequisite for a successful experiment. The pilot in this study involved four stu- dents who had successfully completed at least one investments course. Small sample sizes are a necessity due to the large time requirements of such analysis, the data overload problems due to the volume of information produced. The expert sample in this study consisted of professional analysts. Such subjects should, by virtue of their training, exhibit the same general quality of problem-solving skill. Criteria used in selecting the professionals were as follows: (1) They should hold the designation Certified Financial Analyst,if possible; (2) They should be familiar with initial offerings; and (3) They should be familiar with the industry of the prospectus company. 
37 These points become important in improving external validity since the sample size is small. The nonprofessional investors were selected with the following criteria in mind: (1) they should be investors who generally make their own investment decisions; (2) they should be cognizant of the industry of the prospectus company so that the exercise is not a guessing game with respect to industry data; and (3) the investors should be matched as well as possible with respect to present holdings, investment experience, and educational level. In the optimal case, one would prefer investors covering the entire range of these vari- ables and randomization for control. Again, due to the sample considerations, the aim is to have the subjects be as similar as possible to assure that the observed behavior stems from the task constraints, not from demographic variables. One criterion which overrides the sample demographics is the fact that the research is aimed at investigating decision behavior, not learning. For this reason alone, subjects should be experienced in investment analysis. The subjects in this study were paid $20. No attempt was made to adequately compensate the individuals involved. Research by Orne [1973] indicates that even small amounts of compensation such as $1.00 are adequate to ensure that subjects are serious with respect to a research project. As a final note, most of the participants returned all or part of the compensation offered. The generalizability of the results of this study is limited due to the sample sizes and the resultant nonstatistical nature of the research. However, Newell and Simon's[l972] indicates that the 38 task itself is a major determinant of task-related behavior. Thus, to the extent that processors engage in tasks similar to that reported in this study, it is expected that the results reported here should be repeated. That is, processors attempting to assess risk and return are expected to engage in the same types of activity. While individuals may differ as to particular aspects of risk and return assessed, the overall behavior should be similar. CHAPTER IV EXPERIMENTAL RESULTS: DERIVATION OF MODELS 4.0 Introduction There were three stages in the prospectus analysis task reported here. Following task development and analysis, a pilot study con- sisting of four students was run. These sessions were done over a two-week period and were used to refine techniques related to the methodology, including the coding process. The actual experiment consisted of eight subjects--four professionals and four nonprofes- sionals, ranging in age from 22 to 56. The experiment was run over a four-week period due to time constraints of the subjects. Professional subjects for the experiment were contacted by a two-wave mailing to CFA members of the Financial Analysts Federation within 80 miles of East Lansing, Michigan. The final four profes~ sionals consisted to two chartered financial analysts and two non- ‘ chartered professionals. Of these latter two, one was a senior partner of a regional investment firm and the other held a Ph.D. in finance and managed several portfolios. All of these subjects were male. The nonprofessionals consisted of four subjects with varied backgrounds. Subjects were, in order, a vice president of a bank trust department (Operations officer), a manager in a manufacturing concern, a State of Michigan civil servant, and an entrepreneurial 39 4O house-wife student. All had personal portfolios; additionally, two belonged to investment clubs. 
These subjects were contacted via newspaper advertisements and flyers within the Lansing-East Lansing area. All of the nonprofessionals were selected with several demographic criteria in mind. These criteria, developed from the extensive body of research carried out by Lewellen, Lease, and Schlarbaum [1976], profile a prototypic individual who is likely to do his own investment research and decision making. According to their findings, the average individual would be (1) a male, (2) married with children, (3) between approximately 40 and 55 years of age, and (4) have relatively high income. Of the final nonprofessional participants in this study, only one met the age criterion. All were relatively high-level management people; one was single and one was female. The data of the female subject were ultimately discarded due to technical equipment and data contamination problems. Table 1 contains summary demographic data on the remaining experimental subjects; S denotes subject.

TABLE 1
PROFILE OF SUBJECTS USED

                        Age   Investment    Educational   Professional
                              Experience    Level         Certification
Professionals
  S1                    31        9           BA              CFA
  S2                    44       16           Ph.D.           -
  S3                    51       23           MBA             CFA
  S4                    56       26           MBA             -
Nonprofessionals
  S5                    24        3           MBA             -
  S6                    33        5           BA              -
  S7                    42       10           MBA             -

The experiment was run in the following manner: each subject was placed in a room and given a training exercise to familiarize him with the thinking-aloud technique. The training exercise consisted of two cryptarithmetic problems. Subjects were given twenty minutes to attempt solutions and familiarize themselves with the thinking-aloud technique. They were then given a complete prospectus of a real-life firm which had been disguised. Changes to the prospectus included the following: the name of the firm and its principals, the capital structure (to a simple capital structure), the origin of the stock sold (all from the company instead of from selling shareholders), and the size of the firm. The last change consisted of multiplying all financial data by .6. Otherwise, the material was exactly as it appeared in the company prospectus. In addition, the information packet contained several pieces of supplemental information, based on the studies by Ebert and Kruse [1978] and Pankoff and Virgil [1970]. This information consisted of a forecast of gross national product, forecasted inflation levels, and several industry items: beta, profit margins, and return on assets. The industry of the selected firm was specifically chosen to be one projected to have better-than-average future growth -- computer technology. The firm's sales during the period covered in the prospectus ranged from approximately one to fifteen million dollars.

The nonprofessional subjects were instructed to perform whatever analysis they normally did, assuming that they had found an attractive possible investment vehicle. The professionals were divided into two groups of two, one of which was told to perform an analysis designed for potential clients. The other professional group was instructed in the same manner as the nonprofessionals. Subjects were supplied with pencils, paper, and a calculator, and instructed to talk during their analyses. Their statements were recorded (and later transcribed). Subjects were given as much time as they desired. This process was later repeated for S4, as part of the validation procedure. After completing their analyses, each subject responded, in writing, to a debriefing questionnaire.
This involved questions on the particular subject's experience in investing and other demo- graphic data such as age and education. Other questions were con- cerned with information perceived to be most (and least) important in dealing with the firm, the realism of the exercise, and the invest~ ment strategy and/or goal of the subject in real life. The packet of materials used in the experiment is available from the experimenter. 4.1 Subject Models Bouwman [1978, p. 9] cites two requirements for a successful transformation of verbal protocols into a model of the processing behavior of a subject: (1) it (the model) must be unambiguous and (2) it must contain "all” relevant information. He points out that these criteria are difficult to operationalize. However, it is important to note that any model is arepresentation, not a dupli- cation, of reality. It is extremely rare, for example, to develop a model which is perfectly descriptive or predictive of any behavior. Generally, we cannot model every aspect of a particular problem, leading to some unreliability in the model. The true test of a model is whether or not it adequately describes or predicts in a given set 43 of circumstances. In this study, the models are used as a vehicle for comparing the processing behavior of two types of problem- solvers. No claim is made that the models are general models of the subject. The claim is made that the behaviors depicted by the models are representative in the specific context of this study and as such are amenable to analysis. Models developed in process tracing studies are generally assessed from either or both of two perspectives. One approach in- volves the construction of a working computer model of the processor. The principal aim of such research is to produce a model which mimics the actions of the processor in great detail. Generally, the model is approached from a systems, rather than psychological, orientation. An example of such research in accounting is Bouwman‘s 1980 work, cited earlier in this study. The second approach involves devising tasks and modeling the decision processes used in task processing and solution. No attempt is made to "program" the actions of the processor. Instead, discrimination nets (decision trees) are con- structed and assessed from a psychological frame of reference. Examples of such work in accounting include that of Biggs [1979] cited earlier, and Shields [1980]. Research that involves both approaches includes that done by Bhaskar, Shank and Stephens [1980], also cited earlier in this study. The approach ad0pted in this study principally follows the psychological orientation, although some preliminary aspects of the computer model perspective will also be developed. Two models were developed for each subject in this study. The first model consisted of placing the knowledge elements assessed by each subject (from the PBG) in a flow diagram, along with the output 44 . characterization assigned by the processor. This model, which is denoted the protocol model, is simply a list, in processing boxes, of what the subject did. The second model, which is denoted the derived model, was developed as outlined below, and is a more gen- eral version of the first model. The second or derived model is a device for summarizing and categorizing the behaviors exhibited in the first,or protocol,model in order to facilitate analysis and inferences drawn. 
Thus, if a subject appeared, based on the pro- tocol evidence, to be using a particular type of processing behavior, the derived model was constructed using the general class of behavior indicated. Neither model is presented as an explicit characterization of the subjects' decision process(es) in a general sense; they are considered representative of the decision process(es) used in this specific context. The basic design of this research (essentially a one-shot case study) does not permit the derivation of benchmarks, break points, or other decision criteria employed by subjects.12 It is principally for this reason that the models are not considered general in nature. The derived version of the first model was developed in the following manner. (1) Each knowledge element assessed was grouped with others on the same topic, i.e., all sales characterizations on a particular aspect of sales were grouped together. For example, the origin of sales would be related to customer types, whereas the 12 In the general case, one would need choices over the entire range of variables assessed in order to derive break points. 45 level of sales addresses growth. (2) Each such element category was placed in the general model as a cue (decision criterion). (3) The output characterization of each knowledge element was considered to be made at the margin. That is, if sales growth in the data set averaged 15 percent and the subject described this as good, sales growth less than 15 percent would be denoted as "bad" in the derived version of the subject's model. Thus, the level or state of each knowledge element in the provided information set (along with any protocol or debriefing information), if processed, was used as arbi— trary decision points for each subject. Again, it is noted that the models were primarily constructed for analysis of behavior. This approach, while clearly not generally representative, does fit the data in the present context. (4) Binary, yes—no decisions were placed in the model if the processor demonstrated this type of deci- sion was being made. (5) Data characterizations were assigned only if the processor demonstrated, at least in part, such activity in the protocols. As stated earlier, it was assumed that subjects had adopted the risk-return paradigm from a fundamental perspective. This perspective served as a general guide as to which elements a subject was likely to attend to, as well as for the development of categories in the models. In essence, it describes the rules expected to be followed as the processors attended to the problem. In general, one expects those things which increase return or decrease risk to be positive in effect. Conversely, those things which increase risk or decrease return were expected to be negative in impact on the 46 processor. This notion of risk and return is well accepted in the literature.13 The decision processes analyzed in this study actually involved two decisions. The first decision is one regarding the firm itself. As such, it depends on firm specific characteristics. The second decision involves deciding whether to buy the particular stock or not. This latter decision is a function not only of the firm itself, but also its prospects relative to other available investment oppor- tunities of the processor. The second decision also dependes on the pecuniary state and the present holdings of the processor. This study is concerned only with the first decision--the processor's view of the subject firm. 
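To make the derived-model construction concrete, the sketch below encodes a small model of the kind described in steps (1) through (5) above: a list of cues, each with a marginal cutoff taken to be the level observed in the provided data set, and a qualitative tag assigned at that margin. This is a minimal illustration only; the cue names, cutoff values, observations, and labels are hypothetical and are not drawn from any subject's actual protocols or model.

```python
# Minimal sketch of a "derived model": each cue carries a marginal cutoff
# taken from the provided information set, and each test yields a
# qualitative tag.  All cue names, cutoffs, and observations below are
# hypothetical illustrations, not any subject's actual model.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Cue:
    name: str                      # knowledge-element category, e.g. "sales growth"
    test: Callable[[float], bool]  # marginal cutoff test derived from the data set
    good_label: str = "good"
    bad_label: str = "bad"

    def evaluate(self, value: float) -> str:
        return self.good_label if self.test(value) else self.bad_label


def derived_model(cues: List[Cue], observations: Dict[str, float]) -> Dict[str, str]:
    """Apply each cue test to the observed value and collect the tags."""
    return {cue.name: cue.evaluate(observations[cue.name])
            for cue in cues if cue.name in observations}


if __name__ == "__main__":
    # Hypothetical cutoffs "at the margin": e.g., if the data set averaged
    # 15 percent sales growth and the subject called that good, growth
    # below 15 percent is tagged "bad" in the derived model.
    cues = [
        Cue("sales growth (%)", lambda v: v >= 15.0),
        Cue("debt to assets",   lambda v: v <= 0.40),
        Cue("profit margin (%)", lambda v: v >= 8.0),
    ]
    observations = {"sales growth (%)": 20.0,
                    "debt to assets": 0.55,
                    "profit margin (%)": 9.5}
    print(derived_model(cues, observations))
    # {'sales growth (%)': 'good', 'debt to assets': 'bad', 'profit margin (%)': 'good'}
```

The summary decision itself is left unspecified in the sketch, mirroring the point made above: the one-shot design does not permit derivation of the benchmarks or break points a subject actually uses to aggregate the tags into a decision.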
The models developed for each subject appear in the Appendix I of the dissertation. 4.2 Coding The coding scheme adopted in this study follows that used by Bouwman [1978, 1980]. This choice was made for several reasons. (1) Comparison of the transcripts of the processors indicates that the same general type of behavior is found in both the above-cited works and this study. This suggests that the problem space adopted by processors are similar. (2) The problem addressed is similar. Both this study and Bouwman's work is concerned with the analysis of financial and other data about particular firms. Newell and Simon's work [1972, p. 789] indicates that problem spaces employed are dependent on task, not subject, characteristics. (3) The processors 13Other potential perspectives include technical or charting analysis and the efficient market hypothesis. However, both of these techniques are generally used at the aggregate market or portfolio level and depend on historical track records. 47 (subjects) in the studies are similar. The CFA's and other profes- sional and nonprofessional subjects participating in this research have similar backgrounds to the students and CPA's used in Bouwman's work. (4) It is important to demonstrate that the methodology in general and the coding process in particular is amenable to standard- ization. Since, as previously stated, the coding process is somewhat problem-dependent, generalizability is likely for any particular class of problems, such as those involving financial information use. The coding and model-building processes involve several stages, each of which embodies a different representation of the processing behavior of subjects. Each stage consists of applying certain criteria to the subjects' verbalizations of the processing behaviors employed. Bouwman's work includes the following set of representations (not all of which need be utilized in a particular study), after the procedure outlined by Waterman and Newell [1971, 1976]: (l) the audio tape representation, (2) the lexical representation, (3) the topic representation, (4) the element representation, (5) the group representation, (6) the problem behavior graph and (7) the trace. This research develops the first six representations for each subject. Additionally, flow chart models are developed for each subject. The tape representation is simply the audio tape itself. The lexical representation consists of the transcriptionist's and/or the researcher's interpretation of what is on the audio tape. Not only are the spoken words important, but also contextual meanings-- pauses, periods, et cetera. Basically, this representation entails applying linguistic rules to the data, as well as detailing prosodic (emphasis) features and timing. Successful development of this 48 representation is crucial to later representations. The task is non- trivial due to several logistical problems--the tendency of subjects to lapse into silence or low tones, or to mumble, and the tendency of the transcriber to suffer lapses in acuity as the length of the processing episode increases. Solutions to the above problems include telling the processor(s) to continue talking or to walk around or take short breaks. The topic representation involves splitting the lexical repre- sentation into short phrases, each of which is concerned with a single task topic. These phrases are called topic segments, and they become the units of analysis. 
This splitting of the protocols is achieved by applying grammatical and linguistic rules to the transcript. In this study, the following set of rules was used to define topic segments: (1) any single phrase having a subject and verb; (2) any phrase which does not modify its preceding phrase (using normal English grammatical rules) or a subsequent phrase, and/or contains new or different information -- in cases where doubt exists, no new segment, rather than a new segment, will be coded; (3) phrases which have implied subjects and/or verbs due to idiomatic or contextual use will be coded as topic segments; (4) any phrase which stands alone contextually, regardless of structure, will be coded as a topic segment. The above rules follow those used by Waterman and Newell [1971]. An example of a partial transcript in topic segment form appears in Figure 2.

Figure 2
Topic Segments from the Protocol of S4

95.  I was offered four new (+) supposedly "hot" issues this morning.
96.  And I haven't (+) ... done anything about any of them... (+) ..
97.  The only one--there are two of them on here..(+)..
98.  Well, there are four on here,...
99.  Obviously (+) two of them are...computer related..(+)..
100. One's related to cable (+) television,
101. One's--one is DeLorian Motors..(+) ..
102. I don't know whether you've seen one of those new DeLorian cars,
103. But they are super-looking (+) cars, if you like that kind of thing....
104. I think the damned thing--(+) the stock will probably bomb out,

"+" denotes the passing of 5 seconds of time.

The element representation is the first in which meaning is extracted from the protocols. Problem space relationships or occurrences are detailed. Knowledge elements (i.e., things the subject knows) and operator elements (i.e., data transformation mechanisms) are identified. In general, knowledge elements are the in- or outputs of operator elements. A third class of element specifies the relationship(s) between operator and knowledge elements, such as which type(s) of knowledge element(s) is (are) associated with a particular operator element. These relation-specifying elements are known as indicator elements. A list of operators employed in this study appears below in Table 2. The operators are defined in section 4.4.

TABLE 2
LIST OF OPERATORS USED

Type                  Operation or Manipulation
Examine               Search
Tag 1                 Impression (nonquantitative)
CI                    Impression (quantitative)
Compute               Calculation
Increase              Comparison (qualitative)
CA                    Comparison (quantitative)
Trend                 Comparison (quantitative)
Formulate Relation    Linking
Formulate Problem     Summarization

Coding Devices
Reading               Reading
Tag 2                 Paraphrasing, talking, etc.

One of the critical issues of coding in a process-tracing framework involves the identification of operators. That is, how is an operator identified? As defined earlier, an operator is any manipulation which produces additional or new information, where "new" implies not directly available currently in the problem space. Note that this definition excludes the act of reading as an operator. That is, the assertion is made that reading per se does not necessarily involve data manipulation, but rather data absorption. Thus, a subject reading, "...sales are $6 million," differs substantially from an assertion that "sales are increasing." The latter statement implies that a comparison has been made (i.e., data manipulated); the former merely reiterates a value given.
While the former statement or identification of the sales level may prove to be important, it is the subject's actions which must demonstrate this fact. Evidence of such importance may be given by repetition, a slowdown in processing speed, or direct statements by the processor. At any rate, it is the task-related information in the protocols that indicates that new knowledge has been produced that allows the inference of operator application. Instead of asking the subject about x or how x was done, the subject is merely asked to do x while speaking aloud. All con- clusions are drawn from the transcribed protocols. As stated by Simon, the leeway offered the researcher in this situation is not great [Simon, 1979]. The next representation consists of analyzing operator and knowledge elements in terms of which inputs are associated with which operators and its output. The combination of an operator element with its in- and output knowledge elements is called an operator group. Cases in which the research hypothesizes a relationship early on in the analysis is known as a protogroup [Waterman and Newell, 1971]. Operator groups become the nodes in the next repre- sentation-—the problem behavior graph. As previously stated, the problem behavior graph is a dynamic representation of the processor attempting to solve the problem. 52 The trace is the outline of the algorithm or program imitating the processing behavior as characterized by the problem behavior graph. Additionally, the set of representations could be expanded to include the problem space, which is a necessary structure of the theory for analysis. 4.3 Problem Space Adoption of Bouwman's coding scheme implies that the general problem space adopted by subjects in this study is equivalent to that used by subjects in Bouwman's work. Both casual and definitive examination of the protocols indicates that this implication receives reasonably strong support. Except for several differences which were explained earlier in the discussion of coding, Bouwman's operators were sufficient to describe most of the processing behavior of subjects. The differences are mainly related to the fact that com- putations (which are a form of programmed or algorithmic activity) were allowed in this study, whereas Bouwman computed all figures. Additionally, in the present research, more qualitative data was presented to subjects than in Bouwman's study. The result of these differences was the addition of a computer and tag operator in this study and the elimination of Bouwman's remember operator. The problem space was earlier defined as the processor's internal representation of the task environment, in which all problem-solving activity occurs. It consists of all knowledge available to the processor during the task, as well as means for obtaining new knowledge, given what the processor already knows. This combination of knowledge 53 (inputs) and means (operators) thus serves as a boundary and, there- fore, limits what the processor can come to know in a given task [Waterman and Newell, 1971]. It is important that the operator set be complete, since it determines what behaviors the model can des- cribe, ex post. In general, the problem space is not unique. That is, two or more processors may operate in different problem spaces. This implies that more than one problem solution approach generally exists. The knowledge aspect of the problem space depends on both the background of the processors as well as the information presented to the processors in the task. 
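Before elaborating on the knowledge side of the problem space, the operator-group and problem behavior graph structures introduced in the preceding pages can be made concrete with a brief sketch. The operator names follow Table 2; the knowledge elements and the particular sequence shown are hypothetical examples, not coded protocol data.

```python
# Minimal sketch of the operator-group and problem behavior graph (PBG)
# structures described in this chapter.  Operator names follow Table 2;
# the knowledge elements shown are hypothetical, not coded protocol data.

from dataclasses import dataclass, field
from typing import List


@dataclass
class OperatorGroup:
    operator: str          # e.g. "increase", "CA", "formulate relation"
    inputs: List[str]      # knowledge elements the operator was applied to
    output: str            # new knowledge element produced


@dataclass
class ProblemBehaviorGraph:
    nodes: List[OperatorGroup] = field(default_factory=list)

    def append(self, group: OperatorGroup) -> None:
        """Append each identified operator group to the growing end of the graph."""
        self.nodes.append(group)

    def trace(self) -> None:
        """Print the linked string of operator groups in processing order."""
        for i, g in enumerate(self.nodes, start=1):
            print(f"{i:>2}. {g.operator:<20} {', '.join(g.inputs)} -> {g.output}")


if __name__ == "__main__":
    pbg = ProblemBehaviorGraph()
    pbg.append(OperatorGroup("increase", ["sales 1979", "sales 1980"], "sales increased"))
    pbg.append(OperatorGroup("CA", ["firm profit margin", "industry profit margin"],
                             "margin above industry"))
    pbg.append(OperatorGroup("tag 1", ["margin above industry"], "margin: good"))
    pbg.trace()
```

In this representation the currently active nodes at the growing end of the graph correspond, loosely, to the knowledge state discussed below.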
Each subject comes to the task with a certain amount of education, training, or experience which influences his approach to tasks such as that of this study. Previous research indicates that such factors are in fact important predictors of the decision outputs of individuals [Ashton and Kramer, 1980]. To the extent that subjects are similar along these dimensions, then, the more likely is the observation of the use of similar processing behavior, including the use of a particular problem space.

The operators employed in a problem-solving task are behaviors engaged in by the processor as he or she moves through the problem space. Only those behaviors related to the assigned task are relevant. These include behaviors for the selection of elements for analysis, the subsequent analysis, and decisions as to the ultimate disposition of both items analyzed and the results of analysis. Elements selected for analysis may be extracted from the provided information set or derived by the processor by transforming presented information. Analysis itself may involve internal or external comparisons, computations, or the assignment of qualitative tags. Disposition behaviors relate to the decision to either discard an element, store (remember) it, or continue analysis, as appropriate. Since these behaviors cannot be observed prior to the time they are actually engaged in by the processor, development of the problem space as an analytical device occurs ex post. Descriptive operators and other problem-space elements are developed jointly with other analyses. However, given knowledge of the task and prior research findings and/or theory, the researcher is able to develop a fairly precise list of expected operators and elements.

Operators are used by the processor to move through the problem space. This movement, or path, is along a series of knowledge states resulting from the application of the operators. Various definitions have been posited for a knowledge state. Newell and Simon [1972] demonstrate, as shown earlier, that the knowledge state is of limited size. Waterman and Newell [1971] define the knowledge state as the set of currently active nodes along the lowest branch of the PBG. This definition implies that the result of more than one operator application may be contained in a knowledge state, a point consistent with research results concerning the size of STM [Newell and Simon, 1972]. Bouwman's operational definition of a knowledge state [Bouwman, 1980, p. 7] also follows this result: "A knowledge state consists of a collection of facts (symptoms), relations between those facts, problem hypotheses and leads." As used in this study, a knowledge state minimally implies (1) an operator, (2) the result of at least the previous knowledge state (unless there is a break in the PBG, i.e., a change by the subject), (3) some path information, (4) some knowledge of the present position relative to the desired position or goal (which may not be apparent to the researcher), and (5) some background knowledge based on past experiences (which again may not be apparent to the researcher). Information relative to the last two components may not become apparent until the subject decides to summarize processing activity near the end of the analysis.

Finally, it should be noted that, while the knowledge state is a useful theoretical concept, it is extremely difficult to operationalize. The size of the knowledge state, and the time spent at each knowledge state, were not addressed in this study for the following reasons.
First, there is no definitive set of rules regarding the composition of the knowledge state. Newell and Waterman's above definition as the lower, still open, arm of the problem behavior graph is problem- atical since (1) there is n0' general criterion that assures that the lower arm of one PBG is equivalent to that of another and (2) in the absence of definitive tests, we have no way of knowing if the above PBG criterion is complete; i.e., there is no criterion, other than Miller's number,14 to suggest the appropriate size. When the chunking phenomenon is added, the problem appears to be greater than any added advantage of looking at the number of knowledge states. The breakdown of the PBG into such segments in the absence of explicit tests designed to assess this aspect of processing is arbitrary at 14Miller's [1956] number is seven, plus or minus two. 56 best, and certainly is misleading. Finally, this feature is of questionable value since it is not necessary for analysis, and adds a dimension Of uncertainty to the results. 4.4 The Element Representation: Derivation of Operators The element representation was developed with two objectives in mind: (1) to demonstrate what operators are used and how they are extracted from the data and (2) to demonstrate the sufficiency of Bouwman's operator set as applied to this study. The representation was developed as follows: 1. All topic segments from a particular processor's protocols were listed sequentially and analyzed for knowledge elements. 2. Each knowledge element was then analyzed to determine if any data transformation occurred involving it. This deter- mination was made by assessing whether information in the protocols concerning each tested knowledge element appeared explicitly in the provided information set. Any output not explicitly provided was deemed evidence Of an Operator being applied. 3. Any transformation was detailed, i.e., listed explicitly. Knowledge elements were determined to be either in- or outputs Of the transformations detected. 4. Data transformations were coded by reference to Bouwman's basic and augmented sets as detailed below. 5. Exceptions were used to develop new Operators or expand the sc0pe Of existing Operators as defined. Exceptions were 57 grouped by similarity of transtrmation involved in particular situations, including in- and output knowledge elements. Each class of transformation is coded as an operator. The process is essentially iterative and each knowledge element mentioned by the processor is considered a potential application of an operator. The final determination of such an application depends upon the protocol evidence of input, output, and transformation involved. The operators developed by Bouwman included the following: examine, remember, formulate relation, and formulate problem [Bouwman, 1980]. This set is augmented in this study by compute and tag Operators. Each Operator is examined in turn in this section. The remember operator was not included in the Operator set in this study. The examine Operator can be visualized as an impression forma- tion tool. The behaviors exhibited by processors using this operator are oriented toward answering questions concerning which, what, when, where and why. It is an umbrella Operator for several specific types of processing behaviors, each of which Bouwman denoted as a basic operator.15 The behaviors typically include Observations about what is Observed in the data, as well as processes which select the next item of information to be attended to. 
The Observations are usually qualitative characterizations of data provided in the infor- mation set. The set of basic operators include the increase, trend, CA, C. CI, compute, and tag 1. 15Basic operators are a minimal set Of operators which, given the task and instructions, will adequately describe the behaviors necessary to process the problem [Newell and Simon, 1972, p. 146]. 58 The increase operator is used to qualitatively assess the movement in the magnitude of a knowledge element across time in a very general sense. The use of the Operator implies that a com— parison has been made of the level of the element assessed for at least two periods. The output is a determination of whether or not an element's value in year x is greater than, less than, or equal to the value for year (x — 1). For example, the processor may state that "sales increased”. This implies a comparison of at least two years' sales. The trend Operator also assesses the movement of the magnitude of an element across time, again qualitatively. However, in the trend case, some time period is selected as an explicit benchmark for comparison purposes, similar to horizontal analysis in financial statement analysis. Use of this operator implies a deeper level of analysis than does the increase Operator; it is also less likely to be employed by the processor. It is likely to be used in situations where the data is not clear-cut, such as either very large or volatile changes of any kind [Bouwman, 1978, p. 168]. In this study, use of the trend operator is implied when (1) the processor explicitly calculates period-tO-period changes (this includes situations in which a growth rate for an entire period greater than two years is calculated), or (2) the processor states that such a comparison is made. An example of the use of this study would entail statements such as ”sales growth was 20% each year, implying computations across time. 59 The CA operator involves the comparison of a firm-specific value of a data item with an industry value, e.g., the current ratio. This Operator may be used in conjunction with other Operators, such as the increase. In general, it tends to be applied to the most recent year's data. It can be contrasted with the C operator, which entails the comparison of the realized value of a firm data element with the internal enterprise forecast (or budget amount) of the particular datum. This operator is not used in this study since no such internal data appears in the prospectus. It is included here for completeness since at least one subject voiced a concern for some internal data. The CI Operator is implied when the processor compares a firm data item with some personal standard or heuristic. For example, one comment of subjects in this study was that government (defense, etc.) contracts had low or limited margins, a potentially negative feature. Such a statement implies that some internalized value has been used to make a comparison. If the data did not fit the pre- conceived notion (or heuristic benchmark) a search was generally undertaken for the explanation. Use of this Operator is implied when the processor uses data for comparative purposes that is not supplied in the information set or derivable from the supplied infor- mation. For example, a statement that the current cost of goods sold is low would not be a use of this Operator since the implication is that a comparison is being made to prior periods, not to some internal value. 
50 The compute operator was, as stated before, added to the set used in this study. Strictly speaking, computation is a programmed activity. However, computations are Often engaged in by decision- makers as part of the overall decision-making process and are thus arguably part Of the basic processes. Use Of the compute operator is implied in this study when the processor engages in an explicit calculation Of any kind, at the earliest topic segment in which such indication Of calculation is given. All subsequent topic segments dealing with calculations, including process output, is also coded as a use of the compute operator. Two classes of behavior are coded as "tag" in this study. In the first case, the processor assigns a qualitative label to a knowledge element. This Operation is included in the set Of basic operators. Generally, the label assigned is the processor's assess- ment Of the impact of the given knowledge element on his or her overall characterization of the subject firm. This assessment includes instances in which the subject uses either negative or posi- tive words to describe a situation. For example, the processor may describe the debt level as "good". Such usage was taken as evidence of the impact of the element. ,Any such assignment activity was coded "tag 1" in this study. In the second class Of behavior, the processor is essentially involved with checking the data, with no particular or necessary label being attached. This activity, like reading, is not considered an Operator in this study since data is not transformed or overtly manipulated in any way. For example, a processor might note that 61 the firm has no inventory. This assigns no necessary implication to the knowledge element; it merely highlights it as a part of the information set. Such statements, in the absence of qualitative labels, are coded as "tag 2" in this study. In general, the activity subsumed by this coding device includes (1) selecting a next step in analysis (i.e., next element to be attended to), (2) output of other Operators when removed from the topic segment(s) in which the operator is applied, (3) paraphrasing of the information set provided, (4) statements relating the impact of elements not actually assessed or included in the information set, or, (5) statements of the Opinion Of the processor about future conditions or amounts of parti- cular elements where such statements are not tied to present levels of particular elements of the information set provided. The formulate relation operator is concerned with behaviors the subject engages in as he or she starts to piece together an internal representation of the subject firm. It is essentially a confirmatory device used when the subject's characterization of the firm does not seem to fit together at some juncture. Different elements are linked to each other in attempts to explain perceptions of the data presented. As used in this study, the formulate relation Operator is implied if: element 1 is linked to element 2 by the processor, as either explaining, supporting, or clarifying a particu- lar result (element). The linkage must be explicit. The formulate problem operator is closely related to the pre- ceding operator. However, it appears near the end Of analysis and is a summarization device. The Operator involves the establishing 62 of the central or derived impression of the subject firm and may be either negative or positive in nature. 
It is implied by (1) listing activity by the subject of features of the firm, along with qual- itative labels, (2) the processor stating that he or she is summar- izing, or, (3) the processor stopping analysis and announcing the problem(s), or the lack of problem(s). This study also utilized a trivial assignment device - reading. Reading occurs if the protocol of a subject follows the text of the provided information set. The remember operator was dropped from the Operator set for several reasons. Among these were the following: 1. Protocol evidence of remembering must be inferred from other protocol segments coupled with ex post debriefing data. There is no evidence that suggests such a remembered list would be complete. Nisbett and Wilson's [1980] results suggest incompleteness. 2. Presence or absence from the list may indicate more than one problem. For example, one difficulty may be a failure to remember. Another difficulty is that, as constructed, this operator's use implies importance. Other trivial items not relative to the "big picture" may be remembered. Thus, the operator may be a source of misspecification of the model. 3. By definition, if a knowledge element is being attended to, it becomes part of the current knowledge state. Such attention implies an implicit ranking and labels an element 63 as probably important. This would argue against an explicit ranking scheme. In general, the element representation involves the establishing of a classification scheme. While this may seem novel, it is quite common to research. Most empirical research which involves more than one sampling unit generally involves some such scheme, at some level of specification. The process itself becomes fairly well defined, as long as the strata involved are sufficiently explained. 4.5 The Problem Behavior Graph (PBG) The problem behavior graph is a dynamic representation of the processor solving the problem. It details the path the problem- solver takes through the problem space. As such, it consists of a linked string of operator groups. The PBG is derived through an iterative process. Once the elements have been identified, the PBG is pieced together by following the path identified by the protocols. The process is essentially one of appending each operator group identified to the growing end of the graph. The PBG is the central vehicle for the building of an explicit model Of a subject, as well as the springboard for analysis. It is here that the researcher is able to identify the regularities (or irregularities) in a processor's behavior or begin to explain how or why a processor chose to engage in particular actions. Also, the PBG, along with the protocols, helps the researcher to identify ways in which the processor applied the operators implicit in PBG formation. 64 Finally, it is at this stage that the research is able to probe issues such as: (1) how does the processor select an operator for application; (2) how does the processor decide which element to analyze next and (3) how does the processor decide to continue, stop or revise analysis at a particular juncture. 4.6 Reliability and Validation Procedures The analysis adopted in this research is primarily from the psychological perspective. Validation and reliability measures reflect this orientation. Model reliability is Often not addressed in protocol studies due to the time constraints invOlved and the availability of subjects for second sessions. In this study, val- idation proCedures included the following. 
First, a second case was developed. This was then presented to one of the subjects who agreed to participate in the cross-validation. From this session, a second set of protocols was obtained and analyzed as described earlier. The decision made and the formation of the model(s) were compared to the original model(s) and to the decision that would have been made using the old model in the context of the new case. This procedure results in a simultaneous retest reliability assessment and cross-validation of the old model. The obtained models appear in Appendix II and are analyzed in Chapter 5.

Inter-rater reliability was also assessed in this study, using both the simple proportion of agreement and the Kappa coefficient [Cohen, 1960]. The Kappa coefficient removes the effect of chance agreement between raters. It is computed by subtracting the statistically derived proportion of chance agreement from both the numerator and the denominator of the ratio obtained by placing the proportion of simple agreement over one; that is, K = (P_o - P_c) / (1 - P_c), where P_o is the observed proportion of agreement and P_c is the proportion of agreement expected by chance. The results of this computation appear in Chapter 5.7.

CHAPTER V
ANALYSIS AND REVIEW OF THE MODELS AND PROCESSING BEHAVIOR

5.0 Introduction

In a 1981 article appearing in Administrative Science Quarterly, Ungson, Braunstein, and Hall posited that "...problem settings in organizations are typically ill-structured,16 and decision aids developed from studies of well-structured problems (lab studies) may not be applicable." They further stated, "...We recommend the use of computerized simulations over time for studying ill-structured problems..." [pp. 125, 128]. The present research is viewed as a first step in such a simulation process. However, the primary goal of this research is the examination of the processing behavior of subjects in this task domain.

16An ill-structured problem is (one) in which the problem-solver contributes to the definition and resolution, using information generated from (experience) [Simon, H. and J.R. Hayes, 1976, p. 277].

The models developed in this study can be assessed from a process-tracing perspective as follows. As stated earlier, subjects were assumed to have adopted the risk-return paradigm using fundamental analysis as a frame of reference. Given this orientation, traditional measures of risk and return ought to be important to the processors. In each specific case, the subject's model can be viewed as a series of tests of items drawn from an internalized list of variables related to risk and return. Thus, for example, the model of S4 can be viewed as a test of a list containing the topics: origin of shares sold, uniqueness of product, relationship of product to the competition's products, income progression, auditor's report, legal statement, and presence/absence of a venture capital firm. Each item on the list is processed and assigned some designation. This set of designations is somehow summarized and a decision made.

Given the above orientation regarding the models, this chapter proceeds in the following manner. First, each of the hypotheses stated in Chapter 1 is analyzed. Second, the overall behavior of the subjects is addressed. Third, model reliability and inter-rater reliability are assessed.

5.1 Hypothesis One: The Assessment of Strategies

H1: Subject classes will differ in search strategies employed.

Strategies employed by subjects during processing were not unambiguous. The nature of the study itself made it very difficult to assess strategies employed on a systematic basis.
For example, analyz- ing whether a subject was involved in depth-first or breadth-first analysis, or intradimensional versus interdimensional probing is virtually impossible in a single firm, case study situation. In general, such characterizations involve choices among or between several alternatives, rather than the analysis of one item. While general Observations can be made about specific patterns or incidences, no general classification is possible. Previous research in this area has solved this problem by looking at strategies from the standpoint Of what the process did, in a literal, descriptive sense [Bouwman, 1978, 1980], as the processing proceeded. This approach is adopted here. 68 Bouwman [1980, p. 23] found that his less-experienced subjects followed the strategy of proceeding through the provided data set in a straight-forward, front-tO-back fashion. lery little deviation Was found to this pattern. Professional subjects, on the other hand, Often moved back and forth through the data, as if marking Off an inherent checklist, developing linkages and chained relationships among processed knowledge elements. These results were partially replicated in this research, but there are notable differences, some Of which can be attributed to the following reasons. First, the nonprofessional subjects, in this study, were not students, but experienced, Older investors. Second, all of these subjects were university graduates with responsible business positions. Such training and experience, in and of itself, ought to provide the subjects with greater maturity and increased evaluation skill when compared to Bouwman's subjects. These factors ought to diminish the differences between groups in this study. The nonprofessional subjects in this study did, in general, follow a sequential, "straight through", strategy. However, all of them also engaged in some type Of oscillating or linking behavior. For example, consider these excerpts from the protocols of $5 and S7. 35: "I'm going back to the table of contents, keying in on certain areas I'm looking for..." "I'll have to find... I'm searching... through the prospectus... for some kind of explanation for... the extraordinary item..." S7: "... fixed costs... are being absorbed by a similar amount of sales with lower profit. SO... I'll have to look at why that is..." 69 "It doesn't say here whether or not they qualified their statement... SO until I see it later, I'll assume that it is an unqualified Opinion..." These statements are used to illustrate the diverse approaches taken by subjects during processing. 55 clearly did not follow a straightforward, front-tO-back approach. At numerous points in the data, he back-tracked, paused to assess his progress, and figured out where he wanted to go next. S7, before analysis, briefly looked at the table of contents and skimmed briefly through the prospectus. And while S7 did exhibit more front-tO-back behavior, the reasoning appears to differ from Bouwman's subjects. When something did not fit, he merely assumed that it had to be a certain way and that cor- roboration (or disavowance) would come later in the data. S6 followed an almost exclusive front-to—back strategy, moving through the data with only minor deviations. The nonprofessionals in this study also clearly seemed to be using some type of checklist. This is alluded to above in the statements by 55. 
However, perhaps the strongest evidence of all is the fact that most subjects (both professional and nonprofessional) not only exhibited behavior from which the existence of a checklist could be inferred, they also made comparative statements to other specific firms similar to that in the prospectus. For example, consider the following excerpts from the protocols of SS and S7: 55: "... I can't think of the specific company that does this right now...” $7: "Companies who are merely assembling, as these folks are, such as Amdahl or Memorex..., have run into some real problems..." 70 These statements would suggest that the subjects are comparing the presented firm to enterprises they are familiar with. In order to do this, some "outline" of the comparison firm, salient features, etc., must be available. From this, it can be inferred that this ”salient features 'list'" was used to highlight similarities and differences between the firms. And even 56, who made fewer references of this kind, commented that "the firm's situation was similar to that of Prime Computer's several years ago. And today, his bank (place of employment) used Prime's computers..." The nonprofessionals in this study also engaged in fairly ela- borate linking behavior pertaining to relationships found in the data. For example, consider the following excerpts from the protocols of $5, $6, and S7: 55: "The large costs incurred from that is probably due to selling these systems... as opposed to maintaining them. And that's why, as the maintenance aspect increases... cost Of sales decreased..." $6: "...the percentage of growth experienced was greater than expected...experienced.... Obviously, that would explain also why there wasn't any inventory..." S7: "1981, the use of capital has restricted their ability to earn apparently and caused the costs to go up..." Such behavior was repeated several times by each Of the nonprofes- sionals. In fact, this finding occurred most often with the nonprofes- sionals. Only one of the professionals, 51, engaged in this behavior to any degree at all. This type of behavior seemed to be a function of the amount Of time used and the depth of processing employed. Most of the professionals felt that close analysis was simply not worthwhile. This viewpoint is addressed later in this chapter. 71 The professional subjects were much more explicit in describing and carrying out their processing activities, especially the more experienced ones. S4 used the following to describe his processing, which clearly describes a checklist: S4: "... particularly looking at... where the stock is coming from.... Find out what the company does... look at legal matters... I'm not much interested in dilution... see if there is... some... venture capital firm involved..." 82 described his approach as follows: 32: "First thing I would normally do when a stock has not come to market is to take a quick look at the prospective PE ratios for similar companies in the industry... I would make a quick analysis of the rate of growth..." 53 also clearly had some representation in his mind that he was checking the data set, and the firm, against. For example, consider this excerpt from his protocols: 53: "It's supposed to say in one of these prospectus what... who's selling it, the selling stockholders, and it doesn't seem to..." S3 had overlooked certain paragraphs in the beginning of the prospectus which contained the relevant information. 
It is clear from his comments that the information was not only expected, but important to the decision to be made. Further evidence of this importance is given by the fact that $3 spent several minutes tracking down the desired information. One essential difference between this study and Bouwman's which may cause the processors to act in particular ways is the context. In this study, the presented company should be relatively free of major problems. The analysis involves essentially a spot check at a particular point in time -- somewhat analogous to an annual physical 72 for an individual already subjected to previous screening (S.E.C. rules, etc.). In contrast, Bouwman's subjects expected a problem. This situation is somewhat analogous to the patient who tells the doctor, "I'm sick, tell me what I have." In the latter context, it is probably more likely that the physician-processor will engage in explicit search behavior, linking facts together, in attempting to find and diagnose the problem. In the former case, the likeli- hood of a serious problem should be lower since the firm knows that its securities will be competing in the marketplace with other securities and investment opportunities. In such a scenario, one would expect a high emphasis to be placed on demonstrating above average results. As stated above, the professionals did not, in general, ascribe great importance to detailed analysis of the firm. While all of them used the accounting data from the firm to assess its future prospects, they indicated that this was only part Of a total process. This position was most clearly articulated by 51, who did the most detailed analysis: "... looking at past numbers alone isn't neces- sarily going to give you a good clue as to the future. The fact that they've had a pretty good record is really no guarantee that it's going to continue..." S4 going further, stated that Often a stock just might be "the right stock at the right time, but not the best stock." He pointed out that he purchased stock from Prab Robotics even though he thought that IBM and Cincinnati Milacron made a better product. But since robotics was a current buzzword, probably any good company's stock would go up. The stock was purchased without benefit of financial data. (However, knowledge of the parent company was Obtained.) 73 None Of the professional subjects followed a sequential pro- cessing strategy. 51 and $2, who read most of the prospectus, Often jumped back and forth in the data set. 53 and S4 used a relatively small list of topics for analysis and simply entered the data set and extracted those elements. This point is demonstrated by the following quote from 54's protocols: "Everything that I'm interested in could be put in about four pages, and the rest of it you can throw in the wastebasket." As demonstrated by the earlier quotes, the professionals all appeared to use some type of checklist. Like the nonprofessionals, most of them also had some firm whose circumstances they considered similar to the subject firm's. 51 considered Amdahl to be similar. 52 spoke of Applied Digital Data Systems. S3 likened the firm to Cascade Data. S4 used no comparison firm. The results of the analysis of the strategies employed by sub- jects were not clear-cut. The findings can be interpreted in several ways, given the protocol evidence. For example, a surface analysis Of sequential analysis might suggest that it is likely to be employed by naive subjects. 
Yet, as evidenced by S7, it might also be simply due to the belief that everything important will be covered, eventually. And while none Of the professionals employed a strict sequential process, much of the processing of those who reviewed all of the prospectus ($1 and 52) was sequential in nature. This result is not surprising since material tends to be read in the order in which it is laid out. Additionally, while 51 and 52 did engage in sequential episodes,they also tended to be much more selective in terms of items selected for closer attention. 74 The existence of a checklist was evident across both groups Of processors, but clearly stronger for the professionals. This result seemed to be related to experience and/or age. However, 35, who also presented strong protocol evidence of such a checklist, was the youngest and least experienced of all processors. While the small sample sizes and the contect of this research is limiting, some inferences can be drawn from the results for account- ing policy-makers and researchers. One possible research inference from the data is that much of an Observed subject strategy may be a function of experience and the degree of detail engaged in by the processor. As a processor investigates data items more closely (increases the degree of detail), the greater is the likelihood that things that do not ”fit” will be discovered. The subsequent examina- tion Of such relationships is likely to result in linking behavior, and, most likely, non-sequential behavior. This assumes, Of course, that the processor recognizes items which do not fit the overall pattern. This assumption simply may not have been valid for Bouwman's students [1980]. As a result, they did not engage in linking behavior. It should be noted that linking behavior always occurred when something was perceived to be wrong in the data set. Thus, the absence of such behavior likely implies that the processor perceives that observed relationships are "correct" or that a more detailed analysis is unnecessary. The latter view seemed to be operative in this study, especially for professional subjects. One implication which can be derived is that observed decision behavior may be a function of task instructions and the experimental demand characteristics, as well as the variables ex— 75 plicitly manipulated. Care should be taken in analyzing Observed behavior relative to what the subjects were instructed to do. That is, if a subject is told to do "x", care should be taken in deter- mining what behavior is necessary to do "x", apart from the experi- ment itself, and what behaviors may be induced by the experiment. This point is also made by Ebbesen and Konecni [1980]. ‘An objective of policy-making is the effective dissemination Of information. The FASB, in its SFAC NO. 1, stated "... the benefits Of information provided should be expected to at least equal the cost involved" [1978, p. 10]. One way in which the poten- tial benefits of financial information may be enhanced is placement. It may be advantageous for policy-makers to decide, a priori, that certain information is more important than other information, based on theory or other appropriate guidelines, and to make sure the selected data is displayed early in prescribed communications to users (such as the annual report). The results Of this research indicates that some users will not simply select apprOpriate in- formation early on in analysis. 
By strategic placement, policy- makers may encourage more cost-effective use of provided information. 5.2 Hypothesis Two: Information Use Assessment H2: Subject classes will differ in amount Of information attended to during the analysis. Information processed by subjects was assumed to bear some relationship to the assessment of risk and return. If a broad per- spective is taken, this relationship can be construed for most data concerning any particular enterprise. However, most finance texts 76 [Reilly, 1979] address risk from a fundamental perspective in terms Of three areas: business risk, financing risk, and liquidity risk. Business risk is derived from the nature of the business the enter- prise is engaged in. It is generally measured as a function of sales or earnings volatility. Financing risk is derived from the method of financing investments, i.e., the makeup of an enterprise's capital structure. It is usually enumerated in terms Of the degree of debt financing. Liquidity risk is related to the ease of con- vertibility of an investment to cash in a reasonable amount of time without having to make concessions. It is generally measured as the difference between purchase price and the expected selling price at any particular point in time [Reilly, 1979, pp. 16-18]. The return from an investment is generally measured in terms of dividends and capital appreciation. In the context of this experiment, where no market stock price exists, subjects are forced to use surrogates for the expected levels of the stock price -- earnings and sales growth, return on assets, etc. The subjects in this study did indeed address items related to the above areas. Examination Of the PBG‘s and protocols reveals that every subject reviewed either or both of sales and income growth, debt level, and the industry of the subject firm. In fact, an in- complete list Of topics addressed by subjects would include the following: sales and income growth, dividends, compensation, margins, costs, backlog orders, competition, customers, sales basis, unique- ness of the firm's product, industry, military ties, debt, legal matters, the auditor's report, the underwriting firm(s), origin of 77 the stock sold and the size of the offering. While it is relatively a straightforward matter to infer most of the previously cited areas in the above lists, the liquidity notion, which is often ignored, was also clearly evident. For example, several subjects commented that the size Of the offering was good, since it would not "flood the market." S7, in his analysis, stated that he was "concerned about the safety of the stock's value.‘I $2 worried that such an offering might be an opportunity for a small firm to "make a killing as (his) expense, or... to develop a market for estate purposes..." In sum, subjects did appear to address the apprOpriate areas. An interesting aspect of the information selection and use problem was the implicit faith that some Of the subjects expressed relative to the data. For example, S7 assumes that the auditor's report "will be unqualified, since the S.E.C. wouldn't allow the prospectus otherwise." And S4, when beginning his analysis, notes that "(he) assumes that the information is accurate, if the CPA's say it is,... I take their word for it...." In essence, it would appear that safeguards such as an outside audit or legalreview serves to reassure investors as to the existence of some minimum level of safety associated with the investment vehicle. 
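Returning briefly to the risk and return framework outlined at the beginning of this section, the following is a minimal sketch, not part of the original analysis, of how simple proxies for business risk, financing risk, and the return surrogates available for a new issue might be computed from prospectus-style figures. All numbers and variable names are hypothetical.

# Minimal sketch of simple proxies for the risk/return categories discussed above.
# All figures are hypothetical; the study's subjects did not compute these exact measures.
from statistics import mean, stdev

sales = [4.1, 5.9, 7.8, 10.2]        # annual sales, $ millions (hypothetical)
net_income = [0.3, 0.5, 0.7, 1.0]    # annual net income, $ millions (hypothetical)
long_term_debt = 0.2                 # $ millions (hypothetical)
total_capital = 11.0                 # debt plus equity, $ millions (hypothetical)

# Business risk: volatility of sales (coefficient of variation).
business_risk = stdev(sales) / mean(sales)

# Financing risk: degree of debt in the capital structure.
financing_risk = long_term_debt / total_capital

# Return surrogates (no market price exists for a new issue): growth rates.
sales_growth = [(b - a) / a for a, b in zip(sales, sales[1:])]
income_growth = [(b - a) / a for a, b in zip(net_income, net_income[1:])]

# Liquidity risk would require an expected selling price versus purchase price,
# which is unobservable before the shares trade; it is omitted here.
print(f"business risk (CV of sales): {business_risk:.2f}")
print(f"financing risk (debt share): {financing_risk:.3f}")
print(f"average sales growth: {mean(sales_growth):.1%}")
print(f"average income growth: {mean(income_growth):.1%}")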
The information attended to by subjects in this study was approached broadly. Information was considered to be attended to if, during analysis, the processor in some way transforms the parti- cular knowledge element. In effect, this implies that any situation in the subject's protocols from which an Operator application can be inferred also implies information use. 78 Both cursory and detailed examination Of the problem behavior graphs, protocols, and models of the subjects indicates that there were differences between the professional and nonprofessional subjects. For example, S4, in his analysis, examined the following subject areas: origin Of stock, industry, product, compensation, sales progression, return on equity, venture capital involvement, the relationship between earnings and sales, legal matters, auditor's report and the makeup of the firm's clientele. S6, in his analysis. examined the Offering size, whether the firm was a manufacturer or not, sales basis, the company's ties to the military, sales growth, debt level, likelihood of trading suspension, dividend payment, return ratios, inventory levels, costs, product price range, contracts out, composition of sales, suppliers, competition, the board, management compensation, where the stock will be traded, the auditor's report, inflation effects on the firm, the assets, the underwriters, firm beta, industry, and employee turnover. While S6 was probably most zealous of all subjects in attending to all facets of the firm, the lists do point out the differences in the information attended to. In the following table, a percentage analysis of the degree of correspondence between information topics covered by each processor relative to the others is shown. The topics are taken directly from the protocol models Of each subject (see Appendix I). In the table, each top figure in a cell for each subject gives the percentage agreement between the topic lists of that subject (ith) with other subjects (jth), using subject "i's" list as a benchmark for measure- ment. The bottom figure gives the percentage that the topics in 79 agreement between the two subjects list comprised of the jth subjects total list Of topics processed. For example, in line 1, under $3, is the comparison for $1 and S3, using Sl's list as a comparison benchmark. The figures indicate that 53's list agreed with 51's list only 39% of the time. However, the items in agreement comprised 90% of 53's list. Conversely, if S3's list is taken as a benchmark, Sl's list is 90% in agreement with the topics on 53's list, but only 39% of his list is used. In other words, 51 covers many more topics than does 53, but still covers 90% of the topics S3 covers. It should be noted that this analysis does not address the issue of how information was used, or what conclu- sion was made; only the notion of information use described above is analyzed. The analysis proceeds by successively analyzing the information use similarities and differences for each subject in comparison with all other subjects. 51 does not offer a clear-cut dichotomy about information use between professionals and nonprofessionals. That is, there is no clear-cut trend in the comparison of his list with the other profes- sionals ($2, $3, $4) when compared to the nonprofessionals ($5, S6, S7). His list is about half in agreement with that Of $2 (52%) and of approximately the same length. But this is also true for $5 and $7. In fact, Sl's list, in both size ppg_content corresponds most closely with that of $6. 
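As an aside on the mechanics of Table 3 below, the two figures in each cell follow directly from the subjects' topic lists. The following is a minimal sketch of that computation using two hypothetical topic lists; the actual lists appear in the protocol models of Appendix I.

# Minimal sketch of the two agreement percentages described above.
# The topic lists here are hypothetical stand-ins for the protocol-model lists in Appendix I.
def agreement(list_i, list_j):
    """Return (pct of i's list shared with j, pct of j's list shared with i)."""
    shared = set(list_i) & set(list_j)
    pct_of_i = 100 * len(shared) / len(list_i)   # top figure: i's list as benchmark
    pct_of_j = 100 * len(shared) / len(list_j)   # bottom figure: share of j's list
    return round(pct_of_i), round(pct_of_j)

s1_topics = ["sales growth", "income growth", "debt", "industry", "dividends",
             "dilution", "interest income", "competition", "customers", "backlog"]
s3_topics = ["sales growth", "income growth", "debt", "industry", "origin of stock"]

top, bottom = agreement(s1_topics, s3_topics)
print(top, bottom)   # 40 and 80 for these hypothetical lists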
S1, who has earned the Certified Financial Analyst designation, is the youngest (33) and least experienced (9 years) of the professionals. His analysis covered more topics than did any other professional.

TABLE 3
PERCENTAGE AGREEMENT AMONG TOPICS ADDRESSED ACROSS SUBJECTS
(In each cell, the top figure is the percentage agreement using the row subject's list as the benchmark; the bottom figure is the percentage of the column subject's list represented by the items in agreement.)

        S1    S2    S3    S4    S5    S6    S7
S1      --    52    39    35    48    74    52
        --    52    90    53    48    63    44
S2      52    --    35    43    35    35    39
        52    --    80    67    36    29    33
S3      90    80    --    60    30    50    50
        39    35    --    40    13    18    19
S4      53    67    40    --    53    47    47
        35    43    60    --    35    25    26
S5      48    36    13    35    --    48    35
        48    35    30    53    --    39    30
S6      63    29    18    25    39    --    30
        74    35    50    47    48    --    29
S7      44    33    19    26    30    29    --
        52    39    50    47    35    30    --

S2's information use does demonstrate a fairly clear pattern. The topics he addressed are, in a general sense, more closely related to those of the other professionals, especially if both agreement and list length are considered. While the degree of agreement is not large in a total-list sense, it is considerable once the length of the other processor's list is taken into account. The items S2 shared with S3 and S4 comprised 80 and 67 percent, respectively, of their total lists; the lowest such share among the professionals is 52 percent, with S1. In contrast, the highest agreement with a nonprofessional is 39 percent, and the length of the topic lists is, in each case, similar. One note here is that S1 and S2 both had lists of approximately the same length, each roughly equivalent in length to the nonprofessionals' lists. However, S2 has 16, rather than 9, years of experience, and he is 44 years of age. An implication which may be inferred here is that age and experience may make a difference in how information is used.

S3, while having a short list of topics addressed, nonetheless had a fairly high degree of correspondence between items appearing on his list and those appearing on the lists of the other professionals: S1 addressed 90 percent of his topics; S2, 80 percent; and S4, 60 percent. This was the highest level of agreement for any of the processors. The level of agreement with the nonprofessionals (S5, S6, and S7) was 30, 50, and 50 percent, respectively. More revealing, however, is the fact that the items in agreement comprised less than 20 percent of each nonprofessional's list of topics; this share was approximately double for the professionals. S3 had also earned the Certified Financial Analyst designation.

S4's results were also ambiguous when one considers the degree of agreement only. No clear-cut trend develops, since the professionals' degrees of agreement (53, 67, 40) are not notably different from those of the nonprofessionals (53, 47, 47). When one considers the length of the list as well, a degree of consistency does develop in favor of the professionals. The share of their lists used in "agreeing" with S4 ranges from 35 to 60 percent. For the nonprofessionals, the highest share is 35 percent, with S5.

S5's list of topics fit best with that of S1. However, in no case did S5's list comprise 50 percent or more of someone else's list. He was 48 percent in agreement with items on S6's list, but the items comprised only 39 percent of S6's list of topics. All other agreement levels (except with S1) ranged from 13 to 36 percent for both the professionals and nonprofessionals. And even though the nonprofessionals all had lists of similar length, when this aspect is added in, S5 still has a maximum agreement share (of the other processor's list) of 40 percent. S5 was the least experienced (in terms of length of time) of the nonprofessionals.
He was, how- ever, most experienced in dealing with new Offerings and had investi- gated several On his own earlier in the year. S6 covered more topics than any other subject. Except for 51, his degree of agreement was not high. Among the professionals, the level was 36, 13, 35, not including 51. When items in agreement as a function of the other processor's topic list's length is considered, the levels are 35, 50, and 47 percent. These levels for both raw agreement and with length considered is roughly equivalent to those with the nonprofessionals -- 39 and 30 percent for agreement, and 48 and 29 percent for length, respectively. No clear pattern emerges with the professionals or the nonprofessionals. $6 made numerous factual and computational errors. In fact, one is left wondering whether the degree of correspondence is a function Of the total items covered by $6 or of chance. The latter becomes much more plausible when the protocol evidence between $1 and $6 is considered. In al- most every case, the conclusion drawn relative to an information item is quite different. 83 S7 was the most experienced of the nonprofessional investors. He belonged to an investment club which regularly performed analyses similar to those performed in this study. As can be seen from the data, S7's level of agreement was not large with any particular processor, in no case exceeding 45 percent. Again, no clear-cut trend emerges. The levels Of agreement with the professionals (44, 33, 19, and 26 percent respectively) roughly corresponds with those with the nonprofessionals (30 and 29). When the length of the list of topics is considered, the picture is slightly clearer. The length adjusted agreement levels are 52, 39, 50, and 47 percent, respectively, with the professionals and 35 and 30 percent with the nonprofessionals. $7 appears to be slightly more in agreement with the professionals than the nonprofessionals. There are some rough trends apparent in the data. If an agree- ment level of 50 percent is used as a benchmark, then several con- clusions can be drawn. First, the professionals as a group did tend to use the same information more than did the nonprofessionals. In fact, if any trend was evident about the nonprofessionals, it was that they were more like the professionals than they were like each other! However, one point is clear. When given the same data, both the professionals and the nonprofessionals tended to look at a wide variety of information. While this study did not assess the degree of correlation among items in the data set (which might indicate a higher degree of agreement), the lack of correspondence between items addressed across subjects was pervasive enough to suggest that all of the processors were picking different elements out as important. 84 One point which became clear in the protocols was the fact that the differential instructions as to the objective of the exercise (analysis for clients versus analysis for own use) made no difference at all. They were routinely ignored. All of the professional subjects simply did an analysis and then proceeded to talk about the kind of individual the subject firm would be suitable for. This invariably included themselves. In no case did any subject suggest that the analysis would be different based on the prOSpective use. Age and experience does appear to be a factor. S3 and $4 achieved the highest levels of agreement from the other processors, both achieving better than 50 percent on average. 
They were both over 50 years of age and had more than 20 years of experience. And even though their lists were shortest, the above results were still apparent among the professionals. One tentative inference which can be drawn from the data in Table 3 and the protocols is that perhaps an important variable in analysis is how often one does security analysis in a frequency sense rather than how long (overall) one has been doing it. For example, both 55 and S6 indicated that they did such analyses more Often than did S7. While their overall results are not markedly different from S7's, they each did achieve at least one result (63% for S6, 48% for 85) which was better than S7's. While the context and nature Of this research is limiting, the finding of a wide variety of items addressed has some relevance for prior research such as that of Ebert and Kruse [1978]. They found that the R2 of regression models in an investment context was low. 85 This result is certainly not in conflict with the tentative findings of this research. It would seem logical to surmise that one explan- ation of low Rz's in preselected items (independent variables) studies is that the items selected simply do not represent items which are both familiar and useful in a parsimonious fashion by the processors.17 As a result, the models are not overly explanatory relative to output decisions. In addition, subject to the above constraints relative to the context and sample size, the findings also bear some relevance to the work done by Hofstedt [1972]. While nonprofessionals in the present research did indeed use more qualitative data than did the professionals, they also simply did more analysis in general. In fact, not only did most of the nonprofessionals do the qualitative analyses, they also performed the same types of quantitative analyses as did the professionals. In general, it appeared that the non- professionals simply needed more information to make a decision when compared to the professionals. It is interesting to note that none of the nonprofessionals complained about too much data, while every one of the professionals voiced such a complaint. This last result should be of interest to accounting policy-making bodies. Individuals who, as a result Of their professional position, act as proxies for other investors, all felt inundated with information they felt was of questionable value. This position is most clearly stated by S4: "Everything that I'm interested in could be put in about four pages, 17Another, perhaps more interesting supposition is that there is no well-developed theory or possibility of estimating future per- formance of stock prices. In this context, where no past prices exists, the increased uncertainty perhaps leads to more diverse actions than usual. 86 and the rest of it you can throw in the wastebasket... and if you care to relate that to the S.E.C., why, I'd be glad to have you... (laughter)...." Finally, one factor which is important both to the information items addressed and to the experiment in general is history -- specifically, the state of the economy. All of the subjects stated some concern about the condition of the stock market in particular and the economy in general. Most wondered if anyone would or should be involved in the stock market during such an uncertain time period. This concern centered on the fact that high quality notes and other interest bearing securities were paying rates of interest well in excess Of the traditional returns from stock investments in their view. 
5.3 Hypothesis Three: Operator Mix Analysis

H3: Subject classes will differ in operators employed during the analysis.

The operator mix of each subject and the time spent in analysis appear below in Table 4. Like the data regarding strategies and information use, the results are not unambiguous. For example, tag 1, the qualitative impression expression operator, is the most popular for all processors. Beyond this operator, however, the results are mixed, but trends are certainly evident. The operators employed are a reflection of the level of processing engaged in by a subject. In almost every case, the professionals used fewer operators than did the nonprofessionals. This result appears to be related more to the items selected for analysis than to the total time employed in processing. Note that S2 used more time than two of the nonprofessionals, but still used far fewer operators.

TABLE 4
TIME AND OPERATORS EMPLOYED BY SUBJECTS DURING PROCESSING

                        S1    S2    S3    S4    S5    S6    S7
Tag 1                   45    28    18    32    49    30    20
CI                       5     3     0     2     9    13    14
Increase                15     1     1     2     8    15     9
Compute                  3     8     3     1     9     6     8
Formulate Relation       9     0     0     0     5    11     2
Trend                    2     2     1     0     0     3     4
CA                       0     0     0     0     2     0     0
Formulate Problem        2     1     1     1     0     1     1
Total Operators Used    81    43    24    38    87    79    58
Time (minutes)          35+   88+   15    13    72    49+   97

(+ denotes more than)

The data continue to indicate that S1, in his analysis, was most similar to the nonprofessionals. As with his information use, this was true whether one considers single operators or the overall mix. For example, S1 is the only professional who utilized the formulate relation operator. While his utilization of the CI operator is not as high as that of the nonprofessionals, it is nonetheless the highest among the professionals. For all other operators, S1 is clearly most similar to the nonprofessionals.

S2 is in line with the dominant trend for the professional subjects -- toward less processing. The only operator employed by S2 to any significant degree (other than tag 1) is the compute operator, and his level of use of this operator was most similar to that of the nonprofessionals. For most of the other operators, S2's use was less than fifty percent of that of the nonprofessionals. In addition, he did not employ the formulate relation operator, which was used by every nonprofessional subject.

S3's operator utilization pattern is quite similar to S2's, but S3 used even fewer operators than did S2. S3's second most popular operator was compute, which was used to assess growth rates for sales and net income. Other than tag 1 and compute, no operator was used more than once, and three -- formulate relation, CI, and CA -- were never used. S3 did very little processing that was not qualitative in nature, a result consistent with the professional view, related in the last section, that detailed analysis was not necessary. S3's mix of operators was even less related to the nonprofessionals' than S2's.

S4, like S2 and S3, did far less processing than the nonprofessionals. Almost all of S4's analysis was qualitative. In fact, as evidenced by the following quote, he did not often use quantitative techniques in contexts similar to that of this study: "I look at the figures and I just... I really don't do a whole lot of work with a calculator or anything... I just look at the progression of numbers through here... I don't put numbers on paper, or anything..." S4 used the compute operator once -- to determine the return on equity. The formulate relation, trend, and CA operators were never used.
S4's second most popular operator (after tag 1) was the CI and increase, both used twice. 89 All of the nonprofessionals did more analysis, on average, than the professionals. All used the CI, increase, compute, and formulate relation operators fairly heavily. In fact, the mix clearly indi- cates that the nonprofessionals tended to do quantitative analysis more often than the professionals. It should be noted that this point addresses quantitative techniques employed as a percentage of total analysis, not the percentage of quantitative data reviewed relative to all data. In general, the nonprofessionals tended to use the CI (55 and S7) or the increase (S6) operators most often (excepting, of course, tag 1). This is in contrast with $2 and S3, who used the compute. All of the nonprofessionals used formulate relation to develop linked relationships in the data. As pointed out earlier, only 51 among the professionals used this operator. In summary, the nonprofessionals clearly seemed to use both more and a different mix of operators than the professionals, with the exception of 51. Again, this finding seems to present tentative evidence that processing behavior is related to age and experience. For accounting researchers, the results appear to imply that experienced subjects engage in somewhat different behaviors than do less experi- enced subjects. As in the general case, this result is limited in generalizability to contexts similar to that found in this study. Finally, an additional caution is also necessary related to the Operator analysis above. The analysis implicitly relies on Newell and Simon's assumption of invariance of processing speed (on average). It is only in this context that operator use across subjects can be compared. 90 5.4 Hypothesis Four: Time Use Assessment H4: Subject classes will differ in amount of time spent in analysis (both on specific items of information and overall time). As can be seen in Table 4, there were substantial differences in the amount of time spent in analysis by the different processors. The professional subjects, when viewed individually or as a group, clearly tended to use less time with the task than did the nonpro- fessionals. Only $2 spent an amount of time similar to that used by the nonprofessionals, and much of his time was used to read the prospectus (rather than in analysis). When time spent in analysis (here time spent is viewed as the application of Operators, not actually time episodes) is considered, even $2's time is much closer to that of the other professionals, rather than to that of the non- professionals. Most of the professional subjects spent their time looking only at specific items in the information provided. And $2, in his analysis generally tended to examine the items as the other professionals, a point which is verified by reviewing the topic areas in his protocol model, as well as the results of section 5.2. The nonprofessional subjects used their time to engage in a much more thorough analysis of the provided information. This included explicit examination of most aspects of the financial condition, asset base, and results of operations of the subject firm. Not only were sales, earnings, and debt reviewed, but also costs, inventory includ- ing cost flow procedure, as well as certain aspects of the statement Of changes in financial position. In fact, it became difficult in 91 many cases to discern which aspects of the analysis were more impor- tant since no real distinctions were drawn by the nonprofessional processors. 
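One crude way to put the operator counts and time figures of Table 4 on a common footing is an operators-per-minute rate, which is comparable across subjects only under Newell and Simon's invariance assumption noted at the end of the previous section. The sketch below uses the Table 4 values; the "+" times are lower bounds, so the corresponding rates are upper bounds.

# Crude operators-per-minute rates from Table 4; times marked "+" in the table
# are lower bounds, so the corresponding rates are upper bounds.
subjects = {
    # subject: (total operators, minutes, time reported as a lower bound?)
    "S1": (81, 35, True),
    "S2": (43, 88, True),
    "S3": (24, 15, False),
    "S4": (38, 13, False),
    "S5": (87, 72, False),
    "S6": (79, 49, True),
    "S7": (58, 97, False),
}

for name, (ops, minutes, lower_bound) in subjects.items():
    rate = ops / minutes
    note = " (at most, since time is a lower bound)" if lower_bound else ""
    print(f"{name}: {rate:.2f} operators/minute{note}")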
Little evidence on the importance Of time spent in analysis appears in the literature since most structured lab studies tend to have fixed time analysis sessions. In his earlier cited study, Hofstedt [1972] did find preliminary evidence indicating that pro- fessionals tend to use less time and more quantitative data than do students in financial statement analysis. Hofstedt's finding appears to be substantially supported by the results of this research.18 Time, like any input in analysis, can be viewed as a costly item. Given any number of alternative investments, it is readily apparent that one's ability to assess them is a function Of time available. Excessive use Of time on any particular analysis is clearly going to result in fewer assessments being made or less time being spent on other analyses. Since Lewellyn, Lease and Schlarbaum [1976] found that most nonprofessionals in their study spent less than three hours monthly on such analysis, time per alternative analyzed would seem to be a clear limiting factor on total assessments made. Another perspective may be used to view the time use aspect of the analysis by the nonprofessionals. As stated above, the time was spent in detailed probing of the firm. One potential inference is that the nonprofessionals felt that by spending more time and delving 18None of the subjects appeared worried about time spent, per se. In fact, all wanted to talk after the experimental session it- self was over. 92 more deeply into the firm they Obtained a ”surer" knowledge of its potential worth. Further, it may be the case that such detailed knowledge leads the nonprofessionals to feel that distinctions among items is unnecessary. The detail itself is a sufficient basis to assert knowledgeability about the firm, and hence an ability to make a decision. While there is no protocol evidence to support these suppositions, the processing behavior Of the nonprofessionals does. Instances in which the nonprofessionals ranked or otherwise exhibited an ability to discriminate in terms of importance were few; only $5 did so with any regularity. It is possible that the time element is an indicator of a lack of a clear-cut analysis process which entails data screening and/or ranking. This premise is similar to results reported by Ashton and Kramer [1980] that students have a lesser degree of self-insight and more ambiguous weighting schemes when compared to professional accountants. While the nonprofessional subjects in this study are experienced and reported that they regularly engaged in investment analysis, the above inference may still be valid. It seems reason- able to assume that the nonprofessionals may, at each analysis, go through an elongated, refamiliarization process since they don't do such analysis on a daily basis as do the professionals. Finally, the above findings also support conclusions drawn in earlier sections of this chapter. Nonprofessionals seem to be doing something different than do professionals. The point that policy- makers should probably be cognizant Of placement vis-a-vis importance again seems appropriate. From an accounting research perspective, it 93 would appear that care should be taken in using nonprofessionals to make statements or inferences about professionals. 5.5 Processing Behavior Like most of the data generated in this study, the processing behavior could not be categorized in an unambiguous fashion for all subjects. However, there were some strong trends discernable in the data, especially for the professional subjects. 
For example, consider the following excerpts from the protocols of S4: "And... if I'm satisfied with the numbers and I like the company and I like the industry it's in and I think the product of their making has a future, then I'll probably make a decision to buy. And if all of these things don't come together, then I don't do anything with it...." S4 is clearly describing, in this review of his processing, what can be viewed as a large production system. This finding was also evident in the protocols of $2 and S3. The above quote can also be used to illustrate, at least in part, the evidence which emerged from the protocols in support of a linear compensatory model for the professional subjects. In fact, the evidence was almost overwhelming in indicating that such a process was being used. For example, consider the excerpts below from the protocols of $1, $2, $3, and $4: $1: "... sales originate from contracts awarded on a competitive bid basis... sounds somewhat to be as of a negative feature... 58% of (ACP's)... sales were to defense-related agencies. Another possible negative . having one large customer; however, given that we seem to be going to a more defense-oriented posture... possibly that could be a plus...." 94 $2: "Number one, it's a new issue... built-in bias right away against new issues.... Number two..., it is a company who's had a very rapid rate of growth... which is a positive.... However, 58... percent of the growth is with one customer... the government... which I would count to be a negative...." S3: "... Competitive bid basis -- well, that's bad... O.K., ... 58 percent went to defense... that's bad.... So far we have two bad marks and not one good... not a bad record... seventeen point six annual growth rate... pretty good. S4: "The stocks all being sold by the company -- that's one point in their favor.... Company's in an industry that's of interest... fact that it's plug compatible with IBM is of even more interest... one potential negative would be the fact that 58 percent of their... sales... were to defense related agencies...." As these quotes demonstrate, the protocol evidence pointed strongly toward the use Of a compensatory model. While this finding was not totally unexpected due to findings such as Payne's [1976] research which indicated that processors tended to use compensatory models for analyzing particular choice items, the strength of the finding was somewhat unexpected. In fact, this protocol evidence was the basis for the construction of models for the professionals which are essentially versions of Dawes and Corrigan's [1974] description Of linear models: "The trick is to decide what variables to look at and then to know how to add." The compensatory nature of the models is also compatible with Newell and Simon's [1972] characterization of the human information processor as a production system. As related earlier in the present study, a production consists of a series of premises, conditional in nature, which lead to certain actions if the prescribed conditions are met. In the usual case, the actions are triggered by the cascading 95 effect of the summarization of likelihoods related to each additional condition or fact. Thus, for example, the situation might begin with some perceived probability that a certain action will take place. As each additional fact (or condition) becomes realized, the prob- ability changes (either increases or remains the same). When some prescribed level of certainty is reached, the action occurs. 
If the level is not reached, nothing happens.

Now consider the scenario in a compensatory model. In the usual case, there is a criterion variable (action) and one or more independent or predictor variables (conditions). What makes a model compensatory is the fact that a minimum contribution from any given predictor is not required to reach some prescribed criterion level. Low values on one variable can be offset by (compensated for by) larger contributions from other variables. No relationship with the other predictors is required (the model assumes statistical independence). Each variable's beta weight can be viewed as analogous to its contribution to the unconditional probability in the production model. That is, summing the weighted prediction values leads to the criterion level. This summing activity is precisely what occurs in the production system case. The choice of the action level is relatively unimportant since it depends on the use of the system. Therefore, the claim is made that the activities in each case are similar. More precisely, the claim is made that compensatory models can be interpreted as a subclass of the set of production systems. That is, any compensatory model can be restructured as a production system.

For example, consider the case of a graduate admission process. Generally, some combination of undergraduate grade point average, Graduate Record Examination score, and a "recommendation" evaluation is used to assess the probable success of the candidate if admitted. The weights assigned to each predictor can be interpreted as probabilistic assessments of their unconditional value in predicting the criterion variable. The change in R2 or in the criterion score can be viewed as the conditional value of each variable (the relationships are best surveyed from a standardized perspective, since units are then equal). A possible production rule model would be:

    If (GPA) x ("weight") > "value", or
    If (GRE) x ("weight") > "value", or
    If (Evaluation) x ("weight") > "value", or
    If [(GRE) x ("weight") + (Evaluation) x ("weight")] > "value", or
    If [(Evaluation) x ("weight") + (GPA) x ("weight")] > "value", or
    If [(GRE) x ("weight") + (GPA) x ("weight")] > "value", or
    If [(Evaluation) x ("weight") + (GRE) x ("weight") + (GPA) x ("weight")] > "value",
    then admit the student.

Note that even though the above model may seem naive and "trivial" (for example, why not simply use the rule: if [(GPA) x (weight) + (GRE) x (weight) + (Evaluation) x (weight)] > "value", then admit the student), the underlying behavior is not. In essence, each line says, "If the parameter reaches 'value', then conclude success or failure with probability 'value'." That is, the admit action implicitly carries with it a prediction of future success at some probabilistic level. The production model merely presents a greater level of detail for assessing possible combinations of the underlying decisions being made.

An additional feature of subject processing was the evidence of a ranking scheme. For example, S4, in starting to talk about his processing, states: "If the majority of the stock is coming from selling stockholders, I don't read beyond the first page... I throw (the prospectus) in the wastebasket." S2 has virtually the same sentiments about the same point: "(These are) not shares sold by current officers... sometimes it's the kiss of death... getting a public market for their stock for estate purposes...."
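Returning to the restructuring claim made just before the two excerpts above, the correspondence between a linear compensatory rule and an enumerated set of production rules can be illustrated with a short sketch. The weights, cutoff, and candidate scores below are hypothetical and mirror only the graduate admission illustration; this is not a model fit to any subject in the study.

# Minimal sketch: a linear compensatory rule and an equivalent production-system
# reading of it, using the hypothetical graduate-admission example above.
from itertools import combinations

weights = {"GPA": 0.4, "GRE": 0.35, "Evaluation": 0.25}   # hypothetical standardized weights
cutoff = 0.60                                             # hypothetical criterion level

def compensatory_admit(scores):
    """Linear compensatory model: weighted sum compared to a single cutoff."""
    return sum(weights[k] * scores[k] for k in weights) >= cutoff

def production_admit(scores):
    """Production-system reading: fire the admit action as soon as any
    combination of weighted contributions reaches the criterion level.
    With non-negative contributions this agrees with the full weighted sum."""
    contributions = {k: weights[k] * scores[k] for k in weights}
    for r in range(1, len(contributions) + 1):
        for combo in combinations(contributions, r):
            if sum(contributions[k] for k in combo) >= cutoff:
                return True   # condition met; the action (admit) occurs
    return False              # level never reached; nothing happens

candidate = {"GPA": 0.9, "GRE": 0.4, "Evaluation": 0.7}   # hypothetical standardized scores
print(compensatory_admit(candidate), production_admit(candidate))   # True True

As in the text, the single weighted-sum rule and the enumerated production rules trigger the same admit decision (provided, in this sketch, that the weighted contributions are non-negative); the production form merely spells out the qualifying combinations.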
In fact, all of the professional subjects commented on the fact that the stock originated from the company rather than from selling stockholders. One element of the ranking seemed to be that negative features appeared to outweigh positive ones, especially if the negative fea- tures were encountered early in the processing. S3, in his analysis, came across two negatives early on. He then saw a positive (high growth) which he disbelieved due to the nature of one of the nega- tives (competitive bids). 53's response to this perceived discrepancy was to search for corroborating evidence to his belief that the situation as depicted was "not realistic." For example, the subject firm had less than one percent debt in its capital structure. While this was a positive to all other subjects, it became a negative for $3. The product's tie-in to IBM, a strong feature for almost all other subjects, likewise became a negative. In short, one perceived negative feature resulted in a ripple-effect which overshadowed subsequent positive ones. This negative feature impact also shows up, in less dramatic fashion, in the protocols of 52 when addressing the issue of having one large customer: 98 $2: "... Companies with single products... or with single markets... as soon as their market was cut Off they, uh...down went the price of stock.... One that comes to mind is ( ). They were in computer systems... a very difficult market, a lot of competitors..." Even though he found many positive features in the firm, $2 continued explaining, for several minutes, the effects of single customers or markets. This behavior never occurred relative to positive features. Additionally, even though positive aspects outnumbered negative ones for almost every processor (all except $3) the decision relative to the firm was generally quite narrow and in every case, hedged. The subjects generally thought the firm had good prospects if -- costs were kept down, the market turned up, etc. It should be noted that protocol evidence clearly existed for other types of processing. This is particularly evident if one breaks the processing down in terms of particular decision points. For example, one could dichotomize a processor's decision in terms of a decision to either stop or continue processing. In this context, the decision by S4 to either "dump the prospectus into the waste- basket" or continue depending on the origin of the securities offered can be interpreted as an indicator Of an elimination by aspects process,19 rather than the mere assignment of a larger weight to this variable. Once the decision was made to continue processing, the evidence clearly pointed toward a compensatory process. Most of the behavior described above in this section was found only in the protocols Of the professional subjects. The behavior of '9 This model, described by Tversky, appears in the 1976 article by Pa ne. The choice behavior consists of ranking each choice on the it attribute, and eliminating all but the highest ranking choice. Here, the "i" denotes importance of the attribute. In case Of ties, the same procedure is repeated for the i + 1th attribute. [Tversky,1972]. 99 the nonprofessionals was much more ambiguous. For example, S6 Often described information or elements addressed in the data as "good", but the meaning was not clear. Not once did $6 describe anything in a directly negative way. 
Instead, when describing something per- ceived as being good, he would say, "it would not be good if--," where the "if" explained the opposite scenario to that described as "good." $6 basically moved through the data without revealing his perceptions of particular elements. The only hint Of such perceptions came in his summarization of his view of the firm at the end of processing. There, he stated that the company "ought to be a good company if they kept costs down and kept turnover (personnel) low." S7 was clearer than 56, but only slightly. He Often stated that the firm had a problem with "x", without giving much information relative to the importance of "x". However, in his summarization at the end of processing, it was evident that he was much more conscious of the negative features of the firm, even though he had not explicitly designated them as such during processing. For reasons such as these, it was much more difficult to categorize the behavior Of $6 and $7.20 $5 exhibited behaviors similar to the professionals in that he expressed fairly clearly what items were important to him. For example, negative features of high importance were "red flags". Positive aspects of the firm were igoods". In short, 55 tended to exhibit compensatory behavior. However, even 55 did not Offer as clean protocol evidence as did the professionals relative to his over- all behavior. 20These differences are exemplified in the derived models for the nonprofessionals. In an attempt to be as accurate as possible, the models are not explicitly compensatory in overall structure. For example, the mere existence of a positive assignment device was not taken to be direct evidence of the existence of a negative one. TOO While the results of this study are exploratory in nature, they do point to some implications for research. In this task context, one might suspect that a linear model ought to fit subject judgments since, indeed, that appears to have been the predominant model used. In a similar context, Ebert and Kruse [1978] found that linear models could be fit to the decisions of security analysts, although the R2 values were less than .30 in general. One tentative -explanation for that finding may be the criterion variable used -- expected return. While this study is clearly not scientifically conducted in the usual experimental sense, it is interesting that only one subject attempted to estimate return -- the academic partici- pant. It is further enlightening to note that he engaged in at least three iterations before coming up with what he perceived as an accept- able estimation procedure. At worst, it is tentative evidence that probably none of the participants “Willfully engage in such activity. Another implication which may be drawn concerns Ungson, Braunstein, and Hall's [1981] earlier cited contention that the findings of structured problems may be inappropriate for ill-structured settings. While noting the limitations and nongeneralizability of results beyond the context of this experimental situation, some interesting con- clusions may be drawn. First, the processing behavior Often ascribed to structured settings (compensatory) was also found in the relatively unstructured settings Of this research. Second, in the absence of compelling theoretical guides, the choice Of predictor variables presents a substantial problem. The construction of realistic, non- artificial predictors would appear to be extremely difficult. For 10h example, in this study, as many as twenty-eight tOpic areas were addressed. 
Only three of these appeared on all of the processors' lists, representing at most thirty percent of items attended to by any particular subject. From this perspective, it seems that the problem is not so much one of having the wrong setting, but under- standing what is important and familiar across different settings. For example, while price/earnings (P/E) ratios may supposedly be important in a particular setting, they may not be to subject "i", who may not understand completely what they're supposed to do. Or, the subject may use an entirely different set of variables to do his own predicting. When presented with P/E ratios in the ex- perimental setting, he may simply be trying to apply inappropriate rules. These difficulties simply would not surface in most compen- 2 or regression weight values. satory models, except as lowered R Each subject, in the normal ill-structured setting, has the ability to review all available data with no real constraints as to how it should be condensed or summarized. Better understanding of the variables and their relationships to users and each other in parti- cular settings would probably go far in alleviating some of these problems currently found in modeling "judges" and problems. 5.6 Model Reliability As related earlier, the primary purpose of this research was the analysis of the processing behavior of the participants. The models derived were developed as vehicles for this assessment. It was not possible nor intended that the models be proposed as explicit subject models beyond the context of this experiment. Such a proposition would have required many arbitrary and indefensible assumptions about 102 the subjects' behavior since no tests were conducted relative to contexts different than that of this experiment. In spite of these limitations, some limited cross-validation and reliability proce- dures were conducted during the course of the research. These procedures are described below. In the general case, models are cross-validated by testing in a new population (one other than that from which the model was developed). This usually entails predicting some criterion variable based on inputs from the new population. In this study, validation consisted Of constructing a new case prospectus and allowing one subject and his derived model to assess the new case firm. The results of this analysis is described below. Validation procedures were limited by time, and subject con- straints. Many of the subjects simply did not have time (given the research time constraints) to participate in validation procedures. S4, who did participate, did yield interesting results Application of the derived model (see Appendix I - Figure 14) of S4 to the new data set yields a prediction of a negative view of the firm. This prediction is apparent from either of the two decision boxes of 54's model -- the growth potential of the industry or the origin of the stock being sold. Also note that this prediction is independent of the case firm's operating results. Obtained actual results matched those predicted by the model. While S4 talked of the origin of the stock as a "negative in his mind" in the second case, he clearly was not enamored with the firm and was ready to stop processing. This is evident in the following 103 quote, in which he also mentions operating results: I'I don't see anything that makes me excited about this company... there are many others doing the same thing.... They have a nice earning progression, but... 
I don't think much Of anything is going to happen to this stock...." After several additional explanatory comments, S4 said he really had nothing more to add (see the Appendix II, Figures 24-26). It can be inferred that the negative features outweigh any positive attributes. The retest reliability of the model must be assessed relative to the structure of the second data set. The subject firm of the second data set had a different line of business (computer related, but not minicomputers) as well as a different composition of stock being sold. Much of the stock in the second data set came from a selling stockholder. These revised features Sufficiently altered the data set such that a complete test of 84's derived model was not obtained. S4 stopped processing before examination of all elements in his derived model, just as that model would predict. In spite of these limitations, the model derived from the second case analysis is quite similar to the original model developed for $4. In fact, when S4 was probed after processing, he stated that he would normally review information on the auditors, legal matters, and whether venture capital firms were involved when interested in a particular firm. When these features are added to the derived model from the second case analysis, the Obtained model has seventy percent of the cues in 54's original model. Those cues left out, customer type, return on equity, and the product's relation to existing products 104 are relatively minor in impact. Customer type is an assessment of the expected margin (military customers have restricted margins), return on equity is related to sales growth, and the product's relation to other products is related to the uniqueness aspect of the product. That is, each of the cues are partially correlated to at least one other cue already in the model. One final aspect Of the validation of the derived models involves the processing order. As constructed, the derived models contain elements in the order processed by the subjects. In a process tracing framework, order is of importance, but not in an absolute sense. As Clarkson [1962] pointed out, a particular processor may change the order in which information items are assessed, either by chance or many other potential factors. The primary issue is whether the important components of the decision process are present in the model. For example, it may not be critical that a processor such as S4 address the industry item before addressing the origin of stock sold item. The critical issue is whether the behaviors and decisions predicted by the model relative to those items corresponds to those of the processor. In general, production model systems will enact first that action for which the preconditions for action have been met [Newell, 1973]. This aspect of processing models and the linear compensatory nature of the derived models are not strictly compatible. If one considers a statistical linear compensatory model, processing order is of importance unless the cues are uncorrelated; an unlikely situa- tion for most ill-structured problems. This result, while problem- 105 atical, does not alter the nature Of the findings of the study. The objective is to point out that care must be taken when building models that unwarranted statements about the importance of information not be made in the absence of tests or other data which supports the conclusions drawn. 
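Section 5.7 below reports an unadjusted inter-rater agreement of .90 and an adjusted figure of .66, the latter being Cohen's kappa. As a point of reference, the following is a minimal sketch of how both figures are conventionally computed from paired codings; the operator labels and the two coding sequences are hypothetical, not the study's actual coding data.

# Minimal sketch of unadjusted agreement and Cohen's kappa for two raters.
# The paired operator codings below are hypothetical.
from collections import Counter

rater_a = ["tag 1", "tag 1", "compute", "increase", "trend", "CI", "tag 1", "compute"]
rater_b = ["tag 1", "tag 1", "compute", "trend",    "trend", "CI", "tag 1", "increase"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # unadjusted agreement

# Expected chance agreement from each rater's marginal category proportions.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)   # Cohen [1960]
print(f"unadjusted agreement: {observed:.2f}, kappa: {kappa:.2f}")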
The indication that processing behavior is compensatory does not correspond to a statement about processing order and importance.

5.7 Inter-rater Reliability

Two raters were used in this study -- the researcher and one student. The student rater was a senior accounting major with no previous research experience or psychological training. He was given a description of the operators, Bouwman's [1980] description of his operators, and two training sessions. The training sessions involved coding the protocols of subjects from the pilot study and lasted about one hour. In short, the student coder was relatively naive. The obtained inter-rater reliability is shown below and discussed in the following paragraphs. The reliabilities are calculated as explained in section 4.6 of the dissertation.

                          Unadjusted    Adjusted
Inter-rater reliability      .90           .66

The unadjusted figure is simply the proportion of agreement between raters. The adjusted figure is the Kappa coefficient developed by Cohen [1960]. The obtained results are comparable to those reported in section 3.1 of this study. Analysis of the errors made indicated that, in general, errors were made within behavior types rather than across behaviors. For example, the rater would have difficulty distinguishing between an increase operator and a trend operator. The distinction is a rather fine one which is difficult to detect: one has to discern whether a change involves a period greater than two years, and without extremely careful analysis of the protocols the implication could be missed. The adjusted inter-rater reliability corrects for chance agreement between the raters. It is, therefore, a conservative assessment of agreement between raters.

CHAPTER VI
SUMMARY AND FUTURE DIRECTIONS

This study has addressed the problem of information use in the investment domain. Using the technique of verbal protocol analysis, an examination was made of the processing behavior exhibited by professionals and nonprofessionals in the assessment of an initial offering in the computer industry. The objective of the research was to ascertain the nature of processing behavior in a setting which emphasized external validity.

The issue of information use has been addressed in the literature of psychology as well as accounting. The results of research to date indicate that, though we can construct models which predict the output decisions of decision-makers well, we cannot really translate that predictive success into knowledge of how decision-makers make their decisions. Attempts to increase the consistency of decisions made, or to otherwise change the behavior of decision-makers, have met with limited success [Slovic, Fischoff, and Lichtenstein, 1977]. This has led to a search for new methodological techniques which allow for more realistic data inputs. Attempts have also been made to revamp commonly used techniques and experimental settings in order to increase the degree of correspondence to real-world settings [Ebbesen and Konecni, 1980; Olshavsky, 1979]. By increasing the external validity of their research efforts, researchers hope that they will then be able to address directly the problems that decision-makers face in their work. Another potential benefit is the development of decision aids which increase the consistency and quality of decisions.
Much Of the motivation for the study stemmed from the consider- able body of evidence that decision-makers often did not make decisions in ways posited by models used to describe the decision-making process [Einhorn and Hogarth, 1980; Ungson, Braunstein, and Hall, 1981]. The evidence indicates that in many situations man is not bayesian in nature; nor does he make his decisions in a linear fashion. The objective of this study was to allow decision-makers to process a problem which had not been decomposed in order to see what kind of behavior would be used. The task was processed by two groups -- professionals and non- professionals. The Objective here was to see if professional subjects would process the information in ways substantially different from the nonprofessionals. Obtained results of the study support the premise that profes- sionals do indeed process information differently than nonprofessionals. In this study, professionals used less time and examined different kinds of data than the nonprofessionals. Professionals tended to use only the sales, earnings, and debt items of the quantitative data provided. They expressed some interest in the audit report and legal matters, but were far more interested in the firm's product(s) vis-a-vis the marketplace as a whole. This appeared to be the primary decision point in this task and context. Little interest was shown in probing the firm in detail. 109 In contrast, the nonprofessionals closely examined most facets of the firm. As a result, they tended to spend greater amounts of time and address different items of information. The nonprofessionals did not clearly discriminate among items processed on the basis of importance. Little evidence existed to support any scheme for ranking or otherwise making distinctions among information items. The difference in processing also extended to the behaviors exhibited by the subjects. Professional subjects processed infor- mation in a linear compensatory fashion. Protocol evidence support- ing this conclusion were prevalent in the transcripts of all pro- fessional subjects. This result was in marked contrast to the behaviors exhibited by the nonprofessionals. No clear-cut pattern emerged. Some of the behaviors were linear in nature; for other situations, the implications were uncertain. The results of the research would imply several tentative con- clusions. The use of students and other nonprofessionals to draw inferences about the behaviors of professionals is not supported by the results of this study. The subjects in this study, all of whom were educated and held responsible business positions, performed differently than did the professionals. The placement of information appeared to matter to the nonprofessionals more so than for the professionals. Nonprofessionals tended to move sequentially through the data. It would seem appropriate to place information in ranked order in required disclosures for processors such as the non- professionals. This position was also supported by several Of the professional subjects in unsolicited comments during analysis. 110 As pointed out earlier, some of the tentative findings of this study do indicate that more research is needed in certain areas. For example, the finding that all of the subjects tended to process widely disparate topic areas from the data set clearly demonstrates, at least for the experimental subjects, a lack Of precise knowledge about what is important. 
A previous solution to this problem has been the application of factor-analytic techniques to develop composite factors which represent all of the underlying variables [Libby, 1975]. While clearly useful if the general areas (and factors) are interpretable and understood by subjects, the aggregation may serve to confuse, rather than clarify, the situation. The research objective may be better served by presenting nonaggregated data and allowing the subjects to select from the menu. As long as the menu is theoretically derived, the models should be capable of both individual and aggregate interpretation. That is, since most variables within a particular topic area are correlated (this is in fact the underlying premise of factor-analytic techniques), it would be possible to develop general models which are theoretically related to the individual models. The individual models would then be used to examine individual differences in processing strategies more closely.[21] And, as a minimum, one should expect such studies to indicate possible directions or emphasis for educational programs.

[21] This is essentially the technique demonstrated by Abdel-khalik [1973]. He found that subjects using nonaggregated data were able to make better decisions in particular tasks.

The strong indications of compensatory processing lend support to the findings of the extensive body of research which indicates that such models are good predictors of human decisions. In addition, there were some tentative indications as to when linear processing would be likely to occur (in this context), a question advanced for examination by Payne [1976]. In this study, linear processing occurred when characteristics of the single study firm were addressed. However, when other firms or situations were introduced, screening processes or other ranking schemes were mentioned. More research is needed to fully address this issue.

Another potentially interesting research question involves the examination of the variables assessed in this study in a more structured multiple-choice setting. Such a study would enhance the possibility of ascertaining the importance of particular data items to subjects. If done in a process-tracing setting, such a study could potentially offer information about how weights are derived in a multiple-choice setting; to date, this process has not been closely investigated. Additionally, if the choices offered are large enough, multiple data analysis techniques could be applied to the data, a procedure advocated by Payne, Carroll, and Braunstein [1978].

The results of this research study are clearly limited by the research design and the experimental context. Most investment decisions are not made in terms of initial offerings. However, the analysis of the firm and its position relative to other investment opportunities is reasonably representative of many such decisions in the real world. To the extent that decisions are made in environments which conform to the setting and context of this research, the results offer tentative evidence about expected behaviors. This conclusion is in conformity with Newell and Simon's [1972] contention that the task is the major determinant of processing behavior. By its very nature, exploratory research is probably best viewed as a vehicle for establishing fruitful avenues for future research.
Such studies usually entail the abrogation of some or even all of the usual tenets of experimental research, or involve data manipulations without the benefit of well-developed theoretical guidelines. This is clearly the case with the present study. The experimental sample was not random or large. The experimental sessions were not simultaneously administered. Internally, no structured tests were included in the data, nor were the data manipulation techniques standardized. All of these factors limit the generalizability of the results beyond the context of the study itself.

In the usual case, violations of accepted practice are permissible when the objective is to investigate particular aspects of a problem. A researcher may, for example, wish to maintain high internal validity and consistency in a given situation at the expense of greater external validity, or vice versa. As long as the objective is clearly stated and reasonable practices are adopted, such tradeoffs are not only acceptable but quite common. In this research, the objective was the maximization of realism in the experimental situation. While this objective can never be fully met, to the extent that it is achieved, experimental control is proportionately diminished. However, it is hoped that studies in the same vein as this one can be used to develop more precise and theoretically sound experimental studies in the classical sense.

APPENDICES

APPENDIX I

Subject Problem Behavior Graphs and Models

[The figures in this appendix are full-page diagrams; only their captions are reproduced below.]

Figure 3. Problem Behavior Graph of S1
Figure 4. Protocol Model of S1
Figure 5. Derived Model of S1
Figure 6. Problem Behavior Graph of S2
Figure 7. Protocol Model of S2
Figure 8. Derived Model of S2
Figure 9. Problem Behavior Graph of S3
Figure 10. Protocol Model of S3
Figure 11. Derived Model of S3
Figure 12. Problem Behavior Graph of S4
Figure 13. Protocol Model of S4
Figure 14. Derived Model of S4
Figure 15. Problem Behavior Graph of S5
Figure 16. Protocol Model of S5
Figure 17. Derived Model of S5
Figure 18. Problem Behavior Graph of S6
Figure 19. Protocol Model of S6
Figure 20. Derived Model of S6
Figure 21. Problem Behavior Graph of S7
Figure 22. Protocol Model of S7
Figure 23. Derived Model of S7

APPENDIX II

Reliability Models of S4

Figure 24. Problem Behavior Graph of S4-2
Figure 25. Protocol Model of S4-2
Figure 26. Derived Model of S4-2
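The behavior graphs catalogued above are built from a small vocabulary of coded operators (read, tag, compute, trend, increase, formulate relation), and Section 5.7 reports agreement between two raters on such codings both as a raw proportion (.90) and as Cohen's [1960] Kappa (.66). The sketch below uses invented codings, not the study's protocols, to show how the two statistics are computed and why Kappa is the lower, more conservative figure.

    from collections import Counter

    def agreement_and_kappa(codes_a, codes_b):
        """Raw proportion of agreement and Cohen's Kappa for two raters'
        operator codings of the same protocol segments."""
        n = len(codes_a)
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Chance agreement: for each operator, the product of the two raters'
        # marginal proportions, summed over all operators used.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(codes_a) | set(codes_b))
        kappa = (observed - chance) / (1 - chance)
        return observed, kappa

    # Invented example: two raters code ten protocol segments.
    rater_1 = ["read", "tag", "compute", "trend", "increase", "read", "tag", "compute", "read", "tag"]
    rater_2 = ["read", "tag", "compute", "increase", "increase", "read", "tag", "compute", "read", "trend"]

    print(agreement_and_kappa(rater_1, rater_2))  # (0.8, ~0.74): Kappa falls below raw agreement

The gap between the two statistics widens as the raters concentrate their codings on a few frequent operators, which is why the adjusted figure reported in Section 5.7 is the more conservative of the two.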
BIBLIOGRAPHY

Abdel-khalik, A.R., "The Effect of Aggregating Accounting Reports on the Quality of the Lending Decision: An Empirical Investigation," in supplement to Journal of Accounting Research 11 (1973): 104-138.

American Accounting Association, A Statement of Basic Accounting Theory, Sarasota: 1966.

American Institute of Certified Public Accountants, Inc., "Statements of the Accounting Principles Board," in Financial Accounting Standards, pp. 440-486. Stamford: Financial Accounting Standards Board, 1976.

Ashton, R.H., "An Experimental Study of Internal Control Judgments," Journal of Accounting Research 12 (Spring 1974): 143-157.

________, "Behavioral Assumptions of Normative Decision Theory: An Experimental Test of the Independence Axiom in an Accounting Business Context," in Behavioral Experiments in Accounting II. Edited by T. Burns. Columbus: The Ohio State University, 1979; pp. 175-204.

Ashton, R.H. and S.S. Kramer, "Students as Surrogates in Behavioral Accounting Research: Some Evidence," Journal of Accounting Research 18 (Spring 1980): 1-15.

Bhaskar, R. and H. Simon, "Problem-Solving in Semantically Rich Domains: An Example from Engineering Thermodynamics," Cognitive Science 1 (1977): 193-215.

Bhaskar, R., "An Information-Processing Analysis of the Cost Accounting Domain," unpublished manuscript, Ohio State University, 1978.

Bhaskar, R. and J. Dillard, "Skill Acquisition in Semantically Rich Domains," paper presented at AERA Symposium, San Francisco, April 1979.

________, "Human Cognition in Accounting: A Preliminary Analysis," in Behavioral Experiments in Accounting II. Edited by T. Burns. Columbus: The Ohio State University, 1979; pp. 323-325.

Biggs, S., "An Empirical Investigation of the Information Processes Underlying Four Models of Choice Behavior," in Behavioral Research in Accounting II. Edited by T. Burns. Columbus: The Ohio State University, 1979; pp. 35-81.

Biggs, S. and T. Mock, "Auditor Information Search Processes in the Evaluation of Internal Controls," Working Paper 2-80-6, University of Wisconsin, 1980.

Bouwman, M.J., "Computer Simulation of Human Decision-Making in Accounting: The Analysis of Financial Statements," unpublished Ph.D. dissertation, Carnegie-Mellon University, 1978.

________, "The Use of Accounting Information: Expert Versus Novice Behavior," unpublished manuscript, University of Oregon, 1980.

Bowman, R.G., "The Importance of a Market-Value Measurement of Debt in Assessing Leverage," Journal of Accounting Research 18 (Spring 1980): 242-254.

Bruner, J.S., J.J. Goodnow, and G.A. Austin, A Study of Thinking, cited in H. Simon, K.J. Gilmartin, and A. Newell, Models of Thought. Edited by H. Simon. Englewood Cliffs: Prentice-Hall, 1979; p. 85.

Clarkson, G., Portfolio Selection: A Simulation of Trust Investment, Englewood Cliffs: Prentice-Hall, 1962.
Cohen, J., "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement 20 (1960): 37-46.

Davis, R., E.H. Shortliffe, and B.G. Buchanan, "Production Rules as a Representation for a Knowledge-Based Consultation Program," Artificial Intelligence 8 (1977): 15-45.

Dawes, R.M. and B. Corrigan, "Linear Models in Decision-Making," Psychological Bulletin 81 (Spring 1974): 95-106.

Dyckman, T.R., M. Gibbins, and R.J. Swieringa, "Experimental and Survey Research in Financial Accounting: A Review and Evaluation," in The Impact of Accounting Research on Practice and Disclosure. Edited by A.R. Abdel-khalik and T. Keller. Durham: Duke University Press, 1978; pp. 48-105.

Ebbesen, E.B. and V.J. Konecni, "On the External Validity of Decision-Making Research: What Do We Know About Decisions in the Real World?" in Cognitive Processes in Choice and Decision Behavior. Edited by T.S. Wallsten. Hillsdale: Lawrence Erlbaum Associates, 1980; pp. 21-45.

Ebert, R.J. and T.E. Kruse, "Bootstrapping the Security Analysts," Journal of Applied Psychology 63 (1973): 110-119.

Einhorn, H.J. and R.M. Hogarth, "Behavioral Decision Theory: Processes of Judgment and Choice," in Annual Review of Psychology. Edited by M. Rosenzweig and L. Porter. Palo Alto: Annual Reviews, Inc., 1980; pp. 53-88.

Elstein, A.S., L.S. Shulman, and S.A. Sprafka, Medical Problem Solving, Cambridge: Harvard University Press, 1978.

Ericsson, K. and H. Simon, "Retrospective Verbal Reports as Data," Complex Information Processing Working Paper #388, 1978.

________, "Thinking Aloud Protocols as Data: Effects of Verbalization," Complex Information Processing Working Paper #397, 1979.

________, "Verbal Reports as Data," Psychological Review 87 (1980): 215-251.

Financial Accounting Standards Board, Statement of Financial Accounting Concepts No. 1, Stamford: FASB, 1978.

Grant, E., "Market Implications of Differential Amounts of Interim Information," Journal of Accounting Research 18 (Spring 1980): 255-268.

Hayes, J.R. and H. Simon, "Understanding Written Problem Instructions," in Knowledge and Cognition. Edited by L.W. Gregg. Hillsdale: Lawrence Erlbaum, 1974; pp. 167-200.

Hoffman, P.J., "The Paramorphic Representation of Clinical Judgment," cited by R. Dawes and B. Corrigan, "Linear Models in Decision-Making," Psychological Bulletin 81 (Spring 1974): 100.

Hofstedt, T.R., "Some Behavioral Parameters of Financial Analysis," The Accounting Review 47 (October 1972): 679-692.

Howard, J.A. and H.M. Morgenroth, "Information Processing Model of Executive Decision," Management Science 14 (1968): 416-428.

Kleinmuntz, B., "The Processing of Clinical Information by Man and Machine," in Formal Representation of Human Judgment. Edited by B. Kleinmuntz. New York: Wiley, 1968; pp. 14-.

Lewellen, W.G., R.C. Lease, and G.G. Schlarbaum, "Patterns of Investment Strategy Among Individual Investors," The Journal of Business (July 1976): 296-333.

Libby, R., "The Use of Simulated Decision-Makers in Information Evaluation," The Accounting Review 50 (July 1975): 475-489.

________, "Bankers' and Auditors' Perceptions of the Message Communicated by the Audit Report," Journal of Accounting Research 17 (Spring 1979): 99-122.

________, "The Impact of Uncertainty Reporting on the Loan Decision," supplement to Journal of Accounting Research 17 (1979): 35-71.

Libby, R. and B. Lewis, "Human Information Processing Research in Accounting: The State of the Art," Accounting, Organizations and Society 2 (1977): 245-268.

McGhee, W., M. Shields, and J. Birnberg, "The Effects of Personality on a Subject's Information Processing," The Accounting Review 53 (July 1978): 681-687.
Birnberg, "The Effects of Personality on a Subject's Information Processing," The Accounting Review 53 (July 1978): 68l-687. Miller, F.D. and E.R. Smith, "Limits on Perception of Cognitive Processes: A Reply to Nisbett and Wilson," _§ychological Review 85 (1978): 355-362. Miller, G.A., "The Magical Number Seven, Plus or Minus Two," Psychological Review 63 (1956): 81-97. Mock, T.J. and J. L. Turner, Internal Accounting Control Evaluation and Auditor Judgment, New York: AICPA, 1981. Montgomery, H., "A Study of Intransitive Preferences Using a Think Aloud Procedure," Goteborg Psychological Reports 5 (1975): l-l4. Newell, A. and H.A. Simon, Human Problem Solving, Englewood Cliffs: Prentice-Hall, 1972. Newell, A., "Production Systems: Models of Control Structures," in Visual Information Processing, Edited by W.G. Chase; New York: Academic Press,71973; pp. 463-526. Nisbett, R.E. and T.D. Wilson, "Telling More Than We Know: Verbal Reports on Mental Processes," Egychological Review 84 (1977): 231-257. Olshavsky, R.W.,"Task Complexity and Contingent Processing in Decision Making: A Replication and Extension," Organizational Behavior and Human Performance 24 (1979): 300-316. Orne, M.T., "Comnunication by the Total Experimental Situation," in Communication and Affect. Edited by P. Pliner, L. Krames, and T. Alloway;7New YBFk: Academic Press, 1973; pp. 157-l9l. Pankoff, L. D. and R. L. Virgil, "Some Preliminary Findings From A Laboratory Experiment on the usefulness of Financial Accounting Information to Security Analysts, in supplement to the Journal of Accounting Research 8 (1970):l-48. Payne, J.W., "Task Complexity and Contingent Processing in Decision- Making: An Information Search and Protocol Analysis," Or aniza- tional Behavior and Human Performance 16 (l976): 366-387. Payne, J.W., M. L. Bruanstein, and J.S. Carroll, "Exploring Predeci- sional Behavior: An Alternative Approach to Decision Research," Organizational Behavior and Human Performance 22 (1978): l7-44. Payne, J.W., "Analyzing Decision Behavior: The Magician's Audience," in Cognitive Processes in Choice and Decision Behavior. Edited by T.S. Wallsten; Hillsdale: Lawrence Erlbaum Associates, 1980; pp. 69-76. 193 Reilly, F.K., Investment Analysis and Portfolio Management, Hinsdale: The Dryden Press, 1979. Rosenberg, B. and J. Guy, "Prediction of Beta from Investment Funda- mentals,” Financial Analysts Journal (May-June 1976): 60-71. , "Prediction of Beta from Investment Fundamentals," Financial Analysts Journal (July-August 1976): 62-70. San Miguel, J.S., "Human Information Processing and Its Relevance to Accounting: A Laboratory Study," Accounting Organizations and Society 1 (1976): 357-373. Schroeder, H.M., M.J. Driver, and S. Streufert, Human Information Processing, New York: Holt, Rinehart and Winston, 1967. Shields, M., "Some Effects of Information Load on Search Patterns Used to Analyze Performance Reports," AccOunting, Organizations, and Society 5 (1980): 429-442. Simon, H.A., "Information Processing Models of Cognition," in Annual Review of Psychology. Edited by M. Rosenweig and L. Porter; Palo Alto: Annual Reviews, Inc., 1979; pp. 363-393. Slovic, P. and S. Lichtenstein, "Comparison of Bayesian and Regression Approaches to the Study of Information Processin in Judgment," Organizational BehaviOr‘and'HUman Performance 6 (1971): 649-744. Slovic, P., B. Fischoff, and S. Lichtenstein, "Behavioral Decision Theory," in Annual Review Of'Psychology. Edited by M. Rosenweig and L. Porter Palo Alto: Annual Réviews, Inc., 1977; pp. 1-39. 
Snowball, D., "On the Integration of Accounting Research on Human Information Processing," Accounting and Business Research (Summer 1980): 307-318.

Stephens, R.C., "Accounting Disclosures for User Decision Processes," Reprint Series #RS 79-25, The Ohio State University, 1979.

Stephens, R.C., J.F. Shank, and R. Bhaskar, "The Lending Decision: A New Perspective on the Role of Accounting Information," unpublished manuscript, The Ohio State University, 1980.

Swieringa, R.J., M. Gibbins, L. Larson, and J.L. Sweeney, "Experiments in the Heuristics of Human Information Processing," in supplement to Journal of Accounting Research 14 (1976): 159-187.

Thorndyke, P.W. and B. Hayes-Roth, Human Processing of Knowledge from Text, Santa Monica: Rand Corp., 1979.

Tversky, A., "Elimination by Aspects: A Theory of Choice," Psychological Review 79 (1972): 281-299.

Tversky, A. and D. Kahneman, "Judgment Under Uncertainty: Heuristics and Biases," Science 185 (1974): 1124-1131.

Ungson, E.R., D.N. Braunstein, and P.D. Hall, "Managerial Information Processing: A Research Review," Administrative Science Quarterly 26 (March 1981): 116-133.

Waterman, D.A. and A. Newell, "Protocol Analysis as a Task for Artificial Intelligence," Artificial Intelligence 2 (1971): 285-318.

________, "PAS-II: An Interactive Task-Free Version of an Automatic Protocol Analysis System," IEEE Transactions on Computers 4 (1976): 402-413.

Wormly, W.P., "Portfolio Manager Preferences in an Investment Decision-Making Situation: A Psychological Study," unpublished Ph.D. dissertation, Harvard University, 1976.