AN EVALUATION OF CONVENTIONAL AND STATISTICAL METHODS OF ACCOUNTING VARIANCE CONTROL

Robert W. Koehler
1967
Michigan State University

This is to certify that the thesis entitled AN EVALUATION OF CONVENTIONAL AND STATISTICAL METHODS OF ACCOUNTING VARIANCE CONTROL presented by Robert W. Koehler has been accepted towards fulfillment of the requirements for the Ph.D. degree in Accounting.

Major professor

ABSTRACT

AN EVALUATION OF CONVENTIONAL AND STATISTICAL METHODS OF ACCOUNTING VARIANCE CONTROL

by Robert W. Koehler

Standard costs are developed primarily to aid management in performance control. Accountants typically indicate the need for follow-up if the variance exceeds some selected percentage of the standard. These percentage cut-off points are subjectively determined by intuition, judgment, and experience. A 10 per cent variance is commonly designated as significant.

Lack of objective criteria for significance determination has hampered control. Furthermore, any summary report used as the principal control device permits significant variances to be averaged-out over time and to be off-set between operations. In addition, of course, such a report does not facilitate timely control because those significant variances that are not averaged-out are still not detected until after the report is issued.

Accountants have not adequately considered the reasons for not investigating all variances. They ignore the fact that labor and overhead efficiency, material usage, and volume, as well as some manufacturing costs, vary because of unexplainable factors which are identified as chance. Once chance is recognized as contributing to some variances, probability statistics evolves as a useful tool for significance determination because it involves procedures for evaluating patterns of chance occurrences.

The hypothesis that was tested in this dissertation is that new applications of presently developed statistical tools can increase the effectiveness of accounting variance control. In the test, all of the proposed statistical models resulted in significantly greater overall control than the commonly used 10 per cent cut-off point. Consequently, it is recommended that statistical procedures be adopted to aid in variance control.

Statistical models permit explicit consideration of various combinations of the following relevant factors:

1. Probability distribution of chance performances. (The performances that vary for unexplainable reasons.)
2. Probability distribution for each assignable cause. (These include faulty equipment, faulty materials, laziness, etc.)
3. Probability of making an unwarranted investigation (Type I error).
4. Probability of accepting variance when an investigation is warranted (Type II error).
5. Opportunity cost of Type I error.
6. Opportunity cost of Type II error.
7. Prior probabilities of the occurrence of chance and each assignable cause.
8. Probability that any given variance is due to chance and the probability that it is due to each assignable cause.

Initially four statistical models that had been proposed by others were examined. Each contained some questionable aspects. In an effort to counteract these, this writer constructed two additional models. One is an extension of Classical statistics which considered factors 1 through 6.
The other, which is identified as the Minimization approach, contains an element of Bayesian statistics in that it incorporates factor 7 in addition to the first six. All of these statistical models, in addition to the 10 per cent cut-off point, were then tested to determine the best model for control purposes and to substantiate the hypothesis that statistical models are more desirable.

The test consisted of three parts. First, a hypothetical example was developed for which the causes and performance values of 1,000 performances of a certain operation were assumed. Second, these values in conjunction with economic assumptions were used to compute the upper and lower control limits for each of the models under four testing plans. The third phase of the test consisted of a financial analysis conducted to rank the approaches for control effectiveness for each corresponding control limit and testing plan.

The most significant conclusion is that all of the statistical procedures resulted in significantly greater overall control than the 10 per cent cut-off point. As expected, the Minimization approach, which incorporated the largest number of relevant factors, produced the greatest overall control. Factor 8, which was used in two of the models that had been proposed by others, proved to be a sufficiently important determinant of effective control limits to outweigh some of the other deficiencies associated with these models. However, the individual rankings of the statistical approaches can be expected to vary somewhat depending upon the probability distributions of chance and assignable cause performances and also upon the testing plan with its corresponding control limit.

The example also illustrated how significant performances can be averaged-out so that they are not reflected in summary reports. To reduce this average-out effect and to facilitate more timely control, it is suggested that statistical models be applied at the performance level. However, because statistical procedures take cognizance of the degree of summarization, they can also be used for better interpretation of summary reports.

AN EVALUATION OF CONVENTIONAL AND STATISTICAL METHODS OF ACCOUNTING VARIANCE CONTROL

By Robert W. Koehler

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Accounting and Financial Administration

1967

© Copyright by ROBERT WALLACE KOEHLER 1968

ACKNOWLEDGMENTS

The author extends his sincere appreciation to Dr. James Don Edwards, Chairman; Dr. George C. Mead; and Dr. Richard F. Gonzales, who were members of his doctoral committee. Each contributed of his time, talent, and encouragement during the progress of the research and writing.

Special thanks go to Dr. James Don Edwards and the faculty in the Department of Accounting and Financial Administration at Michigan State University for contributing greatly toward my academic development and for financial aid received during the initial stages of my doctoral studies.

Numerous members of the Department of Accounting and Business Statistics at The Pennsylvania State University acted as sounding boards for my ideas. Appreciation is expressed for the benefit resulting from these exchanges. The author is especially grateful to Dr. William L. Ferrara who edited much of the manuscript.

For the constant encouragement and devotion of my mother and my grandmother go my heartfelt and everlasting thanks.

TABLE OF CONTENTS

Page

ACKNOWLEDGMENTS . . . . . . . . . . .
. . . . . . . iii LIST OF TABLES . . . . . . . . . . . . . . . . . . vii LIST OF FIGURES. . . . . . . . . . . . . . . . . . xiii LIST OF APPENDICES . . . . . . . . . . . . . . . . xiv Chapter I. INTRODUCTION . . . . . . . . . . . . . . . 1 Background The Problem Helpful Information Attempted Solutions Purpose of Dissertation Hypothesis Methodology Contributions II. AN EVALUATION OF CONVENTIONAL VARIANCE CONTROL . . . . . . . . . . . . 15 Conventional Significance Determination Treatment in the Literature Aggregation Problems Need for further Study Dissertation Objectives III. CONTROL AND CHANCE CONCEPTS. . . . . . . . 28 Definitions of Control The Notion of Chance Illustration of Overlapping POpulations Chance Influences on Individual Variance Classifications Conclusions IV. STATISTICAL CONTROL TECHNIQUES-- EVALUATION OF THREE PROPOSED METHODS . . 51 Hypothesis Testing The Basic Control Chart Approach iv Chapter Page The Bierman, Fouraker, and Jaedicke Approach Analysis of the Bierman, Fouraker, and Jaedicke Approach McMenimen Approach Conclusions V. STATISTICAL CONTROL TECHNIQUES-- TWO MORE-REFINED METHODS . . . . . . . . 92 An Equilization Approach Comparison of the Equilization Approach with the Bierman, Fouraker, and Jaedicke Approach Bayesian Statistics Bayesian Application to Quality Control Application of a Bayesian Concept to the Meat-Cutter Example--Minimization Approach VI. A TEST OF THE ACCOUNTING AND STATISTICAL CONTROL TECHNIQUES . . . . . 136 Introduction The Example Derivation and Financial Analysis of Upper Control Limits for Single ' Observations--Each Performance Tested Derivation and Financial Analysis of Lower Control Limits for Single Observations--Each Performance Tested Derivation and Financial Analysis of Upper Control Limits for Single Observations-~Every Tenth Performance Tested Derivation and Financial Analysis of Lower Control Limits for Single Observations-~Every Tenth Performance Tested Derivation and Financial Analysis of Upper Control Limits Sample Size Five--Every Performance Included in a Sample Derivation and Financial Analysis of Lower Control Limits Sample Size Five—~Every Performance Included in a Sample Chapter Page Derivation and Financial Analysis of Upper Control Limits-~Sample Size Five--Sample Taken in Every Fifty Performances Derivation and Financial Analysis of Lower Control Limits--Sample Size Five-—Sample Taken in Every Fifty Performances Conclusions VII. SUMMARY AND CONCLUSIONS. . . . . . . . . . . 280 Reasons for Study Conceptual Distinction between Significant and Insignificant Variances Examination of Statistical Models Examination Testing the Relative Control Effectiveness of the Conventional Accounting and the Various Statistical Methods Aggregation Problems Summary of Conclusions Summary of Recommendations BIBLIOGRAPHY O O O O O O O O O 0 O O O O O O O I O O 304 APPENDIX . . . . . . . . . . . . . . . . . . . . . . 311 vi LIST OF 1.. Probability Distribution Performances for Table 2. Probability Distribution Performances Resulting TABLES of Chance Assembly. . . . of Chance from Assignable Cause Due to Improvement . . . . . . . 3. Probability Distribution Performances . . . . . 4. Probability of Error for of Chance Various Parameters Given Single Observations and a .05 Level of Significance. . . . 5. Comparison of the Probabilities of a Type II Error for Various Parameter Values Under Different of Significance. . . . '6. Conditional Cost Table . 7. McMenimen's Illustration Levels 0 O O O O O O O O O O O O O O O 8. 
Opportunity Costs of a Wrong Decision for Various POpulation Means . . . . . 9. Conditional Average Opportunity Costs. . lO. Weighted Opportunity Cost of Type II Error. . . . . 11. Expected Opportunity Costs of Two Alternatives . . . 12. Unconditional Expected Opportunity Costs for Various Rejection Numbers. . . . . 13. Revision of Prior Probabilities. . . . . l4. Unconditional Expected Costs of Various Levels of Significance . . . . Vii Page 37 38 54 63 67 7O 86 97 100 108 120 122 125 130 Table 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. Revision of Prior Probabilities. . . . . Revised Probabilities for Performances 260 and 270 . . . . . . . Causes--Their Frequencies and Means. . . Distribution of Performance Values by Cause. . . . . . . . . . . . Application of McMenimen Technique . . . Decision Table for Equalization Approach. . . . . . . . . Decision Table for Minimization Approach. . . . Extra Savings of Basic Control Chart Method . . . . . . . . . . . . Financial Comparisons between Approaches Decision Table for BF and J Application. First Interpretation of P. . . . . . . Decision Table for BF and J Application. Second Interpretation of P . . . . . . Decision Table for Equalization Approach. . . . . . . . . Decision Table for Minimization Approach. . . . . . . . . Financial Comparisons between Approaches Decision Table for BF and J Application. First Interpretation of P. . . . . . Decision Table for BF and J Application. Second Interpretation of P . . . . . . Application of McMenimen Technique Application of McMenimen Technique . . . Derivation of Savings Values . . . . . . viii Page 132 134 138 140 155 159 161 167 167 171 171 174 176 179 182 182 183 186 188 Table 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. 45. 46. 47. 48. 49. 50. 51. 52. McMenimen Technique--Incremental Application. . . . . . . . . . . . Decision Table for Equalization Approach. . . . . . . . . Decision Table for Minimization ApproaCho o o o 0‘ o o o 0 Extra Savings of Basic Control Chart Method . . . . . . . . . . . . . Financial Comparisons between Approaches Decision Table for BF and J Application. First Interpretation of P. . . . . . Decision Table for BF and J Application. Second Interpretation of P . . . . . . Application of McMenimen Technique . . Decision Table for Equalization Approach. . . . . . . . . Decision Table for Minimization Approach. . . . . . . Financial Comparisons between Approaches Decision Table for BF and J Application. First Interpretation of P. . . . . . . Decision Table for BF and J Application. Second Interpretation of P . . . . . . Determination of P's . . . . . . . . . Derivation of Savings Values . . . . . . Application of McMenimen Technique . . Decision Table for Equalization Approach. . . . . . . . . Decision Table for Minimization Approach. . . . . . . . . Additional Savings of Basic Control Chart Approach . . . . . . . . . ix Page 190 191 193 196 196 199 199 201 202 203 204 210 211 213 216 218 220 222 226 Table Page 53. Financial Comparisons between Approaches . . 226 54. Decision Table for BF and J Application. First Interpretation of P. . . . . . . . . 228 55. Decision Table for BF and J Application. Second Interpretation of P . . . . . . . . 230 56. Determination of P's . . . . . . . . . . . . 230 57. Application of McMenimen Technique . . . . . 232 58. Decision Table for Equalization Approach. . . . . . . . . . . 233 59. Decision Table for Minimization Approach. . . . . . . . . . . 234 60. Financial Comparisons between Approaches . . 237 61. 
Decision Table for BF and J Application. First Interpretation of P. . . . . . . . . 241 62. Decision Table for BF and J Application. Second Interpretation of P . . . . . . . . 241 63. Application of McMenimen Technique . . . . . 242 64. Determination of P's . . . . . . . . . . . . 243 65. Decision Table for Equalization Approach. . . . . . . . . . . 244 66. Decision Table for Minimization Approach. . . . . . . . . . . 245 67. Financial Comparisons between Approaches . . 246 68. Decision Table for BF and J Application. First Interpretation of P. . . . . . . . . 249 69. Decision Table for BF and J Application. Second Interpretation of P . . . . . . . . 249 70. Application of McMenimen Technique . . . . . 250 71. Decision Table for Equalization Approach. . . . . . . . . . . 251 Table 72. 73. 74. 75. 76. 77. 78. 79. 80. 81. 82. 83. 84. 85. 86. 87. Decision Table for Minimization Approach. . . . . . . . . . . Financial Comparisons between Approaches Summary of Rankings. . . . . . . . . . . . . Summary of Rankings. . . . . . . . . . . . . Numerical Differences in Control Limits between the Top Ranking and the other Approaches for Testing Plan A. . . . . . Numerical Differences in the Control Limits between the TOp Ranking and the other Approaches for All Testing Plans. . . . . . . . . . . . . . . Summary of Upper Control Limits by Testing Plan. . . . . . . . . . . . . Application of McMenimen Technique . . . . McMenimen Technique-—Incremental Application. . . . . . . . . . . . . . . McMenimen Technique-—Incrementa1 Application. 0 O O O O O O O O O O O O O 0 Application of McMenimen Technique . . . . . Single Performance Opportunity Costs for Corresponding Assignable Causes. . . Weighted Opportunity Cost Associated with Poor Attitude Assuming UCL = 260. Averaging Process to Find the Conditional Opportunity Cost of a Type II Error for Test Value 260 . . . . . Weighted Conditioanl Opportunity Costs for Test Values 255 and 260. . . . . . Weighted Conditional Opportunity Costs for Selected Values Determined by Interpolation. . . . . . . . xi Page 253 255 256 259 272 275 278 316 317 318 321 325 326 327 328 329 Table Page 88. Averaging Process to Find the Conditional Opportunity Cost of a Type II Error for Various Test Values. . . 329 89. Relevant Prior Probability Distribution for Test Value 260. . . . . . 330 90. Weighted Conditional Opportunity Costs for Test Values 265 and 260. . . . . 331 91. Weighted Conditional Opportunity - Costs for Selected Values Determined by Interpolation. . . . . . . . 332 92. Calculation of the Probabilities of a Wrong Decision for Each Assignable Cause Under Test Value 260 . . . . . . . . 333 93. Differences in Weighted Conditional Opportunity Costs between Test Values 250 and 255 . . . . . . . . . . . . 342 94. Weighted Conditional Opportunity Costs Determined by Interpolation. . . . . 342 95. Averaging Process to Find the Conditional Opportunity Cost of a Type II Error for Test Values 253 and 254 . . . 343 96. Weighted Opportunity Cost for Test Value 249 . . . . . . . . . . . . . . 350 xii LIST OF FIGURES Figure l. Gryna's Target Analogy . . . . . . . . 2. Figure Showing Overlapping POpulations 3. Illustration of a Control Chart. . . . 4. Illustration of the Determination of the Probability of a Type II Error . 5. Cost Control Decision Chart. . . . . . 6. Diagram Indicating Direction of Desired Level of Significance. . . . 7. Direction of Upper Control Limit . . . 8. Outcomes of Financial Comparisons. . . 9. Outcomes of Financial Comparisons. . . 10. 
Outcomes of Financial Comparisons. . . ll. Outcomes of Financial Comparisons. . . 12. Outcomes of Financial Comparisons. . . l3. Outcomes of Financial Comparisons. . . 14. Outcomes of Financial Comparisons. . . 15. Outcomes of Financial Comparisons. . . xiii Page 35 39 54 65 73 103 158 169 179 197 205 227 238 247 255 LIST OF APPENDICES Appendix Page A. BibliographycflfStatistical Applications to Accounting Variance Control . . . . . 311 B. Computational Detail to Support Chapter VI . . . . . . . . . . . 314 xiv CHAPTER I INTRODUCTION In a standard cost accounting system standard unit product costs are established for materials, labor, and overhead. These standards are almost essential for the preparation of an adequate budget. Standard costs pro— vide guidelines for pricing and expedite the valuation of inventories; but they are designed primarily to aid manage- ment in performance control. Possibilities for control emanate from the pre-determined standards. Actual per- formance is seldom equal to the standards because "persons and machines do not perform uniformly; there is always some variability in their work."1 Accountants typically allow for this variability by an amount of variance be- tween actual and standard which is termed "insignificant." A variance which is too large to be ignored is called "significant." The accountant's function in variance control is to measure and report performance and to highlight "sig- nificant variances" so that management can initiate an lLawrence L. Vance and John Neter, Statistical Sampling for Auditors and Accountants (New York: John Wiley and Sons, Inc., 1956), 148. investigation and take corrective action. This imposes upon the accountant the need to develop criteria to deter- mine when a variance is significant. Background It is appropriate to begin by considering answers to the following two questions: 1. Why are variances inevitable? 2. Why is some variability allowed and indeed eXpected? In practice, it is unusual for performance exactly to equal standard; but there is little evidence to suggest that accountants have considered why this inequality in— evitably emerges.2 On the other hand, quality control lit- erature does introduce the concept of chance to explain why variances are inevitable. Chance is "the absence of any known reason why an event should turn out one way rather than another."3 Chance variability can be explained only through an inherent omnipresent non-uniformity. It is now useful to define a significant variance as one resulting from an assignable cause and an insignifi- cant one as resulting from chance. Possible assignable The small amount of literature pertaining to sta— tistical applications of accounting variance control has been devoted mainly to mechanics and has largely failed to consider these conceptual matters. 3C. L. Barnhart, ed., The American College Dic- tionary (New York: Random House, 1960), 200. causes include lack of training, illness, laziness, faulty materials, or improvement.4 Once this concept of "chance" is recognized, the answer to the second question follows logically. Vari— ability due to chance should be allowed because it cannot be profitably reduced—-it is inevitable. Chance variabil— ity can therefore be considered to be non-controllable for any given operational procedure. There is also some chance variability present in the results due to an assignable cause. That is, a worker with faulty equipment will not always obtain the same re- sults. 
In this case, the variability is attributed to both chance and the assignable cause. This variance is controllable because the average variance from the standard can be reduced by elimination of the assignable cause. After this elimination, however, non-controllable chance variability will still occur.

4Normally, the distributions of values from the several assignable cause populations will overlap with the distribution of values from the chance population. In these cases, it is not possible to select control limits so that results falling inside these limits are always due to chance and so that those falling outside result from an assignable cause. At this point, the statistically minded reader will note that chance performances falling outside of the control limits will signal the need for an investigation. This results in an error that statisticians identify as a Type I error. On the other hand, no action is indicated for assignable cause results that fall inside the control limits. This error statisticians refer to as a Type II error. The goal is to achieve a proper balance between the probabilities of committing Type I and Type II errors. This problem will be elaborated upon at greater length in the next section and also in Chapter III. To summarize, the definition of a significant variance as one resulting from an assignable cause will be incorrect when a Type I error is committed. Also, the definition of an insignificant variance as one resulting from chance will be incorrect when a Type II error is committed. The definitions will, however, continue to be used because they are useful in defining the problem. Furthermore, since Type I and II errors cannot be eliminated when the populations overlap, there are no more precise definitions available.

The Problem

Significance determination, then, properly involves distinguishing between variances due solely to chance and those due to assignable causes in conjunction with chance. On the surface it would seem relatively simple to obtain information regarding the set of values for which no assignable cause could be identified (e.g., to obtain an estimate of the distribution of values for the population of chance performances). Performance values falling outside this range of values could then signal the presence of an assignable cause. If, however, one were to obtain information pertaining to the set of values resulting from each assignable cause, he would find an overlap between the chance population values and the values of some of the assignable cause populations. That is, any specified deviation may be the result of either chance or several assignable causes. Without an investigation one cannot usually be sure which population a given performance came from. Since the cost of an investigation for every performance is prohibitive, a decision must be made to investigate only those variances that are unlikely to have come from a chance population.

The problem involves making a decision as to whether or not chance is operative on any given performance. More generally it involves setting up limits, called control limits, within which chance is likely to be operative.
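The decision just described can be reduced to a small sketch. The Python fragment below is only an illustration under invented figures, not a procedure taken from this study: it estimates the chance distribution from performances observed while no assignable cause was known to be present, sets limits a chosen number of standard deviations from the mean, and signals an investigation only for performances unlikely to have come from the chance population.

```python
import statistics

# Hypothetical history of performances (minutes per unit) gathered while
# the operation was believed to be free of assignable causes.
chance_history = [38, 41, 40, 39, 42, 40, 37, 41, 43, 39, 40, 38]

mean = statistics.mean(chance_history)
std_dev = statistics.stdev(chance_history)

# Wider limits lower the chance of an unwarranted investigation (Type I error)
# but raise the chance of missing an assignable cause (Type II error).
K = 3  # number of standard deviations; 3 is the conventional control-chart choice
lower_limit = mean - K * std_dev
upper_limit = mean + K * std_dev

def investigate(performance: float) -> bool:
    """Return True when the performance is unlikely to be due to chance alone."""
    return performance < lower_limit or performance > upper_limit

for value in (41, 47, 52):
    verdict = "investigate" if investigate(value) else "attribute to chance"
    print(f"performance {value}: {verdict} "
          f"(limits {lower_limit:.1f} to {upper_limit:.1f})")
```

The choice of K is where the balance between the two types of error is struck; the sketch leaves it at the conventional three standard deviations only for concreteness.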
The use of probability statistics to help determine these limits seems logical because probability "is a statistical area dealing with the number of techniques for evaluating the possibilities and patterns of chance oc- currences and the degree of effort needed to control them within pre—established limits."5 To reiterate, if a performance seems likely to have come from a pOpulation of chance performances, the cost of an investigation can be saved. On the other hand, if it seems unlikely to have come from a population of chance performances, it is important to investigate to determine the cause and to make the apprOpriate corrections if the cause is assignable, that is, due to factors other than chance. The problem involved is attempting to quantify the terms "likely" and "unlikely." Also involved is the determination of whether unlikely variances are worthwhile examining. Because any given performance value may come from more than one population, two kinds of error are involved 5Arthur H. Smith, "Problem Solving Through Mathe- matical and Statistical Techniques: Use of Operations Re- search," N. A. A. Bulletin, XLII, No. 1, Section 3 (Septem~ ber, 1960), 10. in making a decision. One may decide to investigate a performance that he later finds to have come from a chance population. This error is referred to as a Type I error. Conversely, one may decide to forego an investigation when, actually, an assignable cause is present. This error is called a Type II error. The risk of at least one type of error is present as long as there is an overlap between the values of the pOpulation of chance performances and those of the populations of some assignable cause perform— ances. It will later be seen that the probability of a Type I error cannot be reduced without increasing the probability of a Type II error. Likewise, the probability of.a Type II error cannot be reduced without increasing the probability of a Type I error. The solution lies in striking a balance between these types of error. Helpful Information For any value to be tested as a control limit, it is necessary to know (or estimate) the distribution of values of chance performances in order to evaluate the probability of a Type I error. Likewise, it is necessary to know (or estimate) the distribution of pOpulation values for each assignable cause in order to evaluate the proba— bility of a Type II error. From the population distribu- tions of each possible cause, one can determine the proba- bility that any given variance is due to chance by dividing the number of times a given variance has occurred into the number of times it has occurred for chance causes. This is an important probability under some methods of striking the balance between the two types of error. Certainly the opportunity costs of incurring each type of error are an important consideration in striking the balance. In some analyses it may be helpful to know the probability that chance and each assignable cause will occur. In summary, statistical models permit explicit con— sideration of various combinations of the following rele- vant factors: 1. Probability distribution of chance performances. (Theperformances that vary for uneXplainable rea— sons.) 2. Probability distribution for each assignable cause. (These include faulty equipment, faulty materials, laziness, etc.) 3. Probability of making an unwarranted investiga— tion (Type I error). 4. Probability of accepting variance when an investi- gation is warranted (Type II error). 5. Opportunity cost of Type I error. 
6. Opportunity cost of Type II error. 7. Prior probabilities of the occurrence of chance and each assignable cause. 8. Probability that any given variance is due to chance and the probability that it is due to each assignable cause. Attempted Solutions Conventional variance control does not eXplicitly consider any of the factors listed above although some com— bination of them may be considered on an intuitive basis. Several writers have suggested an application of the basic statistical quality control chart to analyze variances from accounting standards. This approach explicitly con- siders the distribution of chance performances and the probability of a Type I error. The analyst may intuitively consider some combination of the other factors. Both the Bierman, Fouraker, and Jaedicke6 and the McMenimen7 ap- proaches have introduced into their models the economic aspects of the cost of an investigation and the present value of savings made possible by prompt detection of an assignable cause. (These economic aSpects are similar to 6Harold Bierman, Jr., Lawrence E. Fouraker, and Robert K. Jaedicke, "A Use of Probability and Statistics in Performance Evaluation," Accounting Review, XXXVI, No. 3 (July, 1961), 409—417. Harold Bierman, Jr., Lawrence E. Fouraker, and Robert K. Jaedicke, Quantitative Analysis for Business Decisions (Homewood, Illinois: Richard D. Irwin, Inc., 1961), 108-125. Harold Bierman, Jr., Topics in Cost Accounting and Decisions (New York: McGraw—Hill Book Company, Inc., 1963) 15-23. - 7Leo J. McMenimen, "Statistical Analysis of Cost Deviations," (Unpublished Master's Thesis, The Graduate School, The Pennsylvania State University, August, 1965). factors five and six above.) McMenimen also included the cost of corrective action and factor eight into his analy— sis. In order to overcome some questionable aSpects of these last two approaches, this writer develOped an ap— proach which explicitly considers the first six factors. He also introduced an application which minimizes expected Opportunity costs by incorporating the first seven factors formally into the analysis. Purpose of Dissertation The purpose of this dissertation is to evaluate the accounting and statistical variance control procedures in an effort to ferret out the most adequate method of vari— ance analysis for control purposes. Hypothesis The hypothesis to be tested is that new applications of presently develOped statistical tools can increase the effectiveness of accounting variance control by providing a helpful analytical framework to determine the control limits. Methodology In Chapter II there is an evaluation of the con- ventional variance control procedures used by accountants. The purpose of this evaluation is to outline the limita- tions of these conventional procedures and, in this manner, 10 to establish the need for further study. Next, Chapter III inquires into the conceptual nature Of control in order to provide clues to help establish more adequate control pro— cedures. Chapter IV reviews several statistical procedures that have been prOposed for variance control. These pro— cedures are identified as the Basic Control Chart approach; the Bierman, Fouraker, and Jaedicke approach; and the McMenimen approach. Some aspects Of each Of these methods will be questioned. In an attempt to counteract these limitations, this writer has applied two additional methods in Chapter V. 
One which is referred to as the Equalization approach sets the control limit at that value where the probability of a Type I error times the opportunity cost of a Type I error is exactly equal to the probability of a Type II error times the Opportunity cost of a Type II error. The other method develOped in Chapter V establishes the control limit at that value which minimizes the expected Opportunity costs. It is identified as the Minimization approach. In Chapter VI all of the variance control methods indicated above are tested in an attempt to rank them in order of their usefulness for variance control. This test is accomplished through three major steps. In the first step a hypothetical example is develOped which involves the time required for each of fifty meat cutters to butcher each Of twenty cows. It is assumed that each of the 1,000 11 performances is investigated to determine the cause Of its variance. The value Of each performance is recorded and identified as to cause. In addition to chance the follow- ing assignable causes are assumed: dull knives, tough cows, lack Of training, poor attitude, illness, improvement, and laziness. This detailed information of the values that occurred for each cause is used to complete the second step in the test of the variance control methods. The second step involves the calculation Of upper and lower control limits for each Of the six above mentioned methods. Moreover, these control limits are calculated for each Of four testing plans. The first two testing plans involve tests of single Observations rather than samples. In the first plan the worker compares each performance with the control limits and reports any performances falling outside Of these limits. The foreman compares every tenth performance on the average with the control limits in the second plan. The last two sampling plans consist of a com- parison Of a mean of a sample Of five performances with the control limits based on these plans. In the third plan each performance is included in a sample. In the fourth plan one sample is taken for every fifty performances. The third step in the test of the variance control methods involves a financial examination of the resulting differences found among the control limits associated with each method. This financial examination will consist of 12 analyzing the approaches by twos insofar as it is necessary to rank them in preferential order. Of any two approaches being compared, the one closer to the standard bears a greater investigation cost than the one farther from the standard. However, it also carries additional savings be- cause Of more timely detection of assignable causes. These investigation cost and savings figures are dependent upon the assumptions outlined in Chapter VI. A decision will be made on the following basis: 1. If the added savings is greater than the added in- vestigation cost, the approach with the control limit closer to the standard will be regarded as more effective. 2. If the added savings is less than the added in— vestigation cost,the approach with the control “ limit farther from the standard will be regarded as more effective. This analysis for each pair of approaches will be performed until it becomes possible to rank all of the approaches. The conclusions are presented in Chapter VII. Contributions The following contributions emerge from this study: 1. A conceptual distinction between significant and insignificant variances is established through ” specific recognition Of chance factors. 
13 The chance concept is used to alter the definition of control to enable a clearer description of what is actually involved in the control process. The statistical models that have been prOposed by others are evaluated in order to clarify their strengths and deficiencies. Two additional models are developed from available statistical concepts. These recognize factors not considered in the models previously developed. The Equalization approach, which explicitly incorporates relevant factors 1 through 6, contributes by using factors 2, 4, 5, and 6 which have not been proposed as a group for variance control within the context of a Classical model. The Minimization approach contributes by formally including relevant factor 7 in addition to the first six. For this reason it has some Bayesian overtones and will from time to time be classified as a semi—Bayesian approach. A test is developed in order to ascertain the ad— vantages Of the statistical methods and to rank them in order of their control effectiveness. The test adds new insights into variance control by: A. Delineating and developing a probability dis— tribution for each relevant assignable cause. This enables a more scientific estimation of 14 the probability of committing a Type II error and of the related opportunity costs. Separating the Bierman, Fouraker, and Jaedicke model into two approaches in order to show the effect of the conflicting interpretations of their probabilities. The possibility of significant variances being aver- aged—out is illustrated. This illustration depicts the importance of: A. B. Funcusing control at the performance level. Considering the sample size and the frequency of sampling when setting performance control limits. Using statistical techniques to interpret the results reflected in summary reports. These techniques consider the degree of summarization. The framework for making accounting decisions is provided via the integration of accounting and statistical concepts. CHAPTER II AN EVALUATION OF CONVENTIONAL VARIANCE CONTROL Conventional Significance Determination Conventionally accountants have indicated the de— sirability of an investigation when either the dollar amount of the variance or the ratio Of the variance to the standard have exceeded some cut-Off point. These cut-Off points have been determined on the basis of "subjective judgments, guesses, or hunches."l While "guesses or hunches or feel— ings for situations are fundamental parts of managerial be— havior," Horngren stresses that "these subjective methods Often engender management disagreements, barren investiga— tions, and a sense of frustration."2 A In some cases intuition may be so keen that control AA“ of the variances will be adequate; however, conventional ; A procedures do not provide an objective means to verify ing adequate control. Often barren investigations are under- a taken with the result that time is wasted looking for causes that do not exist.3 Likewise, investigations are sometimes 1Charles T. Horngren, Cost Accounting-—A Managerial Emphasis (Englewood Cliffs, New Jersey: Prentice Hall, Inc., 1962), 748. 2 Ibid., 155. 3This error has been referred to as Type I error. 15 16 not undertaken when they should be. This error, known as a Type II error, results in delay in detecting assignable causes. It was noted in Chapter I and will be illustrated in subsequent chapters that these errors cannot be elimi— nated. 
The best solution to the problem of significance determination lies in striking an optimum balance between the two types of error. A majOr difficulty with conven- tional variance control is that is does not provide a framework to evaluate the probability of error for any given decision. Of the two conventional methods, the percentage of the variance to the standard cut-Off point is more desir— able because it allows a greater dollar variance for larger dollar amounts. In most cases there is a larger amount of variability inherent in Operations involving larger expen— ditures. In spite of this, however, greater percentage variability is expected in some situations than in others. For example, Bierman, Fouraker, and Jaedicke4 indicate that a $10,000 variance from a $10,000 budget for snow re— moval might be uncontrollable during a bad winter so that an investigation would be unprofitable even though the de— viation is 100 per cent of the standard. On the other hand, a $10,000 deviation from a $100,000 budget for fire insur— ance may be worthy of an investigation despite the fact that the deviation from standard is only 10 per cent. 4Bierman, Fouraker, and Jaedicke, 113. 17 In this hypothetical example neither conventional method helps to determine which variance or variances should be investigated. The dollar amounts Of the vari— ances are the same—-yet it is possible that the variance pertaining to fire insurance should be investigated while the one pertaining to snow removal should not be. In this case a 10 per cent deviation for fire insurance should be investigated while nothing can be saved from an investiga— tion of a 100 per cent deviation for snow removal. A thoughtful management would understand the dif— ferences between these expense classifications and conse— quently would not make the drastic investigation errors suggested by this extreme example. In less obvious cases, however, cut—Off points may be uniformly applied and re- sult in costly errors. Although the cut—off points may be varied in prac— tice, this procedure has not been widely discussed in the literature. The N. A. A. Research Report 22 implies that standard percentage cut-off points are consistently used throughout the firm. On a suggested form for analyzing the reasons for variances, the first row under a column labeled "reason for variance" contains the following . 5 "reason": "No reason, variance less than 10 per cent." 5National Association of Accountants, The Analysis Of Manufacturing Cost Variances, Research Report 22 (New York: National Association of Accountants, August 1, 1952), 12. l B 18 It is not logical that a variance less than 10 per cent is automatically due to chance nor that one larger than 10 per cent is automatically attributed to an assignable cause. Allowance for different expectations Of variability could be introduced by varying the percentage cut—Off points. Percy Carter6 has recently suggested some guidelines for varying the cut—Off points between 5 and 15 per cent de- pending upon the cost center and the type of expenditure. Even with this improvement, however, the cut-Off points remain essentially arbitrary without explicit considera— tion of process variability. Treatment in the Literature In order to determine the treatment accorded to performance control in the literature, some forty cost and managerial accounting textbooks and numerous journal arti- cles were reviewed. 
This review revealed that there is general agreement that performance control is a major bene— fit to be derived from the operation of a standard cost (system. Detailed attention is devoted to the calculation 'Of the following seven basic variances: material price, material usage, labor rate, labor efficiency, variable overhead efficiency, fixed overhead budget, and fixed over- ‘head volume. In many cases further refinements are made to _ 6Percy C. Carter, "Maintaining the Adequacy and Ac— curacy Of Standard Costs," N. A. A. Bulletin, XLV, No. 7 ‘(March, 1964), 33—40. ii l9 arrive at spoilage and grading variances. After the stu— dent occupies his time learning the techniques Of these calculations, he reads that the accountant must highlight significant variances reflected in his report. There is a real danger that students go through these motions think— ing that the report rather than control is the end product of a standard cost system. Indeed, the accountant's preoccupation with the techniques Of calculating the component breakdowns Of vari— ances has reduced his effectiveness in the control function. Allen Rucker warns that "there is a fascination about neatly tabulated figures and charts that needs to be resisted lest it lead managers to believe they are on tOp of their prob— lems without thinking through them and coming to decisions.“7 Most of the books and articles reviewed include no discussion of how a significant variance is recognized. However, a cursory comment regarding the importance of judgment in significance determination is Often noted. Illustrative reports containing variances frequently do not identify those that are significant. A few of these books apply the 10 per cent criterion. They fail, however, \to make it clear just why some variances are significant while others are insignificant. Reasons commonly advanced 7Allen W. Rucker, "Clocks for Management Control," Administrative Control and Executive Actions, eds. James Don Edwards and Bernhard Carl Lemke (Columbus, Ohio: C. E. Merrill Books, 1961), 329. 20 for an unfavorable labor efficiency variance were noted in Chapter I. They included faulty equipment, faulty materials, illness, laziness, poor attitude, and improper training. Since these reasons represent actual causes different from those involved in setting the standard, variances caused by these conditions are all significant. In none of the forty books and numerous articles that were reviewed is chance listed as a reason for a variance. Consequently, it appears as though accountants believe that all variances are the result of assignable causes. Their implicit dis- tinction between significant and insignificant variances centers around a comparison of the cost of identifying and correcting the cause with the savings that will result from this action. How either the costs or the savings are de— termined is not Specifically set forth. In evaluating this conventional implicit approach, it is this writer's contention that a comparison of these costs with the resulting savings is an important considera- tion; but it is irrelevant if the variance is attributed to chance because nothing can be saved from an investigation. That is, if a variance is attributed to chance no assign— able cause is present. Therefore, no savings would result from an investigation. Since accounting literature fails to mention chance as a possible cause of variances, it im— plies that all variances result from assignable causes. 
This would lead one to believe that accountants feel that A} 21 the savings figures should be derived by taking the present value of some multiple of the difference between standard and actual. This conclusion would be fine if, indeed, all variances did result from assignable causes. If this were true the present value of the savings should always be more than the present value of the cost of identification and correction because if a variance emanates from an assign— able cause it should always be worthwhile to identify the specific cause and make the appropriate correction. The reasoning behind this is that the values of all assignable cause performances are eliminated when standards are es- tablished. That is, the standard should represent the mean of all performances which are due only to chance. In other words, only performances which are "in control" from the standpoint of management are included in setting the standard. If a cause subsequently appears, it should be ‘worthwhile to eliminate it again if the standard is realis- tic. If the standard is not realistic it should be cor- :rected. In no case should the known presence of an assign— ‘able cause be permitted to exist without reflection in the 1reports. If an assignable cause is reflected the variance :is labeled as significant. The result of this reasoning ‘is that the accountant is left without a way to explain the nature of insignificant variances until he recognizes ?the concept of chance. 22 Aggregation Problems The accountant exercises his function in the con— trol process primarily through departmental cost reports, which typically include time periods varying from a day to a month. Some years ago the National Association of Account— ants, then the National Association Of Cost Accountants, published a report in which it indicated the following fre— quencies with which sixty-two companies reported labor per- formance variances: Daily 7 Weekly 21 Monthly 25 Not at all 9 “638 To the extent that the accountant waits for sig— nificant variances to show up on his report before he indi— cates the need for an investigation, his role in the con- trol function is limited by a lack of timeliness. As C. E. Noble reports: “It certainly seems unwise to wave the red flag, to inform management that $50,000 was lost last month 8National Association of Cost Accountants, Hg! Standard Costs Are Being Used Currently, Complete N. A. C. A. Standard Cost Research Series (New York: National Associa- tion of Accountants, Not Dated), 40. (Records indicate that this publication was received at the Michigan State Univer— Sity Library in 1949.) :6 23 if the flag could have been raised the first day Of the month and something done about the situation."9 In addition to lack of timeliness, aggregation problems inherent in variance reports further hamper con— ventional variance control. Edwin Gaynor has recognized that these aggregation problems exist in conjunction with timing. He said: Under conventional cost reporting methods periodic variances from the . . . standard are not discovered until the end of the day or week, or not until pro— duction and standard time are compared at the end of a payroll period—-Or perhaps not at all. 
Usually, an average for a relatively long period of time is com— posed of a great many compensating plus and minus variances completely overlooked simply because they are not apparent.10 These aggregation problems or, in Gaynor's termi— nology, problems of compensating variances, exist in the following ways: In cases where there is more than one Operation in a department a significant variance in one operation may be off—set by the chance variances from the other operations. Even if variances are accumulated by Operation, there is a danger, under conventional procedures, Athat significant variances occurring at one time during the paccumulation process may be averaged-out by chance variances joccurring during other times. These problems of "average—out" 9C. E. Noble, "Cost Accounting Potentials of Statis- .tical Methods," N. A. C. A. Bulletin, XXXIII, No. 12 (August, 1952), 1477. 10Edwin W. Gaynor, "Use of Control Charts in Cost (Control," N. A. C. A. Bulletin, XXXV, No. 10 (June, 1954), .1301. 24 and "off—set" contribute further to the delay in detecting assignable causes. It is difficult to evaluate the probability that significant variances are "averaged-out" or "off-set." Nevertheless, variances are sometimes found to be signifi- cant according tO the accountant's conventional criteria for significance determination. Even, then, aggregation problems of the performance report contribute to delay in assignable cause detection. In order to locate the source of the significant results, it is necessary to sort through the detail that was used to build the report. The extent of this sorting varies in proportion to the extent Of the summarization reflected in the report. As a possible solu— tion to these aggregation problems L. Wheaton Smith sug— gested using the Operation rather than the department as the unit of control. The importance Of the Operation as a unit is that "significant variations are localized in a particular Operation and as occurring between certain times 11 when the regular checks on that operation were made." Keller and Ferrara assert that the accountant ‘should begin the cost control process before the variances are accumulated. They introduce five lines of defense to protect against waste or Off-standard conditions. These ’lines of defense consist of (1) workers, (2) foremen, (3) plant superintendent, (4) vice—president, and (5) president. ' llL. Wheaton Smith, Jr., "An Introduction to Statis- -tica1 Cost Control," N. A. C. A. Bulletin, XXXIV, No. 4 (December, 1952), 511. 25 For the lower lines of defense, they stress the importance of observing variances as they occur so that significant ones can be eliminated long before the reports are issued. They state: Workers, foremen,and supervisors should be aware Of production standards, and thus it is not inconceivable to find that the root causes Of some variances might be eliminated long before such variances are reported by the accountant, this is, by on the spot action taken by workers, foremen, and supervisors who observe vari— ances as they are occurring. Need for further Study It has been the purpose of this chapter to point out that while accountants have developed refined tech- niques for classifying variances (material price, material usage, labor rate, labor efficiency, variable overhead efficiency, budget, and volume components), their function in performance control is limited on three counts: 1. They have failed to explain conceptually the dis- tinction between a significant variance and an in— significant one. 
2. They have failed to utilize objective criteria for determining significant variances. 3. Their strict adherence to the report has caused delays in detection of significant variances be— cause 121. Wayne Keller and William L. Ferrara, Manage— ent Accounting for Profit Control (Second Ed.; New York: éGraw—Hill Book Co., 1966), 250‘ rectify l. 26 A. An analysis is not made until the period covered by the report is completed. B. Significant variances can be Off-set or averaged- out. C. Significant variances that do show up must be localized. Dissertation Objectives The purpose of this dissertation is to attempt to these limitations by Using quality control concepts to explain the dis— tinction between significant and insignificant variances. Examining more objective criteria for significance determination. Illustrating through an hypothetical example the financial impact of employing these Objective pro— cedures at the individual performance level. Showing through this example the tendency of sig- nificant variances to be Off-set and averaged—out in the process accumulation used in developing the performance report. By employing the statistical procedures at the erformance level and by showing the financial impact Of he aggregation problems, this writer hopes to persuade 27 the accounting profession to direct more attention toward observation on the performance level as a basis of control. The statistical procedures are intended to quantify Ob- servation by providing guidelines so that the worker and his foreman know by Objective criteria when the variance is significant. Under this system the conventional sum- mary report will not be the primary function of control; but it will be used to illustrate the financial impact of any efficiencies or inefficiencies. I: CHAPTER III CONTROL AND CHANCE CONCEPTS Chapters I and II noted that accountants currently lack an analytical foundation for variance control because they are unable to conceptually explain the difference be- tween significant and insignificant variances. Each analyst appears to make his own arbitrary distinction on an ad hoc basis. It was pointed out earlier that the ideas developed by quality control engineers would be useful. With this recognition an insignificant variance is defined as one re— sulting solely from chance and a significant variance as one resulting from both chance and an assignable cause.1 This chapter will examine in more detail the con- ventional notions of variance control. Chance concepts 1It was noted in footnote 4, Chapter I, that Type I and II errors when committed will render these definitions incorrect. In spite of this problem, the definitions are conceptually useful and the best available. Of course, for the statistically SOphisticated reader, the definitions could be qualified in the following manner so that they will always be correct. An insignificant variance is one resulting solely from chance unless, Of course, a Type II error has been committed in which event assignable causes are also unknowingly present. A significant variance is one due to both chance and assignable causes unless a Type I error has been committed in which event only chance is present. 28 It 29 will be incorporated into the accountant‘s concept of vari- ance control. This fusion will establish a logical basis for the use of statistical procedures. Finally, the vari— ance classifications will be studied in order to select those that are influenced by chance. Only for these will statistical procedures be helpful. 
Definitions of Control It is difficult to find a meaningful all—inclusive definition Of control because there are so many different facets of control and many of these facets are exercised by different groups. Certainly there are differences be- ,tween the kind of control exercised by the general stock— holders over the board of directors and the kind which a mature man exercises over himself. Moreover, both of these facets are different than the control which an accountant exercises over variances. Even within the realm Of ac— counting control there are differences in concept between ‘control over variances, inventories, cash, and accounts *receivable. The following words are listed as synonyms Of con— xtrolz "authority," “influence," "power," "command,"2 "regu- \ late,""handle)”administer," "oversee," "look after," and \ "supervise."3 Of these words, all but the first two imply 2C. 0. Sylvester Mawson, ed., Roget's Pocket Thesaurus (New York: Pocket Books, Inc., 1946), 44. 3 Ibid., 201. 30 a line rather than a staff function. That is, they repre- sent the kind Of control exercised by the boss over his em— ployees rather than the control exercised by the accountant over variances. In this regard the accountant has the au— thority, and indeed, the reSponsibility to determine the significance Of the variances; but the command facet implies action which the accountant in his staff function would not (and probably should not) undertake. While the accountant does not take action, he uses his influence to encourage management to take appropriate action where and when it is needed. This notion of influ— ence is in accord with James L. Peirce's suggestion that control "does not take action, but it frequently impels action by turning a spot light on the pertinent facts."4 The words "authority" and "influence" might pertain to vari— ance control, but they certainly do not adequately describe the function. Webster's New Collegiate Dictionary defines the noun "control" as "anything affording a standard of com- :parison or means of verification; a check."5 Since control ‘emanates from a comparison of actual with standard, this :Statement supplies a starting place on which to develop 4James L. Peirce, "The Planning and Control Con— cept," Administrative Control and Executive Action, eds. AB. C. Lemke and James Don Edwards (Columbus, Ohio: Charles E. Merrill Books, Inc., 1961), 8. 5Webster's New Collegiate Dictionary (Springfield, Massachusetts: G. C. Merriam Co., 1956), 181. 31 the definition; but it does not hit at the heart Of vari— ance control. Webster's introduces the notion of limits in the following definition of the verb "control"——"to check or regulate, as payments; to keep within limits, as speed."6 However, this definition is not meant to apply specifically to accounting variance control. Moreover, it does not include the concept of chance. Finally, the dic- tionary definition of a controller as "an officer appointed to check expenditures"7 includes neither the notion of con- trol limits nor the concept Of chance. Consequently, nei- ther synonyms nor dictionary definitions are very helpful in developing a conceptual foundation for variance control. Accountants themselves have failed to develop an operationally meaningful definition of control. 
Eric Kohler has defined control as "the method and manner by which a person, or an organization, operation, or other activity is conformed to a desired plan of action."8 The objection to this definition is that the words "conformed to" seem to imply "made equal to." This suggests that accountants feel that standard and actual should be equal in order for control to exist. Since everyone knows that standard is rarely equal to actual, Kohler's definition is not operationally meaningful. It does, however, support an earlier contention that many accountants apparently feel that any deviation is the result of assignable causes, but that some are not worthy of action. Definitions similar to Kohler's typically appear in accounting literature.

6. Ibid. [Emphasis mine.]
7. Ibid.
8. E. L. Kohler, A Dictionary for Accountants (Third ed.; Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1963), 127. [Emphasis mine.]

One difficulty in developing a concept for variance control hinges on the fact that accountants have not decided what is meant by a significant variance. Carman Blough admits that the terms "significance" and "materiality"

. . . are very important and yet we have no useful definitions of them. . . . Possibly these are terms which defy definition and whose meaning will have to be left to judgment in each situation, just as they have been in the past. However, if there are principles or criteria that may be used to interpret them, surely some effort should be made to develop and state them. If there are none, at least that could be stated.9

9. Carman G. Blough, "Challenges to the Accounting Profession in the United States," Journal of Accountancy, CVIII, No. 6 (December, 1959), 38.

The accounting profession just has not identified chance as relevant to variance control. The books and articles listed in Appendix A pertain to accounting applications of statistical variance control, but they are concerned primarily with technique. They do not identify an insignificant variance as one due to chance, nor do they discuss the fact that the use of probability statistics is logical because probability is a statistical area which evaluates patterns of chance occurrences. It is, however, encouraging that Kohler defines a significant magnitude as

. . . measured by a departure from some norm or standard, to raise doubt that the deviation is the result of chance, random, or compensating factors; hence, indicating behavior calling for a better awareness or understanding of the cause, the removal of the cause, or a modification of the standard because of its inadequacy.10

On the other hand, it is informative to note that he defined statistical quality control as "the state of equilibrium reached when deviations from a given norm (such as the process average) are only random in character and without assignable cause."11 A comparison of Kohler's definition of control with his definition of statistical quality control is interesting because he does not recognize the relevance of chance concepts for control, but he does include them in his definition of statistical quality control. This writer contends that the accountant's failure to recognize chance concepts as they might pertain to accounting variance control has kept the profession from adopting statistical tools to aid in variance control. When chance is recognized, the usefulness of statistics becomes evident because statistics deals with an evaluation of the patterns of chance occurrences.
Without the recognition of chance there is no apparent reason for using statistics.

To the extent that chance is relevant to accounting variance control, Kohler's definition of statistical quality control is operationally meaningful when applied to variance control. In the remainder of this chapter the nature of chance will be elaborated on further. This will be followed by an examination of the extent to which chance really pertains to accounting variances.

10. Kohler, 446.
11. Ibid., 127.

The Notion of Chance

Chance causes variations in the amount of time required to perform any activity even under substantially the same conditions. For example, a man does not consistently take exactly the same amount of time to shave. Some variation could be attributed to cold water, a dull blade, or a two-day growth instead of one; but if these assignable causes are eliminated, he still will be unable to shave in exactly the same amount of time. Likewise, there is a general lack of uniformity present in all natural phenomena. Scientists agree that no two leaves, or snowflakes, or blades of grass are identical. This holds even when they are grown under the same conditions.

Chance explains differences in scores in sporting events, such as bowling or golf. Any bowler will agree that it is virtually impossible to bowl the same score continuously even though the same ball, shoes, and alley are used. In fact, many leagues award a prize to one who obtains the same score for three consecutive games.

Frank Gryna12 has used a target analogy to illustrate the operation of chance patterns. In the left-hand target of Figure 1 all shots have hit the bull's eye. Chance causes some variation in the shots, but the marksman still achieves a perfect score. Chance is also operative in the right-hand target because again the marksman has failed to hit the same spot twice. Here, however, assignable causes are also operative because the marksman has not been hitting the bull's eye.

FIGURE 1.--Gryna's target analogy: variation due to chance causes only (left) and variation due to chance plus assignable causes (right).

12. Frank M. Gryna, Jr., "Statistical Methods in the Quality Function," Quality Control Handbook, ed. J. M. Juran (Second ed.; New York: McGraw-Hill Book Company, Inc., 1962), 13-42.

Accounting variance control, like marksmanship and quality control, should be concerned with the distinction between variation due solely to chance and that due to chance plus assignable causes.

W. A. Shewhart, who developed the control chart, expressed variability and stability as the two characteristics of control. Variability is a characteristic because "a controlled quality must be a variable quality."13 Stability is a characteristic because results should vary only within pre-determined limits. In Shewhart's words, "The problem then is: how much may the quality of a product vary and yet be controlled?"14 The problem could be re-stated as follows to suit the accountant's needs: how much may a variance vary and yet be in control?

13. W. A. Shewhart, Economic Control of Quality of Manufactured Product (New York: D. Van Nostrand and Co., Inc., 1931).
14. Ibid., 3.

Gryna's target analogy is oversimplified because the boundary between the chance population and the chance-plus-assignable-cause population is clearly determined. In most variance control situations, the accountant is frustrated by the problem of overlapping populations.

Illustration of Overlapping Populations

In the following illustration of the problem of overlapping populations, a standard of 40 minutes has been established for the time to assemble a certain table.
The probability distribution of chance performances shown in Table 1 indicates that chance performances have taken as long as 47 minutes and as few as 33 minutes. After the worker becomes familiar with the assembly operation, his skills improve. When his average time is reduced to 35 minutes, he is transferred to a more complex assembly operation and given a raise.

TABLE 1.--Probability distribution of chance performances for table assembly

Minutes                               Probability
at least 33 but less than 35              .02
at least 35 but less than 37              .03
at least 37 but less than 39              .20
at least 39 but less than 41              .50
at least 41 but less than 43              .20
at least 43 but less than 45              .03
at least 45 but less than 47              .02
                                         1.00

Table 2 shows the distribution of chance performances after the improvement. Even though improvement is an assignable cause, chance also causes variation in performance values. Notice that the improved worker has performed his task in as few as 32 minutes, but that he has also taken as long as 38 minutes. The population of only chance performances (represented in Table 1) overlaps the population of performances due to improvement (represented in Table 2). The overlap indicates that only for results between 32 and 33 minutes is improvement conclusive, because chance performances have been completed in as few as 33 minutes but never in as few as 32. Only for results over 38 minutes is it clear that improvement has not occurred, because improved performances have taken as long as 38 minutes but never longer.

TABLE 2.--Probability distribution of chance performances resulting from assignable cause due to improvement

Minutes                               Probability
at least 32 but less than 33              .03
at least 33 but less than 34              .07
at least 34 but less than 35              .40
at least 35 but less than 36              .40
at least 36 but less than 37              .07
at least 37 but less than 38              .03
                                         1.00

Figure 2 shows these overlapping populations graphically. The solid curve shows the distribution of chance performances and the dotted one shows the distribution after improvement has occurred. If 38 is selected as the lower control limit, the risk of a Type II error will not be incurred; but the risk of a Type I error is relatively high (equal to the proportionate area of the chance population, under the solid curve, below 38). As the control limit is reduced, the probability of committing a Type II error increases. It is equal to the proportionate area which is higher than the control limit under the dotted curve. At the same time, however, the probability of committing a Type I error is reduced, because the proportionate area less than the control limit under the solid curve will decline as the control limit declines. Thus, the probability of committing one kind of error can be reduced only at the expense of increasing the other. The problem of determining significance involves striking an optimum balance between the probabilities of committing each of these errors. Certainly the opportunity cost of an investigation and the opportunity cost of failing to detect an improvement are relevant in striking this optimum balance.

FIGURE 2.--Overlapping populations: the chance distribution (centered near 40 minutes) and the improved distribution (centered near 35 minutes), with 38 marked as a possible lower control limit.

The problem of determining the upper control limit is compounded because significantly unfavorable variances may be caused by any number of assignable causes such as illness, laziness, lack of training, faulty equipment, faulty materials, etc.
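The trade-off just described can be made concrete with a short computation. The following sketch is illustrative only and is not part of the original text; it uses the class intervals of Tables 1 and 2 and evaluates candidate lower control limits at class boundaries common to both tables, treating performances below the limit as signals of improvement.

# Illustrative sketch: Type I and Type II error probabilities for candidate
# lower control limits, using the class intervals of Tables 1 and 2.

chance = [(33, 35, .02), (35, 37, .03), (37, 39, .20), (39, 41, .50),
          (41, 43, .20), (43, 45, .03), (45, 47, .02)]        # Table 1
improved = [(32, 33, .03), (33, 34, .07), (34, 35, .40),
            (35, 36, .40), (36, 37, .07), (37, 38, .03)]      # Table 2

def prob_below(dist, limit):
    # Probability mass of class intervals lying entirely below the limit.
    return sum(p for lo, hi, p in dist if hi <= limit)

for lcl in (33, 35, 37):
    type_i = prob_below(chance, lcl)          # chance result wrongly investigated
    type_ii = 1 - prob_below(improved, lcl)   # genuine improvement not detected
    print(f"LCL = {lcl}: P(Type I) = {type_i:.2f}, P(Type II) = {type_ii:.2f}")

# Lowering the limit reduces the Type I risk but raises the Type II risk.
# Raising the limit toward 38 drives the Type II risk to zero, as the text
# notes, while the Type I risk continues to grow.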
Most of the remainder of this dissertation will be devoted to an evaluation of various techniques for striking a balance between the probabilities of committing each of these errors. The intent of this evaluation is, of course, to discover the technique which yields the optimum balance.

The argument for the use of statistical procedures to determine control limits has been built around the premise that chance factors are expected to cause variances. The illustrations just covered, which involve the time required for table assembly and the time needed to shave, indicate the operation of chance on the labor efficiency variance. Since the extent of the possible usage of statistical techniques for variance analysis is dependent upon the extent to which chance factors cause variances, it is now appropriate to survey the other variance classifications to determine for each the extent to which chance is operative. Because the amount of the variance, and whether it is favorable or unfavorable, depends upon how the standards are established, a brief discussion of the setting of standards will preface the examination of the presence of chance in the variance classifications.

Setting Standards

Standards fall into at least three categories:

1. The theoretical, ideal, or perfection standard.
2. The attainable good performance standard.
3. The average past performance standard.15

It is not expected that the ideal standard "will be attained in actual operations, but the standards are set up as goals toward which to work in the attempt to improve efficiency."16 The objection to this type of standard is that employees without an objective that they can reasonably be expected to meet may "cease to pay serious attention to the standards."17

The weakness of standards based on average past performance "lies in the implicit assumption--most often wrong--that what has happened in the past is what should continue to happen."18 In fact, the Lybrand Newsletter recently reported that "experience of repeated instances indicates that work pace is rarely more than 60 per cent of what ultimately proves to be a reasonable standard."19

15. National Association of Cost Accountants, How Standard Costs . . . , 8.
16. Ibid.
17. Ibid.
18. Richard L. Smith, Management Through Accounting (Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962), 397.
19. Lybrand, Ross Brothers, and Montgomery, "Reducing White Collar Costs," The Lybrand Newsletter (November, 1964), 50.

Consequently, standards based on attainable good performance are most effective. Good attainable performance should be established by chemists, engineers, and foremen who are familiar with the material and manpower requirements. The values are determined by a series of observations, revisions, and further observations until the mean of the performances coincides with what the experts consider to be good attainable performance. Performances attributed to assignable causes are not included in the set of values which are averaged in arriving at the standard. Accordingly, the standard represents an average, but an average of current performances based on capabilities rather than an historical average.

It is possible to have a satisfactory standard but too much variability among the performance results. Continued performance, observation, and revision can reduce this variability. Also, workers become more uniform as they become familiar with their new tasks.
Once the variability has been reduced as far as deemed profitable, it is the accountant's task to measure results and to highlight significant deviations. With the standards set according to the procedure just described, favorable variances will be expected to occur with the same frequency as unfavorable ones. Each will occur one half of the time when the operation is in control.20

20. Usually, unfavorably significant variances will occur more frequently than favorably significant ones. However, significant variances are not included in the set of values which are averaged to obtain the standard. Moreover, the operation is not in control when significant variances are present.

Chance Influences on Individual Variance Classifications

Material Quantity Variance

Specifications are established for the number of pages in a book, the board feet of wood in a piece of furniture, the pounds of metal in a typewriter, and the square feet of fabric in a suit. Thus, it might appear that the quantity of material used is not influenced by chance. It is doubtful, however, that the same amount of varnish is used for each piece of furniture (of the same style), or that the same amount of glue is used in assembly. A. C. Rosander21 reports chance variations in the number of grains of material used in the manufacture of apparently identical stockings. Accordingly, it seems reasonable that the number of grains of material used in any fabric might vary.

Perhaps more important than its influence on the amount of material appearing in good units is the effect chance has on the number of units spoiled while in process, the number rejected as finished goods, and the number that can be sold as seconds. Conventional standards properly allow for the expected amounts of these factors as well as the expected amount of material shrinkage. Sometimes separate variance accounts are established to isolate these various influences. What is now needed is the application of probability statistics to analyze the material quantity variance and its subdivisions.22

21. A. C. Rosander, Industrial Quality Control, XI, No. 8 (May, 1955), 26.
22. For a control chart application for the analysis of material quantity variances see Dewey W. Neal, NAA Bulletin, XLII, No. 9 (May, 1961), 73-78.

As with the labor efficiency variance, control over the material quantity variance is truly effective only if it is applied at the performance level. Aggregate account balances are subject to the same average-out, off-set, and timing problems that hinder adequate control over labor efficiency.

Labor Rate Variance

The labor rate variance does not often fluctuate randomly. Wage rates are generally negotiated and stated in labor contracts. When the rates change, the standards should be revised. Variances may arise from using a different labor classification than that established for a job or from using overtime. Both actions may be desirable in the short run in certain circumstances, but they should, nevertheless, be identified and explained. Accordingly, statistical procedures have extremely limited usefulness in analyzing the labor rate variance.

Material Price Variance

Similarly, the material price variance would not often be expected to occur randomly. The prices of many materials are administered. In cases where prices vary between suppliers, it is the responsibility of the purchasing department to make the most judicious purchases.
Gillespie points out that in addition to negligence on the part of the purchasing department, a material price variance could reflect:

1. Failure of factory to anticipate needs.
2. Rush order accepted by sales department.
3. Transportation strike.
4. Error in forecasting costs.23

As with the labor rate variance, it is advantageous to identify these causes so that responsibility can be established. Statistical variance analysis can only be helpful for prices which fluctuate randomly, such as those that truly reflect the conditions of supply and demand.

23. Cecil Gillespie, Standard and Direct Costing (Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962), 63.

Variable Overhead Efficiency Variance

Overhead expenses can usually be identified with a particular cost center and, in this manner, responsibility for the various costs can be established. On the other hand, "physical standards exist for very few elements of factory overhead in the same sense that physical standards exist for direct materials and direct labor."24 Therefore, the efficiency variance is usually analyzed in monthly departmental reports which represent a summary of the departmental expenses for the entire month. Thus the average-out, off-set, and timing problems are present in this analysis. Keller and Ferrara report:

The summary nature of these variances for all practical purposes eliminates any control features, except perhaps the possibility of illustrating the overall profit realization of waste in factory overhead which could bring forth a fuller realization of waste and thus yield an important pressure for cost control on the prior lines of defense.25

24. I. Wayne Keller and William L. Ferrara.
25. Ibid., 325.

It is suggested that physical standards be established for overhead in order to bring about more adequate control. The National Association of Cost Accountants' report How Standard Costs Are Being Used Currently states:

With overhead it is especially important that control be exercised at the source of the cost. After various prorations or distributions have been made the results of excess spending become diffused and it is virtually impossible to ascertain how much inefficiency has cost or who was responsible for it.26

Physical standards could be expressed in terms of the time required to clean designated areas, to remove six inches of snow from the parking lot, to set up a machine, etc. Performance should then be checked on a sample basis by superiors. Phil Carroll suggests that time studies with incentives should be applied to indirect work. He writes:

You need some kind of work standards to control costs. Either you set standards or your people set their own. The difference is large. . . . It amounts to about 67 per cent excess costs when employees decide how much work to do. The 67 per cent is the difference between the 100 per cent you pay for and 60 per cent experts say you get on 'day work.'27

The aggregate overhead variances should not then be relied upon to control overhead costs. Their purpose should be relegated to (1) showing the total impact of inefficiencies, (2) reviewing the adequacy of control, and (3) explaining the difference between budgeted and actual costs for the period.

26. National Association of Cost Accountants, 45.
27. Phil Carroll, Overhead Cost Control (New York: McGraw-Hill Book Company, 1964), 79.

Budget Variance

The budget variance results from subtracting actual fixed factory overhead from budgeted fixed factory overhead.
Some expense classifications, such as rent and salaries, are arranged by contract; others, such as depreciation, are decided by company policy; and still others, such as insurance rates and taxes, are decided by outside agencies. Chance is not operative for any of these kinds of expenses; therefore, statistical procedures are not useful for analyzing any resultant variances. All such variances should be explained. Chance may contribute to some variation28 in the fixed portion of heat, light, and power, and therefore admit the possible usefulness of statistical procedures. On the other hand, since the aggregate account does not pinpoint the source of trouble, control may best be exerted by checks to see that machines are not running when they are not being used, that rooms are not overheated, that lights are turned off when the rooms are not in use, etc.

28. The term "fixed" does not mean that this portion of these expenses does not vary, but only that they do not vary in respect to productive activity.

Volume Variance

To the extent that a pre-determined volume will never be precisely attained, chance is expected to operate on the capacity utilized. Consequently, statistical procedures can be helpful in analyzing the volume variance.

Non-Manufacturing Variances

Some have suggested that standards be established for clerical work. Charles H. Grady, Jr. contends that the clerical supervisor does not spend as much time planning and controlling the activities of his people as the factory supervisor. He offers this as partial explanation for the "continued trend toward larger proportions of clerical workers in relation to production workers."29 In the context of reducing white collar costs, The Lybrand Newsletter reported that "without a rather clear knowledge of output per man, idle time will indeed tend to be invisible on the principle of Parkinson's Law: work expands to fill the time available for its execution."30

Neither the Grady nor the Lybrand article recommended statistical variance analysis; but John L. Gable of the Industrial Engineering Division of Collins Radio Company inquired: "Would it be worthwhile for us to apply quality control procedures and techniques to some of our office and paper work functions?"31 He suggests that the routine paper work be organized and subjected to time and motion studies so that standards could be established. Since chance affects these performances much the same as it affects factory labor, statistical variance analysis should be equally applicable. Such a program should be initiated by experimenting first with a few of the most routine functions.

29. Charles H. Grady, Jr., "Reducing Clerical Costs Through Improved Manpower Utilization," N.A.A. Bulletin, XLVI, No. 7 (March, 1965), 42.
30. Lybrand, Ross Brothers, and Montgomery, "Reducing White Collar Costs," The Lybrand Newsletter (November, 1964), 3.
31. John L. Gable, "An Internal Audit Using Receiving Inspection Techniques," Industrial Quality Control, XIV, No. 7 (January, 1958), 15.

There are some non-manufacturing expense classifications for which physical performance standards are not relevant. For some, particularly salaries, control involves checking adherence to the budget. Statistical procedures are not helpful in analyzing variances from expense classifications that are not affected by chance.

Conclusions

The accountant's concept of control is limited because it does not give formal recognition to chance influences.
Once chance is recognized, the logic behind using statistical tools to determine the significance of variances is evident from the fact that probability statistics is concerned with evaluating the patterns of chance influences.

An examination of individual variance classifications revealed that chance influences definitely cause variations in labor and material usage. Chance is also operative on many elements of the overhead efficiency variance and the volume variance. Moreover, it causes variations in many non-manufacturing costs such as clerical work. Consequently, statistical procedures are helpful for determining the significance of variances associated with the above items. Contrariwise, chance is not usually expected to have an effect on the material price, the labor rate, or the budget variances; therefore, statistical procedures would not be helpful in analyzing these variances.

Since statistical tools are helpful in analyzing some important variance classifications for which chance is expected to cause the variances, it is now worthwhile to find those statistical tools which are most helpful for variance control.

CHAPTER IV

STATISTICAL CONTROL TECHNIQUES--EVALUATION OF THREE PROPOSED METHODS

Existing accounting literature involving statistical techniques for variance control is concerned mainly with an application of basic control chart procedures that were originally developed in 1924 by W. A. Shewhart1 of the Bell Telephone Labs for purposes of quality control. While this method considers the distribution of chance performances in selecting the control limits, it has not typically considered the opportunity costs associated with investigative decisions. Recently, two approaches which have been identified as the Bierman, Fouraker, and Jaedicke Approach and the McMenimen Approach have formally considered these opportunity costs in their models for determining the appropriate control limits. This chapter evaluates all three methods for the purpose of isolating the strengths and weaknesses of each. The reader should refer to Appendix A for a bibliography of accounting literature pertaining to statistical techniques for variance control.

1. W. A. Shewhart.

Hypothesis Testing

Throughout the remainder of this dissertation frequent reference will be made to the term "hypothesis testing." An hypothesis is simply any statement that is capable of being tested. The hypothesis may be expressed in any of the following forms:

1. The operation is in control.
2. The standard is the mean of current performances.
3. The variance is attributed to chance.
4. The process has not changed because the same chance factors are contributing to variability among performances.
5. No assignable causes are present.

The term "the hypothesis" will be used to imply all of these forms of statement. Acceptance of the hypothesis indicates that the test failed to provide sufficient evidence for rejecting these statements, so that there is no reason for further investigation. Rejection of the hypothesis indicates that the sample variance would rarely be as large as that obtained if the hypothesis were true. Rejection, then, indicates negation of the above statements. It signals the need for:

1. An investigation to determine the assignable cause.
2. Action to eliminate the assignable cause or to revise the standard.
The Basic Control Chart Approach

The development of the control chart uses a combination of the theory of probability, which was formulated by Pascal and Pierre Fermat in 1654, and the subsequent theory of sampling, which is dependent upon probability theory.2 The control chart is really just a graphic presentation of the results of operations. It is used in situations where the same hypothesis must be tested over and over again. Hence, it is useful for accounting variance control where, ideally, the hypothesis that no assignable causes are present should be tested for frequent performance values. A numerical example will be used to illustrate this approach.

2. Douglas H. W. Allan, Statistical Quality Control (New York: Reinhold Publishing Corporation, 1959), 129.

Assume that a standard of 245 minutes has been established for the time it should take to butcher a cow. This standard was established after all performances for some recent period of time were investigated. All performances with assignable causes were eliminated. Only the values pertaining to chance performances were averaged to arrive at the standard. The probability distribution on which this standard is based is represented in Table 3. The resulting standard is considered to represent good attainable performance.

TABLE 3.--Probability distribution of chance performances

Number of Minutes                      Probability
at least 220 but less than 225             .005
at least 225 but less than 230             .020
at least 230 but less than 240             .225
at least 240 but less than 250             .500
at least 250 but less than 260             .225
at least 260 but less than 265             .020
at least 265 but less than 270             .005
                                          1.000

The format of a control chart is depicted in Figure 3. The vertical scale contains a central line which represents the standard, or the mean of the chance performances. The upper control limit is represented by the letters UCL and the lower control limit by the letters LCL. The horizontal scale simply indicates the time sequence in which performances are tested.

FIGURE 3.--Illustration of a control chart, showing the central line (the standard), the upper control limit (UCL), the lower control limit (LCL), and plotted performance values.

The values of individual performances3 are plotted on the control chart in the manner illustrated in Figure 3, and a decision to accept or reject the hypothesis is made according to the following general decision rules:

1. Accept the hypothesis for all observations falling between the upper and lower control limits.
2. Reject the hypothesis for all observations yielding values higher than the upper control limit or lower than the lower control limit.

3. The means of samples of four or five performances may also be plotted. In this event, of course, the control limits are based on means with this sample size. In order to simplify this presentation the testing of individual performances is assumed. Tests involving small samples will be introduced in Chapter VI.

The chart recognizes variability in that performances need not conform to a single value to be considered in control. It also recognizes stability because controlled performances may vary only within the control limits.

Actually, it is the method by which the control limits are determined that this writer has identified as the control chart approach. There is, however, no reason why the control chart could not be used to portray results regardless of the approach used to determine the control limits.

Two elementary observations may be drawn from the distribution of chance performances in Table 3. First, any performance of less than 220 minutes has always been identified with a favorable assignable cause. Second, any performance
of more than 270 minutes has always been the result of an unfavorable assignable cause. The hypothesis can, therefore, be automatically rejected for observations of less than 220 minutes or of more than 270 minutes without the risk of incurring a Type I error. Control limits set at 220 and 270 would, however, carry an unusually high probability of incurring a Type II error.

The control limits are generally set at points which permit a specified probability of committing a Type I error. The probability of incurring a Type I error for any limits is called the level of significance. Suppose .05 is chosen as the level of significance. Table 3 shows that the control limits would be 230 and 260 because 2-1/2 per cent of the chance performances are less than 230 and 2-1/2 per cent are over 260. The probability of a Type I error is .05--the same as the level of significance. By the same approach, if .01 is chosen as the level of significance, the control limits would be 225 and 265 because 1/2 per cent of the chance performances are less than 225 and 1/2 per cent are over 265. The probability of a Type I error is now only .01; but, of course, the probability of committing a Type II error is now greater than when the level of significance was .05, because more hypotheses, true as well as false, will be accepted with a .01 level of significance than with a .05 level.

Probability distributions, like Table 3, cannot give the control limits associated with a given level of significance unless the control limit happens to be one of the class limits of the distribution. For example, the control limits corresponding to the .03 level of significance would appear at those points where 1-1/2 per cent of the chance performances were less and 1-1/2 per cent of the chance performances were greater. Since the class intervals do not occur at these values, from reading Table 3 one can only learn that the lower control limit is between 225 and 230 and that the upper control limit is between 260 and 265.

To help pinpoint the control limits it is generally assumed that the distribution of chance performances is a normal one. Normality is frequently assumed in statistical work, but it is rarely rigorously fulfilled. Since statistical decisions are based upon the laws of probability, inferences regarding the shape of a probability distribution are often necessary. If the shape of a given distribution does not differ significantly4 from normality,
Other distributions 5These figures can be verified by solving the fol- lowing formulas for the lower control limit, LCL, and the upper control limit, UCL: Z = LCL - p Z = UCL - u 0 0' where: Z represents the number of standard deviation units between LCL or UCL and p u is the standard or the mean of the chance performances a is the standard deviation of the distribu- tion of chance performances. Substitution yields the following: _ LCL - 245 _ UCL - 245 —1.96 — 8.06 1.96 — 8.06 LCL s 229.2 UCL = 260.8 The Z value of 1.96 can be obtained from any table of Nor— man Curve Areas. ‘ The table used by this writer pertained only to the area on one side of the mean. Since LCL in this ex— ample is to be that value which is greater than only 2-1/2 per cent of all chance values, 1.96 is that Z value cor- responding to an area of .475 (.5 - .025) found in the body of the Table of Normal Curve Areas. (The table is constructed in such a way that it measures the area from u to any specified Z value.) 59 can be used in cases where the assumption of normality is completely unrealistic. With the assumption of normality it is possible to calculate the control limits corresponding to any level of significance. For example, solution of the formula be— low shows the control limits for the .03 level of signifi- cance to be 227.51 and 262.49 respectively. The meanings of the symbols are indicated in footnote 5. _ LCL — u _ UCL - p Z - ———6——— Z ~ ———a——— _2.17 _ LCL - 245 2.17 = UCL8-Og45 8.06 ' LCL = 227.51 UCL = 262.49 The Z value of 2.17 corresponds to the area of .485 (.5 — .015) found in the body of the Table of Normal Curve Areas. Since the risks of error cannot be eliminated, the goal is to establish the control limits at those values which strike an economic balance between the possible risks associated with the two kinds of error. In this country, however, it is customary to use 2 or 3 sigma control limits. That is, the upper and lower control limits are drawn either at 2 or 3 standard deviations above and below the central line. The 2 sigma limit corresponds to the .056 level of significance and 3 sigma limit corresponds to the .0026 6More accurately, the .05 level of significance is associated with a 1.96 sigma limit. The 2 sigma limit cor— responds to a .0456 level of significance (.5 — 4772 = .0228 x 2 = 0456). 60 level of significance. (The reader can easily check for himself that 99.74 [.4987 XZ] per cent of the area under the normal curve lies between Z = —3 and Z = 3.) The main objection to this customary practice is that the level of significance is arbitrarily selected7 without consideration of the other factors necessary to establish an economic balance. As Freund and Williams readily admit, "the use of 3— sigma control limits does not provide any guarantee, or for that matter any informa— tion, about the probabilities of committing Type II error. . . . Nevertheless,‘ it is their opinion that "the use of 3- sigma control limits can be justified on the grounds of long experience and satisfactory performance in practice, and it is recommended that they be used unless there are very good reasons why other control limits should be pre— ferred."8 It is this writer's contention, however, that without occasional tests of each control chart application, one cannot be sure that the 3 sigma, or for that matter the 2 sigma, control limits are satisfactory. 
At least, without such tests, one cannot be sure that they establish 7It should be emphasized that this approach is still not as arbitrary as that conventionally employed by accountants. At least this approach considers the dis- tribution of chance performances and permits an evaluation of the probability of committing a Type I error. 8John E. Freund and Frank J. Williams, Modern Busi— ness Statistics (Englewood Cliffs, New Jersey: Prentice- Hall, Inc., 1958), 478. 61 the best control limits for the given application. Such a test will be made in Chapter VI. It is hoped that it can be shown that, under certain circumstances, the customary 2 or 3 sigma levels do not provide the most adequate con— trol over accounting variances. Actually, the control chart is not essential for testing hypotheses. The control limits could be determined and the decision rules could be applied without plotting the values on the chart. The chart, however, serves as a visual guide to show the adequacy of control to both the worker and to management. Moreover, this visual presenta— tion makes it easier to employ the theory of runs which serves to reduce the probabilities of not detecting a change in the cause system (i.e., to reduce the probabilities of committing a Type II error). A run is "any consecutive sequence of points falling above or below the process aver— age."9 Probability statements can be constructed concern- ing the likelihood of runs of various magnitudes. If the probability of a given run is "small," an investigation is indicated despite the fact that all points fall within the control limits. 9Richard M. Cyert and Justin H. Davidson, Statis— tical Sampling for Accounting Information (Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962), 183-185. 62 0 has indicated the following probabilities l Cowden concerning the number of successive points which are ex- pected to fall on the same side of the central line: Sequences Probability 7 straight .016 10 out of 11 .012 12 out of 14 .013 14 out of 17 .013 16 out of 20 .012 Because these probabilities are all in the neighborhood of .01 the sequences are often used in addition to the control limits to indicate a shift in the parameter. Tests are also constructed which indicate the minimum number of runs to be expected in a long series of observations.11 Although the Basic Control Chart approach does not usually consider the probability of committing a Type II error, it is possible to evaluate such probabilities under loDudley J. Cowden, Statistical Methods in Quality Control (Englewood Cliffs, New Jersey: Prentice—Hall, Inc., 1957), 231-232. llFor more information on the theory of runs, refer to the following sources: Freund and Williams, 272—276. Eugene L. Grant, Statistical Quality Control (New York: McGraw—Hill Book Company, Inc., 1952), 129. F. Mosteller, "Note on Application of Runs to Con— trol Charts," Annals of Mathematical Statistics, XII (1941), 229. P. S. Olmstead, "Distribution of Sample Arrange— ments for Runs Up and Down," Annals of Mathematical Statis— tics, XVII (1946), 24. S. Swed and C. Eisenhart, Tables for Testing Ran— domness of Sampling in a Sequence of Alternatives," Annals of Mathematical Statistics, XIV (1943), 66. _“——‘ 63 conventional statistical techniques. The probability of a Type II error is a function of both the unknown pOpula- tion mean (hereafter to be called the parameter) and the level of significance.12 Table 4 shows these probabilities for selected parameter values for a .05 level of signifi- cance. 
They have been calculated under the assumption that the individual performances are normally distributed for each of the parameter values.

12. In cases where the test concerns sample means, rather than individual performances, the probability of a Type II error depends also upon the sample size. The probability of a Type II error can be reduced, for a given level of significance, if the sample size is increased.

TABLE 4.--Probability of error for various parameters given single observations and a .05 level of significance

Parameter    Probability of Type II Error    Probability of Type I Error
  210                  .0066                              0
  215                  .0314                              0
  220                  .1075                              0
  225                  .2676                              0
  230                  .5000                              0
  235                  .7314                              0
  240                  .8859                              0
  244                  .9352                              0
  245                  0                                 .05
  246                  .9352                              0
  250                  .8859                              0
  255                  .7314                              0
  260                  .5000                              0
  265                  .2676                              0
  270                  .1075                              0
  275                  .0314                              0
  280                  .0066                              0

Figure 4 illustrates the probability of committing a Type II error for the alternative parameter of 240 minutes. The top curve shows the standard as the mean and the control limits of 230 and 260. The shaded area, called the critical region, indicates the values for which the hypothesis would be rejected. The lower curve shows that the parameter has changed to 240. The hypothesis that the parameter is 245 will, however, be erroneously accepted if the test performance falls between 230 and 260. The probability of this happening equals the unshaded area under the lower curve. This area can be computed by converting each control limit into standard units and using the table of normal curve areas to find the corresponding area under the curve. The calculations appear below:

Area between the control limits and the alternative parameter of 240:

Z = (LCL - 240) / σ = (230 - 240) / 8.06 = -1.24     area = .3925
Z = (UCL - 240) / σ = (260 - 240) / 8.06 =  2.48     area = .4934

Probability of committing a Type II error = .3925 + .4934 = .8859

FIGURE 4.--Illustration of the determination of the probability of a Type II error: the upper curve is centered on the standard of 245 with control limits at 230 and 260 (.025 in each tail); the lower curve is centered on the alternative parameter of 240, with areas of .3925 and .4934 between the control limits.

The probabilities of committing a Type II error for all other parameter values are computed in a similar manner.

Two general observations may be made from Table 4. First, only one type of error is possible for each parameter. For any value of the parameter other than the standard, acceptance results in a Type II error; rejection is a correct decision. If the parameter value and the standard coincide, rejection, which is a Type I error, will occur with a probability equal to the level of significance. For this event acceptance is a correct decision--a Type II error is impossible. The second observation is that the probability of a Type II error is very high for parameter values close to the standard and becomes successively smaller for parameter values as they move away from the standard. In other words, small shifts in the parameter value are rarely detected, whereas large shifts are almost always detected. This is counter-balanced by the fact that the error in failing to detect small shifts is not costly relative to the error in failing to detect large ones. Consideration of the costs of these errors will be taken up in Chapter V.

In any given situation, the value of the parameter is unknown. Consideration of the figures in Table 4 permits a cursory evaluation of the level of significance. If the probability of a Type II error is considered to be too high for a parameter that is judged to be serious, the probability can be reduced by using a higher level of significance.
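The entries in Table 4 can be verified directly. The following sketch is illustrative only and is not part of the original text; it assumes the standard of 245 minutes, the standard deviation of 8.06, and the .05-level control limits of 230 and 260, and it uses scipy's normal distribution function in place of a printed table of normal curve areas.

# Illustrative sketch: probability of a Type II error for selected
# alternative parameter values, under the normality assumption of Table 4.
from scipy.stats import norm

sigma = 8.06              # standard deviation of chance performances
lcl, ucl = 230.0, 260.0   # control limits at the .05 level of significance

def type_ii_probability(parameter):
    # Probability that a performance from the shifted distribution still
    # falls between the control limits (i.e., the hypothesis is accepted).
    return norm.cdf(ucl, parameter, sigma) - norm.cdf(lcl, parameter, sigma)

for parameter in (210, 225, 240, 250, 260, 275):
    print(f"parameter {parameter}: P(Type II) = {type_ii_probability(parameter):.4f}")

# For the alternative parameter of 240 this reproduces the .8859 computed in
# the text (.3925 below the shifted mean plus .4934 above it).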
The fact that a higher level of significance will result in lower probabilities for the Type II error, and vice versa, can be visualized by referring to Figure 4. If a higher level of significance is selected, the lower control limit will be higher than 230 and the upper control limit will be lower than 260. This will increase the shaded or critical region under both curves. Consequently, the unshaded region in the lower curve, representing the probability of a Type II error, will be less. Conversely, the selection of a lower level of significance will reduce the critical region under both curves and increase the unshaded region which in the lower curve portrays the probability of a Type II error.

This inverse relationship between the level of significance and the probability of a Type II error can also be observed in Table 5. The probabilities for the .01 level of significance were calculated in the same manner as the probabilities for the .05 level that were previously listed in Table 4. The manner of calculation is illustrated in Figure 4. The reader will note that for any parameter value the probability of a Type II error is greater for the .01 level of significance than for the .05 level.

TABLE 5.--Comparison of the probabilities of a Type II error for various parameter values under different levels of significance

               Probability of a Type II error
Parameter      .05 Level       .01 Level
  210            .0066           .0314
  215            .0314           .1075
  220            .1075           .2676
  225            .2676           .5000
  230            .5000           .7324
  235            .7314           .8925
  240            .8859           .9685
  244            .9352           .9862
  245            0               0
  246            .9352           .9862
  250            .8859           .9685
  255            .7314           .8925
  260            .5000           .7324
  265            .2676           .5000
  270            .1075           .2676
  275            .0314           .1075
  280            .0066           .0314

Calculations for the above comparisons could be made for any desired number of levels of significance. These comparisons, however, do not automatically indicate the level of significance, although they do provide more objectivity than the arbitrarily selected level. The probabilities of greatest concern are those associated with alternative parameters (representing changes in the cause system) which would engender "serious" losses if they were not detected. The goal is to select a level of significance which will give a "low" probability of a Type II error for such alternative parameters without making the level of significance too "high." While this method makes use of more objective evidence than the arbitrarily selected level of significance, it supplies no objective way to evaluate this evidence. Without specific consideration of the costs of each type of error, or without quantifying what is meant by the term "serious loss," both the selection of an alternative parameter and the final balance between the level of significance and the probability of a Type II error for an alternative parameter, once it is specified, are haphazardly determined.

It would appear that this approach is superior to the methods conventionally used by accountants because it considers the distribution of chance performances, which permits an evaluation of the level of significance. It is unfortunate, however, that the level of significance is generally chosen arbitrarily between .001 and .05. The probability of a Type II error is not often considered, although it can be evaluated for any given level of significance and alternative parameter as indicated in the above discussion.
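Under the same normality assumption, the control limits for any desired level of significance can be computed directly rather than read from a table. The sketch below is illustrative only and is not part of the original text; it again assumes the standard of 245 and the standard deviation of 8.06, and it uses scipy's inverse normal function in place of a table of normal curve areas.

# Illustrative sketch: two-sided control limits for any level of significance.
from scipy.stats import norm

mu_standard = 245.0   # standard (mean of chance performances)
sigma = 8.06          # standard deviation of chance performances

def control_limits(alpha):
    # alpha is the desired probability of a Type I error (level of significance).
    z = norm.ppf(1 - alpha / 2)          # e.g., about 1.96 for alpha = .05
    return mu_standard - z * sigma, mu_standard + z * sigma

for alpha in (.05, .03, .01, .0026):
    lcl, ucl = control_limits(alpha)
    print(f"alpha = {alpha}: LCL = {lcl:.2f}, UCL = {ucl:.2f}")

# alpha = .05 gives roughly 229.2 and 260.8, and alpha = .03 gives roughly
# 227.5 and 262.5, matching the limits worked out earlier in the chapter;
# alpha = .0026 corresponds to the customary 3 sigma limits.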
The Bierman, Fouraker, and Jaedicke Approach

Bierman, Fouraker, and Jaedicke13 consider the opportunity costs associated with each decision in addition to the probability distribution of chance performances. The basic features of this model are illustrated in Table 6, which is a revised version of the Conditional Cost Table used by Bierman, Fouraker, and Jaedicke.14 The following notation is used:

P is the probability that the hypothesis is true (i.e., that the deviations are caused solely by chance) given the occurrence of an unfavorable variance.

1-P is the probability that the hypothesis is false given the occurrence of an unfavorable variance.

C is the cost of an investigation.

L is the present value of the expected opportunity cost resulting from not taking corrective action on the basis of the present deviation.

Bierman, Fouraker, and Jaedicke would use the following table to analyze unfavorable variances only. A slightly different approach is used to analyze favorable variances. It is, of course, understood that no further action will result from accepting the hypothesis, but that rejection signals the need for an investigation.

13. Harold Bierman, Jr., Lawrence E. Fouraker, and Robert K. Jaedicke, Quantitative Analysis for Business Decisions (Homewood, Illinois: Richard D. Irwin, Inc., 1961), 108-125. See also Harold Bierman, Jr., Lawrence E. Fouraker, and Robert K. Jaedicke, "A Use of Probability and Statistics in Performance Evaluation," Accounting Review, XXXVI, No. 3 (July, 1961), 409-417, and Harold Bierman, Jr., Topics in Cost Accounting and Decisions (New York: McGraw-Hill Book Company, Inc., 1963), 15-23.

14. Instead of using the term "events," Bierman, Fouraker, and Jaedicke refer to "states." State one they define as a variance attributed to random, noncontrollable causes; state two is a variance attributed to nonrandom, controllable causes. In this writer's terminology, state one is the same as the event that the hypothesis is true, and state two is identical to the event that the hypothesis is false.

TABLE 6.--Conditional cost table

                              Act: Accept Hypothesis          Act: Reject Hypothesis
Event          Prob.       Conditional       Expected       Conditional       Expected
                           opportunity cost  cost           opportunity cost  cost
True hyp.        P               0              0                 C              CP
False hyp.      1-P              L            L(1-P)              C            C - CP
Expected cost of act                          L(1-P)                              C

The following explanation describes how the symbolic opportunity costs have been derived for various combinations of act and event. For the combination of act--accept and event--true hypothesis, the opportunity cost is zero because acceptance is a correct decision. For the combination of act--accept and event--false hypothesis, the opportunity cost is equal to L because the hypothesis should be rejected. If act--reject is chosen, the opportunity cost is C regardless of the event because the cost of the investigation is the same whether or not the decision is correct.15 The values in the expected column for each act are obtained by multiplying the conditional opportunity costs for each combination of act and event by their respective probabilities. Only the totals in the expected columns have any meaning. These totals represent the expected cost of each act. The act with the lowest expected opportunity cost should be chosen.

15. A fallacy in this logic will be noted in a subsequent subsection.
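The decision rule embodied in Table 6 can be stated compactly. The following sketch is illustrative only and is not part of the original text; the dollar figures are hypothetical and merely show the comparison of L(1 - P), the expected opportunity cost of accepting, with C, the cost of investigating.

# Illustrative sketch of the conditional cost table's decision rule.
def expected_costs(p_chance, cost_investigation, loss_if_uncorrected):
    # Expected opportunity cost of each act.
    accept = loss_if_uncorrected * (1 - p_chance)   # L(1 - P)
    reject = cost_investigation                     # C, whatever the event
    return accept, reject

def decide(p_chance, cost_investigation, loss_if_uncorrected):
    accept, reject = expected_costs(p_chance, cost_investigation, loss_if_uncorrected)
    return "accept hypothesis" if accept <= reject else "reject and investigate"

# Hypothetical values: a $40 investigation cost, a $900 loss if an
# off-standard condition goes uncorrected, and two possible values of P.
for p in (.98, .90):
    print(f"P = {p}: {decide(p, 40, 900)}")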
That is, if L(1-P), the expected opportunity cost of accepting the hypothesis, is less than C, the expected opportunity cost of hypothesis rejection, the hypothesis should be accepted; but if L(1-P) is greater than C, the hypothesis should be rejected. When the expected costs of each act are equal, the decision maker is just indifferent between the two acts. Bierman, Fouraker, and Jaedicke equate the expected costs of the two acts to obtain the following formula for the critical probability, Pc:

C = L(1 - Pc)
Pc = (L - C) / L

It is assumed that C is less than L. If P is larger than Pc the hypothesis is accepted; if P is smaller than Pc the hypothesis is rejected. Some general observations may be made from the above formula. When C is very small relative to L, Pc is close to 1 and most variances will be investigated. As C approaches L, the profitability of investigation decreases.

The following numerical example presented by Bierman16 illustrates the mechanics of the model. The following assumptions are made:

1. The yearly budget for a certain expenditure is $10,000.
2. The actual expenditure is $13,000.
3. The standard deviation is $6,000.
4. The cost of an investigation is $40.
5. The condition, if off-standard and not detected, would continue for four years.
6. The discount rate is 10 per cent.

16. Bierman, 22-23.

It is now necessary to calculate the conditional probability that a chance expenditure will deviate by $3,000/$6,000 = .5 standard deviations or more from its expected value, given that the deviation is an unfavorable one. The formula for calculating any conditional probability is:

P(B/A) = P(AB) / P(A)

In this case, B is the event that the deviation is at least .5 standard deviation units and A is the condition that the deviation is unfavorable. The probability that a deviation is unfavorable and at least .5 standard deviation units from its mean, P(AB), is found from a table of normal curve areas to be .31. Therefore, the conditional probability of the $3,000 deviation is:

P(B/A) = .31 / .5 = .62

The critical value is:

Pc = (L - C) / L = ($9,000 - $40) / $9,000 = .996

L is determined by multiplying the $3,000 deviation by 3--the approximate present value of $1 conveyed per period for four periods at a 10 per cent interest rate. Since P is less than Pc, the hypothesis is rejected and an investigation is undertaken.

Bierman, Fouraker, and Jaedicke depict the decision process with a cost control decision chart similar to that shown in Figure 5. The curve, or critical path, can be drawn by plotting several combinations of Pc with its respective variance. With this chart, the calculation of Pc for every test can be avoided. If P lies above the critical path, the hypothesis is accepted. Otherwise, it is rejected.

FIGURE 5.--Cost control decision chart: the vertical axis is the probability that an unfavorable variance is the result of chance causes (0 to 1.0), the horizontal axis is the amount of the unfavorable variance, and the critical path separates the accept-hypothesis region (above) from the reject-hypothesis region (below).

Analysis of the Bierman, Fouraker, and Jaedicke Approach

The Relationship between Pc and the Level of Significance

While Bierman, Fouraker, and Jaedicke do not identify it as such, Pc is the same as the level of significance. It is interesting to note that this critical value becomes larger as the amount of the unfavorable variance becomes larger. For most variances, Figure 5 shows Pc to be substantially higher than the conventional .05 or .001 values selected for the level of significance.
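The figures behind this observation can be reproduced with a few lines of arithmetic. The sketch below is illustrative only and is not part of the original text; it assumes the example's $10,000 budget, $13,000 actual expenditure, $6,000 standard deviation, $40 investigation cost, and rounded present-value factor of 3, and it uses scipy's normal distribution in place of a table of normal curve areas.

# Illustrative sketch reproducing the Bierman, Fouraker, and Jaedicke example.
from scipy.stats import norm

budget, actual, sigma = 10_000, 13_000, 6_000
cost_investigation = 40
pv_factor = 3                  # approx. present value of $1 per year for 4 years at 10%

deviation = actual - budget    # $3,000 unfavorable
z = deviation / sigma          # .5 standard deviation units

p_ab = 1 - norm.cdf(z)         # unfavorable AND at least .5 sigma: about .31
p = p_ab / 0.5                 # conditional probability P(B/A): about .62

loss = deviation * pv_factor   # L = $9,000
p_critical = (loss - cost_investigation) / loss   # about .996

decision = "accept" if p > p_critical else "reject and investigate"
print(f"P = {p:.2f}, Pc = {p_critical:.3f}, decision: {decision}")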
In fact, with the above calculations a Type I error will be made 99.6 per cent of the time. The Pc indicates that 99.6 per cent of all chance variances will be investigated.

One explanation for this extremely high Pc resulting from Bierman's calculations is that his example pertained to yearly variances, whereas the .05 or .001 levels which have been used in quality control work generally pertain to analyses of individual performances. Pc is higher for a yearly analysis because C is likely to be smaller in relation to L than it would be for an analysis of individual performances. The reason for this is that there is a certain minimum cost of an investigation, so that one would not expect the investigation cost of a yearly variance to be proportionately higher than the investigation cost of an individual performance. These higher Pc's which will result from yearly and even monthly analyses illustrate even more dramatically the danger in applying an arbitrary level of significance.

Control of Performance vs. Summary Expense Classifications

It has previously been noted that control is more timely, and that the source of off-standard conditions can more easily be identified, by analysis at the performance and operational levels. Nevertheless, the basic procedure is the same (although the level of significance is different) for the analysis of summary expense classifications. Actually, analysis of summary accounts should be encouraged because there are some cost items for which analysis by performance or operation is either not possible or not practical, but for which some review is desirable. It will be seen later, however, that these monthly and yearly analyses of entire expenditure classifications serve mainly to review the adequacy of control rather than to actually control costs.

A serious limitation of the Bierman, Fouraker, and Jaedicke example, along with most of the examples of the writers cited in Appendix A, is that they apply the control procedures at a level where control is too late and where off-set and average-out problems enter. Professor Ferrara17 contends that failure to identify the various levels where these techniques should be applied, and failure to indicate their usefulness at each level, has contributed to the delay of acceptance of statistical procedures for variance analysis.

17. Discussion, April 10, 1966, with William L. Ferrara, Professor of Accounting at the Pennsylvania State University.

Value of the Alternative Parameter

Bierman avoids mention of the restraint imposed by selection of an alternative parameter. Instead, he implicitly assumes that the $3,000 deviation, if significant, pertains to a parameter that is exactly $3,000 more than the budget. This assumption, if true, would certainly be a coincidence. Part of any variance, whether or not it is significant, is due to chance. In the example cited by Bierman, the $3,000 variance, if significant, is not restricted, as he assumes, to a $13,000 parameter. Just one of an infinite number of possibilities concerning a $13,000 actual cost is that the parameter is $11,000. In this case $1,000 ($11,000 - $10,000) of the variance is due to chance.

Because Bierman implicitly assumes that the alternative parameter coincides with the actual results, the alternative parameter depends upon the size of the variance, which in turn causes Pc to depend upon the size of the variance. This explains why Pc increases as the size of the unfavorable variance increases, as illustrated in Figure 5.
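A short computation, with hypothetical dollar figures, traces this dependence. The sketch below is illustrative only and is not part of the original text; following the example, L is taken as three times the observed deviation while C remains $40.

# Illustrative sketch: Pc rises with the size of the unfavorable variance
# when L is tied to the observed deviation (here L = deviation x 3).
cost_investigation = 40
pv_factor = 3

for deviation in (500, 1_000, 3_000, 6_000):
    loss = deviation * pv_factor
    p_critical = (loss - cost_investigation) / loss
    print(f"unfavorable variance ${deviation:>5}: Pc = {p_critical:.4f}")

# The resulting Pc values climb toward 1 as the variance grows, which is the
# critical path sketched in Figure 5 and the dependence criticized in the text.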
77 Time Interval before De- tection of Inefficiency An off-standard condition not detected at the end of the first year could be detected at the end of the second or third year in which event it would not proceed into the fourth year as Bierman assumes. Actually, the probability that it would continue into the fourth year is only .0310. This is calculated by the following pro— cedure: 1. Re—calculate PC with L equal to $3,000. This gives PC equal to .987--on1y slightly less than the .996 obtained with L equal to $9,000. 2. Compute the upper control limit corresponding to the revised PC. The result is $10,090. 3. Use the upper control limit to calculate the prob— ability of making a Type II error given the alter— native parameter $l3,000. This probability is .3139. 4. Take the third power of the probability of making a Type II error. The result is .0310. Bierman's introduction of the present value ap- proach into variance control is commendable; but, on bal- ance, it appears that his example assuming arbitrarily that the inefficiency would last for four years is not well founded. It should be emphasized that there are no 78 right or wrong values to use for L. From the probabilities, it appears that L should fall somewhere in the range be- tween $3,000 and $9,000. Since both of these values are so high in relation to C, the actual value selected for L within this interval will not greatly effect Pc' (It has already been seen that PC is .996 and .987 respectively when the corresponding L's are $9,000 and $3,000 respec- .tive1y.) In an analysis of individual performances, how- ever, where the difference between L and C is not so great, the value of L will have a larger influence on the value of Pc' In the next chapter L will be estimated by first estimating the pOpulation variance (i.e., the difference between the standard and the alternative parameter). This estimate of the pOpulation variance will be weighted by the probability of failing to detect the change after n number of analyses. One other possibility that Bierman's analysis failed to consider is that this cost expenditure would not be restricted to a yearly analysis. The inefficiency could, therefore, be detected by monthly or weekly analy— ses or by the analyses of individual performances. This ‘ extra consideration further reduces the probability that I. the inefficiency would continue for four years. Inconsistency between Interpre- tation of P and its Calculation In presenting their conditional cost table, Bier- man, Fouraker, and Jaedicke define P as "the probability 79 of an unfavorable deviation resulting from uncontrollable 18 This is the same thing as saying [chance] causes." that P is the probability that the hypothesis is true given the occurrence of an unfavorable variance. Their calculation of P [or P(B/A) in the numerical example just cited], corresponds to an earlier interpretation which differs substantially from the above interpretation. In their numerical examples they calculate P by converting the variance into standard units and using the table of normal curve areas. They correctly interpret this as "the probability of a deviation this large or larger oc- . "l9 curring from random causes. Although the wording is similar, the method used to calculate P assumes that random or chance causes are prevailing. "P" then, is the probability that a deviation at least as large as that observed would result from the chance population. 
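As an aside, the four-step calculation given earlier in this section can be verified with a short sketch. The fragment below is illustrative only; it uses the normal distribution assumed throughout Bierman's example, and small differences from the figures in the text (for example, an upper control limit near $10,100 rather than $10,090) reflect rounding in the normal-curve tables.

from statistics import NormalDist

std_normal = NormalDist()
budget, sigma = 10_000.0, 6_000.0
invest_cost = 40.0
alt_parameter = 13_000.0

# Step 1: recompute Pc with L equal to one year's deviation, $3,000.
L = 3_000.0
p_c = (L - invest_cost) / L                        # about .987

# Step 2: the upper control limit is the expenditure at which P, the
# conditional probability of a chance deviation this large, just equals Pc;
# the one-tailed normal area must therefore equal Pc / 2.
z_limit = std_normal.inv_cdf(1.0 - p_c / 2.0)
ucl = budget + z_limit * sigma                     # roughly $10,090 - $10,100

# Step 3: probability of a Type II error if the parameter has shifted to
# $13,000: the chance that an observation from the shifted distribution
# still falls below the upper control limit.
beta = std_normal.cdf((ucl - alt_parameter) / sigma)   # about .31

# Step 4: probability of failing to detect the shift three years running,
# so that the condition persists into the fourth year.
print(f"Pc = {p_c:.3f}, UCL = ${ucl:,.0f}, beta = {beta:.4f}, beta^3 = {beta ** 3:.4f}")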
The interpretation of P in the Con— ditional Cost Table places the probability on whether the deviation came from the chance population (resulting in a true hypothesis) as Opposed to coming from one of the as- signable cause populations. In order to determine "the probability of an un- favorable deviation resulting from uncontrollable causes," l8Bierman, Fouraker, and Jaedicke, 121. 19Ibid., 113. 80 Leo McMenimen correctly contends that it would be necessary to know the following two values: (He assumes a $500 un— favorable deviation.) a = the number of times we have observed a $500 cost deviation due solely to uncontrollable factors. b = the number of times we have observed a $500 cost deviation. The ratio a/b would then be an estimate of P and the ratio b—a/b an estimate of l—P, as P and 1—P are interpreted in the Conditional Cost Table (Table 6). In an effort to clarify the distinction between these two interpretations, Leo McMenimen portrays a hy— pothetical company that he assumes never has and never will experience an assignable cause. The probability dis- tribution of all results would, then, be due solely to chance causes. Assume that a given cost variance is $500 and that by the method of converting to standard units and using the table of normal curve areas one gets P = .3. The proper interpretation of P associated with this calcu— lation is that .3 is the probability that a deviation this large or larger will occur from chance causes. In other words, 30 per cent of all chance unfavorable deviations are larger than $500. Bierman, Fouraker, and Jaedicke's second interpretation is that .3 is the probability that this unfavorable deviation results from chance causes. 20McMenimen, 60. 81 Since, in this hypothetical case, all variances are due to chance, the probability that an unfavorable deviation will result from chance causes must equal one. Cost of Control McMenimen observed that Bierman, Fouraker, and Jaedicke did not incorporate the cost of control into their analysis. They merely assumed that if you investigate the deviation and de- termine its cause, you can take corrective action with- out additional cost. This might be true in some cases, but probably not in all cases.21 This observation is of interest because there may be times when an assignable cause creates such a slight change in the parameter that it is not worth the cost of correcting. The logic behind the relevance of this recogt nition to the placement of the control limits is that there is no value in incurring the investigation cost to detect an assignable cause that one does not intend to correct. It will be seen shortly that the cost of control fits nicely into the McMenimen model to provide the deci- sion maker with the eXpected value of his decision. It would, however, be difficult to incorporate this cost into the Bierman, Fouraker, and Jaedicke model because one can- not know the cost of correction until he knows the assign— able cause. This, however, is determined by an investiga- tion. 21Ibid., 60. 82 While the cost of control is an important considera— tion, it is this writer's Opinion that this cost is more relevant in establishing the standard than in determining the significance of results. Most standards could be re— duced by incurring more cost. This additional cost could take the form of more employee instruction, more time and motion study to facilitate greater efficiency, or the policy of hiring more highly skilled workers. 
The fact that a standard is set at a given level implies that it is worth reducing to that level but that further reduction is not profitable. Now if the parameter shifts, it should be profitable to re-establish the standard, if its level was profitable in the first place. The Cost of an Investigation The use of C as a constant for both events in Table 6 is questionable. If the hypothesis is false, rem jection is a correct decision. An investigation, in this case, would be continued only until the particular assign- able cause is determined. If, on the other hand, the hy- pothesis is true, an investigation would proceed until all potential causes were checked. Only then could one be réaSonably certain that there were no assignable causes and that a Type I error was committed. The logic of this reasoning leads to the conclu— sion that the value of C is higher if the hypothesis turns It 83 out to be true than if it is false. Moreover, there is no unique cost of an investigation associated with a false hypothesis because some assignable causes can be detected more readily than others. Consequently, the use of C as a constant leads to questionable results from application of the Bierman, Fouraker, and Jaedicke model. Another problem which should be made explicit is that C is more appropriately an Opportunity cost concept related to the use of the investigator's time since a change in C is probably non-existent given a salaried in- vestigating staff. On the other hand C as an opportunity cost is relevant in making an investigative decision be— cause it is important that investigators spend their time in the most profitable endeavors. If one is spending his time in one way, he cannot be spending it in some other way. Evaluation In spite of the foregoing critique, the Bierman, Fouraker, and Jaedicke model has much to commend it. Not only did these authors deviate from traditional variance analysis by recognizing the probability of a chance de- viation being at least as large as that observed (their interpretation corresponding to their calculation of P); but they were the first, known to this writer, to incor- porate "the cost of investigation and expected benefits 84 of investigation explicitly into the analysis of cost variances."22 Third, they considered the expected value of future inefficiencies rather than just the cost inef- ficiency of one experiment. Finally, their second inter- pretation that P is the probability that the hypothesis is true, that is, that the deviation is the result of chance causes, would be useful information. Their limi- tation in this approach is that this is not the probability found by their calculations. McMenimen Approach McMenimen considered recognizing the possibility of more than two possible acts and two possible events. He wrote: . . . it is possible to spend various amounts for the investigation of cost deviations before we either: 1. determine the cause of the COSt deviation and the measures necessary to prevent its recur- rence, or 2. designate the cost deviation as uncontrollable [i.e., due to chance]. We might also realize that the cost deviation may be reduced by various amounts depending upon how much control is exerted.2 McMenimen's technique to handle more than two com- binations of acts and events is shown in Table 7. For simplicity this table shows only three combinations of acts and events; but the approach can be adapted to con- sider any number of acts and events. 22Bierman, 23. 23McMenimen, 60. 
Notice that the analysis is prepared for a deviation of a specific size, $50 in this case. Savings consists of the "present value of the difference between the amount of the deviation eliminated and the cost of corrective action."24

If the decision-maker selects act A1 and does not investigate, it is obvious that nothing will be saved through the exertion of more control; therefore the probability of event E1 given A1 must be 1.00 (i.e., P(E1/A1) = 1.00). If act A2 is selected, there is a .5 probability that either the deviation is due to chance or that a $10 investigation is not sufficient to discover the assignable cause. Therefore, P(E1/A2) is .5. The other P(E1)'s may be interpreted in a similar manner. Notice particularly that the probability of saving $0 decreases as the amount spent investigating increases because the probability of overlooking an assignable cause decreases as the investigation becomes more extensive. If $20 is spent on an investigation instead of $10, there is a 20 per cent greater opportunity of detecting an assignable cause which, if corrected, would enable the savings of $10.

The expected value is highest for act A1; therefore, an investigation would not be undertaken for a $50 deviation.

24Ibid., 63.

[TABLE 7.--McMenimen's illustration (given: a $50 deviation). The table, taken from McMenimen, lists for each of three acts--(A1) spend $0 investigating, (A2) spend up to $10 investigating, and (A3) spend up to $20 investigating--the probability, the conditional value, and the expected value of each of the events (E1) save $0, (E2) save $10, and (E3) save $20. The body of the table is not legible in this copy; as noted in the text, the expected value is highest for act A1.]

Consideration of more than two acts introduces the idea that it might be profitable to begin an investigation but that it may also be profitable to terminate it short of completion. This idea is analogous to a decision to spend no time looking for a badly worn golf ball which has gone into the rough, but to spend up to ten minutes looking for a new ball costing $1, and to spend up to twenty minutes looking for a new ball costing $2. The golfer may stop searching for a dollar golf ball after ten minutes not solely because it is not worth another five or ten minutes to find a dollar ball but also because he may subconsciously assign a low probability to his finding it in another five or ten minutes.

McMenimen's suggestion that it may be profitable to terminate an investigation short of finding the cause is a good one conceptually, but in this writer's judgment it would not be feasible in practice unless:

1. The cost of an investigation is very high in relation to the present value of expected savings.
2. The cost of control is so high that no action would be taken even if the cause were determined.
3. The probability that the variance is attributed to an assignable cause other than those already investigated is very low.

The first of the above items is not likely to hold for analyses at the performance or operational levels, although it may hold for monthly or yearly analyses at a departmental or higher organizational level. With regard to the second item, it has already been noted that if it is worthwhile to establish a certain standard in the first place, it would be worthwhile to re-establish it unless conditions have changed, in which case the standard should be revised. The third case may indeed frequently result.
This will be illustrated in the example in Chapter VI. The McMenimen technique would be clearer if it in— cluded a comprehensive numerical model to illustrate pre— cisely how each value is derived. As it stands, some as— sumptions implied but not specifically stated by McMenimen must be set forth in order to employ his approach. The first of these assumptions concerns the determination of the values for the various amounts to be saved. The only directive given by McMenimen is his statement that savings consists of the "present value of the difference between the amount of the deviation eliminated and the cost of corrective action."25 When he reviewed the Bierman, Fouraker and Jaedicke approach, McMenimen neither noted nor attacked their selection of an alternative parameter as being equal to the actual result. Moreover, McMenimen did not relate his savings values to the parameters per- taining to specific assignable causes. His suggestion that "the_cost deviation may also be reduced by various 251bid., 63. Il- 89 amounts depending upon how much control is-exerted" im- plies that the mean (parameter) of an off-standard condi— tion might profitably be reduced.but to some value still greater than the standard. However, it has.a1ready been noted that if the standard was realisticein.the first place, it should be profitable to re—establish.it unless conditions have changed to the extent that a revised stand— ard is indicated. The second assumption that must be made eXplicit in order to employ the McMenimen approach concerns his derivation of P. McMenimen said, "In order to obtain these values we would have had to either investigate all cost deviations (including cost deviations equal to zero) for a period of time, or sample all cost deviations for a period of time."26 From this, it appears that McMenimen desires probabilities similar to the second interpretation used by Bierman, Fouraker, and Jaedicke. The following interpretation would apply to P(El) = 0.50 which corre— sponds to act (A2) and event (E1) in Table 5: Given the occurrence of a $50 deviation and the fact that up to $10 is spent on an investigation .5 is sum of (l) the proba— bility that the hypothesis is true27 and (2) the probability 26Ibid., 63. 27 This is really the probability of a Type I error if an investigation is undertaken. The fact that P(El) which corresponds to act (A ) and event (E1) is 0.30 means that the probability that the hypothesis is true given a $50 deviation is at most .30. 90 that a $10 investigation is insufficient to find an assign— able cause.28 The probability of detecting an assignable cause by spending up to $10 investigating is 0.50 (1 - 0.50L This writer does not understand how McMenimen could allo— cate this 0.50 probability between events (E2) and (E3) without specifying assignable causes and their parameters (means). With this information, the interpretation, given a $10 investigation, would be: (1) 0.30 is the probability of detecting an assignable cause which makes possible the saving 0f $10 and (2) 0.20 is the probability Of detecting another assignable cause which makes possible the saving of $20. There is a difficulty in implementing the proce— dure that McMenimen suggests. An analysis like that shown in Table 7 would have to be undertaken for each possible cost deviation. McMenimen, himself, points out that an extreme amount of information is needed for this technique and that this information must be constantly revised. 
Moreover, for some specific sized deviation there may be very few Observations so that the probabilities assigned to the events would be largely a matter of guess. How— ever, the application of this approach into the 28This is the probability of a Type II error. It is at least .20 (.50 — .30) because 20 per cent of the time when the deviation is $50 an assignable cause can be de- tected by spending an additional $10 investigating (i.e., by spending $20 on an investigation instead Of $10). Table 7 is not sufficiently detailed to determine the probability of detecting an assignable cause if more than $20 is spent on an investigation. —¥—l 91 comprehensive numerical illustration of Chapter VI in which assignable causes and their parameters are identi— fied and an investigation procedure is established reveals several interesting insights into variance control. Conclusions Of the three approaches to statistical variance con— trol that were evaluated in this chapter, the Basic Con- trol Chart approach is the easiest to apply. All that is needed is the probability distribution of chance perform— ances and some basic knowledge of probability statistics. This approach is, however, limited because it does not consider the economic aspects of decision making as the other appraoches have attempted to do. The strengths and weaknesses as well as the similarities and differences Of these approaches will become clearer in Chapter VI when they are all tested for their adequacy in variance control. CHAPTER V STATISTICAL CONTROL TECHNIQUES-- TWO MORE-REFINED METHODS In Chapter IV, three statistical control techniques were evaluated. These techniques were identified as (l) the Basic Control Chart approach, (2) the Bierman, Fouraker, and Jaedicke approach, and (3) the McMenimen approach. Cer— tain limitations were noted for each of these approaches. In an attempt to improve upon these limitations, this writer has applied two more approaches to variance control. The first is called an Equalization approach. The other method is referred to as the Minimization Approach because it mini- mizes the expected opportunity costs. An Equalization Approach This approach establishes the control limits at those points where the probability of committing a Type I error times the opportunity cost assocaited with a Type I error is exactly equal to the probability of committing a Type II error times the opportunity cost associated with a Type II error. The probability of committing a Type I error and the probability of incurring a Type II error for a specified alternative parameter were developed in 92 93 Chapter IV in conjunction with a cow butchering illustra- tion. These probabilities are shown in Tables 4 and 5 and in Figure 4. Accordingly, it is now appropriate to con- sider the opportunity costs associated with each type of error. Opportunity Cost of a Type I Error The opportunity costs of a Type I error have two aspects. One aSpect is associated with the cost of an investigation which could be saved if the best act-~not to investigate--were chosen for the event which occurred (the cause system has not changed). This cost is an op- portunity cost because those making the investigation would normally be salaried. As an opportunity cost it is no less relevant, however, because it is important that salaried employees spend their time in the most profitable ways. 
Of course, if an increased number of such errors were incurred,at some point an additional supervisor would have to be added to make the additional investiga- tions. An investigation may take varying lengths of time depending on the cause of the variance. The longest time would be spent investigating a chance cause (which gives rise to a Type I error) because each other possible cause would be checked—out before the investigator could be rea— sonably sure that he had made a Type I error. 94 The other cost aspect associated with a Type I error is the cost of employee ill-will engendered by the implication that an employee is not performing according to standard. This cost is difficult to determine, but it can be greatly reduced by an educational program de— signed to explain the purpose of standards, control charts, sampling, and sampling errors. If employees understand that their wages ultimately depend on the success of the control prOgram, greater cooperation can be elicited. Qpportunity_Cost of a Type II Error The Opportunity cost of a Type II error is also composed of two elements. If the performance comes from an alternative parameter which is unfavorable the cost consists of the worker's time which could be used more productively if the assignable cause could be detected and corrected. The other cost element of a Type II error concerns performances from favorable alternative parameters. In these cases, the cost involves wastes incurred by delays in revising the standard. This cost element is difficult to determine. The reader will recall that a hypothetical example involving the time required to butcher a cow was used in Chapter IV in conjunction with an explanation of the control chart approach. The standard for this opera- tion was 245 minutes. Now, if because of a favorable 95 assignable cause the standard could be reduced to 240, the total manufacturing cost would be less than if the standard is 245. If, however, this change in the cause system is not detected, and Table 5 indicates that it will not be 88.59 per cent of the time with a .05 level of significance, the firm has no assurance that the meat cutter is not capable of performing at 235 or 230. Indeed, he may be aware of his increased skill and decide to reap the rewards by taking his time. An incentive plan for those who better their performance would partially elimi- nate this problem and make improvement detection easier; but there are still those who would prefer to work slower even at the eXpense of less pay. Therefore, it is impor— tant to determine a lower control limit so that improve— ment may be detected and the standard revised. An estimate of the opportunity cost of failing to detect favorable changes in the cause system is a factor involved in deter— mining an Optimum lower control limit. Quantification of the Costs of a Wrong Decision For illustrative purposes, the Opportunity costs of error will be quantified by continuing with the cow butchering example. The investigation cost associated with a Type I error can be computed by: 1. Determining the time required to run through the cOmplete list of procedures before chance, the residual cause, can be agreed upon. 9" [r fi—_——————L ‘.—.—.-.-. 1..— 96 2. Converting this time into a dollar figure by taking an apprOpriate portion of the investigator's salary. In this case, assume that (1) it takes one hour to run through a list of procedures before chance, the residual cause, can be agreed upon and (2) the prorated salary of the investigator is $5 per hour. 
The cost of a complete investigation is, then, $5. Assume,.in this example, that educational programs have resulted in negligible employee ill-will associated with an investigation. The cost of a Type I error, therefore, is $5. The opportunity costs of a Type II error associated with an unfavorable change in the cause system are deter- mined by: (l) dividing the difference between the standard and the alternative parameter by 60 to convert the differ- ence into an hourly fraction and (2) multiplying this hourly fraction by the hourly wage. In this case it is assumed that the butcher receives $3 per hour. For the illustrative purposes of this problem, it has been assumed that the Opportunity costs of a Type II error for favorable changes in the cause system are the same as the costs for equivalent unfavorable changes in the cause system. That is, the opportunity cost of a Type II error for a 240 parameter, representing a five minute favorable change in the cause system (245 — 240), is the same as the Opportunity cost of a Type II error for a 250 parameter, representing a five minute unfavorable change in the cause system (245 — 250). It 97 The opportunity costs of Type I and Type II errors are shown in Table 8 for various population means. Notice that a Type I error is made only when the population mean is 245. Since a Type I error is that error of rejecting a true hypothesis, it can be made only when the hypothesis is true. Moreover, a Type II error, that of accepting a false hypothesis,‘can be made only when the hypothesis is false. Therefore, it can be incurred for all non—chance parameters other than 245. Notice that the Opportunity cost of a Type II error increases as the change in the cause system increases. That is, as the pOpulation mean moves away from the standard in either direction, the op— portunity cost of a Type II error increases. TABLE 8.-—Opportunity costs of a wrong decision for various population means . Op. Cost Op. Cost POpulatlon of Type I of Type II Mean Error Error 210 $1.75 215 1.50 220 1.25 225 1.00 230 0.75 235 0.50 240 0.25 244 0.05 245 $5 246 0.05 250 0.25 255 0.50 260 0.75 265 1.00 270 1.25 275 1.50 280 1.75 It 98 In contrast to this, Table 5 shows that the proba- bility of making a Type II error decreases as the population mean moves away from the standard. This means that as the change in the cause system becomes greater the probability of detecting the change, and thus avoiding a Type II error, also becomes greater. Determining the Control Limits Information to aid in the determination of the con— trol limits has been marshalled in Table 9. The opportunity costs of a wrong decision are the same figures that were derived in Table 8 except that they are not identified as to the type of wrong decision. It is understood that the opportunity cost associated with parameter 245 pertains to the Opportunity cost of a Type I error and that the oppor— tunity costs of the other parameters represent the oppor— tunity costs of Type II errors. The figures appearing under the columns entitled "Prob. of Wrong Decision" were taken from Table 5 with the following exception. The probabilities listed in Table 5 are the probabilities of a Type 11 error; but those listed in Table 9 are the probabilities of a wrong decision whether it be a Type I or Type II error. 
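A brief sketch of the cost figures just derived may be useful before Table 9 is examined further. The fragment below is illustrative only and assumes nothing beyond the $5 investigation cost and the $3 hourly wage stated above.

# Reproduces the opportunity costs of a wrong decision shown in Table 8.
standard = 245           # standard minutes to butcher a cow
wage_per_hour = 3.0      # butcher's hourly wage
type_one_cost = 5.0      # one hour of the investigator's $5-per-hour time

population_means = [210, 215, 220, 225, 230, 235, 240, 244, 245,
                    246, 250, 255, 260, 265, 270, 275, 280]

for mean in population_means:
    if mean == standard:
        # Only a true hypothesis (mean = 245) can produce a Type I error.
        print(f"mean {mean}: Type I error cost  ${type_one_cost:.2f}")
    else:
        # Lost (or wasted) minutes converted to a fraction of the hourly wage.
        cost = abs(mean - standard) / 60.0 * wage_per_hour
        print(f"mean {mean}: Type II error cost ${cost:.2f}")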
Accordingly, the proba- bilities in Table 9 corresponding to the standard, or the 245 parameter, represent the probabilities of committing a Type I error——.05 for the .05 level of significance and .01 for the .01 level of significance. The probabilities 99 corresponding to each of the other parameters represent the probabilities of committing a Type II error. Conditional opportunity costs for.each population mean and level of significance are Obtained by multiplying each Opportunity cost Of a wrong decision by.its probability of making a wrong decision. With the exception of the fig— ures for the standard, these conditional opportunity costs represent, for each specified parameter, the expected Op- portunity cost of making a Type II error. For the standard, the conditional opportunity cost represents the expected Opportunity cost of making a Type I error. As a result of the interaction of the decreasing probability of a Type II error and the increasing Oppor— tunity cost of a wrong decision, the conditional average Opportunity costs increase at first and then decrease as the alternative parameter moves further away from the standard in either direction. For parameters other than the standard, the con— ditional average Opportunity costs are higher for the .01 level of significance than for the .05 level because, of course, the probability Of a Type II error is higher with the .01 level. For the standard, the conditional average Opportunity cost is higher for the .05 level. The conditional average opportunity cost figures are helpful in several ways. First, they help to determine ll ' II when a change in the cause system becomes serious. 100 For both the .05 and the .01 levels of significance, the highest conditional average Opportunity costs occur for alternative parameters 230 and 260. Therefore, 230 and 260 will be specified as the alternative parameters in de- termining the control limits. Both of these parameters correspond to a 15 minute change in the cause system. Second, it will be seen that the conditional average Op— portunity cost figures aid in determining the desired level of significance for any specified parameter. TABLE 9.——Conditional average opportunity costs Level of Significance Op. Cost .05 .01 u Bf Wrong Prob. of Cond. Prob. of Cond. eClSlon Wrong Ave. Wrong Ave. Decision Op. Cost Decision Op. Cost 210 $1.75 .0066 $.0116 .0314 $.0550 215 1.50 .0314 .0471 .1075 .1612 220 1.25 .1075 .1344 .2676 .3345 225 1.00 .2676 .2676 .5000 .5000 230 .75 .5000 .3750 .7324 .5530 235 .50 .7314 .3657 .8925 .4462 240 .25 .8859 .2215 .9685 .2421 244 .05 .9352 .0468 .9862 .0493 245 5.00 .05 .25 .01 .05 246 .05 .9352 .0468 .9862 .0493 250 .25 .8859 .2215 .9685 .2421 255 .50 .7314 .3657 .8925 .4462 260 .75 .5000 .3750 .7324 .5530 265 1.00 .2676 .2676 .5000 .5000 270 1.25 .1075 .1344 .2676 .3345 275 1.50 .0314 .0471 .1075 .1612 280 1.75 .0066 .0116 .0314 .0550 For review, the Equalization control limit occurs at that value where the probability of committing a Type I 101 error times the Opportunity cost of committing a Type I error just equals the probability of committing a Type II error times the opportunity cost of committing a Type II error. More simply, this can be expressed by saying that the Equalization control limit occurs at that value where the conditional average Opportunity cost of a Type I error equals the conditional average Opportunitity cost of a Type II error. 
The word "conditional" is used because these costs are conditional on u- The value where this equality occurs is located by trial and error. First, a level of significance is randomly selected near the value which the analyst expects to be the control limit. This level is, of course, related to a set of values which are being tested to see if they are the Equalization control limits. The level of significance can be converted to the values being tested for control limits by referring to the probability distribution of chance performances in Table 3. For example, if .01 is chosen as the level of significance, the values being tested for control limits are 225 and 265 because Table 8 shows .005 of the chance performances to be less than 225 and .005 to be more than 265 (.005 + .005 = .01). The next step in the test is to see if the condi- tional average opportunity cost of a Type I error is equal to the conditional average opportunity cost of a Type II error for the level of significance or performance values 102 being tested for a specified alternative parameter. For testing the .01 level of significance, the reader can verify from Table 9 that the conditional average Oppor— tunity cost of a Type I error is $.05. This appears Opposite the population mean 245--the only value for which a Type I error could be made. The conditional average Opportunity cost of a Type II error for alter— native parameters 230 and 260 is $.5530-—the number correSponding to each of the parameters 230 and 260. Clearly .01 is not the desired level of significance (225 and 265 are not the Equalization control limits) be- cause $.05 is not equal to $.5530. In testing for sig— nificance, the analyst should reject the hypothesis for either the value 225 or the value 265 and run the risk of incurring a Type I error because the conditional average opportunity cost of a Type I error, $.05, is lower than the conditional average opportunity cost of a Type II error, $.5530. A Type I error is the only type of error that can be made if the hypothesis is rejected. A Type II error is possible only when the hypothesis is accepted. Since .01 is not the desired level of significance, another level must be tested. The direction of the ap— propriate level can be determined by the following line of reasoning. The desired level of significance deter- mines the value of control limits which divide the area of hypothesis acceptance from the area of hypothesis rejection. If the test for a given level of significance ~s-aq 103 shows the conditional average Opportunity cost of rejec— tion to be lower than the conditional average Opportunity cost of acceptance, the best act is to reject the hypothe— sis for performance values corresponding to the level being tested. The performance value correSponding to this level should fall clearly in the area of rejection. In order to move toward the boundary, a larger level Of significance is necessary. This explanation can be visualized by reference to Figure 6 in which the shaded area represents the region of rejection. The boundaries marked with LCL and UCL represent performance values for which one would be just indifferent between the acts of rejection and ac— ceptance. That is, they represent the Equalization con— trol limits; but their values are unknown. 
If, however, the conditional average opportunity costs associated with the level of significance being tested indicate hypothesis rejection, the corresponding performance value falls in the shaded region and a move toward the boundary involves a larger level of significance.

[FIGURE 6.--Diagram indicating direction of desired level of significance. The shaded tails of the distribution, below LCL and above UCL, represent the region of rejection.]

Since .05 is a larger level of significance and since the necessary information appears in Table 9, .05 will be tested for the desired level of significance. Table 3 shows 230 and 260 to be the corresponding performance values (.025 of the performances are lower than 230 and .025 are higher than 260). Table 9 indicates that the conditional average opportunity cost of a Type I error for a .05 level of significance is $.25. The conditional average opportunity cost of a Type II error for alternative parameters 230 and 260 is $.3750. The .05 level is still lower than that required because it is still cheaper on the average to reject the hypothesis for test values 230 and 260 and run the risk of incurring a Type I error.

It might now be appropriate to test the .07 level since it can easily be seen that for this level the conditional average opportunity cost of a Type I error is $.35 (.07 x $5). To find the conditional average opportunity cost of a Type II error for alternative parameters 230 and 260 it is necessary to find the control limits corresponding to the .07 level of significance and to use these to calculate the probability of a Type II error. Since the probability distribution in Table 3 is not sufficiently detailed to permit reading these control limits directly from the table, they must be computed by assuming that the distribution is normal. Solution of the following formula yields a lower control limit of 230.4 and an upper control limit of 259.6:

Z = [LCL (or UCL) - μ] / σ

where: Z for LCL = -1.81, Z for UCL = +1.81, μ = 245, and σ = 8.06.

The probability of making a Type II error for alternative parameters 230 and 260 is .48. This probability is computed in the same manner as illustrated in Figure 4. When .48 is multiplied by the $.75 opportunity cost of a Type II error, the $.36 conditional average opportunity cost of acceptance results.
Since this is so close to the $.35 conditional average opportunity cost of rejec— tion, one can conclude that the Equalization level of significance is just slightly higher than .07. The foregoing analysis is Offered as evidence that the most desirable level of significance does not always fall in the .05 to .001 range as is generally assumed in the Basic Control Chart approach. There is, however, one questionable aspect to the procedure just discussed. The opportunity costs of a Type II error are understated because an off-standard per— formance not detected on its first occurrence will extend Opportunity costs into the future until the change in the cause system is detected and corrected. The reader will 106 recall that Bierman's solution to this problem was to calculate the present value of an inefficiency which he arbitrarily assumed would continue for four years.1 Since the cow butchering example pertains to individual per— formances rather than to yearly reports, the calculation of present values is not important because each perform- ance is being tested at the present time. It could be assumed that an inefficiency would continue for four per— formances before being detected; but since four is an ar- bitrary value, a more scientific approach is illustrated in Table 10. The purpose of this table is to develop a more realistic Opportunity cost of a Type II error associated with parameters 230 and 260--flmnxa realistic, that is, than the $.75 shown in Table 9. Column A represents the number of successive failures to detect a change in the cause system. The $.75 Opportunity cost in column B rep— resents the opportunity cost of failing to detect a change in the cause system from a mean of 245 to a mean of 230 or 260 on its first occurrence. The other figures in column B increase successively by $.75 for each additional failure to detect the change. The numbers in column C show the probability of failing to detect the assignable lSee asumption five under the Bierman, Fouraker, and Jaedicke presentation in Chapter IV. This assumption was later questioned in the section entitled "Time Inter- val Before Detection of Inefficiency." 107 cause after the number of occurrences shown in column A. That is, the probability of failing to detect the change on its first occurrence is .5. The derivation of this probability was originally eXplained in conjunction with Table 4. The probability of failing to detect it on its second occurrence is .5 squared or .25. The probability of failing to detect the change on its third occurrence is .5 cubed or .125. The other figures in column C are determined by taking the power of .5 corresponding to the values in column A. As indicated on Table 10, column D results from multiplying the values in column B by those in column C. Only the summation of the values in column D is shown since it is the only value used in subsequent calculations. This summation of $1.456650 is divided by the summation of the probabilities, .9960, in column C in order to get $1.4625 as the Opportunity cost of a Type II error. This value considers the fact that it takes on the average ($1.4625/$.75) iii; tests to detect an as- signable cause once it has occurred. Now that the Opportunity cost of a Type II error has been increased, the Equalization level of significance will be increased. The increased opportunity cost of a Type II error increases the conditional opportunity cost of a Type II error. 
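The weighting just described can be sketched in a few lines. The fragment below is illustrative only; it takes the accumulated costs of column B and the probabilities of column C of Table 10 and reproduces the weighted cost to within rounding.

# Column B: accumulated opportunity cost after n successive failures to detect
# the change, as shown in Table 10.
accumulated_costs = [0.75, 1.50, 2.25, 3.00, 3.75, 4.00, 4.75, 5.50]
# Column C: probability of failing to detect the change n times in succession.
miss_probabilities = [0.5 ** n for n in range(1, 9)]

# Column D is the product of columns B and C; only its sum is needed.
weighted_sum = sum(b * c for b, c in zip(accumulated_costs, miss_probabilities))
prob_sum = sum(miss_probabilities)

print(f"sum of column D = ${weighted_sum:.4f}")                       # about $1.4570
print(f"weighted Type II cost = ${weighted_sum / prob_sum:.4f}")      # about $1.46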
In order to raise the conditional Opportunity cost of a Type I error to bring about the necessary equality, the level of significance (probability of a Type I error) must be increased because the Oppor- tunity cost of a Type I error is constant at $5. TABLE lO.——Weighted Opportunity cost of Type II error Number Accumulated Probability Column B of Tests Opportunity of tests in Times Costs Col. A Column C (A) (B) (C) (D) l .75 .5000 2 1.50 .2500 3 2.25 .1250 4 3.00 .0625 5 3.75 .0312 6 4.00 .0156 7 4.75 .0078 8 5.50 .0039 .9960 $1.456650 . _ $1.456650 weighted COSt ‘ .9960 $1.4625 As a start, a test will be made to see if .10 is an Optimum level of significance. The conditional average opportunity cost of a Type I error is $.50. This is de— termined by multiplying the probability of a Type I error, .10, by the $5 opportunity cost of a Type II error. The conditional average Opportunity cost of a Type II error assuming alternative parameters of 230 and 260 is $.6028. This is determined by multiplying the probability of a Type II error, .4127,2 by the $1.46 opportunity cost of a 2This value was obtained by the following procedure: 1. Finding lower control limit corresponding to the .10 level of significance. This value is 231.74. It is 109 Type II error. Since the conditional average Opportunity cost of a Type I error is lower than that for a Type II error, the desired level of significance is higher than .10. The level is, however, clearly less than .12 because the conditional average opportunity cost of a Type I error with a .12 level of significance is $.6000 (.12 X$5). The conditional average opportunity cost of a Type II error is $.6028 for a .10 level of significance. It must be less for a .12 level since the probability of a Type II error is less for a .12 level than for the .10 level. determined by solving the following formula for LCL: _ 245 — LCL - 1.645 — 8.06 where: —1.645 is the normal devaite correspdong to the .10 level of significance (2 tailed test) 245 is the standard 8.06 is the standard deviation of the distri— bution of chance performances 2. Assuming that the cause system changed so that the parameter is now 230 instead of 245. 3. Finding the area under the normal curve between 230 and 231.74. This is .0811determined by solving the following for Z and using the table of normal curve areas. 231.74-230 Z — 8.06 — .2159 .22 4. Finding the area under the normal curve between 231.74 and the corresponding upper control limit of 258.26. This result of .4129 is Obtained by subtracting .0871 from .5 (the area between 230 and 258.26). The probability of committing a Type II error, then, is .4129 because the hypothesis will be accepted if the test value is between 231.74 and 258.26 with a .10 level of significance. If, however, the population mean has changed to 230 acceptance would be a Type II error. The same value would be Obtained by using 260 as the alternative parameter. 110 Consequently, a test will be made to see if .11 is the appropriate level of significance. Now the conditional average opportunity cost of a Type I error is $.55 (.11 x $5). The $.5802 conditional average opportunity cost of a Type II error is Obtained by multiplying the .39743 prob- ability Of a Type II error by the $1.46 opportunity cost of a Type II error. Accordingly, the desired level of sig- nificance is higher than .11 but less than .12. The Equalization lower control limit is then between 232.12 and 232.47 and the Equalization upper control limit is between 257.88 and 257.53. 
(232.12 and 257.88 are the control limits associated with a .11 level of significance and 232.47 and 257.53 are associated with a .12 level of significance.) Effects of Changes in the Oppor- tunity COsts of a Wrong Decision and/or Changes in the Probability of a Wrong Decision If the cost of an investigation increases while everything else remains the same, the cost of false alarms becomes more costly. Consequently, a lower level of sig- nificance, giving fewer false alarms, becomes more de— sirable. For example, if the cost of an investigation increases to $7.50, the .05 level of significance would be the Equalization level because this Opportunity cost 3This value was obtained by following the same procedure outlined in footnote 2. 111 of rejection times the .05 level is just equal to the $.75 opportunity cost of acceptance times the .5 probability of making a Type II error (for alternative parameters 230 and 260) at a conditional average opportunity cost of $.375. Contrariwise, if the cost of an investigation is reduced, rejection becomes less expensive relative to acceptance, thus signaling the desirability for a higher level of sig— nificance. Moreover, if meat cutter's wages are increased, while everything else remains constant, the opportunity cost of a Type II error is increased so that acceptance is more expensive relative to rejection. Therefore, a higher level of significance is desirable. If meat cut- ter's ‘wages are reduced, the Equalization level of sig- nificance is lower by reverse reasoning. The following generalizations can be drawn from this discussion. 1. The Equalization level of significance is increased if: A. The cost of an investigation (i.e., the cost of a Type I error) is reduced. B. The Opportunity cost of a Type II error is in— creased. 2. The Equalization level of significance is reduced if: A. The cost of an investigation is increased. B. The opportunity cost of a Type II error is re- duced. 112 Comparison of the Equalization Approach with the Bierman, Fouraker, and Jaedicke Approach The Equalization approach concerned itself with an analysis of individual performances rather than an analysis of summary reports. While the Equalization approach could be applied to summary reports, it is used at the perform— ance level because control is more effective at this level without the aggregation and timing problems inherent in summary reports. The Equalization approach selected alternative parameters that could be serious-—the ones that yielded the highest conditional average Opportunity cost (see Table 9); whereas, the Bierman, Fouraker, and Jaedicke approach implicitly assumed that the alternative parameter would be equal to the actual performance value. Table 10 shows a more scientific approach toward calculating the lapsed time interval before the detection of a change in the cause system; in contrast, Bierman, Fouraker, and Jaedicke arbitrarily multiply the single performance Opportunity cost by four. The Equalization approach considers the cost of a Type I error, which is a constant, rather than the cost of an investigation, which is not a constant if an investigation is a correct decision. In reSpect to the probabilities, the Equalization approach uses both the probability of Type I and Type II errors as they are defined in Classical Statistics. 113 The first interpretation of the Bierman, Fouraker, and Jaedicke approach uses the Classical probability of a Type I error. It does not consider the probability of a Type II error. 
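The trial-and-error search described above lends itself to a short computational sketch. The fragment below is illustrative only; it assumes the normal chance distribution with a standard of 245 and a standard deviation of 8.06, the $5 investigation cost, the $1.46 weighted Type II cost, and the 230 (or 260) alternative parameter, and its figures differ from those in the text only by rounding.

from statistics import NormalDist

std_normal = NormalDist()
standard, sigma = 245.0, 8.06
alternative = 230.0            # a 15-minute shift; 260 is symmetric
type_one_cost = 5.0            # cost of a complete investigation
type_two_cost = 1.46           # weighted Type II cost from Table 10

for alpha in (0.10, 0.11, 0.12):
    z = std_normal.inv_cdf(1.0 - alpha / 2.0)          # two-tailed control limits
    lcl, ucl = standard - z * sigma, standard + z * sigma
    # Probability of a Type II error: an observation from the shifted
    # distribution falls inside the limits, so the shift goes undetected.
    beta = (std_normal.cdf((ucl - alternative) / sigma)
            - std_normal.cdf((lcl - alternative) / sigma))
    cost_reject = alpha * type_one_cost                 # conditional cost of a Type I error
    cost_accept = beta * type_two_cost                  # conditional cost of a Type II error
    print(f"alpha = {alpha:.2f}  LCL = {lcl:6.2f}  UCL = {ucl:6.2f}  "
          f"Type I cost = ${cost_reject:.3f}  Type II cost = ${cost_accept:.3f}")

# The two conditional costs cross between the .11 and .12 levels, so the
# Equalization control limits lie between 232.12 and 232.47 (lower) and
# between 257.53 and 257.88 (upper), as stated above.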
Their second interpretation of P is neither Classical nor Bayesian but may yield useful results. Bayesian Statistics Without identifying their procedure as such Bier— man, Fouraker, and Jaedicke made use of some aspects of the recently developed approach that is commonly labeled 4 This branch of statistics is so "Bayesian Statistics." named after Bayes whose theorem "specifies how a prior distribution, when combined with additional sample evi- dence, leads to a revised distribution reflecting the most current information about the unknown parameter."5 Robert Schlaifer6 combined "explicit consideration of consequences [costs] of possible wrong decisions" and decision making "on the basis of expected monetary value"7 4Birnberg [J. G. Birnberg, "Bayesian Statistics: A Review," Journal of Accounting Research, II, No. 1 (Spring, 1964), 113.] did, however, recognize the Bayesian aspects Of the Bierman, Fouraker and Jaedicke model. 5Robert Smith, "Quality Assurance in Government and Industry: A Bayesian Approach," Journal of Industrial En— gineering, XVII, NO. 5 (May, 1966), 256. 6Robert Schlaifer, Probability and Statistics for Business Decisions (New York: McGraw—Hill Book Company, Inc., 1959) and Robert Schlaifer, Introduction to Statistics for Business Decisions (New York: McGraw—Hill Book Company, Inc., 1961). 7Gerald H. Glasser, "Classical Versus Bayesian Method of Statistical Analysis," The Statistical News, XV, No. 6 (February, 1964), 3. 114 with Bayes' Theorem. This combination has been known as Bayesian Statistics although Robert Smith8 suggests that this second feature would more correctly be called the Schlaifer Method. The reader will recognize this second feature of the Bayesian approach as that adOpted by Bier— man, Fouraker, and Jaedicke. To draw a distinction between Classical and Bayesian statistics, Morris Hamburg states that in classical statistics, probability statements gen— erally concern conditional probabilities of sample outcomes given Specified pOpulation parameters. The Bayesian point Of View would be that these are not , the conditional probabilities we are usually interested in. Rather, we would like to have the very thing not permitted by classical methods--conditional probability' statements concerning pOpulation values, given sample information. It has previously been mentioned that Classical statistics estimates the probability of obtaining a chance deviation as large or larger than that observed. The hy- pothesis is assumed to be true unless this estimated prob- ability is smaller than an arbitrarily selected, and usually small, level of Significance. The Situation is somewhat analogous to a person charged with a crime who is assumed to be innocent until "proven" guilty. Regard— less of whether the hypothesis is accepted or rejected, 8Robert Smith, Journal of Industrial Engineering, XVII, No. 5 (May, 1966), 256. 9Morris Hamburg, "Bayesian Decision Theory and Statistical Quality Control," Industrial Quality Control, XIX, No. 6 (December, 1962), ll. It 115 no probability statements are placed on the truth or fal- sity of the hypothesis. This is why it was wrong for Bierman, Fouraker, and Jaedicke to calculate their prob- ability according to the classical procedure and then to interpret this as the probability that the hypothesis is true. In fact, the Bayesian approach does not even con- cern itself with the formulation of hypotheses. It be— gins by placing a probability on the existence of each parameter that might be possible. 
The resulting proba- bility distribution is known as the prior distribution. The probabilities may be assigned on the basis of past information, intuition, or a combination of the two. A sample is then taken and the sample results are used to revise the original probabilities. AS Hamburg indicates, this results in "conditional probability statements con- cerning population values, given sample information."10 The Bierman, Fouraker, and Jaedicke second inter- pretation of P corresponds to the prior distribution, al- though their P is calculated according to the Classical interpretation. They do not, however, carry through to revise these prior probabilities in light of sample in— formation. The classical statistician objects to the Bayesian assignment of probabilities to possible parameter values. 10Ibid. 116 He claims that the parameter is a constant, in spite of the fact that its value is unknown, and that the assignment of probabilities implies that it is a random variable. The Bayesian retorts that the parameter is a random variable to the statistician, if he doeS not know the value. In this regard, Gerald H. Glasser makes the following dis— tinction between Classical and Bayesian statistics. In reply to his own question "What is a random variable?" Mr. Glasser noted: Objectivist [Classical statistican]: A random variable is any sample quantity such as the sample mean the value of which will depend on the particular sample of observations that is Obtained in a study. The quantity is a random variable in the sense its value would vary from sample to sample if we repeated our random sam— pling procedure many times. Bayesian: If a decision—maker is uncertain of the value of some quantity (statistic or parameter or in- .dividual characteristic) it is a random variable to him. He may make personal probabilistic statements about the random variable. Once the value that the random variable assumes is known, it no longer is a random variable.11 Bayesian Application to Quality Control Robert Schlaifer has used an example from quality control to illustrate his application of the Bayesian ap- proach. He assumes that a manufacturer uses an automatic machine to produce a particular part in production runs of 500. After each production run, the machine is taken down llGerald H. Glasser, The Statistical News, XV, No. 6 (February, 1964), 3. 117 for the replacement of worn tools, etc., and.then is read- justed by the operator. When the machine is prOperly ad- justed, it will produce a process average fraction defec- tive of .01. The machine is not capable of doing better, but there is no mechanical reason why it should do worse. From past records, the manufacturer computes the following frequency distribution of the fraction defective resulting from adjustments by the machine Operator: Fraction Relative Defective Frequency .01 .7 .05 .1 .15 .1 .25 .l 1.0 (This is known as the prior distribution.) As an alterna- tive to having the adjustment by the machine Operator, the manufacturer can hire an expert mechanic who will always adjust the machine properly. The following information is needed to make a decision: 1. The mechanic charges $6 for each adjustment. 2. Each defective part can be reworked at a cost of $.40. 3. The operator can adjust the machine at no extra cost. (This is not, however, a realistic assump- tion; but the model could be adjusted to account for a charge.) 118 The expected Opportunity cost of each alternative must be calculated. 
That alternative which yields the lowest expected opportunity cost Should be selected. In order to find the expected opportunity costs, it iS neces- sary first to find that fraction defective at which the firm is just indifferent between the alternatives. This break-even point can be found by equating the cost of ac— cepting the operators set-up with cost of rejecting his set-up. The cost of accepting the operator's set-up is 500, the number of parts in the run, times the unknown fraction defective, P, times $.40 for re-working each defective part. (500 P represents the number of defective parts.) The cost of rejecting the Operator's set-up is the $6 cost of hiring the expert mechanic plus the quan- tity 5, the number of defective units that will inevitably result (.01 X 500), times the $.40 cost of re—working each defective unit. By equating these two costs and solving for P, one obtains the break-even point of .04 in the man- ner shown below. Cost of Operator Acceptance Cost of Operator Rejection (500 p) $.40 = $6 + (.40 x'5) 200 p = $8 P = 8 _ —55 — .04 If the manufacturer knew a priori that P on any given adjustment would be less than..04, he would allow Il- 119 the operator to make the adjustment. Contrariwise, if he knew that P would be more than .04, he would hire the me- chanic. Since he could not know this in advance, he must make his decision on the basis of expected opportunity costs. Table 11 shows how the expected opportunity costs are derived. The conditional opportunity costs of accept- ance and rejection are conditional upon the fraction de- fective. The individual values may be determined by reference to the following opportunity cost functions. Opportunity cost of acceptance Opportunity Event Cost if P 5 .04 0 if P > .04 $200 P - $8 Opportunity cost of rejection if P < .04 ' $8 — $200 9 if P ; .04 0 If the operator's set-up is accepted and if P S .04 the best decision was made for the event which actually oc— curred So there is no Opportunity cost. Conversely, if P > .04, the manufacturer would Spend 200 P by accepting the operator's set-up when he Should have Spent only $8 by hiring the mechanic. The difference when P = .05 is [$200 (.05) — $8] $2——the conditional opportunity cost It 120 of acceptance. When P is .15 and .25 respectively, the conditional opportunity costs of acceptance are $22 and $42 respectively. Now if the operator's set—up is rejected and P is less than .04, the manufacturer Spends $8 by hiring the mechanic; whereas, he only had to spend $200 P by allowing the Operator to set-up the machine. The difference when P = .01 is [$8 - 200 (.01)] $6--the conditional opportunity cost of rejection. If P S .04 and the operator's set-up is rejected, the Opportunity cost is zero because the best decision was made for the event which occurred. TABLE ll.--Expected opportunity costs of two alternatives Opportunity Opportunity Fraction Relative . . Cost of Acceptance Cost of Rejection Defective Frequency Cond. Exp. Cond. Exp. .01 .7 $ 0 $0 $6 $4.20 .05 .1 2 .20 0 0 .15 .l 22 2.20 0 0 .25 .1 42 4.20 0 0 1.0 $6.60 $4.20 The following abbreviations were necessary: Cond. for conditional Exp. for Expected The expected Opportunity costs are the result of multiplying the conditional opportunity costs by the rela— tive frequencies. In other words, the conditional figures 121 are averaged in order to find the expected ones. 
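The break-even point and the expected opportunity costs of Table 11 can be sketched briefly. The fragment below is illustrative only and restates the assumptions of Schlaifer's example as they are given above.

run_size = 500                 # parts per production run
rework_cost = 0.40             # cost of reworking one defective part
mechanic_fee = 6.00            # charge for an expert adjustment
proper_fraction = 0.01         # fraction defective after a proper adjustment

prior = {0.01: 0.7, 0.05: 0.1, 0.15: 0.1, 0.25: 0.1}   # prior distribution

# Cost of rejecting the operator's set-up is the same whatever P turns out to be:
# the mechanic's fee plus rework on the five defectives that result anyway.
cost_if_reject = mechanic_fee + run_size * proper_fraction * rework_cost   # $8

# Break-even fraction defective: 500 * P * $.40 = $8, so P = .04.
break_even = cost_if_reject / (run_size * rework_cost)
print(f"break-even fraction defective = {break_even:.2f}")

expected_accept = expected_reject = 0.0
for p, prob in prior.items():
    cost_if_accept = run_size * p * rework_cost            # $200 P
    expected_accept += prob * max(0.0, cost_if_accept - cost_if_reject)
    expected_reject += prob * max(0.0, cost_if_reject - cost_if_accept)

print(f"expected opportunity cost of always accepting = ${expected_accept:.2f}")  # $6.60
print(f"expected opportunity cost of always rejecting = ${expected_reject:.2f}")  # $4.20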
Since the expected Opportunity cost of rejecting the operator's adjustment is less than the expected opportunity cost of accepting it, the manufacturer would hire the mechanic if he had to make a decision to either always hire the mechanic or always accept the operator's adjustment. Fortunately, it is possible to reduce the Oppor- tunity costs still further by following the procedure in- dicated below: 1. Allow the operator to adjust the machine. It is assumed that this can be done at no extra cost to the manufacturer. 2. Take a sample of the first n pieces. 3. Record the number of defectives, r. 4. Make a decision on the basis of the following rules: A. If r 3 some pre-determined number, C, reject the Operator's adjustment and call in the me- chanic. B. If r < C, accept the Operator's adjustment. Schlaifer begins by holding n constant at 20 and uses the probabilities and the opportunity costs to arrive at the best rejection number, C. The relevant information has been marshalled in Table 12 for rejection numbers one to three which fall in the relevant range. The table in- dicates that two is the best rejection number because its 122 expected opportunity cost, $.71, is less than the Oppor- tunity cost for either of the other rejection numbers. TABLE 12.--Unconditional expected Opportunity costs for various rejection numbers Op. Cost Prob. of Ave. Op. Prior Expected P of Wrong Wrong Cost Given Prob. Dec. Dec. P COSt C = 1 .01 .7 $ 6 .1821 $1.09 $ .76 .05 .l 2 .3585 .72 .07 .15 .1 22 .0388 .84 .08 .25 .l 42 .0032 .13 .01 1.0 $ .92 C = 2 .01 .7 $ 6 .0169 $0.10 $ .07 .05 .1 2 .7358 1.47 .15 .15 .1 22 .1756 3.86 .39 .25 .1 42 .0243 1.02 .10 1.0 $ .71 C = 3 .01 .7 $ 6 .0010 $ .06 $ .04 .05 .1 2 .9245 1.85 .18 .15 .1 22 .4049 8.91 .89 .25 .1 42 .0913 3.85 .38 1.0 $1.49 The following abbreviations are used: P for fraction defective Prob. for probability Op. for opportunity Dec. for Decision Some explanation of Table 12 might be helpful. The first two columns Show the prior probability dis— tribution. The opportunity costs of a wrong decision 123 were derived from the Opportunity cost functions. The figures have already been Shown in Table 11. If P = .01, rejection is a wrong decision and the opportunity cost is $6. If P = .05, or .15, or .25, acceptance is a wrong decision and the Opportunity costs are $2, 22, and 42 respectively. The probabilities ofia wrong decision are obtained from the table of binomial probabilities. For C = 2 and P = .01, a wrong decision consists of rejection so .0169 is the probability of obtaining two or more de— fective units in a sample of 20. The other probabilities are interpreted in a Similar manner. The average oppor— tunity cost given P is determined by multiplying the op— portunity cost of a wrong decision by the probability of a wrong decision. The unconditional expected opportunity cost is a result of multiplying the average opportunity cost given P by the prior probabilities. Schlaifer added the cost of sampling $.65, to the $.71 unconditional expected opportunity cost to arrive at $1.36 total Opportunity cost. He then found, by computor operation, the total opportunity costs for the best re- jection numbers for other sample sizes. In this manner, Schlaifer found that the best sample Size was 27 and the best rejection number was 2. This writer plans to use this same approach to determine the Optimum level of Sig- nificance to use for accounting variance control. 
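Because the probabilities of a wrong decision in Table 12 are binomial tail probabilities, the search over rejection numbers can be sketched in the same way. The fragment below is again illustrative rather than Schlaifer's own computation; it holds the sample size at 20 and reproduces the Table 12 totals up to rounding.

from math import comb

prior = {0.01: 0.7, 0.05: 0.1, 0.15: 0.1, 0.25: 0.1}
BREAK_EVEN = 0.04    # fraction defective at which accepting and rejecting cost the same
N = 20               # sample size held constant in this step

def cost_of_wrong_decision(p):
    """Opportunity cost of a wrong decision, from the Table 11 cost functions."""
    return 8.0 - 200.0 * p if p <= BREAK_EVEN else 200.0 * p - 8.0

def prob_r_or_fewer(r, n, p):
    """Binomial probability of r or fewer defectives in a sample of n."""
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k) for k in range(r + 1))

def unconditional_expected_cost(c):
    total = 0.0
    for p, weight in prior.items():
        if p <= BREAK_EVEN:
            # wrong decision = rejecting a good set-up: c or more defectives observed
            prob_wrong = 1.0 - prob_r_or_fewer(c - 1, N, p)
        else:
            # wrong decision = accepting a bad set-up: fewer than c defectives observed
            prob_wrong = prob_r_or_fewer(c - 1, N, p)
        total += weight * cost_of_wrong_decision(p) * prob_wrong
    return total

for c in (1, 2, 3):
    print(c, round(unconditional_expected_cost(c), 2))
# prints roughly .94, .71 and 1.46; Table 12, which rounds each contribution
# separately, shows $.92, $.71 and $1.49, and two remains the best rejection number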
So far, however, Schlaifer has considered only the opportunity 124 costs of wrong decisions and the probabilities that these wrong decisions will be made. The work of Bayes enters the picture only when the prior probabilities are combined with sample evidence in order to arrive at a revised prob- ability distribution. Schlaifer introduces the Bayesian aspect by as- suming that a sample of 20 parts yielded 2 that were de- fective. Table 13 Shows how this sample evidence is com- bined with the prior probabilities to yield the revised AS before, the first two columns repre- probabilities. sent the prior distribution. The conditional probabilities represent the probabilities of getting exactly 2 defective units in a sample of 20 given P. That is, .0159 is the probability that exactly 2 defective units will be found .1887 is the probability that exactly given that P = .01, 2 will be found if P = .05, etc. These probabilities are found in a table of the binomial distribution. The joint probabilities are obtained by multiplying the prior probabili- ties by the conditional probabilities for each respective P. The revised probabilities represent tflie ratio of the joint probability for each respective P to the summation of the joint probabilities. For example .187 is equal to .0113/.05963. Now is it possible to make the kind of statement that distinguishes Bayesian statistics from Classical. The Bayesian would say given the sample evidence that 2 125 defectives were found and the prior probability distribu- .l87 is the probability that P = .01. Likewise, the and .25 are .316, .385, tion, probabilities that P = .05, .15, and .112 respectively. If the operator's set—up is ac- cepted, the probability of a Type II error is .813 (.316 + .385 + .112). Conversely, if the hypothesis is rejected, the probability of a Type I error is .187. Since it has been determined that 2 is the best rejection number, the Operator's set-up will be rejected. It is significant that the probability of a Type I error, .187, is higher than Significance that are customarily the .001 or .05 levels of 12 used in the application of the Basic Control Chart approach. TABLE 13.-—Revision of prior probabilities Prior Prob. Conditional Joint Revised P of P Probability Probability Probability .01 .7 .0159 .01113 .187 .05 .l .1887 .01887 .316 .15 .1 .2293 .02293 .385 .25 .1 .0670 .00670 .112 1.0 .05963 1.000 Morris Hamburg indicates that the Bayesian approach criticizes Classical on these grounds: (1) Classical does not provide a method for combining prior information with experimental evidence, and (2) too much burden is placed on significance levels as a 12The foregoing discussion was developed in Robert Schlaifer, Introduction to Statistics for Business Decisions, pp. 150-197. 126 means of deciding between alternative acts--Specifi- cally, no formal method is provided for the inclusion of economic costs as a part of the decision making process.13 While the second of these grounds is historically quite true, there is no reason why economic costs cannot be in— corporated into Classical statistics to determine the op- timum level of Significance. In fact, this is exactly what was done in the Equalization approach discussed ear— lier in this chapter. The inclusion of economic costs has been identified with the Bayesian approach; but these costs could just as well be included in the Classical approach. The level of significance is not necessarily unique to The term "rejection number" used Classical statistics. 
by Schlaifer is analogous to the level of significance. A study of the calculations indicates that the level of Sig— nificance associated with the rejection number of 2 and the sample Size of 20 is .0169. That is, .0169 is the probability of rejecting a hypothesis that Should have been accepted. The first of Hamburg's criticisms, however, strikes at the heart of the difference between Classical and Bayesian statistics. Application of a Bayesian Concept to the Meat— Cutter Example—-Minimization Approach Determination of Level of Significance It is possible to incorporate the prior probabili- ties and the economic costs of a wrong decision into the 13Morris Hamburg, Industrial Quality Control, XIX, No. 6 (December, 1962), 14. 127 Table 12 format to determine a level of significance for the meat-cutter example. The first step iS to prepare a prior probability distribution for all possible parameters. The results of this task will not be precise because one can never know the exact value of the parameter from which an individual performance is observed. One approach would be to: 1. Prepare a list of the causes of all performances. 2. Estimate the probability of the occurrence of each cause. 3. Estimate the value of the parameter associated with each cause. Assume that this procedure results in the following infor- mation: Cause Probability Parameter Improvement in Skill .05 235 Chance (Standard Performance) .85 245 Dull Knives .05 265 .05 280 Laziness The estimates of the probabilities and the para- meters may be made from past information, from intuition, or from a combination of both past information and intui— tion. This procedure permits the use of the most Objective information available. When past information is not avail- able or when it is incomplete, one must use his best judg- ment. The charge that the use of statistics replaces 128 judgment is not valid. Judgment is an integral part of statistics. If it is not explicitly incorporated into the analysis, it is implicit, as it must be, with an ar- bitrarily selected level of significance. The advantage of using Bayesian statistics, how- ever, is that it provides the procedure whereby intuitive judgment can be revised on the basis of experience. This procedure has just been indicated in Table 13. When past records are not available to estimate a prior probability distribution such a distribution can be based upon intuitive judgment. The sample results then serve as a basis for revising the prior distribution according to the procedure indicated in Table 13. Such revisions should be made fre- quently until the differences between the revised distribu- tionsare insignificant. At this time the probability dis— tribution will be reasonably accurate. One could assume that the above probabilities were derived in such a manner. They will be used as the prior probabilities in Table 14. The next step is to determine the Opportunity cost of a wrong decision correSponding to each parameter. These figures are derived by the same procedure illustrated in Table 8. They are weighted according to the procedure followed in Table 10 to account for the fact that off- standard conditions are not always detected on their first occurrence . 129 Table 14 indicates how the relevant information is combined to determine the level of significance associ- ated with the Minimization approach. This table is essen- tially the same as Table 12. For explanatory purposes attention will be directed first to just one level of Sig- nificance--the .05 level. 
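Before turning to the .05 level in detail, the mechanics of Table 14 can be sketched compactly. The Python fragment below merely combines, for each candidate level of significance, the prior probabilities, weighted opportunity costs, and probabilities of a wrong decision that appear in Table 14, which follows; the names and structure are introduced here for illustration only.

# Expected opportunity cost of each candidate level of significance.
# Each row: cause -> (prior probability, weighted opportunity cost, prob. of wrong decision),
# with the figures taken from Table 14.
levels = {
    0.07: {"Improvement": (0.05, 1.50, 0.7146), "Chance": (0.85, 5.00, 0.07),
           "Dull Knives": (0.05, 1.30, 0.2514), "Laziness": (0.05, 1.75, 0.0057)},
    0.05: {"Improvement": (0.05, 1.62, 0.7314), "Chance": (0.85, 5.00, 0.05),
           "Dull Knives": (0.05, 1.31, 0.2676), "Laziness": (0.05, 1.75, 0.0066)},
    0.03: {"Improvement": (0.05, 2.24, 0.8078), "Chance": (0.85, 5.00, 0.03),
           "Dull Knives": (0.05, 1.52, 0.3557), "Laziness": (0.05, 1.75, 0.0129)},
    0.01: {"Improvement": (0.05, 3.79, 0.8925), "Chance": (0.85, 5.00, 0.01),
           "Dull Knives": (0.05, 1.72, 0.5000), "Laziness": (0.05, 1.75, 0.0314)},
}

def expected_cost(rows):
    return sum(prior * cost * p_wrong for prior, cost, p_wrong in rows.values())

for level, rows in sorted(levels.items()):
    print(level, round(expected_cost(rows), 2))
# prints .01 -> .26, .03 -> .25, .05 -> .29, .07 -> .37; after Table 14's
# row-by-row rounding the .01 and .03 levels tie at $.25, the lowest of the group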
The figures for the Opportunity cost of a wrong decision and the probability of a wrong decision are the same for their respective parameters as 'those shown in Table 9 except that the opportunity costs have been weighted for each parameter by the same proce- dure discussed in conjunction with Table 10. The average Opportunity costs given the cause are the result of mul— tiplying the weighted opportunity costs by the probability of a wrong decision. The contributions to the expected opportunity costs are determined by multiplying each aver— age opportunity cost given the cause by the probability ‘that its respective cause will occur. The sum of the ex- pected opportunity cost column represents the unconditional expected opportunity cost for a .05 level of significance. This procedure Should be carried out for other levels of Significance within the relevant range. The desired level is the one with the lowest expected opportunity cost. For the levels tested in Table 14, the decision maker is indifferent between the .01 and the .03 level. It is possible, however, that a lower expected Opportunity cost could result from a level less than .01 or from one between .03 and .05. 130 TABLE 14.-—Unconditional expected costs of various levels of Significance Prob. of . . Ave. Op. Prior Weighted . Expected Chance Wrong Cost given Prob. Op. Cost Decision Cause Op. Cost .07 Level Improvement .05 $1.50 .7146 $1.07 $ .05 Chance .85 5.00 .07 .35 .28 Dull Knives .05 1.30 .2514 .33 .02 Laziness .05 1.75 .0057 .01 .00 $ .35 .05 Level Improvement .05 $1.62 .7314 $1.18 $ .06 Chance .85 5.00 .05 .25 .21 Dull Knives .05 1.31 .2676 .35 .00 Laziness .05 1.75 .0066 .01 .00 $ 029 .03 Level Improvement .05 $2.24 .8078 $1.81 $ .09 Chance .85 5.00 .03 .15 .13. Dull Knives .05 1.52 .3557 .54 .03 Laziness .05 1.75 .0129 .02 .00 $ .25 .01 Level Improvement .05 $3.79 .8925 $3.39 $ .17 Chance .85 5.00 .01 .05 .04 Dull Knives .05 1.72 .5 .86 .04 Laziness .05 1.75 .0314 .05 .00 $ .25 Space provisions necessitated the following abbreviations: Prob. Op. for Probability for Opportunity Cond. for Conditional 131 The fact that this level of significance is lower than that obtained by the Equalization Approach, .1l+, is Obviously due to the introduction of the prior probabili- ties. In Chapter VI the financial impact Of differences in the control limits will be examined in order to deter— mine which of the methods of significance determination is most useful for variance control purposes. Revision of the Prior Probability Distribution The complete application of Bayes' Theorem requires that sample information be used to revise the prior proba- bility distribution.l4 Table 15 has been prepared under the assumption that a performance taking 250 minutes was Observed. The first two columns represent the prior prob— ability distribution. The conditional probabilities for each cause represent the probability of obtaining a per— formance of exactly 250 minutes given that cause. These probabilities are obtained by the method of "normal curve approximation." For example, .0085 is the probability of observing a performance of exactly 250 minutes if the cause is improvement. It is computed by finding the area under a normal curve between 250.5 and 249.5 given a parameter of 235 and a standard deviation of 8.06 (the mean and 14Here it can be assumed that the prior probability distribution has already been finalized by the revision process. The procedure is still useful, however, in at- taching probabilities to the causes of Specific variances. 
132 standard deviation for assignable cause—-improvement). The joint probabilities for each parameter are calculated by multiplying the original probabilities by the conditional probabilities. The revised probabilities represent the ratio of the joint probabilities for each parameter to the summation of the joint probabilities. The revised probabilities may be interpreted in the following manner. Given a performance value of 250, the decision maker is .9752 confident that this is a chance variance from standard; he can assert with a probability of .0124 that there has been an improvement in the meat-cutter's ability. Finally, he can assert with a probability of .0124 that dull knives were used. TABLE 15.-—Revision of prior probabilities Cause Original Conditional Joint Revised Probability Probability Probability Probability Improvement .05 .0085 .000425 .0124 Chance .85 .0394 .033490 .9752 Dull knives .05 .0085 .000425 .0124 Laziness .05 .0000 0.000000 0.0000 .034340 1.0000 For either the .03 or the .01 level of Significance, 250 falls within the region of acceptance. (The upper con- trol limits corresponding to the .03 and .01 levels of sig- nificance are 262 and 265 respectively.) The revised prob- ability distribution indicates that acceptance will be a 133 correct decision 97.52 per cent of the time. The proba- bility that acceptance is a Type II error is l—.9752 or .0248. Thus, the Bayesian is saying that the probability that the hypothesis is true is .9752; the probability that it is false is .0248. One advantage of the Bayesian method is that it permits consideration of all possible alternative para— meters which are worth the cost of control; in contrast, the Classical method permits consideration of only one alternative parameter or a pair of alternative parameters with each value being the same distance from the standard. Another advantage is that the revised probabilities help to identify the cause of the variance. In the case just cited the betting odds of chance over assignable causes would be given as .9752 to .0248. Table 16 Shows the revised probabilities for observed performances 260 and 270. Notice that 260, which falls within the control limits corresponding to either the .01 or the .03 levels of significance, still carries a high probability, .8571, of being attributed to chance. A performance of 270, how- ever, is almost certain to be the result of an assignable cause. This is consistent with the distribution of chance performances in which a chance performance over 270 had never been observed. The odds favor dull knives,(p=265) over laziness (0:280) 65 to 35 reSpectively. Not only do the revised probabilities help to determine the cause of 134 the variance, but, in the case of significant variances, they determine the course of the investigation. For a performance of 270, it would be more profitable to check the Sharpness of the knives before investigating laziness as a possible cause. TABLE l6.—-Revised probabilities for performances 260 and 270 Revised Revised Cause Probability Probability Perf = 260 Perf = 270 Improvement .0000 .0000 Chance .8571 .0000 Dull knives .1310 .6470 Laziness .0119 .3530 Assume that the $5 opportunity cost for a complete investigation consists of $1 for dull knives and $4 for laziness. If an invegtigation for dull knives reveals their condition to be satisfactory, laziness in this sim— plified problem, is identified as the cause by the process of elimination. 
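The revision mechanics of Tables 15 and 16 can also be sketched briefly. The standard deviation of 8.06 is stated in the text only for the improvement cause; using it for every cause is an assumption made here purely for illustration, so the sketch only approximates the tabulated figures.

from math import erf, sqrt

def normal_cdf(x, mean, sd):
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

def likelihood(value, mean, sd=8.06):
    """Normal-curve approximation to the probability of observing exactly `value` minutes."""
    return normal_cdf(value + 0.5, mean, sd) - normal_cdf(value - 0.5, mean, sd)

# Prior probabilities and parameters for the meat-cutter example
causes = {"Improvement": (0.05, 235), "Chance": (0.85, 245),
          "Dull knives": (0.05, 265), "Laziness": (0.05, 280)}

def revise(observed):
    joint = {c: prior * likelihood(observed, mean) for c, (prior, mean) in causes.items()}
    total = sum(joint.values())
    return {c: round(j / total, 4) for c, j in joint.items()}

print(revise(250))   # roughly .01, .98, .01, 0 -- close to the Table 15 figures
print(revise(270))   # weight moves almost entirely to dull knives (about .58) and
                     # laziness (about .33); Table 16, built on the empirical chance
                     # distribution rather than a normal tail, assigns chance exactly
                     # zero and splits the remainder .647 and .353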
Since the investigation for dull knives is a cheaper element of the investigation cost than the investigation for laziness, it is the only element that needs to be incurred for performances of 270 or more. The cost of an investigation, therefore, is only $1. Had this model considered the parameters for other unfavorable assignable causes such as illness, improper training, or poor attitude, an investigation would have been continued 135 until either the assignable cause was discovered or until all but one of the causes had been eliminated. In this more practical case, the Bayesian approach wOuld be more helpful in establishing priorities for the investigation process. For performances between the upper control limit and 270, the complete investigation would have to be made Since chance performances are possible in this interval. (That is, a Type I error is possible between the upper con— trol limit and 270; but performances over 270 always result from an assignable cause.) CHAPTER VI A TEST OF THE ACCOUNTING AND STATISTICAL CONTROL TECHNIQUES Introduction In addition to the arbitrary methods conventionally employed by accountants, this dissertation has discussed five methods involving statistical procedures for determin- ing the significance of variances. These methods have been identified as: 1. Basic Control Chart approach with an arbitrarily selected level of Significance. 2. Bierman, Fouraker, and Jaedicke approach with two conflicting interpretations of the probabilities. 3. McMenimen approach. 4. An Equalization approach develOped by this writer. 5. An Minimization approach which employs prior probabilities. In this chapter an example will be developed and the upper and lower control limits will be calculated under each meth— od according to the following testing plans: 1. Tests of single performances where A. Each performance is tested 136 137 B. Every tenth performance is tested on a systema- tic basis 2. Tests Of the means of samples of five consecutive performances where A. Every performance is included in a sample B. The frequency of sampling is adjusted so that on the average a sample of five is taken in» every 50 performances. The purpose of these calculations is to test the impact of the resulting differences in cOntrol limits-in order to ferret out the method which is most effective for cost control. The Example Development The hypothetical example involves the time taken for each of fifty meat cutters to butcher each of twenty cows. It is assumed that each of the 1000 performances was investigated to determine the cause. The value of each performance has been recorded and the mean has been computed for each cause. Table 17 summarizes the results. More detail is shown in Table 18 which depicts the number of performances occurring at each value under each cause. It is assumed that_this information-has been Obtained without the knowledge of the butchers so that the frequency 138 of the causes represents what has been experienced in the immediate past without an unusual effort on the part of the butchers to reduce assignable causes. Such an unusual effort might be put forth on the part of the workers if- they knew that each individual performance was being ob—- served. Prior to this test it is assumed that control has taken the form of a comparison of actual weekly cost for the department (composed of all butchers) with budgeted cost for the-cows butchered. An investigation has been. undertaken when the variance exceeds 10 per cent of the budget. 
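The record assumed in this example is simply a list of cause and time pairs, one per performance, and Table 17 below is an aggregation of that record. A short Python sketch of the aggregation follows; the three records shown are made-up placeholders, since the full 1,000-performance listing appears only in Table 18.

from collections import defaultdict

# Each performance is a (cause, minutes) pair; the three shown are placeholders only.
performances = [("Chance", 245), ("Dull Knives", 268), ("Improvement", 232)]

def summarize(records):
    """Frequency and mean performance time for each cause, i.e. the Table 17 summary."""
    grouped = defaultdict(list)
    for cause, minutes in records:
        grouped[cause].append(minutes)
    return {cause: (len(times), sum(times) / len(times)) for cause, times in grouped.items()}

print(summarize(performances))
# Applied to the full 1,000-performance record this yields the frequencies and
# means of Table 17 (120 and 270 for dull knives, and so on).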
TABLE 17.--Causes--their frequencies and means

                         Number of
Cause                    Performances        Mean
Dull Knives                  120              270
Tough Cows                    20              280
Lack of Training              40              285
Poor Attitude                 60              255
Illness                       20              265
Improvement                  100              230
Laziness                      40              275
Chance                       600              245
Grand Mean                 1,000              251

Note that the mean of the chance performances is 245. This value has previously been established as the standard. Note further that the grand mean is 251. Under the 10 per cent rule currently employed an investigation would not be undertaken unless the grand mean was 270 (245 + 10% of 245). In this case, the 10 per cent rule hardly seems adequate in view of the fact that 400 of the 1000 performances were due to assignable causes. This point will be taken up in more detail later.

The causes enumerated in Tables 17 and 18 are certainly not mutually exclusive. That is, it would be possible to have several combinations of causes present during any performance. However, to keep this analysis to a manageable level and to focus attention on concept rather than procedure, the cause of each of the 1000 performances has been assumed to be mutually exclusive in this example.

Before this plan was instituted it is assumed that two new apprentices were employed. Normally, it takes several months for the work performance of new employees to come up to that of the other butchers. These new men account for the forty performances in the lack of training category. This department contains five long-standing butchers whose mean performance has been known to be less than standard. The 100 performances of these men are revealed in the improvement category. Even though the standard might be kept at 245 for double entry accounting purposes, it is this writer's opinion that it should be revised for control purposes to take cognizance of the improvement of these senior men.

TABLE 18.--Distribution of performance values by cause

[Table 18, which occupies three pages in the original, lists the number of performances observed at each time value (in minutes) under each cause--Improvement, Poor Attitude, Dull Knives, Tough Cows, Chance, Lack of Training, Illness, and Laziness--together with a total column. The individual cell entries are not legible in this reproduction; the frequency and mean for each cause are summarized in Table 17.]

It is assumed that the other information in Tables 17 and 18 was detected by the following procedures. The knives were all tested for sharpness before the study began. They were also tested periodically thereafter.
If the knives were found to be dull, no action was taken until thev butcher himself reported this Condition.» The reason for this is, of course, that the firm wanted to determine how frequently the butcher will not realize that his knives are dull and at the same time obtain an estimate of the probar bility that this assignable cause will be present. A-psy- chological test was administered without explaining this study. It indicated that three men accounting for Sixty performances had poor attitudes because of family problems. The same test Showed that Seven butchers are prone to lazi- ness. This Situation shows up in their work sporatically and Contributed to forty performances during the Observation period. The best way to improve this record is to detect the condition immediately and call it to the attention of the butcher involved. This might be accompanied along with the hint that his wage rate or other benefits might be ad- versely affected. A physical examination revealed that one man ac— counting for twenty performances was ill although he had been unaware of his illness. 144 Use of the Example The hypothetical information "discovered" through the procedures just described has been summarized-in~Tablet 18. It will be used throughout the remainder of this chap- ter to estimate the probabilities of committing Type I and Type II errors as well as to estimate the probabilities of. the existence of the various causes. Selection of the Example This writer considered using an empirical example from an actual industrial Situation. However, Since statis— tical procedures other than the scant use of the Basic Con— trol Chart approach are not actually employed anywhere to this writer's knowledge, complete information would not be readily available. Of course, estimates could have been. made as they must be initially for a company adopting such procedures. It was, however, felt that the basic features of the model could be more clearly portrayed with the assump— tion of perfect knowledge through the construction of an hypothetical example. The distribution of chance performances Shown in Table 18 is symmetrical about the standard. It approximates a normal distribution but has not been fitted to a normal. curve. The reader will note that it is not perfectly con— tinuous. For example, no Observations are reported for values 221 and 222 although there are observations listed 145 at 220 and 223. Also, there are 11 performances at 238 but only 9 at 239; whereas, a perfectly continuous distribution would require more performances at 239 than at 238. The number Of performances selected for each value were selected as this writer felt 600 performances might actually fall in practice. No attempt was made to bias the example to achieve any particular results. In fact, it will be noted in the conclusions that the results may change if the distribution of performance values changes. Likewise, the distributions for each of the assign- able causes are symmetrical about their respective means. Because there are fewer observations for each assignable cause than there were chance observations, the assignable cause distributions do not, in most cases, even approximate normality. The values were purposely set down so that there would be some overlap among the assignable cause distribu— tions and the chance distribution. The reason for this was to include the possibility of Type I and Type II errors. Other than this, the values were not selected with any par- ticular design in mind. 
They were selected so that they might give the appearance of reality. However, they were not selected in any effort to achieve any particular results. Investigation Procedure The purpose of this example is to calculate the upper and lower control limits for each testing plan under each 146 other hand, the investigation may be undertaken so that suppliers can be informed when the quality of cows is poor. When all these procedures fail to reveal an assignable cause, it is concluded that a Type I error has been made. The cost of the Type I error is $6 which is determined as follows: Test For Incremental Cost Accumulated Cost Dull Knives $ 1 $ 1 Attitude-Laziness l 2 Illness 3 5 Tough Cows 1 6 If an investigation is not undertaken for tough cows, then, the Opportunity cost Of a Type I error is only $5. When a performance value Observed by one Of the butchers other than the five who are already known to have improved is smaller than the lower control limit, an in— vestigation should be undertaken to ascertain whether im— provement has occurred. This investigation would probably involve an analysis of some past performances for this worker as well as closer attention of his next few perfor— mances. It is assumed that the opportunity cost associated with this investigation is $4. The purpose of the foregoing discussion has been to explain the determination of the Opportunity cost of a Type I error (or the cost of an investigation). These figures will be used to calculate the control limits. 147 Derivation and Financial Analysis of Upper Control Limits for Single Observations——Each Performance Tested Accountant's Conventiona1_MethOd It has already been pointed out that this firm has historically designated a variance as significant when it exceeded 10 per cent of the standard. With this criterion the upper control limit would be 270 [245 + 245 (.10)] min— utes. It is interesting to note that this upper control limit would reduce the probability Of a Type I error to zero because Table 18 shows that a chance performance has never taken longer than 270 minutes. If investigations are undertaken only for performance values over 270, it would be impossible to investigate a performance coming from a chance population. Basic Control Chart Approach This approach involves the arbitrary selection of a level of significance based on the distribution of chance performances shown in Table 18. This level is usually in the range from .05 to .01 based on a two—tailed test. The .05 level with a two—tailed test would place the upper con— trol limit at that value which is exceeded by 2.5 per cent of the chance performances. Since 3.667 (22/600) per cent of the chance performances are at least 260 and 2.167 (13/600) per cent are at least 261, the upper control limit is between 260 and 261. 148 Bierman, Fouraker, and Jaedicke Approach- Chapter IV discussed the Bierman, Fouraker, and Jaedicke development of the following formula for the crit— ical probability, Pc: where: L is the present value of the expected opportunity cost resulting from not_taking corrective action on the basis of the present_deviation. C is the cost of an investigation The following decision rules were then adopted. 1. Accept the hypothesis if.P is larger than Pc 2. Reject the hypothesis and investigate if.P is smaller than Pc. In order to find the upper control limit it is necessary to test values until P is equal to PC. At this- point the decision maker is indifferent between the acts of making an investigation and not making one. 
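Both of the limits just derived are simple functions of the chance distribution in Table 18 and can be sketched as follows. The chance values themselves are not reproduced here; the sketch is illustrative, and the tail counts quoted above (22 of 600 at 260 or more, 13 of 600 at 261 or more) are what determine the result.

STANDARD = 245

# Conventional rule: a variance is significant when it exceeds 10 per cent of standard.
upper_conventional = STANDARD + 0.10 * STANDARD   # 269.5, treated as 270 minutes in the text

def basic_chart_limits(chance_values, alpha=0.05):
    """Largest and smallest values still inside the acceptance region when alpha/2
    of the chance performances are cut off in each tail; the control limits fall
    just beyond the values returned."""
    n = len(chance_values)
    tail = alpha / 2.0
    upper_inside = max(v for v in set(chance_values)
                       if sum(x >= v for x in chance_values) / n > tail)
    lower_inside = min(v for v in set(chance_values)
                       if sum(x <= v for x in chance_values) / n > tail)
    return lower_inside, upper_inside

# With the 600 chance performances of Table 18 this returns (230, 260): 22/600 = .0367
# of them are at least 260 but only 13/600 = .0217 are at least 261, so the upper
# control limit lies between 260 and 261, as found above.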
Therefore, the value at the point of equality represents the_control limit. A test will be run first to see if this equality exists at 260. In order to find the value for L it is first necessary to find the Opportunity cost sustained on each performance if the operation is off—standard. In Chapter 149 IV it was noted that Bierman, Fouraker,.and Jaedicke con- sidered the alternative parameter to be equal to the test value.lv Therefore, in testing the value 260 they would say that the Opportunity cost sustained on each off—standard 245 - 260 60 min performance_is equal to X $3 ($3 is the wage rate for butchers) or $.75. To‘translate this into an L value it is necessary to consider how long the off—standard—con— dition will prevail before it is detected. In their illus- tration which dealt with determining the significance of a yearly variance from an aggregate account Bierman, Fouraker,- and Jaedicke assumed that the variance would continue for four years. Four was an arbitrarily selected number. Ac- cordingly, it seems reasonable that Bierman, Fouraker, and Jaedicke would assume that an off-standard performance would continue for four more performances before detection.2 With this assumption, L would be $3. 1This assumption was questioned in Chapter IV. Table 17 shows that none of the causes have a mean of 260. 2The use of the number four was also questioned in Chapter IV. The Table below shows that the probability that an off-standard performance would not be detected on the. third performance (and thus extend the inefficiency to the fourth performance) is only .011. Column A represents the number of successive failures to detect a change in the cause system. Column B represents the probability that the change will not be detected after the number of tests indi- cated in the first column. B .267 .053 .011 UJM+4W 150 The cost of an investigation, C, depends upon the investigation procedure-and upon the cause. For example, according to the procedure explained in conjunction with this example the cost of an investigation associated with the various assignable cauSes is $1 for dull knives, $2 for poor attitudes or laziness, $5 for illness, and $6 for tough cows. Since Bierman, Fouraker, and Jaedicke did not. discuss the steps of an investigation procedure, they must have had in mind the cost of a complete investigation which is $6 in this case. However, since a tough cow cannot be butchered in less than 262 minutes, $5 can be assumed to be the cost of a type I error for test values below 262. The table is interpreted to mean that the probabil- ity that an off—standard condition will not be detected on its first occurrence is .207. This probability is deter— mined in the following manner: 1. Find from Table 18 the number of unfavorable assign able causes less than the test value 260. This number is 62 itemized as follows: Poor Attitude 45 Dull Knives 10 Illness 5 Laziness _2 62 2. Divide this by 300--the total number of unfavorable assignable'causes. This is the proportion of unfaVorable assignable causes be— low 260 and as such represents the probability that an as— signable cause will not be detected with an upper control limit-of 260. . The probability that an off—standard performance. will not be detected on its second occurrence, if all perfor— mances are tested, is .207 squared or .053. Likewise, the probability that it will not be detected on its third oc- currence is .207 cubed or .Oll. 151 Substitution in the formula yields a negative PC value (pc = E_%_E = §§§%_§§.= —.667). 
When 270 is tested, Pc is still negative if C is set at $6.3 In this case, the Bierman, Fouraker, and Jaedicke approach will not yield a determinate solution within a range that makes sense. Since all performances higher than 270 resulted from assignable causes, 270 is the highest value that any wise person would use as an upper control limit. The reason for the failure to yield a determinate solution is that this testing plan is con— cerned with an investigation of individual performances-— a case that Bierman, Fouraker,.and Jaedicke did not discuss. When all performances are investigated, Lris small in re- lation to what it is when a testing plan involving a samp- ling procedure is used. It is small because it consists only of the savings on an individual performance weighted by the fact that an off-standard condition may not be de— tected on its first occurrence-~rather than a long range savings. On the other hand, if C is set at $5, Pc is zero for test value 270. This yields a determinate solution when P is defined as "the probability of a deviation this 3PC = L - c = $5 g $6 =f_$.20 152 large or larger occurring from random causes."4 It is also specified whether the deviation is favorable or unfavorable. This will be referred to as the Bierman, Fouraker, and Jae— dicke first interpretation of P. Since there are no chance performances over 270 and only one at 270, P is equal to 1/600 divided by 2 or .0034. Since this is almost zero, P = Pc and 270 is the upper control limit. The reader will recall, from Chapter IV, that at a later point, Bierman, Fouraker and Jaedicke define P as "the probability of an unfavorable deviation resulting-from uncontrollable [chance] causes."5 This has already been identified as their second interpretation of P. They in- correctly used these two definitions of P synonomously; but all of their calculations were carried out with the first definition in mind. In order to evaluate P according to the second definition, it is necessary to divide: (l) the number of times that each deviation (or test value) has re— sulted from chance causes by (2) the total number of times that each deviation (or test value) has occurred. Table chontains the information to evaluate P in this manner. All that needs to be done to calculate the P for any test value is to divide the number of chance performances by the total number of performances for that test value. For example, for test value 270 P is l/15 or .0667. 4Bierman, Fouraker, and Jaedicke, 113. 5Ibid., 121. 153 It has just been found that Pc is at most zero for test value 270. Hence, 270 is not a control limit under this approach because .0667 deos not equal zero. It has also been established thatch is negative for test values lower than 270.’ Since P is either zero or positive for all test values, the upper control limit cannot lie below 270. A control limit above 270 would not be useful since no per- formances due to chance have ever taken longer than 270 minutes. Consequently, the Bierman, Fouraker, and Jaedicke second interpretation of P does not yield a determinate solution. McMenimen-Approach Leo McMenimen proposes that various amounts could be spent on an investigation with the result that various amounts would be saved. He also insists that the cost of correcting an off-standard condition be formally involved in the investigative decision. 
Since an investigative procedure has been established in this example, it is appropriate to begin by testing to see if it is worthwhile to spend $1 investigating a per— formance value of 260 for dull knives. McMenimen did not specify a procedure for determining the amounts to be saved; but in this example a procedure seems clear. The knives are either sharp or they are dull. If they are sharp nothing can be saved by an investigation. On the other hand, if 154 they are dull the amount tO be saved is equal tO the Oppor— tunity cost less the cost Of sharpening the knives. The single performance Opportunity cost for dull knives is 270 - 245 60 mances with dull knives].6 Bierman, Fouraker, and Jaedicke $1.25 [( )$3 where 270 is the mean Of the perfor- arbitrarily assumed that fOur tests would lapse before an assignable cause would be detected. Since McMenimen neither questioned this assumption nor Offered a weighting scheme Of his own, the $1.25 will be multiplied by 4 tO arrive_at a $5 Opportunity cost which considers the fact that dull knives may not be detected on their first occurrence. It costs $1.50 to sharpen the knives; but since each Of the four performances that lapse on the average before dull knives are detected would benefit from sharpening, the cost can be spread over these four performances. Consequently, the cost applicable tO the sharpening Of one set Of knives 6Actually, McMenimen did not attack the Bierman, Fouraker, and Jaedicke assumption that the mean Of the as- signable cause is equal tO the performance value being tested, i.e., Bierman, Fouraker, and Jaedicke would use 260 rather than 270 in the above calculation. While in his theoretical discussion McMenimen did not relate the amount tO be saved to specific assignable causes, it is clear that in applying his procedurecmuamust know the assignable cause before the cost Of correcting the cause and thus the amount tO be saved can be estimated. For example, it costs more tO cure an illness than to sharpen dull knives. These two causes have different means even though either cause may produce some Of the same performance results. Accordingly, in this application, this writer is taking the liberty Of introducing specific assignable causes with their related means into McMenimen's work. 155 applicable to a single performance is $.375. Thus, the savings from detecting dull knives is $4.625 ($5 — $.375). The infomation necessary to make the investigative decision has been marshalled in Table 19. In this initial step only two events are Considered. It would not make sense to have savings between $0 and $4.625 because it would not be rational tO partially sharpen the knives as a result Of the investigation. The knives would either be sharpened to enable parameter reduction tO 245 with a savings of $4.625 or they would not be sharpened so that nothing could be saved by the investigation. TABLE 19.——Application Of McMenimen technique Spend $1 investigating Event Pe Cond. Exp. Test Value 260 Save so ' .7143 $—1 $— .7143 Save $4.625 .2857 3.625 1.0357 Expected Savings $ .3214 The Pe = .7143 is determined by dividing the 10 per- formances with 260 values (shown in Table 18) that are not attributed to dull knives7 by the 14 performances with 260 values. Since the 4 remaining performances are due to dull 7One is due tO poor attitude and nine to chance. The one due to poor attitude is included in the save $0 category inthis case because it would not be detected by an investigation for dull knives. n It. 
156 knives; 4/14 or .2857 represents the probability that $4.625 could be saved by an investigation. The conditional values result from subtracting the $1 investigation cost from the savings values. The contributions to the expected value represent the product Of each respective Pe by its. conditional value. The sum Of these contributions repre— sents the expected savings as a result Of the investigation. Since expected savings is positive, it would be worthwhile tO investigate for dull knives. Whether it is worthwhile tO extend the investiga— tion for poor attitude8 depends upon the expected savings for act spend up tO $2 investigating. McMenimen states thatv "we will choose that act with the highest expected value."9 This would be agreeable if the decision maker were faced with a number Of independent alternative acts or if the particular act had tO be chosen a priori. In this example, however, it has already been decided tO spend $1 tO invest- igate for dull knives. If dull knives are the cause, the investigation is terminated and the knives are sharpened. If dull knives are not the cause, some guidelines are necessary tO determine whether an additional $1 should be spent investigating for poor attitude. As long as the 8This is the only other possible assignable cause with a 260 value. 9McMenimen, 63. 157 expected savings is positive more will be saved on the. average by the investigation than will be spent; there— fore, it is this writer's Opinion that each respective investigation procedure should be undertaken as long as the resultant expected savingsfrom employing the proce- dure are positive. Accordingly, this treatment will be followed throughout this dissertation. Appendix B contains the computational explanations tO relieve the text from a plethora Of such detail. The results from this appendix are summarized below. Consider— ing only the incremental costs Of additional investigation procedures, the upper control limit for poor attitude and laziness is between 261 and 262. Under these assumptions McMenimen would never find it profitable tO investigate' for illness. The upper control limit associated with dull knives is between 259 and 260. Equalization Approach For convenience, the probability Of a Type I error times the Opportunity cost Of a Type I error has been des- ignated as the expected opportunity cost Of a Type I error. The probability Of a Type II error times the conditional Opportunity cost Of a Type II error has been designated as the eXpected Opportunity cost Of a Type II error. Consee quently, the Equalization control limit occurs at that test value for which the expected opportunity cost Of a Type I error equals the expected Opportunity cost Of a Type II error. 158 Various values have been tested in an effort tO 'find one which equates the expected Opportunity costs Of each type Of error. The results Of these tests are shown in Table 20. This table shows that 260 falls in the re- gion Of hypothesis rejection because the expected Oppor— tunity cost Of committing a Type-I error, $.1853, is less than the $.3111 expected Opportunity cost Of committing a Type II error. Hence, for the occurrence Of performance value 260 it is cheaper in the long run tO reject the hy- pothesis, undertake an investigation and run the risk Of making a Type I error than tO accept the hypothesis and run the risk Of committing a Type II error. It is now apprOpriate to test a smaller value. The reason for the move in this direction is illustrated in the following diagram. 
Here 260 is shown tO be in the shaded region Of rejection. Obviously, the boundary (to be the upper con- trOl limit) must be less than 260. FIGURE 7.-—Direction Of upper control limit 159 The same conclusion (that Of rejection) holds for test value 259. However, for test value 258, the expected Opportunity cost of a Type II error is less than the ex— pected Opportunity cost Of a Type I error. This fact in— dicates the desirability Of accepting the hypothesis and refraining from an investigation for a performance value Of 258. Thus, the Equalization upper control limit is between 258 and 259. TABLE 20.—-Decision table for Equalization approach Test Values 258 259 260 Probability Of a .0700 .0517 . .0367 Opportunity Cost Of a $5 $5 $5 Expected Opportunity Cost of a $ .3500 $ .2585 $ .1853 Probability of B .1700 .1867 .2067 Opportunity Cost Of B $1.3726 $1.5379 $1.5053 Expected Opportunity Cost Of 8 $ .2333 $ .2684 $ .3111 Decision Accept Reject Reject Type I error Type II error H II a 8 Appendix B contains an explanation of the deriva— tion Of the probabilities and Opportunity costs of each type Of error. Minimization Approach It was pointed out in Chapter V that this approach involves the calculation of an expected opportunity cost 160 for each performance value in the range likely to represent the best control limit. The value yielding the lowest ex— pected Opportunity cost is the Minimization control limit. The expected Opportunity cost is derived by adding the prO- ducts Of the prior probability, the weighted Opportunity cost, and the probability Of a wrong decision for each cause. Table 21 indicates the results. From the results Of the other statistical approaches, the upper control limits ap— pears tO be around 260. Therefore, the results in Table 21 are shown first for test value 260. They are indicated next for test values 259 and 258 respectively. However, since the expected Opportunity costs are higher for these values than for 260, it appears that the Minimization upper control limit is 260 or higher. Table 21 next shows the expected Opportunity costs for test values 261 and 262. Since the expected Opportunity cost for test value 261 is the lowest, it is designated at the Minimization upper con— trol limit. The reader should refer to Appendix B for an ex- planation Of the derivation Of the detail shown in Table 21. The reader will note, however, that this table bears the same format as Table 140 — Ir. 161 TABLE 21.-—Decision table for Minimization approach Expected Prob. of Cond. Prior Weighted Wrong Average Opportunity Cause Prob. Op. Cost Decision Op. Cost Cost Test Value 260 Chance .7143 $5.0000 .0367 $ .1835 Poor attitude .0714 1.8036 .75 1.35 Illness .0238 1.3379 .25 .3345 Dull Knives .1429 1.3583 .0833 .1131 Laziness .0476 1.5825 .05 .0791 1.0000 $ .2554 Test Value 259 Chance .7143 5.0000 .0517 .2585 Poor attitude .0714 1.6127 .7167 1.1558 Illness .0238 1.3006 .2000 .2601 Dull Knives .1429 1.3410 .0667 .0895 Laziness .0476 1.5352 .0396 .0608 1.0000 $ .2890 Test Value 258 Chance .7143 5.0000 .0700 .3500 Poor attitude .0714 1.4218 .6833 .9715 Illness .0238 1.2632 .2000 .2526 Dull Knives .1429 1.3237 .0500 .0662 Laziness .0476 1.5000 —0- —0- 1.0000 $ .3516 162 Table 21 (Continued) Prob. Of Cond. Expected Prior Weighted Wrong Average Opportunity Cause Prob. Op. Cost Decision Op. 
Cost Cost Test Value 261 Chance .7143 $5.0000 .0217 $ .1085 Poor attitude .0714 2.1635 .7667 1.6588 Illness .0238 1.3770 .2000 .2754 Dull Knives .1429 1.3750 .1167 .1605 Laziness .0476 1.6084 .0500 .0804 1.0000 $ .2292 Test Value 262 Chance .7143 $5.0000 .0183 $ .0915 Poor attitude .0714 2.5234 .7833 1.9766 Illness .0238 1.4161 .2000 .2832 Dull Knives .1249 1.3915 .1333 .1855 Laziness .0476 1.6344 .0500 .0817 1.0000 $ .2436 Comparison Of Upper Control Limits Among the Methods The control limits that have just been derived under each Of the approaches are shown below for review purposes: Approach Upper Control Limit Accountant's Conventional 270 Basic Control Chart 260—261 Bierman, Fouraker, and Jaedicke First Interpretation Of P 270 Second Interpretation Of P indeterminate McMenimen 259-260 for dull knives 261-262 for poor attitude and laziness Equalization 258-259 Minimization 261 163 The listing Of two figures such as 260—261 indicates that an investigation would not be undertaken for the occurrence Of the first figure (260) but that one would be for the second one (261); therefore, the control limit is between the two. Financial Analysis and Ranking The approaches will be analyzed by twos insofar as necessary to rank them in preferential order Of their de— sirability. The analysis takes the general form outlined below. Of the two approaches under consideration at any given time the one yielding the lowest upper control limit will involve larger total investigation costs. This addi— tional cost can be measured by counting from Table 18 the number of chance performances between the two control lim— its (these performances would be investigated under the approach yielding the lower control limit but not under the approach yielding the higher one) and multiplying this number by $5. If the tough cow investigation is made for performances equal to or greater than 262 the factor would be $6 for those chance performances at least as large as 262 but less than the higher upper control limit. This approach gives the additional investigation cost per 1000 performances. It is, Of course, true that an investigation would be undertaken for all performances between the con- trol limits and not just those due to chance. Those 164 investigations where assignable causes are present will not,however, be considered as additional investigation costs because the investigation would ultimately be under- taken even under the approaCh yielding the higher upper control limit. The second part Of the analysis considers the fact that the lower Of the two upper control limits being eval— uated will detect assignable causes which would not be detected until some later time with the higher Of the upper control limits. This saving can be determined by counting from Table 18 for each assignable cause the number Of per— formances having values between the two control limits. This number for_each respective cause is multiplied by the weighted Opportunity cost for the value representing the higher Of the two control limits. The sums Of these pro— ducts for each cause are then added to Obtain the incremental savings associated with the lower Of the control limits. If this savings is greater than the incremental in— vestigation cost, the approach yielding the lower upper control limit is designated as more effective than the ap— proach yielding the higher upper control limit. 
Conversely, if the savings is less than the incremental investigation cost, the approach yielding the higher upper control limit is designated as the more effective. The following example illustrating the analysis be- tween the Accountant's Conventional approach (yielding an 165 upper control limit Of 270) and the Basic Control Chart approach (yielding an upper control limit between 260 and 261) should clarify the procedure. Table 18 shows that there are 12 chance performances as large as 261 but less than 270. These would be investigated under the Basic Control Chart approach but not under the Accountant's Con— ventional method. The investigation would have an Oppor— tunity cost Of $5 each for the two performances at 261 and $6 each for the remaining ten performances. Consequently, the added investigation cost under the Basic Control Chart method is $70 per thousand performances. Table 22 shows the derivation Of the added savings per thousand performances associated with the Basic Control Chart method. The number Of performances pertaining to each assignable cause with values between 261 and 269 in- clusive represent the number Of assignable causes that the Basic Control Chart method will detect that would not be detected under the-Accountant's Conventional method. The sum Of the products Of each Of these numbers by the weighted Opportunity cost at 270 is the savings. Since the savings, $266.25, is more than the extra investigation cost, the Basic Control Chart method is a more effective basis Of control for this testing plan than the method conventionally employed by accountants. The reader will note that the weighted Opportunity costs are those developed by this writer in conjunction 166 with the Equalization approach. In making these compari— sons it is necessarytn employ the same weighting scheme. This one was selected because it is more scientific than the arbitrary selection Of the number "four" used by Bier— man, Fouraker, and Jaedicke. The reader will also note that nO weighted costs are indicated for tough cows and lack Of training. The fact that two new butchers are on hand is known in advance regardless Of the control approach employed. In this case, savings emanates only from experi— ence. By the same token, nothing can be saved after the tough cow has been butchered. The results of all the comparisons are summarized in Table 23. The following abbreviations are necessary: AC for Accountant's Conventional BCC for Basic Control Chart BFJ lst for Bierman, Fouraker, and Jaedicke first BFJ 2nd for Bierman, Fouraker, and Jaedicke second McM for McMenimen Equal for Equalization Min for Minimization. Since in the McMenimen approach the UCL varies, a slightly different analysis is undertaken. McMenimen would spend $1 investigating nine chance performances at 260 that the basic control chart method would not investigate. This involves an extra cost Of $9. By investigating perfOrmances at 260, McMenimen would detect four dull knives at a weighted 167 TABLE 22.—-Added savings Of basic control chart method Performances Opportunity between 261 and Cost Weighted Cause 269 incl. (F)' at 270 (C) CF Poor Attitude 12 $12.5600 Dull Knives 40 2.3058 Tough Cows 2 Illness 4 1.8447 Laziness 8 1.9900 Lack Of Training 1 Added Savings $266.2508 TABLE 23.—-Financial comparisons between approaches Added Inv. 
Most Approaches Cost Of Added Savings Effective Tested Lower UCL Of Lower UCL Approach BFJ lst and Min and Min and AC BCC $ 70 $266.25 BCC BFJ 2nd and Equal AC 150 321.60 Equal Equal Min and BCC 90 17.73 Min and BCC McM Min and BCC 41 3.11 McM 168 cost Of $1.3583 for a total savings Of $6.4332. TO con- tinue, McMenimen would Spend only $1 investigating the two chance performances at 261 for dull knives; whereas, the control chart method would spend $5 on these performances giving them the complete investigation. Therefore, the Control Chart approach would spend another $8 here. For this, it would detect one performance at a weighted Oppor— tunity cost Of $2.1635 due to poor attitude. Finally, McMenimen would spend only $2 investigating the eleven chance performances between 262 and 270, while the Control Chart approach would spend $5 on the two at 262 and $6 on the other nine for a total Of $64. Accordingly, the Con- trol Chart approach would spend an additional $42 for which it would detect four cases Of illness at a weighted cost Of $1.8447 each for a total Of $7.3788. This discussion can be summarized as follows: Approach Added Cost Added Savings McM ($9) (56.4332) BCC 8 2.1635 BCC _42 7.3788 $41 $3.109l Since the Basic Control Chart approach spends more investigating relative to what it saves over the McMenimen approach, the McMenimen approach is more effective as a control method. 169 As a guide to ranking the approaches, it is possi- ble to depict the analysis in the form Of a tree-diagram shown in Figure 8. FIGURE 8.--Outcomes Of financial comparisons Equal '94 Equal , (BFJ 15:t)A \ # BCC and Min -J//fiv _1. McM . \ - BCC(M1n) . - As a result Of Figure 8, the following ranking now: becomes Obvious: .Approach Rank McM 1.0 BCC 2.5 Min 2.5 "Equal 4.0 AC 5.5 BFJ lst 5.5 BFJ 2nd 7.0 In cases Of ties in ranks, it is customary to give each itemtflmaaverage Of the ranks which they jointly occupy. 170 Derivation and Financial Analysis Of Lower Control Limits for Single. Observations—-Each Performance Tested Accountant's Conventional Method With the 10 per cent rule, the lower control limit will be [245 - 245 (.10)] 220 minutes. As with the upper control limit, this criterion results in a zero probability Of committing a Type I error. In this case, however, the probability Of making a Type II error is .83. This is de— termined by dividing the 83 improved performances with values over 220 by 100——the total number Of improved performances. Basic Control Chart Approach A two-tailed test with a .05 level Of significance was selected to determine the upper control limit. This same criterion will place the lower control limit at that value which is higher than 2-1/2 per cent Of the chance performances. Table 18 shows that the probability that a chance performance will be 230 or less is 22/600 or .0367; whereas, the probability that it will be 229 or less is 13/600 or .0217. Since .025 is between .0217 and .0367, the lower control limit is between 229 and 230. Bierman, Fouraker, and Jaedicke. Approach First Interpretation Of P. The necessary informa— tion for pin—pointing the control limit has been marshalled 171 in Table 24 which shows that the lower control limit is between 224 and 225. Appendix B explains how the indivi— dual numbers which comprise the-table are determined. TABLE 24.—~Decision table for B. F., and J. 
application first interpretation Of P Test Value L C Pc P Decision 220 $5 $4 .2000 .0034 Reject 221 4.80 4 .1667 .0034 Reject 222 4.60 4 .1304 .0034 Reject 223 4.40 4 .0909 .0068 Reject 224 4.20 4 .0476 .0136 Reject 225 4 4 0 .0167 Accept Second Interpretation Of P. Table 25 contains the information required to determine the control limit under this interpretation. Appendix B explains the logic behind placing it between 222 and 223. The appendix also explains some reservations concerning the utility Of this approach under this particular set Of assumptions. TABLE 25.—~Decision table for B. F., and J. application second interpretation Of P Test Value L C PC P Decision 220 $5 $4 .2 .33 Accept 221 4.80 4 .1667 0 Reject 222 4.60 4 .1304 0 Reject 223 4.40 4 .0909 .25 Accept 172 McMenimen Approach The savings value for the McMenimen approach is determined by multiplying the single performance Opportu— nity cost by four and then subtracting the.cost Of correc— tion. The single performance Opportunity cost is $.75 (245 — 230 60 performances) regardless Of the test value. The weighted X $3 where.230 is the mean Of the improved Opportunity cost is, then, $3 which is less than the $4 investigation cost even before the cost Of correction is subtracted. Consequently, the MOMenimen approach fails to yield a determinate solution for this situation.. The reason that this model does not yield a deter- minate solution and that the Bierman, Fouraker, and Jaedicke model yields a questionable solution is that these models were constructed with longer periods Of time in mind than just the time required to complete one performance. There— fore, the savings were intended tO be Of a more long run nature. Actually, on the surface, it does not seem rea- sonable to spend $4 on an investigation which may result in a savings Of $3 even though the other models give deter— minate solutions for this assumption. The paradox is that the $3 reflects savings if we catch an Off—standard condi— tion now rather than at some other time in the near future. This figure assumes that each performance is tested. This writer believes that it is important to use a model which will yield a determinate solution at the performance level. 173 Not only is control more timely at this level; but the age gregation problems of average—out and Off—set are eliminated.. Moreover, there is greater difficulty in objectively measur- ing long—run savings. Equalization Approach The information needed to make a decision by this method has been marshalled in Table 26. The table indi— cates that the Equalization lower control limit falls be— tween 233 and 234. It is clear that the hypothesis should be rejected for test value 233 because the expected Oppor— tunity cost associated with act reject, $.3668, is less than the expected Opportunity cost associated with act ac- cept, $.4633. Likewise, it is clear that the hypothesis should be accepted for test value 234 because the expected Opportunity cost associated with act accept, $.4ll7, is less than $.4732——the expected Opportunity cost for act reject. The derivation Of the detail for this table is explained in Appendix B. Minimization Approach Table 27 shows that the lowest expected Opportunity cost occurs for test value 229 which then becomes the lower control limit under this approach. The table is constructed in the same manner as Table 21 which was used to find the Minimization upper control limit. 
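The arithmetic behind Table 27 can be summarized compactly. The following Python fragment is offered only as an illustration of the procedure just described and is not part of the original computations; the prior probabilities, weighted opportunity costs, and probabilities of a wrong decision are the figures quoted in the text and in Table 27, and the function and variable names are invented for the sketch.

# A minimal sketch of the Minimization procedure behind Table 27: for each test
# value, the expected opportunity cost is the sum over causes of
# prior probability x weighted opportunity cost x probability of a wrong decision;
# the lower control limit is the test value that minimizes this expected cost.

PRIORS = {"chance": 600 / 700, "improvement": 100 / 700}     # .8572 and .1428

# test value -> cause -> (weighted opportunity cost, probability of a wrong decision)
TABLE_27 = {
    231: {"chance": (4.00, .0517), "improvement": (1.3743, .4400)},
    230: {"chance": (4.00, .0367), "improvement": (1.4615, .4600)},
    229: {"chance": (4.00, .0217), "improvement": (1.6126, .5200)},
    228: {"chance": (4.00, .0183), "improvement": (1.7637, .5600)},
}

def expected_opportunity_cost(row):
    return sum(PRIORS[cause] * cost * p_wrong
               for cause, (cost, p_wrong) in row.items())

costs = {tv: expected_opportunity_cost(row) for tv, row in TABLE_27.items()}
lower_control_limit = min(costs, key=costs.get)
for tv in sorted(costs, reverse=True):
    print(f"test value {tv}: expected opportunity cost ${costs[tv]:.4f}")
print("Minimization lower control limit:", lower_control_limit)      # 229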
“I R 174 TABLE 26.-—Decision table for Equalization approach Test Values 232 233 - 234 235 Probability of d .0700 .0917 .1183 .1483 Opportunity Cost Of a $4 $4 $4 $4 Expected Opportunity Cost of a $ -2800 $ .3668 $ .4732 $ .5932 Probability of B .4000 .3600 .3000 .2700 Opportunity Cost Of B $1.2871 $1.1999 $1.1127 $1.0254 EXpected Opportunity, Cost of 8 $ .5148 $ .4320 $ .3338 $ .2768 Decision Reject Reject ’ Accept Accept d = Type I error B = Type II error 175 Since improvement is the only assignable cause of concern, the prior probabilities are.composed only of chance and improvement.. Out of the 1000 original perfor— mances, 600 were due to chance and 100 to improvement; therefore 600/700 = .8572 and 100/700 = .1428 respectively represent the prior probabilities for chance and improvement. The weighted Opportunity cost, the probability of a wrong decision and the conditional average opportunity cost are the same for chance as the Opportunity cost Of a Type I-error, the probability Of a Type I error, and the expected Opportunity cost Of a Type I error respectively-— all figures that were develOped under the Equalization approach. For improvement the weighted Opportunity cost, the probability of a wrong decision, and the conditional average Opportunity cost are.the same as the respective figures developed under the Equalization approach for the Opportunity cost of a Type II error, the probability Of a Type II error, and the expected opportunity cost of a Type II error. The weighted Opportunity cost of a Type II error for test value 230 is $1.4615. This same value is listed in Table 27 under improvement for test value 230. The weighted Opportunity cost for improvement for test value 225 is $2.2170. The weighted Opportunity costs for test values 228 and 229 are found by interpolating between $1.4615 and $2.2170. 176 TABLE 27.--Decision table for Minimization approach Prob. Of Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost Op. Cost Test Value 231 Chance .8572 $4 .0517 .2068 Improvement .1428 1.3743 .4400 .6047 1.0000 $.2636 Test Value 230 Chance .8572 $4 .0367 .1468 Improvement .1428 1.4615 .4600 .6723 1.0000 $.2218 Test Value 229 Chance .8572 $4 .0217 .0868 Improvement .1428 1.6126 .5200 .8386 1.0000 $.1942 Test Value 228 Chance .8572 $4 .0183 .0732 Improvement .1428 1.7637 .5600 .9877 1.0000 $.2038 177 The expected opportunity costs are calculated in the same manner described in conjunction with Table 21. Comparison of Lower Control Limits Among the MethOds The control limits that have just been calculated under each Of the approaches are summarized below: Approach Lower Control Limit Accountant's Conventional 220 Basic Control Chart 229-230 Bierman, Fouraker, and Jaedicke First Interpretation 224-225 Second Interpretation 222-223 McMenimen Indeterminate Equalization 233—234 Minimization 229 Financial Analysis and Ranking The analysis takes the same general form as the analysis Of the upper control limits. The approaches are paired Off in twos and an evaluation is made to the extent necessary to rank the approaches. The only difference is that Of the two approaches under consideration at any time the one yielding the higher lower control limit will in— volve larger total investigation costs while in the analy- sis of the upper control limit, the lower of the approaches being compared involved the larger investigation costs. 
As 178 a general rule, it is safe to say that the control limits closest to the standard involve higher investigation costs than those farther away. It is also always true that the control limit involving the greatest investigation costs will detect some off-standard conditions sooner than the other. For example, in the analysis between the Equaliza— tion approach (with a lower control limit between 233 and 234) and the Accountant's Conventional approach (with a lower control limit Of 220) the Equalization approach will spend $4 investigating each of 54 chance performances be— tween 221 and 233 inclusive (itemized in Table 18) that the Accountant's Conventional approach would not investi— gate. This is an extra cost of $216. For this extra in— vestigation cost, the Equalization approach would detect 47 improved performances at a saving of $4.127310 each. The total savings of $193.9831 is less than the extra cost Of $216 so the Accountant's Conventional method yields more profitable results in this situation. In an analysis betWeen the Accountant's Conventional approach and the Basic Control Chart approach, it is found that the Basic Control Chart approach leads to the investi— gation of 12 chance performances between 221 and 229 inclu- sive for an extra investigation charge Of $48. This approach 10This is the Opportunity cost of a Type II error weighted for performance value 220. — I: 179 will, however, detect 31 improved performances at a savings Of $4.1273 each for a total savings of $127.9463. Since the savings is greater than the charge, the Basic Control Chart approach is better than the Accountant's Conventional approach. Table 28 summarizes the results Of all comparisons. TABLE 28.——Financial comparisons between approaches Added Inv. Added Savings Most Approaches Cost Of Of Effective Tested LCL Higher Higher LCL Approach Equal AC $216 $193.98 AC AC BCC 48 127.95 BCC BFJ 2nd AC 4 33.02 BFJ 2nd BFJ lst BFJ 2nd 8 20.18 BFJ lst BFJ lst BCC and 40 , 44.38 BCC and Min Min The Basic Control Chart approach must share honors with the Minimization approach since the investigative de— cisions are the same under either. Figure 9 depicts the results Of this analysis in diagram form. FIGURE 9.--Outcomes of the financial comparisons BFJ 2nd BFJ lst BCC and Min 180 From this diagram, the following ranking emerges: Approach Rppk BCC 1.5 Min 1.5 BFJ lst 3 BFJ 2nd 4 AC 5 Equal 6 McM 7 Those approaches yielding indeterminate results are given the highest ranking. Derivation and Financial Analysis Of Upper Control Limits for Single Observations——Every Tenth Performance Tested Introduction So far the coverage in this Chapter has assumed that each performance is tested. In some cases this is a realistic assumption. The worker can be trained to com- pare the results of each performance with the control limits and tO report cases in which his performance falls outside Of the control limits. Of course, the complete success of this procedure depends upon the cooperation Of the worker. Another procedure may involve the foreman's testing every n single performances. "N" may vary depend- ing upon the number of workers per foreman, the length of II. it 181 time needed to complete a task, and the degree of control already attained. In the following illustration, n is assumed tO be ten. Accountant's Conventional Method Accountants have not made a distinction between testing every performance or testing every nth performance in conjunction with the selection of control limits. 
Therefore, consistent application Of the "ten per cent rule" results in the same upper control limit, 270, that resulted when every performance was investigated. Basic Control Chart Approach This approach like the Accountant's Conventional method also refrains from making a distinction between testing every performance and testing every nth perfor— mance. Accordingly, the upper control limit with a two— tailed .05 level of significance remains between 260 and 26l——the same interval that was found when each perfor— mance was tested. Bierman, Fouraker, and Jaedicke Approach First Interpretation Of P. Table 29 shows that the upper control limit with this interpretation Of P is 250. For this value P and PC are equated. Some explana— tion Of the values in the table is given in Appendix B. 182 TABLE 29.—-Decision table for BF and J application. First ' interpretation of P Test Value L C PC P Decision 255 $20 $5 .7500 .2966 Reject 251 12 5 .5833 .4600 Reject 250 10 5 .5000 .5000 Indifferent 249 8 5 ' .3750 .5300 Accept 248 6 5 .1670 .6334 Accept Second Interpretation of P. Table 30 shows that the upper control limit according to this interpretation is between 254 and 255; whereas, under the first interpre— tation it was 250. The only difference between Tables 29 and 30 lies in the calculation Of P. This difference is explained in some detail in Appendix B. TABLE 30.——Decision table for BF and J application. Second interpretation Of P Test Value L C PC P Decision 260 . $30 $5 .8330 .6428 Reject 255 20 5 .7500 .6207 Reject 254 18 5 .7222 .7620 Accept 253 16 5 .6875 .7647 Accept McMenimen Approach To start, a test will be made to see if it is prof— itable to spend $1 investigating a performance value of 260 for dull knives. The information necessary for an investi— gation decision appears in Table 31. The $49.9625 savings figure is different than the $4.625 used in Table 19 because 183 Of the differences in the frequency Of testing. The $1.25 (219—%63£§ X 3 where 270 is the mean Of the dull knive per- formances) single performance saving is still multiplied by four—-the arbitrarily selected number of tested perfor— mances which lapse on the average before an assignable cause is detected. The $5 result must further by multi— plied by 10 because under this testing procedure on the average only one performance out Of ten is tested. The $49.9625 savings is found by subtracting the prorated per- formance cost of sharpening the knives from the $50 Oppor— tunity cost. In making investigative decisions where every performance was tested the $1.50 cost of sharpening was spread over the four performances which allegedly lapse on the average before detecting the dull knives. Similarly, one may argue that the $1.50 should be spread over the 40 (10 X 4) performances that are said tO lapse before dull knives are detected. This averaging results in a $.0375 (1430) cost per performance which when subtracted from the $50 leaves the $49.9625 saving. TABLE 31.—-App1ication Of McMenimen technique Spend Up tO $1 Investigating For Dull Knives Event Pe Cond. Exp. Test Value 260 Save $0 .7143 S—l $ -.7143 Save $49.9625 .2857 48.9625 13.9886 Expected Savings 513.2743 In It 184 The probabilities and the mechanics Of determining the expected savings are both found in the same way as. that discussed in conjunction with Table 19. Since the expected savings is positive, it is worthwhile to spend at least $1 investigating for dull knives. 
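The dollar figures in Table 31 can be reproduced directly from the quantities just described. The short Python fragment below is purely illustrative and not part of the original analysis; the wage rate, dull-knife mean, testing lag, and probabilities are those given in the text and in Table 31, and the names are chosen only for readability.

# A sketch of the Table 31 test: is it worth $1 to check for dull knives at a
# performance of 260 minutes when only every tenth performance is tested?

WAGE_PER_MINUTE = 3 / 60              # butchers earn $3 per hour
MEAN_DULL_KNIVES = 270                # mean of the dull-knife performances
STANDARD = 245
TESTS_BEFORE_DETECTION = 4            # assumed average number of tests before detection
PERFORMANCES_PER_TEST = 10            # every tenth performance is tested
SHARPENING_COST = 1.50                # spread over the 40 performances that lapse

single_perf_cost = (MEAN_DULL_KNIVES - STANDARD) * WAGE_PER_MINUTE                  # $1.25
weighted_cost = single_perf_cost * TESTS_BEFORE_DETECTION * PERFORMANCES_PER_TEST   # $50.00
savings_if_found = weighted_cost - SHARPENING_COST / (TESTS_BEFORE_DETECTION * PERFORMANCES_PER_TEST)  # $49.9625

investigation_cost = 1.00
p_dull_knives = 0.2857                # probability quoted in Table 31
p_nothing = 1 - p_dull_knives         # .7143

expected_savings = (p_nothing * (0 - investigation_cost)
                    + p_dull_knives * (savings_if_found - investigation_cost))
print(f"expected savings of the $1 investigation at 260: ${expected_savings:.4f}")  # about $13.27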
Decision tables like Table 31 for all performance values for which dull knives have been observed yield positive expected savings. This means that at least $1 would be spent on an investigation for performance values of 250 or more for dull knives.11 This is less than the 259-260 control limit determined under the McMenimen approach when all performances were tested. The reason for the decrease in the control limit is that it becomes more costly to fail to find an assignable cause when only one performance out of ten is tested. In McMenimen's terminology, more can be saved by detecting an assignable cause when only one-tenth of the performances are tested.

11Table 18 shows that dull knives have not resulted in performance times of 251, 252, or 254, so these may not be investigated for dull knives. The investigator may investigate them anyway, however, because there is no reason why these values could not be produced with dull knives. Just because they were not observed in the original 1000 values does not mean they could not occur in a larger population. The original 1000 values provide guidelines for decision making. These guidelines may be altered as a result of later observations or well-founded intuition.

To continue the McMenimen application, it is necessary to determine whether or not it is profitable to spend up to $2 to administer the test for laziness and poor attitude. For values which have resulted from illness, it is also important to decide whether it is worthwhile to spend up to $5 to require a physical examination. The model used to make these decisions is depicted in Table 32. In order to avoid confusion with too much arithmetic, the information is presented only for certain test values.

Table 33 shows how the savings values are determined. The single performance opportunity cost is calculated in the usual manner by the formula (X̄a - 245)/60 x $3, where X̄a stands for the mean of the corresponding assignable cause. The multiplication weight is the result of multiplying the four performances that lapse on the average before an assignable cause is detected by ten. (Every tenth performance is tested.) The weighted opportunity cost results from multiplying the single performance opportunity cost by the multiplication weight. The savings values are, of course, determined by subtracting the correction cost per performance from the weighted opportunity cost. The cost of correction per performance is an estimated figure. Surprisingly, rather large changes in these figures do not produce drastic changes in the control limits.

TABLE 32.--Application of McMenimen technique

[The body of Table 32, printed sideways across two pages in the original, is illegible in this copy. For selected test values it lists, under each act--spend up to $1 investigating for dull knives, spend up to $2 investigating for poor attitude and laziness, and spend up to $5 investigating for illness--the possible events with their probabilities (Pe), conditional savings, and expected savings.]
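The savings figures in Table 33 below follow directly from the formula just described. The short Python fragment that follows is illustrative only and is not part of the original study; the single-performance opportunity costs and correction costs are the ones reported in Table 33, and the names are chosen for the sketch.

# A minimal sketch of the Table 33 computation:
#   savings = single-performance opportunity cost x multiplication weight
#             - correction cost per performance,
# where the single-performance cost is (mean of the assignable cause - 245)/60 x $3.

MULTIPLICATION_WEIGHT = 4 * 10    # 4 tests lapse before detection; every 10th performance is tested

# cause: (single-performance opportunity cost, correction cost per performance), from Table 33
causes = {
    "poor attitude": (0.50, 0.25),
    "laziness":      (1.50, 0.25),
    "illness":       (1.00, 0.50),
}

for cause, (single_cost, correction) in causes.items():
    weighted = single_cost * MULTIPLICATION_WEIGHT
    savings = weighted - correction
    print(f"{cause:14s} weighted ${weighted:6.2f}   savings ${savings:6.2f}")
# poor attitude: $20.00 and $19.75; laziness: $60.00 and $59.75; illness: $40.00 and $39.50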
TABLE 33.--Derivation of savings values

                    Single       Multi-                     Correction
                    Perf. Op.    plication    Weighted      Cost per
Cause               Cost         Weight       Op. Cost      Performance     Savings
Poor attitude       $ .50        40           $20           $ .25           $19.75
Laziness             1.50        40            60             .25            59.75
Illness              1.00        40            40             .50            39.50

Some discussion of how the Pe's are determined in Table 32 for act "spend up to $2 investigating" might be enlightening. Since only one of the thirty-eight performances with value 249 resulted from poor attitude, 1/38 or .0263 is the probability of saving $19.75. Accordingly, the probability of saving $0 from an investigation to detect poor attitude is 1 - .0263, or .9737. Because the expected savings is negative, an investigation would not be profitable for a performance value of 249.12 The same analysis for all test values 248 and above yields a positive expected savings. Therefore, the upper control limit for the investigation for poor attitude and laziness13 is between 247 and 248.

12The reader will note that the conditional values are only $1 less than the amount of the savings instead of $2 less. This is because no performances due to dull knives have ever occurred for value 249, so the investigator would by-pass checking the knives for sharpness and save $1 in the investigation process. The same is true for test values 248 and 252.

13The reader will note that for test values 259 and 262 laziness is also a possibility. Since it can be detected with the same test administered to detect poor attitude, its savings of $59.75 is shown in conjunction with act "spend up to $2 investigating."

The possibility of investigating for illness presents itself in conjunction with test value 252. Table 18 shows that two of the fourteen values at 252 are due to poor attitude. If the test fails to reveal poor attitude as the cause, the investigator reassesses the probability of finding the cause if he continues his investigation. The original sub-population shows that 12 performances occurred at 252 that were not due to poor attitude. One of these was due to illness and the other eleven to chance. The probability of saving $39.50 by investigating for illness is thus 1/12 or .0833. The probability of saving $0 is 1 - .0833 or .9167. Since the expected savings for this act is negative, it would not be profitable to extend the investigation to the point of a physical examination in conjunction with a performance value of 252. If, however, the incremental approach is employed, Table 34 shows that the expected savings is positive. Consequently, the upper control limit in the incremental sense is 252 for illness.

TABLE 34.--McMenimen technique--incremental application

Investigate for Illness
Event            Pe        Cond.       Exp.
Test Value 252
Save $0          .9167     $-3         $-2.7501
Save $39.50      .0833      36.50        3.0404
Expected Savings                       $+0.2903

The reader will recall from the discussion in conjunction with Table 80 that the incremental analysis considers only the additional opportunity cost of the next investigative step. In this case an investigation for illness is considered only if the investigation for poor attitude fails to reveal that as a cause.
It requires only an additional $3 to investigate for illness since the $1 Opportunity cost Of an investigation for poor attitude is at that point a sunk cost. (Dull knives did not pro— duce a value Of 252.) On the other hand, if the incremental approach is not employed, negative expected savings result for test values 254, 257 (not shown on Table 32) and for 259 (shown) as well as for test value 252. For test value 262, how— ever, the expected saving is positive for an investigation for illness. Accordingly, the upper control limit for illness is 262 if the incremental approach is not applied. 191 Equalization Approach The same type of information that appeared in Table 20 is marshalled in Table 35. All figures are determined in the same way with the exception that the Opportunity cost Of a Type II error for each corresponding test value is ten times larger. This accounts for the fact that ten performances now lapse between tests. Since Type II errors are now more expensive than they were when each performance was-tested it is important that they be made less frequently. For this reasontflmaupper control limit is Closer to the standard. Table 35 indicates that the upper control limit. is between 253 and 254. TABLE 35.--Decision table for Equalization approach Test Values 255 254 253 Probability of a .1483 .1750 .1967 Opportunity Cost Of a $5 $5 $5 Expected Opportunity Cost of a $.7415 $.8750‘ $.9835 Probability of B .1000 .0833 .0700 Opportunity Cost of B $11.331 $11.160 $10.989 Expected Opportunity Cost.of 8 $ 1.1331, $ .9296 $ .7692 Decision Reject Reject Accept a = Type I error 8 = Type II error 192 The process Of interpolation has again been used to find the opportunity costs of committing a Type II error for test values 254 and 253. This interpolation process is explained in Appendix B. Minimization Approach Table 36 shows the expected opportunity costs for various test values. From this, it is apparent that 255 is the upper control limit under the Minimization approach because the expected Opportunity cost for this test value is less than for any other. This table is basically the same as Table 21 except that the weighted opportunity costs for each combination of test value and assignable cause are ten times larger to account for the ten performances that lapse between tests. The weighted Opportunity costs for test values 256 and 257 were derived in Table 87. These costs are shown in Tables 93 and 94 for test values 255 and 254 respectively. For the reason just indicated these weighted costs are multiplied by ten before they are en- tered in Table 36 under the weighted Opportunity cost column. The values for the remainder Of this table are calculated according to the same procedure explained in conjunction with Table 21. TABLE 36.--Decision table for Minimization approach 193 Prob. of Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost Op. 
Cost Test Value 257 Chance .7500 $ 5.000 .0917 $ .4585 Poor attitude .0750 12.309 .6333 7.7953 Illness .0250 12.258 .1500 1.8387 Dull knives .1500 13.065 .0417 .5448 1.0000 $1.0562 Test Value 256 Chance .7500 $ 5.000 .1183 $ .5915 Poor attitude .0750 10.400 .5667 5.8937 Illness .0250 11.885 .1500 1.7828 Dull knives .1500 12.892 .0333 .4293 1.0000 $ .9946 Test Value 255 Chance .7500 $ 5.000 .1483 $ .7415 Poor attitude .0750 8.492 .4167 3.5386 Illness .0250 11.512 .1500 1.7268 Dull knives .1500 12.720 .0167 .2124 1.0000 $ .8965 Test Value 254 Chance .7500 $ 5.000 .1750 $ .8750 Poor attitude .0750 8.019 .3500 2.8066 Illness .0250 11.305 .1000 1.1305 Dull knives .1500 12.706 .0167 .2122 1.0000 $ .9118 194 Comparison Of Upper Control Limits Among the Methods For review purposes the upper control limits that have been derived under the testing plan whereby a test is taken on the average for one out Of every ten performances are listed below: Approach Accountant's Conventional Basic Control Chart Bierman, Fouraker, and Jaedicke First Interpretation Of P Second Interpretation Of P McMenimen Equalization Minimization Financial Analysis and Ranking Upper Control Limit 270 260-261 250 254—255 250 for Dull Knives 247-248 fOr Poor AttitudeAand Laziness 252 for Illness 253—254 255 This analysis is performed in the same manner that was illustrated when every performance was tested. The only difference is one Of degree lying in the computation of the savings associated with the lower Of the two upper control limits being compared at any given time. The op- portunity costs weighted at the higher Of the two limits are ten times higher for this analysis than they would be 195 for the same higher upper control limit when every perfor— mance is tested. The following explanation of the analysis between the Accountant's Conventional approach (yielding an upper control limit of 270) and the Basic Control Chart approach (yielding an upper control limit between 260 and 261) will serve to Clarify this computational distinction.l4 The added investigation cost under the Basic Con- trol Chart method remains at $70 per thousand performances tested (2 X $5 + 10 X 6 = $70). Table 37 shows the same number Of performances between 261 and 269 inclusive for each assignable cause that were shown in Table 22. The only difference is that the opportunity costs weighted at 270 are ten times higher in Table 37 than they were in Table 22. The extra savings under the Basic Control Chart approach is thus ten times higher than when every perfor— mance was tested. Consequently, the Basic Control Chart approach is now even more strongly favored. Table 38 shows the results for the comparison Of the other approaches. l4Under both of these approaches the upper control limits remain the same as they were when every performance was tested. This condition does not hold for the other approaches which consider the opportunity costs of failing to detect a shift in the parameter (i.e., the Opportunity costs Of committing a Type II error). When a test is made only once in every ten performances it is relatively more CQstly to fail to detect a shift. Consequently, this fact serves to reduce the upper control limit. 196 TABLE 37.-—Added savings of basic control Chart method Performances Opportunity Between 261 and Cost Weighted Cause 269 Incl. 
(F) at 270 (C) CF Poor Attitude 12 $125.600 Dull Knives 40 23.058 Tough Cows 2 Illness 4 18.447 Laziness 8 19.900 Lack of Training 1 19.900 Added Savings $2662.508 TABLE 38.—-Financial comparisons between approaches Added Inv. Added Savings Most Approaches Cost Of of Effective Tested Lower UCL Lower UCL Approach AC BCC $ 70 $2,662.51 BCC Equal BCC 460 685.68 Equal Equal McM 148 126.84 Equal AC BFJ lst 672 7,015.57 BFJ lst BCC BFJ lst 625 820.86 BFJ lst Equal BFJ lst 165 69.87 Equal Equal BFJ 2nd BFJ 2nd and Min 80 47.08 and Min McM BFJ lst 15 22.64 McM 197 Figure 10 depicts this analysis in diagramatic form. FIGURE lO.-—Outcomes of the financial comparisons AC BCC BCC Eoual Equal Ecual McM E-ual BCC Min and BFJ 2nd AC BFJ lst BFJ lSt Min and BFJ 2nd BFJ lst 7 McM McM BFJ lst The following ranking now becomes Obvious: Approach Rank Min 1.5 BFJ 2nd 1.5 Equal 3 McM 4 BFJ lst 5 BCC 6 AC 7 198 Derivation and Financial Analysis Of Lower Control Limits for Single Observations-~Every Tenth Performance Tested Accountant's Conventional Method The 10 per cent rule does not distinguish between tests Of every ten performances and tests of every perfor- mance. Accordingly, the lower control limit remains at 220 [245 — (.10)245]——the same as it was when every per— formance was tested. Basic Control Chart Approach Neither does this method distinguish between tests of every performance and tests of every tenth performance. Consequently, the lower control limit for tests Of every tenth performance assuming a two—tailed .05 level of sig— nificance will be between 229 and 230——the same interval into which it fell when each performance was tested. Bierman, Fouraker, and Jaedicke Approach First Interpretation of P. Table 39 shows that the lower control limit under this approach is between 240 and 241. The values used in this table were derived in the same manner as those in Table 24 with the exception that the L values are ten times higher for each corresponding test value. Symbolically, L is now equal to X - 245 60 x $3 X 4 X 10 199 where: X = the test value 245 = the standard §_%U§£§ = the fraction Of an hour between the standard and the test value $3 = hourly wage rate Of butchers 4 = assumed average number Of tests that must be made before an assignable cause is detected 10 = number of performances that lapse between tests. This factor did not pertain to the L values in Table 24. TABLE 39.--Decision table for BF and J application. First interpretation of p Test Value L C PC P Decision 240 $10 $4 .60 .50 Reject 241 8 4 .50 .55 Accept 242 6 4 .33 .63 Accept Second Interpretation of P. Table 40 is constructed in the same way as Table 39 except that P in Table 40 fol- lows the second interpretation. The lower control limit according to this interpretation is between 235 and 236. TABLE 40.--Decision table for BF and J application. Second interpretation Of P Test Value L C Pc P Decision 234 $22 $4 .8182 .7273 Reject 235 20 4 .8000 .6000 Reject 236 18 4 .7778 .8889 Accept 237 16 4 .7500 .8125 Accept 200 McMenimen Approach The information presented in Table 41 shows that the McMenimen lower control limit under this approach is between 235 and 236. The savings of $29.75 is found by subtracting the $.25 per performance cost Of correction15 from the $30 Op- portunity cost. The $30 is determined as follows: 230 -.245 60 proved performances and the other numbers have the same X $3 X 4 X 10 where 230 is the mean of the im- meanings discussed in conjunction with Table 39. 
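The lower-limit test under this plan parallels the upper-limit test illustrated earlier. The following Python fragment is again only an illustration and not part of the original analysis; the $29.75 savings comes from the computation just described, the probabilities are those reported in Table 41 below, and the names are invented for the sketch.

# A sketch of the McMenimen lower-limit test when every tenth performance is
# tested: the expected savings of a $4 investigation for improvement changes sign
# between test values 235 and 236, which places the lower control limit between
# those values.

savings = (245 - 230) / 60 * 3 * 4 * 10 - 0.25     # $29.75 after the $.25 correction cost
investigation_cost = 4.00

p_improvement = {235: .1364, 236: .1111}           # probabilities reported in Table 41

for test_value, p_imp in p_improvement.items():
    expected = ((1 - p_imp) * (0 - investigation_cost)
                + p_imp * (savings - investigation_cost))
    print(f"test value {test_value}: expected savings ${expected:+.4f}")
# prints roughly +$0.06 for 235 and -$0.69 for 236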
The probabilities are determined in the same man— ner as the probabilities that were used to determine the upper control limits under the McMenimen approach in Table 19. That is, since three Of the 22 performances at value 235 (see Table 18) were due to improvement, the probability of detecting improvement and thus saving $29.75 by an in— vestigation is 3/22 or .1364. The probability that nothing will be saved by the investigation is 1 - .1364 or .8636. Since the expected value is positive for test value 235 an investigation would be undertaken. On the other hand, it would be unprofitable tO investigate a performance Of 236 because the expected savings is negative. Consequently, the lower control limit is between 235 and 236. 15This $.25 represents the raise that would be given the butcher when his improvement is recognized. 201 TABLE 41.-—App1ication Of McMenimen technique Spend Up to $4 Investigating Event Pe Cond. Exp. Test Value 235 Save $0 .8636 $-4 $-3.4544 Save $29.75 .1364 25.75 +3.5123 $+ .0579 Test Value 236 Save $0 .8889 $-4 $-3.5556 Save $29.75 .1111 25.75 +2.8608 $— .6948 Equalization Approach Table 42 shows that the lower control limit under this approach is between 241 and 242. The procedure-fol— lowed in constructing the table is exactly the same as that discussed in conjunction with Table 26. In Table 42 the actual Opportunity costs Of committing a Type II error are calculated rather than interpolated because the cal- culations for these test values are not lengthy. Q a 202 TABLE 42.--Decision table for Equalization approach Test Values 240 241 242 Probability of a .2500 .2750 .3167 Opportunity Cost Of a $4 $4 $4 Expected Opportunity Cost Of a $1.0000 $1.1000 $1.2668 Probability of B .15 .13 .12 Opportunity Cost of B $8.810 $8.614 $8.510 Expected Opportunity Cost Of B $1.3215 $1.1198 $1.0212 Decision Reject Reject Accept d = Type I error '03 II .Type II error Minimization Approach Under this approach the lower control limit is 234 because the expected Opportunity costs develOped in Table 43 are less for this test value than for any other. The values in this table are Obtained by the same procedure followed in Table 27 in which the Minimization lower con— trol limit was derived for the situation in which every performance was tested. For previously explained reasons, the weighted Opportunity costs associated with improvement are ten times higher in Table 43 than they would be for a corresponding test value in Table 27. If the reader wishes he can verity that the weighted Opportunity costs for im- provement for test values 233, 234, and 235 respectively 203 are exactly ten times more in Table 43 for each correspond- ing test value than they were in Table 26 for the opportunity cost of a Type 11 error. (The Opportunity costs Of a Type II error in Table 26 were derived in this same manner as the weighted Opportunity costs Of improvement in Table 27d TABLE 43.-—Decision table for Minimization approach Prob. of Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost Op. 
Cost Test Value 233 Chance .8571 $ 4 .0917 .3668 Improvement .1429 11.999 .3600 4.3196 3 .9316 Test Value 234 Chance' .8571 $ 4 .1183 .4732 Improvement .1429 11.127 .3000 3.3380 $ .8826 Test Value 235 Chance .8571 $ 4 .1483 .5932 Improvement .1429 10.254 .2700 2.7686 $ .9041 Comparison Of Upper Control Limits Among the Methods The lower control limits just developed under this testing plan are listed on the next page: 204 Approach Lower Control Limit Accountant's Conventional 220 Basic Control Chart 229-230 Bierman, Fouraker, and Jaedicke First Interpretation of P 240-241 Second Interpretation Of P 235-236 McMenimen 235-236 Equalization 241-242 Minimization 234 Financial Analysis and Ranking Table 44 shows the results for all the appropriate comparisons. TABLE 44.-—Financial comparisons between approaches t — Added Inv. Added Savings Most Approaches Cost of Of Effective Tested Higher LCL Higher LCL Approach AC Equal $656 $2,889.11 Equal Equal BFJ lst 60 17.23 BFJ lst BFJ lst BCC 548 540.66 BCC McM and McM and B . CC BFJ 2nd 304 365 38 BFJ 2nd - McM and . M in BFJ 2nd 72 30.76 Min The results of these comparisons are depicted in Figure 11. 205 FIGURE ll.--Outcomes Of the financial comparisons AC E ual E ual BFJ lst BFJ lst BCC BCC \\MCM(BFJ 2nd) MCM(BFJ 2nd)/ Min Min From these comparisons, the following ranking emerges: Approach Rank Min 1 McM 2.5 BFJ 2nd 2.5 BCC 4 BFJ lst 5 Equal 6 AC 7 Derivation and Financial Analysis Of Upper Control Limits--Sample Size Five—-Every Performance Included in a Sample Every performance is included in this testing plan. However, instead of comparing each performance against con— trol limits developed for individual performances, the analyst groups the performances by fives and computes the mean of each group. These sample means are then compared With control limits developed for this testing plan. The derivation Of these limits under each Of the approaches to b ‘ o u u o 0 e conSidered is explained in this section. ‘—— 206 Accountant's Conventional Method In selecting control limits accountants have not made a distinction between testing individual performances and sampling groups of performances. Consistent applica- tion Of the "ten per cent rule" results in the same upper control limit, 270, that resulted when every performance or every nth performance was investigated. Basic Control Chart Approach With the consistent use of the .05 level Of signif- icance with a two-tailed test, the upper control limit would be that value which is exceeded by 2.5 per cent Of the sample means of five randomly selected chance perfor— mances. It is assumed that the distribution Of sample means is Student-t distributed for sample sizes less than 30. The Student—t Distribution is symmetrical but flatter than the normal distribution. The upper control limit is found by solving the following formula: UCL - 245 or t: where: t represents the number Of standard deviation units between the standard and the upper control limit—~ that yet unknown value which is exceeded by 2.5 per cent Of all sample means Of size five. UCL is the upper control limit. 207 245 is the mean of the distribution Of chance per- formances OX.represents the standard error Of the mean which is the standard derivation of all possible random sample means Of size five that could be drawn from the distribution Of Chance performances. 
The value for t can be found in a Student-t Distri- bution which is given in most statistics textbooks.i The value in this case is 2.776 corresponding to four degrees Of freedom (the sample size, 5, less one) and .025. The standard error of the mean is found by solving the follow- ing formula: where: o is the standard deviation of Chance performances which is 7.7846 n is the sample size which is 5 therefore: 0‘_ = 2.2.2.8...4—6. ..-: 3.8923 X /B‘?‘I The upper control limit is thus 2 UCL - 245 c- X 2.776 = UCL - 245 3.8923 208 UCL - 245 10.8050 UCL 255.8050 Bierman, Fouraker, and Jaedicke Approach First Interpretation Of P. Decision Table 45 shows the upper control limit to be between 250 and 251. In this table, "L" represents the single performance opportunity cost multiplied by twenty. As in previous applications Of the Bierman, Fouraker, and Jaedicke approach the single performance Opportunity cost is determined by the follow- ing formula: X - 245 60 x $3 where X represents the test value. The twenty is the re— sult Of multiplying the sample size of five by the four tests that these writers assume must be made on the average before an Off-standard condition can be detected. "C" rep- resents the cost of an investigation which is a constant in the Bierman, Fouraker, and Jaedicke system. Again, this Cost is assumed to be $5 rather than $6 since there is nO reason to investigate values in this range for tough cows because a tough cow never produced a value lower than 262. The reader will recall that the PC values result from appli— cation Of the following formula: "P" is the probability of Obtaining a sample mean (from the 209 five randomly selected Chance performances) at least as high as the test value given that the.test value is un- favorable (i.e. greater than 245). For test value 252, "P" isdetermined by: 1. Finding the number of standard error of the mean between the standard and the test value. _252 - 245 Z = 3.8923 ='1.7984 2. Using the table Of normal curve areas16 to convert the Z value into the area between the standard and the test value. In this case the area is .4641. 3. Subtracting this area from .5 to find the area larger than the test value. This area .0359 (.5 — .4641) may be interpreted as the probability that a sample mean will be at least as large as 252. 4. Dividing .0359 by .5 to Obtain .0718—-the prob- ability that a sample mean will be at least as large as 252 given that the test value is unfavor— able. This is simply a matter of limiting the sample space tO only one-half the curve. l6Actually since the sample size is under 30 it would be more appropriate to use the Student-t Distribution. However, most Student-t Distributions are.not sufficiently detailed to provide the areas for all t or Z values. The normal distribution is used for convenience and because it provides a good approximation to the area. Moreover, it is customary for quality control engineers to use the nor— mal distribution for sample sizes of five for the same reason indicated above. 210 The same process is followed to determine P for the other test values. The reader will recall that a decision to accept the hypothesis and refrain from an investigation will be made as long as P is greater than PC. Contrariwise, when P becomes smaller than PC the hypothesis is rejected and an investigation is initiated. Accordingly, Table 45 indicates that the upper control limit calculated by this approach is between 250 and 251. TABLE 45.--Decision table for BF and J application. 
First interpretation of P Test Value L C PC P Decision 250 $5.05 $5 .0099 .2006 Accept 251 6.00 5 .1667 .1236 Reject 252 7.00 5 .2857 .0718 Reject 253 8.00 5 .3750 .0394 Reject In dealing with testing plans involving individual performances it is sufficient to indicate a control limit (as lying between two whole numbers. In this case, it is understood that an investigation will not be undertaken for the occurrence Of a performance value closer to the standard; but that one will be undertaken for the occur— rence of a value farther from the standard. NO more pre- cision is needed because the preformances are not recorded in fractions Of a minute. 211 However, in testing plans involving samples, the sample mean will rarely be a whole number. Therefore, it is necessary to pinpoint the control limit more exactly. This can be done by the following process of interpolation. The difference between PC and P for test value 250 is .1907. The corresponding difference for test value 251 is .0431. The sum of these differences is .2338. The ratio .0431/ .2338 indicates that only 18 per cent Of the total difference is accounted for by test value 251. That is, P and PC come much closer to being equated at 251 than at 250. Therefore, the control limit is much closer to 251 than to 250 and it can-be determined-by subtracting .18 from 251. Thus the upper.control limit is 250.82. Second Interpretation Of P. The values in Table 46 depicting the second interpretation Of P are determined in the same manner as the values in Table 45 except for P. "P" according to the second interpretation represents the probability that a sample mean equal to the given test value is due to chance. These probabilities can be esti- mated by a round about process shown in Table 47. TABLE 46.——Decision table for BF and J application. Second interpretation Of P L Test Value L C PC P Decision 252 $ 7 $5 .2857 .7330 Accept 253 8 5 .3750 .5807 Accept 254 9 5 .4444 .4307 Reject 255 10 5 .5000 .3684 Reject 212 In this table the conditional probabilities rep- resent the probability of obtaining a sample mean exactly equal to the test value for each respective cause. These probabilities are obtained by the method of normal curve approximation. For example, .0207 is the probability of Obtaining a sample mean Of 252 from five chance perfor- mances randomly selected. The value .0207 represents the area under the normal curve between 251.5 and 252.5 with mean 245 and standard error Of the mean 3.8923.l7 Simi- larly, .0754 is the probability Of Obtaining a sample mean Of exactly 252 from five randomly selected perfor— mances due to poor attitude. The value .0754 represents the area under the normal curve between 251.5 and 252.5 with mean 255 and standard error Of the mean 4.0187. (Table 17 shows that the mean of the performances due to poor attitude is 255.) The standard error Of the mean, div is calculated from the distribution in Table 2 by the following formula: 0' Vn - 1 OX: It would be impossible tO Obtain a mean Of 252 from a sample Of five performances due to any of the other causes. The conditional probabilities for the other test values are interpreted in a similar manner. 17The calculation Of 3.8923 was just explained in Conjunction with the Basic Control Chart approach accord— lng to this testing plan. 213 TABLE 47.--Determination of P's Prob. of Cause Cond. Number Number Of Given Occurrence Cause Prob. of Perf. 
Means of Test Value Test Value 250 Chance .0477 600 28.6200 .9119 Poor Attitude .0461 60 2.7660 .0881 31.3860 1.0000 Test Value 251 Chance .0318 600 19.0800 .8395 Poor Attitude .0608 60 3.6480 .1605 22.7280 1.0000 Test Value 252 Chance .0207 600 12.4200 .7330 Poor Attitude .0754 60 4.5240 .2670 16.9440 1.0000 Test Value 255 Chance .0057 600 3.4200 .3684 Poor Attitude .0956 60 5.7360 .6178 Illness .0064 20 .1280 .0138 9.2840 1.0000 214 The column in Table 47 labeled "number Of perfor- mances" represents the number Of performances attributed to each respective cause that occurred in the 1000 initial Ob- servations. The numbers are listed in Table 17 and their detail is shown in Table 18. They are used here only as weights. That is, the values in the column labeled "num- ber of means" are the result Of multiplying the conditional probabilities by the number of performances for each re- spective cause. The reason for weighting the conditional probabilities in this manner follows. The conditional probability column indicates that the probability of Ob- taining a sample mean of exactly 252, for example, is more than three times greater if poor attitude is the cauSe than if Chance is Operative. However the probability that Chance is Operative is ten times (600/60) the probability that poor attitude is Operative. Consequently, the ratio 12.42/16.944 = .7330 indicates the probability that a sam— ple mean Of 252 is due to Chance (uncontrollable factors). This probability, .7330, is the P used for test value 252 in Table 46. This decision table shows that the upper control limit according to this interpretation of P is between 253 and 254. The same process of interpolation that was applied for the first interpretation Of P, establishes the control limit at 253.94. 215 McMenimen Approach Use of this approach brings to light a very impor— tant consideration as far as the investigative procedure is concerned. From these calculations it appears that it is not always advantageous to begin by investigating for dull knives. In an attempt tO find the upper control limit for a dull knife investigation, 250 is chosen first as a test.value. Table 47, however, shows that a sample mean as low as 250 would never occur if dull knives were used. In fact, the same table indicates that a sample mean as low as 255 has never been Observed with dull knives. By continuing this procedure for other test values, one would find that a random sample of five dull knife performances would rarely produce a sample mean less than 259. At the same time one would never Obtaina sample mean from Chance performances as high as 258. Thus, this is an ideal situa— tion in which the chance sampling distribution does not overlap with the dull knives sampling distribution. Ac- cordingly, by setting the dull knife upper control limit at 259, one can eliminate both the risks of committing a Type I and a Type II error. Therefore, it would not be profitable to investigate for dull knives until a sample mean Of 259 appeared. The McMenimen test does, however, indicate that it would be profitable to investigate sample means as low as 251 for poor attitude. Table 48 shows the derivation 216 Of the savings values for both poor attitude and illness. Laziness would be detected by the same test used for poor attitude but savings values for it are not indicated since a sample Of these performances would never produce a mean in the 250 range. Essentially, this table is the same as Table 33 except that the multiplication weight is now 20 instead Of 40. 
As the reader will recall, 20 results from multiplying the sample size of 5 by the 4 tests that are assumed to lapse on the average before an off-standard condition is detected.

TABLE 48.--Derivation of savings values

                    Single       Multi-                     Correction
                    Perf. Op.    plication    Weighted      Cost per
Cause               Cost         Weight       Op. Cost      Performance     Savings
Poor Attitude       $ .50        20           $10           $.25            $ 9.75
Illness              1.00        20            20            .50             19.50

The probabilities indicated in Table 49 are the same for each corresponding test value as those derived in Table 47. For test value 250, the probability of saving $0 from an investigation for poor attitude is .9119--the probability that the items sampled were drawn from a population of chance performances. The probability that $9.75 can be saved is, of course, .0881--the estimated probability that poor attitude is prevailing. The application table indicates that the upper control limit for poor attitude is between 250 and 251. Interpolation between these values results in an upper control limit of 250.2. This value was determined by the following technique. At an expected savings of zero, the analyst would be just indifferent between investigating and not investigating. For test value 250, the expected savings is .1410 from zero; for 251 it is .5649 from zero. Hence, the control limit is closer to 250. The sum of these differences is .7059. The ratio .1410/.7059 = .20 is the amount which should be added to 250 to arrive at the upper control limit of 250.2.

The possibility of investigating for illness is considered for test value 255. The probabilities are determined in the following manner. An investigation would be made first for poor attitude, and an investigation for illness would be considered only if poor attitude was first eliminated as a possible cause. Therefore, the relevant probabilities associated with act "investigate for illness" are found by reference to the "number of means" column in Table 47. The probability that $0 can be saved is 3.4200/3.5480 or .9639, and the probability that $19.50 can be saved is .1280/3.5480 or .0361, where 3.5480 is 3.4200 plus .1280. (The 5.7360 value for poor attitude has been eliminated.) The negative expected value indicates that it would not be profitable to investigate for illness for a sample mean as low as 255.

TABLE 49.--Application of McMenimen technique

[The body of Table 49, printed sideways in the original, is illegible in this copy. For selected sample-mean test values in the 250's it lists, under the acts of investigating for poor attitude and for illness, the possible events with their probabilities (Pe), conditional savings, and expected savings.]

The same test applied for
Egualization approach Table 50 indicates that the upper control limit under this approach is between 250 and 251. The following interpolation process is employed. For test value 250, the difference between the expected Opportunity cost of d and the expected Opportunity cost Of 8 is .0401. The cor— responding difference for test value 251 is .0342. The sum Of these differences is .0743. The ratio .0401/.0743 = .54 when added to 250 yields 250.54 as the upper control limit. The derivation Of the individual values shown in Table 50 is eXplained in Appendix B. 220 :TABLE 50.-—Decision table for Equalization approach Test Values 251 250 Probability Of d .0618 .1003 Opportunity Cost Of a $1 $1 Expected Opportunity Cost of a $ .0618 $ .1003 Probability of B .0322 .0215 Opportunity Cost of B $2.9800 $2.7990 Expected Opportunity Cost Of 8 $ .0960 $ .0602 Decision Reject Accept d = Type I error 8 = Type II error Minimization approach Table 51 indicates that 252 is the upper control limit because the expected opportunity cost.is less for this test value than for any other. It has previously been established that only poor attitude could produce a sample mean in the low 250's——the range in which Check tests indi— cate that the control limit will fall. Accordingly, only Chance and poor attitude are considered in the prior dis— tribution. Since 600 of the initial 1000 performances were due to Chance and 60 to poor-attitude the prior probabili- ties Of .9091 and .0909 result from the ratios 600/660 and 60/660 respectively. The weighted Opportunity cost for a chance cause is $l--the cost of a Type I error-associated with the 221 investigation for poor attitude. The weighted Opportunity costs for poor attitude are the same for each test value as the Opportunity costs of committing a Type II error that were used in the Equalization approach. Similarly, the probabilities Of a wrong decision are the same for Chance cause as the probabilities of com— mitting a Type I error that were used for each test value in the Equalization approach. These probabilities for poor attitude for each test value are the same as the prob— abilities Of Obtaining a sample mean less than the test value. Such probabilities are determined, as the reader will recall, by finding the area that is less than the test value under the normal curve with mean 255 and standard error Of the mean 4.0187. The probabilities Of a wrong decision used here for poor attitude are not the same for each respective test value as the probability Of.a Type II error under the Equalization approach. The probability Of a Type II error was calculated by adding the averaging step Of multiplying by 60 and dividing by 300. NO such averaging step is added here since one Of the distinctive features Of the Minimization approach is that it deals in— dividually with each assignable cause possibility. As with previous illustrations Of the Minimization approach the conditional Opportunity costs result from multiplying each weighted opportunity cost by its respective probability Of 222 a wrong decision. Also, the eXpected Opportunity cost for each test value results frOm a summation Of the products Of the conditional Opportunity cost and the prior probabil- ity for each cause. TABLE 51.-—Decision table for Minimization approach Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost Op. 
Cost Test Value 253 Chance .9091 $1 .0197 $ .0197 Poor Attitude .0909 3.6260 .3121 1.1317 1.0000 $.1208 Test Value 252 Chance .9091 $1 .0359 $ .0359 Poor Attitude .0909 3.2575 .2266 .7313 1.0000 $.099l Test Value 251 Chance .9091 $1 .0618 $ .0618 Poor Attitude .0909 2.9800 .1611 .4801 1.0000 $.0998 Test Value 250 Chance- .9091 $1 .1003 $ .1003 Poor Attitude .0909 2.7990 .1075 .3009 1.0000 $.1185 223 Comparison of Upper Control Limits Among the Methods The control limits that have just been derived are itemized below: Approach ‘ Upper Control Limit Accountant's Conventional 270 Basic Control Chart 255.8 Bierman, Fouraker, and Jaedicke First Interpretation Of P 250.82 Second Interpretation Of P 253.94. McMenimen 250.2 for poor attitude 255.94 for illness (incremental) Equalization 250.54 Minimization 252 Financial Analysis and Rankipg Under the sampling plans the approaches will con- tinue to be analyzed by twos and ranked in preferential order according to their desirability for control purposes. Slightly different mechanics are necessitated by the intro— duction Of sampling. It is not possible to count directly from Table 18 the number Of sample means that are likely to fall within a specified range of values per thousand samples for any specified cause. In making the financial analysis for control limits based on individualperfor— mances, it was a simple procedure tO count from Table 18 the number of individual performances falling within a specified range for any specified cause. Table 18 does, 224 however, provide the necessary information to make these calculations for sample means. The following analysis will illustrate the technique. It involves a comparison between the Accountant's Conventional approach and the Basic Con- trol Chart approach. The Basic Control Chart approach will incur addi- tionaljnvestigation Charges for sample means with values between 255.8 and 270 which are due to Chance. The area between 255.8 and 270 under the normal curve with mean 245 and standard error Of the mean 3.8923 is .0028.18 This probability that a sample mean will fall in this region if chanceirsoperative must be multiplied by 600 in order to estimate the number Of means per thousand samples that are likely to fall in this range. This product Of 1.68 must be multiplied by $519 to arrive at $8.40 as the additional investigation Charge. 18The control limit under the Basic Control Chart approach was determined by Choosing a value whi h .025 sam? ple means would exceed assuming a student-t sampling distri- bution. It was easy to use the t distribution for this approach because the t values are always given for this h level Of significance. Under the other approaches used, t e level of significance would rarely turn out tO be.a value for which the t values are customarily reported. Hence, the normal sam lin distribution w Now in comgarigg these approaches the areas must be calculated by the consistent use Of a normal sampling distribution. This assumption results in an-area Of .0028 between 255.8 and 270 instead of the area .025 that would result from a t distri- bution. 19$6 is not used in this calculation because a sam- ple mean attributed to tough cows would almost always be Greater than 270. as assumed under these approaches. 225 Table 52 illustrates the derivation of the addi— tional savings brought about because the Basic Control Chart approach would involve investigations between 255.8 and 270 that would not be undertaken by applying the Ac- countant's Conventional method. 
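Before turning to Table 54, it may be worth collecting in one place the sample-size-five arithmetic used throughout this testing plan. The Python fragment below is an illustration only and is not part of the original study; it simply recomputes, from the values given in the text, the standard error of the mean, the Basic Control Chart limits, and the normal-curve estimate of the added investigation cost between 255.8 and 270. The names are chosen for the sketch, and small differences from the figures quoted above reflect rounding.

# A sketch of the sample-size-five arithmetic: the standard error of the mean,
# the Basic Control Chart limits, and the normal-curve estimate of the extra
# investigation cost incurred between 255.8 and 270 when chance alone is operating.
from math import erf, sqrt

STANDARD = 245.0
SIGMA_CHANCE = 7.7846          # standard deviation of the chance performances
N = 5                          # sample size
T_4DF_025 = 2.776              # Student-t value, 4 degrees of freedom, .025 tail

se_mean = SIGMA_CHANCE / sqrt(N - 1)                 # 3.8923
ucl = STANDARD + T_4DF_025 * se_mean                 # about 255.8
lcl = STANDARD - T_4DF_025 * se_mean                 # about 234.2

def normal_cdf(x, mean, sd):                         # normal approximation, as adopted in the text
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

p_band = normal_cdf(270, STANDARD, se_mean) - normal_cdf(255.8, STANDARD, se_mean)
chance_means_per_1000 = 600 * p_band                 # about 1.7 (the text rounds the area to .0028, giving 1.68)
added_cost = 5 * chance_means_per_1000               # about $8.30; the text's rounded figure is $8.40

print(f"standard error {se_mean:.4f}, UCL {ucl:.1f}, LCL {lcl:.1f}")
print(f"chance sample means per 1000 between 255.8 and 270: {chance_means_per_1000:.2f} (cost ${added_cost:.2f})")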
For poor attitude the .4207 probability Of getting a sample mean between 255.8 and 270 represents the area between 255.8 and 270 under the normal curve with mean 255 and standard error Of the 20 mean 4.0187. The other probabilities represent the area under the normal curve between the same two values. The probabilities are different for each assignable cause be- cause the means and the standard errors Of the mean are different for each assignable cause. The number Of perfor— mances for each corresponding cause are simply used as weights tO enable an estimate Of the number Of sample means between 255.8 and 270 per thousand samples for each respec- tive cause. These estimates are then multiplied by the weighted opportunity costs.21 The sum Of the resulting products, $2482.11, represents the additional savings under the Basic Control Chart approach. Since $2,482.11 is larger * 20The mean Of the poor attitude assignable cause is 255 and the standard error Of the mean is 4.0187. 21These weighted Opportunity costs represent the Single performance Opportunity costs weighted at test yplue 270 and then multiplied by five tO recognize that an O. - Standard condition can only be detected at the concluSion Of each five performances that are included in the sample. 226 than the $8.40 additional cost, the Basic Control Chart approach is more effective for control purposes. TABLE 52.--Additional savings Of Basic Control Chart approach Probability Of getting Number Number P between of of‘ Weighted Added Cause 255.8 and 270 Perf. Means Op. Cost Savings Poor Attitude .4207 60 25.2420 $62.8000 Dull Knives .5000 120 60 11.5290 Illness .8555 20 17.1100 9.2235 Laziness .1190 40 4.7600 9.9500 $2482.1137 The results of the other comparisons are summarized in Table 53. TABLE 53.-~Financial comparisons between approaches g k Added Inv. Added Savings Most Approaches Cost of Of Effective Tested Lower UCL Lower UCL Approach AC BCC $ 8.40 $2482.11 BCC BCC Equal 109.08 141.16 Equal Equal BFJ lst 6.60 2.81 BFJ-lst BFJ lst Min 92.70 14.99 Min BFJ 2nd Min 75.60 41.11 BFJ 2nd Equal McM 6.25 2.56 Equal McM BCC 52.38 82.66 McM This analysis is depicted diagramatically in Figure 12. 227 FIGURE 12.--Outcomes of financial comparisons AC BCC BCC MCM McM Ecual Equal BFJ lst BFJ lst Min Min BFJ 2nd BFJ 2nd From the above presentation, the following ranking emerges: Approach Rank BFJ 2nd 1 Min 2 BFJ lst 3 Equal 4 McM 5 BCC 6 AC 7 Derivation and Financial Analysis Of Lower Control Limits—-Sample Size Five——Every_Performance Included in a Sample Agcountant's Conventional Method Since the accountant does not make a distinction between testing individual performances and sampling groups Of performances, the "ten per cent rule" still results in a lower control limit of 220. 228 Basic Control Chart Approach The lower control limit is that value which is greater than 2.5 per cent of the sample means Of five ran— domly selected Chance performances. (Again, the .05 level Of significance with a two-tailed test is used.) This value is found by solving the following formula for LCL: _ LCL - 245 t — 0_ x _ LCL - 245 2°776 ‘ 3.8923 LCL = 234.2 The symbols t and a; have the same meaning and their values are the same as those used to calculate the upper control limit under these Circumstances. Bierman, Fouraker, and Jaedicke Approach First Interpretation of P. Decesion Table 54 shows that the lower control limit calculated under this approach is just slightly more than 239. That is, at test value 239 PC is almost exactly equated with P. 
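The Bierman, Fouraker, and Jaedicke rule under this first interpretation reduces to comparing P with the critical probability Pc = (L - C)/L: the hypothesis is accepted, and no investigation made, when P exceeds Pc, and rejected otherwise. A minimal sketch of that comparison is given below; the function name is illustrative only, and the sample figures are those that appear later in Table 61 rather than anything prescribed by the approach.

def bfj_first_interpretation(L, C, P):
    """Decision rule for the first interpretation of P.

    L: opportunity cost of failing to investigate an off-standard condition
    C: cost of an investigation
    P: probability of a result at least as extreme as the test value,
       given that an off-standard condition exists
    """
    Pc = (L - C) / L
    return "accept (do not investigate)" if P > Pc else "reject (investigate)"

# Illustrative figures, taken from Table 61 (test values 246 and 247):
print(bfj_first_interpretation(L=10, C=5, P=.7948))   # Pc = .50, P > Pc: accept
print(bfj_first_interpretation(L=20, C=5, P=.6100))   # Pc = .75, P < Pc: reject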
TABLE 54.--Decision table for BF and J application. First interpretation of P

Test Value     L     C     Pc      P        Decision
239           $6    $4    .20     .2006    Accept
240            5     4    .33     .1236    Reject

In this table "L" and "P" are the same for values 240 and 239 as they were for values 250 and 251 respectively in Table 45 that was used to determine the upper control limit for this testing plan. The reason for this is, of course, that the normal curve is symmetrical and 240 and 239 are 5 and 6 units from the standard (the mean) just as 250 and 251 are. The cost of an investigation for a favorable assignable cause has already been specified to be $4. The usual formula is used to calculate Pc.

Second Interpretation of P. Decision Table 55 indicates that the lower control limit is between 236 and 237 when this interpretation is followed. The same interpolation procedure that was applied by both Bierman, Fouraker, and Jaedicke approaches in the calculation of the upper control limit yields a lower control limit of 236.07.

In Table 55 "L" is determined in the same manner that is described for the calculation of the upper control limit and is the same for test values 236, 237, and 238 as for 254, 253, and 252 respectively because of the symmetry of the normal curve. "C" is a constant at $4. The values for Pc are determined, as always, by (L - C)/L. The probabilities, P, are determined by the same system of calculations discussed in conjunction with their calculation for the second interpretation of the upper control limit when tests consist of samples of five. Table 56 shows these results.

TABLE 55.--Decision table for BF and J application. Second interpretation of P

Test Value     L     C     Pc       P        Decision
236           $9    $4    .5656    .5468    Reject
237            8     4    .5000    .7349    Accept
238            7     4    .4286    .8054    Accept

TABLE 56.--Determination of P's

                     Cond.     Number      Number      Prob. of Cause Given
Cause                Prob.     of Perf.    of Means    Occurrence of Test Value
Test Value 236
  Chance             .0073     600          4.3800      .5468
  Improvement        .0363     100          3.6300      .4532
                                            8.0100     1.0000
Test Value 237
  Chance             .0122     600          7.3200      .7349
  Improvement        .0264     100          2.6400      .2651
                                            9.9600     1.0000
Test Value 238
  Chance             .0207     600         12.4200      .8054
  Improvement        .0300     100          3.0000      .1946
                                           15.4200     1.0000
Test Value 239
  Chance             .0318     600         19.0800      .9408
  Improvement        .0120     100          1.2000      .0592
                                           20.2800     1.0000
Test Value 240
  Chance             .0437     600         26.2200      .9722
  Improvement        .0075     100           .7500      .0278
                                           26.9700     1.0000
Test Value 241
  Chance             .0590     600         35.4000      .9874
  Improvement        .0045     100           .4500      .0126
                                           35.8500     1.0000

McMenimen Approach

The savings value for improvement is determined as follows:
1. Calculate the savings on each performance by converting the difference in minutes between the mean of the improvement performances, 230, and the standard, 245, into a fraction of an hour by dividing by 60. The result is 1/4. ((245 - 230)/60 = 1/4)
2. Multiply the 1/4 by $3 - the hourly wage of the butcher. (1/4 X $3 = $.75)
3. Multiply this individual performance opportunity cost by the multiplication weight of 20. (20 X $.75 = $15)
4. Subtract the individual performance cost of correction, $.25, from the $15. ($15 - $.25 = $14.75)

The probabilities are the same as those determined in Table 56. The lower control limit indicated by Table 57 is between 236 and 237. Application of the same interpolation procedure used in conjunction with the calculation of the upper control limit under the McMenimen approach yields a lower control limit of 236.97.

TABLE 57.--Application of McMenimen technique. Spend up to $4 investigating for improvement

Event            Pe        Cond.       Exp.
Test Value 236 Save $0 .5468 $—4 $-2.1872 Save $14.75 .4532 10.75 +4.8719 Expected Savings $+Z.6847 Text Value 237 Save $0 .7349 $-4 $-2.9396 Save $14.75 .2651 10.75 +2.8498 Expected Savings $-0.0898 EqualizationvApproach Table 58 indicates that the lower control limit for this approach is between 238 and 239. This control limit is pinpointed at 238.02 by the same interpolation procedure discussed in connection with the determination of the upper control limit for the Equalization procedure. In the usual manner the probability Of committing a Type I error is calculated by finding the area at least as small as the test value under the normal curve with mean 245 and standard error Of the mean 3.8923. The $4 Opportunity cost Of a Type I error is the cost Of an investigation to detect improvement. 233 TABLE 58.-—Decision table for Equalization approach Test Value 238 239 Probability of Type I Error .0359 .0618 Opportunity Cost Of Type I Error $4 $4 Expected Opportunity Cost Of Type I Error $ .1436 $ .2472 Probability of Type II Error .0375 .0228 Opportunity Cost of Type II Error $3.8975 $3.8195 Expected Opportunity Cost of Type II Error $ .1462 $ .0870 Decision Reject. Accept The probability Of committing a Type II error is depicted by the area under thenormal curve with mean 230 and standard error of the mean 4.505622 which is greater than the test value. This is, of course, because the hy- pothesis will be accepted for a sample mean larger than the test value Chosen as the lower control limit.23 If, how- ever, improvement has resulted a false hypothesis (Type II error) will have been accepted. The Opportunity cost Of a Type II error results from weighting the $.75 single performance Opportunity cost 22The mean Of the 100 performances (Enumerated in Table 18) which are due to improvement is 230 and the stan— dard error of the mean is 4.5056. 23This holds only so long as the sample mean does not exceed the upper control limit. 234 in the usual manner and then multiplying this by the sample size Of 5. Minimization Approach Because the expected Opportunity cost is less for test value 236 than for any other value indicated on Table 59, 236 is designated as the lower control limit. TABLE 59.—-Decision table for Minimization approach Prob. of_ Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost. Op. Cost Test Value 234 Chance .8571 $4 .0023 $.0092 Improvement .1429 4.6035 .1867 .8595 1.0000 $.l307 Test Value 235 Chance .8571 $4 .0051 $.0204 Improvement .1429 4.3215 .1335 .5769 1.0000 $.0999 Test Value 236 Chance. .8571 $4 .0104 $.04l6 Improvement .1429 4.1345 .0918 .3795 1.0000 $.0899 Test Value 237 Chance .8571 $4 .0197 $.0788 Improvement .1429 3.9845 .0606 .2415 1.0000 $.1020 235 Chance and improvement are the only two causes Of interest in setting the lower control limit. In the origi- nal distribution of 1000 values, 600 were due to chance and 100 to improvement. Therefore, the prior probabilities Of .8571 and .1429 for chance and improvement respectively are determined from the ratios 600/700 and 100/700. The $4 weighted Opportunity cost associated with the chance cause is the cost Of committing a Type I error by investigating for improvement when in fact chance alone is cause the variation in the performances. The weighted Opportunity costs associated with improvement result from: 1. 
Weighting the $.75 single performance Opportunity cost by the procedure indicated in Table 10 to ac— count for the fact that improvement will not always be detected on the first test after its occurrence. 2. Multiplying this weighted value by 5 - the sample size. The probabilities Of a wrong decision correspond- ing to Chance represent the probability of committing a Type I error. Accordingly, the figures are calculated by finding the area less than the test value under the normal curve with mean 245 and standard error of the mean 3.8923. In other words, if chance is the only prevailing cause of variation, the hypothesis will be falsely rejected if the sample mean is less than the test value that is selected as the lower control limit. 236 On the other hand, the probabilities Of a wrong decision corresponding to improvement represent the prob- ability Of committing a Type II error. These probabilities are depicted by the area greater than the test value under the normal curve with mean 230 and standard error of the mean 4.5056.24 This is because a sample mean greater than the test value selected as the lower control limit will lead to acceptance Of the hypothesis which is a wrong con- clusion (Type II error) if improvement has occurred. The figures in the last two columns in Table 59 are determined in the usual manner for the Minimization approach. Comparison Of Lower Control Limits Among the Methods For review, the control limits that have just been derived are listed below: Approach Lower Control Limit Accountant's Conventional 220 Basic Control Chart 234.2 Bierman, Fouraker, and Jaedicke First Interpretation Of P 239 Second Interpretation Of P 236.07 McMenimen 236.97 Equalization 238.02 Minimization 236 24 As the reader will recall the mean Of the 100 performanceslisted in Table 18 as being due to improvement is 230 and the standard error Of the mean is 4.5056. 237 Financial Analysis and Ranking The prOCedure for estimating the number Of sample means per thousand samples falling within a specified range Of values for a specified assignable cause was dis- cussed in analyzing the impact of differences in upper control limits under this sampling plan. In analyzing the impact of the differences in lower control limits the samegrocedure is followed. These probabilities are used as they were in Table 52 in calculating the added savings associated with the control limit closer to the standard. The probabilities are also used in calculating the added investigation costs assoCiated with the higher Of the two lower control limits being compared at any given time. A summary Of the added investigation costs and the added savings between the approaches is presented in Table 60. TABLE 60.-—Financial comparisons between approaches Added Inv. Added Savings Most Approaches Cost Of Of Effective Tested Higher LCL Higher LCL Approach Equal AC $467.80 $1959.02 Equal BFJ lst Equal 60.24 5.73 Equal Equal BCC 136.84 63.85 BCC McM BCC 40.56 53.22 McM McM Min and 22.32 12.90 Min and BFJ 2nd BFJ 2nd 238 The results Of the information presented in Table 60 are depicted in Figure 13. 
FIGURE 13.——Outcomes Of financial comparisons BFJ lst MCM McM,/ _—\\\Mih and BFJ 2nd Min and BFJ 2nd From the above diagram, the following ranking emerges: Approach Rank Min 1.5 BFJ 2nd 1.5 McM “ 3 BCC 4 Equal 5 BFJ lst 6 7 AC 2397 Derivation and Financial Analysis Of Upper Control Limits——Samp1e Size Five--Sample Taken in Every Fifty Performances Introduction In situations where sampling is used it is perhaps more common to take a sample every so Often rather than to include every performance in a sample. Accordingly, con— trol limits will now be calculated under the assumption that a sample of five is taken for every fifty performances. Accountant's Conventional Method It has been noted before that the upper control limit remains at 270 regardless of the sampling plan. Basic Control Chart Approach The control limits resulting from this approach depend only upon the size Of the sample and not upon the relative Opportunity costs which vary with the frequency Of sampling. Consequently, the upper limit is 255.8050 — the same as that determined when every performance was in— cluded in a sample. Bierman, Fouraker and Jaedicke Approach First Interpretation Of P. The upper control limit according to this sampling plan as indicated in Table 61 is between 246 and 247. The same interpolation procedure that has previously been applied for the Bierman, Fouraker, 240 and Jaedicke approaches establishes the control limit at 246.68. Here "L" is the single performance Opportunity cost multiplied by 200. The single performance Opportunity cost is determined by the usual formula §*60££§ X $3 where X represents each respective test value. The multiplica— tion weight Of 200 results from multiplying the 4 tests that allegedly must be made on the average before an Off- standard performance is detected by the 50 performances from whichihe 5 sample values for any single test_are drawn. As with this approach for the other sampling plans, the cost of an investigation remains constant at $5.. "PC" mined by L i C. For this interpretation "P" is the prob— is deter- ability of Obtaining a sample mean at least as high as the. test value given that the test value is unfavorable. These values are determined by the same procedure indicated in that section where every performance was included in a sample and illustrated in Table 47. With Bierman, Fouraker, and Jaedicke approach, the decision rules, as the reader will recall, are made on the following‘basis: 1. If P is greater that PC, accept the hypothesis and refrain from an investigation. 2. If P is less than PC, reject the hypothesis that chance causes are prevailing and undertake an in- vestigation. 241 TABLE 61.—-Decision table for BF and J application. First interpretation Of P Test Value L C PC P Decision 246 $10 $5 .50 .7948 Accept 247 20 5 .75 .6100 Reject 248 30 5 .83 .4412 Reject. Second Interpretation of P. The second interpreta— tion of P yields an upper control limit between 250 and 251 as indicated in Table 62. This limit is further narrowed down tO 250.13 by the process Of interpolation which has been employed for the Bierman, Fouraker, and Jaedicke ap— proaches. The values for P were obtained from Table 47. The other values are determined in the same manner used for the first interpretation Of P. TABLE 62.—-Decision table for BF and J application. 
Second interpretation of P Test Value L C PC P Decision 250 $50 $5 .90 .9119 Accept 251 60 5 .917 .8395 Reject McMenimenlApproach The discussion of this approach where every perfor— mance was included in a sample size of five indicated that one would not logically investigate for dull knives until a sample mean at least as high as 259 was Obtained. The 242 reasoning for this is that the probability of Obtaining a sample mean less than 259 with the use of dull knives is almost zero. The same reasoning is equally valid for this sampling plan. It is, however, profitable to begin an investiga— tion for poor attitude with a sample mean of 24725 as Table 63 indicates. The savings value for poor attitude is determined by multiplying the individual performance Opportunity cost by the multiplication weight of 200 and subtracting the $.25 cost Of correction. The numerical values are 25563 245 x $3 x 200 - $.25 = $99.75. The probabilities are determined in Table 64 which follows the same procedure as Table 56. TABLE 63.—-Application Of McMenimen technique Spend Up To $1 Investigating For Poor Attitude Event Pe Cond. Exp. Test Value 246 Save $0 .9920 $-l $- .9920 Save $99.75 .0080 98.75 + .7900 EXpected Savings $-0.2020 Test Value 247 Save $0 .9852 $‘l $- .9852 Save $99.75 .0148 98.75 +1.4615 Expected Savings $+0.4763 25Actually, the process of interpolation that was previously applied for the McMenimen technique yields an upper control limit of 246.30 as far as the investigation for poor attitude is concerned. ,Ah fl-xr’ 243 TABLE 64.--Determination of P's . Prob. Of Cause Cond. Number Number Of Given Occurrence of Cause Prob. Of Perf. Means Test Value Test Value 246“ Chance .0987 600 59.2200 .9920 Poor Attitude .0079 60 00.4740 .0080 59.6940 1.0000 Test Value 247 Chance .0909 600 54.5400 .9852 Poor Attitude .0137 60 00.8820 .0148 55.3620 1.0000 Egpalization Approach Table 65 shows the upper control limit to be be- tween 248 and 249. This limit is further narrowed down to 248.03 by the same interpolation procedure previously applied for the Equalization approach. The individual figures that compose this table were derived in the same general manner as those for Table 50. This latter table was used in setting the upper control limit where a sample of five was Chosen so that every performance was included in a sample. The derivation Of these individual figures is explained in Appendix B. 244 TABLE 65.--Decision table for Equalization approach Test Value 249 248 Probability of Type I Error .1292 .2206 Opportunity Cost Of Type I Error $ 1 $ 1 Expected Opportunity Cost Of Type I Error $ .1292 $ .2206 Probability of Type II Error .0136 .0082 Opportunity Cost Of Type II Error $26.78 $26.11 Expected Opportunity Cost Of Type II Error $ .3642 $ .2141 Decision Reject Accept Minimization Approach The upper control limit under this approach is 249 as shown in Table 66. With this test value the expected Opportunity cost is less than for any other. The prior probabilities are the same for each cause as they were in Table 51——the decision table used for the Minimization approach when every performance was included in a sample. Likewise, the Opportunity cost for Chance continues to be $1 — the cost Of committing a Type I error. Here, the weighted Opportunity costs for poor attitude are calculated in the same manner as when every performance was included in a sample except that the results Of weight- ing the single performance Opportunity costs are multiplied by fifty instead Of five. 
The result is that the weighted Opportunity costs for poor attitude in Table 66 are ten 245 times more than those shOWn in Table 51 for each respective test value. The probabilities of a wrong decision are the same for each respective test value and cause as those used in Table 51. The figures in the last two columns Of Table 66 are derived in the manner previously explained for the Minimization approach. TABLE 66.--Decision table for Minimization approach Prob. Of Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost Op. Cost Test Value 250 Chance .9091 $.l .1003 .1003 Poor Attitude .0909 27.990 .1075 $3.0089 $ .3647 Test Value 249 Chance .9091 $ 1 .1292 $ .1292 Poor Attitude- .0909 26.7800 .0681 1.8237 _1. .$ .2832 Test Value 248 Chance .9091 $ 1 .2206 $ .2206 Poor Attitude .0909 26.1100 .0409 1.0679 $ .2976’ 246 Comparison Of Upper Control Limits Among the Methods The control limits that have just been derived are itemized below: Approach Upper Control Limit Accountant's Conventional 270 Basic Control Chart 255.8 Bierman, Fouraker, and Jaedicke First Interpretation Of P 246.68 Second Interpretation Of P , 250.13 McMenimen 246.30 Equalization 248.03 Minimization 249 Financial Analysis and Ranking The results Of the comparisons between the methods are summarized in Table 67. TABLE 67.——Financial comparisons between approaches Added Inv. Added Savingsa Most Approaches Cost Of Of Effective Tested -Lower UCL Lower UCL Approach AC BCC $280.55 $24,821.14 BCC BCC McM 232.08 1,780.05 McM McM BFJ lst 22.26 5.65 BFJ lst BFJ lst BFJ 2nd 720.60 157.70 BFJ 2nd BFJ 2nd Min 34.86 75.57 Min . Min Equal 39.72 42.26 Equal aThese savings figures are determined by the same general procedure indicated in Table 52. Now, however, the weighted Opportunity costs result from the product of the weighted single performance Opportunity costs for the apprOpriate test value and fifty. 247 These comparisons are depicted diagramatically in Figure 14. FIGURE l4.--Outcomes Of financial comparisons MCM BFJ 2nd BCC McM BFJ 2nd BCC BFJ lst Min AC BFJ lst Min Eoual Equal As a result Of these comparisons, the following ranking emerges: Approach 5223 Equal 1 Min 2 BFJ 2nd 3 I BFJ lst 4.' McM 5 BCC 6 AC 7 Derivation and Financial Analysis of Lower Control Limits--Sample Size Five—-Sample Taken in Evegy Fifty Performances Agcountant's Conventional Method The lower control limit under the Accountant's Conventional method is 220 with the application Of the 248 "10 per cent rule." It has repeatedly been noted that this does not change with the sampling plan. Basic Control Chart Approach Under this approach the lower limit is 234.2 — the same as the limit when every performance was included in a sample Of five. The limit remains the same because it de- pends only upon the size Of the sample and not upon the frequency Of sampling. Bierman, Fouraker, and Jaedicke Approach First Interpretation Of P. Table 68 indicates the lower control limit to be between 243 and 244. The inter— polation procedure that has been used for both Bierman, Fouraker, and Jaedicke approaches narrows the control limit down to 243.49. Because of the symmetry Of the normal curve, "L" and "P" for 244, 243, and 242 are the same respectively as they were for 246, 247, and 248. These latter calcula— tions were discussed in conjunction with the derivation Of the upper control limit under the present sampling plan. The cost of an investigation, "C", for improvement has previously been determined tO be $4. 
"PC" continues L - C L to be found by 249 TABLE 68.-—DeCision table for BF and J application. First interpretation Of P Test Value L C PC P Decision 242 $30 $4 .8667 .4412 Reject 243 20 4 .8000 .6100 Reject 244 10 4 .6000 .7948 Accept Second Interpretation Of P. Decision Table 69 shows the control limit to be between 238 and 239 when this approach is followed. Interpolation yields a limit Of 238.95. In this table, "L" and "P" are determined in their usual manner. (The multiplication weight in deter— mining "L" is 200.) "C" is still a constant at $4. The values for "P" were Obtained from Table 56. The decision was made on the basis Of the usual criteria. TABLE 69.-—Decision table for BF and J application. Second interpretation Of P Test Value L C PC P Decision 238 $70 $4 .9428 .8054 Reject 239 60 4 .9333 .9408 Accept McMenimen Approach The savings value is determined by multiplying . 230 — 245 x 3 the individual performance Opportunity cost, 60 $ , by the multiplication weight, 200, and subtracting the individual performance cost Of correction, $.25. The 250 result is $149.75. The probabilities used in this McMenimen application were Obtained from Table 56. The lower control limit shown in Table 70 is between 240 and 241. The interpolation procedure previously employed for the McMenimen technique pinpoints the lower control limit at 240.07. TABLE 70.--App1ication Of McMenimen technique Spend Up TO-$4 Investigating For Improvement Event Pe Cond. Exp. Test Value 239 Save $0 .9408 $ -4 $-3.7632 Save $149.75 .0592 145.75 +8.6284 Expected Savings $+4-8652 Test Value 240 Save $0 .9722 $ ‘4 $-3.8888 Save $149.75 .0278 145.75 +4.0518 Expected Savings $+ .1630 Test Value 241 Save $0 ' .9874 $ -4 $‘i.g§gi Save $149.75 .0126 145.75 + . Expected Savings $—2.1132 Equalization Approach Under this sampling plan, Table 71 shows the lower control limit to be between 240 and 241 for the Equaliza— tion approach. The interpolation procedure that has been 251 applied for the Equalization approach yields a limit of 240.30. The individual values for this table were calculated in exactly the same manner as those for Table 58 with the following single exception. After weighting the single performance Opportunity costs, they must be multiplied by fifty instead Of five because a sample is drawn only-once in every fifty performances. If improvement takes place immediately after a sample is drawn or if it is not dis- covered by any given test, the condition has no Opportunity tO be detected until another test is taken fifty perfor- mances later. TABLE 71.--Decision table for Equalization approach Test Value 240 241 Probability of Type I Error .1003 .1292 Opportunity Cost Of Type I Error $ 4 $ 4 Expected Opportunity Cost Of Type I Error $ .4012 $ .5168 Probability Of Type II Error ' .0132 .0073 Opportunity Cost of Type II Error $38.060 $37.840 Expected Opportunity Cost of Type II Error 3 .5024 $ .2762 Decision Reject Accept Minimization Approach The expected Opportunity cost is lowest in Table 72 for test value 238. Hence, 238 is designated as the lower control limit under this approach. 11-... I1. IJ 252 The prior probabilities are the same as those pre- viously used in conjunction with setting the lower control limit under the Minimization approach. For chance causes the weighted Opportunity cost continues tO be $4 - the cost Of a Type I error. For im- provement these values represent the weighted single per- formance Opportunity cost multiplied by fifty. 
Conse- quently, they are ten times higher for each respective test value than the weighted Opportunity costs listed in Table 59. The probabilities Of a wrong decision are the same for each cause as they were for their respective values in Table 59. Similarly, the figures in the last two columns are derived by the same procedure followed in the other Minimization models. TABLE 72.——Decision 253 table for Minimization approach Prob. Prior Weighted Wrong Cond. Ave. Expected Cause Prob. Op. Cost Decision Op. Cost Op. Cost Test Value 236 Chance .8571 $ 4 .0104 .0416 Improvement .1429 41.345 .0918 3.7950 §——_;_5_1.8_9 Test Value 237 Chance .8571 4 .0197 .0788 Improvement .1429 39.845 .0606 2.4150 Free Test Value 238 Chance .8571 4 .0359 .1436 Improvement .1429 38.975 .0375 1.4620 $ .3320 Test Value 239 Chance .8571 4 .0618 .2472 Improvement .1429 38.195 .0228 .8700 $ .3362 Test Value 240 Chance .8571 4 .1003 .4012 Improvement .1429 28.060 .0132 .5020 $ .4156 99mparison of Lower Control Aimits Among the Methods For purposes Of review the lower control limits pertaining tO this sampling plan are indicated below: 254 Approach Lower Control Limit Accountant's Conventional 220 Basic Control Chart 234.2 Bierman, Fouraker, and Jaedicke First Interpretation Of P 243.49 Second Interpretation Of P 238.95. McMenimen 240.07 Equalization 240.30 Minimization 238 Financial Analysis and Ranking In comparing the relative effectiveness Of any two lower control limits, the higher Of the two will carry a greater investigation cost; but will also bring about the detection Of improvement sooner than the lower Of the two. Thisnmnxatimely detection will bring about added savings. If the added savings is greater than the added investiga— tion cost the higher of the two lower control limits is designated as more effective. Otherwise, the lower of the twois more effective. A summary of the comparisons necessary to rank the approaches is presented in Table 73. 255 TABLE 73.-—Financial comparisons between approaches Added Inv. Added Savings More Approaches Cost Of of Effective Tested Higher LCL Higher LCL Approach AC BFJ lst $835.93 $20,364.10 BFJ lst BCC BFJ lst 829.20 811.14 BCC BCC Equal 264.72 760.50 Equal Equal McM 26.64 7.23 McM McM BFJ 2nd 99.36 39.99 BFJ 2nd Min BFJ 2nd 59.28 55.34 Min The above analysis is depicted in the tree-diagram in Figure 15. FIGURE 15.——Outcomes of financial comparisons AC E ual From this diagram, the following ranking becomes I Obvious. Approach Appk Min 1 BFJ 2nd 2 McM 3 j 256 Approach (Continued) Rank (Continued) Equal 4 BCC 5 BFJ lst 6 AC 7 Conclusions Summary of Rankings In this chapter upper and lower control limits have been calculated for each Of the seven approaches under four testing plans. After the derivation of each control limit for all the approaches according to each testing plan, a financial analysis was performed which enabled the ranking of the approaches in order of their effectiveness for control purposes. Table 74 summarizes these rankings as an aid in determining what generaliza— tion, if any, can be drawn. TABLE 74.—-Summary Of rankings UCL LCL Testing Plan Testing Plan Approach A B C D A B C D AC 5.5 7 7 7 5 7 7 7 BCC 2.5 6 6 6 1.5 4 4 5 BFJ lst 5.5 5 3 4 3 5 6 6 BFJ 2nd 7 1.5 l 3 4 2.5 1.5 2 McM 1 4 5 5 7 2.5 3 3 Equal 4 3 4 1 6 6 5 4 Min 2.5 1.5 2 2 1.5 l 1.5 l 257 The approaches are listed in order Of their pre— sentation in this dissertation. 
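Every ranking in this chapter rests on the same pairwise test: of two control limits, the one closer to the standard carries an added investigation cost but also added savings from earlier detection, and it is judged more effective only when the added savings exceed the added cost (Tables 53, 60, 67, and 73). A minimal sketch of that test follows; the function name is illustrative, and the example figures are the AC versus BFJ 1st row of Table 73.

def more_effective(closer_limit, added_inv_cost, added_savings, farther_limit):
    """The control limit closer to the standard is preferred only when its
    added savings exceed its added investigation cost."""
    return closer_limit if added_savings > added_inv_cost else farther_limit

# Table 73, first row: the higher (closer) lower control limit belongs to BFJ 1st.
print(more_effective("BFJ 1st", added_inv_cost=835.93,
                     added_savings=20364.10, farther_limit="AC"))   # BFJ 1st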
The testing plans are identified by the letters A, B, C, and D as follows:

A represents tests of single performances where each performance is tested.
B represents tests of single performances where every tenth performance is tested.
C represents tests of samples of five consecutive performances where every performance is included in a sample.
D represents tests of samples of five consecutive performances where a sample is taken on the average for each 50 performances.

From a cursory glance at Table 74, the reader might conclude that the results do not consistently favor one approach over the others. For example, the number one ranking for the upper control limit is achieved by the McMenimen; Bierman, Fouraker, and Jaedicke second; and Equalization approaches for sampling plans A, C, and D respectively. There is a tie between the Bierman, Fouraker, and Jaedicke second and the Minimization approaches for testing plan B. In other words, the most effective approach is different for each of the testing plans.

Top ranking is more consistent for the lower control limit. In every case it is held by the Minimization approach, although honors must be shared with the Basic Control Chart approach for testing plan A and with the Bierman, Fouraker, and Jaedicke second approach for testing plan C. For both control limits taken together, each approach except the Accountant's Conventional and the Bierman, Fouraker, and Jaedicke first achieved a number one ranking at least once.

The results with regard to the least effective approach are more consistent. For each control limit, the Accountant's Conventional approach ranks last for testing plans B, C, and D. This approach would undoubtedly rank lower for testing plan A if it were not for the fact that the second interpretation of Bierman, Fouraker, and Jaedicke gave an indeterminate solution for the upper control limit and the McMenimen approach gave an indeterminate solution for the lower control limit.

To obtain a more comprehensive picture the rankings have been aggregated and these aggregations have been ranked in Table 75. The summation of ranks corresponding to UCL and AC is 26.5, which is obtained by adding 5.5, 7, 7, and 7. These are the values shown in Table 74 for UCL and AC under each of the testing plans. The grand total column represents the summations of the ranks for all testing plans for both control limits. These summations are then ranked from one to seven with the lowest summation receiving a rank of one.

TABLE 75.--Summation of ranks and ranking of summations by control limit

               Summation of Ranks          Ranking of Summations
                           Grand                        Grand
Approach      UCL   LCL    Total           UCL   LCL    Total
AC            26.5  26     52.5             7     7      7
BCC           20.5  14.5   35               6     3      5
BFJ 1st       17.5  20     37.5             5     5      6
BFJ 2nd       12.5  10     22.5             3     2      2
McM           15    15.5   30.5             4     4      3
Equal         12    21     33               2     6      4
Min            8     5     13               1     1      1

Significant Generalizations Resulting from the Summary of Rankings

The Minimization approach had the lowest summation for both control limits. This may have been anticipated on the grounds that the Minimization approach utilizes more information in deriving the control limits than the other approaches do.

The grand summation is considerably higher for the Accountant's Conventional approach than for the next highest summation. This lends credence to this dissertation's hypothesis that new applications of presently developed statistical tools can increase the effectiveness of accounting variance control by providing a helpful analytical framework to determine the control limits.
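The aggregation behind Table 75 is purely mechanical: the eight ranks each approach received in Table 74 are summed, and the summations are then ranked from lowest to highest. The sketch below reproduces Table 75 from the ranks of Table 74; the data layout is illustrative only.

# Ranks from Table 74: UCL ranks for testing plans A-D, then LCL ranks for A-D.
table_74 = {
    "AC":      ([5.5, 7, 7, 7],    [5, 7, 7, 7]),
    "BCC":     ([2.5, 6, 6, 6],    [1.5, 4, 4, 5]),
    "BFJ 1st": ([5.5, 5, 3, 4],    [3, 5, 6, 6]),
    "BFJ 2nd": ([7, 1.5, 1, 3],    [4, 2.5, 1.5, 2]),
    "McM":     ([1, 4, 5, 5],      [7, 2.5, 3, 3]),
    "Equal":   ([4, 3, 4, 1],      [6, 6, 5, 4]),
    "Min":     ([2.5, 1.5, 2, 2],  [1.5, 1, 1.5, 1]),
}

# Summation of ranks (the left half of Table 75).
sums = {name: (sum(ucl), sum(lcl), sum(ucl) + sum(lcl))
        for name, (ucl, lcl) in table_74.items()}

# Ranking of the grand-total summations (lowest summation receives rank one).
for rank, (name, (ucl, lcl, total)) in enumerate(
        sorted(sums.items(), key=lambda item: item[1][2]), start=1):
    print(rank, name, ucl, lcl, total)   # Min is first (8 + 5 = 13); AC is last (52.5)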
The hypothesis is further reinforced by the fact that the dollar difference between the added investigation cost and ? 260 the added savings is relatively greater for financial analy— ses between the ACCOuntant's Conventional approach and the approach which occupies the next highest ranking than it is for analyses between approaches not involving the Ac- countant's Conventional approach. This can readily be observed by reference to Tables 23, 28, 38, 44, 53, 60, 67, and 73. This difference is especially large for test— ing plans B, C, and D. That is, the large difference is least noticeable in Tables 23 and 28. It is interesting to note that testing plans B, C, and D involve tests Of less than 100 per cent of all performances. As such, they are more realistic in practice. For example, if the reader will turn to Table 73, he will Observe that for the comparison between AC and BFJ lst the added savings associated with BFJ lst is $20,364.10. This is $19,528.18 greater than the $835.92 added investigation cost of this approach. The difference between the added savings and the added investigation cost for the other comparisons are all less than $100 with the exception of the $495.78 difference between BCC and Equal. The point to be emphasized is that the Accountant's Con— ventional approach is much less effective than the Bierman, Fouraker, and Jaedicke first approach——the approach with the new lower ranking in this case. TO summarize, two conclusions have thus far been reached as a result Of the example which has been developed 261 in this Chapter. One is that in the aggregate the Minimi- zation application is the most effective. The other con- clusion is that, also in the aggregate, the Accountant's Conventional is the least effective. The fact that these conclusions hold the aggregate and not for each individual testing plan and control limit signifies that the rankings can vary with different testing plans. Indeed, the arbi- trarily selected per cent Of the variance to the standard selected as a cut—off point according to the Accountant's Conventional approach need not be 10 per cent. In any actual or hypothetical situation, the per cent cut-Off point could just happen to be selected so as to give the same control limit produced by the Minimization or the otherwise most effective approach. However, onAy coinci- dentally, would the cut-Off point selected by accountans in the conventional manner be the same as that yielded by the most effective approach. Some may argue that the accountant's experience and intuition pay produce control limits very Close to those that are statistically determined. Since this is a qualitative argument, it is difficult to refute analyti- cally. It only needs to be re-emphasized, however, that statistics provides a systematic method for formally considering experience, judgment, and intuition to make it appear unwise to draw conclusions without the use Of techniques that are already available——techniques that 262 have stood the test of time for use in other disciplines. Judgment and experience are needed to estimate probability distributions like those shown in Table 18. They are also needed to set down the Opportunity costs Of error. The chance and assignable cause populations and the costs of error exist whether or not they are explicitly considered in setting the control limits. If they are not explicit they are implicit in which case the analyst is not-aware Of the magnitude Of their values. 
Judgment and experience certainly have a better basis when used to estimate the relevant factors in decision making than they have when used to make final decisions without considering what factors may be relevant. Surely judgment and ex- perience are more productive when they are systematically rather than haphazardly used. Secondary Generalizations Resultipg from the Summary Of Rankings It is now appropriate to examine the rankings Of the other approaches. In the aggregate the Bierman, Four— aker, and Jaedicke second and McMenimen approaches hold rankings two and three respectively. Because these ap- proaches interpret P in the same manner it is interesting to note that their reSpective rankings follow the Minimi- zation approach. In other words, the top three ranks are held by approaches that could not be identified as Clas- sical statistics. Their identification as Non—Classical . .15-3' 263 is not because they consider financial implications in addition to probabilities but because of the probabilities these methods Obtain. The reader will recall that Classical statisticians are interested in two types of probabilities—-the probabil- ity (3f a Type I error and the probability Of a Type II error. The probability Ofra Type I error is the probabil- ity that a chance value falls outside the control limits and thus leads to rejection Of a true hypothesis. The probability of a Type II error is the probability that a non-chance performance falls inside the control limits and thus lead to acceptance of a false hypothesis. The probabilities of concern in the Bierman, Fouraker, and Jaedicke second and McMenimen approaches are of an entirely different nature. Theselattertwo ap— proaches estimate the probability that a performance is due tO chance and the probability that it is due tO an assignable cause given the test value. The Bierman, Four— aker, and Jaedicke and McMenimen approaches are neither Classical nor Bayesian; nevertheless,they yield useful results. The Minimization approach involves the use Of Type I and Type II errors in the probability of a wrong deci— sion column. Unlike Classical applications, however, the probability of a Type II error is developed for each as~ signable cause. Another distinguishing feature Of the Minimization approach is its use Of the prior probabilities. 264 After the several questionable aspects Of the Bierman, Fouraker, and Jaedicke approaches that were iden- tified in Chapter IV, it appears rather surprising that the Bierman, Fouraker, and Jaedicke second approach rated second in the overall rankings. This ranking is suffi- ciently striking tO compel a reconsideration Of the fol— lowing criticisms that were originally outlined in Chapter IV. First, the numerical example used by Bierman, Fouraker, and Jaedicke involved control Of a summary ex- pense classification for a period Of one year rather than‘ control of individual performances. However, with one noted exception,26 the approach also lent itself to applica— tion at the performance level for comparative utilization in this dissertation. The criticism levied in Chapter IV was not that the model was necessarily incapable of deal— ing with control at the performance level but only that Bierman, Fouraker, and Jaedicke did not apply it at this crucial level. This writer criticized the Bierman, Fouraker, and Jaedicke model because it assumes that the mean of the assignable cause is equal tO the test value (or actual results)—-a condition which would only coincidentally be true. 
This criticism is still regarded as valid although ‘1 26For UCL testing plan A. ..g. .‘l 265 the Objection could easily be corrected without altering the basic model. Third, the Bierman, Fouraker, and Jaedicke allega— tion that an Off—standard condition requires four tests on the average before detection was questioned. It was shown that the number Of tests required on the average for detection depends upon the control limit that is es- tablished. As the control limit moves away from the standard the number Of tests required increases. For most statistically determined control limits the number will be less than four. This arbitrary selection of the number four remains a basic Objection to the Bierman, Fouraker, and Jaedicke model. It would, of course, be possible to implement the weighting scheme proposed in this disserta- tion into this model. The cogency Of this model is further diminished by their synonymous treatment Of two different interpre— tations Of P. The conceptual distinction between these interpretations has been established. Moreover, it has clearly been illustrated that each Of these interpretations will yield different control limits with a second overall rating achieved by the second interpretation as compared to a sixth overall rating for the first interpretation. Chapter IV did not question the usefulness of either in— terpretation but merely noted the distinction which Bier— man, Fouraker, and Jaedicke failed to make. ‘i __ A . ~ - v. _ 266 Finally, Bierman, Fouraker, and Jaedicke treated the cost Of an investigation as a constant. The example in this chapter illustrates that this is clearly not the case. The cost Of an investigation depends upon the par- ticular assignable cause as well as the established in— vestigation procedure. Once it is agreed to use the second interpreta— tion Of the Bierman, Fouraker, and Jaedicke model for control at the performance level, the second, third, and fifth Objections that were discussed above still hold. Why then, does this approach achieve a higher overall ranking than the Equalization approach that was designed to counteract the Objections Of the Bierman, Fouraker, and Jaedicke model? This reason is that the Equalization approach uses probabilities in the Classical manner; in contrast, the Bierman, Fouraker, and Jaedicke second approach uses the detail Of Table 18 to estimate the probability that a performance is due tO chance given the test value. On balance, it appears that these latter probabilities are sufficiently important determinants i Of effective control limits in this example tO outweigh these stated Objections Of the Bierman, Fouraker, and Jaedicke approach. , The McMenimen approach also achieved a more effec- tive overall ranking than the Equalization approach. The distinguishing features Of the McMenimen approach that 267 were noted in Chapter IV were that: 1. An investigation might be terminated before find— ing the cause. 2. A cost deviation might be reduced by various amounts. In discussing this approach at that time it was this writer's belief that while it might be worthwhile to terminate an investigation short Of finding the cause it would not be feasible in practice unless: l. The cost of an investigation is very high in relation tO the present value Of expected savings. 2. The cost of control is so high that nO action would be taken even if the cause were determined. 3. The probability that the variance is attributed to an assignable cause other'than those already investigated is very low. 
It was noted in Chapter IV that the first Of the above items is not likely to hold for analyses at the performance or Operational levels although it may hold for monthly or yearly analyses at a departmental or higher organizational level. With regard to the second item, it was also noted in.Chapter Iv that if it is worthwhile to establish a cer— tain standard in the first place, it would be worthwhile to re-establish it unless conditions have changed in which case the standard should be changed. The example in 268 Chapter VI indicates that the third item will Often hOld.‘ For example, Table 32 indicates that the probability that test value 252 is attirubted to illness after an investi— gation for poor attitude is only .0833. For this reason, termination Of an investigation before the cause is identified may be profitable. The second feature Of the McMenimen approach was questioned on the grounds that a realistic standard once developed should be maintained or else it should be Changed. Stated simply, a deviation known to be attributed to an assignable cause should not be permitted to exist. In applying the McMenimen technique in the example developed in this Chapter this Objectionable feature was disregarded. McMenimen developed his approach to overcome the weaknesses he noted in the Bierman, Fouraker, and Jaedicke approach. Since he criticized neither the Bierman, Four— aker, and Jaedicke use Of the test value as the mean Of the Off-standard condition nor their allegation that four tests must be conducted on the average to detect an assign— able cause, one might assume that he intended tO make use of these ideas in applying his model. However, for reasons nOted by this writer in his application of the McMenimen technique, the savings values would be difficult to Obtain without consideration Of specific assignable causes and their related means. Accordingly, this McMenimen oversight 269 caused by his failure to develOp a numerical application was modified in this chapter. The four test assumption, however, was retained. In spite Of this, the McMenimen application achieved a better overall ranking than that received by the Equalization approach. The reason for this is the same as the reason that the Bierman, Fouraker, and Jaedicke second approach had a better overall ranking than the Equalization approach. In the McMenimen applica— tion, Table 18 was used to estimate the probability that Chance and each assignable cause was prevailing given the test value. As was true with Bierman, Fouraker, and Jaedicke first approach, it appears that these probabilities are, on balance, sufficiently important determinants Of effective control limits to outweigh some other question— able aspects. Following the Minimization; Bierman, Fouraker, and Jaedicke second; and McMenimen approaches respectively in the aggregate ranking are the three approaches that in- volve Classical statistics.27 Of these approaches, Equali- zation ranks fourth; Basic Control Chart fifth; and Bierman, Fouraker, and Jaedicke first ranks sixth. From all the discussion, this order is not too surprising. The 27It has already been noted that Classical statis- tics has not typically considered financial implications. These considerations such as found in the Bierman, Fouraker, and Jaedicke first and Equalization approaches do not, how- ever, materially alter the conceptual basis Of the Classi— cal approach. 270 Equalization approach considers theiinancial implications not considered by the Basic Control Chart approach. 
At the same time it remedies some Of the objections to the Bierman, Fouraker, and Jaedicke first approach. Hence, it might be expected to top the list of the Classical statis— tical methods. Summapy of Generalizations All this discussion, then, leads to four general conclusions. First, and most significant, the approach conventionally employed by accountants is generally in- ferior to the statistical methods. Second, the Minimiza- tion approach tends to be most effective for control pur- poses. Third, the Bierman, Fouraker, and Jaedicke second and McMenimen approaches which consider the probability that chance is prevailing given the test value appear to rank next most effective after the Minimization approach. Their employment Of these rather unconventional probabili— ties is sufficiently useful to counteract other previously designated deficiencies associated with these approaches. Fourth, the Classical approaches headed by the Equaliza— tion approach appear to be the least effective of the statistical approaches. §Eability Of Generalizations The third and fourth conclusions are not nearly as important or as valid as the first two. They could be 271 influenced by Changing some of the assumptions Of the prob— lem. For instance, the Equalization approach ranks sixth for the lower control limit under sampling plans A and B. In both cases the Equalization control limits are closer tO the standard than the approach which ranked first. One could tamper with the distribution in Table 18 in such a way as tO reduce the lower control limit under the Equali- zation approach without greatly affecting the lower con- trol limits under the other approaches. This could be achieved by including fewer performances due to improve- ment between the lower control limit for the Equalization approach and the standard thus lowering the probability Of a Type II error which will cause the decision maker to accept the hypothesis until such tampering could be de- signed to give the Equalization approach a better ranking. If in another example this latter type of distribution would in fact prevail the Equalization approach might rank better. Conversely, tampering in the reverse direction could lead to a poorer ranking for the Equalization approach. The number of assignable cause and chance perfor— mances occurring at any test value play an important role in the comparative financial analysis used to rank these approaches. This is illustrated for testing plan A in Table 76. 272 TABLE 76.-—Numerical differences in control limits between the top ranking and the other approaches for testing Plan A UCL LCL Numerical Numerical Approach Difference Ranking Difference Ranking AC +(10 — 11) 5.5 —(9 - 10) 5 BCC +1 2.5 0 1.5 BFJ 1st +(lO - 11) 5.5 —5 3 BFJ 2nd a 7 —7 4 McM b 0 l a 7 Equal -1 4 +4 6 Min +1 2.5 0 1.5 a = indeterminate b = CL for dull knives is used (259-260) The column labeled "numerical difference" reports the difference between the control limit achieving a num— ber one rating and each respective approach. The rankings are also shown. Three approaches had upper control limits only one minute from the most effective one. The Basic Control Chart and Minimization approaches had upper con— trol limits one minute over the McMenimen upper control limit and the Equalization approach had an upper control limit one minute lower. 
In terms of numerical differences there would be a three-way tie; but because there are more chance performances that would be investigated in the direction of the Equalization upper control limit, its additional investigation cost is higher. Consequently, the Basic Control Chart and Minimization approaches tie for second ranking and the Equalization approach ranks fourth. Had the performances been distributed differently the rankings might have been altered. This comparison is complicated by the fact that the McMenimen upper control limit for poor attitude and laziness is in the direction of the Basic Control Chart and Minimization limits.

The numerical differences reported for the lower control limits illustrate the influence of the concentration of performance values more clearly. The Equalization lower control limit is four minutes over the two approaches ranking first. Both Bierman, Fouraker, and Jaedicke interpretations and the Accountant's Conventional approach had larger numerical differences than the Equalization approach. In spite of this fact the Equalization approach received a poorer ranking. The reason is that there are a greater number of chance performances requiring investigation between the lower control limits ranking first and the Equalization control limit than between the control limit ranking first and the control limits for the two Bierman, Fouraker, and Jaedicke approaches and for the Accountant's Conventional approach. A slightly different distribution of chance performances may cause different results.

Final Conclusions

The point to be emphasized is that the control limits under the various statistical approaches are generally fairly closely grouped about the control limit achieving the number one ranking, while the Accountant's Conventional method produces control limits fairly far apart from this grouping. This is illustrated in Table 77. Because of this, and because the rankings of the statistical methods seem to vary with differing testing plans and with different sets of assumptions, it is probably premature to support any particular statistical method. Any one of these would generally be an improvement over the Accountant's Conventional method.

A company planning to adopt statistical procedures for variance control might begin by taking certain key operations, calculating the control limits under each of the approaches, and running a financial analysis to determine the one best suited to their operation. For a beginning, the firm might be satisfied to use the Basic Control Chart approach which, of course, is the simplest of all the statistical methods. This would also make a smoother transition into the more sophisticated methods which should be instituted after more knowledge is obtained about the operation. Intelligent guesses about the relevant distributions and costs produce better control limits than arbitrarily selected ones for which the shape of the distributions and the costs are implicit but unspecified by the analyst. One might also begin by making the necessary estimates to employ the Minimization approach. This would be followed by careful tabulation of subsequent results according to the format indicated in Table 18.
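Part of the Basic Control Chart approach's appeal as a starting point is that its limits require nothing beyond the chance mean, the standard error, and a tabled t value. A minimal sketch, using the figures of this example (chance mean 245, standard error of the mean 3.8923 for samples of five, and t = 2.776 at the .05 level with a two-tailed test), is given below.

# Basic Control Chart limits for samples of five at the .05 level (two-tailed).
mean = 245            # chance (standard) mean, in minutes
std_error = 3.8923    # standard error of the mean for samples of five
t = 2.776             # t value used earlier for this sample size and level

lcl = mean - t * std_error    # about 234.2, as derived earlier in this chapter
ucl = mean + t * std_error    # about 255.8, as derived earlier in this chapter

print(round(lcl, 1), round(ucl, 1))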
on- as- refines- sa.ea+ ee.ea+ leelmac+ Aaaueae+ ca 0 U m m o .o m a Comoummm qon MOM scam defiance no: How swam mcflumoa nomad mcflpmou Ham How monooonomc Hoppo men too moaxcmn don may COTBqu muHEflH HOHpCOO opp CH moocoHEMMHo HCOHHoEoz|I.ss momma 276 of Bayes' Theorem from time to time could be used to revise the original estimates until the differences between sub— sequent estimates become insignificant. This procedure assumes that the population mean to the Operation remains constant. While the Basic Control Chart approach might be easy to use in the absence Of more detailed information, and while it may serve as a good transition intO other statistical methods, this writer would not recommend its long continuance. The conventional practice Of selecting an arbitrary level of significance between .001 and .05 just does not yield as satisfactory control limits as is commonly believed. In this example, the .05 level was used. Although the Basic Control Chart approach ranked ' fifth Of the sixth statistical methods, the control limits under all statistical approaches were generally close to— gether. However, if the .01 level had been Chosen, the Basic Control Chart approach would not have ranked much better than the Accountant's Conventional approach. In this case the differences between the control limits under the Basic Control Chart approach and those of the other statistical approaches would be wider. It might be noted in passing that the Basic Con— trol Chart approach has been used in this dissertation to describe the conventional Classical statistical procedure. The control Chart diagram could certainly be used under 277 any approach to aid the analyst in visualizing the results of the Operation. Before concluding this chapter, one more Observa— tion must be made. It has frequently been noted that the control limits utilized under the Accountant's Conventional approach are not altered by Changes in the testing plan. The upper control limit in Table 78 is shown at 270 for each testing plan. For the other approaches, however, there is a tendency for the control limit to move Closer to the standard as the testing plan moves from A through D.28 This movement toward the standard takes cognizance of the average-out effect that was elaborated upon in Chapter II. That is, the existence of an assignable cause may not be too strongly suspected for a single performance as high as approximately 260 for testing plan A. However, an investigation would normally be undertaken should the mean Of five performances (for testing plan D) reach as high as 260; it is not likely that five chance performances will average 260. This is analogous tO saying that a tail is a likely occurrence in one flip of a coin; but five tails in five flips are not nearly so likely to occur. There is still another reason why the control limits move Closer to the standard as the reliance on sampling is in— creased. This is that a Type II error becomes more 28This also holds for the lower control limit. 278 costly. Consequently, the control limits are tighter so that Type II errors are made less frequently. TABLE 78.--Summary Of upper control limits by testing plan Testing Plan Approach A B C D AC 270 270 270 270 BCC 260-261 260-261 255.8 255.80 BFJ lst 270 250 250.82 246.68 BFJ 2nd a b 254-255 253.94 250.13 McM 259-260 250b 250.20C 246.30 Equal 258—259 253-254 250.54 248.03 Min 261 255 252 249 a . 
Failure to recognize the average-out problem is even more serious under the conventional control programs which analyze reported variances on a summary basis. Assume that the 1000 performances included in this example represent the performances to be included in the summary report. Table 17 shows that the mean of all these performances is 251. Since this is less than the 10 per cent rule would allow (even less than 5 per cent), accountants typically would regard the variance as insignificant. However, Table 17 also shows that 40 per cent of the performances were attributed to assignable causes--a situation which should certainly be considered significant. If no assignable causes were prevailing, the accountant has 95 per cent assurance the mean of 1000 performances will lie within 244.4 and 245.6 [245 ± 1.96(7.7846/√600), where 1.96 is the normal deviate corresponding to the middle 95 per cent of the curve and 7.7846 is the standard deviation of the 600 chance performances]. Therefore, it is recommended that greater emphasis be placed on control at the performance level because it is more timely. Since statistical procedures account for the degree of summary reflected in a report, they should be used to interpret summary results as well as individual performance results.

CHAPTER VII

SUMMARY AND CONCLUSIONS

Reason for Study

This study emanated from a general dissatisfaction with the lack of objective criteria utilized in determining the significance of variances from standard. The accountant confronted with variance reports has not employed any structured guidelines to distinguish between when an investigation should be undertaken, on the one hand, and when no further action is warranted on the other. Moreover, it was felt that too much reliance is placed on the variance report as a control device. While on-the-spot observation of performance is also currently considered to be an important and timely aspect of control, it was this writer's belief that these control procedures are not of the utmost benefit without an organized and analytical framework to signal the need for follow-up.

Standard costs have been widely adopted. The ultimate goal, of course, is to pin-point areas where investigation is needed. To accomplish this, great effort is typically expended in developing realistic standards. Periodically, detailed procedures are employed to report actual performance and to classify variances by source (that is, labor efficiency, material price, etc.). After all this precision this writer felt that it was ironic that the control limits are selected without a formalized framework for consideration of the relevant factors.

After consulting with numerous faculty and practitioners, it was felt that significance determination was a real problem of sufficient import to warrant further study. After a review of conventional variance analysis in Chapter II a number of dissertation objectives were specified. In the following sections, each objective will be reconsidered in light of how it was accomplished and what conclusions resulted.

Conceptual Distinction between Significant and Insignificant Variances

Accounting definitions of control take the following general form. Control entails those procedures designed to make actual results conform to the plan or standard. Such definitions do not account for any variance and certainly do not make a conceptual distinction between significant and insignificant variances.
The first Objective was to make such a distinction. It was felt that a con— ceptual framework might supply Clues for determining how such a distinction might actually be made in practice. From reading literature in the field of quality control, it became apparent that many types of variances result 282 from a host Of unexplainable factors which are identified as chance. In other words, there is an omnipresent non— uniformity which cannot be eliminated. This non-uniformity is a natural phenomenon——even plants or animals experimen— tally developed under the same conditions are not identi- cal. Likewise, tasks performed by the same worker under the same conditions are not identical. The reasons for such variation are unknown to man and have been identified by quality control engineers and statisticians as chance. Thus, this Objective was accomplished by interdisciplinary study. However, an examination of the typical variance classifications revealed that chance is not Operative for all of them. For expenses which are either contractual, a matter Of company policy, or determined by outside agen— cies, chance is not Operative. In these cases any vari— ance is significant in the sense that its cause should be identified. Thus, chance does not affect the material price, labor rate, or budget variance. It does, however, cause variation in labor and overhead efficiency, mate— rial usage, and volume as well as in some non—manufactur- ing costs. Three recommendations result from these findings. First, an insignificant variance should be regarded as one due to chance factors. Since these variances can neither be explained nor eliminated, there is nO reason to 283 undertake an investigation. Second, a significant vari- ance should be regarded as one resulting from some cause which is capable Of being identified (assignable cause). Accordingly, an investigation should be undertaken to identify the cause. Third, this distinction should be incorporated into the accountant's definition Of control. For variance Classifications which entail chance factors, control would be defined as consisting Of those procedures designed to maintain variation within limits due to Chance. For those variance classifications where chance is not Operative, the conventional accounting definition as the procedures designed to make performance conform to the standard is satisfactory. i With all of this settled, it would now seem tO be r a relatively simple matter to Observe performance under established conditions for the purpose Of determining the limits within which Chance is operative. The difficulty is that the distribution of values due to chance overlaps the distribution Of values due to assignable causes. The l dilemma remains: Where should the control limits be placed? I Recognition of Chance factors, however, provides the clue for confronting this problem and for utilizing more Objec— tive criteria for significance determination. Since prob- ability statistics is an area concerned with procedures for evaluating the patterns of Chance influences, its use is a logical extension from the recognition that chance 284 factors distinguish between significant and insignifi- cant variances. Examination Of Statistical Models The second dissertation Objective was to examine more Objective criteria for significance determination. This was accomplished through two major steps. First, three statistical models that have been proposed by others were evaluated. 
One of these, which has been identified as the Basic Control Chart approach, employes conventional Classical statistics. Another, that was devised by Bier— man, Fouraker, and Jaedicke, actually involves two ap- proaches. The third of these was conceived by LeO Mc- Menimen in a Master's thesis from The Pennsylvania State University. Since each of these available models involves some questionable aspects, this writer constructed two additional models. These represent new applications of already developed statistical concepts. These statistical models are capable Of consider- ing various combinations are the following eight relevant factors: 1. Distribution of values Of chance performances. 2. Distribution Of values for each assignable cause. 3. Probability of making an unwarranted investigation (Type I error). 285 4. Probability Of accepting variance when an inves- tigation is warranted (Type II error). 5. Opportunity cost Of Type I error. 6. Opportunity cost of Type II error. 7. Prior probabilities Of the occurrence Of Chance and each assignable cause. 8. Probability that any given variance is due to chance and the probability that it is due to each assignable cause. The Basic Control Chart approach formally considers the distribution of chance performances (factor 1). This in turn enables the evaluation Of the probability of com- mitting a Type I error (factor 3). This approach may in some undefined way also evaluate the probability of com- mitting a Type II error (factor 4) for some alternative parameter that is considered serious. If this is done, an attempt would be made to select a control limit that would yield a "low" probability Of a Type II error without making the probability of a Type I error tOO "high." The diffi- culty is that there are no available criteria for deter— mining what is "high" and what is "low." With or without consideration Of the probability of a Type II error, this approach would normally involve selection of a control limit that would yield a level of significance between .001 and .05. The major Objection to this approach is 286 that it does not consider the Opportunity costs associated with each type Of error. Without this, the analyst cannot know when he has achieved a good balance between a Type I and a Type II error. Bierman, Fouraker, and Jaedicke introduced a model which actually turned out to be two models because Of their inadvertent dual definitions Of the probabilities used in their model. From their numerical example, it is Obvious that they intended P to be the same as the Classi— cal probability of a Type I error.1 However, in an at— tempt tO improve upon the Basic Control Chart approach these writers introduced the cost of an investigation and the expected Opportunity cost resulting from failure to identify an assignable cause (factor 6) formally into their model. They did not, however, attempt to incor— porate the probability of committing a Type II error into their model. They apparently thought that they were restating the definition for the probability Of committing a Type I error when they interpreted P as "the probability Of an unfavorable deviation resulting from uncontrollable [Chance] causes."2 Unknowingly, then, they introduced into the 1In this dissertation this approach has been re~ ferred to as the Bierman, Fouraker, and Jaedicke first interpretation of P. Bierman, Fouraker, and Jaedicke, 121. 
287 model a different kind of probability (factor 8) which this writer considered separately in an approach which he identified as the Bierman, Fouraker, and Jaedicke second approach. This approach explicitly considers factors 1, 2, 6, and 8. It also considers the cost of an investiga— tion. Even after these different interpretations of P are noted, both approaches still involved several ques- tionable aspects which are summarized briefly below. First, in their numerical example Bierman, Fouraker, and Jaedicke'uax1a.summary expense classification for a period of one year rather than individual performances. Second, they assumed that the mean of the assignable cause is equal to the test value (or actual result)--a condition which would only coincidentally be true. Third, they arbitrarily assumed that an off-standard condition re- quires four tests on the average before detection. The second and third aspects result in a poor evaluation for the Opportunity cost of a Type II error (factor 6). Fourth, they treated the cost Of an investigation as a constant when, in fact, the cost Of an investigation de— pends upon the cause and the order Of the investigation procedure followed. Because McMenimen did not include a numerical example it is difficult to tell what his precise treat- ment would be. It does, however, appear that his model L... 288 would consider factors 1, 2, 5, 6, and 8. This approach has two distinguishing features: 1. An investigation might be terminated before find- ing the cause. 2. A cost deviation might be reduced by various amounts. It is this writer's Opinion that the first of these features is a good one, particularly if at some point in the investigation process there appears to be a low prob- ability that the variance is attributed to an assignable cause other than those already investigated. Conversely, the second feature is not regarded as valid. It should be worthwhile to maintain the standard if it was realisti- cally established. If it was not or if conditions have Changed, the standard should be revised. Stated simply, a deviation known to be attributed to an assignable cause should not be permitted to continue. This writer also implied that McMenimen would assume that the mean Of the assignable cause is equal to the test value and that an off-standard condition requires four tests on the average before detection. These are also assumptions Of Bierman, Fouraker, and Jaedicke but neither is valid. In an effort to counteract the limitations just noted for the Basic Control Chart; Bierman, Fouraker, and Jaedicke; and McMenimen approaches, two additional models 289 were Constructed. These have been identified as the Equal- ization and the Minimization appraoches respectively. The Equalization approach considers factors 1 through 6 inclu- sive. The Minimization approach considers factors 1 through 7 inclusive. Example Testinggthe Relative Control Effectiveness Of the Conventional Accounting and the Various Statistical Methods The third Objective was to illustrate through an example the superiority Of the statistical models over the procedures conventionally employed by accountants. The test consisted of three parts. First, a hypothetical example was develOped for which the causes and performance values Of 1000 performances of a certain Operation were assumed. Second, these values in conjunction with economic assumptions were used to compute the upper and lower con- trol limits for each of the models under four different testing plans. 
These models included all of the statistical models in addition to the 10 per cent cut-off point which was selected to represent the Accountant's Conventional approach. The third phase of the test consisted of a financial analysis conducted to rank the approaches for control effectiveness for each corresponding control limit and testing plan. The financial analysis consisted of analyzing the approaches by twos insofar as it was necessary to rank them in preferential order. This analysis took the following general form. Of any two approaches being compared, the one closer to the standard bears a greater investigation cost than the one farther from the standard. However, it also carries additional savings because it signals more investigations and thus detects assignable causes earlier. The additional investigation costs and the additional savings are computed by a rather technical process which is explained in Chapter VI. A decision is made on the following basis.

1. If the added savings is greater than the added investigation cost, the approach with the control limit closer to the standard is regarded as more effective.

2. If the added savings is less than the added investigation cost, the approach with the control limit farther from the standard is regarded as the more effective approach.

This analysis for each pair of approaches was performed until it became possible to rank all of the approaches.

After summarizing the rankings, it was concluded in Chapter VI that no one approach ranked first for each control limit under each testing plan. The Minimization approach was, however, either first or tied for first in five out of the eight cases. Moreover, it was either second or tied for second in each of the remaining three cases. (Refer to Table 74.) Furthermore, the sum of the rankings was lowest for the Minimization approach. It, then, was generally, but not always, the most effective. The superiority of this approach was anticipated because it considered more (seven) of the eight relevant factors than any of the other approaches. Also, it was constructed to eliminate the questionable aspects involved with the statistical models that have been proposed in the literature.

Unexpectedly, the Bierman, Fouraker, and Jaedicke second and McMenimen approaches achieved overall rankings of two and three respectively. These were the only two approaches to incorporate factor 8 directly into the model. The test, then, indicated that the probability that any given variance is due to chance and the probability that it is due to each assignable cause is an important determinant of effective control limits. In fact, in this example it was sufficiently important information to outweigh some noted questionable aspects associated with these approaches.

The three remaining statistical models involve either Classical statistics or extensions of Classical statistics. As expected, the Equalization approach which formally considers factors 1 through 6 outranked both the Bierman, Fouraker, and Jaedicke first and Basic Control Chart approaches.

It should be noted that these conclusions are true in general and not for each individual control limit with its corresponding testing plan. The "Stability of Generalizations" section in the conclusions to Chapter VI explained how it might be possible to tamper with the probability distributions in Table 18 in order to achieve different results for individual circumstances.
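The pairwise screening described earlier in this section can be stated compactly. The sketch below is illustrative only: the added savings and added investigation cost for any pair of limits come from the technical analysis in Chapter VI and are not reproduced here, and the dollar figures are placeholders rather than values from the example.

```python
# Hedged sketch of the pairwise decision basis: keep the control limit closer
# to the standard only when the extra savings it signals pay for the extra
# investigations it triggers.
def more_effective(closer_limit, farther_limit, added_savings, added_investigation_cost):
    """Return the preferred of two control limits under the stated decision basis."""
    return closer_limit if added_savings > added_investigation_cost else farther_limit

print(more_effective(252, 255, added_savings=12.40, added_investigation_cost=9.10))  # 252
print(more_effective(252, 255, added_savings=3.20, added_investigation_cost=9.10))   # 255
```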
The reader will recall that the objective of this test was to illustrate the superiority of the statistical models over the procedures conventionally employed by accountants. The most significant conclusion, therefore, is that the Accountant's Conventional approach was designated as generally the least effective method of control. The following four findings support this conclusion:

1. The Accountant's Conventional approach achieved the least effective ranking for six out of the eight cases.

2. It obtained the highest sum of rankings.

3. The control limits under the various statistical approaches are generally fairly closely grouped about the control limit achieving the number one ranking while the Accountant's Conventional approach produces control limits fairly far apart from this grouping. As a result, differences in the rankings of the statistical approaches are not nearly so significant as the difference between the statistical approaches taken together and the Accountant's Conventional approach.

4. The dollar difference between the added investigation cost and the added savings is relatively greater for financial analyses between the Accountant's Conventional approach and the approach which occupies the next highest ranking than it is for analyses between approaches not involving the Accountant's Conventional approach.

Aggregation Problems

The fourth objective was to show through this example the tendency for significant variances to be averaged-out in the process of accumulation used in developing the typical performance report. Table 17 shows that 40 per cent of the 1000 performances upon which the example was based are attributed to assignable causes. From this, it would appear that the operation is really not in control. However, an analysis typically employed by accountants might fail to disclose this fact. This type of analysis involves a comparison of the actual dollar cost of the operation for some period of time with the budgeted dollar cost. For convenience, assume that this period of time is coincidental with the time required to complete the 1000 performances. In physical terms, this same analysis would compare the mean of 1000 performances with the standard. Table 17 indicates that the actual mean of these 1000 performances was 251. The difference between 245--the standard--and 251 would frequently be regarded by the accounting analyst as insignificant. In fact, under the 10 per cent rule conventionally employed, significance would not be recognized unless the mean of the 1000 performances was as high as 270. Even the Accountant's Conventional approach applied on an individual performance basis would provide more adequate control by detecting significance more readily than the common sole reliance on the summary report.

This is an example of what may happen when aggregate reports rather than the individual performances are the basis of control. Aggregate reports may be useful for reviewing how effective control has been; but, here again, accountants should make use of statistical concepts. The more performances that are represented in the report, the closer the mean should be to the standard. In this situation significance would be indicated if the mean of the 1000 performances fell outside the range from 244.4 to 245.6. This range represents 245 ± 1.96(7.7846/√600), where 7.7846 is the standard deviation of the 600 chance performances and 1.96 is the normal deviate which includes 95 per cent of the chance performances.
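The arithmetic behind this comparison can be reproduced in a few lines. The sketch below uses only figures quoted in the text and in Appendix B: the 245-minute standard, the 7.7846-minute standard deviation of the 600 chance performances, and the cause groups whose frequencies and means are stated in Appendix B. Those groups cover 840 of the 1,000 performances (the remaining causes in Table 17 are not reproduced in this chapter), so the recomputed mean only approximates the 251 reported there.

```python
# Sketch of the aggregation problem: rebuild the grand mean from the cause
# groups quoted in Appendix B, then test it against the 10 per cent cut-off
# and against the statistical summary range 245 +/- 1.96 * 7.7846 / sqrt(600).
import math

STANDARD, SD_CHANCE, N_CHANCE, Z_95 = 245.0, 7.7846, 600, 1.96

groups = [            # (cause, number of performances, mean minutes)
    ("chance",        600, 245),
    ("poor attitude",  60, 255),
    ("illness",        20, 265),
    ("dull knives",   120, 270),
    ("laziness",       40, 275),
]
n = sum(count for _, count, _ in groups)
grand_mean = sum(count * mean for _, count, mean in groups) / n   # about 251

ten_percent_cutoff = STANDARD * 1.10                              # 269.5 minutes
half_width = Z_95 * SD_CHANCE / math.sqrt(N_CHANCE)
lower, upper = STANDARD - half_width, STANDARD + half_width       # about 244.4 to 245.6

print(f"grand mean of these groups : {grand_mean:.1f}")
print(f"10 per cent rule flags it? : {grand_mean > ten_percent_cutoff}")   # False
print(f"statistical range flags it?: {not lower <= grand_mean <= upper}")  # True
```

Even with a large share of off-standard work in the mix, the grand mean sits far below the 270-minute threshold implied by the 10 per cent rule, yet well outside the much narrower range that the chance distribution alone would allow.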
Even if the mean of the 1000 performances fell within the 295 above interval some unfavorably significant variances may have been off-set by favorably significant ones. For this reason, and also in order to achieve more timely con- trol, it is suggested that the major focus of control be addressed to individual performances rather than to weekly or monthly reports. This could, of course, be accomplished through sampling at the performance level in a manner simi- lar to testing plan D. Along these same lines, it should be noted that the control limits under the Accountant's Conventional ap- proach remain the same regardless of the sampling plan. There is a strong tendency, however, for the control limits under the statistical approaches to move closer to the standard as fewer performances are tested. In general, two factors account for this: 1. Type II errors become more expensive so it is im— portant that they be made less frequently. 2. As the sampling of more than one performance is introduced allowance is made for the fact that occasional extreme chance performances will be averaged—out by the more frequent performance values closer to the standard. Mere recognition of the average—out problem will not eliminate it. However, it could be greatly reduced under any method of significance determination by focusing 296 greater control attention at the performance level. Fur- ther reduction would result from using statistical methods of significance determination which select the control limits to account for the average-out effect. Moreover, statistical concepts can be used at the summary report level. These account for the average-out effect by re- ducing the amount of allowable variation as the degree of report summarization is increased. For a report covering the 1000 performances, the mean of these performances should fall in the range between 244.4 and 245.6. This range could easily be expressed in dollar terms by mul- tiplying 244.4 and 245.6 respectively by the standard3 wage rate per minute. Some may argue that control at the performance level would be more expensive. There is, of course, the initial cost involving the time required to estimate the values pertaining to the relevant factors involved in calculating the control limits. For the Minimization approach, Bayes' Theorem should be applied periodically thereafter to revise factor 7. Once this is done, the extra cost of maintaining control at the performance level should be small. Procedures are currently used to accu- mulate information by performance for the summary report. 3The standard wage rate is used because any labor rate variance should be removed from this analysis. 297 The additional time required to compare performance re- sults with control limits at this point would not appear to be great and, of course, sampling can be used. The control limits simply provide guidelines for performance observation which is currently used by the worker and his foreman and sometimes even by those higher in command. Under current procedures off-standard conditions continue until they are arbitrarily deemed significant on summary reports.4 Then time must be spent localizing them. Therefore, continued savings resulting from more timely detection of assignable causes in conjunction with increased detection resulting from reduction of the average- out effect should compensate for the increased analysis at the performance level. 
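The periodic revision of factor 7 mentioned above can be sketched as a direct application of Bayes' Theorem. The priors below are the ones shown for test value 260 in Appendix B; the likelihoods, however, are taken from normal curves centered on each cause's mean with a common 7.7846-minute spread, which is an assumption made only for this illustration (the dissertation works from the empirical distribution in Table 18).

```python
# Hedged sketch of revising the prior probabilities of chance and each
# assignable cause (factor 7) after a performance is observed.
from scipy.stats import norm

PRIORS = {          # cause: (prior probability, mean performance in minutes)
    "chance":        (0.7143, 245),
    "poor attitude": (0.0714, 255),
    "illness":       (0.0238, 265),
    "dull knives":   (0.1429, 270),
    "laziness":      (0.0476, 275),
}
SD = 7.7846         # assumed common standard deviation (illustrative only)

def revise(observed_minutes):
    """Posterior probability of each cause given one observed performance."""
    joint = {cause: prior * norm.pdf(observed_minutes, loc=mean, scale=SD)
             for cause, (prior, mean) in PRIORS.items()}
    total = sum(joint.values())
    return {cause: value / total for cause, value in joint.items()}

for cause, posterior in revise(262).items():
    print(f"{cause:14s} {posterior:.3f}")
```

Repeating this revision as results accumulate, and tabulating them in the format of Table 18, is the kind of updating the text describes; once successive revisions stop changing materially, the estimates can be left alone.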
Justin Davidson who has been active in applying statistical techniques to auditing and accounting problems estimates that the set—up costs would approximate $5,000 to $10,000--a range which he regards as modest for a system change.5 These costs would include the cost of establish- ing control limits, explaining the details in non-mathe- matical terms to those in charge, and writing a simple 4Off-standard conditions may also be detected by observation but there are currently no organized criteria employed to detect assignable causes on this basis. 5Reported on telephone conversation on July 21, 1967. Mr. Davidson is a partner with Touche, Ross, Bailey, and Smart. 298 set of instructions for the worker and foreman who must maintain the system. The costs would be lower for com— panies whose controllers have used statistical procedures for other purposes. Mr. Davidson feels that the savings would soon compensate for these costs. For ten years he has believed that accountants should begin to establish statistical control limits for use in accounting variance analysis. The accountant may now wonder whether there are a sufficient number of statistically compentent account- ants available to instigate such procedures. In Mr. Davidson's opinion, 25 per cent of the companies that have a standard cost accounting system have internal talent capable in this area. The remainder would need outside help. Mr. Davidson thinks that all of the big eight Certified Public Accounting firms have staff skilled in statistical applications. Some of the smaller na- tional firms also have personnel proficient in this field. There is a growing awareness in the business community of the advantages of statistical and mathematical appli- cations. ACcordingly, at the college level business curricula are requiring heavier emphasis in these areas. At the post graduate level, there have been an increasing number of mathematics and statistics seminars to better acquaint professional peOple with the advantages of ap— plications in these areas. There are, then, at least a 299 sufficient number of personnel available to begin estab- blishing systems for statistical variance control and others are being educated for such work. It is, therefore, recommended that statistical control limits be established at the performance level. Perhaps it would be wise to begin with a few of the more important operations. If the savings readily compensate for the set—up costs as Mr. Davidson and this writer feel they will, these procedures would logically be extended to include more Operations. Summary of Conclusions The following conclusions result from the study: 1. The accounting definition that control consists of those procedures designed to make actual re- sults conform to the standard does not explain why some variances are not investigated. 2. UneXplainable factors called chance cause variation in labor and overhead efficiency, material usage, and volume as well as in some non-manufacturing costs. Variances that result solely from chance should be identified as insignificant. Those re- sulting from chance and assignable causes should be identified as significant. The recognition that chance factors cause variable performance leads to twatesting of statistical models since dix A. 300 probability statistics deals with evaluating pat- terns of chance occurrences._ There is very little literature dealing with sta- tistical applications to accounting variance analy- sis6 and most of it deals with procedure rather than concept. 
Moreover, there is little evidence to suggest that the procedures that have been pro- posed are used. In fact, after some inquiry this writer has been unable to find a single case of their usage. This could be accounted for because the accountant not having recognized chance factors has no logical reason to search for statistical models. Another possible eXplanation is that many accountants are not currently statistically so- phisticated. Still another reason might be that they have discovered some of the questionable as- pects of the proposed models and have discarded them. As a group, the statistical methods produce sig- nificantly more effective control than the Ac- countant's Conventional approach. This is illus— trated in the example. Thus, the hypothesis is confirmed. 6What literature is available is listed in Appen- 301 In general, the statistical models which incor- porate the largest number of relevant factors achieve the most effective control. Thus, the statistical method identified as the Minimization approach which considers the first seven of the eight relevant factors (more than any other) was generally the most effective. However, factor 8, which it did not consider, proved to be a suffi- ciently important determinant of effective control limits to enable the Bierman, Fouraker, and Jae— dicke second and McMenimen approaches to achieve overall rankings of two and three respectively in spite of some questionable aspects of these models. The individual rankings of the statistical ap— proaches can be expected to vary somewhat depend— ing upon the probability distributions of chance and assignable cause performances and also upon the testing plan with its corresponding control limit. Significant variances can be averaged-out in the summary report. Summary of Recommendations From these conclusions it is recommended that: Accountants recognize the existence of chance factors and incorporate this concept into their 302 definition of control. For those variance classi— fications for which chance is operative, control would then be defined as consisting of those pro- .cedures designed to maintain actual results within limits due to chance. This definition explains why insignificant variances (due to chance) are not investigated. Experience, judgment, and intuition be used to develOp the information required for the eight relevant factors for several important operations. This information could be develOped for overhead efficiency and material usage as well as for labor efficiency. Surely experience and judgment will be more useful if applied in an organized rather than in a haphazard way. Various statistical models be tested according to a plan similar to that outlined in Chapter VI to determine the one most feasible for a given Opera- tion and testing plan. For this test, the analyst need not confine himself to those methods dis- cussed in this dissertation. Indeed, other varia— tions may prove to be more satisfactory. The most desirable statistical model be directed toward control at the performance level for those operations where the benefits of more timely con— trol and surer detection (without the average-out 303 problem) are thought to outweigh the additional costs. Statistical procedures be employed for analyzing the summary report. These procedures account for the degree of summarization and thus reduce the average-out effect. BIBLIOGRAPHY Books and Monographs Allan, Douglas H. W. Statistical Quality Control. New York: Reinhold Publishing Corporation, 1959. Arkin, Herbert. 
Handbook of Sampling for Auditing and Accounting. Volume I—-Methods. New York: McGraw- Hill Book Company, Inc., 1963. Barnhart, Clarence L. (ed.). ‘The American College Diction— ary. New York: Harper and Brothers Publishers, 1953. Bierman, Harold, Jr. Managerial Accounting: An Intro- duction. New York: Macmillan, 1959. Bierman, Harold, Jr., Fouraker, Lawrence E., and Jaedicke, Robert K. Quantitative Analysis for Business De- cisions. Homewood, Illinois: Richard D. Irwin, Inc., 1961. Carroll, Phil. Overhead Cost Control. New York: McGraw- Hill Book Company, 1964. Cochran, William G. Sampling Techniques. New York: John Wiley and Sons, Inc., 1953. Cowden, Dudley J. Statistical Methods in Quality Control. Englewood Cliffs, New Jersey: Prentice-Hall, Inc. 1957. Cyert, Richard M., and Davidson, Justin H. Statistical Sampling for Accounting Information. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962. Duncan, Acheson J. Quality Control and Industrial Statis- tics. Homewood, Illinois: Richard D. Irwin, Inc., 1959. Ekambaram, S. K. The Statistical Basis of Quality_Control Charts. New York: Asia Publishing House, 1960. 304 305 Freund, John E., and Williams, Frank J. Modern Business Statistics. Englewood Cliffs, New Jersey: Pren— . Elementary Business Statistics: The Modern Ap- proach. Englewood Cliffs, New Jersey: Prentice- Hall, Inc., 1964. Gillespie, Cecil. Standard and Direct Costing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962. Grant, Eugene L. Statistical Quality Control. New York: McGraw-Hill Book Company, Inc., 1952. Haynes, W. Warren, and Massie, Joseph L. Management Analy- sis, Concepts and Cases. Englewood Cliffs, New Jersey: Prentice—Hall, Inc., 1961. Henrici, Stanley B. Standard Costs for Manufacturing. New York: McGraw-Hill Book Company, Inc.fl960. Hill, Henry P., Roth, Joseph L., and Arkin, Herbert. Sam— pling in Auditing. New York: The Ronald Press Company, 1962. Horngren, Charles T. Cost Accounting: A Managerial Em- phasis. Englewood Cliffs, New Jersey: Prentice- Hall, Inc., 1962. Keller, I. Wayne, and Ferrara, William L. Management Ac- counting for Profit Control. 2nd EdT’ New York: McGraw-Hill Book Company, Inc., 1966; Kohler, E. L. A Dictionary for Accountants. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1963. C. 0. Sylvester (ed.). Roget's Pocket Thesaurus. Mawson, New York: Rocket Books, Inc., 1946. National Association of Accountants. The Analysis of Manufacturing Cost Variances. Research Report 22. New York: National Association of Accountants, August 1, 1952. How Standard Costs Are Being Used Currently.' C. A. Standard Cost-Research Series. 1 Association of Accountants, Complete N. A. New York: Nationa Not Dated. 306 Rossell, James H., and Frasure, William W. Managerial Accounting: Columbus: Charles E. Mérrill Books, Inc., 1964. Schlaifer, Robert. Introduction to Statistics for Busi- ness Decisions. New York: McGraw-Hill Book Com- pany, Inc., 1961. Shewhart, W. A. Economic Control of_guality of Manufac- tured Product. New York: D. VanNostrand and Com- pany, Inc., 1931. . Statistical Method from the Viewpoint of Quality Control. Ed. W. Edwards Deming. Washington, D.C.: The Graduate School, Department of Agriculture, 1939. Smith, Richard L. Management through Accounting. Engle— wood Cliffs, New Jersey: Prentice-Hall, Inc., 1962. Slonim, Morris James. Sampling in a Nutshell, New York: Simon and Schuster, 1960. Trueblood, Robert M., and Cyert, Richard M.. Sampling Tech- niques in Accountipg, Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1957. 
Vance, Lawrence L., and Neter, John. Statistical Sampling for Auditors and Accountants. New York: John Wiley and Sons, Inc., 1956. Webster's New Collegiate Dictionary, Springfield, Massa- chusetts: G. and C. Merriam Company, 1956. Articles and Periodicals American Institute of Electrical Engineering Subcommittee on Statistical Methods. "Statistical Methods in Quality Control," Electrical Engineering, LXIV, No. 10 (October, 1945), pp. 363—364. Bendel, Clair W. "Using Statistical Tools to Keep Costs Current," N. A. C. A. Bulletin, XXXIV, No. 10 (June, 1953), pp. 1307-1326. Bierman, Harold, Jr., Fouraker, Lawrence E., and Jaedicke, Robert K. "A Use of Probability and Statistics in Performance Evaluation," Accounting ReView, XXXVI, No. 3 (July, 1961), pp. 409—417. 307 Birnberg, J. G. "Bayesian Statistics: A Review," Journal of Accounting Research, II, No. 1 (Spring, 1964), pp. 108-116. Blough, Carman G. "Challenges to the Accounting Profes- sion in the United States," Journal of Accountancy, CVIII, No. 6 (December, 1959), pp. 37-42. Brown, Theodore H. "Quality Control," Harvard Business Review, XXIX, No. 6 (November, 1951), pp. 69-80. Carter, Percy.C. "Maintaining the Adequacy and Accuracy of Standard Costs," N. A. A. Bulletin, XLV, No. 7 (March, 1964), pp. 33—40. Deming, W. E. "Some Principles of the Shewhart Methods of Quality Control," Mechanical Engineerigg, CXVI, No. 3 (March, 1944), pp. 173-177. Freeman, H. A. "Statistical Methods for Quality Control," Mechanical Engineering, LIX, No. 4 (April, 1937), pp. 261-262. Fox, Harold W. "Statistical Error Concepts Related to Accounting," Accounting Review, XXXVI, No. 2 (April, 1961), PP. 282-284. Gable, John L. "An Internal Audit Using Receiving In- Spection Techniques," Industrial Qualitngontrol, XIV, No. 7 (January, 1958), pp. 15-17, 22. Gaynor, Edwin W. "Use of Control Charts in Cost Control," N. A. C. A. Bulletin, XXXV, No. 10 (June, 1954), pp. 1300—1309. Glasser, Gerald H. "Classical Versus Bayesian Method of Statistical Analysis," The Statistical News, XV, No. 6 (February, 1964), pp. 1-3. Grady, Charles H., Jr. "Reducing Clerical Costs Through Improved Manpower Utilization,“ N. A. A. Bulletin XLVI, No. 7 (March, 1965), pp. 41-49. Grant, Eugene L. "Industrialists and Professors in Quality Control-—A Look Back and A Look Forward," Indus- trial Qualitngontrol, X, No. 1, Part I (July, 1 1953), pp. 31—35. Gryna, Frank M., Jr. "Statistical Methods in the Quality Function," Quality Control Handbook, ed. J. M. Juran. 2nd ed. New York: McGraw-Hill Book Com- pany, Inc., 1962. pp. 13-1 to 13-127. 308 Hamburg, Morris. "Bayesian Decision Theory and Statisti- cal Quality Control," Industrial Quality Control, XIX, No. 6 (December, 1962), pp. 10—14. Hart, Alex L. "Using Probability Theory for Economy in Cost Control," N. A. C. A. Bulletin, XXXVIII, No. 2 (October, 1956), pp. 257-263. Hill, David A. "Communicating Quality Control Ideas," Industrial Quality Control, XVI, No. 11 (May, 1960), pp. 21-24. Holguin, R. "Today's News-—Today A Must in Shop Corrective Action," Industrial Quality Control, XXI, No. 12 (June, 1965), PP. 616-618. Juran, J. M. "Pioneering in Quality Control," Industrial Quality Control, XIX, No. 3 (September, 1962), pp. 12-14. Kennedy, Miles "Statistical Inference and Accounting: A Review Article," Journal of AccountinggResearch, I, No. 2 (Autumn, 1963), pp. 225-231. Lewis, Wyatt H. "Inspection and Quality Control," Handbook of Industrial Engineering and Management, eds. W. G. Ireson and E. L. Grant. 
Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1955, pp. 960-1012. Lybrand, Ross Brothers, and Montgomery. "Reducing White Collar Costs-~Part I," The Lybrand Newsletter (Octo- ber, 1964), pp. 11-13. . "Reducing White Collar Costs-~Part II," The Lybrand Newsletter (November, 1964), pp. 2—6. McDaniels, Howard. "Improving Controllership through Prob- ability Statistics," The Controller, XXII, No. 3 (March, 1954), pp. 107-109, 140. Murph, A. Franklin. "Problem Solving through Matematical and Statistical Techniques-~Correlation and Sampling,‘ N. A. A. Bulletin, XLII, No. 1, Section 3 (September, 1960), pp. 15-21. Mosteller, F. "Note on Application of Runs to Control Charts," Annals of Mathematical Statistics, XII (1941), pp. 228-232. 309 Mueller, Robert Kirk. "Statistical Control Aids Manage- ment-by—Exception," N. A. C. A. Bulletin, XXXIV, No. 10 (June, 1953), PP. 1297—1306. Neal, Dewey W. "Cost Control Charts-~An Application of Statistical Techniques," N. A. A. Bulletin, XLII, No. 9 (May, 1961), pp. 73—78. Noble, C. E. "Cost Accounting Potentials of Statistical Methods," N. A. C. A. Bulletin, XXXIII, No. 12 (August, 1952), pp. 1470—1478. . "Statistical Cost Control in the Paper Industry," Industrial Quality_Control, IX, No. 6 (May, 1953), pp. 42—46. . "Calculating Control Limits for Cost Control Data," N. A. C. A. Bulletin, XXXV, No. 10 (June, Olmstead, R. S. ’"Distribution of Sample Arrangements for Runs Up and Down," Annals of Mathematical Statis~ tics, XVII (1946), pp. 24~33. Pierce, James L. "The Planning and Control Concept," Administrative Control and Executive Action, eds. B. C. Lemke and James Don Edwards. Columbus: Charles E. Merrill Books, Inc., 1961. Proschan, Frank. "Control Charts May Be All Right, But—~," ' Industrial Quality Control, XI, No. 8 (May, 1953), pp. 56-62. Reece, J. A. "Standard Costing and Quality Control," The Accountant, CXXXIII, No. 4215 (1955), p. 494. Rosander, A. C. "Probability Statistics in Accounting," Industrial Quality Control, XI, No. 8 (May, 1955), pp. 26-3.].- Rucker, Allen W. "Clocks for Management Control," Admini— strative Control and Executive Action. eds. James Don Edwards and Bernhard Carl Lemke. (Columbus: C. E. Merrill Books, Inc., 1961, pp. 68—80. Smith, Arthur H. "Problem Solving through Mathematical and Statistical Techniques-~Use of Operations Research," N. A. A. Bulletin, XLII, No. 1, Section 3 (September, 1960), pp. 3—14. 310 Smith, L. Wheaton. "An Introduction to Statistical Cost Control," N. A. C. A. Bulletin, XXXIV, No. 4 (December, 1952), pp. 509-516. Smith, Robert. "Quality Assurance in Government and In- dustry: A Bayesian Approach," Journal of Indus- trial Engineering, XVII, No. 5 (May, 1966), pp. 254-256. . Stephenson, James C. "Quality Control to Minimize Cost Variances," N. A. C. A. Bulletin, XXXVIII, No. 2 (October, 1956), pp. 264—275. Suttle, Clyde T., Jr. "The Controller Meets Statistics," N. A. A. Bulletin, XLIV, No. 9 (May, 1963), pp. 19-25. Swed, S., and Eisenhart, C. "Tables for Testing Randomness of Sampling in a Sequency of Alternatives," Annals of Mathematical Statistics, XIV (1943), pp. 66-87. Wald, A., and Wolfowitz, J. "Sampling Inspection Plans for Continuous Production Which Insure a Prescribed Limit on the Outgoing Quality," Annals of Mathe— matical Statistics, XVI (1945), pp. 30-49. Wolfowitz, J. "On the Theory of Runs with Some Applica- tions to Quality Control," Annals of Mathematical Statistics, XIV (1943), pp. 280—288. Wyer, Rolfe. "Learning Curve Helps Figure Profits, Control Costs," N. A. C. A. 
Bulletin, XXXV, No. 4 (December, 1953), pp. 490—502. Unpublished Material McMenimen, Leo J. "Statistical Analysis of Cost Deviations," Unpublished Master's Thesis, The Graduate School, The Pennsylvania State University, August, 1965. t. fl.— *2;- APPENDIX A BIBLIOGRAPHY OF STATISTICAL APPLICATIONS TO ACCOUNTING VARIANCE CONTROL Books Bierman, Harold Jr., Fouraker, Lawrence E., Jaedicke, Robert K. Quantitative Analysis for Business Decisions. Homewood, Illinois: Richard D. Irwin, Inc., 1961. Bierman, Harold Jr. TOpiCS in Cost Accounting and Deci— sions. New York: McGraw—Hill Book Company, Inc., 1963. Henrici, Stanley B. Standard Costs for Manufacturing. New York: McGraw—Hill Book Company, Inc., 1960. Horngren, Charles T. Cost Accounting A Managerial Empha- sis. Englewood Cliffs, New Jersey: Prentice- Hall, Inc., 1962. Trueblood, Robert M., and Cyert, R. M. Sampling Techniques in Accounting. Englewood Cliffs, New Jersey: Prentice Hall, Inc., 1957. Vance, Lawrence L., Neter, John. Statistical Sampling for Auditors and Accountants. New York: John Wiley and Sons, Inc., 1956. The statistical approach in this book was covered in connection with analysis of deviations from clerical work standards. Periodicals Bierman, Harold Jr., Fouraker, Lawrence E., Jaedicke, Robert K. "A Use of Probability and Statistics in Performance Evaluation," Accounting Review, XXXVI, No. 3 (July, 1961), 409-417. 311 312 Byrne, Robert S. "Control Charts to Measure Sales Per- formance Within the Month," N. A. A. Bulletin, XLIV, No. 4 (December, 1962), 43—52. Gaynor, Edwin W. "Use of Control Charts in Cost Control," N. A. C. A. Bulletin, XXXV, No. 10 (June, 1954), 1300-1309. Mueller, Robert Kirk. "Statistical Control Aids Manage- ment-by-Exception," N. A. C. A. Bulletin, XXXIV, No. 10 (June, 1953), 1297—1306. Neal, Dewey W. "Cost Control Charts-—An Application of Statistical Techniques," N. A. A. Bulletin, XLII, No. 9 (May, 1961), 73-78. Noble, C. E. "Cost Accounting Potentials of Statistical Methods," N. A. C. A. Bulletin, XXXIII, No. 12 (August, 1952), 1470-1478. Nobel, Carl E. "Statistical Cost Control in the Paper In— dustry," Industrial Quality Control, IX, No. 6 (May, 1953), 42-46. Nobel, Carl E. "Calculating Control Limits for Cost Con- trol Data," N. A. C. A. Bulletin, XXV, No. 10 (June, 1954), 1309—1317. Reece, J. A. "Standard Costing and Quality Control," The Accountant, CXXXIII, No. 4215 (1955), 494. Rosander, A. C. "Probability Statistics in Accounting," Industrial Quality Control, XI, No. 8 (May, 1955), Smith, L. Wheaton Jr. "An Introduction to Statistical Cost Control," N. A. C. A. Bulletin, XXXIV, No. 4 (Decem— ber, 1952), 509-516. " ' ' ' ' Cost Ste henson James C. Quality Control to Minimize p Variances," N. A. C. A. Bulletin, XXXVIII, No. 2 (October,1956), 264—275. Trueblood, Robert M. "The Use of Statistics in Accounting Control," N. A. C. A. Bulletin, XXXIV, No. 11 (July, 1953), 1561-1571. 313 Unpublished Material McMenimen, Leo J. "Statistical Analysis of Cost Deviations," Unpublished Master's Thesis, The Graduate School, The Pennsylvania State University, August, 1965. APPENDIX B COMPUTATIONAL DETAIL TO SUPPORT CHAPTER VI Derivation and Financial Analysis Of Upper Control Limits for Single Observations--Each Performance Tested McMenimen Approach It has just been determined in Chapter VI that it would be worthwhile to spend $1 investigating for dull knivesiqxnithe occurrence of a performance value of 260. 
The question now confronting the analyst is whether it is worthwhile to spend up to $2 investigating for poor atti— tude--the only other assignable cause that was observed for test value 260 in the original 1000 performances. The savings figure associated with poor attitude is determined by the following procedure: 1. Find the Opportunity cost associated with each . - 24 performance. This 18 255 60 5 X $3 = $.50 where 255 is the mean of the poor attitude performances. 2. Multiply the $.50 by 4. Result $2. 3. Subtract the Opportunity cost Of correcting poor attitude from the $2 weighted Opportunity cost. 314 315 The opportunity cost of correcting poor attitude would be difficult to determine. More than one performance would benefit from any procedure aimed at attitude improvement. Assume that studies in- dicate that the cost of such procedures would average out to $.25 on each performance. The savings if this one performance is investigated and attributed to poor attitude is then $2 — .25 or $1.75. Here, one could conceive of instituting procedures to improve attitude a little; but not enough to reduce the mean to 245. In this case, various amounts other than $0 or $1.75 could be saved. It is this writer's Opinion, however, that it should be worthwhile to re-establish the standard if 245 was a realistic standard to start with. If it was not, it should be revised. If circumstances have changed the standard should also be revised. Accordingly, only two events will be considered in conjunction with act "spend up tO $2 investigating." Since the savings figure of $1.75 is less than the $2 cost of an investigation, the conditional value is $-.25. It is Obvious, then, that the expected savings will be negative so that an investigation could not be worthwhile regardless of the probabilities. However, because probabilities will be calculated in the same 316 manner for further applications of this technique, it is instructive to discuss their derivation and complete the analysis by determining an expected savings value for this act. The results are shown in Table 79. TABLE 79.-—App1ication of McMenimen Technique Spend up to $1 Spend up to $2 Investigating Investigating Event Pe Cond. Exp. Pe Cond. Esz Test Value 260 Save $0 .7143 $—1 $-.7l43 .9 $—2 $—l.80 Save $4.625 .2857 3.625 1.0357 Save $1.75 .1 -.25 - .02 Expected Savings $ .3214 $-l.82 The probabilities associated with each savings value for this act are estimated from the original distri— bution Of 1,000 values shown in Table 18. These estimates are made according to the following line Of reasoning. Fourteen of the 1,000 performances sampled had values Of 260 minutes. Four of these were attributed to dull knives and their cause would be detected by the investigation for dull knives. Thus, ten performances remain for the second phase of the investigation. Of these, nine were due to chance so Pe = .9 for event "save 0" act "Spend up to $2 investigating." One Of the ten performances was due to poor attitude so Pe = .1 for event "save $1.75." 317 Since the expected savings associated with act "Spend up tO $2 investigating" is negative, an investiga- tion for a performance value of 260 would be undertaken only for dull knives. Some may, however, wish to consider the fact that the $1 spent investigating for dull knives is at this point in the decision process a sunk cost and that the concern now is in the incremental sense with whether an additional $1 should be spent. Table 80 shows the effects Of this incremental application for test value 260. 
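The expected savings in Table 79 follow mechanically from the probabilities and savings figures developed above. A minimal sketch of that arithmetic, using the values for a performance of 260 minutes:

```python
# Sketch of the expected-savings calculation behind Table 79.
def expected_savings(cost_of_investigation, outcomes):
    """outcomes: list of (probability, gross savings if that event holds)."""
    return sum(p * (saving - cost_of_investigation) for p, saving in outcomes)

# Act 1: spend up to $1 investigating for dull knives.
act1 = expected_savings(1.00, [(0.7143, 0.00), (0.2857, 4.625)])
# Act 2: spend up to $2 investigating (dull knives, then poor attitude).
act2 = expected_savings(2.00, [(0.9, 0.00), (0.1, 1.75)])

print(round(act1, 4))   # about  0.3214 -> the first dollar is worth spending
print(round(act2, 4))   # about -1.825  -> the second dollar is not
```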
The act is now labeled "investigate for poor attitude" rather than "spend up to $2 investigating." The conditional values are only $1 less than the savings figures. TABLE 80.-—McMenimen technique--incremental application Investigate for Poor Attitude Event Pe Cond. Exp. -Test Value 260 Save $0 .9 $-l $-.90 Save $1.75 .1 .75 +.75 Expected Savings $-.15 The eXpected savings is still negative so an in- vestigation would not be undertaken for poor attitude with a performance value Of 260. The upper control limit for this cause is somewhat higher. 318 Table 81 shows that the upper control limit is between 261 and 262 for poor attitude or laziness. The figures are determined in the same way they were in Table 80. Of the 5 performances with values at 261, 2 are due to dull knives whose cause would have been detected by the first step in the investigation process. Now 3 per— formances remain. Two of these are due to chance so Pe of saving $0 is 2/3 or .667; one is due to poor attitude so Pe of saving $1.75 is 1/3 or .333. It would still not be worthwhile to administer the psychological test because the expected savings is still negative. TABLE 81.——McMenimen technique—~incremental application. Investigate for Poor Investigate for Attitude and Laziness Illness Event Pe Cond. Exp. Pe Cond. Exp. Test Value 261 Save $0 .667 $-l $-.673 Save $1.75 .333 .75 +.25 Expected Savings $—.423 Test Value 262 Save $0 .666 -l -.666 .75 $-3 $-2.25 Save $1.75 .167 + .75 +.125 Save $5.75 .167 4.75 +.79l7 Save $3.50 .25 .50 +.125 Expected Savings $+0.2507 $42.125 319 Two new dimensions are added to the analysis for test value 262. First, one of the eight performances in the original 1,000 values was due to laziness; therefore there is an Opportunity to make a savings from this cause in conjunction with the psychological test. The amount Of the savings, $5.75, is calculated by multiplying the 275 - 245 single performance Opportunity cost of $1.50 ( 60 X $3 where 275 is the mean Of the performances due to lazi— ness) by the 4 performances that allegedly lapse on the average before an assignable cause is detected. From this product Of $6, the estimated per performance cost of cor- rection, $.25, is subtracted to arrive at the savings of $5.75 Since the expected savings is now positive, it would be worthwhile to administer the psychological test as the investigation for poor attitude and laziness. The upper control limit for these causes is thus between 261 and 262. The other new dimension for test value 262 is the possibility of investigating for illness which involves an incremental cost Of $3. After dull knives, poor atti— tude, and laziness have been eliminated as causes, only 4 performances remain. Of these, one is due to illness so Pe Of saving $3.50 is 1/4 or .25 and the Pe of saving.$0 is .75. The $3.50 savings is determined by multiplying 265 — 245 the $1 single performance Opportunity cost ( 60 X $3 where 265 is the mean of the performance due to illness) 320 by the 4 performances that will lapse before the assign- able cause is detected and subtracting from this product, $4, the $.50 estimated per performance cost of correction. The expected savings is negative so this aspect of the investigation is not profitable. Moreover, the investiga- tion for illness will yield negative expected savings for all test values. Thus, McMenimen would never investigate for illness under these assumptions. Chapter VI indicated that an investigation would be undertaken for dull knives for a performance value of 260. 
Since 260 falls in the region of hypothesis rejec— tion, 259 would be in the direction of the control limit. Table 82 shows the expected savings of an investigation for dull knives for test value 259 to be $.385.l There— fore, the investigation would not be profitable. The upper control limit would be between 259 and 260, as far as dull knives is concerned. 1The probability of saving $4.625 is 2/15 = .133. Table 18 shows that two Of the fifteen performances with values Of 259 were due to dull knives. The remaining thir— teen performances were due to other causes (including chance) for which an investigation for dull knives would result in $0 savings. Consequently, Pe corresponding to event "Save $0" is 13/15 = .867. 321 TABLE 82.—-Application Of McMenimen technique Spend Up to $1 Investigating Event Pe Cond. Exp. Test Value 259 Save $0 .867 $-l $-.867 Save $4.625 .133 3.625 +.482 Expected Savings $-.385 Equalization Approach The following explanation pertains to the deter— mination Of the probabilities and opportunity costs Of each type Of error for Equalization Decision Table 20. The probabilities of a Type I error are deter- mined by dividing the number of chance performances with values at least as great as the test value by 600--the total number of chance performances. The reason for this is that the hypothesis will be rejected for any perfor— mance value greater than the test value selected as the control limit. If the performance value is attributed only to chance, a Type I error will be made. The number Of chance occurrences at least as great as 258, 259, and 260 are shown in Table 18 to be 42, 31, and 22 respectively and their ratios to 600 are .0700, .0517, and .0367 re— spectively. These are the values shown for the probabili— ties Of a Type I error. The reader will notice that the probabilities decline as the test value increases. This 322 follows from the fact that there are fewer chance perfor- mances at least as great as the test value present for higher test values. The Opportunity cost Of a Type I error resulting from a barren investigation is $6. However, since a tough cow has never been butchered in less than 262 minutes, the final aspect Of the investigation--spend $1 testing for tough cows——can be eliminated for test values less than 262. Thus, the Opportunity cost of a Type I error shown in Table 20 is reduced to $5. The expected Opportunity cost Of a Type I error for each test value results from multiplying the probability of committing a Type I error by $5—-the Opportunity cost of committing a Type I error. By the same token, the expected Opportunity cost of a Type II error results from multiplying the probability Of committing a Type II error by the opportunity cost Of a Type II error. The reader will recall that the probability of committing a Type II error was determined in Chapters IV and V by selecting an alternative parameter, to repre- sent an assignable cause assuming normal distributions for both the chance and the alternative pOpulations, and computing the proportionate area under the curve represent- ing the alternative pOpulation that falls within the control limits. The reasoning for this approach is, of course, that values falling within the control limits would lead to hypothesis acceptance—~a Type II error when as assignable 323 cause is Operative. NO attempt was made in Chapter V to identify the alternative parameter with a particular assignable cause. 
The probability Of assignable causes with different parameters such as exist in this example was not considered. Here, however, six unfavorable assign- able causes and one favorable one—-each with different parameters are possible. Consequently, a different ap- proach must be used to estimate the probability of com- mitting a Type II error. Moreover, since detailed infor— mation regarding 1000 past performances is assumed, the assumption of normality is not necessary. The probability of committing a Type II error will be estimated by dividing the number of assignable cause performances (other than improvement)2 with values less than the test value3 by 300-—the total number of unfavor— able assignable cause performances. Given that some un— favorable off—standard condition exists, this ratio represents the probability that a performance will be executed in a time less than the test value. If the test value is selected as the upper control limit, this ratio 2 . . . Since improvement represents a favorable aSSign— able cause, it is considered in determining the lower control limit. 3The hypothesis will be accepted for performance values less than the test value chosen as the upper con- trol limit. If an assignable cause is operative, acceptance will result in a Type II error. 324 is an estimate Of the probability Of committing a Type II error. For test value 260 the ratio is 62/300 or .2067. The numerator, 62, is determined by referring to Table Two and adding the 45 performances due to poor attitude, the 10 performances due to dull knives, the 5 caused by ill- ness and the 2 caused by laziness that had performance values less than 260. Because the conditional Opportunity cost of a Type II error also depends upon the assignable cause, the cost will be determined for each assignable cause. These will then be averaged in order to arrive at a representative single figure to be used in the determination of the upper control limit. This is important because the control limit is used for decision making when the cause is unknown and it is important to have a single value that can be used to signal an assignable cause regardless of what that cause happens to be. The single performance opportunity costs have al— ready been calculated by dividing the difference between the mean Of the assignable cause and 245 the mean of the chance performances by 60 to convert the difference into a fraction Of an hour. This fraction is then multiplied by $3—-the hourly wage rate for butchers. The results for the assignable causes Of interest are shown in Table 83. 325 TABLE 83.—-Sing1e performance opportunity costs for corres— ponding assignable causes Assignable Cause Opportunity Cost Illness $1.00 Laziness 1.50 Poor Attitude .50 Dull Knives 1.25 These single performance opportunity costs must now be weighted by the same procedure illustrated in Table 10 to recognize the fact that an assignable cause might not be detected on its first occurrence. The weighting procedure associated with. poor attitude is shown in Table 84. Again, Column A represents the number of successive failures to detect a change in the cause system. The der- ivation of the $.50 Opportunity cost corresponding to the first failure to detect a change in the cause system was eXplained in conjunction with Table 83. The other figures in Column B increase successively by $.50 for each addi— tional failure to detect the change. Column C shows the probability of failing to detect poor attitude for the numbers corresponding to Column A. 
These single performance opportunity costs must now be weighted by the same procedure illustrated in Table 10 to recognize the fact that an assignable cause might not be detected on its first occurrence. The weighting procedure associated with poor attitude is shown in Table 84. Again, Column A represents the number of successive failures to detect a change in the cause system. The derivation of the $.50 opportunity cost corresponding to the first failure to detect a change in the cause system was explained in conjunction with Table 83. The other figures in Column B increase successively by $.50 for each additional failure to detect the change. Column C shows the probability of failing to detect poor attitude for the numbers corresponding to Column A. Since 45 out of the 60 performances attributed to poor attitude are less than 260 (shown in Table 18), the probability of failing to detect an assignable cause of poor attitude on its first occurrence, if the control limit is 260, is 45/60 or .75. The other figures in Column C are determined by taking the power of .75 corresponding to the values in Column A. The result of the weighting, $1.8036, is the conditional opportunity cost which recognizes that it takes on the average ($1.8036/$.50) 3.6 tests to detect poor attitude once this assignable cause has appeared, if 260 is selected as the upper control limit.

TABLE 84.--Weighted opportunity cost associated with poor attitude assuming UCL = 260

    Number        Accumulated      Probability of     Column B times
    of Tests      Opportunity      Tests in           Column C
    (A)           Losses (B)       Column A (C)       (D)
     1              $ .50             .75
     2               1.00             .5625
     3               1.50             .4219
     4               2.00             .3164
     5               2.50             .2373
     6               3.00             .1780
     7               3.50             .1335
     8               4.00             .1001
     9               4.50             .0751
    10               5.00             .0563
    11               5.50             .0422
    12               6.00             .0316
                                     2.9049            $5.2392

    Weighted cost = $5.2392 / 2.9049 = $1.8036

The conditional weighted opportunity costs for the other assignable causes are determined in a similar manner. The values corresponding to each assignable cause are shown in Table 85. These weighted opportunity costs for each assignable cause are averaged in Table 85 in order to find the conditional opportunity cost of a Type II error. The number of times each assignable cause occurred is used as the weight in this averaging process. These frequencies were originally indicated in Table 17. The averaging process yields $1.5053 as the conditional opportunity cost of a Type II error.

TABLE 85.--Averaging process to find the conditional opportunity cost of a Type II error for test value 260

                       Relevant Number       Weighted Conditional
    Assignable Cause   of Performances (F)   Opportunity Cost (C)       CF
    Illness                  20                   $1.3379             26.7580
    Laziness                 40                    1.5825             63.3000
    Poor Attitude            60                    1.8036            108.2160
    Dull Knives             120                    1.3583            162.9960
                            240                                      $361.2700

    Average = ΣCF/ΣF = $361.2700 / 240 = $1.5053
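The weighting of Table 84 and the averaging of Table 85 can be sketched as follows; this is a minimal illustration, assuming the .75 detection-failure probability and the frequencies given above, with illustrative names.

    # Table 84: weighted opportunity cost for poor attitude at UCL = 260.
    SINGLE_COST = 0.50            # opportunity cost of one undetected performance
    P_FAIL = 45 / 60              # probability of failing to detect on any one test
    numerator = sum(n * SINGLE_COST * P_FAIL ** n for n in range(1, 13))  # column D total
    denominator = sum(P_FAIL ** n for n in range(1, 13))                  # column C total
    print(round(numerator / denominator, 4))   # about $1.80; Table 84 shows $1.8036
                                               # using the rounded column entries

    # Table 85: frequency-weighted average of the conditional costs, test value 260.
    costs = {"Illness": 1.3379, "Laziness": 1.5825, "Poor Attitude": 1.8036, "Dull Knives": 1.3583}
    freqs = {"Illness": 20, "Laziness": 40, "Poor Attitude": 60, "Dull Knives": 120}
    average = sum(costs[c] * freqs[c] for c in costs) / sum(freqs.values())
    print(round(average, 4))                   # $1.5053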
These weighted conditional opportunity costs shown in Table 85 are unique to the test value 260. Separate figures must be calculated for each test value. Since the work involved in calculating these figures is fairly tedious, it is preferable to make a good approximation of the figures by performing the calculations for another test value, 255 for example, and interpolating for the values between 255 and 260.

The weighted conditional opportunity costs for test value 255 are shown in Table 86 along with the same costs for test value 260. The difference between the opportunity costs at 255 and 260 is also shown. Now, in order to find the weighted conditional opportunity cost for test value 256 by interpolation, it is only necessary to add one-fifth of the difference to the cost for test value 255. To find the cost for 257, two-fifths of the difference is added to the cost for 255. Three-fifths is added for 258 and four-fifths for 259. The results for each assignable cause are indicated in Table 87. These values are averaged in Table 88 in the same manner used in Table 85. The resulting averages are those used in Table 20 for the opportunity cost of a Type II error.

TABLE 86.--Weighted conditional opportunity costs for test values 255 and 260

                     Weighted Conditional
                     Opportunity Cost                         One-Fifth of
    Cause            260          255         Difference     Difference
    Illness          $1.3379      $1.1512       $.1867         $.03734
    Laziness          1.5825        --[4]
    Poor Attitude     1.8036        .8492        .9544          .19088
    Dull Knives       1.3583       1.2720        .0863          .01726

[4] There were no values attributed to laziness as low as 255. Therefore, a Type II error could not be made with an upper control limit of 255 if laziness were the assignable cause.

TABLE 87.--Weighted conditional opportunity costs for selected values determined by interpolation

                              Weighted Costs
    Cause              256         257         258         259
    Illness          $1.1885     $1.2258     $1.2632     $1.3006
    Laziness*
    Poor Attitude     1.0400      1.2309      1.4218      1.6127
    Dull Knives       1.2892      1.3065      1.3237      1.3410

    *See footnote 4 above.

TABLE 88.--Averaging process to find the conditional opportunity cost of a Type II error for various test values

                       Relevant Number         Weighted Conditional Opportunity Cost (C)
    Assignable Cause   of Performances (F)    255        256        257        258        259
    Illness                  20             $1.1512    $1.1885    $1.2258    $1.2632    $1.3006
    Laziness                 40               None       None       None      1.5000     1.5352
    Poor Attitude            60               .8492     1.0400     1.2309     1.4218     1.6127
    Dull Knives             120              1.2720     1.2892     1.3065     1.3237     1.3410
    ΣCF                                     226.6160   240.8740   255.1500   329.4160   345.1020

    Test Value      Average = ΣCF/ΣF
    255             $226.6160 / 200 = $1.1331
    256              240.8740 / 200 =  1.2044
    257              255.1500 / 200 =  1.2758
    258              329.4160 / 240 =  1.3726
    259              345.1020 / 240 =  1.4379

Minimization Approach

The following explanation pertains to the determination of the detail for Table 21.

The number of performances associated with each cause shown in Table 17 can be used to estimate the prior probabilities. Three causes can be eliminated in testing for an upper control limit of 260. These are improvement, tough cows, and lack of training. Improvement pertains to the establishment of the lower control limit. Tough cows and lack of training have not had values as low as 260. Accordingly, the relevant prior probability distribution for testing 260 is shown in Table 89. Of course, the probabilities represent the ratio that the number of performances for each cause bears to the total number of performances. For example, .7143 is equal to 600/840.

TABLE 89.--Relevant prior probability distribution for test value 260

                                   Number of
    Cause           Parameter      Performances     Probability
    Chance            245            600              .7143
    Poor Attitude     255             60              .0714
    Illness           265             20              .0238
    Dull Knives       270            120              .1429
    Laziness          275             40              .0476
                                     840             1.0000

The weighted conditional opportunity costs are the same figures indicated in Tables 85 and 87 for test values 258, 259, and 260. They are estimated by interpolation for test values 261, 262, 263, and 264. The actual amounts were determined for test value 265. These amounts are shown in Table 90 in conjunction with the weighted conditional opportunity costs for test value 260. Notice that the weighted costs are higher for test value 265. This is because the probability of not detecting the shift to the assignable cause parameter is higher for a control limit at 265 than for a control limit at 260.

TABLE 90.--Weighted conditional opportunity costs for test values 265 and 260

                     Weighted Conditional
                     Opportunity Cost
    Cause            260           265          Difference
    Illness          $1.3379       $1.5335       $ .1956
    Laziness          1.5825        1.7122         .1297
    Poor Attitude     1.8036        3.6031        1.7995
    Dull Knives       1.3583        1.4408         .0822

That is, the probability of committing a Type II error is higher for each assignable cause at a control limit of 265; therefore, the weights are higher in determining the weighted costs by the procedure indicated in Tables 10 and 84.

Table 91 shows the interpolated weighted conditional opportunity costs for test values 261, 262, 263, and 264. The weighted cost for test value 261 is determined by adding to the weighted value for 260 one-fifth of the difference between the weighted costs at 260 and 265. The weighted cost for test value 262 is found by adding two-fifths of the difference, and so on.
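This is the same straight-line interpolation used earlier for Table 87, and it can be sketched as follows; the 260 and 265 figures are those of Table 90, the names are illustrative, and the printed results agree with Table 91 except for occasional last-digit rounding differences.

    # Interpolated weighted conditional opportunity costs for test values 261-264.
    costs_260 = {"Illness": 1.3379, "Laziness": 1.5825, "Poor Attitude": 1.8036, "Dull Knives": 1.3583}
    costs_265 = {"Illness": 1.5335, "Laziness": 1.7122, "Poor Attitude": 3.6031, "Dull Knives": 1.4408}

    for cause in costs_260:
        diff = costs_265[cause] - costs_260[cause]
        # add k-fifths of the difference for test values 261 through 264
        row = {260 + k: round(costs_260[cause] + diff * k / 5, 4) for k in range(1, 5)}
        print(cause, row)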
TABLE 91.--Weighted conditional opportunity costs for selected values determined by interpolation

                              Weighted Costs
    Cause              261         262         263         264
    Illness          $1.3770     $1.4161     $1.4553     $1.4944
    Laziness          1.6084      1.6344      1.6603      1.6863
    Poor Attitude     2.1635      2.5234      2.8833      3.2432
    Dull Knives       1.3750      1.3915      1.4079      1.4244

For the chance parameter, a wrong decision consists of rejecting the hypothesis for values of 260 or more when indeed chance caused the variation. This is the probability of a Type I error, which is found for test value 260 by obtaining the number of chance performances of at least 260 from Table 18 and placing this number over 600--the total number of chance performances. The result is 22/600 or .0367, which is indicated in the probability-of-a-wrong-decision column. This same figure was determined in conjunction with the Equalization approach.

For the assignable cause parameters, a wrong decision is made when the hypothesis is accepted for values less than 260 when one of the assignable cause parameters is operating. The derivation of the probabilities of a wrong decision for each assignable cause for test value 260 is shown in Table 92. The probabilities of a wrong decision are determined by dividing the number of performances less than 260 by the total number of performances corresponding to the given assignable cause.

TABLE 92.--Calculation of the probabilities of a wrong decision for each assignable cause under test value 260

                     Number of Performances     Number of         Probability of
    Cause            Less than 260              Performances      a Wrong Decision
    Poor Attitude          45                       60                .75
    Illness                 5                       20                .25
    Dull Knives            10                      120                .0822
    Laziness                2                       40                .05

The conditional average opportunity cost figures in Table 21 are determined by multiplying the weighted opportunity costs for each cause by the corresponding probability of a wrong decision. These figures represent the average opportunity cost given the occurrence of each respective parameter.

Derivation and Financial Analysis of Lower Control Limits for Single Observations--Each Performance

Bierman, Fouraker, and Jaedicke Approach

First Interpretation of P. The following explanation pertains to the derivation of the figures used in Table 24. The "L" value for the Bierman, Fouraker, and Jaedicke approaches is calculated as it was for the derivation of the upper control limit, by multiplying the single performance opportunity cost by four. The single performance opportunity cost associated with test value 220 is $1.25 ((245 - 220)/60 x $3). Hence, L is $5. The cost of an investigation is given at $4. Accordingly, Pc is .2 (Pc = (L - C)/L = ($5 - $4)/$5) for test value 220.

For any test value, P is derived by dividing the number of chance performances at least as far from the standard as the test value by 600--the total number of chance performances. This result is then divided by .5 to limit the sample space to only one-half the curve, which, of course, takes cognizance of the fact that any deviation is either favorable or unfavorable. To find P for test value 220, it is necessary to refer to Table 2 to discover that only one chance performance is as far from the standard as 220. Hence, P is (1/600) divided by .5, or .0034. Since this is smaller than Pc, which is .2, the hypothesis should be rejected. The lower control limit which is