This is to certify that the dissertation entitled INTEGRATIVE ANALYSIS OF STATE-BASED REQUIREMENTS FOR COMPLETENESS AND CONSISTENCY presented by Barbara Jean Czerny has been accepted towards fulfillment of the requirements for the Ph.D. degree in Computer Science.

INTEGRATIVE ANALYSIS OF STATE-BASED REQUIREMENTS FOR COMPLETENESS AND CONSISTENCY

By

Barbara Jean Czerny

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Computer Science

May 1998

ABSTRACT

INTEGRATIVE ANALYSIS OF STATE-BASED REQUIREMENTS FOR COMPLETENESS AND CONSISTENCY

By Barbara Jean Czerny

Statically analyzing requirements specifications to assure that they possess desirable properties is an important activity in any rigorous software development project. All other stages of development depend upon the requirements specification. In addition, errors in the requirements that go undetected and propagate to later stages of development (the design and implementation stages) are the most costly to correct. Therefore, it is important to ensure that the requirements document satisfies certain desired properties before proceeding to later stages of the development process.

However, static analysis is performed on a formal model of the requirements that is an abstraction of the original requirements specification. Some degree of abstraction is necessary or the analysis becomes intractable. The output from the analysis is a report of the desirable properties that the requirements specification fails to satisfy. In many cases, abstractions in the analysis model lead to spurious errors in the analysis output. Spurious errors are conditions that are reported as errors, but information that was abstracted out of the analysis model precludes the reported conditions from being satisfied. A high ratio of spurious errors to true errors in the analysis output makes it difficult, error-prone, and time consuming to find and correct the true errors in the specification.

Two desirable properties that certain requirements documents should satisfy (for example, the requirements for critical systems) are completeness (a behavior is specified for every possible input) and consistency (no conflicting behaviors are specified). Analyzing for completeness and consistency in state-based requirements generalizes to analyzing complex logical expressions for satisfiability and mutual exclusion.
Two methods for analyzing logical expressions for satisfiability and mutual exclusion are symbolic methods, such as those that rely on Binary Decision Diagrams (BDDs), and reasoning methods, such as theorem proving. Symbolic methods are fast and fully automated, but generate output that may contain many spurious errors since the analysis model contains many abstractions. Reasoning methods tend to be slower and require more manual intervention, but generate more accurate output since the analysis model contains fewer abstractions.

The objective of this research is to develop a technique for analyzing logical expressions for satisfiability and mutual exclusion that is fast enough to be used on a day-to-day basis, automated, and that generates analysis output with a small ratio of spurious errors to true errors. The results of the research are: (1) an iterative technique that integrates the strengths of a symbolic and a reasoning component to analyze logical expressions for satisfiability and mutual exclusion and circumvents the weaknesses of the components, and (2) a simple technique that uses a symbolic representation of logical expressions to help identify abstractions in a model that are causing spurious errors in the analysis output.

© Copyright May 1998 by Barbara Jean Czerny

All Rights Reserved

To my Lord and Savior Yeshua (Jesus) the Messiah; Thank you for giving your life so that I could have eternal life; Thank you for paying the penalty for my sins that I could not pay; And thank you for taking what was only a dream, and making it a reality.

ACKNOWLEDGMENTS

Thus says the LORD: 'Let not the wise man glory in his wisdom, let not the mighty man glory in his might, let not the rich man glory in his riches; but let him who glories glory in this, that he understands and knows me, that I am the LORD who practice steadfast love, justice, and righteousness in the earth; for in these things I delight, says the LORD.' (Jeremiah 9:23-24)

Unless the LORD builds the house, those who build it labor in vain. Unless the LORD watches over the city, the watchman stays awake in vain. (Psalm 127:1)

First and foremost, I must thank my Father in Heaven Elohim (The Creator), El Elyon (The God Most High). Thank you for revealing yourself to me (and for continuing to reveal yourself to me). You are El Shaddai (The All-Sufficient One). You are Adonai (The LORD). You are Jehovah-jireh (The LORD Will Provide). You are Jehovah-nissi (The LORD My Banner). You are Jehovah-rohi (The LORD My Shepherd). You are more than I or any of us can possibly comprehend. You are Jehovah (The Self-Existent One). Thank you for your gentle and loving guidance. Thank you for your love, patience and grace. Thank you for your wisdom, knowledge, insight, and understanding. Thank you for helping me to grow. Thank you for being my refuge and strength, a well proved help in times of trouble (Psalm 46:1). And Father, thank you for blessing me with my thesis advisor, Dr. Mats P.E. Heimdahl, without whom this dissertation would not have been possible.

Words cannot express the deep gratitude I have for my thesis advisor Dr. Mats P.E. Heimdahl. Thank you Mats for your wisdom, guidance, patience, and grace. Thank you for your constant encouragement and for your undying belief in me even when I didn't believe in myself. Thank you for letting me be honest and open about my feelings. Thank you for helping me to grow in knowledge and maturity.

I would also like to express my deep gratitude to Dr. Betty H.C. Cheng and Dr. Abdol H. Esfahanian.
Thank you for serving on my committee. Thank you for believing in me and supporting me throughout my graduate studies at Michigan State University. Thank you for not giving up on me. Thank you Dr. Cheng for your encouragement, wisdom, and guidance, and thank you for your listening ear and wise advice when things didn't always go the way I expected or hoped they would; thank you for being there for me when I had given up.

I express my thanks also to Dr. Laurie K. Dillon and Dr. Carrie J. Heeter for serving on my committee. Thank you for your help and support. I express my sincere thanks also to Dr. Richard J. Reid for his support and encouragement throughout my years as a Master's student. Thank you also Dr. Reid for expressing confidence in me and for encouraging me to stay on as a Doctoral student. And thank you for helping me to get into the Doctoral program and for helping me get started as a Doctoral student.

Thank you to my sisters and brothers in the LORD Dr. Sally Jean Howden (and her mom), (Dr.) Ralph and Dr. Utami DiCosty, Dr. Barbara D. Birchler (and her mom), Gretel V. Coombs, Karissa Miller, Dr. Tom and Edna Manetsch, Alice M. Ribby, and Daniel Olujimi Josiah-Akintonde for your constant prayers and support. A special thank you to Dr. Tom and Edna Manetsch for the guidance, understanding, acceptance, help, and wise counsel you provided me in addition to your constant prayers and support. A special thanks also to Alice M. Ribby. Thank you Alice for everything you have done to help me grow and mature and start becoming the woman God knows I can become.

Thank you also to (Dr.) William E. McUmber and Dr. Lee Enoch Wang for your encouragement and support and for advice and tips to help me get Linux up and running on my computer. Thanks Bill for also giving me tips to get my zip drive up and running so I could back up my dissertation and other important items. Thanks to the department secretaries Linda L. Moore, Donna Landon, and Beverly Wallace for your prayers, support, and for helping me keep up with all the paperwork required; thanks for everything you did when the "crises" arose. Thanks (Dr.) Stephen K. Wagner (C++ Guru and manager) for your help with some tough problems I had with C++. Thanks to all the managers for their help with system problems and questions. Thanks also to Cathy Davison and the other department secretaries for all of your help.

Last but not least, I would like to thank my father and mother for helping me out with some high cost items such as my computer. Thank you also for always stressing the importance of education and saving from early on in my life. Thank you for making me think early in my life that it was normal for everyone to go to college after graduating high school. And thanks Dad for encouraging me to choose computer science as my field of study in my first year as an undergraduate. Thanks also to my big brother John M. Czerny who I have always admired and respected. Thank you John for letting your little sister tag along with you and your friends when I was young, and for teaching me how to play sports and expand my horizons with some of the risky things we did. Thanks also John, for sticking with me during my years in graduate school.

TABLE OF CONTENTS

LIST OF FIGURES  xii

1 Introduction and Motivation  1
1.1 Problem Statement  4
1.2 Contributions  6
1.3 Organization of Dissertation and Guide to Reading  7
2 Analysis Techniques  9
2.1 Dynamic Analysis  11
2.2 Static Analysis Techniques  11
2.2.1 Reachability Analysis  12
2.2.2 Model Checking  14
2.2.3 Theorem Proving  18
2.2.3.1 Characteristics of Proof Systems  19
2.2.3.2 Basic Proof Systems  20
2.2.3.2.1 Sequent Calculus Systems  21
2.2.3.2.2 Resolution Systems  22
2.2.3.3 Search Strategies  23
2.2.3.3.1 Resolution Strategies  23
2.2.3.3.2 Natural Strategies  25
2.2.3.4 Specific Theorem Provers  26
2.2.3.4.1 Prototype Verification System  26
2.2.3.4.2 PROLOG  31
2.2.3.4.3 Larch Prover  36
2.2.3.4.4 Knuth-Bendix  38
2.3 Summary  39

3 Analysis of Software Requirements  41
3.1 Analysis of State-Based Requirements for Completeness and Consistency  43
3.1.1 SCR  46
3.1.2 RSML  51
3.1.3 Common Problem Between Methods: Spurious Errors  55
3.1.3.1 Binary Decision Diagrams  56
3.1.3.2 Problems With Symbolic Representation of Predicates  59
3.1.4 Summary  64
3.2 Model Checking Software Requirements  66
3.3 Spurious Errors Revisited  70
3.4 Summary  71

4 Analysis Method  73
4.1 General Analysis Process  76
4.1.1 Applying the Analysis Approach to Analysis of Software Requirements for Completeness and Consistency  80
4.2 Tools  85
4.2.1 Control Overview  86
4.2.2 BDD Library  88
4.2.2.1 Variable Reordering  89
4.2.3 BDD Translator  90
4.2.4 AND/OR Table Translator  90
4.2.5 PVS Translator  93
4.2.6 BDD Analyzer  98
4.2.6.1 Consistency Analysis  98
4.2.6.2 Completeness Analysis  99
4.2.7 PVS Analysis  100
4.3 Adding Augmenting Information to the Analysis Process  101
4.3.1 Adding Augmenting Information to the BDD Analysis  101
4.3.2 Adding Augmenting Information to the PVS Analysis  103
4.4 Iteration Options and Analysis of Output  104
4.5 Summary  107

5 Domain Axioms and Domain Axiom Identification  109
5.1 Domain Axioms  110
5.2 Identifying Domain Axioms  113
5.2.1 Indicator Nodes  114
5.2.2 Sampling  124
5.3 Summary  128
6 Application of Method and Experimental Results  130
6.1 Consistency Analysis  136
6.1.1 Transitions in State Intruder-Status - TCAS II Version 6.04A  137
6.1.1.1 Transitions out of State Proximate-Traffic  138
6.1.1.1.1 Proximate-Traffic to Threat and Proximate-Traffic to Other-Traffic  138
6.1.1.1.2 Proximate-Traffic to Threat and Proximate-Traffic to Potential-Threat  151
6.1.1.1.3 Proximate-Traffic to Other-Traffic and Proximate-Traffic to Potential-Threat  157
6.1.1.2 Transitions out of State Other-Traffic  168
6.1.1.2.1 Other-Traffic to Threat and Other-Traffic to Potential-Threat  168
6.1.1.2.2 Other-Traffic to Threat and Other-Traffic to Proximate-Traffic  172
6.1.1.2.3 Other-Traffic to Proximate-Traffic and Other-Traffic to Potential-Threat  174
6.1.1.3 Transitions out of State Potential-Threat  174
6.1.1.3.1 Potential-Threat to Threat and Potential-Threat to Proximate-Traffic  176
6.1.1.3.2 Potential-Threat to Threat and Potential-Threat to Other-Traffic  179
6.1.1.3.3 Potential-Threat to Proximate-Traffic and Potential-Threat to Other-Traffic  185
6.1.1.4 Transitions out of State Threat  189
6.1.1.4.1 Summary of Results  192
6.1.2 Transitions in State Intruder-Status - TCAS II Version 7  193
6.2 Completeness Analysis  194
6.2.1 Transitions in State Auto-SL  194
6.2.2 Transitions in State Effective-SL  205
6.3 Summary and Heuristics  208

7 Conclusions and Future Investigations  211
7.1 Potential Future Work  213
7.2 Suggestions for Future Investigations  214

APPENDICES  215
A Macro Definitions  215
B PVS Commands and Strategies  223
B.1 Built-in PVS Commands and Strategies  223
B.2 Defining Strategies  224
B.3 Our PVS Defined Strategies For Checking For Consistency and Completeness  225

LIST OF FIGURES

2.1 Model checker inputs and outputs.
2.2 Reachability analysis using a model checker.
2.3 Model checking's counter example feature.
2.4 PVS sequent formula.
2.5 PROLOG solution scenarios.
2.6 Infinite search example in PROLOG.
2.7 Finite search example in PROLOG.
2.8 Assertions and rewrite rules in Larch Prover.
2.9 An example proof using rewrite rules in LP.
3.1 Incomplete and inconsistent finite state machine model.
3.2 Completeness and consistency checks within rows.
3.3 FSM annotated with conditions and output variable assignments.
3.4 Table relating states and conditions to an output variable.
3.5 A state transition table abstraction.
3.6 FSM annotated with events and state transitions.
3.7 Table relating states and events to new states.
3.8 Transitions annotated with conditions and output variable assignments.
3.9 Tabular representations of guarding conditions.
3.10 Guarding condition for transition from Proximate-Traffic to Other-Traffic.
3.11 Completeness and consistency checks between tables.
3.12 Simple BDD example.
3.13 Example of variable orderings and their effect on graph size.
3.14 BDD representation and analysis of enumerated types.
3.15 BDD representation and analysis of invalid enumerated type assignments.
3.16 BDD example of tautology.
3.17 Symbolic representation of non-trivial arithmetic expressions and mathematical functions, and spurious error reports.
3.18 Spurious error reports generated for the state Inhibited.
4.1 High-level view of analysis to check for mutual exclusion.
4.2 High-level view of analysis to check for satisfiability.
4.3 General analysis process and integration of symbolic and reasoning components.
4.4 General integrative analysis process applied to analysis of RSML requirements for completeness and consistency.
4.5 Overview of analysis tools and data flow between the tools.
4.6 Example state diagram to demonstrate collection of triggers and associated transitions during analysis process set-up.
4.7 Example BDD node profile portion.
4.8 Examples of enumerated types and enumerated type predicates.
4.9 The transitions out of the state Inhibited  95
4.10 A PVS theory for the transition from Inhibited to Not-Inhibited.  96
4.11 Conjecture theory for the state Inhibited  97
4.12 PVS unprovable subgoal example.  97
4.13 AND/OR table representation of PVS subgoal.  98
4.14 Specifying axioms in the PVS specification language  103
4.15 Adding augmenting information to the PVS proof process and completing the PVS proof.  104
4.16 Analysis of output and iteration options of the analysis.  105
5.1 Domain Axioms associated with spurious errors of the state Inhibited.  111
5.2 Overview of data flows for identifying and using domain axioms  114
5.3 BDD structure and node profile example.  115
5.4 Example of a deep BDD with a repeating sub-branch. The repeating sub-branch, denoted by the open circle, repeats six times in the BDD  117
5.5 Example of a deep BDD with links to a repeating sub-branch. The repeating sub-branch is denoted by a 1 with a circle around it.  118
5.6 Example graph showing indicator nodes  119
5.7 Examples of indicator nodes near the top of the BDD  120
5.8 Node profiles of indicator nodes; a dash represents a don't care  121
5.9 Partial node profile for consistency analysis of two transitions from a large avionics specification.  122
5.10 Outline of a BDD showing the sampling algorithms and locations of the samples in the BDD.  125
5.11 Partial samples of a result BDD.  126
Transitions annotated with conditions and output variable assignments .......... Tabular representations of guarding conditions. ..................... Guarding condition for transition from Proximate-Traffic to Other-Traffic. ....... Completeness and consistency checks between tables ................... Simple BDD example. .................................. Example of variable orderings and their effect on graph size. .............. BDD representation and analysis of enumerated types. ................. BDD representation and analysis of invalid enumerated type assignments. ....... BDD example of tautology ................................. Symbolic representation of non-uivial arithmetic expressions and mathematical func- tions, and spurious error reports. ............. ‘ .............. Spurious error reports generated for the state Inhibited. ................. High-level view of analysis to check for mutual exclusion. ............... High-level view of analysis to check for satisfiability. .................. General analysis process and integration of symbolic and reasoning components. General integrative analysis process applied to analysis of RSML requirements for com- pleteness and consistency ................................ Overview of analysis tools and data flow between the tools ................ Example state diagram to demonstrate collection of triggers and associated transitions during analysis process set-up. ............................ Example BDD node profile portion. ........................... Examples of enumerated types and enumerated type predicates .............. xii 16 27 33 34 35 37 38 44 47 48 48 49 50 52 53 53 54 57 58 62 63 7O 78 79 81 85 87 88 92 4.9 The transitions out of the state Inhibited .......................... 95 4.10 A PVS theory for the transition from Inhibited to Not-Inhibited. ............ 96 4.11 Conjecture theory for the state inhibited .......................... 97 4.12 PVS unprovable subgoal example. ............................ 97 4.13 AND/OR table representation of PVS subgoal. ..................... 98 4.14 Specifying axioms in the PVS specification language ................... 103 4.15 Adding augmenting information to the PVS proof process and completing the PVS proof. 104 4.16 Analysis of output and iteration options of the analysis. ................. 105 5.1 Domain Axioms associated with spurious errors of the state Inhibited. ......... 111 5.2 Overview of data flows for identifying and using domain axioms ............. 114 5.3 BDD structure and node profile example. ........................ 115 5.4 Example of a deep BDD with a repeating sub-branch. The repeating sub-branch denoted by the open circle, repeats six times in the BDD .................... 117 5.5 Example of a deep BDD with links to a repeating sub-branch. The repeating sub-branch is denoted by a 1 with a circle around it. ....................... 118 5.6 Example graph showing indicator nodes .......................... 119 5.7 Examples of indicator nodes near the top of the BDD ................... 120 5.8 Node profiles of indicator nodes; a dash represents a don’t care .............. 121 5.9 Partial node profile for consistency analysis of two transitions from a large avionics specification. ..................................... 122 5.10 Outline of a BDD showing the sampling algorithms and locations of the samples in the BDD. ......................................... 125 5.11 Partial samples of a result BDD. ............................. 
126 5.12 Partial graph and samples associated with the first three levels in the graph. ...... 127 ‘ 6.1 RSML specification of TCAS-Controller. ........................ 131 6.2 RSML partial specification of the state Other-Aircraft. ................. 132 6.3 RSML partial specification of the state Tracked ...................... 132 6.4 RSML specification of the state Intruder-Status. ..................... 133 6.5 RSML partial specification of the state Own-Aircraft ................... 135 6.6 Transitions fi'om Other-Traffic, Proximate-Traffic, and Potential-Threat to Threat. . . . 139 6.7 Transition from Proximate-Traffic to Other-Traffic. ................... 139 6.8 The Threat-Condition Macro ................................ 140 6.9 Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Other- Traffic without variable reordering ........................... 141 6.10 Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Other— Traffic with variable reordering. ........................... 143 6.11 Selected samples from symbolic analysis of Proximate-Traffic to Other-Traffic and to Threat with variable reordering. ........................... 144 6.12 RSML specification of state Other-Air-Status. ...................... 145 6.13 Guarding condition for transition from Other-Air-Status state Airborne to Other-Air- Status state On-Ground ................................. 145 6.14 Guarding condition for transition from Other-Air-Status state On-Ground to Other-Air- Status state Airborne. ................................. 145 6.15 RSML specification of state Alt-Reporting. ....................... 146 6.16 Guarding condition for transition from Alt-Reporting state Yes to Alt-Reporting state Lost ........................................... 147 6.17 6.18 6.19 6.20 6.21 6.22 6.23 6.24 6.25 6.26 6.27 6.28 6.29 6.30 6.31 6.32 6.33 6.34 6.35 6.36 6.37 6.38 6.39 6.40 6.41 6.42 6.43 6.44 Guarding condition for transition from Alt-Reporting states Lost and N o to Alt- Reporting state Yes ................................... 147 Guarding condition for transition from Alt-Reporting state Lost to Alt-Reporting state No. .......................................... 147 Guarding condition for transition from Alt-Reporting state C to Alt-Reporting state Yes. 148 Guarding condition for transition from AltoReporting state C to Alt-Reporting state No. 148 Domain axiom in tabular form for Other-Alt—Reporting, Alt-Reporting assertion. . . . 149 General PVS strategy to prove two guarding conditions are consistent. ......... 150 Specific PVS commands to include a domain axiom into the analysis process and to prove two guarding conditions consistent ...................... 151 Transitions from Other-Traffic and Proximate-Traffic to Potential-Threat. ....... 151 Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Potential-Threat without variable reordering ...................... 152 Selected samples from symbolic analysis of Proximate-Traffic to Threat and to Potential-Threat without variable reordering ...................... 154 Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Potential-Threat with variable reordering. ...................... 155 Selected samples from symbolic analysis of Proximate-Traffic to Threat and to Potential-Threat with variable reordering. ...................... 
6.29 Partial BDD node profile for conjunction of Proximate-Traffic to Other-Traffic and to Potential-Threat without variable reordering  159
6.30 Portion of samples from symbolic analysis of Proximate-Traffic to Other-Traffic and to Potential-Threat without variable reordering  160
6.31 Partial BDD node profile for conjunction of Proximate-Traffic to Other-Traffic and to Potential-Threat with variable reordering.  161
6.32 Selected samples from symbolic analysis of Proximate-Traffic to Other-Traffic and to Potential-Threat with variable reordering.  163
6.33 Partial BDD node profile for conjunction of Proximate-Traffic to Other-Traffic and to Potential-Threat with variable reordering and domain axioms.  165
6.34 Domain axiom for Other-Alt-Reporting, Other-Air-Status assertion  166
6.35 Selected unprovable subgoals from PVS analysis of Proximate-Traffic to Other-Traffic and to Potential-Threat.  167
6.36 The PVS proof commands used to include the domain axioms into the analysis process to attempt to prove two guarding conditions consistent.  168
6.37 Partial BDD node profile for conjunction of Other-Traffic to Threat and to Potential-Threat with variable reordering.  169
6.38 Selected samples from symbolic analysis of Other-Traffic to Threat and to Potential-Threat with variable reordering.  171
6.39 Transition from Other-Traffic to Proximate-Traffic.  172
6.40 Partial BDD node profile for conjunction of guarding conditions from Other-Traffic to Threat and to Proximate-Traffic with variable reordering.  173
6.41 Portion of the samples obtained from symbolic analysis for consistency of Other-Traffic to Threat and to Proximate-Traffic with variable reordering  175
6.42 Transition from Potential-Threat to Proximate-Traffic  176
6.43 Partial BDD node profile for conjunction of Potential-Threat to Threat and to Proximate-Traffic with variable reordering.  177
6.44 Selected samples from symbolic analysis of Potential-Threat to Threat and to Proximate-Traffic with variable reordering.  180
6.45 Selected samples for Potential-Threat to Threat and to Proximate-Traffic of predicates involved in the domain axiom.  181
6.46 Transition from Potential-Threat to Other-Traffic.  181
6.47 Partial BDD node profile for conjunction of Potential-Threat to Threat and to Other-Traffic with variable reordering.  182
6.48 Selected samples from symbolic analysis of Potential-Threat to Threat and to Other-Traffic with variable reordering.  184
6.49 Partial BDD node profile for conjunction of Potential-Threat to Proximate-Traffic and to Other-Traffic with variable reordering  186
6.50 Selected samples from symbolic analysis of Potential-Threat to Proximate-Traffic and to Other-Traffic with variable reordering  188
6.51 Specific PVS commands to include a domain axiom into the analysis process and to prove two guarding conditions consistent.  189
6.52 Transition from Threat to Other-Traffic.  189
6.53 Transition from Threat to Potential-Threat  190
6.54 Transition from Failed to Potential-Threat  190
6.55 Transition from Failed to Other-Traffic.  190
6.56 Transition from Failed to Passed  191
6.57 Transition from Passed to Failed  191
6.58 Guarding condition for transition from any Auto-SL state to Auto-SL state 1.  195
6.59 Guarding condition for transition from any Auto-SL state to Auto-SL state 2.  195
6.60 Guarding condition for transition from any Auto-SL state to Auto-SL state 3.  196
6.61 Guarding condition for transition from any Auto-SL state to Auto-SL state 4.  196
6.62 Guarding condition for transition from any Auto-SL state to Auto-SL state 5.  197
6.63 Guarding condition for transition from any Auto-SL state to Auto-SL state 6.  197
6.64 Guarding condition for transition from any Auto-SL state to Auto-SL state 7.  198
6.65 Mutually exclusive and all-inclusive relational expressions involving the variable Own-Alt-Radio (OAR)  200
6.66 PVS strategy for proving guarding conditions complete.  201
6.67 The Radar-Bad-For-RADARLOST-cycles macro.  201
6.68 The Radarout-EQ-0 macro.  201
6.69 The Climb-Desc.-Inhibit macro.  202
6.70 The Standby-Since macro.  202
6.71 Unprovable subgoals from PVS analysis.  203
6.72 Reduced unprovable subgoals from PVS analysis.  204
6.73 Condition to add to specification of guarding conditions to make the specification complete.  204
6.74 Guarding condition for transition from any Effective-SL state to Effective-SL state 1.  205
6.75 Guarding condition for transition from any Effective-SL state to Effective-SL state 2.  205
6.76 Guarding condition for transition from any Effective-SL state to Effective-SL state 3.  206
6.77 Guarding condition for transition from any Effective-SL state to Effective-SL state 4.  206
6.78 Guarding condition for transition from any Effective-SL state to Effective-SL state 5.  207
6.79 Guarding condition for transition from any Effective-SL state to Effective-SL state 6.  207
6.80 Guarding condition for transition from any Effective-SL state to Effective-SL state 7.  208
A.1 Alt-Separation-Test Macro  216
A.2 Low-Firmness-Separation-Test Macro  217
A.3 Noncrossing-Biased-Climb Macro  217
A.4 Noncrossing-Biased-Descend Macro  218
A.5 No-Vertical-Intent Macro  218
A.6 RA-Mode-Canceled Macro  218
A.7 RA-Inhibit Macro  219
A.8 Reply-Invalid-Test Macro  219
A.9 TCAS-TCAS-Crossing-Test Macro  220
A.10 Threat-Alt-Test Macro  221
A.11 Threat-Range-Test Macro  222
A.12 Two-Of-Three Macro  222

Chapter 1

Introduction and Motivation

Statically analyzing requirements specifications to assure that they possess desirable properties is an important activity in any rigorous software development project.
Without the thorough understanding of the software specification that can be gained through static analysis, the end product, that is, the working software system, will most likely disappoint the user and bring grief to the developer [49].

The requirements specification is the foundation of any software development process. All other stages of development depend upon the requirements specification. In addition, errors in the requirements that go undetected and propagate to later stages of development (the design and implementation stages) are the most costly to correct [32, 38, 41]. Therefore, it is important to ensure that the requirements document satisfies certain desired properties before proceeding to later stages of the development process. One application area for the research described in this dissertation is in the static analysis of software requirements. Analysis of state-based requirements is used in the examples throughout this document.

There are two classes of properties we can check the requirements for: application specific properties and general, or application independent, properties. For example, a desirable property for flight management software in an aircraft might be: if the aircraft is landing, the landing gear must be down. This constraint is applicable to most aircraft, but is not applicable to, for example, a train system or an automotive system. Thus, application specific properties are relevant only in a specific domain or in a specific product.

General properties are properties that all requirements should satisfy regardless of the application domain. For example, two desirable properties for requirements specifications to satisfy are completeness and consistency. Completeness means the requirements have a behavior specified for every input and input sequence, and consistency means there are no conflicting requirements [24]. In the context of state-based requirements, completeness and consistency analysis becomes the problem of showing that disjunctive and conjunctive logical expressions are tautologies or contradictions [24].
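As a concrete (and deliberately tiny) illustration of this reduction, the sketch below checks a set of invented guarding conditions by brute-force truth-table enumeration; the guards and variable names are hypothetical and are not drawn from any actual specification or from the tools developed in this dissertation. Completeness holds when the disjunction of the guards is a tautology; consistency holds when every pairwise conjunction is a contradiction.

```python
from itertools import product

# Hypothetical guarding conditions over two Boolean inputs (a, b);
# purely illustrative, not taken from any real specification.
guards = [
    lambda a, b: a and not b,   # guard on transition 1
    lambda a, b: not a,         # guard on transition 2
    lambda a, b: b,             # guard on transition 3
]

def complete(guards):
    """Completeness: the disjunction of all guards is a tautology."""
    return all(any(g(a, b) for g in guards)
               for a, b in product([False, True], repeat=2))

def consistent(guards):
    """Consistency: each pairwise conjunction is a contradiction."""
    return all(not (gi(a, b) and gj(a, b))
               for i, gi in enumerate(guards)
               for gj in guards[i + 1:]
               for a, b in product([False, True], repeat=2))

print(complete(guards))    # True: some guard fires for every input
print(consistent(guards))  # False: guards 2 and 3 overlap when a=False, b=True
```

Truth-table enumeration is exponential in the number of variables; the method developed in this dissertation performs these same two checks with symbolic (BDD) and reasoning (theorem proving) components so that realistic expressions remain tractable.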
Techniques for automatically analyzing and manipulating the requirements facilitate the analysis process and increase the level of confidence that a system will operate as desired. Machine analysis and manipulation of requirements presumes that the requirements are specified in a language with a formal foundation [59]. The research described in this document presumes that the requirements to be analyzed can be specified in a language suitable for machine analysis and manipulation.

There are two general methods that can be used to analyze the requirements to determine if they satisfy desired properties: dynamic analysis techniques and static analysis techniques. Dynamic analysis techniques include testing and simulation. Testing and simulation rely on providing input to a system and observing the output generated by the system. Since the input space is often infinite (or at the very least, extremely large), only a small portion of the input space can be tested, and dynamic analysis techniques cannot provide the necessary levels of confidence required for certain classes of systems, for example, so-called critical systems [7, 25, 41, 52, 53]. Critical systems are often real-time reactive systems in which a failure could result in harm to life, property, or the environment. Very high levels of confidence that such systems will operate in a safe manner (i.e., in such a way so as not to cause harm to life, property, and the environment, or in such a way so as to minimize the amount of harm that results in the event of a failure) are required.

Static analysis techniques are based on statically studying a model and attempting to formally prove that certain properties hold. Static analysis techniques include techniques such as reachability analysis, model checking, and theorem proving. Static techniques rely on abstraction techniques to simplify the model since including all details may make the models too complex to analyze; i.e., the analysis process will become intractable. The abstractions, however, may remove details needed for the analysis, and thus, the analysis may report spurious errors along with the true errors. Spurious errors are conditions in the model that are reported as errors in need of correction, but information that was abstracted out of the model precludes the reported conditions from being satisfied. For example, assume a system has a control variable that is used to determine the flow of control in such a way that the system will enter a dangerous configuration only if the control variable has a specific value, but the static technique being applied has abstracted away the variables that determine control flow. Based on the analysis model, the static technique may report that the system will enter a dangerous configuration since the control flow information is missing. Manual inspection of the requirements, however, reveals that the control variable that has been abstracted out of the model actually prohibits the system from entering the dangerous configuration; thus, the error reported by the analysis technique is a spurious error.

The end product of any analysis effort, whether static or dynamic, is a report of the analysis results. This report should provide the analysts with the information they need to find and correct errors in the artifact being analyzed. The number of spurious errors reported by static analysis techniques in the analysis results can be excessive. When analysis reports contain too many spurious errors, it is difficult, error-prone, and time consuming for the analyst to identify and correct the true errors. The necessity of providing the analyst with useful output from the static analysis is the motivation for this research and is discussed more thoroughly in the next section.

1.1 Problem Statement

The goal of this research was to develop an analysis method to check state-based requirements for completeness and consistency in a way that is automated, generates error reports with an acceptable level of accuracy, is fast enough to be used on a day-to-day basis, and that is scalable and generalizable (i.e., applicable to real-world problems and not limited to state-based requirements). Since analyzing for completeness and consistency in the context of this research generalizes to the problem of showing that disjunctive and conjunctive logical expressions are tautologies or contradictions, our goal was to be able to analyze logical expressions to check for tautologies and contradictions in a way that generates error reports with an acceptable level of accuracy, and is fast and automated enough to be used on a day-to-day basis by practicing engineers.

No one individual static technique for performing the analysis is sufficient to satisfy the desired goals [48, 65, 67].
There are trade-offs between the amount of abstraction an analysis method relies on, the degree of automation, the speed with which the analysis completes, and the level of accuracy in the analysis output. Each analysis method has its own strengths and weaknesses in relation to the trade-offs. For example, static symbolic methods are fast and fully automated, but may generate many spurious error reports since with symbolic methods many functions are not interpreted; i.e., the semantics of the functions are abstracted away. Reasoning techniques such as theorem proving may generate more accurate error reports, but are less automated and tend to be slower. When more details are included in the analysis process, the analysis often requires more user intervention and more time may be required to complete the analysis.

We know that some information must be abstracted from the model or the analysis becomes intractable. Furthermore, not all information needs to be included in the analysis model, only that information that is relevant to the properties of interest to the analysts. Even the most powerful static techniques will generate spurious errors if information relevant to the analysis has been abstracted away. Therefore, if the spurious errors are the result of information missing from the analysis model, it is important to determine what information has been abstracted away and that is leading to the spurious errors. Determining the abstractions leading to spurious errors can be difficult, since generally, the analysis output does not give the analyst any indication of where to look for the missing information. Indeed, without error-prone and time consuming manual inspection of the analysis output, the analyst does not know which reported errors represent true errors in the specification, and which reported errors represent spurious errors.

Since no one individual analysis technique can satisfy all of our goals, the problem is then: how can we combine different static analysis methods to take advantage of the strengths of each method and circumvent their weaknesses, and how can we identify information that is relevant to the accuracy of the analysis, but that has been abstracted out of the model being analyzed? This dissertation addresses these problems; it describes an iterative and integrative method that capitalizes on the strengths of the more detailed analysis methods and the strengths of the less detailed analysis methods while circumventing their weaknesses. In addition, this dissertation describes an approach to help the analyst identify relevant information that has been abstracted away from the analysis model.

To illustrate the power of the research described in this dissertation, we applied our technique to the requirements specification for a large real-world avionics system specified in the requirements language called Requirements State Machine Language (RSML) [41]. The elements of RSML that are relevant to this research are described in Chapter 3, Section 3.1.2.

1.2 Contributions

The contributions of this research are the following:

- We developed an iterative and integrative analysis method to analyze state-based requirements for completeness and consistency, in a way that generates analysis reports with an acceptable level of accuracy, is fast and automated enough to be used on a day-to-day basis by practicing engineers, and is scalable and generalizable. The analysis process combines a symbolic component with a reasoning component.
  We can generalize this contribution to: an iterative and integrative analysis method to check disjunctive and conjunctive logical expressions for tautologies and contradictions.

- We developed a general and simple method that uses a symbolic representation of logical expressions to help identify abstractions causing spurious incompletenesses and spurious inconsistencies in the analysis output.

- We identified four classes of spurious errors and associated the spurious errors with the abstractions that lead to the spurious errors, when analyzing state-based requirements for completeness and consistency.

- We developed various analysis tools to automate the analysis process. These tools include:

  - a translator from Requirements State Machine Language (RSML) to Prototype Verification System (PVS)¹ specifications,
  - a translator from RSML AND/OR tables² to Binary Decision Diagrams (BDDs)³,
  - a BDD analyzer to check logical expressions represented as BDDs for tautologies and contradictions,
  - a translator from BDDs to RSML AND/OR tables to present the analysis results to the analyst, and
  - proof strategies for PVS to help automate the proof process and allow the analyst to use a single command in most cases to perform PVS analysis.

¹ PVS is a specification and verification system and is described in Chapter 2.
² AND/OR tables are disjunctive normal form tables used to represent logical expressions and are described in Chapter 3.
³ BDDs are data structures used to represent and manipulate logical expressions and are described in Chapter 3.

1.3 Organization of Dissertation and Guide to Reading

Chapter 2 discusses different analysis techniques and describes the strengths and weaknesses of each technique. Dynamic analysis techniques are discussed only briefly at the beginning of the chapter. The bulk of the chapter is devoted to static analysis techniques including reachability analysis, model checking, and theorem proving. Persons with a good understanding of static analysis techniques may skip the majority of this chapter. However, we recommend reading at least the introduction and summary to maintain a link with the rest of the dissertation.

In Chapter 3 we discuss related work in the area of analysis of software requirements. We describe the strengths and weaknesses of each technique, and describe a problem common to all techniques. The common problem is that abstractions in the model may yield spurious errors in the analysis report.

The next two chapters describe the results of the research discussed in this dissertation. In Chapter 4 we present our solution to the problem detailed in Chapter 3, specifically, our iterative and integrative analysis method. In Chapter 4, however, we leave out one important detail; namely, how we identify the information that is abstracted away from the model and that leads to the spurious errors. In Chapter 5 we discuss our method for identifying the missing information that leads to the spurious errors, and describe our overall method to analyze logical expressions to check for tautologies and contradictions in a way that is timely, automated, and generates analysis reports with an acceptable level of accuracy.

To demonstrate the usefulness of our approach, we applied our method to a large real-world avionics specification. The results of this application are reported in Chapter 6.
In the conclusion of Chapter 6 we provide some heuristics to help guide analysts in using our technique efficiently and effectively; we learned the heuristics from the application of our method. In Chapter 7 we present our conclusions and discuss future enhancements that could be made to our analysis technique, and we discuss future investigations that follow from this research.

Appendix A contains definitions of macros that are discussed in Chapter 6, but not included in the chapter itself. Appendix B contains a description of the PVS prover commands we used, and descriptions of the PVS proof strategies we developed during the course of this research.

Chapter 2

Analysis Techniques

In Chapter 1 we introduced the motivation for this research and the problem we set out to solve. In this chapter we describe several different analysis techniques and investigate the strengths and weaknesses of each technique. Our investigation helped us identify analysis methods and tools that we could potentially use to achieve the goals of the research described in this dissertation. Our investigation also helped us eliminate certain analysis methods and tools from consideration.

Analysis techniques are conventionally classified as dynamic analysis and static analysis techniques, according to their operational characteristics [62, 65]. Dynamic analysis techniques require actual program execution, whereas static analysis techniques do not require the actual program to be executed. In the context of this chapter, a program refers to any executable artifact, for example, an executable state-based specification or a traditional program. The emphasis of this chapter is on static analysis, but dynamic analysis is discussed briefly in Section 2.1. Several static analysis methods are discussed in detail in Section 2.2. The bulk of the static analysis discussion is spent on theorem proving systems; we knew early on that we needed some type of reasoning component to achieve our goals. In Section 2.3 we provide a summary of the analysis methods discussed, and review some of their strengths and weaknesses.

Analysis techniques may be applied to requirements documents, design documents, or source code. Each analysis method works on some model (requirements, design, code) of the system being analyzed. Each analysis technique incorporates some compromise between accuracy and completeness, and tractability (or computational cost) [65]. For example, exhaustive testing (i.e., testing all possible input cases) is guaranteed to be accurate since all true errors will be found and no spurious errors will be reported, but exhaustive testing is intractable for all but the simplest of programs. The compromise between accuracy and computational cost results because the question "Does program P obey specification S?" is undecidable for arbitrary programs and specifications [65].

Most analysis techniques rely on constructing a representation of possible executions of the executable artifact, and comparing this representation to a specification of intended behavior [65]. The state space of an executable artifact is the set of states that the artifact (referred to as program from here on) can reach in all possible executions. Many analysis methods (both static and dynamic) rely on analysis of the state space, since the actual behavior of a program is represented as program states and state transitions, or as an abstraction of program states and state transitions. Analysis techniques that explicitly construct some representation of program states are referred to as state space analysis techniques [65].

There are two main ways to reduce the computational complexity of state space analysis: folding and sampling [65]. With folding, the state space shrinks because some of the details of program execution are abstracted away (i.e., some details are ignored or removed). Abstracting away details typically results in a smaller state space, since each state in the model state space may represent several states in the normal program execution. Sampling, on the other hand, does not alter the size of the state space, but rather, examines only a portion of the entire state space. In testing, for example, the program is executed only on selected input data, since testing all possible inputs is intractable. Most dynamic analysis techniques rely on sampling of the state space, whereas most static analysis methods rely on folding of the state space.
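The following sketch, built around an invented eight-bit counter (nothing here comes from the cited literature), is one way to picture the difference between the two reductions.

```python
import random

# Folding: abstract each concrete state of an invented 256-state
# counter to its parity; the 256 concrete states collapse into 2
# abstract states, so exhaustive exploration of the folded state
# space is cheap.
def abstract(state):
    return state % 2

folded = {abstract(s) for s in range(256)}
print(len(folded))  # 2

# Sampling: the state space keeps its full size; only a randomly
# chosen subset of it is ever examined, as in testing.
sampled = {random.randrange(256) for _ in range(10)}
print(f"{len(sampled)} of 256 states examined")
```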
Analysis techniques that explicitly construct some representation of program states are referred to as state space analysis techniques [65]. There are two main ways to reduce the computational complexity of state space analysis: folding and sampling [65]. With folding, the state space shrinks because some of the details of program execution are abstracted away (i.e., some details are ignored or removed). Abstracting away details, typically results in a smaller state space, since each state in the model state space may represent several states in the normal program execution. Sampling, on the other hand, does not alter the size of the state space, but rather, examines only a portion of the entire state space. In testing, for example, the program is executed only on selected input data, since testing all possible inputs is intractable. Most dynamic analysis techniques rely on sampling of the state space, whereas most 11 static analysis methods rely on folding of the state space. 2.1 Dynamic Analysis Dynamic analysis methods such as testing, involve selective exploration of the entire state space of program execution [62]. Testing, as used here, refers to testing a system or portion of a system by providing a set of inputs to a system or portion of a system, and observing the output generated by the system. The output is then compared with the predetermined expected results. Testing is successful when an error is found. Not finding any errors does not ensure that the program has no errors. Thus, testing can reveal the presence of errors but not the absence of errors. No dynamic analysis technique can check all possible input sequences of a program, since the input space for even a moderately sized program can be exceptionally large [49]. If a good sampling of the input space is made, then testing will be successful to some degree. However, if a bad sampling is made, many errors will be missed. Unfortunately, there is no method for determining a good sampling of the input space. Thus, dynamic analysis techniques can only demonstrate the presence of errors in a program, they cannot demonstrate that the program is error-free [49, 56], and so dynamic analysis techniques cannot provide the levels of confidence desired to ensure that a critical system will operate as expected. 2.2 Static Analysis Techniques Static analysis techniques create a model of the system to be analyzed, and check the model to make sure it satisfies some desired properties. There are many different ways to model system behavior: axiomatic methods [37], state-based methods, such as mode table models [4, 30, 32], Statecharts [20, 21, 41], and finite state machines [38], and a plethora of other methods. There are 12 also many different ways of analyzing the models. Static methods for analyzing models include reachability analysis, model checking, and theorem proving. Section 2.2.1 discusses reachability analysis. Section 2.2.2 discusses model checking, and Section 2.2.3 discusses theorem proving methods. The discussion of each method includes an analysis of the strengths and weaknesses of the method. 2.2.1 Reachability Analysis Reachability analysis is a group of analysis techniques that involves the systematic enumeration of all reachable states in a finite-state model; that is, a state transition model of larger modules or of a complete system, is constructed from models of individual processes [48, 62, 67]. The re- sulting composite state-transition model is generally referred to as a reachability graph. 
Once the reachability graph is constructed, it can be analyzed for certain general properties. Typically, reach- ability models abstract away all details of execution except for synchronization structure. Thus, reachability analysis is primarily used to verify properties related to the synchronization structure of software, such as freedom from deadlock, livelock (absence of starvation), race conditions, and mutual exclusion [62, 67]. Reachability analysis is attractive because it is relatively straightforward to automate and conceptually simple. Reachability analysis is attractive and widely used, however it suffers from several drawbacks. The biggest drawback to practical application of reachability analysis to real systems is the well known state space explosion problem. In the worst case, as the number of processes in a concurrent system increases, the total number of reachable states in the system grows as the product of the numbers of states of all the processes [62]. This exponential growth in the number of states from the composition of processes limits the analysis of synchronization structure to small collections of tasks and can easily make it impractical to analyze even a simple system. According to Young [67], 13 the task limitation is on the order of a dozen. Yeh [62, 63] introduced a divide and conquer compo- sitional approach that may help reduce the limitations of reachability analysis that result from the state space explosion problem [67]. However, reachability analysis suffers from other limitations as well. Since reachability analysis is primarily used to check the synchronization structure of a program, it is not as powerful (comprehensive) as theorem proving or testing [62, 67]. Theorem proving and testing can be used to verify a wider variety of properties, such as functional correctness. In general, the number of states grows exponentially in the number of independent items one models. The more details that are modeled, the greater the number of states that are needed to represent the model. This is why for analysis of synchronization structure, for example, all details except synchronization structure are absu'acted away from the model. The abstractions make the analysis tractable, but they also may limit the accuracy of the analysis, and abstractions may limit the types of properties that can be analyzed. For example, to avoid the state space explosion problem, details related to flow of control may be abstracted away by not completely modeling the values of the variables that determine control flow. As a result, spurious errors may be reported; that is, a state representing an error condition may be reachable in the reachability graph model of a system and will be reported as an error, when in actuality, certain variable values that were not modeled completely would preclude that state from being reached in the actual system. Thus, the abstractions made can make the reachability analysis conservative (i.e., all true errors will be reported, but spurious errors may also be reported). It is then left up to the analyst to determine which reported errors are true errors and which reported errors are spurious errors. 14 2.2.2 Model Checking The focus of model checking is on checking a model of a system to determine if certain applica- tion dependent properties are satisfied in the model. These properties are all temporal properties and thus, involve, in some way, the sequencing or ordering of events in time. 
Traditionally, model check— ing has been applied to hardware verification. More recently, Atlee and Gannon [3, 4], Wing [60], and Bharadwaj and Heitrneyer [6] have applied model checking to software requirements. Their work will be discussed in Chapter 3, Section 3.2. Early work by Clarke and his colleagues involved the use of explicit model checking in the area of hardware verification and concurrent program‘verification [7, 13, 14]. Explicit model checking requires two inputs: a behavioral model of the system, and a formal specification of system behav- ior [9, 13]. The first input to a model checker is in the form of a state transition graph. The second input, the formal specification of system behavior, is a logical formula expressing certain proper- ties that must hold true in the system. Some properties of interest are safety and liveness proper- ties [3, 4, 7, 13, 14, 22]. Safety properties generally state what should not happen and are usually required to remain invariant throughout the lifetime of the entire system [3, 4]. Liveness proper- ties state that some current state leads to an expected future state (for example, forward progress is made) [3]. Specifying requirements about safety and liveness properties reqmres a logic language capable of describing the relative ordering of events in time (a temporal logic language) [22]. There are many different variations of temporal logic languages that can be used. One of the more popular temporal logic languages, and the one Clarke and his colleagues use [7, 13, 14, 42], is Computation Tree Logic (CTL), a branching time temporal logic; viewing time as linear suggests that at each moment there is only one possible future, whereas viewing time as branching suggests that at each moment, time may split into alternate courses representing different possible futures [17]. The details of CH. 15 are not important for the purposes of this research and will not be discussed. See [13, 14] for further details. Model checking is concerned with determining if a specific property, expressed as a formula in temporal logic, holds in a system. Figure 2.1 shows a graphical depiction of the inputs and outputs of a model checker [3]. The notation M, so ]= f means that formula f holds at state so in M [13]. To check if temporal logic formula f holds in M, both the formula f and a specification of the system are input to the model checker (Figure 2.1). The model checking algorithm operates in stages. In the \ / Connect “In Figure 2.1: Model checker inputs and outputs. first stage, each state of the finite state transition model M is labeled with the formulas that hold in that state. In the second stage, the model checker examines the model to determine if the temporal logic formula f holds in the model. If the system specification was translated correctly into a state transition model, then a formula determined true by the model checker must also hold true for the corresponding specification [7]. For example, consider Figure 2.2. Let the specification require that at some time in the future, relative to state 3, there is some state in which the formula f = (a A b) holds (assume that the model checker has already completed the first step of labeling the states with the formulas that hold in them; not all formulas that hold in the states are shown). The CTL formula l6 Figure 2.2: Reachability analysis using a model checker. EF( f ) means that there is some path from s that leads to a state at which f holds. 
A model checker is also equipped with the capability to generate a counterexample in certain situations. If the model checker determines that a formula is false, it will, if possible, provide a counterexample showing that the negation of the formula is true. For example, consider Figure 2.3. Let the specification require that formula f holds in every state on every path from s0; this is represented by the CTL formula AG(f). The model checker will check the label at s0 to see if the CTL formula AG(f) is in the set of labels. Since the formula does not hold globally in the model shown, the model checker will attempt to find a counterexample, that is, a path to a state in which ¬f holds; in this case, it will report either the path from s0 to s1 or the path from s0 to s2. A counterexample will not be generated, for example, when the model checker is checking whether a specific property eventually holds; if the model checker finds that the property does not eventually hold, there is no counterexample to generate to show this.

    Figure 2.3: Model checking's counterexample feature.

The major disadvantage of model checking is the explicit enumeration of the state space. As the complexity of a model increases, the size of the state space can, in certain situations, increase exponentially. Since the model checker searches a model by explicitly enumerating the state space, its running time scales linearly with the number of states. This means that the model checker's running time increases exponentially as the complexity of the system increases. Thus, model checking suffers from the state space explosion problem. The state space explosion problem can be alleviated by representing the state space symbolically rather than explicitly [9, 10]. This can be done by representing states, and the transitions between states, as Boolean formulas [35]. The details of symbolic model checking are not important here and are not discussed in this dissertation. For details, see [10, 42].

To summarize, model checking has several advantageous features:

- it can automatically verify that required properties hold in a finite state model,
- it can generate a counterexample for some properties if it determines that a specific property does not hold in the model, and
- in practice, model checkers can run quickly [7, 13, 14, 35].

However, as with all analysis methods, model checking also suffers from several problems:

- it can be difficult or impossible to specify some properties in the restricted temporal logic [2, 4] used in model checkers, so no verification can be performed for such properties,
- in some cases, the complexity of temporal logic formulas makes it difficult to know what has actually been specified and proven, and
- the explicit generation of the state space results in the state space explosion problem (although this can be somewhat alleviated by using symbolic model checking).
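As a toy illustration of the symbolic idea (a sketch for intuition only; practical symbolic model checkers use Binary Decision Diagrams), a set of states over Boolean variables can be held as a single characteristic formula instead of being enumerated. Here the "formula" is simply a Python predicate, and the variable names are invented.

    from itertools import product

    # One formula stands for every state that satisfies it.
    region = lambda x, y, z: x and (y or z)     # a set of states over (x, y, z)

    explicit = [s for s in product([False, True], repeat=3) if region(*s)]
    print(len(explicit))    # 3 of the 8 possible states are in the set

The formula occupies constant space no matter how many states satisfy it, while the explicit set can grow exponentially with the number of variables; this is the leverage symbolic representation provides.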
We have now examined reachability analysis and model checking, two closely related static techniques, and described the strengths and weaknesses of each. We next look at a static method that is not closely related to the previous two techniques, namely, theorem proving.

2.2.3 Theorem Proving

Theorem proving is used extensively in the area of hardware verification, and there is a plethora of different theorem proving systems in use: HOL [36, 58], Boyer-Moore [16, 43], PVS [15, 55], Knuth-Bendix [16], and the Larch Prover [19], to name a few. Theorem proving is also used in the software arena, mainly to verify that implementations satisfy some specification [52, 53, 60]. More recently, theorem proving is being considered for use in the analysis of requirements for completeness and consistency [26, 27, 28, 30, 32].

To solve the problem described in this dissertation, we evaluated different theorem provers to determine which fit best with our goal of developing a method for ANDing and ORing complex logical expressions together to check for contradictions and tautologies in an automated, timely, and accurate way. The information in Sections 2.2.3.1 through 2.2.3.3 introduces concepts and techniques of proof systems that are further developed in Section 2.2.3.4, where specific theorem provers are described and discussed. Section 2.2.3.1 provides a discussion of the general characteristics of proof systems. Section 2.2.3.2 discusses two different types of proof systems we investigated, sequent calculus systems and resolution systems, and Section 2.2.3.3 describes some of the search strategies that can be used to enhance the efficiency of proof methods. Four specific theorem provers are described and discussed in Section 2.2.3.4: PVS, PROLOG, the Larch Prover, and Knuth-Bendix. The strengths and weaknesses of the various methods are summarized at the end of this chapter.

2.2.3.1 Characteristics of Proof Systems

The majority of the information contained in Sections 2.2.3.1, 2.2.3.2, and 2.2.3.3 is derived from [16]. All proof systems have four components in common:

1. A set of logical or formal axioms, which are universal truths (formulas that are valid, i.e., true in all interpretations).
2. A set of inference rules, which are transformation rules that map valid formulas to valid formulas.
3. A proof development method.
4. A proof strategy.

The proof development method is the style of proof employed. The method may be top-down (analytic) or bottom-up (synthetic). A top-down approach starts with the conjecture that is to be proved and applies the rules in a backwards manner, reducing the conjecture to subgoals until axioms are derived. The less practical bottom-up approach starts with the axioms and applies the rules until the conjecture to be proved is deduced. In the bottom-up approach it is possible for the prover to continue synthesizing theorems that lead further and further away from the goal; as a result, the goal may never be reached. A combination of top-down and bottom-up can also be used.

There are basically three different facts that can be proven in a proof system. A proof system can be designed to prove that

1. a formula may be deduced from a given set of axioms;
2. a single formula of the form (a1 ∧ a2 ∧ ... ∧ an) → c is valid, where the ai's are logical axioms and c is the conjecture to be proved; or
3. a set of axioms taken together with the negation of the goal conjecture is unsatisfiable.

Systems of the first type are known as deduction systems and provide the most natural approach of the three.
In the second approach, the correctness of the result depends on the fact that a formula α is a logical consequence of a finite set of formulas β1, β2, ..., βn if and only if the conjunction of the finite set of formulas implies α, i.e., (β1 ∧ β2 ∧ ... ∧ βn) → α is valid (is a tautology); systems of this type are known as affirmation systems. More simply, the above formula means that if all of the formulas β1, β2, ..., βn are valid, then α is valid, and it will never be the case that all of the βi's are valid and α is not valid.

The third approach corresponds to a proof by contradiction; the goal is assumed false and the system shows that this leads to a contradiction. The correctness of this third style of proof depends on the fact that if the set of axioms together with the formula ¬α is inconsistent (i.e., unsatisfiable), then α must be a logical consequence of the axioms. Resolution systems use this last style of proof.

A proof strategy is imposed on the development method in order to facilitate proof development. There are many different strategies that may be used. Essentially, a strategy may include using derived rules of inference, specifying directions on how to apply a given set of rules, and using special procedures for dealing with subtasks. Derived rules of inference are rules that combine several inferences in the original system into one step. Derived rules do not increase the power of the system in a technical sense, but merely save time in practice; any application of a derived rule could be translated into the sequence of applications of the rules from which it was derived [18]. The directional component determines which rules should be applied and to which formulas, and it affects both the speed with which a proof is derived and the naturalness of the proof. The special procedures for dealing with subtasks include the use of decision procedures and procedures for dealing with difficult tasks such as using induction to prove theorems. Search strategies are described in more detail in Section 2.2.3.3.

2.2.3.2 Basic Proof Systems

There are several different types of theorem proving systems, including semantic tableaux [18], natural deduction [16], sequent calculus [16], and resolution-based systems [16]. Only sequent calculus systems and resolution-based systems will be considered further here. Natural deduction systems will not be considered because such systems are not designed for developing proofs in an efficient manner; rather, they are designed for displaying a proof in a "natural" form once the proof has been derived [16]. Consequently, other approaches to proof development are more efficient, particularly the sequent calculus approach. Resolution and semantic tableaux systems are similar: both are refutation systems, and both represent information in the form of clauses [18]. Semantic tableaux systems use clauses in disjunctive normal form, while resolution systems use clauses in conjunctive normal form. Both resolution and semantic tableaux systems are amenable to automation.

2.2.3.2.1 Sequent Calculus Systems

A sequent is an expression of the form Γ ⊢ Δ, where Γ and Δ are finite sets (possibly empty) of formulas. The meaning of the sequent Γ ⊢ Δ is (γ1 ∧ γ2 ∧ ... ∧ γn) → (δ1 ∨ δ2 ∨ ... ∨ δm), which says that if all of the formulas on the left of the arrow are true, then at least one of the formulas on the right of the arrow is true.
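Written in LaTeX, and adding the standard conventions for empty sides (an empty Γ reads as true and an empty Δ as false; these conventions are standard but not stated above), the meaning of a sequent is:

    \{\gamma_1,\ldots,\gamma_n\} \vdash \{\delta_1,\ldots,\delta_m\}
    \qquad\text{abbreviates}\qquad
    (\gamma_1 \wedge \gamma_2 \wedge \cdots \wedge \gamma_n)
    \rightarrow
    (\delta_1 \vee \delta_2 \vee \cdots \vee \delta_m)

Under these conventions, ⊢ δ asserts that δ is valid outright, and γ ⊢ asserts that γ is unsatisfiable.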
Sequent calculus systems have few, if any, axioms and many inference rules [16]. The reason is that sequent calculus systems are based on the work of Gerhard Gentzen, who developed a system of "natural deduction" intended to allow proofs to be performed in a manner corresponding to human reasoning. The major principle that characterizes "natural deduction" systems is that for each logical symbol of the chosen logic, there should be separate rules that allow its introduction and its removal [16]. Thus, in Gentzen systems, at least two rules of inference are needed for each logical symbol, and so there are many rules of inference and few, if any, axioms. In sequent calculus systems, the inference rules take the form of antecedent and consequent rules, where the antecedent is the left side of the sequent and the consequent is the right side of the sequent (Γ and Δ, respectively). Sequent calculus systems are affirmation systems that use backward reasoning (a top-down approach to proof development); they start with the goal conjecture and reduce it to subgoals by applying the rules backwards until axiom sequents are derived. In sequent systems, proofs are expressed as a tree of sequents built from axiom sequents such that if node N is labeled with Γ ⊢ Δ, then: if N is a leaf node, Γ ⊢ Δ must be an axiom; and if N has children, their labels must be the premises from which Γ ⊢ Δ follows by one of the sequent calculus rules [18]. The label on the root node is the sequent to be proved.

The backward reasoning approach results in a more direct style of proof than the forward reasoning approach, so sequent systems can be efficiently implemented. The Prototype Verification System (PVS) (Section 2.2.3.4.1) is based on the sequent calculus.

2.2.3.2.2 Resolution Systems

Resolution systems are refutation systems; i.e., they do proof by contradiction. They assume the conjecture is false and then show that this leads to a contradiction. The conjecture is represented in clausal form (conjunctive normal form). A clause is a disjunction of a finite set of literals with no literal appearing twice. A formula is a conjunction of a finite set of clauses. As an example of the resolution of two clauses, consider the two clauses C = (x ∨ C′) and D = (¬x ∨ D′), where C′ is the rest of clause C and D′ is the rest of clause D [46]. The two clauses contain two opposing literals, x and ¬x. The clause (C′ ∨ D′), containing all literals of the two clauses except the two opposing ones, is called the resolvent of C and D. The resolution of two clauses is the deduction of a resolvent from the two clauses (if one exists). For example, the result of applying resolution to the following two clauses

    C1 ∨ L ∨ C2        (2.1)
    D1 ∨ ¬L ∨ D2       (2.2)

is the resolvent C1 ∨ C2 ∨ D1 ∨ D2. Resolution systems generate resolvents from clauses until a contradiction is reached.

The main problem with resolution systems is to extract from a proof (by resolution) the steps of the proof that are truly necessary [16]. This problem is partly solved by the imposition of search strategies, but even with these strategies there remains the problem of the vast search spaces inherent in proving complex theorems via strategies that must be complete (i.e., assured always to prove any theorem) for all of first-order logic. PROLOG is a resolution-based theorem prover.
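A single propositional resolution step is easy to state in code. The sketch below (an illustrative encoding written for this discussion, with clauses as sets of signed literals) computes the resolvents of two clauses.

    def resolve(c, d):
        """Return the set of resolvents of clauses c and d.

        A clause is a frozenset of literals; a literal is a (name, polarity) pair.
        """
        resolvents = set()
        for (name, pol) in c:
            if (name, not pol) in d:                  # opposing literals L and not-L
                rest = (c - {(name, pol)}) | (d - {(name, not pol)})
                resolvents.add(frozenset(rest))
        return resolvents

    C = frozenset({("L", True), ("c1", True)})        # c1 v L
    D = frozenset({("L", False), ("d1", True)})       # d1 v not-L
    print(resolve(C, D))    # {frozenset({('c1', True), ('d1', True)})}, i.e., c1 v d1

A refutation prover applies this step repeatedly to the clause set (the axioms plus the negated conjecture in conjunctive normal form) until the empty clause, representing a contradiction, is derived.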
Resolution will be discussed more thoroughly in relation to PROLOG (Section 2.2.3.4.2).

2.2.3.3 Search Strategies

It is necessary to impose search strategies on proof systems in order to avoid the redundancy involved in simply carrying out all possible deduction sequences. There are two major types of search strategies: resolution strategies (strategies that were created specifically for resolution systems, but that can be applied to other theorem proving techniques) and natural strategies [16]. These strategies can also be classified as complete strategies and heuristic strategies, respectively. Not all strategies will be considered, since there are many of them; only a brief overview of the types will be presented.

2.2.3.3.1 Resolution Strategies

Resolution strategies are complete in the sense that they are assured always to prove any theorem that follows from the axioms [16, 46]. There are three major categories of resolution strategies: simplification, refinement, and ordering.

Simplification strategies are strategies for removing redundant clauses. They include subsumption of clauses and demodulation. A clause C subsumes a clause D if a substitution θ exists such that every literal in Cθ appears in D [16]. For example, P(a, x) ∨ P(y, b) subsumes P(a, b), where a and b are constants and x and y are variables; here, θ is x = b, y = a. There are several different versions of demodulation, but the basic idea is that equations are applied as destructive rewrite rules to replace certain terms by others. The rewrite rules are destructive in the sense that the replaced forms disappear (are removed).

Refinement strategies are those strategies that determine the clauses to which resolution should be applied and those to which it should not. Examples of refinement strategies are the set-of-support strategy and hyper-resolution. In the set-of-support strategy, a subset of the set of clauses is taken as the "goal," and resolvents are derived from these clauses, and then from the resolvents themselves, until the empty clause results. When the set-of-support strategy is used, a reasoning program is not allowed to apply an inference rule (resolution, for example) unless at least one of the clauses to which the rule is being applied to yield a resolvent has been deduced from some specified subset of the input clauses, or is a member of that specified subset [61]. For example, let S be a set of clauses. Choose a nonempty subset T of S. The subset T is called the set-of-support, and the clauses in T are said to be supported or to have support. An inference rule cannot be applied unless at least one of the clauses to which it is being applied to yield a resolvent is a member of T or has been deduced from T.

Hyper-resolution is an inference rule that combines several resolution steps into one. Hyper-resolution is applied to a set of clauses to produce a positive clause (a clause containing no negative literals) [61]. The set of clauses to which hyper-resolution is applied must contain one negative clause (a clause with no positive literals) or one mixed clause (a clause with both positive and negative literals), and the rest must be positive clauses. The number of positive clauses in the set must equal the number of negative literals in the negative or mixed clause.
For example, consider the following set of clauses:

    ¬Path(x, y) ∨ ¬Path(y, z) ∨ Accessible(z, x)          (2.3)
    Path(townA, townB) ∨ Inaccessible(townC, townA)       (2.4)
    Path(townB, townC)                                    (2.5)

Clause 2.3 is a mixed clause containing two negative literals and one positive literal. Clause 2.4 is a positive clause containing two positive literals, and clause 2.5 is a positive clause containing one positive literal. The number of positive clauses is equal to the number of negative literals in the mixed clause. Applying hyper-resolution to these clauses yields the clause Accessible(townC, townA) ∨ Inaccessible(townC, townA), where the clauses have the following interpretations: if there is a path from x to y and from y to z, then z is accessible from x; if there is no path from townA to townB, then townC is inaccessible from townA; there is a path from townB to townC; and, for the result, if townC is not accessible from townA, then townC is inaccessible from townA. Normally, the above result would require several steps to determine.

The third category of resolution strategies considered here is ordering strategies. Ordering strategies determine the order in which a set of clauses should have resolution applied to them [16]. Examples of ordering strategies include depth-first and breadth-first search and the unit preference strategy. The unit preference strategy requires that resolution be applied to pairs of clauses such that one of them is a unit clause (a clause containing a single literal) before it can be applied to more general clauses.

2.2.3.3.2 Natural Strategies

Natural strategies are heuristic in the sense that they are not assured to prove any theorem, but they may be more efficient and natural in the cases where they do succeed [16]. Natural strategies are usually associated with natural deduction systems or sequent calculus systems. They are intended to capture some aspect of human reasoning (thus the term natural). There are many types of natural strategies; a few of these are reduction, unification, and the use of decision procedures. A reduction is a rewrite rule, where a rewrite rule is a rule used for replacing any instance of one expression by the corresponding instance of another expression. In terms of natural systems, unification is the process of finding appropriate terms to substitute for variables in the application of quantifier rules. Decision procedures are built-in theories and models. The built-in theories are special inference rules or procedures that take the place of sets of axioms for certain theories.
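Returning briefly to the subsumption check introduced under simplification strategies, the brute-force sketch below searches for a substitution θ over a small set of constants such that every literal of Cθ appears in D. This is an illustrative ground encoding written for this discussion; real provers use far more efficient algorithms.

    from itertools import product

    def subsumes(c, d, variables, constants):
        """True if some substitution theta maps every literal of clause c into d.

        Clauses are sets of atoms; an atom is a tuple (predicate, arg1, arg2, ...).
        """
        def apply(theta, atom):
            return (atom[0],) + tuple(theta.get(arg, arg) for arg in atom[1:])
        for values in product(constants, repeat=len(variables)):
            theta = dict(zip(variables, values))
            if all(apply(theta, lit) in d for lit in c):
                return True
        return False

    C = {("P", "a", "x"), ("P", "y", "b")}    # P(a, x) v P(y, b)
    D = {("P", "a", "b")}                     # P(a, b)
    print(subsumes(C, D, ["x", "y"], ["a", "b"]))   # True, via x -> b, y -> a

Since C subsumes D, a prover using this simplification strategy would discard D as redundant, shrinking the search space without losing completeness.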
2.2.3.4 Specific Theorem Provers

We describe and discuss four specific theorem provers in this section: PVS, PROLOG, the Larch Prover, and Knuth-Bendix. The theorem prover incorporated into PVS is based on the sequent calculus. PROLOG is a resolution refutation system, and the Larch Prover and Knuth-Bendix are rewrite systems. The strengths and weaknesses of these four systems are presented, and the ability of each system to solve the problem described in this dissertation is also discussed.

2.2.3.4.1 Prototype Verification System

The Prototype Verification System (PVS) is a verification system that provides an interactive environment for the development and analysis of formal specifications [15, 44, 45]. PVS consists of a specification language, a parser, a typechecker, a proof checker, specification libraries, and browsing tools.

PVS is designed to help in the detection of errors and in the confirmation of "correctness" of hardware and software systems during early life-cycle applications of formal methods. Formal support for conceptualization and debugging is provided during the early design stages, in which both requirements and design are expressed in abstract terms. A rich type system and correspondingly rigorous typechecking is one way in which PVS supports early error detection. For example, an invariant to be maintained by a state machine can be embedded in PVS types by expressing it as a type constraint. In addition, typechecking can generate proof obligations that amount to a very strong consistency check on certain aspects of the specification.

A theorem prover is provided in PVS as another way that PVS supports error detection. An effective way to understand the content of a specification and to identify errors in it is to attempt to prove properties about the specification. Proving properties about a specification can be done in two ways: incidentally, while attempting to prove a "real" theorem, such as "does a particular algorithm achieve its purpose"; or deliberately, by challenging a specification as part of a validation process. A challenge is a test case posed as a putative (supposed) theorem and is of the form: "if this specification is correct, then the following ought to follow."

The PVS proof checker is an interactive theorem prover based on the sequent calculus (described in Section 2.2.3.2.1). To reiterate, a sequent is an expression of the form Γ ⊢ Δ, where Γ and Δ are finite sets (possibly empty) of formulas. The meaning of the sequent is (γ1 ∧ γ2 ∧ ... ∧ γn) → (δ1 ∨ δ2 ∨ ... ∨ δm), which says that if all of the formulas on the left of the arrow are true, then at least one of the formulas on the right of the arrow is true. A proof goal in PVS is a sequent and has the form shown in Figure 2.4. The sequent formulas γi and δj are PVS formulas, the γi being the antecedents and the δj the consequents. Restating the meaning of the sequent in terms of antecedents and consequents: the conjunction of the antecedents of a sequent implies the disjunction of the consequents of the sequent.

    γ1
    γ2
    γ3
    ...
    |-------
    δ1
    δ2
    ...

    Figure 2.4: PVS sequent formula.

PVS is an affirmation system that uses backward reasoning (a top-down approach to proof development); it starts with the goal conjecture and reduces the goal to subgoals by applying the inference rules backwards until axiom sequents are derived. The inference rules in PVS are in the form of antecedent and consequent rules. The reason for this is that formulas that are candidates for reduction can appear in either the antecedent or the consequent of a sequent, and different results need to be generated in each case. For example, the antecedent and consequent rules for conjunction in PVS are:

    A, B, Γ ⊢ Δ                     Γ ⊢ A, Δ    Γ ⊢ B, Δ
    ------------  (∧ ⊢)             ----------------------  (⊢ ∧)
    A ∧ B, Γ ⊢ Δ                    Γ ⊢ A ∧ B, Δ

The interpretations, respectively, are: if a conjunction appears in the antecedent of a sequent subgoal, then applying the antecedent conjunction rule to the subgoal yields A, B, Γ ⊢ Δ; and if a conjunction appears in the consequent of a sequent subgoal, applying the consequent conjunction rule to the subgoal yields two subgoals, Γ ⊢ A, Δ and Γ ⊢ B, Δ. The symbols '∧ ⊢' and '⊢ ∧' signify that the conjunction to which the rule is applied appears in the antecedent or the consequent, respectively.
As an example of the use of the antecedent and consequent rules in PVS, assume we have the goal sequent A ∧ B, C ∨ D ⊢ B ∧ D, A ∨ C. Applying the consequent conjunction rule yields the following two subgoals:

    A ∧ B              A ∧ B
    C ∨ D              C ∨ D
    |-------           |-------
    B                  D
    A ∨ C              A ∨ C

The first of these new subgoals reduces to true by another rule, called the propositional axiom rule, which is automatically applied to every sequent that is ever generated in a proof. Application of a rule for disjunctions, analogous to the rule for conjunctions, to the second subgoal, followed by application of the propositional axiom rule, reduces the second subgoal to true. In actuality, PVS will automatically reduce the original goal to true by automatic recursive application of a single rule, followed by automatic application of the propositional axiom rule.

There are three kinds of proof commands in PVS: primitive rules, defined rules, and proof strategies. The primitive rules define the underlying logic of PVS and include structural rules, propositional rules, quantifier rules, and equality rules, as well as some others. Defined rules are strategies that are applied in a single atomic step so that only the final effect of the strategy is visible; intermediate steps are hidden from the user. Proof strategies are intended to capture patterns of inference steps and are formed by combining proof commands in various ways. A proof tree is maintained by the prover, and it is the goal of the user to construct a proof tree that is complete [55]. A complete proof tree is one in which all of the leaves are recognized as true. Each node in the proof tree is a proof goal (a subgoal: a goal that is yet to be discharged, i.e., proved).

PVS contains a large prelude file of built-in theories that can be used during the proof process. Some of the built-in theories included in the prelude file are Boolean properties, quantifier properties, equality properties, functions, relations, orders, sequences, well-founded induction, measure induction, sets and set lemmas, reals and real properties, rationals and rational properties, exponentiation, and so on. In addition, PVS contains decision procedures for equality and linear inequality that are complete for linear arithmetic (multiplication by literal constants; expressions of the form 2x + 3y <= 42) [51, 55]. These decision procedures are the workhorses of almost any nontrivial PVS proof [15] and are used by PVS to prove trivial theorems, to simplify complex expressions (particularly definitions), and to perform pattern matching. Linear arithmetic reasoning is performed over the natural numbers and the reals. The decision procedures deal solely with ground formulas (that is, formulas that contain no quantifiers). Tedious equality and arithmetic reasoning is simplified by the procedures so that the number of trivial subgoals is minimized and the sequent formulas are kept simple. Efficient data structures are maintained by the decision procedures; assumptions that are true in the current context are recorded in these data structures. A rule called record is used to add more assumptions to the data structures. The assumptions that are recorded are antecedent formulas and negations of consequent formulas. Recall that a proof in a sequent calculus system is expressed as a tree of sequents where the root is labeled with the sequent to be proved. New assertions that are added to the data structures are valid for any descendant proof node of the current sequent and are automatically employed whenever the decision procedures are invoked at the lower nodes.
An example of a rule that both adds assumptions to the data structures and attempts simplification using the decision procedures is the assert rule. Rules that use the decision procedures and add assumptions to their data structures can be sensitive to the order in which they are invoked, since each application of these rules adds assumptions to the data structures. As a result, it may be necessary to apply a rule more than once to achieve the desired effect. For example, let a formula A be asserted before a formula B, where B is needed to simplify A. The assert rule must be re-applied to effect the simplification of A, since B was not yet recorded when A was first asserted.

Although the decision procedures mainly deal with linear arithmetic, they can also do some simple nonlinear reasoning; however, the decision procedures are not complete in this case [51]. There are some modest extensions to the decision procedures for dealing with expressions involving nonlinear sub-terms, using simplifications such as (x + y) * (x - y) = (x * x) - (y * y) and simplifications involving division, such as (x * y) / x = y. In general, however, to prove nonlinear facts, formulas from the prelude must be cited.

PVS is a powerful system that combines a variety of methods to reason about expressions and arrive at solutions; however, it too has some drawbacks. If a subgoal contains all atomic expressions and the user cannot complete the proof for this subgoal, then the subgoal may or may not be unprovable; the burden is on the user to come up with a proof if one exists. Since the subgoal is in a reduced form, it may be possible for the user to examine the subgoal and detect obvious problems with the original conjecture. If no obvious problems can be seen, the user does not know whether the goal is unprovable or whether the knowledge required to prove the goal is lacking. In addition, since the burden of constructing a proof is placed on the user, PVS may not be amenable to automation if a common strategy cannot be found to arrive at a solution to the problem posed.

2.2.3.4.2 PROLOG

PROLOG is based upon the principles of unification and resolution (Section 2.2.3.2.2) [16]. It uses a backward reasoning, proof-by-contradiction methodology, and it uses incomplete search strategies, i.e., strategies that are not guaranteed to prove any given theorem, but that may be more efficient and natural in the cases where they succeed (Section 2.2.3.3). A PROLOG program consists of a set of facts and rules expressed as Horn clauses. A clause is a disjunction of a finite set of literals with no literal appearing twice (clauses were discussed in Section 2.2.3.2.2). In general, Horn clauses are clauses that contain at most one positive literal; in PROLOG, program clauses contain exactly one positive literal. A query in PROLOG is a Horn clause with no positive literals; it is the negation of an existence theorem converted to PROLOG clausal form (derived below). For example, the clause A ∨ ¬B1 ∨ ¬B2 ∨ ... ∨ ¬Bn is logically equivalent to A ← (B1 ∧ B2 ∧ ... ∧ Bn), which, in PROLOG, is written

    A :- B1, B2, ..., Bn.

where the head of the clause is A and the tail of the clause is the sequence B1, B2, ..., Bn. A query has the form ¬A1 ∨ ¬A2 ∨ ... ∨ ¬Am, which is equivalent to ¬(A1 ∧ A2 ∧ ... ∧ Am). In PROLOG the query has the form

    ?- A1, A2, ..., Am.
Thus, respectively, the symbols ":-", ",", and "?-" in PROLOG have the same meanings as the symbols "←", "∧", and "¬". For example, the clause Greater(X, Z) ∨ ¬GreaterBase(X, Y) ∨ ¬GreaterBase(Y, Z) is logically equivalent to Greater(X, Z) ← GreaterBase(X, Y) ∧ GreaterBase(Y, Z), and in PROLOG is written

    Greater(X, Z) :- GreaterBase(X, Y), GreaterBase(Y, Z).

The interpretation of this clause is: if (x > y) ∧ (y > z), then (x > z). Assuming that the facts GreaterBase(5, 3) and GreaterBase(3, 1) are in the rule base along with the above clause, an example of a query relating to the above clause is ¬GreaterBase(X, 3) ∨ ¬GreaterBase(3, Z) ∨ ¬Greater(X, Z), which is equivalent to ¬(GreaterBase(X, 3) ∧ GreaterBase(3, Z) ∧ Greater(X, Z)), and which in PROLOG is

    ?- GreaterBase(X, 3), GreaterBase(3, Z), Greater(X, Z).

The result of the query would be X = 5, Z = 1.

In addition, all program clauses and queries are implicitly prefixed by universal quantifiers. For example, let x1, ..., xj be the variables appearing in a query. Then the query has the form ∀x1 ... ∀xj (¬A1 ∨ ¬A2 ∨ ... ∨ ¬Am), which is equivalent to ∀x1 ... ∀xj ¬(A1 ∧ A2 ∧ ... ∧ Am), which in turn is equivalent to ¬(∃x1 ... ∃xj (A1 ∧ A2 ∧ ... ∧ Am)). Thus, as previously mentioned, a query is the negation of an existence theorem converted to PROLOG clausal form.

The computation strategy used in PROLOG (based on unification and resolution) starts with the goal clause and unifies it with the head of a program clause to produce a resolvent; this resolvent becomes the new goal. The leftmost subgoal of each goal is selected, and a depth-first search strategy is used. A built-in ordering rule selects the top-most program clause for unification with the selected goal. In essence, PROLOG searches through all possible sequences of deductions from the initial set of clauses until a solution is found. A solution to a query may be a yes/no answer or a specific instantiation of the variable(s) in the query. Figure 2.5 shows two possible solution scenarios: the first query demonstrates a yes/no solution, while the second query demonstrates finding an instantiation for the variables X and Y in the query. Note that there are other possible solutions to the latter query that could have been found; the search strategy and ordering rule determine which solutions are found first. The user can ask PROLOG to find another solution (and another) if desired; PROLOG will uninstantiate the variable (or variables) and backtrack to seek another solution from there.

    /* Axioms */
    Plus(2,3,5).        /* 2 + 3 = 5 */
    Plus(3,4,7).        /* 3 + 4 = 7 */
    Plus(2,5,7).        /* 2 + 5 = 7 */
    Plus(1,6,7).        /* 1 + 6 = 7 */

    /* Queries */
    ?- Plus(2,3,5).     /* 2 + 3 = 5 ? */
    Yes

    ?- Plus(X,Y,7).     /* X + Y = 7 ? */
    X = 3, Y = 4

    Figure 2.5: PROLOG solution scenarios.

One problem with the search mechanism in PROLOG is that the depth-first search strategy is an incomplete search strategy, in the sense that solutions to some problems may never be found. An example of a set of axioms and a query that result in an infinite search is shown in Figure 2.6; the corresponding search tree is also shown. The speed with which other problems are solved, however, offsets the problem of an incomplete search strategy. In addition, there are mechanisms available to alter the control sequence; these include cut and fail. The cut, written '!' in PROLOG, is a special goal that, when activated, succeeds immediately, but only once [50].
If processing returns to the cut due to backtracking, the cut will fail and cause the failure of its parent goal (i.e., the goal that called the rule in which the cut appeared) as well. The fail predicate is built into PROLOG; when activated as a goal, it immediately fails and thus always causes backtracking. The cut and fail can therefore be used to alter the normal control sequence used in PROLOG.

    /* Axioms */
    Pred(p) :- Pred(q).            /* A.1; if q then p */
    Pred(q) :- Pred(p).            /* A.2; if p then q */
    Pred(q) :- Pred(r), Pred(s).   /* A.3; if r and s then q */
    Pred(r).                       /* A.4; r is true */
    Pred(s).                       /* A.5; s is true */

    /* Query */
    ?- Pred(p).                    /* is p true? */

    ?- Pred(p)
      ?- Pred(q)        /* Deduce Pred(q) from goal and A.1 */
        ?- Pred(p)      /* Deduce Pred(p) from subgoal and A.2 */
          ?- Pred(q)    /* Deduce Pred(q) from subgoal and A.1 */
            ...

    Figure 2.6: Infinite search example in PROLOG.

The example shown in Figure 2.6 illustrates another drawback of PROLOG: the order of the rules in the rule base may affect whether or not a solution is found. Figure 2.7 shows the same example as Figure 2.6, except that the order of axioms A.2 and A.3 has been reversed. PROLOG can now find a solution to the query Pred(p). The finite search tree showing how the solution is obtained when the order of the two axioms is reversed is also shown in the figure. The order of rules in the rule base may also affect the speed with which a solution is found.

    /* Axioms */
    Pred(p) :- Pred(q).            /* A.1; if q then p */
    Pred(q) :- Pred(r), Pred(s).   /* A.2; if r and s then q */
    Pred(q) :- Pred(p).            /* A.3; if p then q */
    Pred(r).                       /* A.4; r is true */
    Pred(s).                       /* A.5; s is true */

    /* Query */
    ?- Pred(p).                    /* is Pred(p) true? */
    Yes

    ?- Pred(p)
      ?- Pred(q)               /* Deduce Pred(q) from goal and A.1 */
        ?- Pred(r), Pred(s)    /* Deduce Pred(r) and Pred(s) from subgoal and A.2 */
          ?- Pred(s)           /* Pred(r) is true from A.4 */
                               /* Pred(s) is true from A.5 */

    Figure 2.7: Finite search example in PROLOG.

Other important attributes of PROLOG that require consideration include the following:

1. PROLOG is based on a closed world assumption: it assumes that the given program contains all true assertions (if a fact is not in the described world, it is not true).
2. Most PROLOG implementations do only integer arithmetic.
3. A term to be evaluated as an arithmetic expression must contain no uninstantiated variables and must be a valid expression.
4. PROLOG is not suitable for number crunching.

The closed world assumption is detrimental for complex domains, since it means that every true fact related to a particular domain must be included. For the domain of research discussed in this dissertation, since PROLOG has no built-in theories for real numbers, all relevant theories for real numbers in our domain of application must be added; if some relevant theories are missed, certain queries may result in no solution being found when there is a solution, or in an infinite loop when there is a solution. Items 2, 3, and 4 are particularly detrimental in the domain of this research, since the problem described in this dissertation requires arithmetic expressions involving real numbers and uninstantiated variables. PROLOG has other attributes as well, but those discussed in this section are the most important to consider in regard to the work described in this dissertation.
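The rule-order behavior of Figures 2.6 and 2.7 can be reproduced with a small simulation of PROLOG's strategy for propositional Horn clauses (leftmost subgoal, top-most rule, depth-first). This sketch is written purely for illustration, with a depth bound standing in for the unbounded recursion a real infinite search would produce.

    class DepthExceeded(Exception):
        """Raised where real PROLOG would recurse forever."""

    def solve(goals, rules, facts, depth=50):
        """PROLOG-style depth-first resolution over propositional Horn clauses."""
        if depth == 0:
            raise DepthExceeded               # models the non-terminating descent
        if not goals:
            return True                       # all subgoals discharged
        goal, rest = goals[0], goals[1:]      # leftmost subgoal is selected first
        if goal in facts and solve(rest, rules, facts, depth - 1):
            return True
        for head, body in rules:              # rules are tried in textual order
            if head == goal and solve(body + rest, rules, facts, depth - 1):
                return True
        return False

    facts = {"r", "s"}
    kb_26 = [("p", ["q"]), ("q", ["p"]), ("q", ["r", "s"])]   # Figure 2.6 order
    kb_27 = [("p", ["q"]), ("q", ["r", "s"]), ("q", ["p"])]   # Figure 2.7 order
    print(solve(["p"], kb_27, facts))         # True: q is proved from r and s
    try:
        solve(["p"], kb_26, facts)
    except DepthExceeded:
        print("infinite search: p -> q -> p -> ...")

Swapping the two rules for q is the only difference between the two knowledge bases, yet it decides whether the query terminates, just as in the figures.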
2.2.3.4.3 Larch Prover

The Larch Prover (LP) is an interactive theorem proving system for multi-typed first-order logic that is intended to assist users in finding and correcting flaws in conjectures, the predominant activity in the early stages of the design process. LP is designed to treat equations as rewrite rules. It can also carry out other inferences, such as induction and proof by cases. The Larch Prover has been used to reason about designs for circuits, concurrent algorithms, hardware, and software [19, 43].

The Larch Prover assists in the proof process by carrying out routine steps in a proof automatically, but it leaves it up to the user to build a complete proof. The underlying methodology of the Larch Prover supports both the forward (bottom-up) reasoning approach and the backward (top-down) reasoning approach. One problem with the forward approach (starting with the axioms and trying to synthesize the goal conjecture) is that the system may continue to synthesize theorems that lead further and further away from the goal; as a result, the goal may never be reached. During the course of a proof, the user chooses commands to apply that use either the forward or the backward reasoning method. In addition, there are some built-in axioms that can be incorporated into the proof specification (the specification created by the user to initiate the proof process) and used during proof development. These axioms are originally specified in the Larch Shared Language (LSL; not discussed in this document) and need to be converted to axioms usable by the Larch Prover.

Equations play a major role in the Larch Prover. Some of LP's inference rules work directly on equations, but most require that equations be oriented into rewrite rules [19]. Rewrite rules and equations have the same logical meaning, but operationally they behave differently. A rewrite rule is an ordered pair ⟨l, r⟩ of terms (a term consists of either a variable, or an operator and a sequence of terms known as its arguments), usually written l → r, where l is not a variable and every variable that occurs in r also occurs in l; the positions of l and r and the direction of the arrow determine the orientation of the rewrite rule. The rule l → r means that whenever an instance of l appears in an equation, it can be rewritten by replacing it with the corresponding instance of r. The restriction placed on l and r is necessary so that the rewrite rule will be terminating (i.e., so that there is no infinite sequence of rewrites). A set of rewrite rules comprises a term-rewriting system, or a rewriting system for short. Equations are oriented into rewrite rules by the Larch Prover, and these rules are then used to reduce terms to normal forms. For example, the assertions shown on the left in Figure 2.8 will be oriented into the rewrite rules shown on the right in the figure. With these rewrite rules (and some declarations that are not shown in the figure), the conjecture 1 < 1 + 1 can be proved. The proof is shown in Figure 2.9.

    Assertions:               Rewrite rules:
    i + 0 == i                i + 0       --> i            (R.1)
    i + s(j) == s(i + j)      i + s(j)    --> s(i + j)     (R.2)
    not(i < 0)                i < 0       --> false        (R.3)
    0 < s(i)                  0 < s(i)    --> true         (R.4)
    s(i) < s(j) == i < j      s(i) < s(j) --> i < j        (R.5)
    1 == s(0)                 1           --> s(0)         (R.6)

    Figure 2.8: Assertions and rewrite rules in the Larch Prover.

    1 < 1 + 1                 Conjecture
    s(0) < s(0) + s(0)        Apply R.6 three times
    s(0) < s(s(0) + 0)        Apply R.2
    s(0) < s(s(0))            Apply R.1
    0 < s(0)                  Apply R.5
    true                      Apply R.4

    Figure 2.9: An example proof using rewrite rules in LP.
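The reduction in Figure 2.9 can be replayed mechanically. The sketch below (an illustrative encoding with terms as nested tuples; this is not LP's actual implementation) applies rules R.1 through R.6, innermost subterms first, until a normal form is reached.

    # A term is a nested tuple such as ("+", a, b), ("<", a, b), ("s", a),
    # or one of the atoms "0", "1", "true", "false".
    def rewrite(t):
        """Apply rules R.1-R.6 once, rewriting innermost subterms first."""
        if isinstance(t, tuple):
            t = (t[0],) + tuple(rewrite(arg) for arg in t[1:])
        if t == "1":
            return ("s", "0")                                        # R.6
        if isinstance(t, str):
            return t
        op, a, b = t[0], t[1], t[2] if len(t) > 2 else None
        if op == "+" and b == "0":
            return a                                                 # R.1
        if op == "+" and isinstance(b, tuple) and b[0] == "s":
            return ("s", ("+", a, b[1]))                             # R.2
        if op == "<" and b == "0":
            return "false"                                           # R.3
        if op == "<" and a == "0" and isinstance(b, tuple) and b[0] == "s":
            return "true"                                            # R.4
        if op == "<" and isinstance(a, tuple) and a[0] == "s" \
                     and isinstance(b, tuple) and b[0] == "s":
            return ("<", a[1], b[1])                                 # R.5
        return t

    def normalize(t):
        while True:
            t2 = rewrite(t)
            if t2 == t:
                return t
            t = t2

    print(normalize(("<", "1", ("+", "1", "1"))))    # prints: true

The reduction order chosen by this sketch differs slightly from the order shown in Figure 2.9, but because the rules are terminating it still reaches the same normal form, true.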
In the sense that it is an interactive theorem proving system used in the early stages of the development process, the Larch Prover is similar to PVS. However, PVS appears to have a more structured and more user-friendly environment for specification and proof development, and a more structured underlying proof methodology. The specification language and interactive theorem prover of PVS are highly integrated; this is not the case for the Larch Prover, where the specification environment (not discussed in this document) and the prover are not integrated at all. In addition, PVS has a large prelude file that contains many predefined theorems that can easily be incorporated into PVS proofs; this feature is also lacking in the Larch Prover. Finally, term-rewriting systems are not amenable to semantic analysis of equations involving arithmetic expressions, and the Larch Prover has no built-in theories for real numbers and operations on real numbers. Both of these capabilities are required to solve the problems described in this dissertation.

2.2.3.4.4 Knuth-Bendix

The Knuth-Bendix theorem prover is concerned almost entirely with the equality relation and with deciding the equality of terms with respect to sets of equations [16]. The Knuth-Bendix system is a term-rewriting system in the sense that an equation is treated as a rewrite rule; i.e., an equation may be oriented into a rule and used to rewrite a term into another term. The rewrite rule can be applied in only one direction and cannot be applied backwards; hence the orientation. For example, the equation a + s(b) = s(a + b) may be oriented into the rule a + s(b) → s(a + b), and the term (a + 0) + s(a + b) may be rewritten, using the rule, into s((a + 0) + (a + b)). In some cases, this procedure makes it possible to prove that a certain equation is a consequence of other equations.

The major application of the Knuth-Bendix procedure (i.e., the area for which it was originally designed) is proof by induction. Consider a specification with the following axioms for addition on the natural numbers [16]: x + 0 = x and x + s(y) = s(x + y), where s() is the successor function. To prove the associativity of '+' according to the principle of induction, we have to prove the base case, (u + v) + 0 = u + (v + 0), and the induction case, (u + v) + s(w) = u + (v + s(w)), based on the assumption that the induction hypothesis, (u + v) + w = u + (v + w), is true (i.e., that there exists a proof for it). From the first axiom, the base case may be reduced to u + v = u + v by substituting u + v for x and rewriting the left and right sides of the equation with the consequent of the axiom; the result is trivially true. In a similar way, using the second axiom above, the induction case may be reduced to the subgoal s((u + v) + w) = s(u + (v + w)). Since by assumption (u + v) + w = u + (v + w) is true, the conjecture is proven. Other applications of Knuth-Bendix are in the areas of program synthesis and refutation theorem proving.
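For clarity, the two reductions in the induction argument above can be written out in LaTeX as follows (a restatement of the steps just described, using the axioms x + 0 = x and x + s(y) = s(x + y)):

    \begin{align*}
    \textbf{Base case:}\quad & (u + v) + 0 = u + (v + 0)
        \;\Longrightarrow\; u + v = u + v \\
    \textbf{Induction case:}\quad & (u + v) + s(w) = u + (v + s(w))
        \;\Longrightarrow\; s((u + v) + w) = s(u + (v + w))
    \end{align*}

In the induction case, the left side rewrites in one step, while the right side rewrites twice, u + (v + s(w)) → u + s(v + w) → s(u + (v + w)); the resulting equation then follows from the induction hypothesis by applying s to both sides.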
2.3 Summary

In this chapter, we discussed several analysis techniques and presented the strengths and weaknesses of each. No single technique, whether dynamic or static, is capable of addressing all analysis concerns [48, 65, 67]. Reachability analysis entails enumeration of all reachable system states and can lead to state space explosion for even moderately sized systems. Early model checking techniques suffered from the same limitation, since they also relied on enumeration of all reachable system states. Symbolic model checking allows larger systems to be analyzed, since the states can be represented symbolically and, therefore, a set of states in the original specification can map to a single representation in the model.

We discussed three types of theorem proving systems in this chapter: sequent calculus systems, resolution-based systems, and rewrite systems. PVS is based on the sequent calculus, PROLOG is a resolution-based system, and the Larch Prover and Knuth-Bendix are both rewrite systems, though they use different types of rules and strategies.

Our investigation of the strengths and weaknesses of the theorem provers discussed in this chapter led us to choose PVS as one of the analysis components for the research described in this dissertation. The decision procedures in PVS give it a major advantage over theorem provers that lack decision procedures; in particular, they are well suited to one aspect of the problem discussed in this dissertation. Rushby notes that "it is enormously tedious to perform verifications involving even modest quantities of arithmetic in systems such as HOL that lack decision procedures" [52]. In addition, PVS allows automation, is freely available, and is well documented and supported.

Chapter 3

Analysis of Software Requirements

In Chapter 2 we discussed analysis techniques, such as model checking and theorem proving, that have traditionally been applied in the area of hardware verification [39, 54] to determine whether certain application-specific properties are satisfied, or to verify that an implementation satisfies its specification [52, 53, 60]. We discussed the strengths and weaknesses of each technique and used this information to help determine which techniques and tools we might use to achieve the goals of this research.

The motivation for the research discussed in this dissertation was to find a way to check large disjunctive and conjunctive expressions to see whether they form tautologies or contradictions, subject to the following criteria: the analysis must be automated, it must generate analysis reports in a reasonable amount of time, it must generate analysis reports with an acceptable level of accuracy, and it must be scalable and generalizable (i.e., applicable to real-world problems and not limited to state-based requirements). The first intended application of our technique was for analyzing software specifications, specifically state-based specifications. In this chapter we discuss several specific techniques for analyzing software requirements and describe the shortcomings of each approach.

Both Heimdahl and Heitmeyer are working on analysis of state-based requirements. Section 3.1 discusses analyzing state-based requirements for completeness and consistency. We discuss Heitmeyer's work in detail in Section 3.1.1. Heimdahl's doctoral work provided the foundation for this research and is discussed in detail in Section 3.1.2. In Section 3.1.3, we identify a problem common to both approaches, namely spurious errors, and discuss in detail the difficulties that occur when checking logical expressions for tautologies and contradictions. Section 3.1.4 summarizes some of the similarities and differences between the two methods and summarizes the strengths and weaknesses of both approaches. Recently, more work is being done in the area of model checking software requirements.
Section 3.2 discusses the work in this area, including work by Atlee and Gannon [4], Sreemani and Atlee [57], Wing and Vaziri-Farahani [60], Anderson et al. [1], and, most recently, Bharadwaj and Heitmeyer [6]. At the end of Section 3.2, we note that spurious errors may also be a problem in the area of model checking software requirements, and we note why spurious errors tend to be a common problem among analysis methods. In Section 3.3 we list four classes of spurious errors that we identified during the course of this research and identify the undetected contradictions that lead to them. Section 3.4 provides a summary of the work in analysis of software requirements, and discusses the shortcomings of the current analysis methods and why an enhanced analysis technique is required.

3.1 Analysis of State-Based Requirements for Completeness and Consistency

Jaffe, Leveson, Heimdahl, and Melhart have described a set of criteria that all requirements documents should satisfy [38]. The criteria include precision, or lack of ambiguity, in the requirements, and robustness. Lack of ambiguity in the requirements means they are free from conflicting requirements and undesired nondeterminism [26]. Critical systems are often reactive systems. The behavior of reactive systems is defined with respect to assumptions about the environment within which the systems operate [38]. A robust system will detect violations of these assumptions (such as unexpected inputs) and respond appropriately to these violations. Since the system is built from the specification, its robustness depends on the completeness of the specification of the environmental assumptions. In terms of a state-based requirements model, robustness implies the following [26]:

- a behavior (transition) must be defined for every possible input in every state,
- the disjunction of the conditions on every transition out of any state must form a tautology, and
- a behavior (transition) must be defined in every state in case there is no input for a given period of time (a timeout).

If a specification is free from ambiguity and is robust, then it is consistent and complete. Both completeness and consistency are essential criteria that should be satisfied by all requirements specifications. These terms will now be defined in terms of a finite state machine model.

Consider the finite state machine shown in Figure 3.1. This model consists of three states, A, B, and C, and three inputs, x, y, and z. The inputs potentially trigger a transition from one state to another when they are received in a current state. A guarding condition is associated with each transition; the guarding conditions are enclosed in brackets and follow the associated transition input (for example, x[a > 10]). If an input is received and the guarding condition is satisfied, then the transition occurs. The a, b, and c in the guarding conditions represent variables in the system. If a specification is complete, then there will be a transition specified for every possible input and input sequence; this implies that the disjunction of the guarding conditions out of a state for a given input must form a tautology. If a specification is consistent, it is deterministic; i.e., there will not be more than one possible transition out of a state under the same conditions. This implies that the pairwise conjunction of the guarding conditions out of a given state for a given input must be unsatisfiable (i.e., must form a contradiction).

    Figure 3.1: Incomplete and inconsistent finite state machine model.
The finite state model shown in Figure 3.1 is both incomplete and inconsistent. The transitions on input x out of state A are incomplete: if a = 10 on input x, no action is specified. In addition, the transitions specified on input y out of state B are inconsistent: if b = 5, a transition will occur either from B to A or from B to C. The disjunction of the guarding conditions out of state C with respect to input z is a tautology, so the transitions out of state C are complete with respect to z. In addition, the pairwise conjunction of the guarding conditions out of state C with respect to z is a contradiction; therefore, the transitions out of state C are also consistent with respect to z. The incompletenesses and inconsistencies in the specification of transitions out of states A and B must both be detected and corrected. If the requirements are incomplete or inconsistent, the design and implementation might be incomplete and inconsistent. For critical systems, this can mean that responses to certain environmental stimuli are not considered, or that conflicting choices exist in certain situations, either of which can lead to a system state that causes harm to life, property, or the environment.

Heitmeyer, Jeffords, and Labaw [30, 31, 32] and Heimdahl [25, 26, 27, 28] are doing research related to the analysis of requirements for completeness and consistency. Heitmeyer and her colleagues use the requirements language Software Cost Reduction (SCR). Heimdahl uses the Requirements State Machine Language (RSML). The underlying model used in both languages is a finite state machine; Heitmeyer and her colleagues use a Moore machine, whereas Heimdahl uses a Mealy machine (in a Moore machine, outputs are associated with states; in a Mealy machine, outputs are associated with transitions). Both requirements languages attempt to abstract away implementation details and describe only the externally visible behavior required of the system. Both languages specify conditions (guarding conditions) that must be satisfied before a transition can occur; these guarding conditions are specified as logical expressions (predicates). Both models use a tabular format to represent different aspects related to states, events, and transitions, since it is easier to comprehend guarding conditions expressed in a tabular format than the equivalent predicate logic expression [41].

SCR uses several different tabular representations to show the relations between different aspects of the system behavior, whereas RSML uses only one type of table. The similarities and differences between the two requirements languages are not important here; essentially, different requirements languages have been chosen to model the same types of systems: real-time reactive systems that fall into the category of critical systems. The important aspect of their work is the analysis of the requirements for completeness and consistency. Both Heitmeyer and Heimdahl rely on ORing and ANDing together information represented in their tables to check the requirements for completeness and consistency. Section 3.1.1 describes the analysis techniques Heitmeyer and her colleagues use to check for completeness and consistency with respect to SCR. Section 3.1.2 describes the analysis techniques Heimdahl uses to check for completeness and consistency with respect to RSML. Section 3.1.3 describes and discusses a problem common to both approaches.

3.1.1 SCR

One tabular representation used in SCR relates states and conditions to output variables.
Figure 3.2 shows the format of this type of SCR table (note that this table is not an exact representation of an actual table in SCR; SCR uses the term mode rather than state). For this type of table, pairs of conditions (or conditioned events) are ANDed within a row to check for mutual exclusion (consistency), and all conditions within a row are ORed together to check for completeness; Figure 3.2 provides a graphical depiction of this method. The ci,j's represent conditions (logical expressions), tj represents a state variable, and the vi's represent values that tj can be assigned. The semantics of the table shown in Figure 3.2 are: in state i, if condition ci,1 is true, then tj = v1; if condition ci,2 is true, then tj = v2; and so on, to condition ci,p. The table is deterministic (consistent) if none of the ci,j's in a row overlap. To check the table for consistency, a pairwise ANDing of the conditions in a row must be performed, and the result of each conjunction should be false: for all i and all k ≠ k′, ci,k ∧ ci,k′ = FALSE. The table is complete if the disjunction of all conditions in a row is true: for all i, ci,1 ∨ ci,2 ∨ ... ∨ ci,p = TRUE. Completeness means that at least one condition can be satisfied, and consistency ensures that only one condition can be satisfied. Their tool applies a tableaux-based decision procedure to determine whether the logical expressions representing consistency and completeness given above are tautologies [31].

    States     Conditions
    state 1    c1,1   c1,2   ...   c1,p
    state 2    c2,1   c2,2   ...   c2,p
    ...
    state n    cn,1   cn,2   ...   cn,p

    Completeness example: c1,1 ∨ c1,2 ∨ ... ∨ c1,p = TRUE
    Consistency example:  c1,1 ∧ c1,2 = FALSE

    Figure 3.2: Completeness and consistency checks within rows.

Consider Figure 3.3, which shows a state transition model with three states, A, B, and C, and nine transitions. Each state is annotated with a set of conditions, each followed by an assignment to an output variable. System variables are represented by a, b, and c. T is an output variable that can take on the values Low, Norm, and High. If the condition c > 1 is true in state C, T will be set to High. The other condition/assignment annotations are interpreted likewise.

    Figure 3.3: FSM annotated with conditions and output variable assignments.

Figure 3.4 shows the tabular representation relating states and conditions to output variables derived from the specification. To interpret the tabular representation, assume that the current state is state B and that the condition b = 5 is true; the value of T will be set to Norm. The following results are obtained from performing consistency analysis on the conditions shown in the table:

- The conditions out of state A are inconsistent; that is, more than one condition out of state A will be true if a > 10 or a < 10:
  - (a < 10) ∧ (a ≠ 10) does not form a contradiction,
  - (a < 10) ∧ (a > 10) does form a contradiction, and
  - (a ≠ 10) ∧ (a > 10) does not form a contradiction.
- The conditions out of state B are inconsistent; that is, more than one condition out of state B will be true if b = 5:
  - (b < 5) ∧ (b = 5) does form a contradiction,
  - (b < 5) ∧ (b >= 5) does form a contradiction, and
  - (b = 5) ∧ (b >= 5) does not form a contradiction.
- The conditions out of state C are consistent; all pairwise conjunctions form contradictions.
[Figure 3.4: Table relating states and conditions to an output variable.]

  States | Conditions
  A      | a < 10 | a ≠ 10 | a > 10
  B      | b < 5  | b = 5  | b ≥ 5
  C      | c < 1  | c = 1  | c > 1
  T      | Low    | Norm   | High

The following results are obtained from performing completeness analysis on the conditions shown in the table:

• The conditions out of state A are incompletely specified; condition a = 10 is not considered:
  – (a < 10) ∨ (a ≠ 10) ∨ (a > 10) is not a tautology.
• The conditions out of state B are completely specified:
  – (b < 5) ∨ (b = 5) ∨ (b ≥ 5) is a tautology.
• The conditions out of state C are completely specified:
  – (c < 1) ∨ (c = 1) ∨ (c > 1) is a tautology.

The tabular representation in SCR that most closely relates to the tabular representation in RSML is the state transition table (referred to as a mode transition table in SCR). The general structure of a state transition table is shown in Figure 3.5 (note that this table is not an exact representation of an actual table in SCR, for the same reason as stated earlier). The S_i's represent the current state, the E_{i,j}'s represent events that trigger transitions out of state S_i, and the S_{i,j}'s represent the new state that will be entered if the event E_{i,j} occurs in state S_i. For example, if the current state is S_1 and event E_{1,2} occurs, then the system will enter state S_{1,2}. The table is consistent if the pairwise ANDing of the events results in a contradiction. These tables are not checked for completeness.

[Figure 3.5: A state transition table abstraction.]

  Current state | Event     | New state
  S_1           | E_{1,1}   | S_{1,1}
                | E_{1,2}   | S_{1,2}
                | ...       | ...
  S_n           | E_{n,1}   | S_{n,1}
                | ...       | ...

Figure 3.6 shows a state transition model modeling the transitions from and to the states A, B, and C. The states are annotated with events and transitions. The tabular representation of this model is shown in Figure 3.7. Let the current state be B. The table shows that if the event b ≥ 5 is true, then a transition will occur from state B to state C (note that if b = 5 the specification is nondeterministic; a transition can also occur from state B to state A). The semantics of the other table entries are the same. The specification is consistent if the pairwise conjunction of events associated with a particular state forms a contradiction. The following results are obtained from performing consistency analysis on the events shown in the table:

[Figure 3.6: FSM annotated with events and state transitions.]

• The conditions out of state A are inconsistent; that is, more than one transition out of state A can occur if the event a > 10 or a < 10 occurs:
  – (a ≠ 10) ∧ (a < 10) does not form a contradiction,
  – (a ≠ 10) ∧ (a > 10) does not form a contradiction, and
  – (a < 10) ∧ (a > 10) does form a contradiction.
• The conditions out of state B are inconsistent; that is, more than one transition out of state B can occur if the event b = 5 occurs:
  – (b < 5) ∧ (b = 5) does form a contradiction,
  – (b < 5) ∧ (b ≥ 5) does form a contradiction, and
  – (b = 5) ∧ (b ≥ 5) does not form a contradiction.
• The conditions out of state C are consistent; all pairwise conjunctions form a contradiction.

[Figure 3.7: Table relating states and events to new states.]

  Current state | Event   | New state
  A             | a ≠ 10  | A
                | a < 10  | B
                | a > 10  | C
  B             | b < 5   | B
                | b = 5   | A
                | b ≥ 5   | C
  C             | c > 1   | C
                | c = 1   | A
                | c < 1   | B

3.1.2 RSML

SCR uses multiple tabular representations to represent the behavior of a system. RSML uses a single and slightly different tabular representation.
The single tabular representation used in RSML describes the conditions that must be met before a transition can take place. Consider the state transition model shown in Figure 3.8. The transitions are annotated with guarding conditions and output variable assignments. In RSML, the annotations on transitions have the following interpretation: the expressions preceding the '/' represent guarding conditions on the transitions, and the expressions following the '/' represent the output that is generated if the transition occurs. Also associated with each transition is a trigger event (not shown in the figure). In this example, we assume that all transitions are triggered by the same event. A transition occurs if the trigger event associated with a transition occurs and the guarding condition associated with that transition is true. In RSML the guarding conditions are represented as disjunctive normal form (DNF) tables called AND/OR tables. Figure 3.9 shows the tabular representations of the guarding conditions on transitions out of the states in Figure 3.8 (in actuality, guarding conditions are much more complex than those shown, and the tabular representations consist of multiple rows and columns; Figure 3.10 shows an example of an actual guarding condition).

[Figure 3.8: Transitions annotated with conditions and output variable assignments.]

If the guarding conditions specified for transitions out of a particular state are consistent, then the pairwise conjunction of the guarding conditions is a contradiction. If the guarding conditions are completely specified for transitions out of a particular state, the disjunction of the guarding conditions is a tautology. Thus, in RSML the analyses occur between tables, whereas in SCR the analyses occur within tables. That is, with RSML, tabular representations of guarding conditions are ORed and ANDed together to test for completeness and consistency; Figure 3.11 shows a high-level representation of this. If the disjunction of all tables for a specific state and trigger event is a tautology, then the guarding conditions are complete as specified. If the conjunction of each pair of tables for a specific state and input (trigger event) is a contradiction, then the transition conditions represented by the tables are consistent. Consistency analysis on the tables shown in Figure 3.9 yields the following results:

• The guarding conditions for transitions out of state A are inconsistent; if a ≠ 10, any of the transitions can be taken.
  – TableAA ∧ TableAB is not a contradiction,
  – TableAA ∧ TableAC is not a contradiction, and
  – TableAB ∧ TableAC is a contradiction.

[Figure 3.9: Tabular representations of guarding conditions — one single-column AND/OR table per transition: TableAA: a ≠ 10, TableAB: a < 10, TableAC: a > 10; TableBA: b < 5, TableBB: b = 5, TableBC: b ≥ 5; and the conditions c < 1, c = 1, and c > 1 for the transitions out of state C.]
[Figure 3.10: Guarding condition for transition from Proximate-Traffic to Other-Traffic — an actual RSML AND/OR table, with fields Transition(s): Proximate-Traffic → Other-Traffic; Location: Other-Aircraft ▷ Intruder-Status; Trigger Event: Air-Status-Evaluated-Event; Condition: (AND/OR table); Output Action: Intruder-Status-Evaluated-Event.]

[Figure 3.11: Completeness and consistency checks between tables (completeness example: Table 1 ∨ Table 2 ∨ … ∨ Table n = TRUE; consistency example: Table i ∧ Table j = FALSE for each pair of tables).]

• The guarding conditions for transitions out of state B are inconsistent; if b = 5, two transitions can be taken.
  – TableBA ∧ TableBB is a contradiction,
  – TableBA ∧ TableBC is a contradiction, and
  – TableBB ∧ TableBC is not a contradiction.
• The guarding conditions for transitions out of state C are consistent:
  – TableCA ∧ TableCB is a contradiction,
  – TableCA ∧ TableCC is a contradiction, and
  – TableCB ∧ TableCC is a contradiction.

Completeness analysis on the tables shown in the figure yields the following results:

• The guarding conditions for transitions out of state A are incompletely specified; the condition a = 10 has not been specified:
  – TableAA ∨ TableAB ∨ TableAC is not a tautology.
• The guarding conditions for transitions out of state B are completely specified:
  – TableBA ∨ TableBB ∨ TableBC is a tautology.
• The guarding conditions for transitions out of state C are completely specified:
  – TableCA ∨ TableCB ∨ TableCC is a tautology.

Both analysis approaches discussed above report results that are quite promising [26, 27, 28, 30, 31, 32]. However, both approaches suffer from a similar problem: the analysis methods being applied often generate spurious errors. These spurious errors can be traced to several sources. The next section details the sources leading to spurious errors. Since Heimdahl's work is the foundation for the research described in this dissertation, the bulk of the discussion in Section 3.1.3 focuses on his work.

3.1.3 Common Problem Between Methods: Spurious Errors

In Heimdahl's work, one source of spurious errors was the lack of a type system in the RSML notation [27]. For example, consider the variable T defined earlier. T is an enumerated type that can take on the values Low, Norm, and High. Enumerated types are mutually exclusive and all-inclusive, but the analysis method used does not know the semantics of enumerated types, so it will report conditions as errors that cannot occur. For example, in analyzing transitions for completeness that involve enumerated types, the analysis output would report that no transition out of a certain state is satisfied when:

• (T ≠ Low) ∧ (T ≠ Norm) ∧ (T ≠ High),
• (T = Low) ∧ (T = Norm) ∧ (T ≠ High),
• (T = Low) ∧ (T ≠ Norm) ∧ (T = High),
• (T ≠ Low) ∧ (T = Norm) ∧ (T = High), and when
• (T = Low) ∧ (T = Norm) ∧ (T = High).

Clearly, these situations are impossible for enumerated types and should not be reported as errors. In addition, the analysis method Heimdahl used is unable to reason about the relationships between the predicates making up the conditions (recall that the predicates in the conditions are the objects ultimately being ANDed and ORed together). Heimdahl used a data structure known as a binary decision diagram (BDD) to represent and manipulate the guarding conditions.
BDDs represent expressions symbolically; thus, the semantics of the expressions are ultimately lost. To clarify the problem and show how and why a symbolic representation may lead to spurious errors, we now explain BDDs in detail.

3.1.3.1 Binary Decision Diagrams

Bryant [8] proposed a method that provides a canonical (unique) representation for Boolean functions. The representation uses binary decision diagrams (BDDs). BDDs are directed acyclic graphs (DAGs). In addition to providing a canonical representation for Boolean functions, algorithms exist for efficiently manipulating BDDs [8, 42]; for example, testing two BDDs for equality, ANDing and ORing BDDs, and negating a BDD.

Representing Boolean formulas as reduced directed acyclic graphs, with restrictions on the ordering of the decision variables in the vertices, results in a canonical form (every formula has a unique representation). Testing for equivalence then becomes the task of simply testing whether two graphs match exactly, and testing for satisfiability requires a simple comparison of the graph with the constant function F (FALSE); if the graph reduces to FALSE, it is unsatisfiable. Similarly, checking for a tautology merely requires a simple comparison of the graph with the constant function T (TRUE); if the graph reduces to TRUE, the formula it represents is a tautology.

Semi-formally, the set of vertices in a binary decision diagram consists of non-terminal and terminal vertices. The non-terminal vertices are labeled with the variables of a Boolean formula X = {x_1, x_2, …, x_n}. Terminal vertices are labeled T or F (true or false, respectively). Each variable can be assigned the truth value T or F. The edges out of each vertex are labeled T or F, representing the possible truth values that can be assigned to the variable labeling the vertex. If a truth assignment sets variable x_i to T, then at the vertex labeled x_i the edge labeled T is taken; otherwise the edge labeled F is taken. The value of a formula for a given truth assignment equals the value of the terminal node that is reached. For example, consider the binary decision diagram in Figure 3.12. The Boolean formula represented by this BDD is (x_1 ∧ x_2) ∨ x_4.

[Figure 3.12: Simple BDD example, representing (x_1 ∧ x_2) ∨ x_4.]

A canonical representation is ensured in an ordered BDD (OBDD) by imposing a strict total order on the occurrence of the variables as the graph is traversed from the root to the leaves. On a path from the root to the leaves, for any two variables x_i and x_j, if i < j then x_i must occur before x_j along any path. An ordering of variables in a system must be chosen and maintained for all functions that are to be represented in the system. The ordering of the variables can affect the number of nodes required to denote a given function [8]. For example, consider Figure 3.13; the permutation of arguments for the graph on the left results in only 8 nodes, while the same arguments permuted as on the right of the figure result in a graph with 16 nodes. There are many different algorithms for attempting to find a variable ordering that yields the fewest number of total nodes [5, 8]. The reordering algorithms are based on heuristics. Two such reordering algorithms are the sift reordering method and the window reordering method [5]. Sift reordering moves each variable into different positions in the BDD in an attempt to find the encoding of variables that yields the fewest number of nodes.
The window reordering method moves a group (or window) of variables around, maintaining the same order of the variables within the window. The goal of window reordering is to find a position for the group of variables that yields the fewest number of total nodes. There are algorithms that will find the optimal ordering; unfortunately, they all run in exponential time [8].

The major difficulty Heimdahl encountered in using BDDs is the inaccuracy of the output generated by ANDing and ORing BDDs together to test logical expressions for contradictions (mutual exclusion) and tautologies. The difficulty arises for two reasons: first, the predicates are represented symbolically, and a symbolic representation loses the semantics of the predicates contained in the logical expression; second, information related to the structure of the state machines is not considered in the analysis process, i.e., the analyses are local rather than global, so information about the global state machine is absent from the analysis process. Incorporating all structural information into the analysis process would be difficult and time consuming and may result in state explosion, thus rendering the analysis intractable. The latter problem and its solution will be discussed in more detail later in this document. The problem with a symbolic representation of complex predicates is discussed next.

[Figure 3.13: Example of variable orderings and their effect on graph size: (x_1 ∧ x_2) ∨ (x_3 ∧ x_4) ∨ (x_5 ∧ x_6) versus (x_1 ∧ x_4) ∨ (x_2 ∧ x_5) ∨ (x_3 ∧ x_6).]

3.1.3.2 Problems With Symbolic Representation of Predicates

A symbolic representation of complex predicates means that the semantic information about expressions involving the predicates is lost. With no semantic consideration of the expressions, reasoning about the relationships between predicates is precluded. For example, Figure 3.14 shows a set of expressions assigned to the variables x_1 and x_2, where S is an enumerated type that can be one of a or b (an enumerated-type variable cannot be both values at the same time, and the variable must have one of the values). To demonstrate how BDDs may report inaccurate results because the semantics of the expressions are lost, let x_1 and x_2 represent the expressions S = a and S = b, respectively. The two possible valid truth assignments for x_1 and x_2 are (x_1 = T) ∧ (x_2 = F) and (x_1 = F) ∧ (x_2 = T). Let g be a function of x_1 and x_2 such that g(x_1, x_2) = x_1 ∧ ¬x_2. Let h be a function of x_1 and x_2 such that h(x_1, x_2) = ¬x_1 ∧ x_2. The functions g and h represent the two possible valid truth assignments for x_1 and x_2 described above, and they completely specify the enumerated type S. The BDD representation of each of these functions is shown in the figure. Let f be a function of g and h such that f(g, h) = g(x_1, x_2) ∨ h(x_1, x_2). The BDD representation of f(g, h) is also shown in the figure. If the functions g and h completely specify the truth assignments x_1 and x_2 can have with respect to S, then the function f(g, h) should be equivalent to TRUE, signifying that the Boolean expression represented by f forms a tautology (i.e., is all-inclusive). Since S can only have one of the values a or b, and it must have exactly one of those values, the functions g and h do completely specify the truth assignments x_1 and x_2 can have with respect to S, and f(g, h) should reduce to TRUE.
However, as the graphical depiction of the function reveals, f(g, h) does not reduce to TRUE, and the two paths leading to F (FALSE) signify the truth assignments for x_1 and x_2 that would need to be added to completely specify the possible truth assignments for x_1 and x_2 with respect to S; i.e., these truth assignments represent spurious errors that would get reported as missing conditions.

[Figure 3.14: BDD representation and analysis of enumerated types: x_1 : S = a, x_2 : S = b, g(x_1, x_2) = x_1 ∧ ¬x_2, h(x_1, x_2) = ¬x_1 ∧ x_2, and f(g, h) = g(x_1, x_2) ∨ h(x_1, x_2).]

To see how specifying the invalid truth assignments along with the valid truth assignments results in a BDD that reduces to T (TRUE), consider Figures 3.15 and 3.16. Figure 3.15 shows the Boolean expressions and BDDs representing the paths leading to F in the BDD of f(g, h) in Figure 3.14. The function i(x_1, x_2) = ¬x_1 ∧ ¬x_2 (Figure 3.15) represents the leftmost path leading to F in the BDD representing f(g, h). The function j(x_1, x_2) = x_1 ∧ x_2 (Figure 3.15) represents the rightmost path leading to F in the BDD representing f(g, h). The BDD representing the function f(i, j) = i(x_1, x_2) ∨ j(x_1, x_2) shows the result of ORing the two invalid truth assignments for x_1 and x_2 with respect to S. The function f′(g, h, i, j) = f(g, h) ∨ f(i, j), shown in Figure 3.16, is the disjunction of all combinations of assignments that the enumerated type S can have, including the invalid assignments where S equals both a and b at the same time, and where S has no value at all (neither a nor b). As expected, the BDD representation of f′ reduces to TRUE, signifying that the Boolean expression represented by f′ is a tautology (i.e., all combinations of assignments to the enumerated variable S have been included). But the truth assignments given to x_1 and x_2 in the functions i and j are not valid truth assignments for the enumerated type S; enumerated types are all-inclusive and mutually exclusive. Knowledge about the all-inclusive and mutually exclusive nature of enumerated types, and post-processing of the results gleaned from the BDD representing f(g, h), can be used to eliminate these falsely reported missing truth assignments that would otherwise be reported.

[Figure 3.15: BDD representation and analysis of invalid enumerated type assignments: i(x_1, x_2) = ¬x_1 ∧ ¬x_2, j(x_1, x_2) = x_1 ∧ x_2, and f(i, j) = i ∨ j.]

[Figure 3.16: BDD example of tautology: f′(g, h, i, j) = f(g, h) ∨ f(i, j).]

Heimdahl relied on post-processing the results from the analysis program, before presenting them to the user, to alleviate the spurious error reports related to enumerated types. Heimdahl augmented his BDDs with pointers to the actual constituent components of the Boolean expressions. This allowed him to perform post-processing on the BDDs generated from the conjunction and disjunction of expressions. Paths containing invalid truth assignments for enumerated types were not included in the analysis reports. The post-processing method involves examining the actual expressions at each node as the augmented BDD is traversed, and keeping track of the values being assigned to the enumerated types. If, on a single path during the traversal, two elements of an enumerated type are TRUE, or if all values of an enumerated type are FALSE, that path is not checked further, and the conditions represented by it and the subpaths generated from it are ignored.
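The effect of this post-processing can be seen in a small self-contained sketch. Plain truth-table enumeration stands in for BDD path traversal here; this is an illustration of the idea, not Heimdahl's implementation:

```python
from itertools import product

# x1 stands for S = a and x2 for S = b, where S is an enumerated type
# over {a, b}; g and h encode the two valid assignments (Figure 3.14).
g = lambda x1, x2: x1 and not x2
h = lambda x1, x2: (not x1) and x2
f = lambda x1, x2: g(x1, x2) or h(x1, x2)

# Viewed purely symbolically, f is not a tautology ...
print(all(f(*v) for v in product([False, True], repeat=2)))   # False

# ... and the failing assignments are exactly the invalid encodings
# of S (neither value true, or both values true):
print([v for v in product([False, True], repeat=2) if not f(*v)])
# [(False, False), (True, True)]

# Post-processing: discard assignments that violate the enumerated-type
# semantics (exactly one value must be true); on the remaining valid
# assignments f always holds, so no missing condition is reported.
valid = [v for v in product([False, True], repeat=2) if sum(v) == 1]
print(all(f(*v) for v in valid))                               # True
```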
This same general form of post-processing can be used to eliminate paths containing simple equality and inequality contradictions such as (x > 10) ∧ (x < 10) or (x = 10) ∧ (x = 5).

There are other types of expressions that will lead to spurious error reports when they are represented symbolically, but for which the post-processing method discussed above is not useful. Thus, the inability of the analysis method to reason about the underlying relationships between predicates is a more troublesome matter. Complex predicates containing arithmetic expressions and mathematical functions are especially troublesome to deal with. Consider the following three predicates:

    a > 10                      (3.1)
    b > 0.55                    (3.2)
    abs(a * b) ≤ 0.00278        (3.3)

These predicates are represented symbolically in Figure 3.17(a) with the Boolean variables x_1, x_2, and x_3. In analyzing transitions out of a state for completeness, where the guarding conditions on those transitions involve these complex predicates, the following spurious error is reported as a condition that was not specified in the requirements specification and that needs to be specified:

    (a > 10) ∧ (b > 0.55) ∧ (abs(a * b) ≤ 0.00278)        (3.4)

[Figure 3.17: Symbolic representation of non-trivial arithmetic expressions and mathematical functions, and spurious error reports: (a) x_1 : a > 10, x_2 : b > 0.55, x_3 : abs(a * b) ≤ 0.00278; (b) x_4 : f(y) > z, x_5 : g(y) < z, x_6 : g(y) > f(y).]

When the semantics of the expressions represented symbolically by x_1, x_2, and x_3 are considered, it is clear that all three terms cannot be true at the same time; the lower bound on abs(a * b) is 5.5, which is never less than or equal to 0.00278. This obvious contradiction cannot be detected with Heimdahl's augmentation and is reported as a missing condition, i.e., a condition that has not been considered and that needs to be added as a condition in one of the tables. As another example, assume that the following predicates appear in the guarding conditions of transitions out of a particular state:

    f(y) > z        (3.5)
    g(y) < z        (3.6)
    g(y) > f(y)     (3.7)

These predicates are represented symbolically in Figure 3.17(b) with the Boolean variables x_4, x_5, and x_6. The current analysis method would report that there is no transition out of the state when

    (f(y) > z) ∧ (g(y) < z) ∧ (g(y) > f(y))        (3.8)

Again, when the semantics of x_4, x_5, and x_6 are considered, it is obvious that the conjunction of these predicates is a contradiction and was not included in any guarding conditions, since it cannot occur in the first place; given the first two predicates, the function g(y) can never be greater than the function f(y). But analysis using BDDs precludes reasoning about mathematical relationships between predicates and, therefore, precludes detection of such contradictions. Unfortunately, for non-trivial arithmetic expressions and mathematical functions, post-processing using the method discussed above will not work. Non-trivial mathematical expressions in the predicates pose one challenge to developing an analysis procedure that will generate as few spurious errors as possible, operate in an efficient manner, and be amenable to automation. Nonlinear multiplication, such as in the example above (Equation 3.4), is especially difficult to manage, since arithmetic with nonlinear multiplication is undecidable [51] and so is generally impossible for automated procedures to manage.
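Contradictions such as Equation 3.4 are precisely what arithmetic decision procedures detect. As a purely illustrative sketch (the Z3 solver used here postdates this work and merely stands in for such a procedure), a solver refutes the conjunction immediately, even though it is nonlinear:

```python
# Is the "missing condition" of Equation 3.4 actually satisfiable?
from z3 import Real, Solver, If, unsat

a, b = Real('a'), Real('b')
prod = a * b
s = Solver()
s.add(a > 10)                                   # Equation 3.1
s.add(b > 0.55)                                 # Equation 3.2
s.add(If(prod >= 0, prod, -prod) <= 0.00278)    # Equation 3.3: abs(a*b)
print(s.check() == unsat)  # True: the reported condition is spurious
```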
3.1.4 Summary

Both Heitmeyer's and Heimdahl's analyses are purported to be efficient in terms of speed. In terms of accuracy, both methods suffer from spurious error reports. Efficiency and accuracy in the context of this dissertation are defined in terms of speed and in terms of the number of true errors and the number of spurious errors reported. An analysis process is efficient if it is fast enough to be used on a day-to-day basis; i.e., the analysis completes in seconds or minutes, rather than hours or days. An analysis process is accurate if it reports all true errors and as few spurious errors as possible. The smaller the ratio of spurious errors to true errors, the more accurate the analysis report is. Obviously, some tradeoffs are required between efficiency and accuracy.

One difficulty in achieving more accurate analysis output relates to the complexity of the predicates used to represent the transition conditions. Examination of Heitmeyer's work [32] and the software requirements for the A-7E aircraft [33], and personal communication with a colleague of Heitmeyer, suggest that the conditions being analyzed in Heitmeyer's work are less complex than those being analyzed in Heimdahl's work. The reduced complexity is because requirements in SCR are specified at a higher level of abstraction than requirements in RSML. Heitmeyer et al. reported in [30, 32] that of 44 variables used in the condition tables, 33 were of type Boolean or were converted to Boolean manually; those that were converted to Boolean were either modes (states) or enumerated types with two values. The remaining 11 variables were either continuous or enumerated with more than two values and were used in relational expressions; the relational expressions were manually converted to Boolean variables as well. The conditions Heimdahl analyzed, on the other hand, can be quite complex, consisting of arithmetic expressions and mathematical functions that cannot be converted to Boolean without losing important details. We know that arithmetic expressions and mathematical functions pose a significant challenge to the analysis process. In some system domains, such as process control systems, mathematical relationships are unavoidable and must be expressed in the requirements if the requirements are to provide a true representation of the system under development. Therefore, it is important to be able to both model and analyze these relationships.

Another source of spurious errors in both analysis methods is that the analysis methods used may abstract away many details related to the structure of the system as a whole; i.e., the analysis is local rather than global. This abstraction precludes spurious errors related to system structure from being eliminated. For example, an error may be reported stating that an inconsistency exists between two transitions, when examining the system reveals that, in fact, both transitions cannot be satisfied at the same time (i.e., an expression in the first guarding condition may require that one subsystem be in a particular state, while the second guarding condition may require that another subsystem be in a particular state, when the structure of the system is such that both conditions cannot be satisfied at the same time). Including more detail in the analysis process could help alleviate some of the problems with spurious errors. Model checking, discussed in the next section, does this to some extent.

3.2 Model Checking Software Requirements

Model checking has traditionally been applied to hardware verification. Recently, several groups have investigated applying model checking to software systems.
Atlee and Gannon have applied model checking to the requirements of a cruise control system and a water-level monitoring system, and showed how model checking could be used to verify safety properties for event-driven systems [4]. In their work, they formalized SCR-style requirements [30, 32] and transformed the resulting formal specification into a state-based structure that a CTL model checker can analyze. Safety assertions are expressed as CTL formulas, and the model checker is used to determine if a formula holds in the model. If the formula held in the model, they could conclude that the safety property represented by the formula also held in the system requirements.

More recently, Sreemani and Atlee presented a case study of model checking the non-trivial A-7E requirements document for specific properties that the system requirements should satisfy [57]. The case study is intended to demonstrate the scalability of model checking software requirements. They implemented a program that translates an SCR requirements specification into an equivalent SMV (Symbolic Model Verifier) [42] specification, and used SMV to verify the required properties. The model is an abstraction of the original specification. The authors state that "the SCR methodology requires the requirements writer to create abstractions of the (potentially infinite) environmental state space by determining which predicates on values of environmental variables affect mode transitions". The authors further state that the abstractions the SCR methodology already requires ultimately provide an easy method for obtaining abstractions; i.e., since the SCR methodology already requires abstractions, it is easy to determine additional abstractions for the SMV model. Most of the conditions are represented as Boolean variables, but if two or more conditions are related such that one and exactly one condition can be true at all times, the conditions are represented as enumerated-type variables [57]. To limit the state space explosion problem, SMV uses binary decision diagrams to represent Boolean functions; thus, SMV can check whether a property holds in a set of states rather than enumerating the entire state space and checking whether the property holds in every state. Recall (Section 3.1.3.1) that the size of a BDD is sensitive to the order of variables in the BDD. In Sreemani and Atlee's approach, the BDD is constructed so that variables in the BDD occur in the order in which they are declared. In one case, they were not able to reach a solution after executing for two weeks, but when they reordered the variables and reran the analysis, a solution was generated in about 15 seconds. They experimented with several different variable orderings; they manually changed the order of the variables by rearranging the order of the variable declarations.

Wing and Vaziri-Farahani [60] have also investigated ways in which model checking can be applied to software systems. They apply a variety of abstraction techniques to a software system to create a finite state machine model of the system. The model and a specification expressed in CTL are input to a model checker, SMV, and the model checker determines if the specification is satisfied in the model. The abstractions are manually determined based on the formula being verified, or by exploiting domain-specific or application-specific knowledge [57, 60]. Determining the abstractions manually requires much human thinking and case-specific knowledge.

Anderson et al.
have investigated the feasibility of model checking large software specifications [1]. They translated a portion of the TCAS II¹ (Traffic Alert and Collision Avoidance System II [40]) requirements specification into a form acceptable to a model checker and used the model checker to analyze for a number of dynamic properties of the system. Since completeness and consistency properties can be expressed as CTL [13] formulas, model checking can be used to check the translated specification for these properties. Model checking for transition completeness and consistency is not as efficient (in terms of speed) as the analysis method Heimdahl used, but the results may contain fewer spurious errors, since their approach is able to handle addition and subtraction, whereas Heimdahl's symbolic approach is not.

¹TCAS II is an airborne collision-avoidance system required on all commercial aircraft capable of carrying 30 or more passengers through U.S. airspace.

Most recently, Bharadwaj and Heitmeyer [6] have integrated the Spin model checker [34] into the SCR toolset so users can establish logical properties of an SCR specification. To limit the state explosion, they verify abstractions of the original requirements. They describe two types of reductions (abstractions) that are derived using the formula to be verified and special attributes of SCR specifications. The first reduction involves eliminating irrelevant entities in the original model; irrelevant entities are entities that are not needed in the analysis since they do not appear in the formula being checked. The second reduction involves using more abstract representations of monitored variables. Essentially, the abstraction involves eliminating certain monitored variables, from the identified entity set, that are causing state explosion. The reductions are applied manually, and the abstract model produced is then analyzed automatically by the SCR toolset using Spin. The authors state that the approach they describe can be used to verify properties of a complete SCR specification. The approach allows variables to range over arbitrary (finite) domains such as enumerated values and integer subranges. They present two theorems and a corollary that guarantee that an invariant that holds for the abstract model also holds for the full specification. Currently, the second reduction principle they apply is incomplete, since there are "extra" transitions in the abstract state machine that generate states that are not reachable in the original specification.

Since all of the methods described in this section rely on abstraction to generate a system model that can be analyzed in a computationally tractable manner (i.e., to avoid the state explosion problem), spurious errors may be reported that would be eliminated if certain abstractions were not made. Currently, it is left up to the analyst to determine which error reports represent true errors and which error reports represent spurious errors. The problem of spurious errors is common to all the analysis methods described in this chapter. To summarize, spurious errors in the analysis output are conditions that are reported to the analyst as errors, but the reported conditions cannot actually be satisfied in the system being analyzed. In other words, there is some augmenting information that the analysis procedures need in order to determine that certain conditions or expressions are not true errors.
As we showed in Section 3.1.3.2, if we augment the analysis process with the appropriate information, previously reported spurious errors can be eliminated. The latter statement is true for any analysis method that is reporting spurious errors, since spurious errors are reported because some relevant details have been abstracted out of the system model. If one augments the analysis process with the relevant information, the spurious errors are no longer reported. In essence, in each spurious error report there is some undetected contradiction between the constituent components of the reported expression that actually precludes the reported error from being a true error. During the course of the research described in this dissertation, we identified several classes of undetected contradictions that lead to spurious errors, and classified the spurious errors according to the undetected contradictions that lead to them. The next section discusses the different types of spurious errors that can appear in the analysis output, the undetected contradictions that lead to the spurious errors, and the analysis techniques, if any, that can eliminate particular classes of spurious errors from the analysis output.

3.3 Spurious Errors Revisited

After numerous case studies and experiments, we identified four classes of spurious errors. We classified the spurious errors according to the undetected contradictions that cause them.

1. Spurious errors involving simple and obvious contradictions between two predicates, such as enumerated-type predicates and predicates involving simple arithmetic expressions such as those shown in Figure 3.18.
2. Spurious errors involving three or more predicates containing related linear arithmetic expressions.
3. Spurious errors involving non-linear expressions.
4. Spurious errors related to the structure of the state machine, or spurious errors related to information about the environment in which the system will operate, that is not part of the specification.

  Inhibited --> Inhibited conflicts with Inhibited --> Not-Inhibited if
    (Own_Tracked_Alt() - Ground_Level()) > 1200  : T ;
    Own_Tracked_Alt() <= (1200 + Ground_Level()) : T ;

  No transition out of Inhibited is satisfied if
    (Own_Tracked_Alt() - Ground_Level()) > 1200  : F ;
    Own_Tracked_Alt() <= (1200 + Ground_Level()) : F ;

Figure 3.18: Spurious error reports generated for the state Inhibited.

Different analysis techniques are able to eliminate different classes of spurious errors, but no single analysis technique can eliminate all spurious errors. Spurious errors of the types involved in the first two classes listed above can be eliminated by augmenting the analysis process with decision procedures. As we will show in Chapter 4, Section 4.2.4, the symbolic analysis component of our technique, augmented with decision procedures for enumerated-type predicates, can easily eliminate spurious errors involving enumerated-type predicates. Tools using reasoning components augmented with decision procedures, such as PVS, can easily eliminate all types of class 1 and class 2 spurious errors. As we mentioned in Chapter 2, Section 2.2.3.4.1, PVS can also eliminate some spurious errors involving simple non-linear inequalities.
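To make the class 1 case concrete, note that the two guards in Figure 3.18 are each other's logical negation over the reals, so both reports are necessarily spurious. A sketch with a linear-arithmetic decision procedure (again using Z3 only as an illustrative stand-in for the decision procedures discussed above) confirms this:

```python
# Both reports in Figure 3.18 are refutable by linear arithmetic: the
# two guards are complements, so they can be neither simultaneously
# true (no conflict) nor simultaneously false (no missing transition).
from z3 import Real, Solver, Not, unsat

alt, ground = Real('Own_Tracked_Alt'), Real('Ground_Level')
g1 = (alt - ground) > 1200
g2 = alt <= 1200 + ground

s = Solver()
s.add(g1, g2)                 # the reported inconsistency
print(s.check() == unsat)     # True: spurious

s = Solver()
s.add(Not(g1), Not(g2))       # the reported incompleteness
print(s.check() == unsat)     # True: spurious
```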
In general, no analysis technique can eliminate class 4 spurious errors, since this class of spurious errors results because information that is required by the analysis process has been abstracted from the model, or is related to some environmental constraint that was not modeled. To eliminate spurious errors of class 4, one must identify the relevant information that is missing from the analysis process and augment the analysis process with this information.

3.4 Summary

In this chapter we discussed analysis of state-based requirements for completeness and consistency, and model checking software requirements for application-dependent properties. Work in analyzing requirements for the application-independent properties of completeness and consistency forms the foundation for the research described in this dissertation.

All of the analysis methods described in this chapter suffer from the same general problem: spurious errors resulting from a model that is an abstraction of the original requirements specification. Requirements specifications describe all acceptable system implementations. Existing verification methods do not scale well for such requirements, since the requirements are too detailed [6]. To avoid the state space explosion problem and make the desired analysis tractable, an abstraction of the original specification must be modeled for the analysis procedures to work with. Details contained in the original specification are abstracted away. Different analysis methods rely on different types of abstractions, but the lost details may include: (1) the semantics of the individual predicates comprising the transition conditions, and the semantics of the relationships between the individual predicates comprising the transition conditions; and (2) information related to the structure of the original state machine used to represent a model of the system, or information related to the environment that the system will operate in. These abstractions (and the underlying methods used for the analysis) make the analysis approach pessimistic (conservative): all true errors are reported, but spurious errors may be reported as well. The number of spurious errors reported can make it difficult for an analyst to find and correct the true errors in the specification.

We identified four classes of spurious errors during the course of our investigations. Different analysis techniques are able to eliminate different classes of spurious errors. For example, there are analysis methods, such as theorem proving (discussed in Chapter 2), that can be used in place of abstracting away the details described in (1) above. Analysis methods such as theorem proving can also be used in place of abstracting away the details described in (2) above. However, in general, including all information contained in the original requirements specification in the analysis process will render the analysis intractable. In truth, many of the details are not required for the analysis to be performed with satisfactory results (i.e., so that the analyst is able to easily find and correct the true errors that show up in the final analysis report). Our investigations revealed that there are generally only a few details that have been abstracted away that are causing all or most of the spurious errors to be reported in the analysis output. The analysis methods described in this chapter and in the previous chapter do not help the user identify the relevant information to include in the analysis process.
The next two chapters describe an iterative and integrative approach that does not lose the details related to (1) above, and that assists the analyst in identifying the missing details related to the structure of the state machine or its environment that are causing the majority of the spurious errors being reported. Upon completion of the iterative and integrative analysis process, the analyst is presented with an error report that is easily managed, in terms of allowing the analyst to locate and correct the true errors in the original specification.

Chapter 4

Analysis Method

Static analysis methods suffer from a common problem: spurious errors in the analysis output. Spurious errors are caused by a lack of information in the underlying analysis. The loss of information occurs when we create a model of the original specification that is suitable for analysis. An abstraction of the original specification must be created, since analysis most often becomes intractable if all details about a system and its environment are included in a model. Not all analysis methods are the same; different levels of abstraction can be applied. We identified four classes of spurious errors and found that different analysis methods could deal effectively with different classes of spurious errors. We also found that some classes of spurious errors are difficult for any analysis method to deal with efficiently and automatically. In general, an analysis method that can check a more detailed model will generate more accurate analysis reports.

In the previous two chapters, we showed that both BDD analysis and PVS analysis work well for certain problems, but it is clear that neither method alone can always provide the analyst with accurate analysis output in a timely and automated manner. We define accurate analysis output as output in which the ratio of spurious errors to true errors is small. We want to eliminate as many spurious errors as possible from the analysis report, so that the analyst can easily find and correct the true errors in a specification. Furthermore, we want the analysis process to be fast enough to be used on a day-to-day basis, and we want the process to be automated. In addition, we want the analysis process to be applicable to real-world problems and not limited to state-based requirements; in other words, the underlying method of checking state-based requirements for completeness and consistency can be generalized to checking any large logical expressions for tautologies and contradictions.

We saw that there is a trade-off between the speed with which the analysis completes and the accuracy of the analysis output. Generally, the faster the analysis, the less accurate the output. This is directly related to the level of abstraction used to represent the model. The less detail included in the model, the faster the analysis can be, but the less accurate. For example, symbolic analysis methods, such as those that rely on BDDs, are fast but may generate many spurious errors. The accuracy of the analysis report also depends on the abstractions made in relation to the properties of interest to the analyzer. If the abstractions do not adversely affect the analyzer's ability to identify and eliminate the contradictions leading to spurious errors, then the analysis output in relation to the properties of interest will be accurate. Of course, checking to see if the model satisfies additional properties may lead to spurious error reports.
In addition, there is a tradeoff between efficiency and accuracy on the one hand, and automation on the other. Generally, the less detail an analysis method requires, the faster it will complete, the less accurate the output will be, and the more automated the analysis method will be. The more detail an analysis method requires and/or uses, the slower it will complete and the more manual intervention will be required; i.e., the less automated the method will be. Since no single analysis method used alone will allow us to satisfy all of our goals, we developed an iterative and integrative approach that combines the strengths of the individual analysis methods (graph-based symbolic analysis using BDDs, and theorem proving using PVS) and circumvents their weaknesses.

Note that the integrative approach in itself is not unique; several others have also developed integrative analysis techniques. Most of these techniques have been applied in the area of hardware verification or the analysis of concurrent programs. Joyce and Seger [39] developed an integrated approach to formal hardware verification that combines BDD-based symbolic simulation techniques with interactive theorem proving. Young et al. [64, 66, 67] have integrated static concurrency analysis with symbolic execution to detect anomalous synchronization patterns in concurrent Ada programs. Havelund and Shankar [23] describe a series of protocol verification experiments that combine theorem proving and model checking. Chan et al. [11] discuss integrating a constraint solver with a BDD manipulator to handle nonlinear constraints. It is clear that an integrative approach to analysis is essential, since all individual analysis techniques suffer from limitations of one form or another. The integrative approaches represent attempts to capitalize on the strengths of the individual techniques and circumvent their weaknesses.

Our integrative analysis approach is based on trying the simple, straightforward methods first and, if these methods fail, applying the more complex and computationally expensive methods. The iterative analysis is based on identifying the information that was abstracted out of the model that is leading to spurious errors, feeding the information back into the model, and re-running the analysis. Since the spurious errors are the result of missing information (abstraction) in the model, identifying the missing information and adding it back into the analysis process should make the spurious errors disappear. In other words, if all of the missing information is identified, and the original guarding conditions are consistent (complete), then augmenting the analysis process with the missing information will yield the correct output of FALSE (TRUE), showing that the guarding conditions are consistent (complete).

Traditionally, the analyst must manually inspect the analysis report to locate any true errors that may exist. If the analysis output contains many spurious errors, the analyst's task is difficult, error-prone, and time consuming. Our method is unique in that it helps an analyst identify the missing information so that the spurious errors can be eliminated and the true errors can be more readily identified. In addition, the technique we use to identify the missing information is unique. In this chapter, we refer to the missing information as augmenting information, since its inclusion in the analysis process augments the accuracy of the analysis output by eliminating spurious errors.
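The effect of augmenting information can be illustrated with a small sketch. Truth-table enumeration over opaque predicate variables stands in for the symbolic component; the predicates and the single added fact are taken from the state-B example of Chapter 3, and nothing here is the actual tool:

```python
from itertools import product

# Predicates treated as opaque Boolean variables, as the symbolic
# component sees them (guards out of state B in Figure 3.8).
preds = ['b<5', 'b=5', 'b>=5']

def assignments(facts=()):
    """Enumerate truth assignments consistent with the given facts."""
    for vals in product([False, True], repeat=len(preds)):
        v = dict(zip(preds, vals))
        if all(fact(v) for fact in facts):
            yield v

# Without augmenting information, one completeness "gap" is reported:
# the assignment making all three predicates false.
print(sum(1 for v in assignments() if not any(v.values())))   # 1

# Augmenting information: b < 5 and b >= 5 are exhaustive.  Feeding
# this single abstracted-away fact back in removes the spurious gap.
exhaustive = lambda v: v['b<5'] or v['b>=5']
print(sum(1 for v in assignments([exhaustive])
          if not any(v.values())))                             # 0
```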
In this chapter we describe the method we developed to analyze disjunctive and conjunctive expressions to see if they are satisfiable or mutually exclusive. The method integrates symbolic manipulation techniques and theorem proving techniques: binary decision diagrams (BDDs) and the Prototype Verification System (PVS). An application of the process developed in this research is the analysis of state-based requirements for completeness and consistency. To demonstrate the scalability of our analysis process to real-world problems, we applied the technique to a large real-world avionics specification, specified in RSML, to check parts of the specification for completeness and consistency. The results of this application are reported in Chapter 6.

Chapter 4 is organized as follows: Section 4.1 provides a high-level description of the analysis process and describes the process using some typical scenarios we encountered in the analysis of state-based requirements for completeness and consistency. Section 4.2 describes the specific tools our technique uses, and Section 4.3 describes how to add augmenting information to the analysis process. An analyst ultimately initiates and guides the overall analysis process; therefore, Section 4.4 discusses the iteration and integration options available to the analyst during the analysis process.

4.1 General Analysis Process

Figures 4.1 and 4.2 show a high-level view of the general analysis process. The inputs to the analysis process for mutual exclusion are two logical expressions. The analysis process forms the conjunction of the two expressions. If the expressions are mutually exclusive, the conjunction will reduce to FALSE. If the expressions are not mutually exclusive, i.e., the conjunction of the expressions does not reduce to FALSE, the analysis will report to the analyst the conditions that are satisfiable by both of the original expressions at the same time (Figure 4.1). The analyst is then left with the task of determining how to modify the two original logical expressions so they are mutually exclusive.

[Figure 4.1: High-level view of analysis to check for mutual exclusion: the conjunction of two logical expressions either reduces to FALSE (mutually exclusive) or yields the satisfiable expressions (not mutually exclusive).]

The input to the analysis process for satisfiability may be one or more logical expressions (Figure 4.2). The analysis process forms the disjunction of the logical expressions. If the disjunction of the expressions is satisfiable, then the disjunction reduces to TRUE. If the disjunction of the expressions is not satisfiable, i.e., the disjunction does not reduce to TRUE, the analysis will report to the analyst the logical expressions that are not satisfiable by any of the original logical expressions. The analyst can then determine the logical expressions that need to be added to make the disjunction of the expressions a tautology.

[Figure 4.2: High-level view of analysis to check for satisfiability: the disjunction of the logical expressions either reduces to TRUE (satisfiable) or yields the unsatisfiable expressions.]

The symbolic component of our method is fast and fully automated, but may generate many spurious errors. The reasoning component is slower and may require costly and time consuming manual intervention, but generates more accurate analysis reports. Our overall analysis process takes advantage of the strengths of the symbolic and reasoning components and circumvents their weaknesses.
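The overall strategy can be summarized as a runnable toy, shown below. The "symbolic" pass treats each predicate as an opaque Boolean variable (as a BDD over predicate variables does) and therefore over-reports; the "reasoning" pass re-checks each report against the arithmetic semantics. Z3 stands in for the reasoning component here, and all names are illustrative, not the actual tool interfaces:

```python
# Toy of the two-stage integration strategy -- not the actual tools.
from z3 import Real, Solver, And, unsat

b = Real('b')
guards = {'b<5': b < 5, 'b=5': b == 5, 'b>=5': b >= 5}  # state B, Fig. 3.8

def symbolic_pass(names):
    # Over free Boolean variables, p AND q is satisfiable for any two
    # distinct predicates, so every pair is reported as a conflict.
    return [(p, q) for i, p in enumerate(names) for q in names[i + 1:]]

def reasoning_pass(reports):
    # Re-check each symbolic report with an arithmetic decision
    # procedure; only truly satisfiable conjunctions survive.
    confirmed = []
    for p, q in reports:
        s = Solver()
        s.add(And(guards[p], guards[q]))
        if s.check() != unsat:
            confirmed.append((p, q))
    return confirmed

reports = symbolic_pass(list(guards))
print('symbolic component reports:', reports)            # three pairs
print('after reasoning component :', reasoning_pass(reports))
# [('b=5', 'b>=5')] -- the one true inconsistency survives
```

The design point is the one argued in the text: the cheap, fully automated pass runs first and filters nothing semantically; the expensive pass is applied only to what the cheap pass reports.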
The strengths of the symbolic component are its speed and automation. The symbolic component is simple and straightforward to use, and can deal effectively with some logical expressions. In particular, symbolic manipulation can deal with logical expressions where contradictions exist between structurally equivalent predicates or enumerated types. The major weakness of the symbolic component is that it cannot reason about the relationships between predicates, since the semantics of the predicates are lost. The symbolic component's inability to reason about relationships between predicates may lead to spurious errors (if there are many interdependencies between the expressions). The strength of the reasoning component is its ability to reason about relationships between the predicates. But this strength means that more details are modeled. Modeling more details and reasoning about the details tends to slow down the analysis process, and in some cases the analysis may fail to halt (depending on the size and complexity of the expressions being analyzed). In addition, a reasoning technique generally requires more costly and time consuming user intervention than simpler techniques; thus, the reasoning component is less automated.

Figure 4.3 shows the general analysis process and the integration of the symbolic and reasoning components. The logical expressions to be analyzed are stored in some machine-readable form. The analysis process begins by first parsing the logical expressions and converting them into an internal representation. If the analyst chooses to perform symbolic analysis, the tool performs symbolic analysis using BDDs and generates two outputs: a BDD node profile (described in Section 4.2.2), and an analysis report in disjunctive normal form in a tabular format. The latter output of the symbolic analysis component may be samples of the resulting disjunctive normal form expression; the analyst may specify samples if the resulting disjunctive expression is too large to report in its entirety.

[Figure 4.3: General analysis process and integration of symbolic and reasoning components: logical expressions are parsed into an internal representation, which feeds either the BDD analysis (producing a BDD node profile and a DNF analysis report) or a PVS translator (producing a PVS specification for analysis with PVS); the DNF output of the BDD analysis can also be fed to the PVS translator.]

If the analyst chooses to create a PVS specification, the tool converts the internal representation of the logical expressions into a PVS specification so the analyst can use PVS to analyze the expressions. The output of the PVS analysis is either a finished proof (no unprovable subgoals), or a report of unprovable subgoals. The analyst can also choose to create a PVS specification for the disjunctive normal form expression output from the symbolic analysis component (this option is shown in the figure by the data flow from the BDD analysis process to the PVS translator process). The next section describes the general analysis process as applied to the analysis of software requirements for completeness and consistency.

4.1.1 Applying the Analysis Approach to Analysis of Software Requirements for Completeness and Consistency

The outcome of any analysis is an error report of some form that is presented to the analyst.
Error reports from our analysis process are presented to the analyst as Boolean expressions in disjunctive normal form. In terms of analyzing requirements for completeness and consistency, each report represents either (1) an incompleteness, that is, a condition the requirements do not handle, or (2) an inconsistency, that is, a condition for which two or more responses are specified. Spurious errors are manifested as conditions where contradictions between conjuncts in the disjunctive normal form expression comprising the error report are not detected by the analysis tool; if the contradictions were detected, then the conditions would not be reported as errors in the analysis output.

Figure 4.4 shows the same general integrative analysis process as shown previously in Figure 4.3, but for the specific application of analyzing RSML specifications for completeness and consistency. A machine-readable RSML specification is parsed, and an internal representation of the state machine is generated.

[Figure 4.4: General integrative analysis process applied to analysis of RSML requirements for completeness and consistency: a machine-readable RSML specification is parsed into an internal representation of the state machine, which feeds the BDD analysis and the PVS translator as in Figure 4.3.]

For the purposes of this discussion, we assume symbolic analysis is applied first. If the symbolic analysis reports that the analyzed expressions are complete (consistent), then we are done. If the symbolic analysis reports a reasonable number of errors (i.e., a number of errors that an analyst can manually inspect without too much trouble), the analyst manually inspects the report to look for the true errors. If the number of errors reported is unmanageable, we can try the reasoning component on the original expressions. If the reasoning component reports that the expressions are complete (consistent), then we are done. If the reasoning component generates a manageable number of error reports, the analyst can review them manually. If the reasoning component generates an unmanageable number of error reports, or fails to halt, we assume that some important details needed during the analysis process were abstracted away and that a majority of the reported errors are spurious. We can then apply the method described in Chapter 5 to identify the augmenting information. In general, the tools can be applied in whatever order the analyst determines best fits the particular problem at hand.

To give the reader an idea of how some typical analyses may proceed, and of some of the decisions that may need to be made during the analyses, we present several possible usage scenarios. These scenarios represent typical situations we encountered during our experiments.

1. The ideal case is when the analyst applies symbolic analysis to a logical expression and the result shows the logical expression to be complete or consistent. In this case, the analyst is done, and the analysis process was fully automated and fast (generally, on the order of seconds or minutes). No spurious errors were generated.

2. The analyst applies symbolic analysis to a logical expression and the results show under 75 errors. It took only seconds to generate the analysis report, and the process was fully automated. From here, there are two choices:

(a) The analyst manually inspects the analysis report and finds five true errors in the specification.
The other 70 errors are spurious errors. It may take the analyst a few hours to identify the true errors mixed in with the spurious errors; every error reported must be manually inspected, since all 75 may be true errors.

   (b) The analyst automatically generates a specification of the symbolic analysis output suitable for analysis by the reasoning component, and the 75 error reports are analyzed by the reasoning component. The reasoning component yields five error reports. The analyst inspects the error reports and finds they represent true errors in the specification. It takes the analyst a few minutes to automatically convert the symbolic analysis output to a specification that the reasoning component can analyze. It takes the reasoning component a few seconds to identify and report the five true errors. The analyst takes several minutes to determine that the five reported errors are true errors.

3. The analyst applies symbolic analysis to a logical expression and the results show over 1000 errors. There are too many error reports for the analyst to manually inspect, but not too many for PVS to handle. The analyst automatically generates a specification of the symbolic analysis output in a form that the reasoning component can analyze, and the error reports are analyzed by the reasoning component. The reasoning component runs for under 30 seconds and reports that the logical expressions are complete or consistent. The symbolic analysis took under a minute to complete, and it took only a few minutes for the analyst to automatically convert the symbolic output to pass on to the reasoning component.

4. The analyst applies symbolic analysis to a logical expression and the results show tens of thousands of errors. The symbolic component takes a couple of minutes to complete the analysis and report the results. The analyst automatically generates a specification of the original logical expression that the reasoning component can analyze, and the reasoning component is invoked to analyze the logical expression. Two potential results are:

   (a) The reasoning component runs for a few minutes and reports that the logical expressions are complete or consistent.

   (b) The reasoning component runs for a few minutes and reports thirty errors. The analyst manually inspects the errors and finds they all contain the same contradiction; this contradiction is one that the reasoning component could not detect because the information about the contradiction was missing from the reasoning component's specification. The analyst required only an hour to identify the contradiction and show that all reported errors are spurious.

5. The analyst applies symbolic analysis to a logical expression and the results show millions of errors (on the order of tens or hundreds of millions). The analysis takes several minutes to complete. The analyst automatically creates a specification of the original expression that the reasoning component can analyze and invokes the reasoning component on the specification. The reasoning component runs for hours without reporting any results, and the analyst aborts the analysis. The analyst assumes that most of the reported errors are spurious errors that result because some information is missing from the analysis process. The analyst uses the method described in Chapter 5 to identify the information that is missing from the analysis process and that is leading to the spurious errors.
Three possible scenarios are:

   (a) The analyst manually adds the identified information to the specification and reruns the symbolic analysis. The analysis runs for a few minutes and reports that the logical expressions are complete or consistent. The analysis is fully automated aside from manually adding the information to the specification.

   (b) The analyst manually adds the identified information to the reasoning component's specification and analyzes the logical expressions with the reasoning component. The analyst manually adds the information from the specification into the reasoning analysis (i.e., the proof process). The analysis finishes in under a minute and shows that the guarding conditions are complete or consistent.

   (c) The analyst manually adds the identified information to the specification and reruns the symbolic analysis. The analysis runs for a few minutes and reports some errors. The analyst uses one of the above scenarios (1-4) to determine that the errors are spurious.

6. The analyst applies symbolic analysis to a logical expression and tens or hundreds of millions of errors are reported. The analyst automatically generates a specification for the reasoning component and analyzes the specification with the reasoning component. The reasoning component runs for hours and the analyst aborts the analysis. The analyst assumes that most of the reported errors are spurious errors that result because some information is missing from the analysis process. The analyst uses the method described in Chapter 5 to identify the information that is missing from the analysis process and that is leading to the spurious errors. The analyst adds the missing information to the symbolic analysis and reruns the analysis. The analysis reports under ten million errors. Two possible scenarios from this point are:

   (a) The analyst looks for additional information that is missing from the analysis, identifies the missing information, adds it to the reasoning component specification, reruns the reasoning analysis, and the analysis reports the expressions are complete or consistent.

   (b) The number of spurious errors is now less than the number of true errors, and the true errors occur with many permutations of sub-expressions. Symbolic analysis reports all of the permutations. The analyst adds the missing information to the reasoning component specification, reruns the reasoning analysis, and the analysis reports 500 errors. The analyst cannot find any additional missing information, so the analyst manually inspects the reported errors, making the assumption that they are true errors. The analyst spends several hours and finds that the errors are true errors and that the number of them is large due to the possible permutations of several groups of sub-expressions.

7. The analyst applies symbolic analysis to a logical expression and millions of errors are reported. The analyst assumes that most of the reported errors are spurious errors that result because some information is missing from the analysis process. The analyst uses the method described in Chapter 5 to identify the information that is missing from the analysis process and that is leading to the spurious errors. The analyst adds the missing information to the symbolic analysis and reruns the analysis. The analysis reports that the logical expressions are complete or consistent.
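The scenarios above share a common decision skeleton: run the fast symbolic analysis, escalate to the reasoning component when the report is too large to inspect by hand, and fall back to identifying missing information (Chapter 5) when neither component produces a manageable report. The following is a minimal sketch of that skeleton, not our actual tool; the component interfaces (run_symbolic, run_reasoning, find_missing_info, add_info) and the thresholds are hypothetical placeholders supplied by the caller.

    # Hypothetical skeleton of the analyst-driven iteration in the scenarios
    # above; every callable is an illustrative placeholder, not a real
    # command from our tool set.
    def iterate(spec, run_symbolic, run_reasoning, find_missing_info,
                add_info, manual_limit=75, reasoning_limit=10_000):
        while True:
            report = run_symbolic(spec)             # fast, fully automated
            if not report:                          # complete/consistent
                return []
            if len(report) <= manual_limit:         # scenarios 1-2(a)
                return report                       # inspect by hand
            if len(report) <= reasoning_limit:      # scenarios 2(b) and 3
                refined = run_reasoning(report)     # None if aborted
                if refined is not None:
                    return refined
            info = find_missing_info(spec, report)  # Chapter 5 method
            if not info:                            # nothing abstracted away;
                return report                       # treat reports as true errors
            spec = add_info(spec, info)             # scenarios 5-7: iterate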
The above scenarios represent typical situations we encountered during our experiments but do not cover all of the possible ways an analyst can apply our analysis process. They do, however, provide a good overview of how our analysis process may be applied and how the process may work. Our analysis process is driven by the analyst, and thus, application of our analysis process is highly flexible and can be customized by the analyst to deal with a wide variety of situations. Note that in the above scenario descriptions, we did not include specific details about the tools our process uses. In the next section we describe in detail the symbolic and reasoning components of our analysis process.

4.2 Tools

The specific tools we use in our analysis process and the flow of data between the tools are shown in Figure 4.5. The tools described in this section are specific to the application of our analysis process to the analysis of state-based requirements (namely, RSML requirements) for completeness and consistency.

Figure 4.5: Overview of analysis tools and data flow between the tools.

The symbolic analysis component sits on top of a BDD library created by Long [5] at Carnegie Mellon University. The symbolic analysis component consists of three sub-processes: a BDD translator that translates the RSML AND/OR table representation of guarding conditions to Binary Decision Diagrams, a BDD analyzer that manipulates the BDD representation of the guarding conditions to check for tautologies and contradictions, and an AND/OR table translator that converts the result BDDs output from the BDD analyzer process to AND/OR table format to present to the analyst.

The PVS translator converts the internal representation of the state machine to a PVS specification. The analyst then initiates PVS to analyze the PVS specification. We developed a set of proof strategies that, in most cases, allow the analyst to perform proofs of completeness and consistency with a single command.

In Section 4.2.1 we present an overview of the control flow and set-up required before analysis can begin. Section 4.2.2 discusses Long's BDD library and some of the features of the library. A brief description of the BDD translator is given in Section 4.2.3. We do not discuss the BDD translator in detail since it is fairly straightforward and the details are not important. In Section 4.2.4 we describe the AND/OR table translator. We discuss the PVS translator in Section 4.2.5. Chapter 2 contained a detailed discussion of PVS, so we will not discuss PVS further in this chapter. Section 4.2.6 describes the BDD analyzer, and in Section 4.2.7 we describe the PVS analysis process.

4.2.1 Control Overview

The tool provides several command line options that allow the analyst to determine the direction the analysis should take. When the analyst initiates the analysis process, the specified command line options guide the flow of the analysis. Regardless of the options specified, the tool does some initial set-up to prepare for the requested analysis. When the analysis program is initiated, the following set-up occurs: for each atomic state¹ in the RSML specification, the transitions out of the atomic state are grouped together according to the events that trigger the transitions (we call this group a trigger-transitions set).

¹An atomic state is a state with no sub-states.
The transitions associated with each event include all transitions (if any) out of the super-states of the atomic state triggered by the same event. For example, Figure 4.6 shows an example of a state diagram with super-states S1, S1A, S1B, S2, and S3, atomic states A1, A2, and so on to A7, and transition triggers x, y, and z. Only the triggers on transitions out of the atomic states A1, A6, and A7 are shown in the figure.

Figure 4.6: Example state diagram to demonstrate collection of triggers and associated transitions during analysis process set-up.

The trigger-transitions sets that would be constructed from the information in the state diagram are: for atomic state A1 and trigger z, transitions A1 to A3 and A1 to S1; for atomic state A6 and trigger x, transitions A6 to A7, A6 to S2, S1 to A1, and S3 to A1; for atomic state A7 and trigger z, transitions S1 to A1 and S3 to A1; for atomic state A7 and trigger y, transitions A7 to A5 and A7 to A6. (A code sketch of this grouping appears at the end of this subsection.)

Each trigger-transitions set is marked with the number of transitions that are included in the set. If a particular event out of an atomic state (and its super-states) has multiple transitions associated with it, we can perform completeness and consistency analysis on the associated transitions. If a particular event out of an atomic state (or its super-states) has only one transition associated with it, no consistency checks are required, but we can perform completeness checks on the single transition. Section 4.2.6 discusses the details of the completeness and consistency analysis.
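As promised above, here is a minimal sketch of the set-up step. The data representation is assumed for illustration, not taken from our implementation: a transition is a (source, trigger, destination) triple, and parents maps each state to its immediate super-state.

    # A minimal sketch of trigger-transitions set construction; the
    # representation (triples plus a parent map) is illustrative only.
    from collections import defaultdict

    def ancestors(state, parents):
        while state in parents:          # walk up the super-state chain
            state = parents[state]
            yield state

    def trigger_transitions_sets(atomic_states, transitions, parents):
        sets = defaultdict(list)
        for atomic in atomic_states:
            scope = {atomic, *ancestors(atomic, parents)}
            for src, trigger, dst in transitions:
                if src in scope:         # out of the state or a super-state
                    sets[(atomic, trigger)].append((src, dst))
        return sets                      # each value: one trigger-transitions set

A set with more than one transition is then subject to both completeness and consistency checks; a singleton set needs only a completeness check, mirroring the marking step described above.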
4.2.2 BDD Library

Long's BDD package [5] includes a set of routines for creating, manipulating, printing, and destroying BDDs. There are also routines for printing status information about BDDs and for printing a histogram showing the number of nodes at each level. The histogram is called a BDD node profile; the node profile's importance to the analysis process is described in Chapter 5. Figure 4.7 shows an example of a BDD node profile. The # signs after the nodes represent a histogram signifying the number of nodes at each level in relation to the other nodes in the BDD. Long's BDD library also includes provisions for dynamically reordering the variables in a graph; dynamic reordering is discussed in Section 4.2.2.1, and its importance to the analysis process is described in Chapter 5.

Figure 4.7: Example BDD node profile portion. (The profile lists each predicate in the variable order, with the number of nodes at that level and a # histogram; the example profile totals 390 nodes.)

4.2.2.1 Variable Reordering

Here, we provide only a high-level description of dynamic variable reordering methods. All of the material contained in this section comes from [5] and from personal communication with David Long. Long's BDD package includes two variable reordering algorithms: bdd_reorder_stable_window3 and bdd_reorder_sift. In general, the stable window methods take small groups of consecutive variables and try all permutations of the variables within each group. The process is repeated throughout the order until no more reduction is possible; i.e., the nodes within each group are permuted until no more reduction in the total number of nodes occurs. The stable window 3 method permutes the variables within windows of three adjacent variables to minimize the overall BDD size [5]. The method is not guaranteed to find an optimal ordering. The sifting method picks up each variable in turn, moves it throughout the ordering to see what the total BDD size is for each possible position, and then moves the variable back to the spot that gives the lowest total number of nodes. Sifting is not guaranteed to find an optimal ordering of variables either. Generally, sifting achieves greater size reductions than the window-based methods, but is slower.

For the stable window method to be most effective, one should know which variables should be grouped together, and these variables should be created in the appropriate order so they are adjacent to each other in the ordering within a window of three variables. It is not generally easy to automatically determine which variables are best grouped together. The analysis results described in this dissertation relied on the sift reordering method, but the analyst can specify the stable window 3 method if desired.

4.2.3 BDD Translator

The guarding conditions are represented in disjunctive normal form in the RSML specification. When the analyst invokes symbolic analysis using BDDs, the following actions are performed:

1. Initialize the requested command line options.
2. For each predicate in the RSML specification, create a BDD variable and create a link between the predicate and the BDD variable.
3. If dynamic variable reordering is requested, set the dynamic variable reordering routine to the requested reordering routine (stable window 3 or sifting).

Initially, the BDD translator creates only a BDD variable for each predicate; no BDD representations of the AND/OR tables exist yet. During the BDD analysis process, the BDD analyzer works with BDD representations of the AND/OR tables. When the BDD analyzer attempts to invoke a BDD representation of an AND/OR table that does not exist, the BDD translator builds the BDD representation of the AND/OR table, stores it for later use, and returns it to the BDD analyzer. If dynamic variable reordering is enabled, the BDD translator will invoke the reordering algorithm to reorder the guarding condition BDD after building it, and prior to storing it and returning it to the BDD analyzer. When the BDD analyzer invokes an existing BDD representation of an AND/OR table, the existing BDD representation is simply returned to the analyzer.

4.2.4 AND/OR Table Translator

The AND/OR table translator receives the result BDD that is output from the BDD analyzer and converts the BDD into AND/OR table format to present to the analyst. The translator works by traversing the BDD and looking for satisfying paths. Each satisfying path in the result BDD represents a potential incompleteness (inconsistency) and is converted to a column in the AND/OR table.
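The traversal at the heart of the translator can be sketched in a few lines over a toy BDD node type (an illustrative stand-in, not Long's C library): every path from the root to the terminal TRUE yields one AND/OR table column, and predicates absent from a path act as don't cares.

    # A toy sketch of satisfying-path enumeration; Node is an assumed
    # stand-in for a node in Long's library, with "T"/"F" as terminals.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Node:
        pred: str                    # predicate labeling this node
        low: Union["Node", str]      # subgraph taken when pred is FALSE
        high: Union["Node", str]     # subgraph taken when pred is TRUE

    def satisfying_paths(node, path=()):
        if node == "T":
            yield dict(path)         # one column: predicate -> "T"/"F"
        elif node != "F":
            yield from satisfying_paths(node.low, path + ((node.pred, "F"),))
            yield from satisfying_paths(node.high, path + ((node.pred, "T"),))

Predicates that never appear on a path are simply omitted from the column (don't cares), and the generator can be stopped after the analyst-requested number of columns has been produced.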
Recall that there may be some contradictions between the conjuncts comprising the satisfying paths that preclude them from being satisfiable, but the symbolic analysis, without some additional capabilities, cannot detect and eliminate paths containing certain contradictory predicates. Thus, each satisfiable path containing an undetected contradiction becomes a column in the AND/OR table, and represents a spurious error reported to the analyst. Since our goal is to eliminate as many spurious error reports as possible, we augmented our AND/OR table translation process with decision procedures to detect certain contradictions so they would not be reported in the analysis report.

Decision procedures are algorithms designed to reason about, and possibly simplify, expressions. Decision procedures can augment weak analysis processes and make them stronger. For example, we know that symbolic analysis using BDDs cannot reason about the components of the expressions and therefore cannot make decisions about whether or not two or more interdependent expressions contradict each other; this is because the semantics of the expressions are lost in the symbolic representation. This inability to reason about relationships between expressions may lead to many spurious error reports in the analysis output. Adding procedures to the symbolic analysis process that can perform some simple reasoning about the interdependencies between expressions will strengthen the overall analysis process and result in fewer spurious error reports.

We added some simple decision procedures to check for contradictions between enumerated types and state enumerated types (a super-state with sub-states can be thought of as an enumerated type by making the sub-states of the state the elements of the type). The term enumerated type predicate shall, henceforth, refer to any of four types of RSML predicates: EnumEqual, InState, InOneOf, and IsOneOf predicates. The term enumerated type shall, henceforth, refer to either an enumerated type variable, or a state enumerated type.

Figure 4.8 shows examples of enumerated types and what the RSML predicates for the enumerated types look like. The enumerated type for the state Pressure has elements corresponding to the state's child states High, Low, and Ok. Mode is an enumerated type whose elements are On, Off, Standby, and Halt. The interpretation of example RSML predicate 1 is: the value of the enumerated type Mode is equal to Standby. The interpretation of predicate 2 is: Pressure is in state Low. The interpretation of predicate 3 is: Mode is either Off, Standby, or Halt. The interpretation of predicate 4 is: Pressure is either in state High or Low.

State Enumerated Type:  Pressure = {High, Low, Ok}
Enumerated Type:        Mode = {On, Off, Standby, Halt}

Example RSML Predicates:
1. Mode EnumEqual Standby
2. Pressure InState Low
3. Mode IsOneOf {Off, Standby, Halt}
4. Pressure InOneOf {High, Low}

Figure 4.8: Examples of enumerated types and enumerated type predicates.

The analyst enables the decision procedures by setting the appropriate command line option when the analysis process is initiated. The decision procedures are invoked by the AND/OR table translator during the translation of the result BDD to AND/OR table format. Recall that each satisfying path in the result BDD represents a column in the AND/OR table.
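Before the detailed walk-through of the traversal bookkeeping below, the core check can be sketched as follows. This is an illustrative reconstruction covering only the simple element-equality case (EnumEqual and InState predicates); the set-valued predicates (InOneOf and IsOneOf) require analogous, slightly richer bookkeeping.

    # A sketch of the enumerated-type decision procedure: one tracker per
    # enumerated type records which elements the current path forces TRUE
    # or FALSE. Names and interfaces are illustrative, not our tool's.
    class EnumTracker:
        def __init__(self, elements):
            self.elements = frozenset(elements)   # e.g. {"High", "Low", "Ok"}
            self.true, self.false = set(), set()

        def assume(self, element, value):         # predicate seen on the path
            (self.true if value else self.false).add(element)

        def retract(self, element, value):        # traversal backed up over it
            (self.true if value else self.false).discard(element)

        def path_valid(self):
            if len(self.true) > 1:                # two elements TRUE at once
                return False
            if self.false == self.elements:       # every element FALSE
                return False
            return True

For example, a path that asserts Pressure InState Low and later Pressure InState High makes two elements TRUE at once, so path_valid returns False and the path is pruned; this is exactly the situation the translator eliminates.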
Whenever one of the enumerated type predicates is encountered as each path is traversed in the result BDD, the decision procedures are called to see if the current path is valid given the truth value of the newly encountered enumerated type predicate. The decision procedures keep track of which elements have been encountered, the number of times the element may be TRUE, and the number of times a particular state or enumerated variable has been encountered. This information is kept for each individual enumerated variable and for each individual super-state. A current path is reported valid by the decision procedures if only one element for each enumerated type may be TRUE at a given time. If two or more elements of a given enumerated type are TRUE at the same time, or if all elements are FALSE at the same time, the decision procedures return that the current path is invalid.

If the decision procedures report that a path is invalid, the AND/OR table translator stops traversing the current branch in the BDD and backs up to the next unexplored branch. Whenever the traversal algorithm backs up to a node representing an enumerated type predicate, or when the traversal algorithm backs up over an enumerated type predicate (the traversal algorithm backs up over an enumerated type predicate after both branches of the node representing the predicate have been explored), the decision procedures are notified of the incident by the AND/OR table translator. The decision procedures modify the information they are monitoring for the specific enumerated type that has changed.

The AND/OR table translator continues traversing the result BDD until all satisfying paths have been translated to columns in the AND/OR table representation of the result BDD, or until the number of satisfying paths requested by the analyst has been translated to columns in the tabular representation.

4.2.5 PVS Translator

The PVS translator tool generates a PVS theory for each transition in an RSML specification, and the tool also generates the PVS conjecture theories for completeness and consistency analysis. This is done in a three stage process. First, each predicate in the AND/OR table is defined as a predicate in the PVS specification language². Second, a predicate representing the full guarding condition is built from the individual predicates defined in the first stage. For example, consider the transition from the state Inhibited to the state Not-Inhibited defined in Figure 4.9. In PVS, this transition would be defined by the theory shown in Figure 4.10. The PVS theory is named by the transition source state and the transition destination state, appended with a unique id: Inhibited_To_Not_Inhibited_ID1. The constant NODESHI used in the predicate comprising the guarding condition in Figure 4.9 is declared as a constant in the PVS specification. The functions Own_Tracked_Alt and Ground_Level used in the predicate comprising the guarding condition are declared as variables of type integer in the PVS specification. The predicate comprising the guarding condition is then declared as a function with return type Boolean, Pred168_Grtr?; each predicate in the guarding conditions is given a number and appended with the type of the predicate being specified. For the transition from Inhibited to Inhibited (Figure 4.9), the predicate comprising the guarding condition would be appended with LessEqual in the PVS specification.

²A predicate in PVS is a function with return type Boolean.
The guarding condition is defined as a function in PVS with return type Boolean (a PVS predicate), and given a name composed of the transition source and destination states appended with a unique id: Inhibited_To_Not_Inhibited_T1?. The definition of the PVS predicate representing the guarding condition is the disjunctive normal form expression of the previously defined predicates comprising the guarding condition. In the guarding condition in Figure 4.9 there is only one predicate and its value must be TRUE; thus, the guarding condition defined in the PVS specification consists of a single predicate, Pred168_Grtr?, whose value must be TRUE for the predicate to be satisfied.

In the third stage, a conjecture theory is generated that contains the proof obligations for the completeness and consistency checks. The conjecture theory generated for the state Inhibited is shown in Figure 4.11. The PVS conjecture theory imports the two theories representing the transitions from the state Inhibited, declares the necessary variables, and then sets up the conjectures to check for completeness and consistency. The PVS prover is used on the resulting specifications to attempt to prove the stated conjectures. PVS easily proves the conjectures in Figure 4.11, showing the guarding conditions are complete and consistent. When a conjecture cannot be proved by PVS, unprovable subgoals (Figure 4.12) are reported.

Transition(s):  Inhibited --> Not-Inhibited
Location:       Own-Aircraft > Descend-Inhibits
Trigger Event:  Surveillance-Complete-Event
Condition:      (Own-Tracked-Alt - Ground-Level) > 1200 ft (NODESHI)
Output Action:  Descend-Inhibit-Evaluated-Event

Transition(s):  Inhibited --> Inhibited
Location:       Own-Aircraft > Descend-Inhibits
Trigger Event:  Surveillance-Complete-Event
Condition:      Own-Tracked-Alt <= (1200 ft (NODESHI) + Ground-Level)
Output Action:  Descend-Inhibit-Evaluated-Event

Figure 4.9: The transitions out of the state Inhibited.

Unprovable subgoals represent errors in the specification and are easily translated back to an AND/OR table for review by the analyst. For consistency analysis, the unprovable subgoal shown in Figure 4.12 is interpreted as: both transition conditions are satisfied if

    TrafficDisplayStatus(i) IN_STATE WaitingToSend
    AND (AdvisoryCode(i) = ResolutionAdvisory)
    AND NOT (i = j)
    AND NOT (TrafficDisplayStatus(j) IN_STATE WaitingToSend)
    AND NOT (TrafficScore(OtherAircraft(i)) >= TrafficScore(OtherAircraft(j))).

For completeness analysis, the same unprovable subgoal is interpreted as: there is no transition out of a state if the above condition is satisfied. Figure 4.13 shows the AND/OR table representation of the subgoal. Translation of the unprovable subgoals from PVS format to AND/OR table format is currently done manually but could be automated.

%----------------------------------------%
% Theory for transition from state       %
% Inhibited to state Not_Inhibited       %
%----------------------------------------%
Inhibited_To_Not_Inhibited_ID1: THEORY
BEGIN

  %--------------------------%
  % Constant Declarations    %
  %--------------------------%
  NODESHI: int = 1200   % Threshold for descend inhibit

  %--------------------------%
  % Variable Declarations:   %
  %--------------------------%
  Own_Tracked_Alt, Ground_Level: VAR int

  %---------------------------------------------------%
  % Definition for the first (and in this case, only) %
  % predicate in the guarding condition on the        %
  % transition from state Inhibited to state          %
  % Not_Inhibited.                                    %
  %---------------------------------------------------%
  Pred168_Grtr?(Own_Tracked_Alt, Ground_Level): bool =
    Own_Tracked_Alt - Ground_Level > NODESHI

  %-------------------------------------------------------------------%
  % Guarding condition and interface to the theory for the transition %
  % from state Inhibited to state Not_Inhibited.                      %
  %-------------------------------------------------------------------%
  Inhibited_To_Not_Inhibited_T1?(Own_Tracked_Alt, Ground_Level): bool =
    Pred168_Grtr?(Own_Tracked_Alt, Ground_Level)

END Inhibited_To_Not_Inhibited_ID1

Figure 4.10: A PVS theory for the transition from Inhibited to Not-Inhibited.

Inhibited_Tests: THEORY
BEGIN

  %--------------------------------------------------%
  % Import declarations for theories for transitions %
  %   - Inhibited_To_Not_Inhibited,                  %
  %   - Inhibited_To_Inhibited,                      %
  %--------------------------------------------------%
  IMPORTING Inhibited_To_Not_Inhibited_ID1,
            Inhibited_To_Inhibited_ID0

  %--------------------------%
  % Variable Declarations:   %
  %--------------------------%
  Own_Tracked_Alt, Ground_Level: VAR int

  %---------------------------------------------------%
  % Conjecture: The conditions for transitions out of %
  % state Inhibited are completely specified.         %
  %   Cond1 \/ Cond2 -> True                          %
  %---------------------------------------------------%
  InhibitedToNotInhibited_InhibitedToInhibited_Complete: CONJECTURE
    Inhibited_To_Not_Inhibited_T1?(Own_Tracked_Alt, Ground_Level) OR
    Inhibited_To_Inhibited_T0?(Own_Tracked_Alt, Ground_Level)

  %---------------------------------------------------%
  % Conjecture: The conditions for transitions out of %
  % state Inhibited are consistent; non-conflicting.  %
  %   Cond1 /\ Cond2 -> False                         %
  %   rewritten as not(Cond1 /\ Cond2)                %
  %---------------------------------------------------%
  InhibitedToNotInhibited_InhibitedToInhibitedConsistent: CONJECTURE
    NOT ( Inhibited_To_Not_Inhibited_T1?(Own_Tracked_Alt, Ground_Level) AND
          Inhibited_To_Inhibited_T0?(Own_Tracked_Alt, Ground_Level) )

END Inhibited_Tests

Figure 4.11: Conjecture theory for the state Inhibited.

PVS SUBGOAL:
{-1}  WaitingToSend?(TrafficDisplayStatus!1(i!1))
{-2}  ResolutionAdvisory?(AdvisoryCode!1(i!1))
  |-------
{1}   (i!1 = j!1)
{2}   WaitingToSend?(TrafficDisplayStatus!1(j!1))
{3}   TrafficScore!1(OtherAircraft!1(i!1)) >=
      TrafficScore!1(OtherAircraft!1(j!1))

Figure 4.12: PVS unprovable subgoal example.

TrafficDisplayStatus(i) IN_STATE WaitingToSend                      T
(AdvisoryCode(i) = ResolutionAdvisory)                              T
(i = j)                                                             F
(TrafficDisplayStatus(j) IN_STATE WaitingToSend)                    F
(TrafficScore(OtherAircraft(i)) >= TrafficScore(OtherAircraft(j)))  F

Figure 4.13: AND/OR table representation of PVS subgoal.

4.2.6 BDD Analyzer

The BDD analyzer contains routines to check logical expressions represented as BDDs for satisfiability (completeness) and mutual exclusion (consistency). Section 4.2.6.1 describes the process used to check for consistency. Section 4.2.6.2 describes the process used to check for completeness.

4.2.6.1 Consistency Analysis

Recall that in RSML, transitions out of a state are annotated with a trigger (event) and a guarding condition. A particular transition from one state to another state can only occur if the trigger event occurs and the guarding condition associated with that event is satisfied. Further, recall that checking two guarding conditions for consistency requires computing the conjunction of the guarding conditions to see if the conjunction is a contradiction (reduces to FALSE).
This section describes how the BDD analyzer checks the guarding conditions out of a state for consistency, and how the inconsistencies are reported to the analyst. For each trigger-transitions set (described in Section 4.2.1) associated with a particular atomic state, do the following:

1. See if there are multiple transitions associated with the trigger.

2. If there are multiple transitions associated with the trigger, get the BDD representation for each transition's guarding condition from the BDD translator.

   (a) Compute the pairwise conjunctions of the BDD representations of the guarding conditions.

       i. For each result BDD, if dynamic variable reordering is enabled, invoke the reordering algorithm to reorder the result BDD.

   (b) If the guarding conditions are consistent (the conjunction reduces to FALSE, a contradiction), report the guarding conditions consistent.

   (c) If the guarding conditions are not consistent (the conjunction did not reduce to FALSE), report to the analyst the conditions under which both guarding conditions can be satisfied.

       i. For inconsistent guarding conditions, convert the result BDD to AND/OR table format (via the AND/OR table translator) and print the result table. All satisfiable paths in the result BDD represent potential inconsistencies and are reported as columns in the AND/OR table output.

3. If there are not multiple transitions associated with a trigger, skip the trigger.

4.2.6.2 Completeness Analysis

Recall that checking whether the guarding conditions on transitions out of a particular state triggered by the same trigger are complete requires computing the disjunction of the guarding conditions to see if the disjunction is a tautology (reduces to TRUE). This section describes how the BDD analyzer checks the guarding conditions out of a state for completeness, and how the incompletenesses are reported to the analyst. For each trigger-transitions set (described in Section 4.2.1) associated with a particular atomic state, do the following:

1. See if there are multiple transitions associated with the trigger.

2. If there are multiple transitions associated with the trigger, get the BDD representation for each transition's guarding condition from the BDD translator.

   (a) Compute the disjunction of the BDD representations of the guarding conditions.

       i. For each result BDD, if dynamic variable reordering is enabled, invoke the reordering algorithm to reorder the result BDD.

   (b) If the guarding conditions are complete (the disjunction reduces to TRUE, a tautology), report the guarding conditions complete.

   (c) If the guarding conditions are not complete (the disjunction did not reduce to TRUE), report to the analyst the conditions under which no transition can be made out of the state.

       i. For incomplete guarding conditions, negate the result BDD so the unsatisfiable paths in the BDD (the paths leading to FALSE) become satisfiable paths (paths leading to TRUE). This is necessary because the AND/OR table translator translates only satisfiable paths to columns in an AND/OR table, and for completeness analysis, we actually want to report the unsatisfiable paths to the analyst.

       ii. Convert the result BDD to AND/OR table format (via the AND/OR table translator) and print the result table. All satisfiable paths in the negated result BDD represent potential incompletenesses and are reported as columns in the AND/OR table output.

3. If there are not multiple transitions associated with a trigger, perform the relevant steps described in step 2 above on the single transition.
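The two checks are duals, and both reduce to questions about a Boolean function: every pairwise conjunction must be a contradiction, and the disjunction must be a tautology. The BDD analyzer answers these questions symbolically; the following brute-force sketch over explicit truth assignments (illustrative only, and exponential in the number of predicates) states exactly what is being decided. Guards are assumed to be Python predicates over an assignment mapping predicate names to Booleans.

    # A brute-force statement of the two checks; the BDD analyzer decides
    # the same questions symbolically rather than by enumeration.
    from itertools import combinations, product

    def assignments(names):
        for bits in product((False, True), repeat=len(names)):
            yield dict(zip(names, bits))

    def consistent(guards, names):
        # every pairwise conjunction reduces to FALSE (mutual exclusion)
        return all(not (g(a) and h(a))
                   for g, h in combinations(guards, 2)
                   for a in assignments(names))

    def complete(guards, names):
        # the disjunction reduces to TRUE (a tautology)
        return all(any(g(a) for g in guards) for a in assignments(names))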
4.2.7 PVS Analysis

We provided an overview of the translation process from an RSML specification to a PVS specification in Section 4.2.5. Some additional details related to the translation that were not discussed previously include: all enumerated type variables in the RSML specification are mapped to abstract datatypes in the PVS specification, and all super-states in the RSML specification are mapped to abstract datatypes whose elements are the immediate child states.

Once the translation process is complete, the analyst initiates PVS and automatically parses and typechecks the theories and declarations generated from the RSML specification. During typechecking, theories for abstract datatypes, such as the disjointness property of abstract datatypes, are automatically generated by PVS. When parsing and typechecking is finished, the analyst invokes the PVS prover on the conjecture to be proved.

When the analyst invokes the prover, a set of strategies developed during this research are loaded into PVS, and the strategies become available to the PVS prover. The strategies allow a sequence of commands to be carried out with the invocation of only one command (the strategy name). The strategies make the PVS analysis more automated, and largely free the analyst from needing intimate knowledge of the PVS prover commands. We developed strategies for proving that the pairwise conjunction of two logical expressions is a contradiction (i.e., the logical expressions are mutually exclusive, or consistent), and we developed strategies for proving that the disjunction of logical expressions is a tautology (i.e., the guarding conditions are complete). There are other strategies that are basically sub-strategies within the major strategies. The sub-strategies are useful when the analyst needs to apply some intermediate steps during the proof process. We developed and used the strategies during the course of this research. In the majority of cases, we only needed one command (strategy) to carry out and finish a proof. Appendix B contains definitions of the PVS strategies we applied. As a side note, several additional strategies were developed for and used successfully in a related research area [29].

4.3 Adding Augmenting Information to the Analysis Process

Recall that if the number of error reports is large, it may be that the majority of the reports are spurious and that the reason for the spurious error reports is that information needed by the analysis process to detect and eliminate the spurious errors has been abstracted out of the model. If this is the case, then we want to identify the information that has been abstracted out, augment the analysis process with the information, and rerun the analysis to achieve more accurate results. The method for identifying the missing information is described in Chapter 5. This section describes how to incorporate the augmenting information into the analysis process once it has been identified. Section 4.3.1 describes the process of adding augmenting information to the BDD analysis, and Section 4.3.2 describes how augmenting information is added to the PVS analysis.

4.3.1 Adding Augmenting Information to the BDD Analysis

It is easy to add augmenting information to the BDD consistency analysis process.
To demonstrate, let E be a logical expression with sub-expressions X, Y, and Z, where X, Y, and Z are logical expressions containing the variables x_1, x_2, ..., x_m, y_1, y_2, ..., y_n, and z_1, z_2, ..., z_p respectively. Assume that two variables, say x_i and z_j, cannot both be satisfied (TRUE) at the same time due to the structure of the specified system, and that this information has been abstracted out of the model. Assume also that every path in the BDD representation of E that eventually leads to TRUE has both x_i and z_j TRUE; i.e., there does not exist any satisfiable path in the BDD that does not pass through x_i and z_j such that x_i and z_j are both TRUE. The analysis report shows hundreds of millions of satisfying paths, implying that there are hundreds of millions of inconsistencies between the logical expressions. The analyst wisely assumes that most of the reported errors are spurious errors, and uses the method described in the next chapter to locate the source of the spurious errors. She quickly locates the source of the spurious errors, adds the required augmenting information to the analysis process by including it in the above expression, and reruns the analysis. In this case, the required augmenting information is ¬(x_i ∧ z_j). If E was X ∧ Y ∧ Z, then the logical expression E' with the augmenting information included is:

    E' = X ∧ Y ∧ Z ∧ ¬(x_i ∧ z_j).

Rerunning the analysis on the BDD representation of E' will show the original logical expression consistent.

The same process applies to adding augmenting information to the completeness analysis; however, the original result BDD (the BDD obtained from taking the disjunction of the guarding conditions) must be negated prior to adding the augmenting information. Completeness and consistency are duals of each other. In completeness analysis we want to report unsatisfiable paths to the analyst, while in consistency analysis we want to report satisfiable paths. Thus, by negating the BDD that results from completeness analysis, we change the unsatisfiable paths to satisfiable paths and vice versa, thereby reformulating the problem in terms of consistency analysis. We can then add the augmenting information to the results of the completeness analysis in the same way as described above for consistency analysis.

4.3.2 Adding Augmenting Information to the PVS Analysis

Augmenting information is added to the PVS analysis using the following steps:

1. Add the augmenting information to the PVS specification as an axiom.
2. Start the prover to prove the desired conjecture.
3. Use the (lemma "axiom-name") prover command (described in Appendix B) to bring the axiom axiom-name into the proof process.
4. Determine instantiations for the variables used in the axiom.

Figure 4.14 shows two axioms as specified in the PVS specification language.

OtherAltReporting_AltReporting_Assertion : AXIOM
  cTrue?(Other_Alt_Reporting) IFF Yes?(Alt_Reporting)

OtherAltReporting_OtherAirStatus_Assertion : AXIOM
  NOT (cTrue?(Other_Alt_Reporting)) IMPLIES
    State_Airborne?(Other_Air_Status)

Figure 4.14: Specifying axioms in the PVS specification language.

The interpretation of the OtherAltReporting_AltReporting_Assertion is: the variable Other_Alt_Reporting is TRUE if and only if the super-state Alt_Reporting is in state Yes. The OtherAltReporting_OtherAirStatus_Assertion axiom is interpreted as: the variable Other_Alt_Reporting being NOT TRUE (FALSE) IMPLIES that the super-state Other_Air_Status is in state Airborne. Another way to state this latter axiom is: if Other_Alt_Reporting is FALSE, then Other_Air_Status is NOT in state OnGround.

Figure 4.15 shows an example of how our PVS sub-strategy (skolemizeandrewrites) and the PVS prover commands would be applied to add the above augmenting information (axioms) to the PVS proof process, and to complete the proof. The PVS prover commands and our strategies are defined and explained in Appendix B.

(skolemizeandrewrites)    ; strategy
(lemma "OtherAltReporting_AltReporting_Assertion")
(lemma "OtherAltReporting_OtherAirStatus_Assertion")
(apply (repeat* (inst?)))
(apply (repeat* (try (bddsimp) (record) (postpone))))

Figure 4.15: Adding augmenting information to the PVS proof process and completing the PVS proof.
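Returning briefly to the BDD side of Section 4.3.1, the construction of E' is a one-line operation under the same illustrative representation used in the earlier sketches (guarding conditions as Python predicates over truth assignments); the negation step needed for completeness analysis is shown as well. The names here are assumptions for illustration, not our tool's interface.

    # Illustrative sketch of Section 4.3.1: conjoin the domain axiom
    # not(x_i and z_j) onto the expression before re-analysis.
    def with_domain_axiom(expr, xi, zj):
        # E' = E /\ not(x_i /\ z_j)
        return lambda a: expr(a) and not (a[xi] and a[zj])

    def completeness_residue(guards):
        # for completeness analysis, negate the disjunction first
        # (duality), then conjoin domain axioms exactly as above
        return lambda a: not any(g(a) for g in guards)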
Another way to state this latter axiom is, IF Other.A1t_Reporting is FALSE, then OtherAirStatus is NOT in state OnGround. Figure 4.15 shows an example of how our PVS sub-strategy (skolemizeandrewriteS) and the PVS prover commands would be applied to add the above augmenting information (axioms) to the PVS proof process, and to complete the proof. The PVS prover commands and our strategies are defined and explained in Appendix B. 104 (skolemizeandrewrites) strategy (lemma I'OtherAltReporting_AltReport:ing_Assertion") (lemma "OtherAltReporting_OtherAirStatus_Assertion') (apply (repeat* (inst?))) (apply (repeat* (try (bddsimp) (record) (postpone)))) Figure 4.15: Adding augmenting information to the PVS proof process and completing the PVS proof. 4.4 Iteration Options and Analysis of Output Section 4.1 provided a high level view of our analysis process, and described some typical scenarios of how an analysis might proceed when applied to RSML requirements to check the requirements for completeness and consistency. In sections 4.2 and 4.3 we described the specific tools our analysis process uses and how to add augmenting information to the analysis process to mduce or eliminate the number of spurious errors reported in the analysis output. We now describe the analysis process in more detail, specifically highlighting how the analyst directs the analysis process, the analysis of output, and the iteration options available to the analyst. Note that, as in Section 4.1.1, the description is specific to the analysis of state-based requirements (specifically, RSML requirements) for completeness and consistency. The analyst directs the analysis process by setting specific command line options when the process is initiated. The command line options available include options to: e Run BDD analysis on a given specification. When the analyst chooses to run BDD analysis, the following options are available: - Run completeness analysis. - Run consistency analysis. - Run both completeness and consistency analysis. - Enable dynamic variable reordering using the stable window 3 reordering algorithm. - Enable dynamic variable reordering using the sift reordering algorithm. - Enable decision procedures. - Count the number of satisfying paths in the result BDD. - Print the result BDD. - Print only a: number of paths from the result BDD in AND/OR table format. 105 - Enable sampling (described in Chapter 5, Section 5 .2.2) so that samples of the result BDD are taken and printed out in AND/OR table format. The analyst can choose to take any number of samples from four different parts of the result BDD. The analyst can specify a different number of samples from each of the four sample areas, including zero samples. At this time, the sample algorithms cannot be changed by the analyst. - Expand macros. 0 Generate PVS specifications for the guarding conditions, and generate the conjecture theory to specify the completeness and consistency conjectures. e Generate PVS specifications for individual AND/OR tables, and generate the conjecture theory to specify the completeness and consistency conjectures to check the individual tables for completeness and consistency. This feature is used when the analyst wants to apply PVS analysis to the AND/OR table output from the BDD analysis. Figure 4.16 shows the possible outputs from the analysis process, the possible outcomes of the output analysis process, and the options available to the analyst based on the outcome of the analyze output process. 
After the automated portion of the analysis is finished and the analysis report is generated, the analyst inspects the output to determine the next course of action to take; the Analyze Output process in the figure. If the report shows the guarding conditions complete or cum size ,--"Mana$eabl€‘~\ , . . For vs ~ . ‘ ’ ’ - ND/OR r . e ‘ ‘ Analysis AND/0R TIM; y Analyze - 99%“ Process BDD Node Profile Output Complete 1” I PVSOUUU r"” “\ - :1;fi ' N01Mjssing I ’ I I (\Output Slack $51 . ’ I \ gnfzgnfion Infdrmeuon Ou ott Sup! ,Por yst l 00 : ’ t Machine Readable .' . ”“9““ 0““ : RsML . . . x . . l I Specification i‘ Missrn ' I AND ’0 . o \ Info tion '. ,’ ‘. . ,’ AND/OR Table \ : ,' BDD Node Profile 13.2%, x \ ‘ ‘ : ’I """"" ~ - dentit' Missin """""" f - Information Causgl “.1 - ' ’ ' fimgg V Spurious Errors _R—_ A File Object ———’ Data Flow New ammo. O m Figure 4.16: Analysis of output and iteration options of the analysis. 106 consistent, the process is finished and no further iterations are required. If the output size from either analysis process is manageable, the analyst manually inspects the output for the true errors. As in Section 4.1.1, for this discussion we assume that symbolic analysis using BDDs is applied first. If the output size from the symbolic analysis is not manageable for an analyst, but is manageable for PVS, it is fed back into the analysis process, a PVS specification is generated, and the PVS output is reported to the analyst. If the output size from either analysis process is not manageable, we assume that most of the error reports are spurious and exist because there is some information missing from the model, such that if the information was added to the analysis process, the spurious errors would be reduced or eliminated. The outputs from the symbolic analysis, the BDD node profile and the AND/OR table, are used to identify the information that is missing from the model being analyzed that is causing the spurious errors; the process labeled Identify Missing Information Causing Spurious Errors in the figure. If no missing information can be found, and both analysis processes have been tried, then the errors must be true errors, and the analyst must examine them manually. If only symbolic analysis has been tried and no missing information can be found, either the symbolic analysis output is converted to a PVS specification and PVS analysis is run, or PVS analysis is applied to the original expressions. If missing information is found, it is added to either the machine readable RSML specification or the PVS specification generated from the machine readable RSML specification, and either symbolic analysis or PVS analysis is run on the augmented specification. Adding augmenting information to the analysis process was discussed in detail in Section 4.3. The process to identify missing information causing spurious errors is described in the next chapter. Note that it is not necessary to start the analysis process using symbolic analysis. The analyst may choose to start with PVS analysis. If the number of subgoals reported from PVS is too large, or the analysis fails to halt, the analyst can mn BDD analysis on the original specification and use 107 the process described in Chapter 5 to look for the missing information leading to spurious errors. It may be beneficial to start the analysis using the symbolic component, however, for the following {83801181 0 The symbolic analysis using BDDs is fully automated, faster, and simpler to use than PVS analysis. 
• The technique used to identify the augmenting information relies on the output from the symbolic analysis to locate the augmenting information.

• If all or most of the augmenting information is related to the structure of the system, starting with PVS analysis will require an unnecessary extra iteration through the analysis process. The extra step is required since the PVS output gives no indication of where to begin to look for missing information.

4.5 Summary

In this chapter we described an analysis technique for checking disjunctions and conjunctions of logical expressions to see if they form tautologies (completeness) or contradictions (consistency). The process is iterative, and integrates graph-based symbolic analysis using BDDs with theorem proving using PVS. There are some consistent (complete) logical expressions that a first run of BDD analysis shows consistent (complete). For the purposes of the discussions in this chapter, we assumed that symbolic analysis using BDDs was applied first, and we presented several possible benefits to applying symbolic analysis first. Ultimately, however, it is up to the analyst to decide the flow the analysis process takes. In Chapter 6 we describe some heuristics, based on the types of predicates comprising the guarding conditions, that the analyst may use to decide the most promising order in which to apply the analysis methods.

We described several potential interaction scenarios between the analyst and our analysis technique. Some scenarios require the analyst to identify information that was abstracted away from the analysis model and that needs to be re-introduced into the model for the analysis to finish successfully, or to generate more accurate analysis reports. However, we did not describe how our process helps an analyst identify the augmenting information. In the next chapter we describe how the output of the symbolic analysis using BDDs can help the analyst locate the missing information leading to spurious errors.

Chapter 5

Domain Axioms and Domain Axiom Identification

In the previous chapter we described in detail our iterative and integrative analysis process. We saw that if the analysis reports are unwieldy, the analyst can assume that many of the error reports are spurious, and that some contradictions in the logical expressions are not being detected because the analysis procedures lack the information to detect these contradictions. We also saw that if the analyst can determine what missing information is leading to the spurious error reports, augment the analysis process with this information, and re-run the analysis, the analysis process will yield more accurate error reports. In the last chapter, however, we left out the details of how the analyst identifies the augmenting information.

In this chapter we present our technique to help the analyst identify the contradictions and missing information causing the spurious errors. Our technique uses the structure of the result BDD from the symbolic analysis to identify potential contradictions, and, thus, to identify the information that the analysis procedures lack. Since the augmenting information represents a truth or axiom about the system (i.e., given the structure and properties of a system, the augmenting information is something that is always true about the system), we refer to the augmenting information describing the contradictions as domain axioms¹.

¹The name domain axioms was suggested by Dr. Laura Dillon at Michigan State University.
Section 5.1 describes the meaning of domain axioms as used in this research, and Section 5.2 describes our technique to help the analyst identify the domain axioms.

5.1 Domain Axioms

Recall that we represent the input to our analysis in disjunctive normal form. We also present the analysis output to the analyst in disjunctive normal form. If two guarding conditions are consistent (they cannot both be satisfied at the same time), their conjunction should reduce to FALSE. The spurious errors that get reported are disjuncts that contain contradictions within the conjunctive terms; the conjunction should reduce to FALSE, but the contradictions between the conjuncts are not being recognized by the analysis methods. As a result, these contradictory conjunctions are reported to the analyst as errors. If we can find the contradictory predicates leading to the spurious errors, create domain axioms representing the contradictions, and feed these domain axioms back into the analysis process, then the output resulting from this second round of analysis will correctly reduce to FALSE, showing that the guarding conditions are consistent. If the guarding conditions are not consistent, then the disjuncts containing contradictions will be eliminated, and the true errors will remain and be reported to the analyst.

In Chapter 3 we identified four classes of spurious errors. The latter two classes of spurious errors are the most difficult for analysis procedures to identify. In the previous chapter, we showed the guarding conditions for transitions out of the state Inhibited. Symbolic analysis will generate the following spurious error reports for the guarding conditions shown in Figure 4.9:

    Inhibited --> Inhibited conflicts with Inhibited --> Not-Inhibited if
      (Own_Tracked_Alt() - Ground_Level()) > 1200    : T ;
      Own_Tracked_Alt() <= (1200 + Ground_Level())   : T ;

    No transition out of Inhibited is satisfied if
      (Own_Tracked_Alt() - Ground_Level()) > 1200    : F ;
      Own_Tracked_Alt() <= (1200 + Ground_Level())   : F ;

Figure 5.1 shows an example of the domain axioms that can be added to the analysis process to eliminate the spurious errors reported above. The first domain axiom, to eliminate the spurious consistency report, says that the two predicates cannot both be TRUE at the same time. The second domain axiom, to eliminate the spurious completeness report, says that the predicates cannot both be FALSE and that one of them has to be TRUE.

Domain Axiom to eliminate consistency spurious error:
  NOT [((Own_Tracked_Alt() - Ground_Level()) > 1200) AND
       (Own_Tracked_Alt() <= (1200 + Ground_Level()))]

Domain Axiom to eliminate completeness spurious error:
  ((Own_Tracked_Alt() - Ground_Level()) > 1200) XOR
  (Own_Tracked_Alt() <= (1200 + Ground_Level()))

Figure 5.1: Domain Axioms associated with spurious errors of the state Inhibited.

Recall that the goal is to eliminate as many spurious errors as possible from the analysis output so the analyst can easily identify and correct the true errors in the specification. A single undetected contradiction between two or more predicates may be responsible for causing many spurious error reports. The reason for this is that there are many satisfying permutations that exist among the predicates comprising the conjuncts. Therefore, the undetected contradiction causing the spurious error reports may occur in thousands or millions of permutations in the analysis output. For example, when we used BDD analysis on two of the guarding conditions from the TCAS II specification, the result BDD contained 4,909,890 satisfying paths (paths leading to TRUE).
The predicates Domain Axiom to eliminate consistency spurious error: NOT [((Own_Tracked_Alt() — Ground_Level()) > 1200) AND (Own_Tracked_Alt() <= (1200 + Ground_Level()))1 Domain Axiom to eliminate completeness spurious error: ((Own_Tracked_Alt() - Ground_Level()) > 1200) XOR (Own_Tracked_Alt() <= (1200 + Ground_Level())) Figure 5.1: Domain Axioms associated with spurious errors of the state Inhibited. 112 represented by the first two nodes in the result BDD were always FALSE and always TRUE respec- tively. When we examined the TCAS II specification we saw that these two predicates could not be FALSE and TRUE at the same time. However, all 4, 909, 890 satisfying paths in the result BDD had the first node always FALSE and the second node always TRUE. Thus, the original guarding conditions were consistent, but the analysis reported almost five million inconsistencies, and all of the reported inconsistencies involved a single undetected contradiction between two predicates in the guarding conditions; the reported inconsistencies were all spurious error reports. We added the information about the contradictory predicates to the analysis process as a domain axiom and reran the BDD analysis (see Chapter 6, Section 6.1.1.1.2 for details). The analysis report from the sec- ond iteration with the domain axiom showed the guarding conditions consistent. Thus, finding the undetected contradictions causing the spurious errors, identifying the appropriate domain axioms associated with the undetected contradictions, and feeding these domain axioms back into the anal- ysis process can eliminate thousands or millions of spurious errors from the analysis output at one ' time. The problem is, “how can one find the undetected contradictions so that one can identify the missing information and create the relevant domain axioms to add to the analysis process?”. In par- ticular, how can one find undetected contradictions that are artifacts of the state machine structure or that are related to the environment in which the system operates; that is, contradictions related to the last class of spurious errors described in Chapter 3 (class 4 spurious errors). Class 4 spuri- ous errors and their associated domain axioms are more difficult to identify since identifying them often requires intimate and global knowledge of the state machine structure and intimate domain knowledge. The brute force way to identify domain axioms is to take each error reported and examine the specification to see if the reported error represents a true inconsistency (incompleteness), or 113 if there is some information in the original specification that precludes the reported error from occurring. This method can be difficult, error-prone, and time consuming since the error reports can be voluminous and the original specification complex. Therefore, it is beneficial to have a method that will aid in the process of identifying the necessary domain axioms. We describe such a method in the next section. 5.2 Identifying Domain Axioms The process we developed for identifying domain axioms uses the structure of the result BDD to identify predicates that may be indicative of domain axioms in the specification. Normally, the analyst may need to check many predicates to see if they are involved in the required domain axioms. With our approach, we can generally narrow the search space of predicates down to a small number. 
For example, in our large study (Chapter 6) we were able to narrow the search space from sixty or seventy predicates down to fewer than seven.

Figure 5.2 shows an overview of the data flows in our technique for identifying and using domain axioms. The discussion in this chapter assumes that the output from the first-run analysis was too large for the analyst to easily eliminate the spurious errors by hand. First, we perform symbolic analysis using BDDs with dynamic variable reordering enabled, and generate a BDD node profile using a routine in Long's BDD library (discussed in Chapter 4, Section 4.2.2). From the BDD node profile, we locate what we call indicator nodes. Indicator nodes indicate predicates that may be causing spurious errors in the analysis report, and so are used to assist in the process of finding domain axioms. This process is shown inside the Identify Missing Information circle in the figure. We call the predicates associated with the indicator nodes indicator predicates. Once we identify the indicator predicates, we locate the places in the original RSML specification where the indicator predicates appear and look for truths about the system that are related to the indicator predicates. Once we find these truths (domain axioms), we can use the AND/OR table output from the symbolic analysis to verify that the domain axioms are actually violated in the analysis report (Section 5.2.2). If we find domain axioms that are violated, we feed them back into the analysis process by adding them to the machine-readable RSML specification or to the PVS specification, and re-run the analysis. If the output size is manageable on the second iteration, or no domain axioms are found, then the AND/OR table output or the PVS output (whichever is considered more readable) is used to manually analyze the errors and generate the final error report. We now describe indicator nodes in more detail.

Figure 5.2: Overview of data flows for identifying and using domain axioms.

5.2.1 Indicator Nodes

Figure 5.3 shows an example of a BDD and its node profile. Recall (Chapter 3, Section 3.1.3.1) that a strict total order is imposed on the occurrence of the variables in a BDD as the graph is traversed from the root to the leaves: on a path from the root to the leaves, for any two nodes labeled x_i and x_j, if i < j then x_i must occur before x_j along any path [8]. The figure shows the nodes at each level; at each level in the BDD there may be zero or more node occurrences.

Figure 5.3: BDD structure and node profile example.

A node that does not appear along a particular path in a BDD is a don't care for that particular path; for example, variables x2 and x4 are don't cares in the highlighted path shown in Figure 5.3 (their values do not matter along this particular path). The BDD shown in the figure has nine potentially satisfying paths; that is, nine paths through the BDD lead to the terminal node T. Assume the BDD represents the conjunction of two consistent guarding conditions. Since the BDD did not reduce to FALSE, this means there are nine paths in the BDD that led to T when they should have led to F and thereby reduced the entire BDD to FALSE.
Essentially, there is some contradiction between the nodes along each of the nine satisfiable paths that has not been detected. The BDD will reduce to FALSE when the information related to the undetected contradictions is included in the analysis.

The BDDs we are working with are much larger than the simple BDD shown in the figure, and it is not feasible to look at the BDD itself to try to identify contradictions between predicates along the satisfying paths. We cannot easily visualize what a large BDD looks like; however, we can get an idea of the structure of the BDD by looking at the number of nodes that occur at each level. A BDD node profile, also shown in Figure 5.3, shows the node count for each level in the BDD. For example, the node profile shows a count of three for node x3, the node at level two in the BDD.

As discussed in Chapter 4, Section 4.2.2.1, Long's BDD package allows the variables to be dynamically reordered during BDD analysis, and the ordering of the variables can greatly affect the number of nodes required to denote a given function (Chapter 3, Section 3.1.3.1). The goal of dynamic variable reordering is to rearrange the BDD variable ordering in an attempt to reduce the number of nodes in the graph [5, 8]. The rearrangement attempts to move nodes to positions in the BDD so that the total number of nodes in the BDD is minimized. For example, if all paths pass through a particular node, the best position for that node would be the root, since of necessity all paths pass through the root node. Similarly, we know that the higher up in the BDD a node occurs, the more paths are likely to pass through it. In addition, if a node near the bottom of the BDD occurs only once, it is possible that this node is a convergence point through which many paths from higher up pass. Node x6 in Figure 5.3 is an example of a node that acts as a convergence point; seven of the nine paths pass through this node.

The structure of the larger BDDs we are investigating in this work tends to be deep (30-70 variables, which means 30-70 levels in the BDD) with many repeating sub-branches lower in the BDDs (Figure 5.4). The sub-branches are not actually replicated in the BDD with Long's BDD package; rather, one sub-branch exists, and all other branches that lead directly to the sub-branch are linked to the existing sub-branch. Figure 5.5 shows how all branches that lead to the same sub-branch point to a single instance of the repeating sub-branch; the repeating sub-branch is denoted by a 1 with a circle around it. The significance of the figures is that they demonstrate what the general structure of the BDDs for the conditions we are working with tends to look like. In addition, they show that for BDDs with this general structure, if an undetected contradiction exists between any of the nodes from a to d, identifying the contradictory predicates and augmenting the analysis with information about the contradictions will show the original expressions mutually exclusive.

Figure 5.4: Example of a deep BDD with a repeating sub-branch. The repeating sub-branch, denoted by the open circle, repeats six times in the BDD.

The method we developed for symbolic analysis with BDDs uses the procedures in Long's BDD package to reorder the nodes and to generate a BDD node profile for the result BDD.
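The two quantities used throughout this chapter, the per-level node counts of the profile and the number of satisfying paths, can be computed directly from a BDD. The following Python sketch assumes a simple dictionary encoding of a BDD; this encoding is an expository assumption, not the representation used by Long's package.

    from collections import Counter
    from functools import lru_cache

    # Each non-terminal node id maps to (variable level, low child, high child);
    # "T" and "F" are the terminal nodes, and node 0 is the root.
    bdd = {
        0: (0, 1, 2),
        1: (1, "F", 3),
        2: (1, 3, "T"),
        3: (2, "F", "T"),
    }

    def node_profile(bdd):
        # Number of nodes occurring at each variable level, in level order.
        counts = Counter(level for (level, _, _) in bdd.values())
        return [(f"x{level}", counts[level]) for level in sorted(counts)]

    @lru_cache(maxsize=None)
    def count_paths(node):
        # Number of paths from `node` to the terminal T (satisfying paths).
        if node in ("T", "F"):
            return 1 if node == "T" else 0
        _, low, high = bdd[node]
        return count_paths(low) + count_paths(high)

    for var, count in node_profile(bdd):
        print(f"{var}: {count} {'#' * count}")     # a tiny node profile
    print("satisfying paths:", count_paths(0))     # paths leading to TRUE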
We used the sift reordering algorithm in all the test cases reported in Chapter 6, but the analyst can select the window method if she desires; in the majority of cases, we found that the sift reordering algorithm provided better results than the window method. We found that nodes that occur along every path, or on a majority of paths near the top or bottom of the BDD, are indicative of predicates that may be involved in domain axioms in the specification. Recall that a single undetected contradiction may lead to many permutations of the spurious error reports (Section 5.1). Since these permutations often outnumber any real errors by orders of magnitude, the contradictory predicates involved in the permutations tend to percolate up near the top of the BDD or drop down near the bottom of the BDD, so that the nodes representing the predicates occur as few times as possible in the BDD, thus reducing the total number of nodes. We call these nodes that may be indicative of predicates involved in spurious error reports indicator nodes. The nodes are indicative in the sense that they point out (indicate) predicates that are likely to be the cause of spurious errors.

Figure 5.5: Example of a deep BDD with links to a repeating sub-branch. The repeating sub-branch is denoted by a 1 with a circle around it.

For example, consider Figure 5.6, which shows a typical graph generated during our analysis for consistency. In the figure, we see that all paths pass through the first three nodes of the graph. All three of these nodes would be considered indicator nodes; it is not important to know what the rest of the graph looks like. Assume that the guarding conditions being analyzed are consistent, so that the BDD that results from taking the conjunction of the guarding conditions should be a contradiction. However, as the figure shows, the resulting graph does not reduce to FALSE. Assume the number of errors reported is on the order of millions. In this case, the analyst assumes that most or all of the errors are spurious and so examines the BDD node profile to identify indicator nodes. The analyst identifies the first three nodes as indicator nodes and identifies the indicator predicates associated with the nodes.

Figure 5.6: Example graph showing indicator nodes.

The analyst then examines the specification to see if the indicator predicates are involved in any domain axioms that are violated in the result BDD. In this case, assume that the predicates associated with the first two nodes cannot both be true at the same time. The analyst adds the appropriate domain axiom ¬(x1 ∧ x2) to the model and reruns the analysis. With the appropriate domain axiom added to the analysis process, the analysis report correctly shows that the original guarding conditions are consistent.

It is trivial to pick out the indicator nodes in the BDD node profile. The node profile shows only the nodes that actually occur in the graph under consideration, it shows the nodes in the order in which they occur within the graph, and it shows the number of nodes that occur at each level in the graph. Therefore, to identify the indicator nodes, one simply examines the node profile and identifies those nodes near the top or bottom of the profile that show an occurrence count of one, or that show by their count that they occur along all or a majority of paths in the graph. Nodes that occur along every path or along a majority of paths can be determined from the node profile based on knowledge of the structure of a BDD.
Consider the BDD fragment shown in graph (1) of Figure 5.7. The BDD node profile for graph (1) is shown in Figure 5.8. BDDs are structured such that from each node one can traverse to the left or to the right only. Therefore, there are always two ways, and only two ways, to leave a node; one or both of the alternatives may lead to a terminal node or to another non-terminal node. In addition, the nodes are ordered in a BDD and must maintain that order throughout the graph. Therefore, if a node occurs twice in the graph (as indicated in the node profile) and is ordered such that it appears in the graph after the root, then it is necessarily the case that all paths from the root to the terminal nodes pass through both the root and the second node. Furthermore, if the third node from the root occurs four times while the second node occurs twice, then it is necessarily the case that the third node also occurs along every path in the BDD from the root to the terminal nodes (see Figure 5.7, graphs (1) and (2)). A similar argument exists for other patterns of indicator nodes.

Figure 5.7: Examples of indicator nodes near the top of the BDD.

There may be as many as tens or hundreds of predicates in a single result BDD. In our case, we had between 60 and 70 predicates in some of the result BDDs. The indicator nodes help the analyst to determine those predicates that are worth investigating further. Without some indication of which predicates to look for to find domain axioms, the analyst, in the worst case, may need to examine all combinations of the predicates.

                     Graph Number
                     (1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)
    Number of   x0    1    1    1    1    1    1    1    1
    Nodes       x1    2    2    1    1    1    1    1    1
    Per Level   x2    -    4    2    2    1    1    1    1

Figure 5.8: Node profiles of indicator nodes; a dash represents a don't care.

Manually looking for domain axioms among a large number of predicates can be difficult and time consuming. Our technique can narrow the field to a small number of predicates worth examining further. For the cases we looked at with 60 or 70 predicates, we were able to reduce the number of predicates to investigate further to under seven. In tests with the TCAS II specification, we have successfully applied this method on several pairs of large guarding conditions that were generating millions of error reports, all of which turned out to be spurious; i.e., the guarding conditions for the transitions were consistent in all cases. Figure 5.9 shows a portion of one of the node profiles that we analyzed. The predicates associated with the first four nodes are indicator predicates and were indicative of domain axioms in the specification. In this case, the first, second, third, fourth, seventh, and ninth nodes were all involved in the domain axioms whose absence resulted in over 18 million spurious errors. In general, the fewer the domain axioms whose absences are leading to spurious errors, the better.
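As a concrete illustration of reading a profile this way, the following Python sketch flags predicates near the top or bottom of a profile whose node counts are small. The profile values are patterned after the reordered profile discussed in Chapter 6; the band and count thresholds are illustrative assumptions, not fixed parameters of our tool.

    # A node profile as (predicate, node count) pairs, in BDD level order.
    profile = [
        ("Other_Alt_Reporting = cTrue", 1),
        ("Other_Air_Status IN_STATE State_Airborne", 1),
        ("Auto_SL IN_STATE ASL_2", 1),
        ("Current_Vertical_Separation() < ZT()", 63),
        ("Time_To_Co_Alt() < TVTHR()", 21),
        ("Other_Air_Status IN_STATE State_On_Ground", 1),
    ]

    def indicator_nodes(profile, band=3, max_count=2):
        # Flag predicates in the top or bottom `band` levels whose node count
        # is small enough that they plausibly lie on all or most paths.
        candidates = profile[:band] + profile[-band:]
        return [pred for pred, count in candidates if count <= max_count]

    print(indicator_nodes(profile))
    # -> the first three predicates and the last one, mirroring the case where
    #    the top nodes and the bottom node turn out to be the indicators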
In cases where the undetected contradictions causing the spurious errors are all related to a single domain axiom, the upper levels of the graph should contain the contradictory nodes only once, and in many cases the truth values should be the same in all satisfying paths. This is because if a single contradiction is causing all of the spurious errors, the contradictory predicates will contradict each other with the same truth values in every case. If there are multiple undetected contradictions, then the graph tends to be wider (i.e., multiple nodes at each level), since the different undetected contradictions will require different truth values and more predicates to represent the contradictions.

Figure 5.9: Partial node profile for consistency analysis of two transitions from a large avionics specification. (The profile lists the predicates with their per-level node counts, from Other_Air_Status IN_STATE State_On_Ground and the Alt_Reporting in-state predicates at the top down to PT_Timer IN_STATE PT_0 at the bottom; 480 nodes in total.)

Similarly, indicator nodes (and therefore domain axioms) may be more difficult to find if true errors are occurring on a large number of satisfying paths in the BDD. The paths containing the true errors may outnumber the paths containing the spurious errors, and therefore the predicates related to the true errors will tend to percolate up near the top of the BDD. In general, this does not happen, since the initial intent is to create a complete and consistent specification.
However, for inconsistent and incomplete guarding conditions whose paths containing spurious errors originally outnumber the paths containing true errors by orders of magnitude, once the domain axioms are found and added to the analysis process, the number of permutations of paths containing the true errors can still be unmanageable (for example, on the order of millions). Fortunately, PVS analysis with the proper domain axioms added can generally reduce the error reports to a manageable size, since many of the satisfying path permutations may be collapsed into fewer satisfying paths with the proper reasoning capabilities, such as the decision procedures included in PVS. For example, there may be several paths in the result BDD that PVS's decision procedures could collapse into a single path, whereas with the BDD analysis all of the paths would appear as individual columns in the AND/OR table representation of the result BDD.

Ideally, we want to apply the reordering algorithm only to the result BDD. If reordering is applied to multiple BDDs at one time, the algorithm attempts to find the best overall ordering for all of the BDDs. This may cause non-indicator nodes in the result BDD that occur in the majority of paths in the other BDDs to take an indicator node position in the result BDD, making it more difficult to identify the true indicator nodes in the result BDD. Unfortunately, the current tool reorders all existing BDDs and not just the result BDD. This problem can be corrected, and will be corrected in a future version of the tool; however, we do not know for sure what effect this change will have, and we will investigate the effect once the revision is made. So far, reordering all the BDDs has not caused any problems in identifying the proper indicator nodes and the domain axioms that need to be added to the analysis process.

The indicator nodes method helps us to identify predicates that may be involved in contradictions; however, it does not help us determine which combination of truth assignments is making the predicates contradictory or which domain axioms are needed to eliminate the spurious errors. To help determine this information, we can take samples of the result BDD and present them to the analyst as an AND/OR table.

5.2.2 Sampling

A large BDD may contain millions of satisfying paths. To augment the information obtained from the node profile, the analyst may request samples of several paths from various portions of the BDD. The current tool allows an analyst to take four samples of any desired size (a different sample size can be chosen for each of the four samples) from the BDD. The samples represent paths leading to TRUE (satisfiable paths), and are constructed through a depth-first traversal to the right in the BDD, an alternating right-left traversal down the middle portion of the BDD, an alternating left-right traversal down the middle portion of the BDD, and a depth-first traversal to the left in the BDD (Figure 5.10); a sketch of the two depth-first variants appears below. The particular samples chosen were made in an attempt to provide the analyst with a good idea of what the different parts of the BDD look like. Currently, the analyst cannot change the sampling algorithms or the locations sampled in the BDD. The combined samples are presented to the analyst as an AND/OR table.
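The following Python sketch illustrates the two depth-first sampling variants over the same assumed dictionary encoding of a BDD used earlier (the alternating middle traversals are omitted). A biased depth-first walk extracts one satisfying path from the right or the left side of the BDD.

    bdd = {
        0: (0, 1, 2),
        1: (1, "F", 3),
        2: (1, 3, "T"),
        3: (2, "F", "T"),
    }

    def sample_path(node, prefer_high=True):
        # Walk toward terminal T, taking the preferred branch whenever it can
        # still reach T; returns the path as (variable, truth value) pairs,
        # or None if no satisfying path exists below `node`.
        if node == "T":
            return []
        if node == "F":
            return None
        level, low, high = bdd[node]
        order = [(high, True), (low, False)] if prefer_high else [(low, False), (high, True)]
        for child, value in order:
            rest = sample_path(child, prefer_high)
            if rest is not None:
                return [(f"x{level}", value)] + rest
        return None

    print(sample_path(0, prefer_high=True))    # rightmost satisfying path
    print(sample_path(0, prefer_high=False))   # leftmost satisfying path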
After we identify potential domain axioms from the indicator nodes as described in the previous section, we want to make sure the domain axioms are actually violated in the result BDD. If the domain axioms are not violated, then adding them to the analysis process will only increase the number of satisfying paths, since more satisfiable expressions are being added to the BDD.

Figure 5.10: Outline of a BDD showing the sampling algorithms and locations of the samples in the BDD.

The samples allow the analyst to determine if the domain axioms are violated, since the samples show the truth values of the indicator predicates. Once the analyst determines that the domain axioms are actually violated in a significant number of paths in the BDD, the domain axioms can be introduced into the analysis process and the analysis rerun.

Consider the samples shown in Figure 5.11. These samples are from the same result BDD that had the node profile shown in Figure 5.9. The columns in the figure are labeled from 1-38. The predicates labeled (1)-(6) were all involved in the domain axioms that needed to be added to the analysis process to eliminate the spurious errors. The original sampling contained 400 samples (100 samples from the right side of the BDD, 100 from the left side of the BDD, 100 from the right middle part of the BDD, and 100 from the left middle part of the BDD). The repeating patterns that can be seen in the samples occurred many times; for each repeating pattern, only two samples are shown. The domain axioms identified from the indicator nodes (nodes (1)-(4) in the figure) are:

1. The Alt_Reporting states are mutually exclusive,
2. The Other_Air_Status states are mutually exclusive, and
3. The structure of the state machine is such that whenever Other_Air_Status is in state On_Ground, Other_Alt_Reporting = cTrue cannot be FALSE.

The samples show that domain axiom (1) is violated in columns 1-14, 23-26, and 31-38. Domain axiom (2) was violated in columns 1, 2, 5, 6, 9, 10, 13-16, 19, 20, 25, 26, 29-32, 37, and 38. Domain axiom (3) was violated in columns 3, 4, 7, 8, 11, 12, 17, 18, and 21-24.

Figure 5.11: Partial samples of a result BDD. (Consistency analysis result: 18,457,091 satisfying paths found. The sampled predicates are (1) Other_Air_Status IN_STATE State_On_Ground, (2) Alt_Reporting IN_STATE Lost, (3) Alt_Reporting IN_STATE No, (4) Alt_Reporting IN_STATE Yes, PREV_RA_Inhibit = cTrue, Other_Tracked_Range() < PROXR, (5) Other_Alt_Reporting = cTrue, Current_Vertical_Separation() < PROXA, and (6) Other_Air_Status IN_STATE State_Airborne; the figure shows 38 sample columns of truth values for these predicates.)

The contradictions involving the Alt_Reporting and Other_Air_Status enumerated-type predicates are trivial to identify and can be easily eliminated by enabling the decision procedures, but eliminating the contradictions involving predicates (1) and (5) requires the domain axiom information (domain axiom (3)) to be added to the specification. Domain axiom (3) is an example of a domain axiom for class 4 spurious errors, and is not easy to find without some indication of what predicates are worth investigating; the sketch below shows how such a violation check over the samples can be mechanized.
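A minimal Python sketch, assuming each sample column is stored as a mapping from predicate name to truth value, checks domain axiom (1), the mutual exclusion of the Alt_Reporting in-state predicates, against two illustrative columns:

    ALT_STATES = ["Alt_Reporting IN_STATE Lost",
                  "Alt_Reporting IN_STATE No",
                  "Alt_Reporting IN_STATE Yes"]

    def violates_axiom_1(column):
        # Mutual exclusion fails if two or more in-state predicates are TRUE.
        return sum(column[p] for p in ALT_STATES) > 1

    samples = [
        {"Alt_Reporting IN_STATE Lost": True,
         "Alt_Reporting IN_STATE No": True,
         "Alt_Reporting IN_STATE Yes": True},    # like columns 1-14: violated
        {"Alt_Reporting IN_STATE Lost": False,
         "Alt_Reporting IN_STATE No": False,
         "Alt_Reporting IN_STATE Yes": True},    # a column satisfying the axiom
    ]

    violated = [i for i, col in enumerate(samples, 1) if violates_axiom_1(col)]
    print("domain axiom (1) violated in sample columns:", violated)   # -> [1]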
Since the patterns shown in the samples repeated in all 400 samples, all 400 samples represented spurious errors. Once the domain axiom is added into the analysis and the decision procedures enabled, the BDD reduces from over 18 million satisfying paths to zero satisfying paths (a contradiction) on the next iteration of the symbolic analysis.

Figure 5.12: Partial graph and samples associated with the first three levels in the graph. (In the samples, the first node alternates TTTTFFFF while x2 and x3 are FALSE in every column.)

If the indicator nodes domain axiom identification technique fails, the analyst can look for repeating patterns in the AND/OR table that may be indicative of domain axioms in the specification. One repeating pattern that may be indicative of domain axioms is a pattern that shows a particular predicate maintains the same truth value a majority of times (for example, in all samples). For example, consider the partial graph shown in Figure 5.12. Assume that for some reason the analyst is unable to identify any domain axioms using the indicator nodes method. The analyst examines the samples (also shown in the figure) to see if she can find any repeating patterns. She sees that x2 and x3 are always FALSE in the samples, checks the specification with regard to the predicates associated with the two nodes, and finds that the two predicates cannot both be FALSE at the same time. She adds this information to the analysis process and reruns the analysis. The results show that the original conjunction (disjunction) was mutually exclusive (satisfiable).

The example shown in Figure 5.12 is trivial, and in actuality the indicator nodes approach would work here. However, let x2 and x3 now be arbitrary nodes x_i and x_j in the graph, such that they are not considered indicator nodes when the BDD node profile is examined; i.e., they occur near the middle levels of the BDD (for example, levels 30 and 31 in a BDD with 65 levels). Assume that there are some domain axioms associated with a few of the nodes that do represent indicator nodes, but the analyst finds that there are still a large number of error reports after the identified domain axioms have been added to the analysis and the analysis rerun. The analyst can then look at the samples from either the initial run or from the current run, and will find that the nodes x_i and x_j have the same truth values in all samples. The analyst will then examine the specification to see if the predicates associated with the nodes are involved in a domain axiom that needs to be added to the analysis process. When the appropriate domain axiom is found, added to the analysis, and the analysis rerun, the results show that the original conjunctive (disjunctive) expression was mutually exclusive (satisfiable).

Note that three iterations are not really required here. Since the analyst is looking at the samples to see if the domain axioms related to the indicator predicates are actually violated, she can easily scan the samples for predicates that maintain the same truth value in all or a majority of the samples. Once she locates such predicates, she can check the specification to see if there are any domain axioms related to the predicates identified via the samplings. If all domain axioms are found from the initial run results, only two iterations of the analysis are required. We found that in a majority of our test runs, only two iterations were required.
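The fallback pattern search is equally mechanical. In the following Python sketch (with sample rows assumed to be lists of truth values, one entry per column), predicates whose truth value never varies are flagged as candidates for domain axioms:

    # One row of truth values per predicate, one entry per sample column.
    samples = {
        "x1": [True, True, True, True, False, False, False, False],
        "x2": [False] * 8,    # constant FALSE in every sample
        "x3": [False] * 8,    # constant FALSE in every sample
    }

    constant = {pred: row[0] for pred, row in samples.items() if len(set(row)) == 1}
    print(constant)   # -> {'x2': False, 'x3': False}: worth checking against
                      #    the specification for a violated domain axiom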
5.3 Summary

In this chapter we defined the term domain axiom as used in the context of this research and described a method to assist the analyst in identifying domain axioms. The domain axiom identification process uses the structure of the BDD that results from the initial symbolic analysis run to identify what we call indicator nodes. The indicator nodes are nodes that may be indicative of domain axioms in the specification. Both indicator nodes and sampling may be used to help identify domain axioms, and sampling may be used to verify that the domain axioms are actually violated in the analysis output.

A specification can consist of large numbers of predicates, and any of these predicates may have constraints placed on the truth values they can hold in relation to the other predicates. Without some indication of what predicates to look at, all possible combinations of predicates may have to be examined. Therefore, it is important to be able to narrow the search for domain axioms by pointing out predicates that are worth investigating. This is especially important for predicates involved in domain axioms related to the structure of the state machine or related to the environment in which the system will operate. When the appropriate knowledge about domain axioms related to the structure of the state machine or the operating environment is missing from the specification, class 4 spurious errors result. Class 4 spurious errors are the most difficult to identify, since locating them requires intimate knowledge of the system and the specification. The analysis process we developed and described in this chapter helps the analyst identify the domain axioms that are missing from the analysis process by pointing out predicates that may be involved in the domain axioms.

To test our analysis process and domain axiom identification technique, we ran tests on several large expressions from a real-world avionics specification. Our results were quite promising, and are reported in the next chapter.

Chapter 6

Application of Method and Experimental Results

We applied our iterative and integrative analysis and our domain axiom identification processes to the TCAS (Traffic alert and Collision Avoidance System) II requirements specification. TCAS II is a complex avionics system that is required on all commercial aircraft carrying 30 or more passengers through US airspace. The system monitors the airspace around the aircraft for other aircraft that may be on a collision course, and takes evasive action if a collision is imminent. Figure 6.1 shows a high-level view of the TCAS controller as specified in RSML.

Figure 6.1: RSML specification of TCAS-Controller.

The entire specification for TCAS II (version 6.04A) comprises a little over 400 pages [11]. The requirements consist of two main parts, Own-Aircraft and Other-Aircraft [12]. About 30% of the document is devoted to Own-Aircraft, while the remaining 70% is devoted to Other-Aircraft. Own-Aircraft and Other-Aircraft are two of three parallel states in the state Fully-Operational, which is one of two states in the state Power-On in the TCAS-Controller (Figure 6.1).

We concentrated our efforts on several transitions in the Other-Aircraft portion of the requirements specification (Figure 6.2). We chose the transitions because of the complexity of their
guarding conditions and because it is critical for the specification of these guarding conditions to be correct. Specifically, our analysis efforts focused on one of the most complex portions of the TCAS II requirements specification, the state Intruder-Status. The state Other-Aircraft consists of four parallel states, one of which is the state Track-Status (shown in bold in Figure 6.2). One of the three states comprising Track-Status is the state Tracked (also shown in bold in the figure). The state Intruder-Status is one of nine parallel states in the state Tracked (Figure 6.3).

Figure 6.2: RSML partial specification of the state Other-Aircraft.

Figure 6.3: RSML partial specification of the state Tracked.

Figure 6.4: RSML specification of the state Intruder-Status.

The state Other-Aircraft tracks up to thirty aircraft¹ in the vicinity of the aircraft containing the TCAS controller that is monitoring the airspace around the plane. The state Intruder-Status (Figure 6.4) is responsible for upgrading or downgrading the threatening status of intruder aircraft (known as other aircraft) within the airspace of the monitoring aircraft (known as own aircraft); that is, Intruder-Status monitors the status of an intruder aircraft in relation to the aircraft that is monitoring the airspace for collisions with other traffic in the near vicinity. Intruder-Status contains three atomic states (Other-Traffic, Proximate-Traffic, and Potential-Threat), and one parallel state, Threat. A transition from Proximate-Traffic, Potential-Threat, or Threat to the state Other-Traffic means that the intruder aircraft is no longer in the airspace close to own aircraft but is still being monitored; in other words, the status of the intruder in relation to the monitoring aircraft has been downgraded. A transition from either Potential-Threat or Threat to Proximate-Traffic signifies that the intruder aircraft is still in the near vicinity and is closer than those other aircraft placed in the Other-Traffic category. When an intruder aircraft is in state Threat, own aircraft is directed to take evasive action to avoid a potential collision. Going from state Threat to any of the other three states represents a downgrading of the status of the intruder. Likewise, going from Potential-Threat to either Other-Traffic or to Proximate-Traffic also represents a downgraded intruder.

¹The variable i in brackets next to the state name in Figure 6.2 specifies the number of the other aircraft being monitored; thus, i ranges from 0 to 30.
Our consistency analyses of the state Intruder-Status included all transitions out of the three atomic states and out of the parallel state to one of the three atomic states. It is important to ensure that the guarding conditions on these transitions are consistent, since we do not want to be able to go from a state of heightened alert (i.e., Threat or Potential-Threat) to a state signifying no threat (Other-Traffic or Proximate-Traffic) at the same time we could stay in or move to the highest alert state, Threat. Such an inconsistency could have serious consequences. For example, if an inconsistency exists between the guarding conditions of the transitions Threat to Potential-Threat and Threat to Other-Traffic, the own aircraft might non-deterministically enter state Other-Traffic when the intruder aircraft is still on a collision course with own aircraft. Our analysis helps detect such inconsistencies so they can be eliminated. Section 6.1.1 discusses the results we obtained from applying our analysis technique to the transitions described above.

In addition, we examined transitions within the states Auto-SL and Effective-SL. Auto-SL and Effective-SL are parallel states within the state Own-Aircraft (Figure 6.5). Both of these states are responsible for determining the sensitivity level that own aircraft uses to notify a pilot of a potential collision. The higher the sensitivity level, the earlier an alert will be sent to the pilot if an intruder aircraft is on a potential collision course. Therefore, it is important to ensure that a transition cannot non-deterministically occur from a lower to a higher sensitivity level at the same time it can occur to an even lower sensitivity level. For example, if the sensitivity level is currently set to 4, it should not be possible to transition from level 4 to level 2 and to level 7 at the same time. Such a non-deterministic transition could be hazardous if there is another aircraft approaching on a collision course.

Figure 6.5: RSML partial specification of the state Own-Aircraft.

We applied both our consistency and completeness analysis methods to the specifications of the guarding conditions for the transitions within the states Auto-SL and Effective-SL. We do not describe the consistency analysis results in this document, since the results were comparable, but on a smaller scale, to the results we obtained for the more complex guarding conditions related to the state Intruder-Status. We describe in detail (Section 6.2) the completeness analysis of the specification of the guarding conditions for transitions in the state Auto-SL, and summarize the completeness analysis results we obtained when we checked the Effective-SL guarding conditions for completeness.

Whenever we applied PVS analysis in our test cases, we used PVS version 2.1 (patch level 2.399). Each of the test cases we selected demonstrated a particular scenario of using our analysis method and demonstrated the effectiveness of our method when applied to real-world specifications. In Section 6.1 we focus on consistency analysis, and in Section 6.2 we focus on completeness analysis. In Section 6.3 we provide a summary of what each of our test cases demonstrated about our analysis technique and provide some heuristics we learned from our experiments. The heuristics help guide the analyst in using our technique efficiently and effectively.
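The pairwise check performed throughout the next section can be pictured with a toy model in Python. The guards below are invented two-variable abstractions, not the actual TCAS II conditions; they serve only to show the form of the check, in which every pair of transitions out of the same state is tested for a common satisfying assignment.

    from itertools import combinations, product

    # Hypothetical guards for three transitions out of state Threat.
    guards = {
        "Threat -> Other-Traffic":     lambda p, q: not p and not q,
        "Threat -> Proximate-Traffic": lambda p, q: not p and q,
        "Threat -> Potential-Threat":  lambda p, q: p and not q,
    }

    for (n1, g1), (n2, g2) in combinations(guards.items(), 2):
        # Any assignment satisfying both guards is a (possibly spurious)
        # inconsistency: the state machine could fire either transition.
        clash = [pq for pq in product([True, False], repeat=2)
                 if g1(*pq) and g2(*pq)]
        verdict = f"INCONSISTENT at {clash}" if clash else "mutually exclusive"
        print(f"{n1}  vs  {n2}: {verdict}")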
6.1 Consistency Analysis

In this section, we describe how we used our analysis technique to check pairs of guarding conditions for mutual exclusion. Section 6.1.1 describes in detail the application of our method and the results obtained when we applied our method to some of the most complex guarding conditions in the TCAS II version 6.04A requirements specification: transitions within the state Intruder-Status. In Section 6.1.2 we summarize the results we obtained when we applied our method to the same transitions described in Section 6.1.1, but for the latest version of the TCAS II requirements specification, version 7.0.

In all cases reported in this section, we applied symbolic analysis using BDDs with macro expansion, sampling, and path count enabled. For the symbolic analyses with dynamic variable reordering enabled, we used the sift reordering procedure from Long's BDD library (Chapter 4, Section 4.2.2). In most cases, we reported 400 samples for each consistency check: 100 samples from each of the four sampling algorithms we discussed earlier (Chapter 5, Section 5.2.2).

6.1.1 Transitions in State Intruder-Status - TCAS II Version 6.04A

We did not apply completeness analysis to the guarding conditions associated with the transitions described in this section. This is because the original TCAS II specification was not developed to be complete, and since these guarding conditions are some of the most complex in the specification, there would be too many true incompletenesses to demonstrate anything useful about our technique. Note that the guarding conditions we analyzed in this section could not be checked in the past for consistency because analyzing them resulted in too many spurious errors in the analysis reports; i.e., the analysis reports were too large to be useful to the analyst in identifying any true inconsistencies that might exist in the specification. We verify and demonstrate the problem with spurious errors in our discussions of the results we obtained from a first-run analysis of the guarding conditions.

For each of the results reported in this section (with the exception of Section 6.1.1.4), we include the number of satisfying paths found in the result BDDs, the BDD node profile of the result BDDs, the indicator nodes identified, selected samples from the result BDDs, the actions taken as a result of examining the analysis outputs, and the results achieved as a consequence of these actions. When the pairwise conjunction of two guarding conditions did not reduce to a contradiction, we used our domain axiom identification process to identify information about the system that was missing from the analysis process and causing spurious errors in the analysis output. Once we identified the relevant domain axioms, we added them back into the analysis process and reran the symbolic analysis using BDDs. At the first use of a domain axiom, we list and discuss the domain axiom, and describe how we identified it with our domain axiom identification technique. If the results of the second-run symbolic analysis showed the guarding conditions consistent, then we were finished. If the results of the second-run symbolic analysis did not reduce to FALSE and the output was small enough to feed into PVS, we converted the output from the symbolic analysis into a PVS specification and applied our proof strategies to the reduced output, so that PVS's reasoning capabilities and decision procedures could make further reductions that the symbolic analysis component was not capable of making. If the second-run analysis using BDDs resulted in a report that was still too unwieldy to be useful to the analyst, and too large to convert to a PVS specification, we added the relevant domain axioms to the PVS specification of the original guarding conditions and invoked our proof strategies on those augmented PVS specifications. In actuality, we applied both symbolic and PVS analysis techniques in all cases, so we could compare, verify, and draw appropriate conclusions from the results about our iterative and integrative analysis method and about our domain axiom identification process. The overall decision sequence is sketched below.
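A compact restatement of that decision sequence as Python pseudocode follows. Here run_bdd, find_domain_axioms, and the two PVS runners are hypothetical stand-ins for the symbolic component, the indicator-node and sampling step, and the PVS component; pvs_limit is an assumed threshold rather than a parameter of our tool.

    def check_consistency(guard_a, guard_b, run_bdd, find_domain_axioms,
                          run_pvs_on_report, run_pvs_on_originals,
                          pvs_limit=1000, max_iterations=5):
        axioms, report = [], None
        for _ in range(max_iterations):
            report = run_bdd(guard_a, guard_b, axioms)   # satisfying paths of a AND b
            if not report:
                return "consistent"                      # conjunction reduced to FALSE
            new_axioms = find_domain_axioms(report)      # indicator nodes + sampling
            if not new_axioms:
                break                                    # nothing left to feed back
            axioms += new_axioms                         # second (third, ...) iteration
        if len(report) <= pvs_limit:
            # Small enough: convert the reduced output to PVS for further reduction.
            return run_pvs_on_report(report, axioms)
        # Too unwieldy: prove the original guards in PVS, augmented with the axioms.
        return run_pvs_on_originals(guard_a, guard_b, axioms)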
6.1.1.1 Transitions out of State Proximate-Traffic

Figure 6.4 showed that there are three transitions out of the state Proximate-Traffic:

1. Proximate-Traffic to Other-Traffic,
2. Proximate-Traffic to Potential-Threat, and
3. Proximate-Traffic to Threat.

We ran our analysis on each pair of guarding conditions individually; note that there are three conjunctive pairs that need to be checked for mutual exclusion. We describe in detail both the results of our analysis for each of the three pairs of guarding conditions and the process we used to achieve those results. Two of the pairs of guarding conditions proved to be consistent, while the third pair proved to be inconsistent.

6.1.1.1.1 Proximate-Traffic to Threat and Proximate-Traffic to Other-Traffic

Figures 6.6 and 6.7 show the guarding conditions for these two transitions. The predicates subscripted with an m represent macro predicates.

Figure 6.6: Transitions from Other-Traffic, Proximate-Traffic, and Potential-Threat to Threat. (Transition(s): Other-Traffic → Threat, Proximate-Traffic → Threat, Potential-Threat → Threat. Location: Other-Aircraft ▷ Intruder-Status. Trigger event: Air-Status-Evaluated-Event. Condition: Threat-Condition_m. Output action: Intruder-Status-Evaluated-Event.)

Figure 6.7: Transition from Proximate-Traffic to Other-Traffic. (Transition(s): Proximate-Traffic → Other-Traffic. Location: Other-Aircraft ▷ Intruder-Status. Trigger event: Air-Status-Evaluated-Event. Condition: an OR of AND/OR table columns over the transition's predicates. Output action: Intruder-Status-Evaluated-Event.)

The Threat-Condition macro is shown in Figure 6.8. As the Threat-Condition macro shows, a macro may contain other macros; thus, there may be several levels of indirection within the guarding conditions. The Threat-Condition macro, when fully expanded, is one of the most complex macros in the TCAS II requirements specification. The macros included in the Threat-Condition macro range in size from the smallest, a one-column, two-row table, to the largest, a six-column, ten-row table; the expansion of these macro predicates is included in Appendix A.
Figure 6.8: The Threat-Condition macro. (An OR of AND/OR table columns over the predicates RA-Inhibit_m, Other-Air-Status in state Airborne, Threat-Range-Test_m, Threat-Alt-Test_m, Reply-Invalid-Test_m, TCAS-TCAS-Crossing-Test_m, Level-Wait in state S3, Alt-Separation-Test_m, and Low-Firmness-Separation-Test_m.)

To demonstrate the usefulness of our technique, first consider the results shown in Figure 6.9, which we obtained from a first-run symbolic analysis using BDDs without variable reordering enabled. The results show the number of satisfying paths in the result BDD, the BDD node profile (the number of nodes at each level in the result BDD), and the total number of nodes in the result BDD. The results in the figure reflect that there were 3,986,640 satisfiable paths and 1,218 total nodes in the BDD. The 3,986,640 satisfying paths in the BDD imply that there are 3,986,640 inconsistencies in the specification of the guarding conditions. Since the specification was originally designed to be consistent, the analyst can safely conclude that most of the reported inconsistencies are spurious and that there is some information missing from the analysis process that is leading to the spurious error reports. Note that for this particular case there were a total of 56 predicates in the BDD that resulted from conjoining the guarding conditions on the transitions.² Any combination of these 56 predicates may be involved in the undetected contradictions that are leading to the spurious error reports.

²The total number of predicates was obtained by simply counting the number of levels (predicates) that appear in the node profile of the full result BDD; the full result BDD is not included here due to space limitations.
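The scale involved explains why unaided inspection fails; two lines of Python make the point for the 56 predicates of this case:

    # With 56 predicates, exhaustive reasoning over truth assignments and even
    # pairwise inspection of predicates is already substantial:
    print(2 ** 56)         # 72057594037927936 possible truth assignments
    print(56 * 55 // 2)    # 1540 predicate pairs to inspect for pairwise axioms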
141 Number of Satisfying Paths found: 3986640 Inhibit_Biased_Climb() > Down_Separation(): 1 ( Own_Tracked_A1t() - Other_TrackeddAlt() ) >= MINSEP: 1 Up_Separation() >= ALIM(): 1 ( Own_Tracked_A1t() - Other_TrackeddAlt() ) <= n_MINSEP: 1 Down_Separation() >= ALIM(): 1 OtherdAir_Status IN_STATE State_Airborne: 4 # Other_A1t_Reporting = cTrue: 4 # (Other_Tracked_Range()*Other_Tracked_Range_Rate())<=H1TA(): 4 # Other_Tracked_Range_Rate() > RDTHRTA: 8 ## Other_Tracked_Rangel) <= DMODTA(): 4 # TAURTA() < TRTHRTA(): 4 # PREV1_Other_Range_Valid = cTrue: 16 #### PREV2_Other_Range_Valid = cTrue: 16 #### Other_Range_Valid = cTrue: 32 ######## Other_Capabillity = TA_RA: 48 ############ Modified_Tau_Capped() < TFRTHR(): 48 ############ Down_Separation() <= SENSEFIRM(): 48 ############ Up_Separation() <= SENSEFIRM(): 48 ############ Own_Tracked_Alt() < Other_Tracked_A1t(): 48 ############ 0wn_Tracked_Alt() > Other_Tracked_A1t(): 64 ############## Current_Vertical_Separation() > LOWFIRMRZ: 24 ###### Auto_SL IN_STATE ASL_2: 36 ######### Mode_Se1ector = TA_On1y: 36 ######### RA_Inhibit_From_Ground = cTrue: 36 ”sure“ Other_Tracked_Range_Rate() > Zero: 72 ################ Threat_Alt_VMD() < ZT(): 72 ################ ADOT() >= n_ZDTHR(): 72 ################ Time_To_CodA1t() < TVTHR(): 72 ################ Time_To_Co_Alt() < True_Tau_Capped(): 36 ######### Tau_Rising IN_STATE PLUS_3: 36 fi######## Intruder_Status IN_STATE Threat: 36 ######### Other_Tracked_Range() >= DMOD(): 36 ######### Other_Tracked_Range() > DMOD(): 36 #######t# Modified_Tau_Capped() < TRTHR(): 36 ######### Other_Tracked_Range() <= RMAX: 36 ######### (Other_Tracked_Range_Rate() * Other_Tracked_Range())>H1(): 18 #### Other_VRC = No_Intent: 10 #8 Intent_Received IN_STATE IR_No: 10 ## Own_Tracked_A1t_Rate() <= OLEV: 8 ## Other_Tracked_Relative_Alt() > MINSEP: 4 # Other_Tracked_Relative_A1t() < n_MINSEP: 4 # Own_Tracked_Alt_Rate() >= Zero: 4 # Other_Tracked_A1t_Rate() >= Zero: 8 ## Other_Tracked_A1t_Rate() <= OLEV: 4 # Current_Vertica1_Separation() > MAXALTDIFF: 6 # Current_Vertica1_Separation() > MAXALTDIFFZ: 4 # Modified_Tau_Capped() < FRTHR(): 6 # Leve1_Wait IN_STATE S3: 2 A1t_Reporting IN_STATE Lost: 4 # Alt_Reporting IN_STATE No: 2 Other_Bearing_Valid = cTrue: 1 Other_Air_Status IN_STATE State_On_Ground: 1 Total: 1218 Figure 6.9: Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Other- Traffic without variable reordering. 142 above but with variable reordering enabled during the symbolic analysis. From the BDD node profile, the first six nodes and the last node were deemed potential indicator nodes. The BDD node profile and the samples reveal that all satisfying paths in the BDD pass through the first six nodes, and the first five nodes maintain the same truth value on every path (this was determined by examining the truth values of the 400 samples taken). Selected samples are shown in Figure 6.11. We analyzed the indicator nodes and samples and found several contradictions between the in-state predicate involving Other-Air-Status. Further analysis of the results from this run revealed that all of the contradictions involved the first two nodes in the result BDD and the last node in the result BDD. The contradiction related to the first node Other-Alt-Reporting was not detected because some relevant structural information about the system was missing from the analysis process; thus, the spurious error reports resulting from the above undetected contradiction were class four spurious errors. 
We used our indicator nodes technique to identify the following domain axiom related to the first node, and used the samples to verify that the identified domain .axiom was actually violated in a majority of samples of the result BDD: Other-Alt—Reporting = True if Alt-Reporting IN-STATE Yes. So, how did we find this domain axiom using our indicator nodes method? In the BDD node profile in Figure 6.10, where variable reordering was enabled, the first indicator predicate is Other-Alt-Reporting = True. This provides a place for‘the analyst to start looking for contradictions involving this predicate and some other predicate(s) that may be causing spurious errors in the analysis report. In this case, we would look in the index of the specification for all references to the variable Other-Alt-Reporting. In version 6.04A of the specification there are fifteen references to this variable. Several of them can be easily eliminated from consideration for various reasons: (1) they may occur in guarding conditions for transitions not involved in the current analysis, (2) they may be references to the definition of the variable, (3) they may be input 143 Number of Satisfying Paths found: 4254600 Other_A1t_Reporting = cTrue: Other_Air_Status IN_STATE State_Airborne: Auto_SL IN_STATE ASL_2: Mode_Selector = TA_On1y: RA_Inhibit_From_Ground = cTrue: Own_Tracked_A1t() > Other_Tracked_A1t(): Own_Tracked_Alt() < Other_Tracked_A1t(): Inhibit_Biased_Climb() > Down_Separation(): ( Own_TrackeddAlt() - Other_Tracked_A1t(l ) >= MINSEP: Up_Separation() >= ALIM(): ( Own_TrackeddAlt() — Other_TrackeddAlt() ) <= n_MINSEP: Down_Separation() >= ALIM(): Current_Vertical_Separation() > LOWFIRMRZ: Up_Separation() <= SENSEFIRM(): Down_Separation() <= SENSEFIRM(): «nasalosalm)p a N N N N>N N w w NlH H H H H H Other_Range_Va1id = cTrue: # Other_Bearing_Valid = cTrue: # A1t_Reporting IN_STATE Lost: ## Modified_Tau_Capped() < TFRTHR(): 1 #### Level_Wait IN_STATE S3: # Other_Tracked_RelativedAlt() < n_MINSEP: # Other_Tracked_Re1ative_A1t() > MINSEP: # Other_Capability = TA_RA: 1 #### A1t_Reporting IN_STATE No: 7 # Other_Tracked.Range_Rate() > Zero: 21 ##### Threat_A1t_VMD() < ZT(): 21 ##### Current_Vertica1_Separation() < ZT(): 63 ############### Time_To_Co_Alt() < True_Tau_Capped(): 21 ##4## Time_To_CodAlt() < TVTHR(): 21 ##### ADOT() >= n_ZDTHR(): 21 ##### PREV1_Other_Range_Valid = cTrue: 12 ### PREV2_Other_Range_Valid = cTrue: 12 ### Other_Tracked_Range() >= DMOD(): 18 #### Tau_Rising IN_STATE PLUS_3: l8 #### Intruder_Status IN_STATE Threat: 18 #### Other_Tracked_Range_Rate() > RDTHRTA: 18 #### (Other_Tracked_Range()*Other_Tracked_Range_Rate())<=H1TA(): 6 # Other_Tracked_Range() <= DMODTA(): 6 # Other_Tracked_Range() <= RMAX: 12 ### Modified_Tau_Capped() < TRTHR(): 12 ### Other_Tracked_Range() > DMOD(): 12 ### (Other_Tracked_Range_Rate()*Other_Tracked_Range()) > H1(): 12 ### Other-Tracked_Alt_Rate() > OLEV: 4 # Own_Tracked_Alt_Rate() <= OLEV: 8 ## Other_VRC = No_Intent: 6 # Intent_Received IN_STATE IR_No: 6 # Other_Tracked_Alt_Rate() <= OLEV: 2 Own-TrackeddAlt_Rate() >= Zero: 2 Other_Tracked.Alt_Rate() >= Zero: 4 # Current_Vertica1_Separation() > MAXALTDIFF: 2 Current_Vertica1_Separation() > MAXALTDIFFZ: 2 Other_Air_Status IN_STATE State_On_Ground: 1 Total: 484 Figure 6.10: Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Other- Traffic with variable reordering. 
144 OtherAltReporting = cTrue T111truuuuuurrlrtuuuuuuuuurrrruuurrrrrrr OtherAirStatus INSTATE StateAirborne I]rrrrrwrrurrurrIrrrrlrrrrrrurrrrrrrrrrr AutOSL INSTATE ASLZ FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFPFFFF ModeSelector = TA_Only FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF RA_Inhibit_From_Ground = cTrue FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF OwnTrackedAlt()>Other_Tracked_Alt() TTTTTTTTTTFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF OwnTrackedAlt() Ground Location: Other-Air-Statuss.101 Trigger Event: Effective-SL-Evaluated-Eventc-279 Condition: (Other-Tracked-Altf.243 - Ground-Levelf.237) g 180 fqmgpou Other-Alt-ReportingM 13 = True Output Action: Air-Status-Evaluated-Eventc_279 Figure 6.13: Guarding condition for transition from Other-Air—Status state Airborne to Other-Air- Status state On-Ground. 'fi'ansition(s): ——+ Arrbome Location: Other-Air—StatusHm Trigger Event: Effective-SL-Evaluated-Evente-279 Condition: (Other-Tracked-Altf.243 - Ground—Levelf.237) 2 200 ftmgpom I Other-Alt-Reportingv-113 = True Output Action: Air-Status-Evaluated-Evente-279 Figure 6.14: Guarding condition for transition from Other-Air-Status state On-Ground to Other-Air- Status state Airborne. 146 rAlt-Reporting Yes LOSt t J Figure 6.15: RSML specification of state Alt-Reporting. Figure 6.15 shows the state Alt-Reporting and its child states Yes, Lost, and No. The guarding conditions associated with the transitions in the state Alt-Reporting are shown in Figures 6.16 through 6.20. It is clear from examining the guarding conditions that when- ever Other—Alt-Reporting is TRUE Alt-Reporting is in state Yes, and whenever Other-Alt—Reporting is FALSE Alt-Reporting is not in state Yes. Therefore, the do- main axiom Other-Alt-Reporting = True ifi’Alt-Reporting is in state Yes is identified. This time the samples showed that the identified domain axiom was violated in a significant number of samples; 240 of the 400 samples violated the domain axiom. It should be pointed out here that without intimate knowledge of the specification, it is not obvious that the Other-Alt—Reporting predicate and the Alt-Reporting in-state predicates are related or involved in any domain axioms. Thus, unless an analyst has intimate knowledge of the system specification, it is not obvious where to look for problems without some method to highlight areas to investigate. The complete specification for TCAS II version 6.04A is over 400 pages. “nth our method, the analyst only had to consider a few pages of the specification before finding the missing information leading to the spurious errors. Identifying the above domain axiom could eliminate many spurious errors, but the indicator 147 Transition(s): —) Location: Alt-Reportings-1m Trigger Event: Effective-SL-Evaluated-Eventemo Condition: I Other-AIt-Reportingv-1 13 = TrueJ Output Action: None Figure 6.16: Guarding condition for transition from Alt-Reporting state Yes to Alt-Reporting state Lost. Transition“): Lost I ——-) El —-> Location: Alt-RCPOflings-IOI Trigger Event: Effective-SL—Evaluated-Eventc_279 Condition: [Other-Alt-Reportingv-113 = TrTe] Output Action: None Figure 6.17: Guarding condition for transition from Alt-Reporting states Lost and No to Alt- Reporting state Yes. Tbansition(s): —) Location: Alt-Reportings-1m‘ 'h'lgger Event: Effective-SL-Evaluated-Evente-279 Condition: [Other-Alt-Reportingv-113 = True] Output Action: None Figure 6.18: Guarding condition for transition from Alt-Reporting state Lost to Alt-Reporting state No. 
[Figure 6.19: Guarding condition for transition from Alt-Reporting state C to Alt-Reporting state Yes.]

[Figure 6.20: Guarding condition for transition from Alt-Reporting state C to Alt-Reporting state No.]

Identifying the above domain axiom could eliminate many spurious errors, but the indicator nodes also indicated that the Other-Air-Status in-state predicates required further investigation. The potential problems with these predicates were more obvious. Samples revealed that the remaining 160 (out of the 400) samples violated the mutually exclusive nature of the Other-Air-Status in-state predicates. The contradictions identified in the remaining 160 samples, between the second node and the last node, were also not detected because information was missing from the analysis process, but eliminating these contradictions simply entailed enabling the decision procedures during symbolic analysis; in other words, the analysis process was augmented with information about the mutually exclusive and all-inclusive nature of enumerated types. Identifying the problem related to the second and last node was trivial and did not require any examination of the specification.

Using the information we obtained from the initial symbolic analysis to determine our next course of action led us to enable the decision procedures in the symbolic analysis to eliminate the contradictions related to the in-state predicates, and to incorporate the identified domain axiom as a macro into the RSML specification (Figure 6.21).

[Figure 6.21: Domain axiom in tabular form (macro Other-Alt-Reporting-Alt-Reporting-Assertion) for the Other-Alt-Reporting, Alt-Reporting assertion.]

The second iteration of the symbolic analysis with decision procedures enabled and with the domain axiom included resulted in zero satisfying paths, showing that the initial guarding conditions were consistent. The fact that zero satisfying paths were reported, rather than the explicit report that the guarding conditions were consistent, shows that the result BDD did not reduce to the constant zero during the symbolic analysis. Rather, while traversing the BDD to find satisfying paths to convert to AND/OR table format to present to the analyst for review, the traversal routine, using the decision procedures, eliminated all paths in the result BDD. The run with the decision procedures enabled took significantly longer, on the order of an hour. Without the decision procedures enabled, however, we were unable to show the guarding conditions to be consistent with symbolic analysis. When we applied the symbolic analysis with the identified domain axiom added but without the decision procedures enabled, the analysis reported 435,720 satisfying paths in the result BDD.
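The enumerated-type information those decision procedures contribute can be pictured as a simple filter over reported paths. The sketch below is illustrative only; the state-variable grouping and predicate spelling are assumptions, not the tool's actual interface.

# Sketch: rejecting paths that contradict the mutually exclusive and
# all-inclusive nature of enumerated (in-state) predicates.
GROUPS = {
    "Other-Air-Status": ["On-Ground", "Airborne"],
    "Alt-Reporting":    ["Yes", "Lost", "No"],
}

def path_is_spurious(path):
    """path maps 'Var IN-STATE State' -> True/False/None (don't care)."""
    for var, states in GROUPS.items():
        vals = [path.get(f"{var} IN-STATE {s}") for s in states]
        if sum(v is True for v in vals) > 1:
            return True        # mutual exclusion violated: two states at once
        if all(v is False for v in vals):
            return True        # all-inclusiveness violated: no state at all
    return False

path = {"Other-Air-Status IN-STATE On-Ground": True,
        "Other-Air-Status IN-STATE Airborne": True}
print(path_is_spurious(path))   # True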
We attempted PVS analysis on the original guarding conditions without adding any augmenting information; i.e., we did not add the domain axiom that we identified earlier using the process described above. We ran the analysis on a SPARCserver 1000 with 256 MB main memory and four 85MHz CPUs, using the proof strategy shown in Figure 6.22 (Appendix B provides a description of the PVS commands and strategies we used in our PVS analyses). The PVS analysis of the original guarding conditions ran for over a day and we ultimately aborted the process. We then added the domain axiom that we identified using our indicator nodes technique to the PVS specification and reran the PVS analysis on a SPARCstation 20 Model 70, with 64 MB main memory and one 75MHz CPU. With the appropriate information added to the specification, PVS proved the guarding conditions consistent in 44.87 seconds using the PVS proof commands shown in Figure 6.23.

(apply (then (rewrite-msg-off) (skolem!) (auto-rewrite-defs$) (do-rewrite$)
       (repeat* (try (bddsimp) (record) (postpone)))))

Figure 6.22: General PVS strategy to prove two guarding conditions are consistent.

(skolemizeandrewrites) strategy
(lemma "OtherAltReporting_AltReporting_Assertion")
(apply (repeat* (inst?)))
(apply (repeat* (try (bddsimp) (record) (postpone))))

Figure 6.23: Specific PVS commands to include a domain axiom into the analysis process and to prove two guarding conditions consistent.

This case shows that both symbolic analysis using BDDs and PVS analysis verified that this pair of guarding conditions was mutually exclusive, but only when the appropriate information was included in the analysis process.

6.1.1.1.2 Proximate-Traffic to Threat and Proximate-Traffic to Potential-Threat

Figures 6.6 and 6.24 show the guarding conditions for the transitions from Proximate-Traffic to Threat and Proximate-Traffic to Potential-Threat.

[Figure 6.24: Transitions from Other-Traffic and Proximate-Traffic to Potential-Threat.]

Figure 6.25 shows the results obtained when we applied symbolic analysis without variable reordering. Figure 6.26 shows some of the 400 samples taken from four different portions of the result BDD. The numbers in parentheses preceding each row of truth values in the samples correspond to the parenthesized numbers preceding the predicates in the node profile. Normally, the predicates would appear along with their corresponding samples, but due to space limitations in this document, we used the parenthesized numbers to identify the predicates instead.
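How such samples can be drawn is sketched below: walk the BDD from the root to the 1-terminal, branching only where 1 remains reachable, and record the truth value of every predicate tested on the way (everything else stays a don't-care). The encoding is the same toy one assumed earlier, not the tool's.

# Sketch: drawing one sample (a random satisfying path) from a BDD.
import random

def reaches_one(bdd, node):
    if node in (0, 1):
        return node == 1
    _, lo, hi = bdd[node]
    return reaches_one(bdd, lo) or reaches_one(bdd, hi)

def sample_path(bdd, root):
    """bdd: id -> (predicate, low, high); terminals are 0 and 1."""
    path, node = {}, root
    while node not in (0, 1):
        pred, lo, hi = bdd[node]
        options = [(False, lo), (True, hi)]
        random.shuffle(options)
        # follow the first branch from which the 1-terminal is reachable
        value, node = next(o for o in options if reaches_one(bdd, o[1]))
        path[pred] = value
    return path

bdd = {2: ("c", 0, 1), 3: ("b", 2, 1), 4: ("a", 2, 3)}   # (a AND b) OR c
print(sample_path(bdd, 4))    # e.g. {'a': False, 'c': True}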
[Figure 6.25: Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Potential-Threat without variable reordering (3,722,880 satisfying paths; 1,293 nodes in total).]

We now examine the results obtained from symbolic analysis when variable reordering was used, and compare and contrast these results with those obtained without variable reordering. Figure 6.27 shows the results obtained from symbolic analysis with variable reordering. The result BDD contained 4,909,890 satisfying paths and a total of 390 nodes. From the BDD node profile, we identify the first seven nodes and the last node as potential indicator nodes. The 400 samples taken for the result BDD show that all satisfying paths in the BDD pass through the first seven nodes and that the first six nodes maintain the same truth value on every path. A portion of the 400 samples is shown in Figure 6.28.

We used the same process described earlier to examine the indicator nodes and samples to locate domain axioms in the specification that did not hold in the analysis output. We found that the missing domain axiom involved the first two indicator nodes:

• Alt-Reporting IN-STATE Yes and
• Other-Alt-Reporting = True.
In every one of the 400 samples, Alt-Reporting IN-STATE Yes, predicate (1), was always FALSE and Other-Alt-Reporting = True, predicate (2), was always TRUE; according to the specification, whenever Alt-Reporting IN-STATE Yes is FALSE, Other-Alt-Reporting = True is FALSE. We added the identified domain axiom (Figure 6.21) to the machine-readable specification as a macro and reran the symbolic analysis using BDDs. The output of the symbolic analysis with the domain axiom added to the analysis process reported that the two guarding conditions were consistent.

[Figure 6.26: Selected samples from symbolic analysis of Proximate-Traffic to Threat and to Potential-Threat without variable reordering.]
[Figure 6.27: Partial BDD node profile for conjunction of Proximate-Traffic to Threat and to Potential-Threat with variable reordering (4,909,890 satisfying paths; 390 nodes in total).]
[Figure 6.28: Selected samples from symbolic analysis of Proximate-Traffic to Threat and to Potential-Threat with variable reordering.]

In the symbolic analysis run without variable reordering, the two predicates involved in the missing domain axiom were at levels 9 and 55 in the full result BDD (not shown due to space limitations); thus, the node profile without variable reordering would not help the analyst to narrow the search for missing domain axioms or to locate the relevant domain axioms in a timely manner. The analyst may be able to use the samples at this point to look for predicates that maintain the same truth values in all samples or in a majority of samples, and check these predicates to see if they are involved in domain axioms in the specification that are violated in all or a majority of the samples. In this case, there are six predicates that maintain the same truth values in all samples, and there are several other predicates that maintain the same truth values in a majority of samples. Clearly, the search is more focused and more timely when our indicator nodes technique is applied.
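That fallback, scanning the samples for predicates whose truth value never varies, is easy to sketch; the sample encoding is the same illustrative one used above.

# Sketch: flag predicates that hold one truth value across all samples;
# these are the prime suspects for violated domain axioms.
def constant_predicates(samples):
    names = sorted({n for s in samples for n in s})
    constants = {}
    for n in names:
        vals = {s[n] for s in samples if s.get(n) is not None}
        if len(vals) == 1:
            constants[n] = vals.pop()
    return constants

samples = [{"AltReporting_INSTATE_Yes": False, "OtherAltReporting": True, "p": True},
           {"AltReporting_INSTATE_Yes": False, "OtherAltReporting": True, "p": False}]
print(constant_predicates(samples))
# {'AltReporting_INSTATE_Yes': False, 'OtherAltReporting': True}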
We now show that the identified domain axiom is also required in the analysis of the guarding conditions when PVS analysis is applied. We ran PVS analysis on the full guarding conditions without domain axioms added, using the strategy shown in Figure 6.22, on a SPARCserver 1000 with 256 MB main memory and four 85MHz CPUs. Again, as with the previously discussed transitions, we aborted the analysis after it ran on the order of a day. We added the domain axiom we identified using the indicator nodes and samples to the PVS specification of the guarding conditions, and reran the PVS analysis on a SPARCstation 20 with 64 MB of main memory and one 75MHz CPU. We used the proof commands in Figure 6.23 and proved the guarding conditions consistent in 60.69 seconds. As in the previous case, both symbolic analysis using BDDs and PVS analysis verified that this pair of guarding conditions was mutually exclusive, but only after the appropriate information was included in the analysis process.

6.1.1.1.3 Proximate-Traffic to Other-Traffic and Proximate-Traffic to Potential-Threat

The guarding conditions for the transitions Proximate-Traffic to Other-Traffic and Proximate-Traffic to Potential-Threat were shown in Figures 6.7 and 6.24. Our analysis of the guarding conditions for these transitions showed that the specifications of the guarding conditions were inconsistent.

The results obtained from symbolic analysis without variable reordering are shown in Figure 6.29. As in the previous two analyses for transitions out of state Proximate-Traffic, the BDD node profile generated when symbolic analysis without variable reordering was applied gives no indication as to what particular predicates might be involved in the undetected contradictions that are leading to spurious error reports. Figure 6.30 shows a portion of the 400 samples reported to the analyst. Notice that there are no predicates that maintain the same truth values in all of the samples. This is because there are true inconsistencies between the two guarding conditions mixed in with the spurious inconsistencies; this makes it even more difficult to identify the information that is missing from the analysis and leading to spurious errors in the analysis output.

Consider now the results obtained from symbolic analysis with variable reordering (Figure 6.31). With variable reordering, the analysis reported 46,665,726 potential inconsistencies and the result BDD contained 932 nodes. There are still too many error reports to show to the analyst and for the analyst to manually inspect. However, the analyst can now use our method to identify any information that is missing from the analysis process and that is leading to spurious error reports in the analysis output. The BDD node profile obtained with variable reordering shows that the first three nodes are indicator nodes (they occur along every path in the BDD, and thus occur in every reported inconsistency; a sketch of this every-path test follows the list below), and that the last node is a potential indicator node. Rather than the analyst having to check some combination of 65 predicates (the full result BDD contained 65 predicates) for undetected contradictions, the analyst can now concentrate on only four predicates, starting with the first three. The indicator predicates associated with the first three indicator nodes are:

• Other-Air-Status IN-STATE On-Ground,
• Alt-Reporting IN-STATE Yes, and
• Other-Alt-Reporting = True.
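As promised above, a minimal sketch of the every-path test, under the same toy BDD encoding assumed in the earlier sketches:

# Sketch: does every satisfying (root-to-1) path test a given predicate?
# If so, the predicate's node(s) qualify as indicator nodes.
def on_every_sat_path(bdd, root, pred):
    def walk(node):
        if node == 1:
            return False        # reached 1 without ever testing pred
        if node == 0:
            return True         # no satisfying path below; vacuously fine
        p, lo, hi = bdd[node]
        if p == pred:
            return True         # every path through here tests pred
        return walk(lo) and walk(hi)
    return walk(root)

bdd = {2: ("c", 0, 1), 3: ("b", 2, 1), 4: ("a", 2, 3)}   # (a AND b) OR c
print(on_every_sat_path(bdd, 4, "a"))   # True: "a" is the root
print(on_every_sat_path(bdd, 4, "b"))   # False: the path a=F, c=T skips "b"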
[Figure 6.29: Partial BDD node profile for conjunction of Proximate-Traffic to Other-Traffic and to Potential-Threat without variable reordering (99,048,816 satisfying paths; 3,491 nodes in total).]
[Figure 6.30: Portion of samples from symbolic analysis of Proximate-Traffic to Other-Traffic and to Potential-Threat without variable reordering.]
[Figure 6.31: Partial BDD node profile for conjunction of Proximate-Traffic to Other-Traffic and to Potential-Threat with variable reordering (46,665,726 satisfying paths; 932 nodes in total).]

We know from the first example discussed (Section 6.1.1.1.1) that there are domain axioms involving the predicates

• Alt-Reporting IN-STATE Yes and Other-Alt-Reporting = True, and
• Other-Alt-Reporting = True and Other-Air-Status IN-STATE On-Ground.

The domain axioms, respectively, are:

• Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes, and
• if Other-Alt-Reporting is FALSE, then Other-Air-Status cannot be in the state On-Ground.

If we did not have the information about the above two domain axioms from a previous analysis, we would use the same technique described in Section 6.1.1.1.1 to identify the domain axioms involving the indicator predicates. We also know that the variable Other-Air-Status cannot be in states On-Ground and Airborne at the same time, and that it must be in at least one of the states On-Ground or Airborne. The indicator predicate Other-Air-Status IN-STATE On-Ground occurs at level 0 (the root) in the result BDD, and the predicate Other-Air-Status IN-STATE Airborne occurs at level 3 in the result BDD. The root node necessarily occurs along every satisfying path in the result BDD. The related predicate at level 3 occurs four times in the result BDD; note that at level 3, the maximum number of times a predicate can occur is 2^3, or 8.

Figure 6.32 shows a portion of the 400 samples from the result BDD for the first four predicates.

[Figure 6.32: Selected samples from symbolic analysis of Proximate-Traffic to Other-Traffic and to Potential-Threat with variable reordering, for the predicates (1) Other-Air-Status IN-STATE On-Ground, (2) Alt-Reporting IN-STATE Yes, (3) Other-Alt-Reporting = True, and (4) Other-Air-Status IN-STATE Airborne.]

If we examine all of the 400 samples, we see that in the first 200 samples the root node is always TRUE and in the last 200 samples the root node is always FALSE. The majority of the first 200 samples for the three nodes following the root are TRUE. The last 200 samples of the predicate Other-Air-Status IN-STATE Airborne are all don't cares, revealing that this predicate does not occur on the FALSE side of the root node Other-Air-Status IN-STATE On-Ground. This means that in most of the first 200 samples, both of the Other-Air-Status in-state predicates are TRUE, a contradiction that was not detected during the symbolic analysis since we did not enable our decision procedures. The last 200 samples reveal that the predicate Other-Alt-Reporting = True is always FALSE.
In addition, the last 200 samples reveal that the predicate Alt-Reporting IN-STATE Yes is always TRUE in the first 100 (of the last 200) samples, and always FALSE in the last 100 (of the last 200) samples. The information revealed in the last 200 samples showed that the identified domain axiom involving Other-Alt-Reporting = True and Alt-Reporting IN-STATE Yes was violated in 100 of the 200 samples. Information from the samples also revealed that the domain axiom involving Other-Alt-Reporting and Other-Air-Status was violated in only twelve of the 400 samples.

We added the domain axiom Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes (Figure 6.21), enabled the decision procedures, and reran the symbolic analysis. We did not add the third domain axiom identified above, if Other-Alt-Reporting is FALSE then Other-Air-Status cannot be in the state On-Ground, because it was only violated in twelve of the 400 samples. The results from the symbolic analysis run with variable reordering, decision procedures enabled, and with the domain axiom included are shown in Figure 6.33. As the results show, the total number of inconsistencies detected is now 7,540,053, a reduction of 39,125,673; this means that adding the domain axiom identified above and enabling the decision procedures eliminated 39,125,673 spurious inconsistency reports. While this reduction is significant and exemplary, there are still too many inconsistency reports to be presented to the analyst and for the analyst to manually inspect.

The BDD node profile that results from the symbolic analysis with domain axioms cannot be used further to locate potential indicator nodes, since the predicates involved in the domain axiom have now become the most dominant predicates in the analysis output and overshadow any remaining predicates that might be involved in undetected contradictions. Recall that we did not add all of the domain axioms identified above, since the domain axiom involving the predicates Other-Alt-Reporting = True and Other-Air-Status IN-STATE On-Ground was only violated in twelve of the 400 samples. We examined the new samples generated from the symbolic analysis run with the other domain axiom and with the information about enumerated type predicates added, and found that 48 (half of which were redundant) of the 400 samples now showed the previously excluded domain axiom violated. Though the number of violations is not significant, we added this domain axiom (Figure 6.34) along with the previously added domain axiom, turned on the decision procedures, and reran the symbolic analysis. As expected, the results were not much different from the run without this last domain axiom added. The results reported 7,540,029 potential inconsistencies versus the previously reported 7,540,053; a reduction of only 24.

Since we could not identify any additional domain axioms, we concluded that the symbolic analysis was generating many redundant true inconsistency reports that could be collapsed into fewer reports if the proper Boolean reduction techniques were applied.
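One such reduction, shown as a sketch under the same illustrative report encoding used earlier, merges two reports that differ only in a single predicate's value into one report with that predicate as a don't-care (one combining step of the Quine-McCluskey procedure):

# Sketch: one pass of merging redundant inconsistency reports.
def merge_once(reports):
    merged, used = [], set()
    for i, a in enumerate(reports):
        for j in range(i + 1, len(reports)):
            b = reports[j]
            if a.keys() != b.keys():
                continue
            diff = [k for k in a if a[k] != b[k]]
            if len(diff) == 1:
                combined = dict(a)
                combined[diff[0]] = None   # the differing predicate becomes a don't-care
                merged.append(combined)
                used.update((i, j))
    merged += [r for i, r in enumerate(reports) if i not in used]
    return merged

reports = [{"p": True, "q": True}, {"p": True, "q": False}]
print(merge_once(reports))   # [{'p': True, 'q': None}]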
[Figure 6.33: Partial BDD node profile for conjunction of Proximate-Traffic to Other-Traffic and to Potential-Threat with variable reordering and domain axioms (7,540,053 satisfying paths; 333 nodes in total).]

[Figure 6.34: Domain axiom in tabular form (macro Other-Alt-Reporting-Other-Air-Status-Assertion) for the Other-Alt-Reporting, Other-Air-Status assertion.]

Our symbolic analysis component, with its simple decision procedures for enumerated type predicates, is not capable of handling this reduction task, so we converted the original guarding conditions to PVS specifications and ran PVS analysis on them with no domain axioms added to the PVS specification. The analysis ran on a Ross-based SPARC 20 with two 150MHz CPUs for over three hours[3], and yielded 1129 unprovable subgoals (each unprovable subgoal represents a potential inconsistency in the specification of the guarding conditions). Though this is still a large number of potential inconsistencies, it is a significant reduction from what we were able to achieve using symbolic analysis with added domain axioms and our decision procedures. The analysis report is now at least within reasonable size for an analyst to manually inspect. Figure 6.35 shows some of the 1129 unprovable subgoals.

We assumed that many of the inconsistency reports might be spurious because the PVS specification lacked the knowledge about the system that was related to the domain axioms identified using the results from the symbolic analysis.
We added the domain axioms into the PVS specification and reran the PVS analysis, using proof commands to introduce the axioms into the proof process. The proof process (shown in Figure 6.36) yielded 808 unprovable subgoals (representing 808 potential inconsistencies). The proof completed in a little over 2 hours on a 200MHz Pentium Pro machine with 64 MB of RAM running Redhat Linux. We manually checked all of the reported inconsistencies and they appeared to be true inconsistencies.

[3] The time to complete the proof is dependent on the system load. The time reported was the fastest time; a subsequent run ran for a little over nine hours.

[Figure 6.35: Selected unprovable subgoals from PVS analysis of Proximate-Traffic to Other-Traffic and to Potential-Threat.]

(skolemizeandrewrites) strategy
(lemma "OtherAltReporting_AltReporting_Assertion")
(lemma "OtherAltReporting_OtherAirStatus_Assertion")
(apply (repeat* (inst?)))
(apply (repeat* (try (bddsimp) (record) (postpone))))

Figure 6.36: The PVS proof commands used to include the domain axioms into the analysis process to attempt to prove two guarding conditions consistent.

Manually analyzing the predicates and the predicates' truth values in the reported inconsistencies reveals that only a few predicates may be leading to all of the inconsistencies, but the number of permutations of the other predicates (specifically, those predicates that comprise specific macro conditions) may be leading to the large number of inconsistency reports. We will examine this theory in future experiments by not expanding the macro conditions; our translation tool to translate from RSML specifications to PVS specifications does not currently provide the option to disable macro expansion. The selective macro expansion option may be added in a future version of the tool.
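The intuition behind selective macro expansion can be sketched with a brute-force model count; the formulas and names below are illustrative, not taken from the TCAS specification.

# Sketch: folding a macro into one opaque literal collapses the
# permutations of its internal predicates into a single report.
from itertools import product

def count_models(variables, formula):
    return sum(formula(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

# Expanded: the macro body (p1 OR p2 OR p3) is analyzed predicate by predicate.
expanded = lambda a: (a["p1"] or a["p2"] or a["p3"]) and a["q"]
print(count_models(["p1", "p2", "p3", "q"], expanded))   # 7 reports

# Folded: the whole macro is one literal M, so the permutations disappear.
folded = lambda a: a["M"] and a["q"]
print(count_models(["M", "q"], folded))                  # 1 report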
6.1.1.2 Transitions out of State Other-Traffic

When Intruder-Status is in state Other-Traffic, other aircraft in the airspace are being monitored, but no hazard currently exists. Transitions out of state Other-Traffic signify that an intruder aircraft has moved closer to the aircraft monitoring the intruder, and thus the intruder aircraft status is upgraded. We ran our consistency analysis on each of the three pairs of guarding conditions individually. All three pairs of the guarding conditions proved to be consistent.

6.1.1.2.1 Other-Traffic to Threat and Other-Traffic to Potential-Threat

Figures 6.6 and 6.24 showed the guarding conditions for the transitions from Other-Traffic to Threat and from Other-Traffic to Potential-Threat. Figure 6.37 shows the results obtained when we applied symbolic analysis with variable reordering; the analysis reported 4,909,890 potential inconsistencies.

[Figure 6.37: Partial BDD node profile for conjunction of Other-Traffic to Threat and to Potential-Threat with variable reordering (4,909,890 satisfying paths; 390 nodes in total).]

From the BDD node profile shown in Figure 6.37, we identified the first seven nodes and the last node as potential indicator nodes. We already know (Section 6.1.1.1) of several domain axioms related to the first three indicator predicates shown in the BDD node profile.
The next step is to examine samples of the result BDD to see if any of the known domain axioms are violated in all or a majority of the samples. Selected samples from the symbolic analysis with variable reordering are shown in Figure 6.38. As the samples show, all satisfying paths in the BDD pass through the first eight predicates, and the first six predicates maintain the same truth value on every path. Recall (Section 6.1.1.1.1) that one of the domain axioms involving the predicates Other-Alt-Reporting = True and Alt-Reporting IN-STATE Yes is Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes. The samples show that Alt-Reporting IN-STATE Yes, predicate (1), is always FALSE and that Other-Alt-Reporting = True, predicate (2), is always TRUE; thus, the domain axiom involving these two predicates is violated in all of the samples. We introduced this domain axiom into the specification and reran the symbolic analysis. The analysis output reported the guarding conditions consistent. Thus, all of the above reported inconsistencies were spurious and were all due to the violation of a single domain axiom involving the first two predicates in the result BDD obtained using variable reordering. Without our technique for identifying the missing information, the analyst would have no indication of which of the 58 predicates (the full result BDD contained 58 predicates) to examine to see if they are involved in any domain axioms in the specification that do not hold in the analysis output.

We attempted PVS analysis without domain axioms, on the full guarding conditions, using the strategy in Figure 6.22, on a Ross-based SPARC 20 with two 150MHz CPUs.

[Figure 6.38: Selected samples from symbolic analysis of Other-Traffic to Threat and to Potential-Threat with variable reordering.]
We aborted the proof attempt after it ran for over a day with no results. We added the domain axiom we identified using our indicator nodes method to the PVS specification and ran PVS analysis on a SPARCstation 20 Model 70 with 64 MB of main memory and one 75MHz CPU. PVS reported the guarding conditions consistent in 107.85 seconds, using the proof commands shown in Figure 6.23.

6.1.1.2.2 Other-Traffic to Threat and Other-Traffic to Proximate-Traffic

Figures 6.6 and 6.39 show the guarding conditions for the transitions from Other-Traffic to Threat and from Other-Traffic to Proximate-Traffic.

[Figure 6.39: Transition from Other-Traffic to Proximate-Traffic.]

Figure 6.40 shows the results obtained when we applied symbolic analysis with variable reordering. From the BDD node profile shown in Figure 6.40, we identified the first ten nodes and the last node as potential indicator nodes. We know that the first and fourth predicates in the node profile are involved in the domain axiom noted earlier (Section 6.1.1.2.1). Selected samples from the symbolic analysis with variable reordering are shown in Figure 6.41. From the samples, we can see that all satisfying paths in the BDD pass through the first eleven predicates, and the samples also reveal that the first nine predicates maintain the same truth value on every path.
Sampling also revealed 173 Number of Satisfying Paths found: 3447360 (1) AltReporting INSTATE Yes: (2) OtherBearingValid = cTrue: (3) OtherTrackedRange() < PROXR: (4) OtherAltReporting = cTrue: (5) CurrentVerticalSeparation() < PROXA: (6) OtherAirStatus INSTATE StateAirborne: (7) AutoSL INSTATE ASL2: (8) RAInhibitFromGround = cTrue: (9) ModeSelector = TAOnly: (10) OwnTrackedAlt() > OtherTrackedAlt(): (11) OwnTrackedA1t() < OtherTrackedAlt(): (12) InhibitBiasedClimb() > DownSeparation(): (13) (OwnTrackedAltt) - OtherTrackedAlt()) >= MINSEP: (14) UpSeparation() >= ALIM(): (15) (OwnTrackedAlt() — OtherTrackedA1t()) <= nMINSEP: (16) CurrentVerticalSeparation() > LOWFIRMRZ: (17) UpSeparation() <= SENSEFIRMt): (18) DownSeparation() <= SENSEFIRM(): (19) ModifiedTauCapped() < TFRTHR(): (20) LevelWait INSTATE S3: (21) TCASTCASVMD() >= Zero: (22) OtherTrackedRelativeAlt() < nMINSEP: (23) TCASTCASVMD() <= Zero: (24) OtherTrackedRelativeAlt() > MINSEP: (25) OtherTrackedRangeRate() > Zero: (26) ThreatAltVMD() < ZT(): (27) CurrentVerticalSeparation() < ZT(): (28) TimeToCoAlt() < TrueTauCapped(): (29) TimeToCoAlt() < TVTHR(): (30) ADOT() >= nZDTHRt): (31) ModifiedTauCapped() < FRTHRt): (32) CurrentVerticalSeparation() < ZTHRTA(): (33) ADOT() >= nZDTHRTA: (34) —((CurrentVerticalSeparation()/ADOT())) < TVTHRTATBL(): (35) OtherCapability = TARA: (36) OwnTrackedAltRate() <= OLEV: (37) PREVlOtherRangeValid cTrue: ***#**=fl=¥==fl= ## ## ## ## ## ##### ##### ############# ##### ##### ##### ## ##### ##### ##### ########## ######## ###### H GQOUIUIUINUIU'IMUIU‘IUINNNNNbNNNNNNNNHHl—‘HHHPHI-4H * =80: * * 1.1 (38) PREV20therRangeValid = cTrue: 6 ###### (39) OtherRangeValid = cTrue: 12 ############ (40) OtherTrackedRange() >= DMOD(): 12 ############ (41) TauRising INSTATE PLUS3: , 12 ############ (42) IntruderStatus INSTATE Threat: 12 ############ (43) OtherTrackedRangeRate() > RDTHRTA: 12 ############ (44) OtherTrackedRange() <= DMODTA(): 6 ###### (45) (OtherTrackedRange()*OtherTrackedRangeRate()) <= H1TA(): 6 ###### (46) TAURTA() < TRTHRTA(): 6 ###### (47) OtherTrackedRange() <= RMAX: 6 ###### (48) ModifiedTauCapped() < TRTHR(): 6 ###### (49) (OtherTrackedRangeRate() * OtherTrackedRange()) > H1(): 6 ###### (50) IntentReceived INSTATE IRNo: 3 ### (51) CurrentVerticalSeparation() > MAXALTDIFF: 1 # (52) PTTimer INSTATE PTO: 1 # Total: 247 Figure 6.40: Partial BDD node profile for conjunction of guarding conditions from Other-Traffic to Threat and to Proximate-Traffic with variable reordering. 174 that the last predicate was always TRUE. The samples showed that the same domain axiom that was violated in the above case (Section 6.1.1.2.1), was also violated in all samples for the two guarding conditions being analyzed here; predicates (l) and (4). We added the appropriate domain axiom and reran the symbolic analysis. The oumut from this iteration of the symbolic analysis augmented with the appropriate domain axiom showed the guarding conditions consistent. We applied PVS analysis without domain axioms, on the full guarding conditions using the strategy in Figure 6.22, on a Ross-based SPARC 20 with two 150MHz CPUs. We aborted the proof attempt after it ran for over a day with no results. We added the domain axiom we identified using our indicator nodes method to the PVS specification and ran PVS analysis on a SPARCstation 20 Model 70 with 64th of main memory and one 75MHz CPU. PVS reported the guarding conditions consistent in 39.97 seconds, using the proof commands in Figure 6.23. 
[Figure 6.41: Portion of the samples obtained from symbolic analysis for consistency of Other-Traffic to Threat and to Proximate-Traffic with variable reordering.]

6.1.1.2.3 Other-Traffic to Proximate-Traffic and Other-Traffic to Potential-Threat

Both the symbolic analysis using BDDs and the PVS analysis showed the guarding conditions consistent on the first iteration; no domain axioms needed to be added to the analysis to show the guarding conditions consistent. In addition, we did not need to invoke our decision procedures during symbolic analysis. We ran the PVS proof on a Ross-based SPARC 20 with two 150MHz CPUs using the PVS proof strategy in Figure 6.22. The proof finished in 17.28 seconds.

6.1.1.3 Transitions out of State Potential-Threat

When Intruder-Status is in state Potential-Threat, an intruder aircraft is in a potentially threatening position in relation to own aircraft. The status of the intruder may be upgraded to Threat or downgraded to Other-Traffic or Proximate-Traffic depending on the intruder's course and the course of own aircraft. A hazardous situation could result if the same conditions were satisfied for transitions that both upgrade and downgrade the threatening status of the intruder aircraft.
(50) .TF.TF.TF ..... FFFF ...... FFFF ........ FFFF... (51) FF.FF.FF.F ................................. (52) 111111111111111111111111111111111111111111r Figure 6.41: Portion of the samples obtained from symbolic analysis for consistency of Other-Traffic to Threat and to Proximate-Traffic with variable reordering. 176 the intruder aircraft. We ran our consistency analysis on each of the three pairs of guarding conditions individually. All three pairs of the guarding conditions proved to be consistent. 6.1.1.3.] Potential-Threat to Threat and Potential-Threat to Proximate-Traflic Figures 6.6 and 6.42 show the guarding conditions for the transitions from Potential -Threat to Threat and from Potential -Threat to Proximate—Traf fic. Figure 6.43 shows the results obtained when we applied symbolic analysis with variable reordering. Itansition(s): LPotential-Threat ] —) [Proximate-Traffic] Location: Other-Aircraft t> Intruder-Statuss-136 Trigger Event: Air-Status-Evaluated-Evente-279 Condition: OR TTT—7"” 23:3"— ETLP— 417;: 117' TT'TF ETEI EELT Lie; _'e_'__°r_. Output Action: Intruder-Status-Evaluated-Eventc-279 Figure 6.42: Transition from Potential-Threat to Proximate-Traffic. From the BDD node profile in Figure 6.43, we identified the first two nodes and the last four nodes as potential indicator nodes. This example is not as clear cut as the previous examples, therefore, we provide a more detailed explanation. For this example, we assume that the analyst has no prior information about any domain axioms in the specification; i.e., the analyst does not know about the domain axioms we identified in the previous test cases. Using our method to attempt to identify the relevant domain axioms, the analyst begins with the 177 Number of Satisfying Paths found: 6655740 (1) OtherBearingValid = cTrue: 1 (2) AltReporting INSTATE No: 2 # (3) AutoSL INSTATE ASL2: 2 # (4) RAInhibitFromGround = cTrue: 2 # (5) ModeSelector = TAOnly: 2 # (6) AltReporting INSTATE Lost: 1 (7) OwnTrackedA1t() > OtherTrackedAlt(): 2 # (8) OwnTrackedAlt() < OtherTrackedAlt(): 4 ## (9) InhibitBiasedClimb() > DownSeparation(): 4 ## (10) (OwnTrackedAlt() — OtherTrackedAlt()) >= MINSEP: 4 ## (11) CurrentVerticalSeparation() > LOWFIRMRZ: 4 ## (12) ModifiedTauCapped() < TFRTHR(): 8 #### (13) LevelWait INSTATE S3: 4 ## (14) UpSeparation() <= SENSEFIRM(): 4 ## (15) ModifiedTauCapped() < FRTHR(): 4 ## (16) OtherRangeValid = cTrue: 9 #### (17) TCASTCASVMD() >= Zero: 2 # (18) TCASTCASVMD() <= Zero: 2 # (19) OtherTrackedRelativeAlt() > MINSEP: 2 # (20) OtherCapability = TARA: 7 ### (21) OtherTrackedRangeRate() > Zero: 7 ### (22) ThreatAltVMD() < ZT(): 7 ### (23) CurrentVerticalSeparation() < ZT(): ' 21 ########## (24) TimeToCoAlt() < TrueTauCapped(): 7 ### (25) TimeToCoAlt() < TVTHR(): 7 ### (26) ADOT() >= nZDTHR(): 7 ### (27) OtherAltReporting = cTrue: 7 ### (28) OwnTrackedAltRate() <= OLEV: 4 ## (29) PREVlOtherRangeValid = cTrue: 3 # (30) PREVZOtherRangeValid = cTrue: 3 # (31) -((CurrentVerticalSeparation()/ADOT())) < TVTHRTATBL(): 6 ### (32) ADOT() >= nZDTHRTA: 6 ### (33) CurrentVerticalSeparation() < ZTHRTA(): 6 ### (34) OtherTrackedRange() >= DMOD(): 12 ###### (3S) TauRising INSTATE PLUSB: 12 ###### (36) IntruderStatus INSTATE Threat: 12 ###### (37) OtherTrackedRangeRate() > RDTHRTA: 12 ###### (38) OtherTrackedRange() <= DMODTA(): 6 ### ### ### ### ### (39) (OtherTrackedRange()‘OtherTrackedRangeRate()) <= H1TA(): 6 (40) TAURTA() < TRTHRTA(): 6 (41) OtherTrackedRange() <= RMAX: 6 (42) ModifiedTauCapped() < TRTHR(): 6 (43) 
The analyst finds that this variable is referenced twelve times in the specification, and checks each of the references to see if there are any domain axioms associated with this variable. The analyst quickly determines that only two of the twelve references are relevant to the two guarding conditions being analyzed. The analyst also determines that there are no relevant domain axioms in the specification involving the variable. Figures 6.7 and 6.42 show the only two references of the variable under consideration; it is easy to see that there are no relevant domain axioms involving Other-Bearing-Valid, since the truth values shown for the predicate Other-Bearing-Valid = True do not remain constant in the specification of the guarding conditions.

The analyst next examines the predicate Alt-Reporting IN-STATE No, and finds seventeen references to the state Alt-Reporting in the specification. Only four of the seventeen references are relevant to the guarding conditions being analyzed, and three of the four lead the analyst to the information that is missing from the symbolic analysis and resulting in the spurious errors; Figures 6.15 through 6.20 show the relevant references to the state Alt-Reporting and the guarding conditions associated with transitions within the state Alt-Reporting. The domain axiom related to the state Alt-Reporting is Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes.

Once the analyst identifies a relevant domain axiom, she looks at the samples to see if the domain axiom is violated in all or a majority of the samples. Selected samples from the symbolic analysis with variable reordering are shown in Figure 6.44. In the figure, the predicates involved in the domain axiom are triple starred to offset them. As one can see from the samples shown, it is difficult to identify the relevant domain axiom, and to verify that the domain axiom is violated in all or a majority of the samples, without some indication of which predicates to look at in the first place. To make it easier to see that the identified domain axiom is violated in all of the samples, I have isolated the three predicates involved in the domain axiom (Figure 6.45). Note that the predicate Alt-Reporting IN-STATE Yes did not appear in the conjunction of the guarding conditions, yet it is part of the relevant domain axiom; this makes it even more difficult to determine the relevant domain axiom without some indication of where to look. With a knowledge of the all-inclusive and mutually exclusive nature of enumerated type predicates, the analyst can determine from the samples of the three isolated predicates shown in the figure that the domain axiom is violated in all samples. The samples show that Alt-Reporting is either in state No or in state Lost all of the time.
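Once the suspect predicates are isolated, the violation check itself is mechanical. In the sketch below, the sample rows are hypothetical, abbreviated versions of the three predicates isolated in Figure 6.45, with one character per sample ('T'/'F'; real output also uses '.' for don't-care entries).

    # Hypothetical, abbreviated sample rows for the three isolated predicates.
    samples = {
        "AltReporting INSTATE No":   "TTTFFFFT",
        "AltReporting INSTATE Lost": "FFFTTTTF",
        "OtherAltReporting = cTrue": "TTTTTTTT",
    }

    def violates_axiom(col):
        """Domain axiom: OtherAltReporting = cTrue iff AltReporting INSTATE Yes.
        Since No/Lost/Yes are mutually exclusive, a sample in which AltReporting
        is in No or in Lost cannot also have OtherAltReporting = cTrue."""
        in_no_or_lost = (samples["AltReporting INSTATE No"][col] == "T"
                         or samples["AltReporting INSTATE Lost"][col] == "T")
        return in_no_or_lost and samples["OtherAltReporting = cTrue"][col] == "T"

    n = len(next(iter(samples.values())))
    print(all(violates_axiom(c) for c in range(n)))  # True: violated in every sample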
Given this information and the fact that enumerated type predicates are mutually exclusive, we can conclude that Alt-Reporting is never in state Yes, which violates the domain axiom identified above, since Other-Alt-Reporting = True is always TRUE. When we added the identified domain axiom, Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes, and reran the symbolic analysis, the output reported that the guarding conditions were consistent.

We ran PVS analysis on the full guarding conditions without the domain axiom added, using the proof strategy in Figure 6.22 on a Ross-based SPARC 20 with two 150MHz CPUs. We aborted the proof attempt after it ran on the order of a day. We then added the domain axiom we identified using our indicator nodes approach to the PVS specification of the guarding conditions. Using the proof commands in Figure 6.23, we ran PVS analysis on a SPARCstation 20 Model 70 with 64MB main memory and one 75MHz CPU. The guarding conditions were proven consistent in 92.85 seconds.

[Figure 6.44: Selected samples from symbolic analysis of Potential-Threat to Threat and to Proximate-Traffic with variable reordering. The 51-row truth-value matrix is illegible in this scan; the three predicates involved in the domain axiom are marked with triple stars.]

[Figure 6.45: Selected samples for Potential-Threat to Threat and to Proximate-Traffic of predicates involved in the domain axiom. The isolated rows show predicate (2), Alt-Reporting IN-STATE No, and predicate (6), Alt-Reporting IN-STATE Lost, taking complementary truth values, while predicate (27), Other-Alt-Reporting = True, is TRUE in every sample.]

6.1.1.3.2 Potential-Threat to Threat and Potential-Threat to Other-Traffic

Figures 6.6 and 6.46 show the guarding conditions for the transitions from Potential-Threat to Threat and from Potential-Threat to Other-Traffic. Figure 6.47 shows the results obtained when we applied symbolic analysis with variable reordering.

[Figure 6.46: Transition from Potential-Threat to Other-Traffic. Location: Other-Aircraft ▷ Intruder-Status; Trigger Event: Air-Status-Evaluated-Event; Output Action: Intruder-Status-Evaluated-Event; the AND/OR table for the condition is illegible in this scan.]

[Figure 6.47: Partial BDD node profile for conjunction of Potential-Threat to Threat and to Other-Traffic with variable reordering. The profile reports 15,095,745 satisfying paths over 52 predicates (403 nodes in total); it begins with (1) PTTimer INSTATE PT0, (2) AutoSL INSTATE ASL2, (3) ModeSelector = TAOnly, (4) RAInhibitFromGround = cTrue, (5) OtherAltReporting = cTrue, and (6) OtherAirStatus INSTATE StateAirborne, and ends with (52) OtherAirStatus INSTATE StateOnGround.]

From the BDD node profile in Figure 6.47, we identified the first two nodes and the last node as potential indicator nodes. This example is similar to the last example discussed in Section 6.1.1.3.1, in that it is not as easy to locate the information that is missing from the analysis process and leading to spurious inconsistency reports. We assume, as in the last analysis reported, that the analyst has no prior information about any domain axioms in the specification. The analyst begins investigating the predicate at the root of the result BDD, PT-Timer IN-STATE PT-0, and examines the index of the specification to find all references to the state PT-Timer.
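Selecting indicator candidates from a profile can itself be mechanized. The sketch below operates on a hypothetical, heavily abridged profile (the predicate names echo Figure 6.47, but the counts are invented): it collects the contiguous run of small-count nodes at the top and at the bottom of the variable order, which is essentially the selection we make by eye.

    # Hypothetical profile excerpt: (predicate, BDD node count) pairs in
    # variable order, with the large-count interior abridged.
    profile = [
        ("PTTimer INSTATE PTO",                       1),
        ("AutoSL INSTATE ASL2",                       2),
        ("OwnTrackedAlt() < OtherTrackedAlt()",       8),
        ("ModifiedTauCapped() < TFRTHR()",           14),
        ("OtherTrackedRange() >= DMOD()",            12),
        ("CurrentVerticalSeparation() > MAXALTDIFF",  1),
        ("OtherAirStatus INSTATE StateOnGround",      1),
    ]

    def indicator_candidates(profile, small=2):
        """Collect the contiguous runs of small-count nodes at each end of
        the variable order; these are the indicator-node suspects."""
        cands = []
        for run in (profile, list(reversed(profile))):
            for name, count in run:
                if count > small:
                    break
                if name not in cands:
                    cands.append(name)
        return cands

    print(indicator_candidates(profile))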
The analyst finds that this state is referenced eight times in the specification, and checks each of the references to see if there are any domain axioms associated with this state, but does not identify any. The analyst also determines that there are no relevant domain axioms in the specification involving the state Auto-SL. Looking at the last indicator predicate, Other-Air-Status IN-STATE On-Ground, the analyst identifies two domain axioms:

1. if Other-Alt-Reporting = True is FALSE then NOT Other-Air-Status IN-STATE On-Ground, and

2. Other-Air-Status cannot be in both On-Ground and Airborne at the same time, and it has to be in one of On-Ground or Airborne at any time.

The analyst examines the samples to see if either or both of the identified domain axioms are violated in all or in a majority of the samples. Selected samples are shown in Figure 6.48.
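Both axioms are straightforward to state as executable checks over a sample's truth values, and the second is what our decision procedures encode for every enumerated state: exactly one sub-state holds at a time. The sketch below is illustrative only, with abbreviated predicate names and an invented sample in which the two sub-state predicates are both TRUE.

    def axiom_alt_reporting(env):
        # If Other-Alt-Reporting = True is FALSE, the intruder cannot
        # be On-Ground (domain axiom 1 above).
        return env["OtherAltReporting"] or not env["OnGround"]

    def axiom_exactly_one(env):
        # Other-Air-Status is in exactly one of On-Ground, Airborne at
        # any time (domain axiom 2 above).
        return env["OnGround"] != env["Airborne"]

    # A sample like those in Figure 6.48, where the two sub-state
    # predicates are both TRUE, violates the exactly-one constraint:
    sample = {"OtherAltReporting": True, "OnGround": True, "Airborne": True}
    print(axiom_alt_reporting(sample), axiom_exactly_one(sample))  # True False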
The analyst finds that the first domain axiom she identified is not violated in any of the samples, but finds that the axiom related to the mutually exclusive nature of the enumerated type predicates is violated in most of the samples; the samples show that Other-Air-Status is in both of its sub-states at the same time; predicates (6) and (52).

We reran the symbolic analysis with variable reordering and decision procedures enabled. The analysis output reported 3,982,320 potential inconsistencies. Though this reduction in the number of reported inconsistencies is substantial (a reduction of almost 12 million spurious inconsistencies), it does not help the analyst locate the true inconsistencies (if any).

At this point, the analyst has a couple of options with our method. First, she can continue checking each individual predicate in the node profile, starting from where she left off at the third predicate. Second, she can look at the sample output obtained from either the first iteration with variable reordering, or from the second iteration with variable reordering and decision procedures enabled. Either of these outputs will reveal that five of the top six predicates in the node profile maintain the same truth values in all samples. Constant truth values are indicative of domain axioms that are being violated. The analyst has already examined the second predicate in the node profile, so she can concentrate on the next four predicates rather than having to potentially examine all 55 of the remaining unexamined predicates.

[Figure 6.48: Selected samples from symbolic analysis of Potential-Threat to Threat and to Other-Traffic with variable reordering. The 52-row truth-value matrix is illegible in this scan.]

The analyst does not find any domain axioms related to the third or fourth predicates, but finds the following domain axiom related to the fifth predicate, Other-Alt-Reporting = True: Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes.¹ The samples revealed that this domain axiom was violated in a majority of cases; predicates (5) and (26). We added the relevant domain axiom to the specification and reran the symbolic analysis with decision procedures enabled. The analysis reported the guarding conditions consistent.

We attempted PVS analysis without domain axioms, on the full guarding conditions, using the strategy in Figure 6.22, on a Ross-based SPARC 20 with two 150MHz CPUs. We aborted the proof attempt after it ran for over a day with no results. We added the domain axiom we identified using the process described above to the PVS specification and ran PVS analysis on a 200MHz Pentium Pro machine with 64MB RAM running Redhat Linux. PVS reported the guarding conditions consistent in 32.72 seconds, using the proof commands in Figure 6.23.

¹Section 6.1.1.1.1 discussed this domain axiom in detail.

6.1.1.3.3 Potential-Threat to Proximate-Traffic and Potential-Threat to Other-Traffic

Figures 6.42 and 6.46 show the guarding conditions for the transitions from Potential-Threat to Proximate-Traffic and from Potential-Threat to Other-Traffic. Figure 6.49 shows the results obtained when we applied symbolic analysis with variable reordering. From the BDD node profile (Figure 6.49) we identified the first three nodes and the last node as potential indicator nodes. There are some very obvious domain axioms associated with the Alt-Reporting and Other-Air-Status in state predicates that showed up as indicator predicates. The samples (Figure 6.50) showed that these in state predicates (predicates (2), (3), (15), (45), and (46)) violated the enumerated type properties in a majority of cases.
[Figure 6.49: Partial BDD node profile for conjunction of Potential-Threat to Proximate-Traffic and to Other-Traffic with variable reordering. The profile reports 4,787,831 satisfying paths over 46 predicates (523 nodes in total); it begins with (1) PTTimer INSTATE PT0, (2) AltReporting INSTATE No, and (3) AltReporting INSTATE Yes, includes (15) AltReporting INSTATE Lost, and ends with (45) OtherAirStatus INSTATE StateAirborne and (46) OtherAirStatus INSTATE StateOnGround.]

We chose to rerun the symbolic analysis with decision procedures enabled rather than to continue looking for more domain axioms; we made this choice because it was obvious that, in most of the samples, the spurious inconsistency reports resulted because the symbolic analysis lacked the information about enumerated type predicates that was necessary to eliminate the spurious inconsistencies associated with these predicates. The second iteration of symbolic analysis with decision procedures reported 39 potential inconsistencies. This number of inconsistency reports can easily be managed by an analyst, but we converted the output to a PVS specification and ran PVS analysis to see if PVS's decision procedures could collapse the spurious inconsistency reports. The PVS analysis reduced the 39 inconsistency reports from the symbolic analysis to 12 inconsistency reports. All of the 12 reported inconsistencies violated the domain axiom described previously (Section 6.1.1.3.2): if Other-Alt-Reporting = True is FALSE then NOT Other-Air-Status IN-STATE On-Ground.

We ran PVS analysis without domain axioms, on the full guarding conditions, using the strategy in Figure 6.22, on a Ross-based SPARC 20 with two 150MHz CPUs.
The PVS proof finished in 26.33 seconds and reported the same 12 spurious inconsistencies as reported above when we ran PVS analysis on the results from the symbolic analysis. We added the domain axiom NOT (Other-Alt-Reporting = True) IMPLIES Other-Air-Status IN-STATE Airborne to the PVS specification of the guarding conditions. We used the proof commands in Figure 6.51 to prove the guarding conditions consistent. The proof finished in 106.29 seconds on a SPARCstation 20 Model 70 with 64MB main memory and one 75MHz CPU.

[Figure 6.50: Selected samples from symbolic analysis of Potential-Threat to Proximate-Traffic and to Other-Traffic with variable reordering. The 46-row truth-value matrix is illegible in this scan.]

(skolemize-and-rewrite$)
(lemma "OtherAltReporting_OtherAirStatus_Assertion")
(apply (repeat* (inst?)))
(apply (repeat* (try (bddsimp) (record) (postpone))))

Figure 6.51: Specific PVS commands to include a domain axiom into the analysis process and to prove two guarding conditions consistent.

6.1.1.4 Transitions out of State Threat

When Intruder-Status is in state Threat (Figure 6.4), there is another aircraft on a collision course with own aircraft. We analyzed the transitions out of the atomic states Passed and Failed within the state Threat for consistency. It is important to ensure that a transition out of state Threat does not happen prematurely (i.e., when a threat still exists), since such a premature transition could lead to a collision. The transitions and their corresponding guarding conditions are shown in Figures 6.52 through 6.57.

[Figure 6.52: Transition from Threat to Other-Traffic. Location: Other-Aircraft ▷ Intruder-Status; Trigger Event: Air-Status-Evaluated-Event; Output Action: Intruder-Status-Evaluated-Event; the AND/OR table for the condition is illegible in this scan.]

[Figure 6.53: Transition from Threat to Potential-Threat. Location: Other-Aircraft ▷ Intruder-Status; Trigger Event: Air-Status-Evaluated-Event; Output Action: Intruder-Status-Evaluated-Event; the AND/OR table for the condition is illegible in this scan.]

[Figure 6.54: Transition from Failed to Potential-Threat. Location: Other-Aircraft ▷ Intruder-Status; Trigger Event: Air-Status-Evaluated-Event; Output Action: Intruder-Status-Evaluated-Event; the AND/OR table for the condition is illegible in this scan.]

[Figure 6.55: Transition from Failed to Other-Traffic. Location: Other-Aircraft ▷ Intruder-Status; Trigger Event: Air-Status-Evaluated-Event; Output Action: Intruder-Status-Evaluated-Event; the condition, an AND/OR table referencing Time-Advisory-Changed and Other-Air-Status in state Airborne, is largely illegible in this scan.]

[Figure 6.56: Transition from Failed to Passed. Location: Threat ▷ Range-Test; Trigger Event: Air-Status-Evaluated-Event; Condition: an AND/OR table over Threat-Range-Test and Other-Air-Status in state Airborne (truth-value entries illegible); Output Action: Range-Test-Evaluated-Event.]

[Figure 6.57: Transition from Passed to Failed. Location: Threat ▷ Range-Test; Trigger Event: Air-Status-Evaluated-Event; Condition: an OR table over Threat-Range-Test and Other-Air-Status in state On-Ground (truth-value entries illegible); Output Action: Range-Test-Evaluated-Event.]

For each pair of transitions, we performed both symbolic analysis using BDDs and PVS analysis. The symbolic analyses included the macro expansion, sift reordering, and path count options. The symbolic analyses either showed the guarding conditions consistent on the first iteration or yielded a manageable number of error reports. We performed PVS analysis on both the original guarding conditions and on the output of the symbolic analysis if the symbolic analysis reported any inconsistencies. In all PVS analyses we used the proof strategy in Figure 6.22.
To verify that the analysis components produced the same results, we compared the results output from the PVS analysis on the original guarding conditions, the symbolic analysis on the original guarding conditions, and the PVS analysis on any reported inconsistencies from the symbolic analysis. In all cases, the guarding conditions proved consistent either by the analysis procedures or by inspection.

The guarding conditions for transitions out of the atomic states Passed and Failed are not as complex as the transitions we discussed earlier out of the atomic states of Intruder-Status. Therefore, we provide only a summary of the results here.

6.1.1.4.1 Summary of Results

For the atomic states Passed and Failed within the state Threat, we analyzed a total of fourteen pairs of guarding conditions. Of these fourteen, five proved consistent after a single iteration of both the symbolic analysis and the PVS analysis. Both the symbolic analysis and the PVS analysis required only a few seconds to report that the guarding conditions were consistent.

For the remaining nine pairs, the first iteration of the symbolic analysis reported from four to 403 inconsistencies. All of the reported inconsistencies from the symbolic analysis were automatically translated to a PVS specification, and we ran PVS analysis to see if any further reductions in the reported inconsistencies could be achieved. In all cases (except for the minimum output of four inconsistencies from symbolic analysis), the PVS output was significantly reduced. The number of inconsistencies reported by PVS analysis ranged from one to 46. We ran PVS analysis on the original guarding conditions and obtained the same inconsistency reports as we obtained from applying PVS analysis to the output of the symbolic analysis component.

Manual inspection of the reported inconsistencies seemed to confirm that the guarding conditions could both be satisfied at the same time (i.e., they were inconsistent). These potential inconsistencies were reported to the maintainers of the TCAS II specification. They reviewed the reported inconsistencies and noted that in their definition of the semantics of RSML, transitions out of a super-state take precedence over transitions out of a sub-state. Thus, both transitions cannot be taken at the same time; the transition from the super-state overrides the transition from the sub-state. Therefore, all of the guarding conditions for transitions out of the states Passed and Failed were, in their choice of semantics, mutually exclusive.

6.1.2 Transitions in State Intruder-Status - TCAS II Version 7

The latest version of the TCAS II requirements specification is version 7. We ran the same tests on the Intruder-Status portion of version 7 as we ran on the Intruder-Status portion of version 6.04A. The guarding conditions on the transitions in the Intruder-Status portion of version 7 of the specification were not as complex (but were still significantly complex) as the guarding conditions from version 6.04A of the specification (i.e., the AND/OR tables in version 7 of the specification contained slightly fewer predicates and slightly fewer columns). In addition, there was one fewer transition out of the super-state Threat. Many of the guarding condition pairs that were inconsistent in version 6.04A of the specification were consistent in version 7 of the specification.
To quickly identify, and eliminate from future analysis, the consistent guarding conditions, we initially ran symbolic analysis on the guarding conditions of all transitions at once, rather than on only two guarding conditions at a time; i.e., the machine readable specification included specifications for all guarding conditions, and our analysis procedure set up and performed the consistency checks for all pairs of guarding conditions out of each state. The first run of symbolic analysis showed the following guarding conditions consistent, with no augmenting information required:

- Other-Traffic-TO-Proximate-Traffic and Other-Traffic-TO-Potential-Threat
- Potential-Threat-TO-Other-Traffic and Potential-Threat-TO-Proximate-Traffic
- Potential-Threat-TO-Other-Traffic and Potential-Threat-TO-Threat
- Proximate-Traffic-TO-Threat and Proximate-Traffic-TO-Other-Traffic
- Proximate-Traffic-TO-Other-Traffic and Proximate-Traffic-TO-Potential-Threat
- Failed-TO-Passed and Threat-TO-Potential-Threat

Initial PVS analysis also yielded the same results. For the remaining large guarding conditions that symbolic analysis reported as inconsistent, there were millions of potential inconsistencies reported, just as in the analysis of version 6.04A of the specification. In addition, PVS analysis of the guarding conditions that involved the Threat macro always ran on the order of a day or more before we aborted the process. We used the same method described in Section 6.1.1 to identify domain axioms in the specification that did not hold in the result BDD. Only one domain axiom was required in all of the cases: Other-Alt-Reporting = True iff Alt-Reporting IN-STATE Yes. Once we added this domain axiom to the analysis process, both the symbolic analysis and PVS analysis showed the guarding conditions consistent.

6.2 Completeness Analysis

We applied our completeness analysis technique to the transitions out of Auto-SL state 1 and to the transitions out of Effective-SL state 4. To demonstrate the effectiveness of our method applied to completeness analysis, we first give a detailed description of the results obtained when we applied our method to Auto-SL state 1. Next, we provide a summary of the results obtained when we applied our analysis method to the guarding conditions on transitions out of Effective-SL state 4.

6.2.1 Transitions in State Auto-SL

Figures 6.58 through 6.64 show the guarding conditions for the transitions out of the Auto-SL atomic states.

[Figure 6.58: Guarding condition for transition from any Auto-SL state to Auto-SL state 1. Location: Own-Aircraft ▷ Auto-SL; Trigger Event: Descend-Inhibit-Evaluated-Event; Condition: an OR table over Own-Air-Status = On-Ground, Traffic-Display-Permitted = True, and Mode-Selector = Standby; Output Action: Auto-SL-Evaluated-Event.]

[Figure 6.59: Guarding condition for transition from any Auto-SL state to Auto-SL state 2. Location: Own-Aircraft ▷ Auto-SL; Trigger Event: Descend-Inhibit-Evaluated-Event; Output Action: Auto-SL-Evaluated-Event; the AND/OR table for the condition is illegible in this scan.]
[Figure 6.60: Guarding condition for transition from any Auto-SL state to Auto-SL state 3. (Same location, trigger event, and output action as Figure 6.59; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.61: Guarding condition for transition from any Auto-SL state to Auto-SL state 4. (Same location, trigger event, and output action as Figure 6.59; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.62: Guarding condition for transition from any Auto-SL state to Auto-SL state 5. (Same location, trigger event, and output action as Figure 6.59; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.63: Guarding condition for transition from any Auto-SL state to Auto-SL state 6. (Same location, trigger event, and output action as Figure 6.59; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.64: Guarding condition for transition from any Auto-SL state to Auto-SL state 7. (Same location, trigger event, and output action as Figure 6.59; the AND/OR table for the condition is illegible in this scan.)]

First, recall (Chapter 4, Section 4.2.6.2) that when our symbolic analysis component checks for completeness the guarding conditions out of a state triggered by the same event, the paths leading to FALSE in the result BDD represent potential incompletenesses in the specification of the guarding conditions. This is because, if the specifications of the guarding conditions are complete, the disjunction of the guarding conditions will be TRUE (a tautology). If the disjunction of the guarding conditions does not reduce to TRUE, then there are some conditions for which a transition out of a state is not possible. These conditions are represented in the result BDD as paths leading to FALSE, and are reported to the analyst as conditions for which the specification provides no transitions out of a particular state. The analyst must then determine a way to incorporate the missing conditions into the specification of the guarding conditions for the particular state under consideration.

Recall further that we use the same routine in both completeness and consistency analysis to convert the result BDD to AND/OR table format to present the potential incompletenesses and inconsistencies to the analyst. As we discussed in Chapter 4, Section 4.2.6, every satisfying path in the result BDD becomes a column in the AND/OR table. In completeness analysis we want to report unsatisfiable paths to the analyst. Therefore, we simply negate the result BDD before we send it to the AND/OR table translator portion of our tool, so that the unsatisfiable paths in the result BDD become satisfiable paths, and vice versa.
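A minimal sketch of the completeness check and of this negation step follows; the two guards and the two predicates are hypothetical stand-ins, and enumeration again takes the place of the BDD machinery.

    from itertools import product

    # Hypothetical guards out of one state, over two boolean predicates.
    guards = [
        lambda a, b: a and b,
        lambda a, b: not a,
    ]

    def incompletenesses(guards, n):
        """Satisfying assignments of NOT (g1 OR g2 OR ...): the conditions
        for which no transition out of the state is specified."""
        return [env for env in product([False, True], repeat=n)
                if not any(g(*env) for g in guards)]

    print(incompletenesses(guards, 2))  # [(True, False)]: no guard covers a & ~b

An empty list would mean the disjunction of the guards is a tautology, i.e., the specification of the guarding conditions is complete.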
This means that if there is some contradiction among any of the predicates in a column of the AND/OR table, then that particular condition does not represent a true incompleteness in the specification; the reported condition cannot happen in the first place, so it need not be added to the specification of the guarding conditions to make the specification complete. This latter fact is significant for interpreting and explaining the results we obtained below for the guarding conditions for Auto-SL state 1.

Note that the guarding conditions for the transitions shown in Figures 6.59 through 6.64 contain many relational expressions, such as Own-Alt-Radio >= 1100 in Figure 6.60. These are simple relational expressions, and it is easy to see that many combinations of them cannot be satisfied at the same time and that many of them are mutually exclusive (for example, the predicates Own-Alt-Radio < 1100 and Own-Alt-Radio >= 1100 cannot both be FALSE at the same time, and one of them has to be TRUE). In Figure 6.65 I have collected and grouped all of the relational expressions involving the variable Own-Alt-Radio (abbreviated OAR in the figure), according to obvious mutually exclusive and all-inclusive relationships.

OAR <= 900,  OAR > 900
OAR < 1100,  OAR >= 1100
OAR <= 2150, OAR > 2150
OAR < 2550,  OAR >= 2550
OAR <= 4500, OAR > 4500
OAR <= 9500, OAR > 9500

Figure 6.65: Mutually exclusive and all-inclusive relational expressions involving the variable Own-Alt-Radio (OAR).

There are also various other combinations of the relational predicates that are mutually exclusive. For example, if the predicate Own-Alt-Radio > 9500 is TRUE, then the predicate Own-Alt-Radio < 2550 cannot be TRUE at the same time. In addition, there are many multiple dependencies between the relational predicates, such that three or more of the relational predicates are mutually exclusive. Relational predicates such as those described here wreak havoc for symbolic analysis methods using BDDs. It is possible to develop decision procedures to detect non-satisfiable combinations of relational predicates such as these, but we chose not to do this.

We applied our symbolic analysis using BDDs to check the guarding conditions on transitions from the Auto-SL states for completeness. First, we applied the analysis without our decision procedures for enumerated type predicates, and with variable reordering. The number of reported incompletenesses was on the order of 120,000. We could not determine the domain axioms using our indicator nodes approach; we will explain why shortly. We reran symbolic analysis with both decision procedures and variable reordering. The number of reported incompletenesses was on the order of 10,000; a significant reduction in reported incompletenesses, but not enough of a reduction to be helpful to the analyst in identifying any true incompletenesses in the specifications of the guarding conditions. We examined the samples output from the latter symbolic analysis and discovered some obvious contradictions between the relational predicates, as noted above. Clearly, the information our analysis process lacked was the information about the mutually exclusive and all-inclusive aspects of the relational predicates. We attempted to add the missing information to the specification and rerun the analysis, but there were too many multi-way interdependencies between too many of the relational predicates for both the Own-Alt-Radio and the Other-Tracked-Alt variables.
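A sketch of the kind of decision procedure this would require is shown below. It decides satisfiability for a set of relational literals over a single integer variable by bounded exhaustive search, which is adequate here only because the constants are small; it is not how PVS's decision procedures work, but it illustrates the interdependencies involved. The predicate set echoes Figure 6.65.

    import operator

    OPS = {"<": operator.lt, "<=": operator.le,
           ">": operator.gt, ">=": operator.ge}

    def satisfiable(literals, lo=-1, hi=10001):
        """literals: (op, constant, wanted_truth) triples over one variable.
        True iff some integer in [lo, hi] gives every literal its wanted
        truth value."""
        return any(all(OPS[op](x, c) == want for op, c, want in literals)
                   for x in range(lo, hi + 1))

    # OAR > 9500 and OAR < 2550 cannot both be TRUE ...
    print(satisfiable([(">", 9500, True), ("<", 2550, True)]))     # False
    # ... and OAR < 1100, OAR >= 1100 cannot both be FALSE either.
    print(satisfiable([("<", 1100, False), (">=", 1100, False)]))  # False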
In other words, there were too many different combinations of the relational predicates that were resulting in the spurious incompleteness reports, and there was too much information missing from the analysis process. It was not feasible to attempt to add all of the required axioms describing the unsatisfiable combinations of the relational predicates, and we did not want to complicate our symbolic analysis by incorporating decision procedures for relational expressions, especially since PVS has decision procedures that can easily deal with the problems we encountered.

We applied PVS analysis to the full guarding conditions on a 200MHz Pentium Pro machine with 64MB RAM running Redhat Linux. The strategy we used is shown in Figure 6.66. PVS reported eight incompletenesses in under five minutes. The eight reported errors reduced to one because PVS split up one enumerated type IN-ONE-OF predicate and reported it as two separate unprovable subgoals in four instances. In addition, these four potential incompletenesses involved the permutations of the predicates in three macro predicates; we simply needed to specify different truth values for the macro predicates, rather than providing all possible combinations of TRUE and FALSE for the two predicates in each macro. The three macro predicates (and the one included macro) involved in the incompleteness reports are shown in Figures 6.67 through 6.70. The results from the PVS analysis are shown in Figure 6.71.

(defstep complete2
  (apply (then (skolem!)
               (rewrite-msg-off)
               (auto-rewrite-defs$)
               (do-rewrite$)
               (repeat* (try (bddsimp)
                             (try (record) (assert) (postpone))
                             (skip))))))

Figure 6.66: PVS strategy for proving guarding conditions complete.

Macro: Radar-Bad-For-RADARLOST-cycles
Definition: For each j in {1, 2, ..., RADARLOST}:
    OR: PREV_j(Radarout-EQ-0); Standby-Since(j)

Figure 6.67: The Radar-Bad-For-RADARLOST-cycles macro.

Macro: Radarout-EQ-0
Definition: AND: Own-Alt-Radio is credible; Radio-Altimeter-Status = valid

Figure 6.68: The Radarout-EQ-0 macro.

Macro: Climb-Desc.-Inhibit
Definition: AND: Climb-Inhibit in state Inhibited; Descend-Inhibit in state Inhibited

Figure 6.69: The Climb-Desc.-Inhibit macro.

Macro: Standby-Since(i)
Definition: There exists an a such that (0 <= a <= i):
    PREV_a(TCAS-Controller in state Fully-Operational)

Figure 6.70: The Standby-Since macro.

Recall (Chapter 2, Section 2.2.3.4.1) that the interpretation of an unprovable PVS subgoal is that the conjunction of the formulas in the antecedent implies the disjunction of the formulas in the consequent. Since a ⇒ b ≡ ¬a ∨ b, and since by DeMorgan's Law ¬(a1 ∧ a2) ≡ ¬a1 ∨ ¬a2, our unprovable subgoals can be written in disjunctive form: the disjunction of the negations of the individual formulas in the antecedent forms one disjunct, and the disjunction of the individual formulas in the consequent forms the other disjunct. To convert the subgoals into the proper form for our tabular notation, we simply need to apply DeMorgan's Law to the negation of the disjunctive formula. In the resulting formula, each individual formula from the original subgoal is a conjunct; thus, each unprovable subgoal may become one column in an AND/OR table. We show now that there is not necessarily a one-to-one correspondence between subgoals and columns in an AND/OR table.
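The conversion just described is a one-line transformation once a subgoal is represented as its two formula lists: the sequent is falsified exactly when every antecedent formula is TRUE and every consequent formula is FALSE, so each subgoal maps to one candidate AND/OR-table column. A sketch, with formulas abbreviated from Figure 6.71:

    def subgoal_to_column(antecedent, consequent):
        """Map an unprovable sequent to a column of (predicate, truth
        value) pairs: antecedent formulas TRUE, consequent formulas FALSE."""
        return [(f, "T") for f in antecedent] + [(f, "F") for f in consequent]

    subgoal1 = (["ESL_1?(Effective_SL)", "cTrue?(PREV_j_Radarout_EQ)"],
                ["On_Ground?(Own_Air_Status)", "ESL_3?(Effective_SL)",
                 "D_Inhibited?(Descend_Inhibit)"])
    for pred, tv in subgoal_to_column(*subgoal1):
        print(f"{pred:35} {tv}")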
An examination of the results shown in Figure 6.71 reveals that formulas {-2} and {5} in each of the reported incompletenesses require that the macro Radar-Bad-For-RADARLOST-Cycles (Figure 6.67) be FALSE. Formula {3} in each of the reported incompletenesses requires that the macro Climb-Desc.-Inhibit (Figure 6.69) be FALSE. Formula {4} in each of the reported incompletenesses requires that the macro Radarout-EQ-0 (Figure 6.68) be FALSE.

SUBGOAL.1 :
{-1} ESL_1?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  D_Inhibited?(Descend_Inhibit!1)
{4}  Valid?(Radio_Altimeter_Status!1)
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.2 :
{-1} ESL_2?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  D_Inhibited?(Descend_Inhibit!1)
{4}  Valid?(Radio_Altimeter_Status!1)
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.3 :
{-1} ESL_1?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  D_Inhibited?(Descend_Inhibit!1)
{4}  Own_Alt_Radio!1 = Credible!1
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.4 :
{-1} ESL_2?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  D_Inhibited?(Descend_Inhibit!1)
{4}  Own_Alt_Radio!1 = Credible!1
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.5 :
{-1} ESL_1?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  C_Inhibited?(Climb_Inhibit!1)
{4}  Valid?(Radio_Altimeter_Status!1)
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.6 :
{-1} ESL_2?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  C_Inhibited?(Climb_Inhibit!1)
{4}  Valid?(Radio_Altimeter_Status!1)
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.7 :
{-1} ESL_1?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  C_Inhibited?(Climb_Inhibit!1)
{4}  Own_Alt_Radio!1 = Credible!1
{5}  cTrue?(Standby_Since_j!1)

SUBGOAL.8 :
{-1} ESL_2?(Effective_SL!1)
{-2} cTrue?(PREV_j_Radarout_EQ!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  C_Inhibited?(Climb_Inhibit!1)
{4}  Own_Alt_Radio!1 = Credible!1
{5}  cTrue?(Standby_Since_j!1)

Figure 6.71: Unprovable subgoals from PVS analysis.

This means that the eight unprovable subgoals, without the macros expanded, become the two subgoals shown in Figure 6.72.

SUBGOAL.1 :
{-1} ESL_1?(Effective_SL!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  Climb_Desc_Inhibit
{4}  Radarout_EQ_0
{5}  Radar_Bad_For_RADARLOST_Cycles

SUBGOAL.2 :
{-1} ESL_2?(Effective_SL!1)
  |-------
{1}  On_Ground?(Own_Air_Status!1)
{2}  ESL_3?(Effective_SL!1)
{3}  Climb_Desc_Inhibit
{4}  Radarout_EQ_0
{5}  Radar_Bad_For_RADARLOST_Cycles

Figure 6.72: Reduced unprovable subgoals from PVS analysis.

Each of the two subgoals could become a separate column in an AND/OR table, thus representing two incompletenesses. However, recall that Effective-SL can be in only one of its sub-states at a time, and that we can represent this using the IN-ONE-OF predicate: Effective-SL IN-ONE-OF {1, 2}. Thus, the eight reported incompletenesses become the one incompleteness shown in Figure 6.73; i.e., this condition is reported as missing from the specification and as a condition that should be included in the specification of the guarding conditions to make the specification complete. We examined the specification and determined that the reported incompleteness was a true incompleteness.
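The merge performed above can be sketched as well. The helper below is hypothetical and simplified: it collapses columns that agree everywhere except on a set of mutually exclusive state predicates, replacing those predicates with a single IN-ONE-OF entry, which is how the two reduced subgoals become the single condition of Figure 6.73.

    def merge(columns, exclusive, merged_name):
        """Collapse columns differing only in mutually exclusive state
        predicates into one column over an IN-ONE-OF predicate."""
        rest = [tuple(sorted(c for c in col if c[0] not in exclusive))
                for col in columns]
        if len(set(rest)) == 1:  # columns agree outside the state predicates
            return [(merged_name, "T")] + list(rest[0])
        return None

    col1 = [("ESL_1", "T"), ("On_Ground", "F"), ("Climb_Desc_Inhibit", "F")]
    col2 = [("ESL_2", "T"), ("On_Ground", "F"), ("Climb_Desc_Inhibit", "F")]
    print(merge([col1, col2], {"ESL_1", "ESL_2"},
                "Effective-SL IN-ONE-OF {1,2}"))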
We added the missing condition to the specification of the guarding conditions and reran PVS analysis on the same machine and using the same strategy. PVS reported the specification of the guarding conditions complete in under six minutes.

Effective-SL IN-ONE-OF {1,2}                  T
Effective-SL IN-STATE 3                       F
Own-Air-Status = On-Ground                    F
Climb-Descend-Inhibit-Macro                   F
Radarout-EQ-0-Macro                           F
Radar-Bad-For-RADARLOST-Cycles-Macro          F

Figure 6.73: Condition to add to specification of guarding conditions to make the specification complete.

6.2.2 Transitions in State Effective-SL

Figures 6.74 through 6.80 show the guarding conditions for the transitions in the state Effective-SL. Note that the majority of the predicates involved in the guarding conditions shown in the figures are enumerated type predicates: in state, in one of, equal, and equal one of.

[Figure 6.74: Guarding condition for transition from any Effective-SL state to Effective-SL state 1. Location: Own-Aircraft ▷ Effective-SL; Trigger Event: Auto-SL-Evaluated-Event; Condition: an OR table over Auto-SL in state 1 and Mode-Selector = Standby; Output Action: Effective-SL-Evaluated-Event.]

[Figure 6.75: Guarding condition for transition from any Effective-SL state to Effective-SL state 2. Location: Own-Aircraft ▷ Effective-SL; Trigger Event: Auto-SL-Evaluated-Event; Condition: an AND/OR table over Auto-SL in state 2 and Mode-Selector = Standby (truth-value entries illegible); Output Action: Effective-SL-Evaluated-Event.]

[Figure 6.76: Guarding condition for transition from any Effective-SL state to Effective-SL state 3. (Same location, trigger event, and output action as Figure 6.75; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.77: Guarding condition for transition from any Effective-SL state to Effective-SL state 4. (Same location, trigger event, and output action as Figure 6.75; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.78: Guarding condition for transition from any Effective-SL state to Effective-SL state 5. (Same location, trigger event, and output action as Figure 6.75; the AND/OR table for the condition is illegible in this scan.)]

[Figure 6.79: Guarding condition for transition from any Effective-SL state to Effective-SL state 6. (Same location, trigger event, and output action as Figure 6.75; the AND/OR table for the condition is illegible in this scan.)]
We applied PVS analysis to the PVS specification of the original guarding conditions. PVS analysis showed the guarding conditions complete in under four seconds on a 200MHz Pentium Pro machine running Redhat Linux. We used the same strategy as in the previous completeness analysis for transitions in the state Auto-SL (Figure 6.66).

6.3 Summary and Heuristics

From the results we obtained and discussed in this chapter and from additional case studies we performed, we developed some general heuristics to assist analysts in applying our method efficiently and effectively.

First, for guarding conditions similar to those of the Auto-SL transitions that contain a large number of relational type predicates with a large number of possible multi-way interdependencies, such as (x < c1, x >= c1) and (x <= c2, x > c2), where x is an integer or integer function and c1 and c2 are integer constants, and (x < c3, x >= c4) and (x >= c4, x <= c5), where x is an integer or an integer function and c3, c4, and c5 are constants subject to the constraint c3 < c4 and c5 < c4, so that the predicates cannot be both TRUE or both FALSE at the same time, PVS analysis should be applied on the first iteration. Symbolic analysis (without some additional decision procedures) will not be effective, and there will be so much missing information in the analysis process that the indicator nodes method is not feasible; there are too many multi-way dependencies between the predicates.

Second, for guarding conditions similar to those of the Effective-SL transitions that contain a large number of enumerated type predicates, either symbolic analysis with our decision procedures or PVS analysis can be applied first. However, since PVS has more decision procedures available, applying PVS on the first iteration may yield fewer spurious or redundant error reports.

Third, if the guarding conditions being analyzed have many rows and columns with many levels of indirection, similar to the guarding conditions of the Intruder-Status transitions (for both TCAS II version 6.04A and version 7), it is best to apply symbolic analysis with variable reordering on the first iteration. The symbolic analysis report may show that some of the guarding conditions are consistent (complete). When the report from the symbolic analysis shows many inconsistencies (incompletenesses), then apply our domain axiom identification process to identify any missing domain axioms. Once the missing domain axioms have been identified, it is best to generate the PVS specifications of the guarding conditions and the conjecture theories to check for consistency and completeness, and add the identified domain axioms to the PVS specification. Then, apply the PVS prover to the conjectures, augmenting the proof process with the identified domain axioms. Since PVS has more decision procedures available, the output may report fewer spurious errors and fewer redundant errors when true errors exist. We do not start with PVS analysis for complex guarding conditions since, as our results show, there are some guarding conditions that PVS analysis fails on; this failure is generally because some information is lacking from the specification and the proof process. Once the PVS specification and proof process are augmented with the domain axioms identified using our indicator nodes approach, PVS analysis is effective.
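To make these multi-way interdependencies concrete, the following Python sketch derives candidate domain axioms among relational predicates over a single integer x by sampling which truth vectors are actually realizable. The constants and predicate names are illustrative stand-ins, not part of our tool.

from itertools import product

c1, c2 = 10, 20                      # illustrative constants
predicates = {
    'x_lt_c1': lambda x: x < c1,
    'x_ge_c1': lambda x: x >= c1,
    'x_le_c2': lambda x: x <= c2,
    'x_gt_c2': lambda x: x > c2,
}

# Truth vectors realizable by some integer x; sampling a range that crosses
# every constant visits every region the predicates can distinguish.
feasible = {tuple(p(x) for p in predicates.values())
            for x in range(c1 - 2, c2 + 3)}

# Any assignment a purely propositional analysis would consider, but that no
# integer x can realize, is a contradiction that must be supplied to the
# analysis model as a domain axiom.
for vector in product((False, True), repeat=len(predicates)):
    if vector not in feasible:
        print(dict(zip(predicates, vector)))

Of the sixteen Boolean assignments, only three are realizable here; the remaining thirteen are exactly the kind of undetected contradictions that inflate the analysis output with spurious errors.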
Fourth, for guarding conditions that contain a significant number of linear arithmetic predicates, non-linear arithmetic predicates with constants, or expressions involving division by constants when the expressions are structurally equivalent, it is best to apply PVS analysis to the guarding conditions on the first iteration; symbolic analysis using BDDs cannot effectively manage any of the aforementioned problems unless all of the predicates are structurally equivalent and there are no multi-way interdependencies between predicates (for example, three or more arithmetic predicates that cannot be satisfied at the same time).

Finally, the results we obtained for both our completeness analysis of transitions in the state Auto-SL and our consistency analysis of the guarding conditions on the transitions Proximate-Traffic to Other-Traffic and Proximate-Traffic to Potential-Threat suggest that a large number of reported inconsistencies or incompletenesses result from the large number of permutations that may exist between the predicates within a macro (for example, there may be six ways the truth values of three predicates in a one-column table can be permuted such that the resulting macro is unsatisfied). Rather than reporting all permutations of truth values that make a particular macro FALSE or TRUE, we can combine all of the permutations into the single macro from whence the predicates came, and just report that the macro has to be FALSE or TRUE, accordingly, making the analysis output still smaller and more manageable; a small sketch of this collapsing idea follows. We can also investigate the selective expansion of certain macro predicates, for example. Our analysis tool does not have these capabilities yet, but these are ideas that bear further consideration.
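In the sketch below, the three-predicate macro is a stand-in for a real macro definition, and the conjunction is only one possible macro structure.

from itertools import product

predicate_names = ('p1', 'p2', 'p3')     # predicates inside one macro (illustrative)

def macro(p1, p2, p3):                   # here the macro is simply their conjunction
    return p1 and p2 and p3

# Every truth-value permutation that falsifies the macro yields one expanded
# error report; collapsing replaces them all with a single macro-level report.
expanded = [values
            for values in product((False, True), repeat=len(predicate_names))
            if not macro(*values)]
print(len(expanded), 'expanded reports')
print('1 collapsed report: the macro must be FALSE')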
Chapter 7

Conclusions and Future Investigations

Statically analyzing requirements specifications to assure that they possess desirable properties is an important activity in any rigorous software development project. However, static analysis is performed on a formal model of the requirements that is an abstraction of the original requirements specification. Often, abstractions in the model lead to spurious errors in the analysis output. A high ratio of spurious errors to true errors in the analysis output makes it difficult, error-prone, and time consuming to find and correct the true errors in the specification. Two desirable properties that certain requirements documents should satisfy are completeness and consistency. The goal of the research described in this dissertation was to develop a method to analyze state-based requirements for completeness and consistency in a way that is fast enough and automated enough to be used on a day-to-day basis by practicing engineers, that generates analysis output with a small ratio of spurious errors to true errors, and that is scalable and generalizable. Analyzing state-based requirements for completeness and consistency generalizes to analyzing large logical expressions for tautologies and contradictions.

In Chapters 2 and 3 we showed that different analysis methods had different strengths and weaknesses. In Chapter 3 we showed that all analysis techniques suffer from a common problem, namely, spurious errors in the analysis output. Two methods for analyzing logical expressions for tautologies and contradictions are symbolic methods such as those that rely on Binary Decision Diagrams (BDDs), and reasoning methods such as theorem proving. We showed that symbolic methods are fast and fully automated, but generate output that may contain many spurious errors since the analysis model contains many abstractions. Reasoning methods tend to be slower and require more manual intervention, but generate more accurate output since the analysis model contains fewer abstractions. We showed that both symbolic analysis and reasoning methods work well for certain problems, but that neither method alone can always provide the analyst with accurate analysis output in a timely and automated manner.

In Chapter 3 we identified four different classes of spurious errors and the undetected contradictions that lead to them. We demonstrated that different analysis methods can deal effectively with different classes of spurious errors; i.e., different analysis methods can detect and eliminate different types of contradictions. We showed that no analysis method is able to detect contradictions between predicates when information about environmental constraints or information related to the structure of the state machine is missing from the analysis model.

In Chapter 4 we discussed that if one can identify the information that has been abstracted away and that is causing the spurious errors, and add the information back into the analysis process, then the spurious errors can be eliminated. In Chapter 5 we described a technique to help identify the missing information. We call the augmenting information domain axioms.

Our domain axiom identification technique uses the structure of the symbolic representation (BDDs) to point out potential predicates that may be involved in domain axioms that have been abstracted out of the analysis model. We found that nodes that occur on all or a majority of paths near the top or bottom of the BDD are indicative of predicates involved in domain axioms in the specification. We call such nodes indicator nodes, and we call the predicates associated with them indicator predicates. We demonstrated how we can use the indicator predicates to identify domain axioms in the specification that have been abstracted out of the analysis model and that need to be added back into the analysis model to eliminate the spurious errors.

The results of the research are: (1) an iterative approach that integrates the strengths of a symbolic and a reasoning component to analyze logical expressions for tautologies and contradictions while circumventing their weaknesses, and (2) a simple technique that uses a symbolic representation of logical expressions to help identify abstractions in a model that are causing spurious errors in the analysis output. We applied our method to some of the most complex portions of a large real-world avionics specification, the TCAS II requirements specification. During this application we learned some heuristics to help guide analysts in using our approach efficiently and effectively. We reported these heuristics at the conclusion of Chapter 6. The results we achieved demonstrate that our approach is feasible and promising.

7.1 Potential Future Work

There are many sub-tasks of our method that can be further automated. For example, the process of identifying the indicator nodes (predicates) can easily be automated. Automating the processes of identifying the relevant domain axioms and verifying that the identified domain axioms are truly axioms in the specification is more difficult and requires further research. Another sub-task that may be automated and that requires further investigation is the process of inspecting the samples from the symbolic analysis process and identifying potential domain axioms by identifying patterns in the samples. The latest version of PVS has provisions for batch mode, so it is possible that the PVS analysis process can be further automated as well.
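The path counting at the heart of the indicator-node sub-task is straightforward to mechanize. The Python sketch below counts, for a hand-built example BDD, how many root-to-leaf paths pass through each variable; the node layout is illustrative only, not taken from our case studies.

from collections import Counter

# A BDD node is (variable, low child, high child); leaves are True/False.
example_bdd = ('a',
               ('b', False, ('c', False, True)),
               ('b', ('c', False, True), True))

def walk(node):
    """Return (number of root-to-leaf paths, Counter of paths through each variable)."""
    if isinstance(node, bool):
        return 1, Counter()
    low_paths, low_hits = walk(node[1])
    high_paths, high_hits = walk(node[2])
    hits = low_hits + high_hits
    hits[node[0]] += low_paths + high_paths   # every path below passes through here
    return low_paths + high_paths, hits

total, hits = walk(example_bdd)
for variable, paths in hits.most_common():
    print(f'{variable}: on {paths} of {total} paths')
# Variables on all or most paths, near the top or bottom of the variable
# ordering, are candidate indicator predicates to inspect for domain axioms.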
In addition, our method of applying the different analysis techniques based on the results of particular iterations can easily be automated. Our analysis tool allows the analyst to select the desired analysis via command line options when the analysis is initiated. We could easily set up a shell script to interact with our analysis process and decide what action to take on subsequent iterations based on the output from the most recent iteration.
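A minimal driver in the spirit of such a script might look as follows; the tool name, command-line options, and output convention are hypothetical placeholders, not the actual interface of our analysis tool.

from subprocess import run

def error_count(options):
    """Run one analysis iteration; assumes a hypothetical 'analyze' executable
    whose first output line is the number of reported errors."""
    result = run(['analyze', *options], capture_output=True, text=True)
    return int(result.stdout.splitlines()[0])

# Iteration 1: plain symbolic (BDD) analysis.
if error_count(['--symbolic']) > 0:
    # Iteration 2: symbolic analysis with decision procedures enabled.
    if error_count(['--symbolic', '--decision-procedures']) > 0:
        # Iteration 3: fall back to PVS-based theorem proving.
        run(['analyze', '--pvs'])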
Analysis of input and output interfaces is also related to this research [29]. Constraints may be placed on when an input can be received and when an output can be generated. These constraints can be in the form of logical expressions. The analysis technique discussed in this dissertation easily extends to examining input and output interfaces for completeness and consistency.

7.2 Suggestions for Future Investigations

One suggestion for future investigations is to determine if our method for identifying domain axioms can be integrated into other analysis methods that use BDDs, such as PVS [44, 55], model checkers [4, 9, 10, 42], and SVC [47].

Another opportunity for future investigation of the applicability of our method is to investigate other application domains that rely on manipulation of logical expressions to check for tautologies and contradictions; for example, areas that rely on decision tables, and the SCR requirements specification formalism.

Finally, a more difficult problem that requires further investigation is what to do once the errors have been determined; for example, when incompletenesses are discovered, in what guarding conditions should the missing conditions be added, or should conditions instead be removed and the guarding conditions simplified? In other words, once the inconsistencies and incompletenesses have been found, how are they removed and corrected? These are difficult questions that need to be addressed in the near future.

APPENDICES

Appendix A

Macro Definitions

Macro: Alt-Separation-Test
Definition:  OR
  Other-Capability (v-111) = TA/RA
  Two-Of-Three (m-222)
  No-Vertical-Intent (m-213)
  Modified-Tau-Capped (f-241) < FRTHR
  ALT-RATE-&-SEPARATION-TEST
  Noncrossing-Biased-Climb (m-217)
  Own-Tracked-Alt (f-248) < Other-Tracked-Alt (f-243)
  Noncrossing-Biased-Descend (m-218)
  Own-Tracked-Alt (f-248) > Other-Tracked-Alt (f-243)
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  FRTHR = Time-To-CPA-Firmness-Dependent (t-274) [Conflict-SL (f-231), Other-Track-Firmness (f-243)]
  ALT-RATE-&-SEPARATION-TEST:  OR
    |Own-Tracked-Alt-Rate (f-247)| <= 600 ft/min
    |Other-Tracked-Alt-Rate (f-244)| <= 600 ft/min
    SAME-VERTICAL-DIRECTION
    Current-Vertical-Separation (f-231) > 600 ft
    Current-Vertical-Separation (f-231) > 850 ft
    [truth-value entries illegible in the scanned original]
  SAME-VERTICAL-DIRECTION:
    Own-Tracked-Alt-Rate (f-247) >= 0
    Other-Tracked-Alt-Rate (f-244) >= 0
    [truth-value entries illegible in the scanned original]

Figure A.1: Alt-Separation-Test Macro

Macro: Low-Firmness-Separation-Test
Definition:  OR
  Other-Capability (v-111) = TA/RA
  No-Vertical-Intent (m-213)
  Two-Of-Three (m-222)
  Modified-Tau-Capped (f-241) < TFRTHR
  Down-Separation (f-236)(low-firm) <= SENSEFIRM
  Up-Separation (f-261)(low-firm) <= SENSEFIRM
  CHANGE-ME
  Own-Tracked-Alt (f-248) < Other-Tracked-Alt (f-243)
  Own-Tracked-Alt (f-248) > Other-Tracked-Alt (f-243)
  Current-Vertical-Separation (f-231) > 150 ft
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  CHANGE-ME = Inhibit-Biased-Climb (f-233)(low-firm) > Down-Separation (f-236)(low-firm)
  SENSEFIRM = Low-Firmness-Alt-Threshold (t-275) [Alt-Layer-Value (f-229)]
  TFRTHR = Time-To-CPA-Firmness-Dependent (t-274) [Conflict-SL (f-231), Other-Track-Firmness (f-243)]

Figure A.2: Low-Firmness-Separation-Test Macro

Macro: Noncrossing-Biased-Climb
Definition:  OR
  Inhibit-Biased-Climb (f-233)(normal) > Down-Separation (f-236)(normal)
  Own-Tracked-Alt (f-248) - Other-Tracked-Alt (f-243) >= 300 ft
  Up-Separation (f-261)(normal) >= ALIM
  Own-Tracked-Alt (f-248) - Other-Tracked-Alt (f-243) <= -300 ft
  Down-Separation (f-236)(normal) >= ALIM
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  ALIM = Positive-RA-Altitude-Limit-Threshold (t-274) [Alt-Layer-Value (f-229)]

Figure A.3: Noncrossing-Biased-Climb Macro
Macro: Noncrossing-Biased-Descend
Definition:  OR
  Inhibit-Biased-Climb (f-233)(normal) > Down-Separation (f-236)(normal)
  Own-Tracked-Alt (f-248) - Other-Tracked-Alt (f-243) >= 300 ft
  Up-Separation (f-261)(normal) >= ALIM
  Own-Tracked-Alt (f-248) - Other-Tracked-Alt (f-243) <= -300 ft
  Down-Separation (f-236)(normal) >= ALIM
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  ALIM = Positive-RA-Altitude-Limit-Threshold (t-274) [Alt-Layer-Value (f-229)]

Figure A.4: Noncrossing-Biased-Descend Macro

Macro: No-Vertical-Intent
Definition:  OR
  Other-VRC = No-Intent
  Intent-Received in state No
  Other-Capability (v-111) = TA/RA
  [truth-value entries illegible in the scanned original]

Figure A.5: No-Vertical-Intent Macro

Macro: RA-Mode-Canceled
Definition:
  RA-Inhibit (m-219) = True
  PREV(RA-Inhibit (m-219)) = False
  [truth-value entries illegible in the scanned original]

Figure A.6: RA-Mode-Canceled Macro

Macro: RA-Inhibit
Definition:  OR
  Auto-SL (s-30) in state 2
  Mode-Selector (v-34) = TA-Only
  RA-INHIBIT-FROM-GROUND
  [truth-value entries illegible in the scanned original]
Abbreviations:
  RA-INHIBIT-FROM-GROUND = there exists i : Mode-S-Ground-Station[i].Ground-Commanded-SL (v-193) = 2
  Description: There exists at least one ground station that has commanded sensitivity level 2.

Figure A.7: RA-Inhibit Macro

Macro: Reply-Invalid-Test
Definition:
  Other-Capability (v-111) = TA/RA
  No-Vertical-Intent (m-213)
  Two-Of-Three (m-222)
  [truth-value entries illegible in the scanned original]

Figure A.8: Reply-Invalid-Test Macro

Macro: TCAS-TCAS-Crossing-Test
Definition:  OR
  Other-Capability (v-111) = TA/RA
  No-Vertical-Intent (m-213)
  Two-Of-Three (m-222)
  Modified-Tau-Capped (f-241) < TFRTHR
  |Own-Tracked-Alt-Rate (f-247)| <= 600 ft/min
  |Other-Tracked-Alt-Rate (f-244)| > 600 ft/min
  Other-Tracked-Relative-Alt (f-246) > 300 ft
  VMD <= 0
  Other-Tracked-Relative-Alt (f-246) < -300 ft
  VMD >= 0
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  TFRTHR = Time-To-CPA-Firmness-Dependent (t-274) [Conflict-SL (f-231), Other-Track-Firmness (f-243)]
  VMD = [Not defined in this document for this macro.]

Figure A.9: TCAS-TCAS-Crossing-Test Macro

Macro: Threat-Alt-Test
Definition:  OR
  Other-Alt-Reporting (v-113) = True
  Current-Vertical-Separation (f-231) < ZT
  Other-Tracked-Range-Rate (f-245) > 0
  |VMD| < ZT
  ADOT >= -1 ft/s (ZDTHR)
  Time-To-Co-Alt (f-253) < TVTHR
  Time-To-Co-Alt (f-253) < True-Tau-Capped
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  ADOT = Other-Tracked-Relative-Alt-Rate (f-246) * SIGN(Other-Tracked-Relative-Alt (f-246))
  VMD:
    Let:
      RZ = Own-Tracked-Alt (f-248) - Other-Tracked-Alt (f-243)
      RZD = Own-Tracked-Alt-Rate (f-247) - Other-Tracked-Alt-Rate (f-244)
      TRTRU = True-Tau-Capped
      TAUR = Modified-Tau-Capped (f-241)
      TVPCMD = TVPC-TBL (t-272) [Conflict-SL (f-231)]
    Then:
      VMD = Vertical-Miss-Distance (f-262) (RZ, RZD, TRTRU, TAUR, TVPCMD)
  ZT = Threat-Alt-Threshold (t-275) [Alt-Layer-Value (f-229)]
  TVTHR = [Not defined in this document for this macro.]

Figure A.10: Threat-Alt-Test Macro

Macro: Threat-Range-Test
Definition:  OR
  Other-Tracked-Range-Rate (f-245) > 10 ft/s
  Other-Tracked-Range > DMOD
  Modified-Tau-Capped (f-241) < TRTHR
  Other-Tracked-Range <= 12.0 nmi
  |Other-Tracked-Range-Rate (f-245) * Other-Tracked-Range| > H1
  NUISANCE-ALARM-FILTER
  [truth-value entries of the AND/OR table are illegible in the scanned original]
Abbreviations:
  DMOD = Threat-Minimum-Range-Threshold (t-273) [Conflict-SL (f-231)]
  TRTHR = Threat-Modified-Tau-Threshold (t-274) [Conflict-SL (f-231)]
  H1 = Threat-Minimum-Divergence-Threshold (t-273) [Conflict-SL (f-231)]
  NUISANCE-ALARM-FILTER:
    Tau-Rising (s-143) in state 3Plus
    Intruder-Status (s-136) in state Threat
    Other-Tracked-Range >= DMOD
    [truth-value entries illegible in the scanned original]

Figure A.11: Threat-Range-Test Macro

Macro: Two-Of-Three
Definition:
  there exists j in {1,2} :
    PREV_j(Other-Range-Valid (v-117)) = True
    Other-Range-Valid (v-117) = True
  [truth-value entries illegible in the scanned original]

Figure A.12: Two-Of-Three Macro

Appendix B

PVS Commands and Strategies

In Section B.1, we describe the commands and strategies available in PVS that we used to build our own PVS strategies. Section B.2 describes how to define proof strategies in PVS. In Section B.3, we describe the proof strategies we defined to assist the analyst with proving disjunctive and conjunctive logical expressions satisfiable and mutually exclusive. For additional details about the proof checker and the proof commands, see [55]. For details on the PVS specification language, see [44]. For information on how to use the PVS specification and verification system, see [15, 45].

B.1 Built-in PVS Commands and Strategies

The description of the PVS commands and strategies listed in this section comes from the online PVS help facility and from [55].

- (apply strategy): Applies a proof strategy, strategy, in a single atomic step.
- assert: Simplifies the current goal using the decision procedures.
- auto-rewrite-defs$: Installs all of the definitions used directly or indirectly in the original statement (i.e., relevant to the conjecture) as auto-rewrite rules.
- bddsimp: Performs propositional simplification using Binary Decision Diagrams (BDDs).
- do-rewrite$: Applies the rewrite rules to all of the formulas in the sequent.
- inst?: Tries to automatically instantiate a quantifier.
- (lemma name): Introduces an instance of the lemma named name; name may be an axiom, lemma, or definition.
- postpone: Marks the current goal as pending to be proved, and changes focus to the next remaining goal.
- record: Adds more assumptions to the data structures used by the decision procedures. The data structures are used to record the assumptions that are true in the current context.
- (repeat* step): A strategy that successively applies step until step does nothing.
- rewrite-msg-off: Turns off printing of applied auto rewrites and skips.
- skip: Does nothing. Primarily used in writing strategies where a step is required to have no effect unless some condition holds.
- skolem!: Skolemizes by automatically generating skolem constants.
- (then step1 step2): A sequencing strategy that first applies step1, and if any subgoals are generated, then applies step2 to each of the generated subgoals. If step1 has no effect, then step2 is applied to the original goal.
- (try step1 step2 step3): A strategy for subgoaling and backtracking that applies step1 to the current goal; if step1 succeeds and generates subgoals, then it applies step2 to the generated subgoals. If step1 does nothing, then step3 is applied to the current goal.

B.2 Defining Strategies

The user-defined strategies are saved in a file called pvs-strategies [55].
When the PVS prover is invoked, PVS loads the strategies from a file of the above name. A user-defined strategy definition has the form:

(defstep name (required-parameters
               &optional optional-parameters
               &rest parameters)
  strategy-expression
  documentation-string
  format-string)

where &rest is a keyword indicating that zero or more values may be provided for the indicated argument. Our user-defined strategies (described in Section B.3) do not use parameters.

The strategy definition generates both a defined rule name and a strategy name$ ([55]). The main difference between rules and strategies is that rules are atomic. When a rule is applied to a goal, it reduces the goal to zero or more subgoals in a single step, whereas a strategy generates a tree of rule-applications while reducing a goal to subgoals.

B.3 Our PVS Defined Strategies For Checking For Consistency and Completeness

We developed the following strategy consistent2 to assist the analyst in checking conjunctive logical expressions for mutual exclusion. The strategy is invoked using the command consistent2$.

(defstep consistent2 NIL
  (apply (then (rewrite-msg-off)
               (skolem!)
               (auto-rewrite-defs$)
               (do-rewrite$)
               (repeat* (try (bddsimp)
                             (record)
                             (postpone))))))

We developed the following strategy complete2 to assist the analyst in checking disjunctive logical expressions for satisfiability. The strategy is invoked using the command complete2$.

(defstep complete2 NIL
  (apply (then (skolem!)
               (rewrite-msg-off)
               (auto-rewrite-defs$)
               (do-rewrite$)
               (repeat* (try (bddsimp)
                             (try (record) (assert) (postpone))
                             (skip))))))

The skolemizeandrewrite strategy is used to perform only skolemization and rewriting. It is used when the proof process requires the addition of lemmas, since a single automated strategy cannot be used. When lemmas need to be added to the proof process, each lemma must be specified by name and individually included. The strategy is invoked using skolemizeandrewrite$.

(defstep skolemizeandrewrite NIL
  (apply (then (rewrite-msg-off)
               (skolem!)
               (auto-rewrite-defs$)
               (do-rewrite$))))
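For example, once a consistency or completeness conjecture has been generated and loaded, the analyst discharges it at the PVS prover's Rule? prompt with a single command:

Rule? (consistent2$)

and analogously with (complete2$) for the completeness conjectures.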
Bibliography

[1] R.J. Anderson, P. Beame, S. Burns, W. Chan, F. Modugno, D. Notkin, and J.D. Reese. Model checking large software specifications. In D. Garlan, editor, Proceedings of the Fourth ACM SIGSOFT Symposium on the Foundations of Software Engineering (SIGSOFT '96), volume 21, pages 156-166, November 1996.

[2] Joanne M. Atlee. Native model-checking of SCR requirements. Obtained via ftp; Dept. of Computer Science; University of Waterloo; Waterloo, Ontario.

[3] Joanne M. Atlee. Automated Analysis of Software Requirements. PhD thesis, University of Maryland, 1992.

[4] Joanne M. Atlee and John Gannon. State-based model checking of event-driven system requirements. IEEE Transactions on Software Engineering, pages 24-40, January 1993.

[5] BDD(3). Manual page, June 1993. Manual page for BDD package obtained via ftp from Carnegie-Mellon University.

[6] Ramesh Bharadwaj and Constance Heitmeyer. Verifying SCR requirements specifications using state exploration. In First ACM SIGPLAN Workshop on Automatic Analysis of Software, January 1997.

[7] Michael C. Browne et al. Automatic verification of sequential circuits using temporal logic. IEEE Transactions on Computers, C-35(12):1035-1043, December 1986.

[8] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, C-35(8):677-691, August 1986.

[9] Jerry R. Burch et al. Symbolic model checking for sequential circuit verification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(4):401-424, April 1994.

[10] J.R. Burch, E.M. Clarke, K.L. McMillan, et al. Symbolic model checking: 10^20 states and beyond. Information and Computation, 98(2):142-170, June 1992.

[11] William Chan, Richard Anderson, Paul Beame, and David Notkin. Combining constraint solving and symbolic model checking for a class of systems with non-linear constraints. In Orna Grumberg, editor, Computer Aided Verification: 9th International Conference, CAV '97 Proceedings, volume 1254 of Lecture Notes in Computer Science, pages 316-327, June 1997.

[12] William Chan, Richard Anderson, Paul Beame, and David Notkin. Improving efficiency of symbolic model checking for state-based system requirements. In International Symposium on Software Testing and Analysis (ISSTA), pages 102-112, March 1998.

[13] E.M. Clarke, E.A. Emerson, and A.P. Sistla. Automatic verification of finite-state concurrent systems using temporal logic. ACM Transactions on Programming Languages and Systems, 8(2):244-263, April 1986.

[14] E.M. Clarke et al. Using temporal logic for automatic verification of finite state systems. In K.R. Apt, editor, Logics and Models of Concurrent Systems, pages 3-26. Springer-Verlag, Berlin, 1985.

[15] Judy Crow, Sam Owre, John Rushby, et al. A tutorial introduction to PVS. Presented at WIFT '95: Workshop on Industrial-Strength Formal Specification Techniques, 1995. Available at http://www.csl.sri.com/fm-papers.html.

[16] David Duffy. Principles of Automated Theorem Proving. John Wiley & Sons Ltd., 1991.

[17] E. Allen Emerson and Joseph Y. Halpern. "Sometimes" and "not never" revisited: On branching versus linear time temporal logic. Journal of the Association for Computing Machinery, 33(1):151-178, January 1986.

[18] Melvin Fitting. First-Order Logic and Automated Theorem Proving. Springer-Verlag, 1990.

[19] Stephen J. Garland and John V. Guttag. A guide to LP, the Larch Prover, December 1991.

[20] D. Harel. Statecharts: A visual formalism for complex systems. Science of Computer Programming, 8:231-274, 1987.

[21] David Harel et al. Statemate: A working environment for the development of complex reactive systems. IEEE Transactions on Software Engineering, pages 403-413, April 1990.

[22] V. Hartonas-Garmhausen et al. Automatic verification of industrial designs. In Workshop on Industrial-Strength Formal Specification Techniques, pages 88-96, April 1995.

[23] Klaus Havelund and Natarajan Shankar. Experiments in theorem proving and model checking for protocol verification. In Formal Methods Europe (FME '96), pages 662-681, 1996.

[24] M.P.E. Heimdahl and N.G. Leveson. Completeness and consistency analysis of state-based requirements. IEEE Transactions on Software Engineering, TSE-22(6):363-377, June 1996.

[25] Mats P.E. Heimdahl. Analysis of state-based models: A survey. Technical Report CPS-94-59, Michigan State University, Department of Computer Science; 3115 Engineering Building; E. Lansing, MI 48824-1027, November 1994.

[26] Mats P.E. Heimdahl. Completeness and consistency in hierarchical state-based requirements. Technical Report CPS-94-69, Michigan State University, Department of Computer Science; 3115 Engineering Building; E. Lansing, MI 48824-1027, December 1994.

[27] Mats P.E. Heimdahl. Experiences and lessons from the analysis of TCAS II. Technical Report CPS-95-25, Michigan State University, Department of Computer Science; 3115 Engineering Building; E. Lansing, MI 48824-1027, June 1995.
[28] Mats P.E. Heimdahl and Nancy G. Leveson. Completeness and consistency analysis of state-based requirements. Technical Report CPS-94-53, Michigan State University, Department of Computer Science; 3115 Engineering Building; E. Lansing, MI 48824-1027, October 1994.

[29] Mats P.E. Heimdahl, Jeffrey M. Thompson, and Barbara J. Czerny. Specification and analysis of system level inter-component communication. IEEE Computer, pages 47-54, April 1998.

[30] Constance Heitmeyer et al. Consistency checking of SCR-style requirements specifications. In International Symposium on Requirements Engineering, March 1995.

[31] Constance Heitmeyer, Ralph D. Jeffords, and Bruce G. Labaw. Automated consistency checking of requirements specifications. ACM Transactions on Software Engineering and Methodology, 5(3):231-261, July 1996.

[32] Constance L. Heitmeyer and Bruce G. Labaw. Consistency checks for SCR-style requirements specifications. Technical Report NRL/FR/5540-93-9586, Naval Research Laboratory, Washington, DC 20375-5320, December 1993.

[33] Kathryn L. Heninger, J.W. Kallander, J.E. Shore, and D.L. Parnas. Software requirements for the A-7E aircraft. Technical Report 3876, Naval Research Laboratory, 1978.

[34] G.J. Holzmann. Design and Validation of Computer Protocols. Prentice-Hall, 1991.

[35] Alan J. Hu et al. Higher-level specification and verification with BDDs. In Participants' Proceedings of the Fourth Workshop on Computer-Aided Verification, pages 84-96, 1992.

[36] Introduction to the HOL Theorem Proving System. Available at http://lal.cs.byu.edu/lal/holdoc/Description/Description.html.

[37] D. Jackson. Exploiting symmetry in the model checking of relational specifications. Technical report, Carnegie-Mellon University, Pittsburgh, PA, 1994.

[38] Matthew S. Jaffe et al. Software requirements analysis for real-time process-control systems. IEEE Transactions on Software Engineering, pages 241-257, March 1991.

[39] Jeffrey J. Joyce and Carl-Johan H. Seger. Linking BDD-based symbolic evaluation to interactive theorem-proving. Technical Report TR-93-18, University of British Columbia, Department of Computer Science; Vancouver, B.C. V6T 1Z2 Canada, 1993.

[40] N.G. Leveson, M.P.E. Heimdahl, H. Hildreth, and J. Reese. TCAS II requirements specification.

[41] N.G. Leveson, M.P.E. Heimdahl, H. Hildreth, and J. Reese. Requirements specification for process-control systems. IEEE Transactions on Software Engineering, 20(9):684-707, September 1994.

[42] Kenneth L. McMillan. Symbolic Model Checking. Kluwer Academic Publishers, 1993.

[43] NASA, Office of Safety and Mission Assurance. Formal methods specification and verification guidebook for software and computer systems; draft 3.0.

[44] S. Owre, N. Shankar, and J.M. Rushby. The PVS Specification Language. Computer Science Laboratory, SRI International, Menlo Park, CA 94025, beta release edition, April 1993. Available at http://www.csl.sri.com/pvs.html.

[45] S. Owre, N. Shankar, and J.M. Rushby. User Guide for the PVS Specification and Verification System. Computer Science Laboratory, SRI International, Menlo Park, CA 94025, beta release edition, March 1993. Available at http://www.csl.sri.com/pvs.html.

[46] Christos H. Papadimitriou. Computational Complexity. Addison-Wesley Publishing Company, 1994.

[47] D.Y.W. Park, J.U. Skakkebaek, M.P.E. Heimdahl, B.J. Czerny, and D.L. Dill. Checking properties of safety critical specifications using efficient decision procedures.
In FMSP '98: Second Workshop on Formal Methods in Software Practice, pages 34-43, March 1998.

[48] Mauro Pezze, Richard N. Taylor, and Michal Young. Graph models for reachability analysis of concurrent programs, October 1994. Available at http://www.cs.purdue.edu/homes/young/papers/papers.html.

[49] Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill, Inc., fourth edition, 1997.

[50] Jean B. Rogers. A Prolog Primer. Addison-Wesley, 1987.

[51] J.M. Rushby, 1995. Personal communication.

[52] John Rushby. Formal specification and verification for critical systems: Tools, achievements, and prospects. In Electric Power Research Institute (EPRI) Workshop on Methodologies for Cost-Effective, Reliable Software Verification and Validation, pages 9-1 to 9-14, January 1992.

[53] John Rushby. Model checking and other ways of automating formal methods. Position paper for panel on Model Checking for Concurrent Programs, Software Quality Week, May/June 1995.

[54] Carl-Johan Seger. An introduction to formal hardware verification. Technical Report TR-92-13, University of British Columbia, Department of Computer Science; Vancouver, B.C., Canada V6T 1Z2, 1992.

[55] N. Shankar, S. Owre, and J.M. Rushby. The PVS Proof Checker: A Reference Manual. Computer Science Laboratory, SRI International, Menlo Park, CA 94025, beta release edition, March 1993. Available at http://www.csl.sri.com/pvs.html.

[56] Ian Sommerville. Software Engineering. Addison-Wesley Publishing Company Inc., fourth edition, 1992.

[57] T. Sreemani and J.M. Atlee. Feasibility of model checking software requirements. In 11th Annual Conference on Computer Assurance (COMPASS), pages 77-88, June 1996.

[58] Tutorial for the HOL Theorem Proving System. Available at http://lal.cs.byu.edu/lal/holdoc/tutorial.html.

[59] Jeannette M. Wing. A specifier's introduction to formal methods. IEEE Computer, pages 8-24, September 1990.

[60] Jeannette M. Wing and Mandana Vaziri-Farahani. Model checking software systems: A case study. Technical Report CMU-CS-95-128, Carnegie Mellon University, March 1995.

[61] Larry Wos et al. Automated Reasoning: Introduction and Applications. McGraw-Hill, Inc., second edition, 1992.

[62] Wei Jen Yeh. Controlling State Explosion in Reachability Analysis. Doctoral thesis, Purdue University, December 1993.

[63] Wei Jen Yeh and Michal Young. Compositional reachability analysis using process algebra. In Symposium on Software Testing, Analysis, and Verification (TAV4), pages 49-59. ACM SIGSOFT, ACM Press, October 1991.

[64] Michal Young and Richard N. Taylor. Combining static concurrency analysis with symbolic execution. IEEE Transactions on Software Engineering, 14(10):1499-1511, October 1988.

[65] Michal Young and Richard N. Taylor. Rethinking the taxonomy of fault detection techniques, September 1991. Available at http://www.cs.purdue.edu/homes/young/papers/papers.html.

[66] Michal Young, Richard N. Taylor, Kari Forester, and Debra Brodbeck. Integrated concurrency analysis in a software development environment. In Proceedings of the 3rd International Workshop on Testing, Analysis, and Verification, 1989.

[67] Michal Young, Richard N. Taylor, David L. Levine, Kari Forester, and Debra Brodbeck. A concurrency analysis tool suite: Rationale, design, and preliminary experience. Technical Report TR-128-P, Software Engineering Research Center, November 1994.