This is to certify that the dissertation entitled

A FORMAL APPROACH TO PROVIDING ASSURANCE TO DYNAMICALLY ADAPTIVE SOFTWARE

presented by Ji Zhang has been accepted towards fulfillment of the requirements for the Ph.D. degree in Computer Science and Engineering.

Major Professor's Signature
Date

MSU is an affirmative-action, equal-opportunity employer.

A FORMAL APPROACH TO PROVIDING ASSURANCE TO DYNAMICALLY ADAPTIVE SOFTWARE

By

Ji Zhang

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

Department of Computer Science

2007

ABSTRACT

A FORMAL APPROACH TO PROVIDING ASSURANCE TO DYNAMICALLY ADAPTIVE SOFTWARE

By Ji Zhang

Increasingly, software must adapt its behavior in response to changes in its run-time environment and user requirements in order to upgrade services, to harden security, or to improve performance. In order for adaptive software to be used in safety-critical and mission-critical systems, it must be trusted. Adaptive software assurance must be addressed at different stages of the software development process, including the requirements analysis phase, the design phase, and the implementation phase. An adaptation-oriented, systematic software development process that applies formal methods throughout the process can be used to provide assurance to adaptive systems. This dissertation introduces a number of specification languages, modeling techniques, and model checking techniques to support
a systematic approach to providing assurance to adaptive software from requirements through design and implementation phases. We introduce A-LTL, an adaptation extension to LTL, and a goal-based requirements analysis technique to formally specify adaptation requirements. We develop a model-based design technique to describe the designs that satisfy the adaptation requirements. Verification techniques are proposed to ensure that the artifacts produced in later phases conform to artifacts produced in earlier ones. Safe adaptation protocols and model checking techniques are applied to ensure that these designs are correctly followed and the requirements are satisfied in the implementation. We have applied our techniques to a number of case studies involving adaptive mobile computing applications.

Copyright by
JI ZHANG
2007

To my wife, and my parents and sister in China, for their support and encouragement.

Table of Contents

LIST OF TABLES ix
LIST OF FIGURES x

1 Introduction 1
1.1 Problem Statement ............................ 1
1.2 Summary of Contributions ........................ 5
1.3 Outline of the Thesis ........................... 7

I Requirements Analysis for Adaptive Software 8

2 A Formal Model For Adaptive Software 9
2.1 Background: Program Kripke Structure ................. 9
2.2 Finite State Model For Adaptive Software ............... 10
2.2.1 Formal Definition ......................... 11
2.2.2 Compositions of Adaptive Programs .............. 13
2.3 Autonomic Computing System ...................... 16
2.4 Related Work ............................... 16
2.5 Discussion ................................. 19

3 Temporal Logic for Specifying Adaptive Programs 20
3.1 Background: Linear Temporal Logic (LTL) ............... 21
3.1.1 Syntax ............................... 21
3.1.2 Semantics ............................. 22
3.2 The Adapt-Operator Extended LTL (A-LTL) ............. 22
3.2.1 Syntax ...............................
23
3.2.2 Semantics ............................. 23
3.2.3 Expressiveness ........................... 24
3.3 Adaptation Semantics .......................... 27
3.3.1 One-Point Adaptation ...................... 28
3.3.2 Guided Adaptation ........................ 29
3.3.3 Overlap Adaptation ........................ 29
3.3.4 Safety and Liveness Properties .................. 30
3.4 Specification Compositions ........................ 33
3.4.1 Neighborhood Composition ................... 33
3.4.2 Sequential Composition ..................... 34
3.5 Case Study: MetaSockets ......................... 36
3.5.1 Specifying Steady-State Programs ................ 36
3.5.2 Specifying Adaptations ...................... 37
3.5.3 Specifying Global Invariants ................... 39
3.6 Related Work ............................... 40
3.7 Discussion ................................. 42

4 Goal-Based Requirements Analysis
4.1 Background: Goal-Based Models ....................
4.2 Specifying Adaptation Requirements ..................
4.2.1 Local Properties and Global Invariants .............
4.2.2 Adaptation Variants .......................
4.3 Related Work ...............................
4.4 Discussion .................................

II Model Design and Analysis of Adaptive Software

5 MASD: Model-Based Adaptive Software Development
5.1 Background: Petri Nets .........................
5.2 Our Specification Approach .......................
5.2.1 Illustrative Adaptation Scenario .................
5.2.2 Constructing Models for Source and Target ..........
5.2.3 Constructing Adaptation Models ................
5.3 Reifying the Models ............................
5.3.1 Rapid Prototyping ........................
5.3.2 Model-Based Testing .......................
5.4 Case Study: Adaptive Java Pipeline Program .............
5.4.1 Specifying Global Invariants ...................
5.4.2 Specifying Local Properties ...................
5.4.3 Constructing Steady-State Models ................
5.4.4 Constructing Adaptation Models ................
5.4.5 Reifying the Models .......................
5.5 Related Work ...............................
5.6 Discussion .................................

6 Re-Engineering Software to Enable Adaptation
6.1 Background ................................
6.1.1 Aspect-Oriented Adaptation Enabling Technique .......
6.1.2 Metamodel-Based UML Formalization Technique .......
6.2 Model-Based Re-Engineering .......................
6.2.1 Requirements Analysis ......................
6.2.2 Design and Analysis .......................
6.2.3 Code Generation .........................
6.3 Case Study: Adaptive Java Pipeline Program .............
6.3.1 Requirements Analysis ......................
6.3.2 Design and Analysis .......................
6.3.3 Code Generation .........................
6.4 Extensions .................................
6.4.1 Collaborating Adaptive Components ..............
6.4.2 Adapting to Multiple Target Programs .............
6.5 Related Work ...............................
6.6 Discussion .................................

III Implementation of Adaptive Software

7 Modular Model Checking for Adaptive Software
7.1 Specifying Adaptive Systems .......................
7.1.1 Adaptive TCP Routing ......................
7.1.2 Verification Challenges ......................
7.2 Preliminary Algorithms and Data Structures ..............
7.2.1 Partitioned Normal Form ....................
7.2.2 Property Automaton .......................
7.2.3 Product Automaton Construction and Marking ........
7.2.4 Interface Definition ........................
7.3 Modular Verification ...........................
7.3.1 Global Invariant Verification ...................
7.3.2 Transitional Properties ......................
7.4 Details of Model Checking Algorithms .................
7.4.1 Simple Adaptive Programs ....................
7.4.2 N-plex Adaptive Programs ....................
7.4.3 Global Invariants .........................
7.4.4 Claims ...............................
7.5 Case Study: Adaptive Java Pipeline Program .............
7.6 Optimizations, Scalability, and Limitations ...............
7.6.1 Optimizations ...........................
7.6.2 Complexities and Scalability ...................
7.6.3 Limitations ............................
7.7 Related Work ...............................
7.8 Discussion .................................

8 Run-Time Model Checking for Adaptive Software
8.1 Run-Time Verification ..........................
8.1.1 Aspect-Oriented Instrumentation ................
8.1.2 Run-Time Verification ......................
8.2 Case Study: Adaptive Java Pipeline Program .............
8.2.1 Adaptation Requirements ....................
8.2.2 Instrumentation and Model Checking ..............
8.3 Related Work ...............................
8.4 Discussion .................................

9 Safe Dynamic Adaptation Protocol
9.1 Theoretical Foundations for Safe Adaptation ..............
9.1.1 Dependency Relationships ....................
9.1.2 Critical Communication Segments ................
9.1.3 Enabling Safe Adaptation ....................
9.2 Safe Adaptation Process ......................... 202
9.2.1 Analysis Phase .......................... 203
9.2.2 Detection and Setup Phase .................... 204
9.2.3 Realization Phase ......................... 204
9.2.4 Failures During Adaptation Process ............... 206
9.3 Case Study: Video Streaming ...................... 209
9.3.1 Safe Adaptation Path ....................... 210
9.3.2 Performing Adaptive Actions Safely ............... 213
9.4 Related Work ............................... 214
9.5 Discussion ................................. 217

10 Conclusions and Future Investigations 219
10.1 Contributions ............................... 221
10.2 Future Investigations ...........................
223
10.3 Final Thoughts .............................. 225

A Timed A-LTL 226
A.1 Background: TPTL ............................ 226
A.2 Timed Adapt Operator-Extended LTL ................. 228
A.2.1 Syntax ............................... 228
A.2.2 Semantics of TA-LTL ....................... 229
A.3 Specifying Adaptation Timing Properties ................ 230
A.3.1 One-Point Adaptation ...................... 233
A.3.2 Guided Adaptation ........................ 235
A.3.3 Overlap Adaptation ........................ 237
A.4 Case Study: Live Audio Streaming Program .............. 240
A.4.1 Forward Error Correction (FEC) Filters ............ 242
A.4.2 Specifying QoS Constraint with TA-LTL ............ 245
A.5 Related Work ............................... 247
A.6 Discussion ................................. 248
A.6.1 Expressiveness ........................... 248
A.6.2 Decision Procedure and Model Checking ............ 249
A.6.3 Summary ............................. 250

B Supporting Material for MASD 252
B.1 Rapid Prototyping for Adaptive GSM-Oriented Protocol ....... 252
B.2 Stub Files for Model-based Testing ................... 263

C Obtaining Statechart Diagrams from Java Code 267

LIST OF REFERENCES 274

List of Tables

9.1 Safe configuration set ........................... 212
9.2 Adaptive actions and corresponding cost ................ 212
A.1 Loss rate comparison of different FEC codes .............. 245
A.2 Delay comparison of different FEC codes ................ 245
C.1 Rules for the Metamodel-based Java to UML translation ....... 268

List of Figures

1.1 Assurance techniques in three phases ..................
2.1 A simple adaptive program ........................
2.2 Metamodel for adaptive program ....................
2.3 General architecture for autonomic system ...............
3.1 Three adaptation semantics .......................
4.1 Goal model for adaptive software ....................
4.2 Adaptation semantics graph .......................
5.1 A coloured Petri net example ......................
5.2 Goal model for adaptive software ....................
5.3 GSM-oriented encoding and decoding ..................
5.4 Audio streaming system connection ...................
5.5 Sender source net (GSM (1,2)) ......................
5.6 Receiver source net (GSM (1,2)) .....................
5.7 Lossy network net (environment model) .................
5.8 Sender target net (GSM (1,3)) ......................
5.9 Receiver target net (GSM (1,3)) .....................
5.10 Sender adaptation net ..........................
5.11 Receiver adaptation net .........................
5.12 Sender restricted source net .......................
5.13 Adaptation controller net ........................
5.14 Overall adaptation with controller net .................
5.15 Code excerpt from the sender net stub (SenderNet.stub) ......
5.16 Goal model for adaptive Java pipeline .................
5.17 Synchronized pipeline net ........................
5.18 Asynchronous pipeline net ........................
5.19 Adaptive pipeline adaptation net ....................
6.1 Aspect-oriented adaptation enabling ..................
6.2 Dataflow diagram for the proposed re-engineering approach .....
6.3 Goal model for adaptive software ....................
6.4 An example of the simple case of quiescent/entry states .......
6.5 An example of the general case of quiescent/entry states .......
6.6 The cascade adaptation mechanism ...................
6.7 Goal model for adaptive Java pipeline .................
6.8 Java code canonical form conversion ..................
6.9 Before and after canonical form conversion of sync.PipedInput.read() ...
6.10 Statechart translation of sync.PipedInput.read() ..........
6.11 Adaptation model for the piped input class ..............
6.12 The read() method for the adaptive piped input class ........
6.13 AspectJ code for adaptation enabling ..................
7.1 Case study: adaptive routing protocol .................
7.2 Illustration for Theorem 9 .......................
7.3 Markings for global invariant ......................
7.4 Markings for transitional properties ..................
7.5 Illustration for Proof 10 ........................
7.6 Illustration for Proof 11 ........................
7.7 Case study: adaptive Java pipeline ...................
7.8 Performance comparison between our approach and alternative approaches ...
8.1 The dataflow diagram for AMOEBA-RT verification .........
8.2 The adaptive Java pipeline .......................
8.3 The sequence diagram for the adaptive Java pipeline .........
8.4 Instrumentation aspect definition for the main() method ......
8.5 Instrumentation aspect definition for SyncOutput ..........
9.1 State diagram of a local process during adaptation ..........
9.2 State diagram of the adaptation manager during adaptation ....
9.3 Configuration of the video streaming application ...........
9.4 Safe adaptation graph (SAG) ......................
A.1 Three adaptation scenarios .......................
A.2 Real-time constraints for one-point adaptation ............
A.3 Real-time constraints for guided adaptation ..............
A.4 Real-time constraints for overlap adaptation .............
A.5 Audio streaming system connection ..................
A.6 Operation of block erasure code ....................
A.7 GSM encoding on a packet stream (d: data, c: copy) ........
A.8 Specifying adaptation semantics with TA-LTL ............
C.1 Metamodel for subset of Java legacy programs ............
C.2 Metamodel for subset of UML Statechart diagrams .........

Chapter 1

Introduction

Increasingly, computer software must adapt to changing conditions in both the supporting computing and communication infrastructure,
as well as in the surrounding physical environment [96]. To meet the needs of emerging and future adaptive systems, numerous research efforts in the past several years have addressed ways to construct adaptive software. Examples include support for adaptability in programming languages [165, 111], frameworks to design context-aware applications [43, 126], dynamic architectures for reconfigurable software components and connectors [2, 92, 98, 105], adaptive middleware platforms that insulate applications from external dynamics [15, 69], and adaptable and extensible operating systems [6, 11, 39]. However, despite these advances in mechanisms used to build adaptive software, the full potential of dynamically adaptive software systems can be realized only if we can establish and ensure the consistency of the system during and after adaptation [150, 152].

1.1 Problem Statement

Assurance is even more important when adaptive software is used in safety- and mission-critical systems. For safety-critical systems, errors in the software may have catastrophic consequences. For mission-critical systems, the cost to fix an error in the software after the mission has been launched may be exceedingly high. In order to be used in such safety- and mission-critical systems, the adaptive software must be trusted. Critical properties, such as safety properties, must be ensured in the software by the software development process.

In order for us to gain confidence in an adaptive system, assurance must be addressed at different stages of the software development process, including the requirements analysis phase, the design phase, and the implementation phase. During the requirements analysis phase, assurance for adaptation can be achieved only if the software developers fully understand the adaptation requirements for the software, which requires precise specifications of requirements. In the design phase,
the requirements for the adaptive software should be satisfied in the design and should be amenable to verification. Also, for maintainability purposes, the design of adaptive software should separate adaptive behavior from non-adaptive behavior. In the implementation phase, distributed adaptation actions should be coordinated to ensure critical constraints (e.g., dependency relationships) among components of adaptive software systems. Verification techniques may be used to verify the implementations against the requirements specifications for adaptive software. The above phases must be addressed systematically so that conditions expressed in the requirements are satisfied in the implementation.

Adaptive software is generally more difficult to specify, verify, and validate due to its high complexity [144]. Particularly, when adaptations are multi-threaded, the program behavior is the result of the collaborative behavior of multiple threads and software components. Adaptations require the adaptive actions of all the components and threads to be executed in a coordinated fashion.

Thesis statement: An adaptation-oriented software development process that systematically applies formal methods throughout the process can be used to gain assurance for adaptive systems.

As depicted in Figure 1.1, our approach provides assurance to the development process of adaptive software in the requirements, design, and implementation phases. In the requirements phase, formal specification languages can be used to specify adaptation requirements. We developed A-LTL [143, 146], an adaptation extension to the Linear Temporal Logic (LTL) [111], to enable the formal specification of the objectives for the software. We also developed TA-LTL [155] to specify real-time properties of adaptive systems. We applied the temporal logics to formally specify three frequently occurring adaptation semantics [143, 146].
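To give a flavor of such specifications, a one-point adaptation triggered by an adaptation request might be sketched as below. The notation here is schematic and illustrative only: Chapter 3 defines the actual A-LTL syntax and the precise semantics of the adapt operator, and the symbols used here are placeholders, not the dissertation's exact formulas.

```latex
% Illustrative sketch, not the dissertation's exact notation.
% \phi_S, \phi_T: LTL specifications of the source and target
% steady-state programs; REQ: the adaptation request;
% \rightharpoonup: the adapt operator.  Informally: the execution
% satisfies \phi_S up to some adaptation point after the request
% is received, and satisfies \phi_T from that point on.
(\phi_S \wedge \Diamond\,\mathit{REQ}) \rightharpoonup \phi_T
```

Guided and overlap adaptation refine this pattern: guided adaptation restricts the source behavior after the request until it is safe to switch, while overlap adaptation allows the target obligation to begin before the source obligation completes.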
The formal temporal logic specifications enable us to perform numerous automated analyses, such as verifying specification consistency, model checking for safety properties, etc. We introduced a goal-based requirements analysis technique [148] for analyzing adaptation requirements and constructing A-LTL and LTL specifications for adaptation. The formal specifications produced in this phase will be utilized by the design phase.

We have developed a model-based design technique [144] to describe the designs that satisfy the adaptation requirements and to mitigate the complexity in the design. The models that satisfy non-adaptive requirements are separated from those that satisfy adaptive requirements. We capture quiescent states (the states from which adaptation may safely start) in the design for adaptations in terms of specific adaptation contexts, including the program behavior, the requirements for the adaptation, and the adaptation mechanism. The design models can be analyzed using automated tools to determine whether they satisfy adaptation requirements. These models can then be used to generate rapid prototypes and test cases.

[Figure 1.1: Assurance techniques in three phases — Requirements: specify properties of adaptive programs; Design: create design models and model check them against adaptation requirements; Implementation: develop an adaptive system that ensures consistency before, during, and after adaptation.]

We also introduce a technique that uses formal models to provide assurance in re-engineering legacy software for adaptation [148].

In the implementation phase, safe adaptation protocols can be applied to ensure that adaptation designs will be followed correctly in the implementation. Our safe adaptation protocol [150, 152] ensures that safe states for an adaptation are reached before the adaptation occurs, and that the adaptation steps are performed in such a way that does not violate the
dependency relationships among components of adaptive software. Our approach provides centralized management of adaptations, thereby enabling optimizations when more than one set of adaptive actions can be used to satisfy a given adaptation goal.

Critical properties in adaptive software can be verified using formal analysis techniques. We have developed a modular model checking technique, AMOEBA, for adaptive software [145], which not only supports the verification of A-LTL, but also significantly reduces model checking complexity for adaptive software. We also developed a run-time model checker, AMOEBA-RT, for adaptive software [147]. AMOEBA-RT uses an aspect-oriented technique [139] to insert instrumentation code in adaptive software to collect run-time conditions. These conditions are then sent to a run-time model checking server, which verifies these conditions at run time against A-LTL/LTL properties.

1.2 Summary of Contributions

The following is a list of contributions made by the thesis.

Requirements Phase.

- We propose a finite state machine model for adaptive software [144, 145]. This model enables existing analysis techniques for traditional, non-adaptive software to be directly applied to adaptive software. Also, it enables us to exploit specific characteristics of adaptive software in order to optimize their analysis [144] (Chapter 2).

- We introduce A-LTL, the adapt operator-extended linear temporal logic. We use A-LTL to formally specify adaptation properties of adaptive software [143, 146] (Chapter 3).

- We generalize three adaptation semantics, namely the one-point adaptation, the guided adaptation, and the overlap adaptation, and formally specify the semantics in A-LTL [143, 146] (Chapter 3).

- We introduce a goal-based technique to systematically analyze the requirements for adaptive software and to generate formal requirements specifications in A-LTL [144, 149] (Chapter 4).
- We introduce TA-LTL, a real-time extension to A-LTL for specifying critical real-time constraints in adaptive software. Three types of critical properties are identified, namely safeness, liveness, and stability properties [153, 155] (Appendix A).

Design Phase.

- We propose a formal design modeling technique for adaptive software. We identify the key features of adaptive software design (i.e., quiescent states, adaptive states, and adaptive transitions) and introduce a state-based modeling approach (MASD) to capture these features with design models [144]. We also introduce rapid prototyping and model-based testing to carry these features in the models to their implementations [144] (Chapter 5).

- We propose a model-based re-engineering technique to enable adaptation in legacy software with assurance by leveraging the MASD approach [144], the metamodel-based language translation technique [97], and the aspect-oriented adaptation enabling technique [139]. We also introduce the cascade adaptation mechanism to handle state transformation from a source program to a target program in an adaptation [149] (Chapter 6).

Implementation Phase.

- We describe AMOEBA, a modular model checker that modularly verifies adaptive software against both LTL and A-LTL properties. The proposed technique reduces the model checking complexity by a factor of n, where n is the number of steady-state programs encompassed by the adaptive program [145] (Chapter 7).

- We describe a run-time model checker, AMOEBA-RT, that monitors the run-time conditions in adaptive software and verifies these conditions against LTL and A-LTL properties (Chapter 8).

- We introduce a safe software adaptation protocol that minimizes the adaptation cost and ensures consistencies among adaptive components. A retry/roll-back mechanism is employed to ensure the consistency of adaptive actions in the presence of failures [150, 152] (Chapter 9).
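The finite state machine model listed as the first contribution (and developed in Chapter 2) lends itself to a compact executable sketch. The toy code below is illustrative only, not the dissertation's tooling: it represents steady-state programs and adaptation sets as small FSMs, composes a simple adaptive program, and classifies an execution as adaptive or non-adaptive.

```python
# Toy sketch of the finite-state model for adaptive software:
# an adaptive program is a set of disjoint steady-state programs
# plus adaptation sets connecting them.  Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class FSM:
    states: set
    initial: set
    transitions: set  # set of (predecessor, successor) pairs
    labels: dict = field(default_factory=dict)  # state -> atomic propositions

def compose(source: FSM, adapt: FSM, target: FSM) -> FSM:
    """Simple adaptive program: union of states/transitions/labels,
    with initial states taken only from the source program."""
    return FSM(
        states=source.states | adapt.states | target.states,
        initial=set(source.initial),
        transitions=source.transitions | adapt.transitions | target.transitions,
        labels={**source.labels, **adapt.labels, **target.labels},
    )

def is_adaptive(execution, programs):
    """An execution is non-adaptive iff all of its states lie within a
    single steady-state program; otherwise it crosses an adaptation set."""
    return not any(set(execution) <= p.states for p in programs)

# Two one-state steady-state programs P1, P2 and an adaptation set A12
# whose initial state lies in the source and whose deadlock state lies
# in the target, as the formal definition requires.
P1 = FSM({"s0"}, {"s0"}, {("s0", "s0")})
P2 = FSM({"t0"}, {"t0"}, {("t0", "t0")})
A12 = FSM({"s0", "a0", "t0"}, {"s0"}, {("s0", "a0"), ("a0", "t0")})
SA = compose(P1, A12, P2)

print(is_adaptive(["s0", "s0"], [P1, P2]))        # non-adaptive run -> False
print(is_adaptive(["s0", "a0", "t0"], [P1, P2]))  # adaptive run -> True
```

The composition mirrors the comp operation of Chapter 2; the adaptive/non-adaptive distinction mirrors the execution classification given there.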
1.3 Outline of the Thesis

This thesis is presented in three parts to reflect the research investigations for the three key phases of software development: requirements, design, and implementation. Part I, comprising Chapters 1-4, discusses formal techniques for the requirements analysis phase of adaptive software development. Chapter 2 describes a finite state model for adaptive software, which serves as the foundation of the thesis. Chapter 3 introduces the A-LTL specification language and three frequently occurring adaptation semantics in adaptive software. Chapter 4 presents a goal-based requirements analysis approach to derive adaptation requirements specifications in A-LTL and LTL. Part II, comprising Chapters 5-6, introduces assurance techniques for the design phase of adaptive software development. Chapter 5 describes the model-based design technique for adaptive software. Chapter 6 extends the model-based design technique to provide assurance in re-engineering legacy software for adaptation. Part III, comprising Chapters 7-9, introduces assurance mechanisms and analysis techniques for the implementation phase of adaptive software development. Chapter 7 presents the AMOEBA modular model checking technique, and the run-time adaptive software verification technique AMOEBA-RT is described in Chapter 8. Chapter 9 describes a safe adaptation protocol for distributed adaptive software systems. Finally, Chapter 10 presents concluding remarks and outlines potential future investigations.

Several appendices elaborate details of the work. Appendix A describes TA-LTL, a real-time extension of A-LTL for specifying real-time constraints, including safety, liveness, and stability properties in adaptive software. Appendix B provides supporting material for Chapter 5. Appendix C provides supporting material for Chapter 6.
Part I

Requirements Analysis for Adaptive Software

Chapter 2

A Formal Model For Adaptive Software

This chapter describes a finite state formal model for adaptive software. This model serves as the foundation of our specification and analysis techniques throughout the thesis. This chapter is organized as follows. Section 2.1 gives background information on a finite state machine model for general software. Section 2.2 describes the formal finite state model for adaptive software and two composition operations. Section 2.3 outlines the architecture of an autonomic computing system. Related work is described in Section 2.4. Section 2.5 discusses possible extensions to the formal model introduced in this chapter.

2.1 Background: Program Kripke Structure

In this section, we introduce a finite state model for general software. A non-terminating program can be considered an ω-automaton [133], i.e., a finite state automaton that accepts ω-words (an ω-word can be considered as a sequence of infinite states). A program state can be represented by the truth value of a set of propositions in the state, called atomic propositions, denoted AP. Emerson et al. [37] defined a program as a Kripke structure M = (S, P, L), where

- S represents a finite set of program states.
- P ⊆ S × S represents nondeterministic program transitions. Relation P is total, meaning that for all s ∈ S, there exists s' ∈ S such that (s, s') ∈ P.
- L : S → 2^AP maps each state to a set of atomic propositions that are true in the state.

A computation of a program is an infinite sequence of states σ = s0, s1, ... such that for each i ≥ 0, (s_i, s_{i+1}) ∈ P.

2.2 Finite State Model For Adaptive Software

In general, a program exhibits certain behavior and operates in a certain domain, where a domain is defined to be the input space of the program [10]. A dynamically adaptive program operates in different domains, and changes its behavior at run time in response to changes of the domains. In this thesis, we
take a general view of programs, i.e., an adaptive program is a program whose state space can be separated into a number of disjoint regions, each of which exhibits a different steady-state behavior [2] and operates in a different domain. The state space exhibiting each different kind of steady-state [2] behavior is a steady-state program, or briefly a program. The states and transitions connecting one steady-state program to another are adaptation sets.

An adaptive program usually contains multiple steady-state programs and multiple adaptation sets connecting these programs. We term an adaptive program comprising n different steady-state programs an n-plex adaptive program. Initially, we simplify our discussion by focusing on the adaptation behavior starting from one program, undergoing one occurrence of adaptation, and reaching a second program. This type of adaptation behavior is represented by simple adaptive programs. A simple adaptive program contains a source program (the program from which the adaptation starts), a target program (the program in which the adaptation ends), and an adaptation set connecting the source program to the target program. Figure 2.1 shows a simple adaptive program, where S is the source program, T is the target program, and M is the adaptation set from S to T. Accordingly, a general n-plex adaptive program can be considered as the union of one or more simple adaptive programs.

[Figure 2.1: A simple adaptive program — two programs S and T connected by an adaptation set M; arrows denote transitions.]

2.2.1 Formal Definition

The Finite State Machine (FSM) model for adaptive software is formally defined as follows. Given a set of atomic propositions AP, a finite-state machine (FSM) is a tuple M = (S, S0, T, L), where

- S is a set of states;
- the initial state set
S0 ⊆ S is a subset of the states;
- transitions T ⊆ S × S is a set of state pairs, where (s, t) ∈ T represents that there is an arc from s (the predecessor) to t (the successor);
- the function L : S → 2^AP labels each state s with a set of atomic propositions that are evaluated true in s.

We represent the states, the initial states, the transitions, and the labels of a given program model M with S(M), S0(M), T(M), and L(M), respectively. The states in M that do not have successor states are deadlock states, denoted D(M). We can eliminate deadlock states in an FSM by introducing a self-loop to each deadlock state. An FSM is an Extended FSM (EFSM) if it does not contain a deadlock state.

We formally model an n-plex adaptive program as an EFSM that contains n steady-state programs P1, P2, ..., Pn, each of which is an EFSM. Each steady-state program represents a different program behavior for a run-time execution domain. We require that these steady-state programs be state disjoint, i.e., no two steady-state programs may share any state. The adaptation from Pi (the source program) to Pj (the target program) is modeled by an adaptation set A_{i,j}. An adaptation set contains the intermediate states and transitions connecting one steady-state program to another, representing a collaborative adaptation procedure, such as the insertion and removal of filters [150, 152]. An adaptation set is formally defined as an FSM with the following features:

- All initial states are states in the source program: S0(A_{i,j}) ⊆ S(Pi).
- All deadlock states are states in the target program: D(A_{i,j}) ⊆ S(Pj).
- No transition should return from the adaptation set to the source program: for all s, t ∈ S(A_{i,j}), (s, t) ∈ T(A_{i,j}) implies t ∉ S(Pi).
- No transition should return from the target program to the adaptation set: for all s, t ∈ S(A_{i,j}), (s, t) ∈ T(A_{i,j}) implies s ∉ S(Pj).
- There should be no cycles in the adaptation set.
This condition ensures the adaptation integrity constraint, i.e., the adaptation should finally reach a state of the target program.
- No two adaptation sets may share any states other than those in the source and target programs.

Figure 2.2 shows the metamodel for the FSM representation of adaptive programs. An adaptive program aggregates a set of atomic propositions, a set of steady-state programs, and a set of adaptation sets. Each steady-state program or adaptation set aggregates a set of states and transitions. A state has a state ID and satisfies a subset of the atomic propositions in the adaptive program. A transition is a pair of incoming and outgoing state IDs.

2.2.2 Compositions of Adaptive Programs

We define the sequential composition of two programs comp(Pi, Pj) to be a program with all the states and transitions in Pi and Pj, and with initial states coming from Pi:

comp(Pi, Pj) = (S, S0, T, L), where S = S(Pi) ∪ S(Pj), S0 = S0(Pi), T = T(Pi) ∪ T(Pj), and L = L(Pi) ∪ L(Pj).

The comp operation can be recursively extended to accept a list of programs:

comp(P_{i1}, P_{i2}, ..., P_{ik}) = comp(... comp(P_{i1}, P_{i2}), ..., P_{ik}).

[Figure 2.2: Metamodel for adaptive programs.]

Similarly, we define the union of two programs union(Pi, Pj) to be a program with all the states, transitions, and initial states from both Pi and Pj:

union(Pi, Pj) = (S, S0, T, L), where S = S(Pi) ∪ S(Pj), S0 = S0(Pi) ∪ S0(Pj), T = T(Pi) ∪ T(Pj), and L = L(Pi) ∪ L(Pj).

The union operation can be extended to accept a list of programs:

union(P_{i1}, P_{i2}, ..., P_{ik}) = union(... union(P_{i1}, P_{i2}), ..., P_{ik}).

A simple adaptive program SA_{i,j} from Pi to Pj includes the source program Pi, the target program Pj,
and the adaptation set A_{i,j} that comprises the intermediate states and transitions connecting Pi to Pj. Formally, we define the simple adaptive program from Pi to Pj as the composition SA_{i,j} = comp(Pi, A_{i,j}, Pj).

An n-plex adaptive program M contains the union of all the states, transitions, and initial states of the n steady-state programs and the corresponding adaptation sets. Formally:

M = comp(union(P1, ..., Pn), union(A_{1,2}, A_{1,3}, ..., A_{n,n-1})).

An execution of an n-plex adaptive program M is an infinite state sequence s0, s1, s2, ... such that si ∈ S(M), (si, si+1) ∈ T(M), and s0 ∈ S0(M) (for all i ≥ 0). A non-adaptive execution is an execution s0, s1, s2, ... such that all its states are within one program: si ∈ Pj for all si and some Pj. An adaptive execution is any execution that is not non-adaptive. An adaptive execution goes through one or more adaptation sets and two or more steady-state programs.

2.3 Autonomic Computing System

Kephart et al. [66] proposed autonomic computing as an approach to manage the exploding complexity of computing systems. In an autonomic computing system, software interacts with its run-time environment and adjusts its own behavior in order to be self-managing, self-healing, self-optimizing, self-protecting, etc. [24, 33, 50, 66]. A general autonomic system is depicted in Figure 2.3, where the central part of the system is an adaptive program interacting with its run-time environment. The adaptive program includes an adapt-ready program [130], a number of monitors, a decision maker, and an adaptation coordinator. An adapt-ready program is a program whose behavior can be altered at run time to exhibit a number of different steady-state behaviors in response to adaptation requests. The monitors collect run-time internal data in the adapt-ready program (e.g., variable values) and external data in the environment (e.g., network bandwidth), and evaluate a set of predefined conditions (e.g., lossrate <
0.2) upon these data. When a condition is met, the monitors send the condition to the decision maker. The decision maker, comprising a set of decision-making rules that are either predefined or reloadable at run time, makes adaptation decisions based on the conditions received from the monitors and sends these decisions to the adaptation coordinator. The adaptation coordinator decomposes an adaptation decision into a sequence of adaptation requests, which are then sent to the adapt-ready program to alter its behavior. The assurance addressed by this thesis cross-cuts all the components in the adaptive program.

2.4 Related Work

Adaptive software has been studied by researchers for more than a decade. Numerous researchers have proposed ways to formally specify dynamically adaptive programs [18].

[Figure 2.3: General architecture for an autonomic system: an adapt-ready program, monitors, a decision maker, and an adaptation coordinator, interacting with the environment.]

Graph-based approaches have been proposed to model the dynamic architectures of adaptive programs as graph transformations [57, 99, 128]. In the hypergraph approach [57], for example, an adaptive program is defined as a hypergraph G = (N, E, L), where N is a set of nodes representing communication ports, E is a set of edges representing components, and L is a set of labeling functions for nodes and edges. A node label includes the name of the port; an edge label includes the name of the component and its current status. The components can be connected to each other through ports. A software architecture style is a set of software architectures with similar structures, described by a hyper-edge context-free grammar.
A graph transformation production rule has the format L → R, where L describes the graph to be rewritten and R describes the graph to be generated. They defined three types of rules: construction rules are used to construct the initial structure of the software; dynamic evolution rules are used to transform the software structure dynamically; and communication rules describe the communication among components in the software.

Architecture Description Language (ADL)-based approaches have also been introduced to model adaptive software [73, 106, 131]. ADL approaches are similar to the graph-based approaches in the sense that a software architecture is represented by components connected with connectors. The major difference between the two categories is that in ADL-based approaches, structural changes are defined as disconnections and reconnections of the connectors, rather than as graph transformation production rules. For example, Darwin [91, 92] is an ADL used to specify software configurations and reconfigurations. In Darwin, components are represented as boxes. Each component declares its requirement ports (denoted as empty circles), representing the services required by the component, and provision ports (denoted as full circles), representing the services provided by the component. Components are connected with one another by binding provision ports with requirement ports. Dynamic structure is achieved by defining bindings between ports of component "types" rather than component instances. Thus, the ports of dynamically instantiated components can be identified and bound to one another. Darwin supports two techniques for dynamic software bindings: direct dynamic instantiation and lazy instantiation. Direct dynamic instantiation allows port references to be passed as messages among components.
Lazy instantiation allows a component to delay its instantiation until another component tries to use a service that it provides. The formal semantics of the Darwin architecture is defined in the π-calculus.

Both the graph-based approaches and the ADL-based approaches focus on structural changes of adaptive programs. In contrast, our approach focuses on behavioral changes of adaptive programs.

2.5 Discussion

Our work uses finite state machines to model adaptive software. This strategy enables existing analysis techniques for traditional non-adaptive software to be directly applied to adaptive software. It also enables us to exploit certain characteristics of adaptive software in order to optimize the analysis for adaptive software.

The finite state machine models for adaptive programs introduced in this chapter are abstract representations of adaptive software implementations. Existing tools, including Bandera [28], FLAVERS [26], and Bogor [117], process programs in high-level programming languages, such as Ada, Java, and C++, and generate finite state models. We envision that these techniques can be leveraged to generate the adaptive program models described in this chapter.

In order to specify real-time programs, researchers have also extended such models with real-time properties [3, 30, 37, 48, 88, 89]. The finite state model for adaptive programs introduced in this chapter can likewise be extended with real-time information, including that introduced in the timed state graph [73], to represent real-time adaptive programs.

Chapter 3

Temporal Logic for Specifying Adaptive Programs

This chapter describes the Adapt operator-extended Linear Temporal Logic (A-LTL) [143, 146], an extension to LTL [111], used to formally specify adaptation from one steady-state program to another. We introduce three basic adaptation semantics and use A-LTL to formally specify these semantics.
These basic adaptation semantics can be composed to derive more complex adaptation semantics.

This chapter is organized as follows. Section 3.1 gives background information on Linear Temporal Logic (LTL). Section 3.2 describes the syntax and the semantics of A-LTL, and Section 3.3 introduces three commonly occurring adaptation semantics specified in A-LTL. In Section 3.4, we introduce two composition techniques to construct complex A-LTL specifications from simple ones. We demonstrate A-LTL specification using an adaptive audio streaming example in Section 3.5. Related work is described in Section 3.6. Section 3.7 summarizes this chapter and discusses possible extensions to A-LTL.

3.1 Background: Linear Temporal Logic (LTL)

Linear Temporal Logic (LTL), first proposed by Pnueli [111], is an extension of propositional logic. It extends propositional logic by introducing four basic temporal operators: the existential operator ◇, the global operator □, the until operator U, and the next operator ○. An LTL formula is evaluated over an infinite sequence of states, where in each state a propositional sub-formula is either true or false. Informally, given formulae φ and ψ, ◇φ (read as "eventually φ") means that φ is eventually true; □φ (read as "always φ") means that φ is always true; φ U ψ (read as "φ until ψ") means that φ is true until ψ is true; and ○φ (read as "next φ") means that φ is true in the next state.

LTL has been adopted by the formal methods community to express temporal properties of software, including safety properties and liveness properties [81]. A safety property (usually having the form □(¬φ)) states that "nothing bad will happen". A liveness property (usually having the form □◇φ) states that "something good will eventually happen".

3.1.1 Syntax

The syntax of LTL is defined as follows:

1. Each atomic proposition is a formula;
2. if φ and ψ are formulae, then ¬φ,
φ ∧ ψ, φ U ψ, and ○φ are formulae.

Other operators are introduced as abbreviations of the above operators: ◇φ ≡ true U φ (i.e., "finally φ" is equivalent to "true until φ"), and □φ ≡ ¬◇¬φ (i.e., "always φ" is equivalent to "not finally not φ").

3.1.2 Semantics

Let AP be an underlying set of atomic propositions. The semantics of LTL is defined with respect to a linear Kripke structure M = (S, x, L), where

1. S is a finite set of states;
2. x : N → S is an infinite sequence of states representing an infinite linear timeline;
3. L : S → 2^AP is a labeling function giving, for each state, the atomic propositions that evaluate to true in that state.

M, x ⊨ φ indicates that the LTL formula φ is true under the given structure M and timeline x, or briefly x ⊨ φ when M is understood. The symbol x^i represents the i-th suffix of x, i.e., if x = s0, s1, s2, ..., then x^i = si, si+1, si+2, .... The semantics of LTL is defined as follows:

1. x ⊨ φ if and only if φ ∈ L(s0), for any atomic proposition φ;
2. x ⊨ φ ∧ ψ if and only if x ⊨ φ and x ⊨ ψ;
3. x ⊨ φ U ψ if and only if there exists j such that x^j ⊨ ψ and, for all k with 0 ≤ k < j, x^k ⊨ φ;
4. x ⊨ ○φ if and only if x^1 ⊨ φ.

An LTL formula φ is satisfiable if and only if there exists a Kripke structure M = (S, x, L) such that M, x ⊨ φ, in which case we say M defines a model of φ. An LTL formula φ is valid, denoted ⊨ φ, if and only if for all Kripke structures M = (S, x, L), we have M, x ⊨ φ.

3.2 The Adapt-Operator Extended LTL (A-LTL)

In this section, we introduce the Adapt-operator extended LTL (A-LTL), a temporal logic for the specification of adaptation behavior.
and Q are three tem— poral logic: formulae named the source specification. the target specification. and the adaptation constraint of the formuht, respectively. Informally. a, program satisfying “egg/I” means that the program initially satisfies (.5. In a certain state A, it stops being constrained by e), and in the next state B. it starts to satisfy U). and the two- state sequence (A, B) satisfies 9.. We use (bit: to specify the adaptation from a steady—state program that satisfies C) to another that satisfies C’- Formally, we define A-LTL as follows: o lf 0 is an LTL formula. then c; is also an A-LTL formula; ' l I . I 2 l o if :0 and U) are both A-LTL formulae. and Q is an LTL formula. then 5 :2 C) Au) is an A-LTL formula: 0 if (D and u) are. both A-LTL fornnllae. then m, oAU‘. ovt'v. Do. 0C), and cit/{UP are all A-LTL formulae. 3.2.2 Semantics We define A-LTL semantics over both finite state sequences (denoted by “#fin”) and infinite sequences (denoted by "[*:z’nf“. or ln'iefiy. [2). 0 Operators (:>, A. V. D. O. U. -1. etc.) are defined similarly as those in LTL. o If a is an infinite state sequence and (D is an LTL formula. then 0 satisfies a in A-LTL, if and only if 0 satisfies (2‘) in LTL. Formally. o ]:,,,f o¢$o [2 (3‘) holds in LTL. o If 0 is a finite state sequence and a) is an A-LTL formula. then o ]:f,-.,. (D<:>o’ [Zinf U9. where o’ is the infinite state sequence cmistructed by repeating the last state of 0. ,. s2 . . . . . o o [:mf o—wn if and only if there exist a finite state sequence 0’ = (so. sl. - - - . st) and an infinite state sequence o” = (3H1. st”. - . - ) such that o = o’ A o”, 0" [:fin (.5, 0” [=sz U. and (sk. 8k+1) [zfin Q. where (.6, UI and S2 are A—LTL for— mulae, and the A is the sequence concatenation operatt‘n'. . . , o . . Infm‘mally. a sequence satisfying o—e UI can be consrdered the coricatenation of two Stibsequences. where the first subsequence satisfies (D. the second subse- quence satisfies U’. 
and the two states connecting the two subsequences satisfy Ω. For convenience, we define empty to be the formula that is true if and only if it is evaluated upon single-state sequences, i.e., deadlock states: empty ≡ ¬○true [17]. Also, we write φ ⇀ ψ to represent φ ⇀^Ω ψ when Ω is true.

3.2.3 Expressiveness

We now show that A-LTL and LTL are equivalent in their expressive power. An ω-automaton is a finite state automaton that accepts ω-words [133]. The two most popular forms of ω-automata are Büchi automata and Muller automata [133]; it has been shown that Büchi automata and Muller automata are equivalent [133]. A counter [137] in an ω-automaton A is a sequence of distinct states s0, s1, ..., sm with m > 0 and a finite word π, such that for all i ∈ [0, m−1] there is a path labeled π in A from si to si+1, and from sm to s0. An ω-automaton is counter-free if it does not contain counters [137]. Wilke showed that LTL is equivalent to counter-free ω-automata and introduced an algorithm to convert a counter-free automaton to an LTL formula [137]. We establish the equivalence between LTL and A-LTL by showing that A-LTL is also equivalent to counter-free automata. The above statements apply to both infinite and finite words [137].

Theorem 1: A-LTL is equivalent to counter-free automata.

Proof 1: We need to prove (1) that any A-LTL formula can be accepted by a counter-free Büchi automaton, and (2) that any counter-free Büchi automaton can be expressed by an A-LTL formula.

1. We prove (1) by induction on the number of nested adapt (⇀) operators in an A-LTL formula.

Initial condition: Any A-LTL formula with 0 adapt operators is also an LTL formula, which can be accepted by a counter-free ω-automaton in the ω-word domain, and by a counter-free FSA in the finite word domain.

Assumption: Any A-LTL formula with no more than k nested adapt operators can be accepted by a counter-free (ω-)automaton.

For an A-LTL formula with
k + 1 nested adapt operators, it can be expressed as φ ⇀^Ω ψ, where φ and ψ have no more than k nested adapt operators. Based on the assumption, we can build an FSA A for φ in the finite word domain and a counter-free ω-automaton B for ψ in the ω-word domain. We then connect the accepting states of A to the initial states of B with transitions, and make all states in A non-accepting, to form an ω-automaton C. Clearly C accepts the set of executions that satisfy φ ⇀^true ψ. By selectively connecting the accepting states of A and the initial states of B, we can make sure all the connecting state pairs satisfy Ω. The resulting ω-automaton accepts exactly the set of sequences satisfying φ ⇀^Ω ψ. The same process applies to A-LTL in the finite word domain.

Next we show that C is also counter-free. Since A and B are counter-free, if there is a counter in C, the counter must include states both in A and in B. Since there is no transition from B to A, such a counter cannot exist. Therefore, C is counter-free. Therefore, any A-LTL formula with no more than k + 1 nested adapt operators can be accepted by a counter-free (ω-)automaton.

2. Any counter-free automaton can be expressed in LTL, and any LTL formula is also an A-LTL formula. Therefore, any counter-free automaton can be expressed in A-LTL.

A logic L1 is more expressive than another logic L2, denoted L1 ≥ L2, if and only if for any formula φ2 in L2 there exists a formula φ1 in L1 such that φ1 accepts exactly the same set of models that φ2 accepts [38]. We say a logic L1 is strictly more expressive than another logic L2, denoted L1 > L2, if and only if L1 ≥ L2 but L2 ≥ L1 does not hold. We say L1 and L2 are equivalent in expressive power, denoted L1 ≡ L2, if and only if L1 ≥ L2 and L2 ≥ L1.

Theorem 2: A-LTL is equivalent to LTL in expressive power: A-LTL ≡ LTL.

Proof 2: We prove that A-LTL is more expressive than LTL and vice versa.

1.
A-LTL is more expressive than LTL: A-LTL ≥ LTL. The proof of this statement is straightforward. Since A-LTL is a superset of LTL, for any formula φ of LTL there is also a formula φ' of exactly the same form in A-LTL, and φ' = φ. Therefore, we have A-LTL ≥ LTL.

2. LTL is more expressive than A-LTL: LTL ≥ A-LTL. For any formula φ in A-LTL, we can build a counter-free automaton that accepts exactly the set of words S that satisfy φ (Theorem 1). Thus there exists an LTL formula φ' that accepts S. Then we have LTL ≥ A-LTL.

3.3 Adaptation Semantics

In this section, we introduce three adaptation semantics specified in terms of A-LTL. In an adaptive program, we term the properties that must be satisfied by the program in each individual domain the local properties for the domain. The properties that must be satisfied by the program throughout its execution, regardless of the adaptations, are called adaptation global invariants (or global invariants). We use the term adaptation variants to denote the properties that change during the program's execution, i.e., from the local properties of the source program to the local properties of the target program. This section focuses on specifying adaptation variants using A-LTL.

Based on results presented in the literature [5, 22, 77] and our own experience [152], we summarize three commonly used semantics for adaptation. We assume that the local properties of the source program and the target program have both been specified in LTL. We call these local properties base specifications. We specify the adaptation from the source program to the target program with A-LTL by extending the base specifications of the source and the target programs. For some adaptations, the source/target program behavior may need to be constrained during the adaptation. These constraints, termed restriction conditions, are specified in LTL.
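To make the trace-splitting semantics of the adapt operator (Section 3.2.2) concrete, the following Python sketch encodes formulae as predicates over finite state sequences and checks φ ⇀^Ω ψ by enumerating split points. This is our own illustration only, restricted to finite traces; the names and the predicate encoding are assumptions, not part of the dissertation's tooling.

```python
from typing import Callable, Sequence, Set

State = Set[str]                    # a state = the atomic propositions true in it
Trace = Sequence[State]
Formula = Callable[[Trace], bool]   # a formula judges a finite trace

def holds(p: str) -> Formula:
    """Atomic proposition: p is true in the first state of the trace."""
    return lambda t: len(t) > 0 and p in t[0]

def eventually(phi: Formula) -> Formula:
    """<>phi over a finite trace: phi holds on some suffix."""
    return lambda t: any(phi(t[i:]) for i in range(len(t)))

def always(phi: Formula) -> Formula:
    """[]phi over a finite trace: phi holds on every suffix."""
    return lambda t: all(phi(t[i:]) for i in range(len(t)))

def adapt(phi: Formula, omega: Formula, psi: Formula) -> Formula:
    """phi adapts to psi under omega: the trace splits into a prefix satisfying
    phi and a suffix satisfying psi, with the connecting state pair satisfying
    omega (evaluated on the two-state sequence (s_k, s_{k+1}))."""
    def check(t: Trace) -> bool:
        return any(phi(t[:k + 1]) and psi(t[k + 1:]) and omega(t[k:k + 2])
                   for k in range(len(t) - 1))
    return check

# Illustrative spec: "always safe" adapts to "always fast", with the constraint
# that the last source state is quiet (all proposition names are hypothetical).
spec = adapt(always(holds("safe")), holds("quiet"), always(holds("fast")))
trace = [{"safe", "quiet"}, {"safe", "quiet"}, {"fast"}, {"fast"}]
print(spec(trace))  # True: splitting after the second state satisfies all parts
```

The split-point enumeration mirrors the existential quantification over σ' and σ'' in the formal semantics; over infinite words one would instead compose automata as in the proof of Theorem 1.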
We assume the adaptive program has moderate computational reflection capability [90], i.e., it is aware of its adaptation and of the currently running steady-state program. This capability can be achieved by simply introducing flag propositions in the program to identify its current steady-state program or adaptation status. We assume that a decision maker (as described in Section 2.3) that translates environment changes into specific adaptation requests is available. Our specification technique describes the expected program behavior in response to these requests. We use an atomic proposition A_REQ to represent the receipt of an adaptation request to a target program from the decision maker.

In the following, we summarize three commonly occurring basic adaptation semantic interpretations from the literature [5, 22, 77, 152], specified in terms of A-LTL. There are potentially many other adaptation semantics. In all three adaptation semantics, we denote the source and the target program base specifications as S_SPEC and T_SPEC, respectively. If applicable, the restriction condition during adaptation is R_COND. We assume the flag propositions to be parts of the specifications. We use the term fulfillment states to refer to the states where all the obligations of the source program are fulfilled, thus making it safe to terminate the source behavior.

3.3.1 One-Point Adaptation

Under one-point adaptation semantics, after receiving an adaptation request A_REQ, the program adapts to the target program T_SPEC at a certain point during its execution. The prerequisite for one-point adaptation is that the source program S_SPEC should always eventually reach a fulfillment state during its execution.

(S_SPEC ∧ ◇A_REQ) ⇀^Ω T_SPEC.    (3.1)

The formula states that the program initially satisfies S_SPEC. After receiving an adaptation request, A_REQ, it waits until the program reaches a fulfillment state, i.e.,
all obligations generated by S_SPEC are satisfied. Then the program stops being obligated to satisfy S_SPEC and starts to satisfy T_SPEC. This semantics is visually presented in Figure 3.1(a), where circles represent a sequence of states, solid lines represent state intervals, the label of each solid line represents the property that holds over the interval, and arrows point to the states in which adaptation requests are received. This semantics is straightforward and is explicitly or implicitly applied by most approaches (e.g., [5, 22, 152]) to deal with simple cases that do not require restraining the source behavior or overlapping the source and the target behavior.

3.3.2 Guided Adaptation

Under guided adaptation semantics (visually depicted in Figure 3.1(b)), after receiving an adaptation request, the program first restrains its source program behavior by a restriction condition R_COND, and then adapts to the target program when it reaches a fulfillment state. This semantics is suitable for adaptations whose source programs do not guarantee reaching a fulfillment state within a given amount of time. The restriction condition should ensure that the source program will finally reach a fulfillment state.

(S_SPEC ∧ (◇A_REQ ⇀^Ω1 R_COND)) ⇀^Ω2 T_SPEC.    (3.2)

This formula states that initially S_SPEC is satisfied. After an adaptation request, A_REQ, is received, the program should satisfy a restriction condition R_COND (marked with Ω1). When the program reaches a fulfillment state of the source, the program stops being constrained by S_SPEC and starts to satisfy T_SPEC (marked with Ω2). The hot-swapping technique introduced by Appavoo et al. [5] and the safe adaptation protocol [152] introduced in Chapter 9 use the guided adaptation semantics.

3.3.3 Overlap Adaptation

Under overlap adaptation semantics (visually depicted in Figure 3.1(c)), the target program behavior starts before the source program behavior stops.
During the overlap of the source and the target behavior, a restriction condition is applied to safeguard the correct behavior of the program. This adaptation semantics is appropriate when continuous service from the adaptive program is required. The restriction condition should ensure that the source program reaches a fulfillment state.

((S_SPEC ∧ (◇A_REQ ⇀^Ω1 R_COND)) ⇀^Ω2 true) ∧ (◇A_REQ ⇀ (T_SPEC ∧ (R_COND ⇀ true))).    (3.3)

This formula states that initially S_SPEC is satisfied. After an adaptation request, A_REQ, is received, the program should start to satisfy T_SPEC and also satisfy a restriction condition, R_COND (marked with Ω1). When the program reaches a fulfillment state of the source program, the program stops being obliged by S_SPEC and R_COND (marked with Ω2). The graceful adaptation protocol introduced by Chen et al. [22] and the distributed reset protocol introduced by Kulkarni et al. [77] use the overlap adaptation semantics.

3.3.4 Safety and Liveness Properties

Temporal logics are often applied to the specification of safety and liveness properties of a program. A safety property asserts that something bad never happens, while a liveness property asserts that something good will eventually happen [24]. Although general forms of safety and liveness properties are not preserved by the adaptation semantics defined above, some common forms of safety and liveness properties are preserved.

We define a formula ε to be a point-safety property if and only if ε = □¬η (read as "η never holds during execution"), where η is a point formula (i.e., a propositional formula).

[Figure 3.1: Three adaptation semantics: (a) one-point adaptation, (b) guided adaptation, (c) overlap adaptation. Legend: circles denote states before, during, and after adaptation; solid lines denote state intervals; S_SPEC: source specification; T_SPEC: target specification; A_REQ: adaptation request; R_COND: restriction condition.]
We define a formula ε to be a point-liveness property if and only if ε = □(α ⇒ ◇β) (read as "it is always the case that if α holds at some point P, then β will eventually hold at a point after P"), where both α and β are point formulae.

Theorem 3: All three adaptation semantics preserve point-safety properties. That is, if (S_SPEC ∨ T_SPEC) → □¬η, where η is a point property, then ξ → □¬η, where ξ is the adaptation specification based on any one of the three semantics.

For brevity, we only provide the proof for the one-point adaptation case; the other cases can be proved similarly.

Proof 3: Let the adaptation specification ξ be as in Formula 3.1, that is, ξ = (S_SPEC ∧ ◇A_REQ) ⇀^Ω T_SPEC. For an arbitrary sequence σ ⊨ ξ, there exist σ' and σ'' such that σ' ⊨ S_SPEC ∧ ◇A_REQ, σ'' ⊨ T_SPEC, and σ = σ' ⌢ σ''. Since (S_SPEC ∨ T_SPEC) → □¬η, we have σ' ⊨ □¬η and σ'' ⊨ □¬η. Therefore, σ ⊨ □¬η.

This theorem implies that if a propositional property (such as "a variable is never greater than a given value") should hold in both the source and the target programs, then the invariant also holds for a simple adaptive program under all three adaptation semantics. This conclusion does not apply to general temporal properties.

Theorem 4: Point-liveness properties are preserved by all three adaptation semantics. That is, if (S_SPEC ∨ T_SPEC) → □(α → ◇β), then ξ → □(α → ◇β), where ξ is the adaptation specification based on any one of the three semantics.

Again, we only provide the proof for the one-point adaptation case; the other cases can be proved similarly.

Proof 4: Let the adaptation specification ξ be as in Formula 3.1, that is,
ξ = (S_SPEC ∧ ◇A_REQ) ⇀^Ω T_SPEC. For an arbitrary sequence s0, s1, ... ⊨ ξ, there exists i such that s0, s1, ..., si ⊨ S_SPEC ∧ ◇A_REQ and si+1, si+2, ... ⊨ T_SPEC. Since (S_SPEC ∨ T_SPEC) → □(α → ◇β), we have s0, s1, ..., si ⊨ □(α → ◇β) and si+1, si+2, ... ⊨ □(α → ◇β). For an arbitrary state sj, if sj ⊨ α, then we have

- if j ≤ i, then there exists k (j < k ≤ i) such that sk ⊨ β;
- if j > i, then there exists k (j < k) such that sk ⊨ β.

That is, s0, s1, ... ⊨ □(α → ◇β). Therefore, we have ξ → □(α → ◇β).

3.4 Specification Compositions

Thus far, we have described how to specify simple adaptive programs. These specifications can be composed to describe complex adaptation requirements.

3.4.1 Neighborhood Composition

A complex adaptive program may adapt to different target programs in response to different adaptation requests. The neighborhood composition is defined to specify multiple adaptation options from a single program. We define the neighborhood adaptive program of a program S to be the union of all the simple adaptive programs that share the same source program S. An execution starting from S can either adapt to a target program if a corresponding adaptation request is received, or remain in S if no adaptation request is received. Assume that the property for the adaptation from S to the i-th target program Ti is specified with an A-LTL formula ST_{i,SPEC}, and the property of S is specified with S_SPEC. We can construct the specification for the neighborhood of S as the disjunction of S_SPEC and the ST_{i,SPEC}. Let N_SPEC be the neighborhood specification of S; we have

N_SPEC = (⋁_{i=1}^{k} ST_{i,SPEC}) ∨ S_SPEC,    (3.4)

where k is the number of simple adaptive programs sharing the same source program S.

3.4.2 Sequential Composition

A complex adaptive program may sequentially perform adaptations more than once during a single execution. For example,
an execution of an adaptive program may start from a program A, then sequentially perform adaptations from A to B to obtain program B, and from B to C to obtain program C. This execution should sequentially satisfy the local properties of A, B, and C, and should satisfy the adaptation variants from A to B and from B to C during adaptation. We term the properties that must be satisfied by executions going through multiple steady-state programs transitional properties. Specifications for transitional properties can be constructed from adaptation semantics using sequential compositions as follows.

Assume that the local properties of A, B, and C are specified with A_SPEC, B_SPEC, and C_SPEC, respectively. The specification for the adaptation from A to B under a given adaptation semantics is a function of A_SPEC and B_SPEC: AB_SPEC = ADAPT1(A_SPEC, B_SPEC). The B to C adaptation specification under a given adaptation semantics is a function of B_SPEC and C_SPEC: BC_SPEC = ADAPT2(B_SPEC, C_SPEC). The specification of A to B to C may be constructed by substituting AB_SPEC for B_SPEC in BC_SPEC: ABC_SPEC = ADAPT2(AB_SPEC, C_SPEC).

3.5 Example: Adaptive Audio Streaming

- DES64 program:
  DES64_SPEC = (□DES64_UP) ∧ (□(DES64Input(x) → ◇DES64Output(x))).

DES64Input(x) (resp. DES64Output(x)) is an event indicating the input (resp. output) of a packet to (resp. from) the MetaSocket under the DES64 configuration.¹ The flag proposition DES64_UP indicates that the program is running under the DES64 configuration. The formula states that, under this configuration, for every input packet to be encoded by the DES64 filter, the MetaSocket should eventually output a DES64-encoded packet. The following program specifications can be interpreted in a similar way.

- DES128 program:
  DES128_SPEC = (□DES128_UP) ∧ (□(DES128Input(x) → ◇DES128Output(x))).

- DES64COM program:
  DES64COM_SPEC = (□DES64COM_UP) ∧ (□(DES64COMInput(x) → ◇DES64COMOutput(x))).

- DES128COM program:
  DES128COM_SPEC
= (□DES128COM_FL) ∧ (□(DES128COMInput(x) → ◇DES128COMOutput(x))).

3.5.2 Specifying Adaptations

To determine the semantics of each adaptation, we consider the following three factors: (1) The MetaSocket component is designed for the transmission of live video and audio data; therefore, we should minimize data blocking. (2) We should not allow both the source and the target programs to input simultaneously, because that would cause ambiguity in the input. (3) We should not allow the target program to output data before any output is produced from the source program; otherwise, it would complicate the logic on the receiver.

Strictly speaking, the notation DES64Input(x) is a predicate. However, LTL requires the underlying logic to be propositional. Here we implicitly employ the data abstraction introduced by Dwyer and Pasareanu [36], which converts predicates to propositions by using constant values to represent arbitrary values.

Based on the above considerations, it is appropriate to apply the overlap semantics with conditions prohibiting the types of overlap discussed above. We use the DES64 to DES128 adaptation as an example to demonstrate the construction of a simple adaptive program specification. The restriction condition is that neither the input nor the output should overlap between the source and the target programs. Formally, in LTL:

  R_COND(DES64-DES128) = □(¬DES64Input(x) ∧ ¬DES128Output(x)).    (3.6)

The intuition for this restriction condition is that the program should not accept any more DES64 inputs and will not produce any DES128 outputs until all DES64 outputs have been produced. Given the source and target program specifications, the overlap semantics, and the restriction condition, we apply Formula 3.3 to derive the following specification:

  DES64-DES128_SPEC = ((DES64_SPEC ∧ (◇AREQ(DES128) ⇀ R_COND(DES64-DES128))) ⇀ true)
                      ∧ (AREQ(DES128) ⇀ (DES128_SPEC ∧ (R_COND(DES64-DES128) ⇀ true))).
  (3.7)

Formula (3.7) states that after the program receives an adaptation request to the DES128 program (AREQ(DES128)), it should adapt to the DES128 program. Furthermore, the behavior of DES64 and DES128 may overlap, and during the overlapping period the program should satisfy the restriction condition R_COND(DES64-DES128). In this example, the Ω notation of the adaptation operators is not used: we simply assign true to Ω in the four adapt operator locations. With the same approach, we specify the other simple adaptive programs in the adaptive program. Furthermore, both the source and the target specifications are point liveness properties. We can unify them with the following formula by disregarding the types of inputs and outputs:

  □(Input(x) → ◇Output(x)).    (3.8)

According to Theorem 4, we can conclude that the adaptation also satisfies the point liveness property.

3.5.3 Specifying Global Invariants

In addition to the specification of adaptation behavior, the program should also satisfy a set of adaptation invariants to maintain its integrity.

• Security invariant: At any time during execution, insecure output should not be produced, i.e., all output packets should be encoded by a DES filter:

  INV1 = □¬insecureOutput(x).

• QoS invariant: During program execution, the sender MetaSocket should not cause any loss of packets, i.e., all input packets should be output:

  INV2 = □(Input(x) → ◇Output(x)).

Note that this invariant is already implied by the point liveness preservation property of the adaptation semantics (Formula 3.8).

• Precedence invariant: The MetaSocket should not output a packet before the corresponding input has been received:

  INV3 = □(¬Output(x) U Input(x)).
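The three invariants above are checkable over execution traces at run time (INV2, being a liveness property, only at the end of a terminated trace). The sketch below illustrates such a monitor; the event-tuple encoding and all names are illustrative assumptions, not the MetaSocket implementation:

```python
# Hypothetical trace monitor for the three MetaSocket invariants (INV1-INV3).
# A trace is a list of (event, packet_id, encoded) tuples, where event is
# 'input' or 'output', and encoded is only meaningful for outputs.

def check_invariants(trace):
    seen_inputs, pending = set(), set()
    for event, pkt, encoded in trace:
        if event == 'input':
            seen_inputs.add(pkt)
            pending.add(pkt)     # packet awaiting a matching output (INV2)
        elif event == 'output':
            if not encoded:                # INV1: insecure output produced
                return (False, 'INV1')
            if pkt not in seen_inputs:     # INV3: output precedes its input
                return (False, 'INV3')
            pending.discard(pkt)
    if pending:                            # INV2: some input never output
        return (False, 'INV2')
    return (True, None)

ok_trace = [('input', 1, None), ('output', 1, True),
            ('input', 2, None), ('output', 2, True)]
print(check_invariants(ok_trace))              # (True, None)
print(check_invariants([('output', 3, True)])) # (False, 'INV3')
```

Note that this finite-trace check is a bounded approximation: on an infinite execution, INV2 can never be definitively violated at any finite point.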
3.6 Related Work

We now introduce work related to A-LTL and the adaptation semantics discussed in this chapter, including work related to the classification of adaptation semantics, and work related to the specification of critical properties in adaptive software.

Biyani et al [14] classified distributed software adaptation into overlap adaptation, where the source and the target programs overlap during adaptation, and non-overlap adaptation, where the source and the target programs are not present in the system simultaneously during adaptation. They further refined overlap adaptation into three sub-categories: quiescence adaptation, parallel adaptation, and mixed-mode adaptation. In both quiescence and parallel adaptations, the source and the target programs are not allowed to communicate with each other. The key difference is that although quiescence adaptation allows the source and the target programs to coexist in the system, they are not allowed to coexist in a single process, while parallel adaptation allows both. In mixed-mode adaptation, the source program is allowed to directly communicate with the target program. Their classification is largely inspired by ours, where their non-overlap adaptation subsumes both the one-point adaptation and the guided adaptation semantics as described in this chapter. The different types of adaptation they introduced can be specified by our approach. For example, in quiescence adaptation, the exclusive relationship between the source and the target components in each process can be specified by an overlap adaptation semantics with restriction conditions preventing the different versions of the same functionality from coexisting. Parallel adaptation can be specified similarly, with the restriction conditions relaxed to allow different versions of the same functionality to coexist.
Mixed-mode adaptation can be modeled as a sequential composition of two one-point adaptations: from S_SPEC to M_SPEC, and from M_SPEC to T_SPEC, where the intermediate specification M_SPEC allows the source program to communicate with the target program. Their focus is to study assurance techniques for mixed-mode adaptation, rather than to specify adaptation semantics using a temporal logic as in our approach.

Other temporal logics have also been proposed to specify properties of adaptive software. Feather et al [41] proposed using a real-time temporal logic, FLEA [31], to specify consistency constraints in adaptive software. The FLEA specification language describes orders and durations of events using five basic temporal operators: OR, THEN, COUNT, WITHIN, and START. A FLEA compiler automatically converts a FLEA expression into run-time monitoring code that verifies the conformance between a sequence of events and the FLEA expression. Compared to A-LTL, FLEA has the capability of expressing quantitative timing constraints, as do other real-time temporal logics [3, 48, 107]. However, like LTL, FLEA is not convenient for specifying adaptation requirements, since it is not designed specifically for that purpose. The properties introduced in their paper were all non-adaptive properties rather than adaptation properties as described in this chapter.

Other non-temporal-logic formal specification languages have also been proposed to specify properties of adaptive software. Kramer et al [72] described how to use a process algebraic language, FSP, to specify the critical properties that must be satisfied by adaptive software described in Darwin. A critical property is specified as an FSP process, which identifies a set of allowable action sequences. New FSP processes can be composed from existing ones using primitive operators including prefix ("->"), choice ("|"), and parallel composition ("||").
They demonstrated using FSP to specify that a propositional property is satisfied in certain states. Allen et al [2] used Dynamic Wright to specify the behavior of adaptive software, and used a process algebraic language, CSP [58], to describe the critical properties that must be satisfied by the software. Since process algebras are generally considered design-level languages, the properties that the above two approaches are concerned with are also design-level properties. In contrast, the A-LTL introduced in this chapter is intended for expressing requirements-level properties. Furthermore, the properties they specified are not used to describe how the system may change, i.e., adaptation variants. Instead, they are concerned with global invariants. In contrast, we specify both global invariants and adaptation variants in adaptive software with A-LTL/LTL.

3.7 Discussion

Adaptation semantics must be precisely specified at the requirements level so that they can be well understood and correctly implemented in later phases of the software development process. After an adaptation temporal specification is constructed, it can serve as guidance for the adaptation developers to clarify the intent of the adaptive program. It also enables us to perform model checking to verify the correctness of the program model against the temporal logic specifications, with both static verification and run-time model checking.

Our technique can be used to specify adaptation behavior of autonomic computing systems to achieve self-healing, self-protecting, self-managing, and so on [24, 33, 50]. Although environment changes such as failures are unexpected, after the decision maker has translated the changes into adaptation requests, the program behavior in response to the requests is generally deterministic. Our technique can be applied to describe the expected program behavior of adaptive programs in response to these adaptation requests.
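The adapt operator at the heart of these specifications can be illustrated over finite traces. The sketch below is a simplifying approximation (A-LTL is defined over infinite state sequences); formulas are modeled as plain Python predicates on trace segments, and all names are illustrative:

```python
# Finite-trace sketch of the A-LTL adapt operator: a trace satisfies
# phi ⇀ psi if it splits into a prefix satisfying phi (and the adaptation
# constraint omega) and a suffix satisfying psi.

def adapt(phi, psi, omega=lambda tr: True):
    def holds(trace):
        return any(phi(trace[:i]) and omega(trace[:i]) and psi(trace[i:])
                   for i in range(1, len(trace)))
    return holds

# Example: a run of steady-state behavior A adapting to steady-state B.
all_a = lambda tr: len(tr) > 0 and all(s == 'A' for s in tr)
all_b = lambda tr: len(tr) > 0 and all(s == 'B' for s in tr)
a_to_b = adapt(all_a, all_b)

print(a_to_b(['A', 'A', 'B', 'B']))  # True: splits after the second state
print(a_to_b(['A', 'B', 'A']))       # False: no valid split point
```

The existential split over index i directly mirrors the prefix/suffix decomposition used in the proof of point liveness preservation earlier in this chapter.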
We chose to use A-LTL to specify adaptation semantics due to two major considerations: effectiveness and simplicity. Compared to LTL, A-LTL is more effective in conveying adaptation objectives in specifications. Some other logics, including the choppy logic [119], the propositional interval temporal logic (PITL) [17, 52, 102], and the interval temporal logic (ITL) [20], are also sufficient to specify adaptation behavior. However, they are generally more complex and verbose. We find A-LTL to be simple and sufficiently effective for specifying adaptation behavior, given that the logic is specifically designed to support adaptation.

A-LTL specifies only the relative temporal ordering among events and system states that occur during the adaptation process. While this capability is suited to specifying many types of adaptive behavior, A-LTL is insufficient to specify systems with real-time requirements, where absolute timing may play an important role. To address this shortcoming, we developed TA-LTL, a real-time extension to A-LTL; the details of the specification language are introduced in Appendix A.

Chapter 4

Goal-Based Requirements Analysis

In this chapter, we introduce a goal-based approach [32, 84] to construct LTL and A-LTL requirements specifications for adaptive software. In Chapter 3, we introduced the temporal logic A-LTL to specify adaptation semantics from one steady-state behavior to another. In this chapter, we discuss the technique to determine the different steady-state behaviors the adaptive program should be able to exhibit, and the adaptations among these behaviors. As a result, we produce adaptation specifications in A-LTL/LTL for the adaptive program, which can be used in the design phase for formal analysis.

This chapter is organized as follows. Section 4.1 gives background information on goal-based models. Section 4.2 describes the goal-based requirements specification for adaptive software.
Related work is described in Section 4.3. Finally, Section 4.4 summarizes this chapter.

4.1 Background: Goal-Based Models

In this section, we briefly overview relevant knowledge of the goal-based specification language [32, 84] used in this chapter. A goal model specifies goals, which are stakeholder objectives that the system should achieve. A goal may be refined into subgoals and/or requirements that elaborate how the goal is achieved. A goal is AND-refined if its subgoals must all be achieved for the goal itself to be achieved. A goal is OR-refined if achieving any one of its subgoals is sufficient for the goal itself to be achieved, i.e., OR-refinement describes alternative ways to achieve the goal. A requirement is a goal that can be directly operationalized by an implementation. A condition for a goal is a property that is assumed to be true. Existing goal model representations include KAOS [13] and Tropos/i* [140], either of which is sufficient for our specification purpose.

4.2 Specifying Adaptation Requirements

We now describe a goal-based requirements specification technique that constructs temporal logic specifications for adaptive software. We first describe the process to construct global invariants and local properties in LTL. Then we introduce the use of an adaptation semantics graph to construct adaptation specifications in A-LTL.

4.2.1 Local Properties and Global Invariants

We use a goal-based approach [82, 134] to analyze global invariants and local properties in adaptive software [144]. Berry et al [10] have formalized the different environmental conditions in which the program is required to execute as a set of execution domains D1, D2, ⋯, Dk. As shown in Figure 4.1, we create a goal model for the adaptive software, where the top-level goal is intended to be achieved by the adaptive software under different run-time environmental conditions. We start by creating a high-level goal node in a goal model.
Then we study the requirements for the adaptive program and refine the goal model according to the following steps.

1. We use high-level specification languages, e.g., LTL, to specify the properties (i.e., global invariants) that the adaptive program must satisfy throughout its execution in order to achieve the top-level goal. The global invariants usually contain safety and liveness constraints.

Model refinement: We create a requirement node GI (depicted as a parallelogram) containing the global invariants under the top-level goal in the goal model.

2. We determine the set of environmental conditions (domains) in which the adaptive program is required to execute (e.g., data loss conditions of a communication channel).

Model refinement: For each different domain, we create an OR-refined subgoal node (e.g., subgoal j) under the top-level goal. Under each subgoal node, we create a condition node (e.g., Dj) to include the conditions for the domain.

3. We use high-level specification languages, e.g., LTL, to specify the local properties in each domain. The local properties must be satisfied by the adaptive program in order to achieve the top-level goals under the conditions of the domain.

Model refinement: We create a requirement node Ri containing the local properties under each subgoal in the goal model.

4.2.2 Adaptation Variants

Next, we construct A-LTL specifications for adaptation variants. We study how the execution domains of the program may change at run time, and how the adaptive program should respond. Consider the case where the program is initially running Pi in domain Di (where Pi satisfies local property Ri). A change of domain from Di to Dj may warrant an adaptation of the program from Pi to Pj (where Pj satisfies Rj), depending on the cost to develop such an adaptation and the overhead that may be
incurred during the adaptation.

Figure 4.1: Goal model for adaptive software

For each adaptation that moves from satisfying Ri to satisfying Rj, we determine its adaptation semantics based on the characteristics of the adaptive program, e.g., whether certain functionalities can be temporarily disabled during adaptation, etc. After the adaptation semantics of all adaptations are determined, the adaptation variants of the adaptive program can be visually represented as a graph, where each different local property is depicted by a vertex and each adaptation is depicted by an arc. For example, the adaptation semantics graph for the MetaSockets introduced in Section 3.5 is visually represented in Figure 4.2. This graph represents that the program is required to adapt among the different local properties (DES64, DES128, DES64COM, and DES128COM) using the overlap adaptation semantics.

Figure 4.2: Adaptation semantics graph

Formally, we define an adaptation semantics graph to be a tuple (Φ, Φ0, A, Ψ, INV), where

• Φ is a set of vertices representing the set of local properties for an adaptive program.
• Φ0 (Φ0 ⊆ Φ) is a set of initial vertices, representing the initial local properties of the adaptive program.
• A (A ⊆ Φ × Φ) is a set of arcs representing adaptations.
• Ψ (Ψ : A → semantics) is a function that maps an adaptation to the semantics for the adaptation.
• INV is the set of adaptation invariants.
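The tuple (Φ, Φ0, A, Ψ, INV) maps naturally onto a small data structure. The sketch below is illustrative only: the class names, the MetaSocket vertex names, and the simplified ASCII spec template in spec_for (a one-point-style template, shown purely to convey the symbol-substitution idea) are assumptions, not the dissertation's tooling.

```python
# Hypothetical encoding of an adaptation semantics graph (Phi, Phi0, A, Psi, INV).
from dataclasses import dataclass

@dataclass
class SemanticsGraph:
    vertices: set      # Phi: local properties, identified by name
    initial: set       # Phi0, a subset of Phi
    arcs: set          # A, a subset of Phi x Phi
    semantics: dict    # Psi: arc -> name of the adaptation semantics
    invariants: list   # INV: adaptation invariants

    def spec_for(self, arc):
        # Instantiate a simplified one-point-style template for the arc,
        # substituting the source/target names for the template symbols.
        src, tgt = arc
        return f"({src}_SPEC /\\ <>AREQ({tgt})) -> {tgt}_SPEC"

g = SemanticsGraph(
    vertices={'DES64', 'DES128', 'DES64COM', 'DES128COM'},
    initial={'DES64'},
    arcs={('DES64', 'DES128')},
    semantics={('DES64', 'DES128'): 'overlap'},
    invariants=['INV1', 'INV2', 'INV3'],
)
print(g.spec_for(('DES64', 'DES128')))
# (DES64_SPEC /\ <>AREQ(DES128)) -> DES128_SPEC
```

A fuller version would dispatch on Ψ(arc) to pick the matching template (one-point, guided, or overlap) before substituting symbols.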
We represent an adaptation semantics with an A-LTL formula template, where symbols are placeholders that are replaced with specifications when the semantics is instantiated for a specific adaptation. For example, the one-point adaptation semantics is represented as

  (S_SPEC ∧ ◇A_REQ) ⇀^Ω T_SPEC,

where S_SPEC, A_REQ, Ω, and T_SPEC are symbols. When this semantics is evaluated upon the following symbol assignments:

  S_SPEC ≡ DES64_SPEC, T_SPEC ≡ DES128_SPEC, A_REQ ≡ REQ_DES64-128, Ω ≡ true,

we derive the DES64 to DES128 one-point adaptation specification by replacing the symbols in the expression with their corresponding values:

  (DES64_SPEC ∧ ◇REQ_DES64-128) ⇀ DES128_SPEC.

Adaptation specifications for the adaptive program in A-LTL can be evaluated from the adaptation semantics graph using the simple symbol substitution described above.

4.3 Related Work

The goal-based approach introduced in this chapter is influenced by many others [41, 82, 142] who have applied goal-based models to analyze adaptive software. Lapouchnian et al [82] introduced the use of goal models as a foundation for an adaptive software development process, and as a possible architecture sketch for autonomic systems that can be built from these models. They use goal models to explicitly represent and analyze the space of alternative ways to fulfill a stakeholder goal. They distinguish two types of goals: hard goals are functional stakeholder goals that describe the tasks to be accomplished (e.g., schedule a meeting); softgoals are qualitative attributes that describe the quality of the tasks (e.g., a good quality schedule). The goal models are used to guide the design and implementation of adaptive programs.

Feather et al [41] proposed using a goal-based approach to model and monitor the run-time behavior of adaptive software.
The KAOS goal modeling language they used has a two-level structure: the outer semantic net layer provides the graphical goal representations used for declaring concepts, their attributes, and various links among the concepts; an inner formal assertion layer formally defines the concepts using temporal logics. The critical properties in a goal model are specified using a real-time temporal logic [70] in the formal assertion layer. The temporal logic specifications are then translated into FLEA [31] specifications, which can be converted into run-time monitoring code by the FLEA compiler [31] for run-time verification. The properties they specified were all global invariants, i.e., non-adaptive properties.

Our approach is inspired by the approaches described above. We extended their approaches with the use of A-LTL for specifying global invariants, local properties, and adaptation variant properties. We assigned each potential adaptation an adaptation semantics, and proposed the adaptation semantics graph to represent adaptation variants and automatically generate adaptation specifications for selected adaptations.

4.4 Discussion

In this chapter, we introduced the use of goal-based models and adaptation semantics graphs to analyze and specify the adaptation semantics of adaptive programs in A-LTL/LTL. These techniques enable adaptation developers to achieve the benefits of formal specification while using easier-to-understand graphical notations. Our goal-based adaptive software requirements analysis approach is inspired by existing goal-based analysis techniques, including KAOS [41] and Tropos/i* [141]. Our approach extends these techniques by associating the goal-based models with A-LTL specifications. The goal-based requirements analysis approach introduced in this chapter serves as the foundation of later chapters in this thesis. A number of concrete examples of its applications will be shown in Chapters 5 and 6.
Part II

Model Design and Analysis of Adaptive Software

Chapter 5

MASD: Model-Based Adaptive Software Development

In this chapter, we propose a process for designing formal adaptation models for adaptive software and ensuring that these models satisfy the adaptation requirements constructed using the specification approach introduced in Part I (Chapters 2-4). We first apply the goal-based technique to specify the adaptation requirements for adaptive software. We then construct formal adaptation models for adaptation, and verify these models against the above requirements. These formal adaptation models are used as blueprints for the implementation of adaptive software.

Numerous research efforts have been proposed to formally specify dynamically adaptive programs in the past several years [18]. Graph-based approaches model the dynamic architectures of adaptive programs as graph transformations [57, 99, 128]. Architecture Description Language (ADL)-based approaches model adaptive programs with connection and reconnection of connectors [71, 106, 131]. Generally, these approaches have focused on the structural changes of adaptive programs. Few efforts have formally specified the behavioral changes of adaptive programs. A few exceptions include those that use process algebras to specify the behavior of adaptive programs [2, 19, 72]. However, they share the following drawbacks: (1) portions of the adaptation-specific specifications are entangled with the non-adaptive behavior specifications; (2) they do not support the specification of state transfer, and therefore the target behavior after adaptation must always start from the initial state, making adaptations less flexible; (3) these techniques are specific to systems specified by a special type of formalism (i.e., process algebra), making them potentially difficult to extend to systems specified by other types of formalisms. To address the above drawbacks in existing approaches,
we introduce a Model-based Adaptive Software Development (MASD) approach. Our approach has the following features that distinguish it from the existing approaches: (1) Our approach separately models the non-adaptive behavior and the adaptive behavior, thus making each of the respective models simpler and more amenable to automated analysis and visual inspection. (2) We use global invariants to specify properties that should be satisfied by adaptive programs regardless of adaptations; these properties are ensured throughout the program's execution by model checking. (3) Our specification approach supports state transfer from the source program (i.e., the program before adaptation) to the target program (i.e., the program after adaptation), thereby potentially providing more choices of states in which adaptations may be safely performed. (4) Our approach is generally applicable to many different state-based modeling languages; we have successfully applied it to several, including Petri nets and UML state diagrams. (5) We also introduce a technique that uses the models as the basis for automatically generating executable prototypes, and ensures the models' consistency with both the high-level requirements and the adaptive program implementation.

This chapter focuses on the behavior of adaptive programs. We explicitly identify and specify the key properties of adaptation behavior that are common to most adaptive programs, regardless of the application domain, the programming language, or the adaptation mechanism. We define the quiescent states (i.e., states in which adaptations may be safely performed) of an adaptive program in terms of the program behavior before, during, and after adaptation, the requirements for the adaptive program, and the adaptation mechanism. This kind of definition leads to precise and flexible adaptation specifications.
\Ve accompany the discussion of our approach with an example of an adaptive GSM (Global System for Mobile Comrminications)-oriented audio streaming proto- col [154].1 This protocol had been previously developed without our technique. where the developers had found the overall program logic to be complex and error—prone. Our approach significantly reduced the developmr’s burden by separating different concerns and leveraging automated model construction and analysis tools, and thus decreased the development time and improved softy-tare quality. The remainder of this chapter is organized as follows. Section 5.1 gives back- ground information on Petri nets, the modeling language used in this chapter. Sec- tion 5.2 describes our approach for constructing and analyzing models for adaptive. programs. Section 5.3 outlines the generation of adaptive programs based on the mod— els. Section 5.4 describes an adaptive Java pipeline. case study using the proposed model-based development technique. Related work is introduced in Section 5.5 and Section 5.6 discusses limitations and possible extensions of the proposed approach. 5.1 Background: Petri Nets This section briefly overviews Petri nets which we use to model adaptive systems. Petri nets are a graphical formal modeling language [109. 110], where a Petri net comprises places, transitiovzs, and arcs. Places may contain zero or more tokens. A state (i.e., a marking) of a Petri net is given by a function that assigns each place the lGSM is a telephone audio compression standard defined by the European Teleconnnunications Standards Institute [40]. number of tokens in the place. Places are connected to transitions by input arcs and output arcs. Input arcs start from places and end at transitions; output arcs start from transitions and end at places. The places incident on the input (or output) arcs of a transition are the input places (or output places) of the transition. 
Transitions contain inscriptions describing the guard conditions and associated actions. A transition is enabled if the tokens in its input places satisfy the guard condition of the transition. A transition can be fired if it is enabled, where firing a transition has the following three effects: (1) consuming the tokens from its input places, (2) performing the associated actions, and (3) producing new tokens in its output places. An execution of a Petri net comprises a sequence of transitions. Interactive executions of Petri nets are called token games.

Since their introduction, Petri nets have been extended in many ways. For example, the extension used in this chapter, Coloured Petri Nets (CPN) [75], allows tokens to be distinguished by their colors (or types), making them more convenient for modeling data in software. The inscriptions on arcs of a coloured Petri net specify the numbers and types of tokens to be taken from (or put into) the incident input (or output) places when the transitions are fired.

Numerous tool suites have been developed to support graphical net construction, simulations, token games, and analyses. Among these tools, MARIA [94] is a coloured Petri net analysis tool that supports model checking of LTL properties. Renew [78] is a coloured Petri net tool suite that supports graphical net construction, automated simulation, and token games. Furthermore, Renew supports a synchronous communication mechanism that enables Petri net models to communicate with each other and with other Java applications. Figure 5.1 depicts a coloured Petri net created with Renew, where circles P1 and P2 represent places, and the box T represents a transition. P1 and P2 are the input and output places of T, respectively. The inscription of P1 ("[2]") indicates that P1 contains an integer-type token with value 2. The inscriptions of the input and the output arcs for T ("x" and "x+1",
respectively) show the relationship between the input and the output of T. The inscription of T ("guard x < 3") shows the condition under which the transition is enabled. In this example, T is enabled, and firing the transition consumes the token in P1 and generates an integer token in P2 with value 3. In this chapter, coloured Petri nets are used to specify the design of adaptive systems.

Figure 5.1: A coloured Petri net example

5.2 Our Specification Approach

We propose a general specification process for adaptive systems using state-based modeling languages. We also describe the analyses that may be performed to ensure consistency among the models, the high-level requirements, and the low-level implementations. We illustrate the process with Petri nets and a working adaptive communication protocol example. Although we illustrate our approach with Petri nets, it extends to other state-based modeling languages, including UML Statechart diagrams [149] (Chapter 6).

The approach introduced in this section leverages the artifacts produced in Part I, where we proposed a goal-based requirements analysis to construct adaptation specifications in A-LTL/LTL. In this section, we assume that the requirements analysis has generated the following artifacts (shown in Figure 5.2):

• A set of global invariants INV, specified in LTL.
• A set of domains D1, D2, ⋯, Dn in which the program is required to execute, and the local requirements for each domain R1, R2, ⋯, Rn, specified in LTL.
• A set of adaptations Ai,j that the program is required to perform, and the adaptation variant property for each adaptation Ri,j.
Figure 5.2: Goal model for adaptive software

In this chapter, we focus on constructing adaptation design models that satisfy the above requirements (also shown in Figure 5.2). This is achieved in the following steps:

Step 1: For each domain Di, construct a state-based model Mi. Verify the model against the local properties of the domain Ri using model checking or simulation.

Step 2: For each adaptation Ai,j, construct an adaptation model Mi,j for the adaptation. Verify the adaptation model against the global invariants INV and the adaptation variants Ri,j.

Step 3: The state-based adaptation models can be further used to generate rapid prototypes or serve to guide the development of adaptive programs. They can also be used to generate test cases and verify execution traces.

When errors are found in a given step, developers may be required to return to an earlier step to correct the errors. We introduce the general process to model and analyze an adaptive program in Petri nets, accompanied by a concrete GSM (Global System for Mobile Communications)-oriented audio streaming protocol.²

5.2.1 Illustrative Adaptation Scenario

The GSM-oriented audio stream encoding and decoding protocol is a signal processing-based forward error correction protocol [46, 154]. The protocol is used to lower delay and overhead in audio data transfer through lossy network connections, especially in a wireless network environment. In this protocol, a lossy, compressed data segment of packet i is piggybacked onto one or more subsequent packets, so that even if packet i is lost, the receiver can still play the data of i at a lower quality as long as one of the subsequent encodings of i is received. This protocol takes two parameters, c and δ. The parameter c describes the number of consecutive packets onto which an encoding of each packet is piggybacked. The parameter δ describes the offset
between the original packet and its first compressed version. (In this chapter, we discuss only the verification of local properties and global invariants; the verification of adaptation variant properties is discussed in Chapter 7.) Figure 5.3 shows a GSM-oriented audio streaming protocol example with o = 1, c = 2. In order to accommodate packet-loss changes in the network connection, we dynamically switch among components that implement different o and c values.

Figure 5.3: GSM-oriented encoding and decoding

The configuration of the system is shown in Figure 5.4. A desktop computer, running a sender program, records and sends audio streams to hand-held devices (e.g., iPAQ), running receiver programs, through a lossy wireless network. We expect the system to operate in two different domains: the domain with a low loss rate and the domain with a higher loss rate. The loss rate of the wireless connection changes over time, and the program should adapt its behavior accordingly: When the loss rate is low, the sender/receiver should use a low loss-tolerance and low bandwidth-consuming encoder/decoder; and when the loss rate is high, the sender/receiver should use a high loss-tolerance and consequently high bandwidth-consuming protocol. Specifically, in this chapter, we describe a simple adaptive program that initially uses GSM(1,2) encoding/decoding when the loss rate is low (where o = 1, c = 2); when the loss rate becomes high, it dynamically adapts to using GSM(1,3) encoding/decoding. Note that although this adaptation may appear to be simple parameter changing, it in fact requires swapping encoders and decoders at run time. Moreover, the states of the GSM(1,2) encoder/decoder must be correctly preserved and transferred to the states of the GSM(1,3) encoder/decoder so that the system is always running in a consistent state.
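The recovery behavior implied by the two parameters can be illustrated with a short sketch. The class and method names below are ours, for illustration only; they are not part of the dissertation's artifacts.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the GSM(o, c) piggybacking scheme: a compressed copy of
// packet i rides on the c consecutive packets starting at offset o after i.
public class Piggyback {
    // Packets that carry a lossy copy of packet i under GSM(o, c).
    static List<Integer> carriersOf(int i, int o, int c) {
        List<Integer> carriers = new ArrayList<>();
        for (int k = 0; k < c; k++) carriers.add(i + o + k);
        return carriers;
    }

    // Packet i is playable if i itself or any of its carriers arrives.
    static boolean recoverable(int i, int o, int c, List<Integer> arrived) {
        if (arrived.contains(i)) return true;
        for (int carrier : carriersOf(i, o, c))
            if (arrived.contains(carrier)) return true;
        return false;
    }
}
```

For instance, under GSM(1,2) a packet lost together with its two carriers is unrecoverable, while GSM(1,3) adds a third carrier; this is exactly the loss-tolerance difference between the two domains discussed below.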
Figure 5.4: Audio streaming system connection

5.2.2 Constructing Models for Source and Target

Assume that we have the local requirements for the source domain (the source requirements) and for the target domain (the target requirements). We need to build a model for the source domain (the source model) and a model for the target domain (the target model). The source and target models should not include information about each other, or about the adaptation. The source and target models should be verified against the local requirements for the source and target domains, respectively. This step is illustrated by example as follows.

Assume we have identified the source domain S (the domain with low loss rate) and the target domain T (the domain with high loss rate). The requirements RS for the source domain are:

o Sender liveness: The sender must read packets until the data source is empty (i.e., the data source is eventually empty), and the sender must always eventually send a packet if it reads a packet. In terms of LTL, we have the following:

◊(dataSource = empty) ∧ □(read(x) → ◊send(x)). (5.1)

(Strictly speaking, the notations read(x) and send(x) are predicates. However, LTL requires the underlying logic to be propositional. Here we implicitly employ propositionalization [127], which converts predicates to propositions by using constant values to represent arbitrary values.)

o Receiver liveness: The receiver must always decode data once a new packet is received. In LTL:

□(receive(x) → ◊decode(x)). (5.2)

o Loss tolerance: The sender/receiver should use a protocol that tolerates 2-packet loss. In LTL:

(□ lossCount ≤ 2) → (□¬lose(x)).
(5.3)

The requirements RT for the target domain have the same set of properties except that the loss tolerance constraint is as follows:

o Loss tolerance: The sender/receiver should use a protocol that tolerates 3-packet loss. In LTL:

(□ lossCount ≤ 3) → (□¬lose(x)). (5.4)

To model the program for S, we build a Petri net for the GSM(1,2) encoder on the sender side and a Petri net for the GSM(1,2) decoder on the receiver side. Figure 5.5 shows the sender net (elided). The circles represent places and the boxes represent transitions. The white circles represent places that are not part of the sender model, i.e., shared either by the source and the target sender nets, or by the sender and the receiver nets, or by both. In the net, the place dataSource contains a sequence of data tokens. The readData transition removes a token from the dataSource place, puts the original data in the inputData place, and puts a GSM-compressed copy in the dataX place. The shiftX and shiftY transitions shift the encoded data in a buffer represented by the places dataX, dataY, and dataZ. The encode transition takes the current input data and two previously compressed data from the places dataY and dataZ, and outputs a GSM(1,2) data packet in the encodedData place. The send transition sends the encoded packet to the output socket place.

Figure 5.5: Sender source net (GSM(1,2))

Figure 5.6 shows the receiver net (elided). The receive transition receives the audio packets from the input socket and puts the input packet in the inputData place.
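The sender net's per-packet data flow described above can be approximated in a few lines of Java (our own rendering of the net's firing sequence; gsm() stands in for the actual lossy compression, and the buffer places hold single values rather than tokens):

```java
// Sketch of one read/shift/encode/send cycle of the GSM(1,2) sender net.
public class SenderNet12 {
    int dataX, dataY, dataZ;   // compressed-data buffer places
    int index = 0;

    // Placeholder for the GSM compression; identity here for illustration.
    static int gsm(int d) { return d; }

    // One cycle: shift the buffer (shiftY, shiftX), read datum d
    // (readData puts its compressed copy in dataX), then encode the
    // current input with the two previously compressed segments and send.
    int[] step(int d) {
        dataZ = dataY;                               // shiftY
        dataY = dataX;                               // shiftX
        dataX = gsm(d);                              // readData
        int[] packet = { index++, d, dataY, dataZ }; // encode
        return packet;                               // send
    }
}
```

With o = 1 and c = 2, the packet for datum i carries the compressed copies of data i-1 and i-2, matching the piggybacking scheme of Figure 5.3.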
The decodeInput transition takes the input packet from the inputData place, extracts the original data segment, and puts the data in the bufferedData place. The decodeBuffer transition extracts the compressed data segments from the input packet, and updates the data in the bufferedData place. The outputData transition takes decoded data from bufferedData, and then outputs the data to the sink.

Figure 5.6: Receiver source net (GSM(1,2))

We build the model for the lossy network separately (shown in Figure 5.7); this model serves to define the behavior of the environment in which the sender and the receiver execute. The input socket and output socket places are shared by the sender and receiver nets, respectively. The lossy network is modeled as a packet queue. Firing the send transition puts a data packet in the output socket place, thereby enabling both the enqueue transition and the lose transition. Firing the lose transition discards the packet in the output socket place and increments the value in the loss count place. Firing the enqueue transition moves a packet from output socket to the network buffer place. Firing the dequeue transition moves a packet from network buffer to the input socket of the receiver. The packet in input socket can then be received by the receiver. The value in the loss count place indicates the number of packets lost during data transmission.

Figure 5.7: Lossy network net (environment model)

After building the source models, we first play token games with the sender and receiver models (prior to adaptation) to visually validate the models. If we find errors with the models, then we revise the models until the models pass the visual validation.
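The lossy-network model's behavior can be sketched in a few lines. This is an illustrative Java rendering, not part of the dissertation's models; in the Petri net the choice between lose and enqueue is nondeterministic, so here the caller makes that choice explicitly.

```java
import java.util.ArrayDeque;

// Sketch of the lossy-network net: a sent packet is either lost
// (incrementing lossCount) or enqueued and later dequeued for the receiver.
public class LossyNetwork {
    ArrayDeque<Integer> buffer = new ArrayDeque<>(); // network buffer place
    int lossCount = 0;                               // loss count place

    // send: the packet reaches the output socket; then either the
    // lose transition or the enqueue transition fires.
    void send(int packet, boolean lose) {
        if (lose) lossCount++;        // fire lose
        else buffer.add(packet);      // fire enqueue
    }

    // dequeue: move a packet into the receiver's input socket
    // (null when the network buffer is empty).
    Integer deliver() { return buffer.poll(); }
}
```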
Then we run model checking to verify these models against the high-level local requirements of the source domain S, including the safety, liveness, and loss-tolerance constraints (Formulae 5.1-5.3). We revise the models until they pass the model checking analysis.

To model the program for the target domain T, we build separate Petri nets for a GSM(1,3) encoder on the sender side and a GSM(1,3) decoder on the receiver side. Figures 5.8 and 5.9 show the sender net and the receiver net, respectively. (By comparing the source model to the target model, one can notice that the source model can be systematically extended to construct target models for GSM-oriented protocols with various parameters.) The modeling, validation, and verification process is similar to that used for the source model.

Figure 5.8: Sender target net (GSM(1,3))

Figure 5.9: Receiver target net (GSM(1,3))

5.2.3 Constructing Adaptation Models

This section describes Step (2), the process to create a model of the adaptation behavior from the source domain to the target domain. Recall the three types of adaptation semantics introduced in Chapter 3: one-point adaptation, guided adaptation, and overlap adaptation. We introduce the modeling technique for each of these types of adaptation. For each type of adaptation, we first introduce a general specification approach for state-based modeling languages, then instantiate the approach with Petri nets and apply it to the GSM-oriented protocol example.
The term quiescent states is commonly used in the literature to refer to those states suitable for adaptations [5, 71]. They are usually identified as the "stateless" states of a program, e.g., states equivalent to initial states. However, in some programs, reaching such states may not occur within a reasonably short period of time, causing the program execution to block. Thus, this type of definition for quiescent states is not suitable for changes that require prompt adaptation responses, such as those needed for fault tolerance, error recovery, attack detection, etc. Furthermore, such a quiescent state definition is not sufficient to ensure the correctness of adaptation in the absence of the requirements to be achieved by the adaptation.

We argue that the quiescent states of an adaptive program must be defined in the context of the program's adaptation behavior before, during, and after adaptation, and the global invariants that the adaptive program must satisfy. A state s of the source program is a quiescent state if and only if we can define a state transformation function f such that there exists a state t in the target program, t = f(s), and any execution path that includes the adaptive transition s → t does not violate any global invariants. The set of quiescent states is determined by the state transformation from s to t that satisfies the global invariants. Generally speaking, the more quiescent states we can identify, the more flexible the adaptation is, i.e., the more states from which we may perform adaptation. Potentially, all states of the source model can be quiescent states, but that would require us to define a complex state transformation function. Therefore, we should balance the complexity of the transformation function against the flexibility of the adaptation.

We use Petri nets to illustrate the quiescent state identification. We define an "adapt" transition to model the set of adaptive transitions.
The "adapt" transition connects the source net to the target net: All the input places of the transition are in the source net, and all the output places are in the target net. When the "adapt" transition is fired, it performs the state transformation by consuming the tokens in the source model and generating tokens in the target model. The quiescent states of the Petri net are those markings that enable the "adapt" transition, which can be restrained by the guard conditions of the transition. More than one "adapt" transition can be defined in a similar fashion, each identifying a different set of quiescent states and defining a different state transformation function upon this set. The source net and the target net, connected by the "adapt" transition, form the adaptation model. We use token games and model checking to validate and verify the adaptation model against the global invariant properties: If violations are found, then we need to revise the models and/or the properties.

One-point adaptation

As described in Section 3.3, with one-point adaptation, at one state during the source program's execution, the source behavior should complete and the target behavior should start. The adaptation process completes after a single transition. The major tasks for one-point adaptation are to identify the states that are suitable for adaptation and to define adaptive transitions from those states. In the GSM-oriented audio streaming protocol example, the sender or the receiver adaptation alone can be considered one-point adaptation. The global invariants for the adaptive sender and receiver are specified as follows:

o Sender global invariant: The sender should read packets until the data source is empty, and the sender should always eventually send a packet if it reads a packet. In LTL,

◊(dataSource = empty) ∧ □(read(x) → ◊send(x)). (5.5)

o Receiver global invariant: The receiver should always decode data once a new packet is received. In LTL,
□(receive(x) → ◊decode(x)). (5.6)

The adaptation model for the sender is shown in Figure 5.10. The enabling condition of the "adapt" transition of the sender identifies the quiescent states to be "after encoding a packet and before sending the packet, and after the data in the compressed data buffer have been shifted to the next location". The "adapt" transition directly moves the tokens from dataY and dataZ to the corresponding places in the target. The token in dataT in the target model is generated from the encoded packet in encodedData of the source by taking the last piggybacked data segment z.

Figure 5.11 shows the adaptation model for the receiver. The quiescent states of the receiver are identified by the adapt transition of the receiver. Upon receiving the packet, the model adapts to the target receiver net, and the state transformation is defined by the output places and inscriptions on the arcs.

We ran token games on the sender and receiver adaptation models in order to validate that the models reflect our purpose for the program. Then we used Maria to model check these models against the global invariants (Formulae 5.5 and 5.6). Once the models passed our validation and verification analysis, we concluded that these models had been constructed appropriately.

Figure 5.10: Sender adaptation net

Guided adaptation

As described in Section 3.3, with guided adaptation, when the source program receives an adaptation request, it enters a restricted mode, in which some functionalities of the program are blocked.
Entering the restricted mode ensures that the program will eventually reach a quiescent state, from which a one-point adaptation takes the program to the target program state space. To specify a guided adaptation, we should determine the functionalities that should be blocked in the restricted mode, and identify the quiescent states of the program in the restricted mode. To achieve functionality blocking, transitions are removed from the source program. Let the source model be MS, the target model be MT, and the restricted source model be M'S. M'S must share the same set of states with MS, but M'S has only a subset of the transitions of MS. MS and MT can be constructed in the same way as in one-point adaptation. M'S can be constructed by first copying MS and then removing transitions, or strengthening the firing conditions of transitions, that may otherwise prevent the program from reaching a quiescent state.

Figure 5.11: Receiver adaptation net

We next define the state transformations from states in MS to states in M'S, and from quiescent states in M'S to states in MT. As M'S shares all the states with MS, the state transformation from MS to M'S is trivially an identity function (a function that maps an element to itself) on the domain of all the states in MS. The approach to define the quiescent states of M'S, and the state transformation from M'S to MT, is the same as that used for one-point adaptation. The way in which M'S
is constructed ensures that any execution path of M'S is also an allowable execution path of MS, which implies that, as long as M'S does not cause deadlock, M'S satisfies all safety and liveness constraints that MS does. The properties we need to verify about M'S are that it does not reach a deadlock state before it eventually reaches a quiescent state, and that the adaptation model constructed from MS, M'S, and MT satisfies the global invariants.

The guided adaptation can be illustrated with the GSM-oriented adaptive sender model. Assume the quiescent states of the sender are identified by the adapt transition in Figure 5.10, which requires the inputData place to be empty, the encodedData place to be non-empty, and the encoded data in dataY and dataZ to have already shifted one location. The semantics of Petri nets determines that the order of firing the send transition and shifting the data in dataY and dataZ is non-deterministic. It might be the case that the send transition is always fired before shifting the data, rendering the quiescent states unreachable. To deal with this problem, we construct a net that represents a restricted variation of the source net that disables the send transition. We do this by copying the source net, then removing the send transition from the net. Figure 5.12 shows the conceptual model for the adaptation from the sender source net to the sender restricted source net. The restrict transition represents a total identity function from the source to the restricted source, N. The adapt transition from N to the target net is similar to that in Figure 5.10. Note that the way in which N is constructed guarantees that all properties verified before are still valid. The only additional property we need to verify is that N will eventually reach a quiescent
state, which is formally specified as follows:

o Quiescence constraint: After the restrict transition is fired, the restricted sender must eventually reach a quiescent state. In LTL,

□(restrict → ◊quiescent). (5.7)

We model checked the model in Figure 5.12 against Formulae 5.5-5.7, and the analysis verified that the property holds.

Overlap adaptation

As described in Section 3.3, for overlap adaptation, the source-to-target adaptations are accomplished by a sequence of adaptation transitions that are performed one after another. The target program starts to execute after the first adaptive transition, and the source program completes before the last adaptive transition. Overlap adaptation is particularly useful for multi-threaded programs, where different threads adapt to the target. Each thread performs a one-point adaptation or guided adaptation at different times, and the combined result yields overlap adaptation. Overlap adaptation is more complex than one-point adaptation in the sense that we need to determine not only the state transformation functions of each one-point or guided adaptation, but also the coordination among these adaptations. Example coordination relationships among these adaptations include precedence relationships, cause-effect relationships, parallel relationships, etc. The key task in modeling overlap adaptations is to define how these multiple adaptations should coordinate with each other in order to satisfy global invariants. In addition to satisfying explicitly specified global invariants, an adaptive program should also satisfy an adaptation integrity
Figure 5.12: Sender restricted source net

constraint: Once the adaptation starts, it must eventually complete, i.e., the adaptation should finally reach a state of the target program. Violations of this constraint result in an inconsistent state of the program that is not designed for the target domain, and we have no means to ensure its correctness.

To facilitate understanding, we have described the models for the GSM-oriented adaptive communication protocol in a modular fashion, where each discussion focuses on one of the models relevant to achieving adaptation. In fact, the models in Figures 5.7, 5.10, 5.11, and 5.12 can be connected to form a comprehensive model for the GSM-oriented adaptive communication protocol exhibiting overlap adaptation semantics. The sender and the receiver models are connected by the lossy network net (in Figure 5.7). The adaptation starts when the sender restrict transition is fired (in Figure 5.12), and ends when the receiver adapt transition is fired (in Figure 5.11). After the sender has adapted to the target (in Figure 5.10) and before the receiver adapts to the target (in Figure 5.11), the source and the target overlap, i.e., the sender exhibits the target sender behavior and the receiver exhibits the source receiver behavior. By considering the sender and the receiver as an entire program, the GSM-oriented adaptive communication protocol is an overlap adaptation. The constraints for the entire adaptive program are specified as follows:

o GSM example loss-tolerance global invariant: The adaptive program should tolerate a 2-packet loss throughout its execution. In LTL,

(□ lossCount ≤ 2) → (□¬lose(x)). (5.8)

We used model checking to verify this property successfully.
o GSM example adaptation integrity constraint: If the sender adaptive transition is fired, then the receiver's adaptive transition will also eventually be fired. In LTL,

□(senderAdapted → ◊receiverAdapted). (5.9)

We found errors when model checking the adaptation integrity constraint (Formula 5.9). By inspecting the counterexample, we realized that in a rare case, if all the packets after the sender's adaptation are lost, then the receiver will not receive any packet encoded by the target sender, and thus the receiver will not adapt. We revised the model by using a reliable communication channel to send the first packet after the sender adapts to the target, so that the receiver is guaranteed to receive the packet. Note that it is generally possible to build a reliable communication channel atop an unreliable underlying infrastructure by using acknowledgement-based protocols [129]. Using it to send audio streams would incur a performance penalty. However, if we use it to send only critical packets occasionally, then the penalty is negligible. We repeated the model checking for the revised model against the adaptation integrity constraint (Formula 5.9), and the result showed that the adaptation indeed ran to completion with the revised model, indicating that the property is satisfied.

Adaptation controller net

In a complex adaptation scenario, a number of concurrent adaptive components are involved in a single overlap adaptation process. In order to drive such a complex adaptation process, we use an adaptation controller net to model the sequence of adaptive transitions in the adaptation. In essence, the adaptation controller net serves as a driver for the adaptation process. Figure 5.13 shows the controller net for the adaptation of the sender and the receiver. The adaptation includes four phases: in source, restricted source, sender adapted, and in target, each of which corresponds to a place in the adaptation controller net.
The transitions in the controller net, including restrict, sender adapt, and receiver adapt, correspond to the adaptive transitions in the sender and the receiver adaptation nets.

Figure 5.13: Adaptation controller net

As shown in Figure 5.14, we compose the adaptation controller net with the sender and the receiver adaptation nets to drive their adaptive transitions, thus providing a global view of the adaptation procedure. In this model, the arc connecting the in source place of the adaptation controller and the send transition of the sender source model indicates that the transition is enabled only in the in source phase of the adaptation. Firing the restrict source transition disables the send transition in the sender source model and enables the adapt transition in the sender adaptation model. Firing the sender adapt transition not only transfers the sender state from the source to the target, but also transitions the adaptation phase from restricted source to sender adapted, thus enabling the receiver adapt transition in the receiver adaptation model. Finally, firing the receiver adapt transition moves the adaptation to the in target phase, thus completing the adaptation.

5.3 Reifying the Models

This section introduces the approach to generate executable prototypes and develop code based on the models constructed in the previous section, with the assistance of the Renew tool suite [78].

5.3.1 Rapid Prototyping

Renew [78] supports the specification of implementation-specific (Java) code in its transition inscriptions. When a transition is fired, the code associated with the transition is executed. In model-driven approaches, the model drives the sequence in which the transitions are fired. By using this mechanism, we can generate rapid prototypes directly from the adaptive models, whose behavior has been verified.
We map each transition to a Java method call, whose functionality is manually generated based on the input/output places, guard conditions, and other inscriptions of the transition. The adapt transition is mapped to an adapt method, which implements the necessary state transformation.

Figure 5.14: Overall adaptation with controller net

Following the procedure introduced above, we have built a rapid prototype for the adaptive sender and adaptive receiver in Java, and executed the application. The prototype can be included as a module in a Java program to further validate its design. We have tested its execution results, and the rapid prototype executed as expected, thus serving to validate the Petri net models. The detailed Java implementation for rapid prototyping is included in Appendix B.

5.3.2 Model-Based Testing

Given the rapid prototype generated, the production-level adaptive program can be designed and implemented. After the program is implemented, we ensure that the program is implemented properly with a model-based testing technique [112], which is supported by Renew [79]. We verify the following two constraints:

1. Each transition in the model must have a corresponding handler method in the implementation, where a handler method for a transition triggers the transition when the method is invoked.

2. The sequence in which the handler methods are called must conform to an allowable transition sequence (i.e., an execution) of the Petri net models.

For each Petri net model,
we manually create a stub file (in a Renew-specific format), which defines the mapping between handler methods and transitions in the model. For example, we create a file named "SenderNet.stub" to define the handler methods for the sender adaptation net in Figure 5.10. The code excerpt from "SenderNet.stub" is shown in Figure 5.15. In the stub, line 2 declares that a Java class SenderNet is to be created for the sender net. The body of the stub (lines 4-21) defines the mapping between the methods (i.e., handler methods) of the SenderNet class and the transitions of the sender net. For example, lines 4-6 define that the readdata method of the SenderNet class is mapped to the readData transition in the net. The stubs for the other nets are created similarly; their detailed implementation is included in Appendix B.

These stubs are processed by a Renew stub compiler, compilestub, which creates Java source code for the handler methods. These methods interact with the Petri net models at run time: Invocations of these handler methods have the following effects.

o If the corresponding transition is enabled at the time a method is invoked, then the transition will be fired and the method will return immediately.

o If the corresponding transition is not enabled at the time the method is invoked, then the method will block until the transition is enabled.

01 package gsm;
02 class SenderNet for net sender
03 {
04   void readdata(){
05     this:readData();
06   }
07
08   void S_encode(){
09     this:S_encode();
10   }
11
12   ......
13
14   void T_encode(){
15     this:T_encode();
16   }
17
18   void adapt(){
19     this:adapt();
20   }
21   ......
22 }

Figure 5.15: Code excerpt from the sender net stub (SenderNet.stub)

For each transition T in the Petri net models, we first manually identify the code segment C (usually a method call) in the adaptive Java program that is intended to implement the transition.
We then insert an invocation to the handler method for T at the entry point of C, i.e., at the entry point of the method. During an execution of the Java program, if the sequence in which these handler methods are invoked is an allowable sequence of the Petri nets, then the execution will complete successfully; otherwise, it will deadlock. With this approach, we can evaluate the conformance between the executions of the Java implementation and those of the Petri net models.

5.4 Case Study: Adaptive Java Pipeline Program

In order to validate the model-based adaptive software development process, we next describe another example application of our approach; specifically, we applied it to the development of an adaptive Java pipeline program [151]. In some multi-threaded Java programs, including proxy servers, data are processed and transmitted from one thread to another in a pipelined fashion. The Java pipeline is implemented using a piped I/O class. The synchronization between the input and the output of the pipeline is implemented using Java synchronized functions. Previously, we have studied optimization techniques and proposed an asynchronous Java pipeline design to be run on a multi-processor machine [151]. By eliminating synchronization overhead, the asynchronous version improves performance with a speedup rate of 4.83 over the synchronized implementation when the CPU workload is low. However, when the CPU workload is high, the synchronized version performs better.

Based on the above observations, we would like to build an adaptive version of the Java pipeline where the program can choose the optimal implementation at run time based on the CPU workload. However, the complexity of designing an efficient and verifiably correct algorithm that switches between synchronization protocols prevented us from implementing the adaptive version of the program using traditional development techniques. With the approach introduced in this chapter, we are able to not only build an adaptive program model, but also model check critical properties, and therefore gain confidence in the design of the adaptive program.

Applying the techniques in Chapter 4, we use a goal model to analyze the adaptation requirements of the adaptive program. As shown in Figure 5.16, the top-level goal G for the adaptive Java pipeline program is to enable efficient pipelined data transmission between a writer thread and a reader thread in Java. Next we refine the goal model.

5.4.1
With the approach introduced in this chapter, we are able to not only build an adaptive program model, but also model check critical properties, and therefore gain confidence in the design of the adaptive program. Applying the techniques in Chapter 4, we use a goal model to analyze the adaptation requirements of the adaptive program. As shown in Figure 5.16, the top-level goal G for the adaptive Java pipeline program is to enable efficient pipelined data transmission between a writer thread and a reader thread in Java. Next we refine the goal model.

[Figure 5.16: Goal model for adaptive Java pipeline. The top-level goal G ("Achieve efficient inter-thread pipelined data transmission") is OR-refined into two subgoals, "Achieve G when CPU is underloaded" and "Achieve G when CPU is overloaded". Each subgoal carries a condition node (low/high CPU load) and a requirement node (local properties), implemented by the asynchronous and synchronized pipeline nets, respectively, which are connected by an adaptation net.]

5.4.1 Specifying Global Invariants

We first use global invariants to specify the expected behavior of the Java pipeline program. The set of global invariants serves as a contract between the adaptive pipeline program and its clients, and therefore must hold regardless of adaptations. The global invariants are as follows:

- No deadlock (safety invariant): The program should never deadlock, i.e., the pipeline should continue to read from the input data buffer until the input buffer becomes empty. Or simply stated, the input buffer must finally become empty. In LTL,

    ◇ input_empty.    (5.10)

- No data loss (liveness invariant): All data input to the
pipeline must eventually be output from the pipeline. In LTL,

    □(input(x) ⇒ ◇ output(x)).    (5.11)

- No erroneous output (invariant): The pipeline must not output any data that is not a part of the input data. In LTL,

    (¬output(x) U input(x)) ∨ □ ¬output(x).    (5.12)

- Data ordering (invariant): The order in which data are read from the pipeline must match the order in which data are written to the pipeline, i.e., no out-of-order transmission. In LTL,

    (¬input(x) U input(y)) ⇒ (¬output(x) U output(y)).    (5.13)

As shown in Figure 5.16, we create a requirement node under the top-level goal to include these global invariants in the goal model.

5.4.2 Specifying Local Properties

Next, we specify local properties for the Java pipeline program. The adaptive pipeline program executes in a high CPU workload domain and a low CPU workload domain. In the high CPU workload domain, we require the program to operate in the synchronized mode, which has a lower concurrency level but is less CPU intensive. The writer and the reader are not allowed to operate simultaneously. In addition to the global invariants, we have the following local property for this domain:

- Mutual exclusion (local property): The read and the write operations must not be enabled simultaneously at any time. In LTL,

    □ ¬(write_enabled ∧ read_enabled).    (5.14)

In the low CPU workload domain, we require the program to operate in the asynchronous mode, which is more CPU intensive but has higher concurrency. In this mode, we require maximal concurrency, i.e., when the buffer is not full (or empty), the write (or read) operation must be enabled.

- Writer concurrency (local property): The write operation must be enabled when the buffer is not full. In LTL,

    □(¬full ⇒ write_enabled).    (5.15)

- Reader concurrency (local property): The read operation must be enabled when the buffer is not empty. In LTL,

    □(¬empty ⇒ read_enabled).
(5.16)

As shown in Figure 5.16, we create an OR-refined subgoal for each domain in the goal model. Under each subgoal, we create a requirement node to include local properties and a condition node to include domain conditions.

5.4.3 Constructing Steady-State Models

In this step, we construct steady-state models in Petri nets and verify these models against global invariants and local properties. We build Petri net models for the synchronized and asynchronous pipeline classes. Figure 5.17 shows the Petri net model for the synchronized pipeline class with three internal data buffers buffer1, buffer2, and buffer3, each of which can hold 0 or 1 data token. The three write transitions model the write operation that puts input data in one of the empty internal buffers. The three read transitions model the read operation that reads data from one of the internal buffers. The places free, reading, and writing model the states in which a read/write lock is free, held by the reader, and held by the writer, respectively. A lock token must be in the reading (or the writing) place to enable the read (or the write) transitions. The lock can be transferred among reading, writing, and free by firing one of the reader_lock, writer_lock, write, and read transitions. The input and output places are respectively the data source and the data sink for the model. In the synchronized pipeline model, the write and read transitions must be explicitly enabled by writer_lock and reader_lock, and only one of them can be enabled at any given point in time. We perform model checking to verify that the model satisfies the mutual exclusion local property (formula (5.14)) and all the global invariants (formulae (5.10)-(5.13)). The model checking results showed that these properties are in fact satisfied by the model.
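In Java terms, the single-lock discipline captured by the writer_lock/reader_lock transitions corresponds to guarding both operations with one monitor, so that write and read can never proceed simultaneously (the situation checked by property (5.14)). The following is our own minimal sketch of a one-slot synchronized buffer, not the dissertation's pipeline implementation:

```java
// Minimal synchronized buffer: write and read share one monitor, so
// the two operations are mutually exclusive, as in property (5.14).
public class SyncBuffer {
    private Integer slot = null;   // holds 0 or 1 data token

    public synchronized void write(int x) throws InterruptedException {
        while (slot != null) wait();   // buffer full: writer blocks
        slot = x;
        notifyAll();                   // wake a waiting reader
    }

    public synchronized int read() throws InterruptedException {
        while (slot == null) wait();   // buffer empty: reader blocks
        int x = slot;
        slot = null;
        notifyAll();                   // wake a waiting writer
        return x;
    }
}
```

The monitor makes the synchronized mode cheap on a loaded CPU (no busy contention) at the cost of the concurrency that the lock-free asynchronous model recovers.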
[Figure 5.17: Synchronized pipeline net. Places: input, output, writer buffer, reader buffer, buffer1-buffer3, free, writing, reading; transitions: write, read, writer_lock, and reader_lock (with guard n>0).]

Next, we build the Petri net model for the asynchronous pipeline program (shown in Figure 5.18). In the asynchronous model, there is no lock controlling the synchronization between the write and the read transitions. Since it allows the writer and the reader to operate on the buffers simultaneously, the model must be carefully designed to avoid the potential risk of race conditions. Under no circumstances should the reader and the writer both operate on the same buffer unit at any given point in time. Model checking has been performed to ensure that the model satisfies the writer and reader concurrency local properties (formulae (5.15), (5.16)) and the global invariants (formulae (5.10)-(5.13)).

[Figure 5.18: Asynchronous pipeline net. The write and read transitions operate on buffer1-buffer3 directly; there are no lock places.]

5.4.4 Constructing Adaptation Models

In this step, we construct adaptation models in Petri nets and verify these models against global invariants. In Figure 5.19, we show the adaptation controller net for the overlap adaptation from the synchronized pipeline to the asynchronous pipeline. The adaptation process includes three phases: in source, writer adapted, and in target. The first adaptive transition, adapt writer, identifies the quiescent states to be "when the writer is ready to write a data unit to the pipeline". Firing the transition has the following effects:

1. Disable the input to the synchronized pipeline and enable the input to the asynchronous pipeline.
2. Transfer the current input data from the synchronized pipeline to the asynchronous pipeline.
3. Move the current phase to writer adapted.
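The three phases and the two adaptive transitions can be sketched as a small state machine. This is an illustrative sketch with our own names, not the dissertation's adaptation driver code; the adaptive pipeline object would consult such a driver to decide whether writes and reads go to the source or the target pipeline:

```java
// Illustrative sketch of the adaptation controller's three phases.
// adaptWriter/adaptReader mirror the two adaptive transitions.
public class AdaptationDriver {
    public enum Phase { IN_SOURCE, WRITER_ADAPTED, IN_TARGET }

    private Phase phase = Phase.IN_SOURCE;

    public synchronized Phase phase() { return phase; }

    // Fired at a quiescent state of the writer: from now on the writer
    // uses the asynchronous (target) pipeline.
    public synchronized void adaptWriter() {
        if (phase != Phase.IN_SOURCE) throw new IllegalStateException();
        phase = Phase.WRITER_ADAPTED;
    }

    // Fired once the source pipeline's buffers are empty: the reader
    // also switches, completing the adaptation.
    public synchronized void adaptReader() {
        if (phase != Phase.WRITER_ADAPTED) throw new IllegalStateException();
        phase = Phase.IN_TARGET;
    }

    public synchronized boolean writerUsesTarget() {
        return phase != Phase.IN_SOURCE;
    }

    public synchronized boolean readerUsesTarget() {
        return phase == Phase.IN_TARGET;
    }
}
```

During the writer adapted phase, writerUsesTarget() is true while readerUsesTarget() is still false, which is precisely the overlap that lets the reader drain the source buffers without blocking the writer.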
After the adapt writer transition is fired, the writer will start to write to the asynchronous (target) pipeline, and the reader will still read from the synchronized (source) pipeline if there is any data remaining in the buffers. The second adaptive transition is the adapt reader transition, which identifies the quiescent states to be "when the buffers in the synchronized pipeline are all empty". Firing this transition will switch the output from the synchronized pipeline to the asynchronous pipeline and move the current phase to in target, thus completing the adaptation process. For the adaptation model, we specify the following adaptation integrity constraint:

- Adaptation integrity constraint: If the adapt writer transition is fired, then eventually the adapt reader transition must also be fired. In LTL,

    □(adapt_writer ⇒ ◇ adapt_reader).    (5.17)

We then model check the adaptation model against the global invariants. During the verification, we found that the no deadlock safety invariant (i.e., ◇ input_empty) was violated by the model. By examining the violation path, we found that when the synchronized (source) pipeline was in the writing state and there was input data available in writer buffer, firing the adapt writer transition moved the input data to the asynchronous (target) pipeline, leaving the synchronized (source) pipeline locked in the writing state, thus violating property (5.17). We modified the model by changing the quiescent states for adapt writer to be "when writer buffer is empty". Model checking results indicated that the new model satisfied all the global invariants. The models and the analysis capabilities enabled us to focus our attention on the design of the adaptation algorithm in order to maximize the program performance during adaptation. The design of the models has the following performance advantages. First, it is an overlap adaptation, i.e.,
the writer adapts to the target model without blocking or flushing the buffers in the pipeline. Second, it minimizes the overhead of state transfer, i.e., it does not need to copy data from the source model to the target model.

[Figure 5.19: Adaptive pipeline adaptation net, connecting the synchronized (source) pipeline net and the asynchronous (target) pipeline net through the adapt writer and adapt reader transitions and the phase places in source, writer adapted, and in target.]

5.4.5 Reifying the Models

In this section, we demonstrate how the Petri net models created above can be used in the development of an adaptive program. We followed the Petri net models to build an adaptive program in Java that switches between synchronization protocols upon adaptation requests. We built two pipelined objects that implement the synchronized and the asynchronous models, respectively. We created an adaptation driver object that implements the adaptation controller net, which sets its state based on the current phase in adaptation. We finally created an adaptive Java pipelined object that comprises the adaptation driver and the synchronized/asynchronous pipelined objects. The adaptive pipelined object delegates read/write requests to the appropriate pipelined object based on the state of the adaptation driver object.

5.5 Related Work

The work presented in this chapter has been significantly influenced by several related projects on formally specifying adaptive program behavior with process algebraic languages. For example, Kramer and Magee [72] have used Darwin to describe the architectural view and used FSP (a process algebraic language) to model the behavioral view of an adaptive program. They used property automata (specified in FSP) to specify the properties to be held by the adaptive program and used LTSA to verify these properties. A quiescent state in their approach refers to the state in which the component to be changed is passive, and all communications with the component initiated by other components have completed.
Their work highlighted the importance of identifying the states in which adaptation may be correctly performed, and it provided insight into the use of model checking to verify adaptation correctness. Allen et al. [2] integrated the specifications for both the architectural and the behavioral aspects of dynamically adaptive programs using the Wright ADL. They used two separate component specifications to describe the behavior of a component before and after adaptation and encapsulated the dynamic changes in the "glue" specification of a connector, thereby achieving separation of concerns. Wright specifications can be converted into a process algebraic language, CSP [58], which can then be statically verified. Canal et al. [19] used LEDA, an ADL that supports inheritance and dynamic reconfiguration, to specify dynamic programs. LEDA is based on the π-calculus [100], a simple and powerful process algebra. The richer, more expressive nature of the π-calculus enables modelers to express dynamic component connections more easily when compared to CSP-based approaches. It is also possible to derive prototypes and interfaces from the specification automatically. Below are some of the key advantages when comparing our approach to the above approaches: (1) The above approaches do not take into consideration the impact of adaptation mechanisms when defining quiescent states, nor do they evaluate the quiescent states in the context of global, high-level requirements. (2) None of the above approaches supports state transfer, which makes it necessary for the programs to wait or even be blocked until a quiescent state is reached. (3) The specifications for adaptive behavior are entangled with the specifications for non-adaptive behavior in the sense that the quiescent states for adaptations are specified as part of the source specifications, instead of as part of the adaptation specifications.
(4) The adaptive actions discussed by all three approaches are simple actions rather than the coordination among concurrent adaptive actions. (5) The above techniques are specific to the type of formalism being used (process algebra), thus making them potentially difficult to generalize to other types of formalisms.

Theorem proving has also been used to ensure critical properties in adaptive software. Kulkarni et al. [76] introduced a transitional-invariant lattice technique that uses theorem proving to show that during and after an adaptation, the adaptive system is always in correct states with respect to satisfying a set of invariants. A transitional-invariant lattice is a single-source, single-sink directed acyclic graph (DAG), where the source node s represents the source program and the sink node t represents the target program. An intermediate node n between s and t represents an intermediate program obtained from s by performing one or more adaptive actions. Let p(n) be the program represented by node n, where n can be s, t, or any intermediate node. They associate s and t with the invariants inv(s) for p(s) and inv(t) for p(t), respectively. They also associate each intermediate node n with a transitional-invariant inv(n) for p(n). They prove that, in the lattice, if there is an edge from node n_i to node n_j, then "p(n_i) satisfies inv(n_i)" implies "p(n_j) satisfies inv(n_j)". As such, if the source program satisfies its invariants, then they conclude that the target program also satisfies its invariants. There are two major differences between our approaches. First, theorem proving is fundamentally different from model checking in that, although it is more powerful in handling certain cases, it requires extensive human intervention during the process. Second, the properties that they discuss are restricted to propositional properties, while we handle both propositional and temporal properties.
5.6 Discussion

In this chapter, we have proposed a model-driven software development process for creating state-based models for adaptive software based on high-level requirements, as well as verifying and validating the adaptive models. Furthermore, we described how to use these models to generate executable adaptive programs.

The concept of quiescent states is similar to fulfillment states introduced in Chapter 3, where a fulfillment state is defined to be a state in which all the obligations of the source behavior are fulfilled, thus making it safe to terminate the source behavior. The "fulfillment state" is a concept at the requirements level, while "quiescent state" is a concept at the design level. A quiescent state may or may not be a fulfillment state depending on the effect of the adaptation: If its effect is to terminate the source behavior, such as the adapt transition in Figure 5.10, then it must be a fulfillment state; otherwise, such as the restrict transition in Figure 5.12, it may not be a fulfillment state.

Our experience shows that the proposed approach has the potential to improve both the development time and reliability of the code. The running adaptive audio streaming example had been originally developed by members of our research lab without the proposed approach. After applying our approach to the example, the original developers found that our approach significantly improved their understanding of the problem, and the new design was clearer than the original one. Our approach reduced the developers' burden by separating different concerns and leveraging automated model construction and analysis tools. In the adaptive Java pipeline example, the proposed approach helped us focus on the design of adaptation algorithms, making it possible for us to build an efficient and verifiably correct adaptive Java pipeline program.
Automated analysis has played an essential role by reducing a significant portion of the burden of verifying the correctness of the adaptation. The discussions in this chapter have focused on the development of new adaptive software. We also investigated the application of the model-based technique to enabling dynamic adaptation in legacy software with assurance, which will be discussed in Chapter 6. In this chapter, we focused our discussion on the verification of global invariants and local properties specified in LTL. These properties are supported by existing tools, such as MARIA. The verification of adaptation variant properties specified in A-LTL will be described in Chapter 7.

Chapter 6

Re-Engineering Software to Enable Adaptation

In this chapter, we extend the MASD approach introduced in Chapter 5 to enable dynamic adaptation in legacy non-adaptive software [139]. We focus on the connection between design models and implementations of adaptive software. There is a gap between existing modeling and implementation-based approaches for adaptation: On the one hand, existing techniques [1, 12, 15, 43, 69, 114, 122, 126] that address adaptation in legacy code rely heavily on developers' experience and common sense rather than leveraging rigorous verification techniques, such as model checking. On the other hand, existing techniques [2, 18, 19, 57, 72, 73, 99, 106, 128, 131] addressing correctness in dynamic adaptation using rigorous software engineering techniques focus on abstract models and do not take the models to their implementations.

In order to bridge the gap between existing modeling and implementation-based approaches for adaptation, we introduce a formal technique to ensure that the adaptation requirements for the software are satisfied by the adaptive software implementation. The key insight is that Unified Modeling Language (UML) models, with formally defined semantics [97],
can be used as an intermediate representation to bridge the gap between adaptive software implementations and the formal models used for adaptive software verification. The design for adaptation can be performed on the UML models by creating adaptation UML models, which can be automatically translated into formal models using existing tools [97] for formal analysis. Our approach has three key characteristics: it is model driven, it is non-invasive to the legacy code, and it enables formal analysis of the resulting adaptive systems for adherence to local invariants and adaptation properties. We leverage and extend several key enabling techniques that we and others developed previously. First, we leverage the MASD approach in Chapter 5 to gain a model-driven approach. Second, we extend an aspect-oriented adaptation enabling technique [139] to achieve non-invasiveness to the legacy code. Third, we leverage and extend a UML formalization framework [97] to reverse-engineer legacy code into UML models and to formalize UML models. The aspect-oriented adaptation enabling technique allows us to insert adaptation-related code into existing software without directly altering the original legacy source code, i.e., it is non-invasive. Non-invasiveness to the source code is important in order to enable the adaptation code and the legacy code to be maintained separately. The assurance in this approach is achieved by model checking the design models and by support for systematic translations between the models and their implementations. Furthermore, our approach allows more flexible adaptation when compared to existing adaptation enabling techniques [115, 121, 139]. Many adaptation techniques addressing legacy code [115, 121, 139] require the points for adaptation to be separated from the code segment changed by the adaptation. As discussed in Section 5.2.3, this constraint may impose unacceptable performance penalties in some adaptation scenarios. Thus, we introduce a
cascade adaptation technique that enables more flexible adaptation, thus allowing adaptation to start at more points in the program.

We have applied our approach to the adaptive Java pipeline program [151] introduced in Chapter 5. We demonstrate our approach by re-engineering a non-adaptive version of the Java pipeline program to become an adaptive version using our proposed approach.

The remainder of this chapter is organized as follows. Section 6.1 overviews earlier work on an aspect-oriented adaptation enabling technique [139] and a metamodel-based UML formalization technique [97]. In Section 6.2, we describe a simplified version of our approach to be used in scenarios where there is only one adaptive component adapting to only one target behavior. A case study is described in Section 6.3. Section 6.4 describes a number of extensions to the simplified technique for more general adaptation scenarios. Section 6.6 summarizes this chapter and discusses limitations and extensions of our approach.

6.1 Background

In this section, we overview two techniques that are extensively leveraged for our proposed approach.

6.1.1 Aspect-Oriented Adaptation Enabling Technique

[Figure 6.1: Aspect-oriented adaptation enabling. Adaptation aspects are woven into the program at development time, yielding an adapt-ready program at compile time; at run time, a process behavior adaptor produces dynamically adaptive program behavior.]

6.1.2 Metamodel-Based UML Formalization Technique

McUmber and Cheng developed a general framework [23] based on mappings between metamodels (i.e., class diagrams depicting abstract syntax) for formalizing a subset of UML diagrams in terms of different formal languages, including Promela [97]. The formal (target) language chosen should reflect and support the intended semantics for a given domain (e.g., mobile computing systems). This formalization framework enables the construction of a set of rules for transforming UML models into specifications in a formal language [23].
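To illustrate the flavor of such mapping rules, a rule can walk the abstract-syntax objects of a Statechart metamodel instance and emit a corresponding fragment in the target language. The sketch below is entirely hypothetical, with our own names and a Promela-like output format; it is not Hydra's actual rule language or generated code:

```java
import java.util.List;

// Hypothetical sketch of a metamodel-driven mapping: Statechart
// transitions (abstract syntax) are emitted as a Promela-like process
// body. Hydra's real rules and output are considerably more elaborate.
public class StatechartToPromela {
    public static final class Transition {
        final String from, event, to;
        public Transition(String from, String event, String to) {
            this.from = from; this.event = event; this.to = to;
        }
    }

    // Emits one labeled branch per transition of the Statechart.
    public static String translate(String name, List<Transition> ts) {
        StringBuilder sb = new StringBuilder("proctype " + name + "() {\n");
        for (Transition t : ts) {
            sb.append("  ").append(t.from).append(": ")
              .append(t.event).append(" -> goto ").append(t.to).append(";\n");
        }
        sb.append("}\n");
        return sb.toString();
    }
}
```

The point of the metamodel framing is that such rules are written once, in terms of the abstract syntax, and then apply mechanically to every model instance.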
The resulting specifications derived from UML diagrams enable either execution through simulation or analysis through model checking, using existing tools. The mapping process from UML to a target language has been automated in a tool called Hydra [97]. We use Hydra to translate the UML diagrams created for adaptation into Promela models to be analyzed by the Spin model checker [12].

6.2 Model-Based Re-Engineering

We now show how the aspect-oriented adaptation enabling technique [139], the MASD process, and the metamodel-based formalization technique can be integrated and leveraged to bridge the gap between adaptation implementations and formal models for adaptive systems. Our approach addresses three phases in the development of adaptive software: the requirements analysis phase, the model design and analysis phase, and the implementation phase. As shown in Figure 6.2, the overall process includes the following four steps: Step (1) occurs in the requirements analysis phase. We perform requirements analysis to elicit a set of global invariants and local properties. Based on these properties, we select from a code base a set of non-adaptive legacy programs P1, P2, ..., Pk, each of which differs from another by one or more segments of code. Step (2) occurs in the model design and analysis phase. The programs are reverse-engineered into UML Statechart diagrams M1, M2, ..., Mk, named steady-state models. These models are then translated into formal models and verified against their local properties using automated tools [97]. Step (3) also occurs in the model design and analysis phase. After the steady-state models are verified, developers must create an adaptation model Mi,j, also in terms of UML Statechart diagrams, for each adaptation from program Pi to Pj. The adaptation models are then translated to formal models and verified using formal analysis tools against global invariants. Step (4) occurs in the implementation phase.
After the adaptation models are verified, they are integrated and translated into adaptive programs. In this step, our approach addresses the following questions: What mechanisms do we use to make legacy code adaptive? Where and what code should be inserted in the legacy code so that the implementation faithfully reflects the adaptation design?

[Figure 6.2: Dataflow diagram for the proposed re-engineering approach. (1) Analyze requirements: local properties and global invariants are elicited, and legacy programs P1, P2, ..., Pk are selected from the code base; (2) translate Java code to UML Statechart diagrams M1, M2, ..., Mk, which a UML formalization tool converts into formal models for a formal analysis tool; (3) design an adaptation model Ai,j for each pair (Mi, Mj); (4) translate the UML models to Java, yielding the adaptive program P.]

We next describe each phase in more detail. For discussion purposes, we focus on the adaptation of one adaptive component from one source behavior to one target behavior. Collaborative adaptive components with multiple potential target programs are discussed in Section 6.4.

6.2.1 Requirements Analysis

In order to re-engineer legacy code, we customize and apply the goal-based requirements analysis for adaptive software introduced in Chapter 4. We use Figure 6.3 to guide our discussion. Assume that the adaptive software is required to achieve a high-level goal G in a set of different domains D1, D2, ..., Dk. The local properties (in LTL) for these domains are R1, R2, ..., Rk, respectively. The set of global invariants (in LTL) for the adaptive software is INV. Also assume that we have a legacy code base comprising a set of non-adaptive legacy programs and the properties associated with each program. After the set of requirements are specified, we query the code base using the local properties and select the set of non-adaptive legacy programs P1, P2,
..., Pk to be used in the domains D1, D2, ..., Dk, respectively. In this context, we assume that the legacy programs for all the requirements already exist. Each of these legacy programs will become a steady-state program in the adaptive program. Next, we determine how the execution domains of the program may change at run time, and how the adaptive program should respond. Consider the case where the program is initially running Pi in domain Di. A change of domain from Di to Dj may warrant an adaptation from Pi to Pj, depending on the cost to develop such an adaptation and the overhead that may be incurred during the adaptation.

[Figure 6.3: Goal model for adaptive software. The top-level goal is refined into subgoals with condition nodes (the domains Di) and requirement nodes (the global invariants and local properties Rj); legacy programs queried from the code base implement the requirements, and dynamic change links connect the domains.]

6.2.2 Design and Analysis

After selecting the set of legacy programs and adaptations, we create adaptation models in UML using the following steps: First, we reverse engineer each legacy program Pi to generate a UML model (Statechart diagram) Mi. Second, we verify the Statechart diagram for each legacy program against its local properties. Third, for each adaptation from Pi to Pj identified in the requirements analysis, we design an adaptation model Mi,j comprising adaptive states and transitions from the source model Mi to the target model Mj. Finally, we translate the adaptation models into Promela models to verify global invariants. The above process is very similar to the MASD process introduced in Section 5.2, except that the models Mi are reverse engineered from existing programs rather than refined from requirements as in forward engineering.
Generate Statechart diagrams

We use a metamodel-based technique [97] to generate the Statechart model Mi for each legacy program Pi. This technique had been previously proposed for formalizing a subset of UML diagrams in terms of different formal languages, including Promela. It is also generally applicable to formalizing the transformation of programs from one language to another. In order to apply this technique, we first define the metamodels for the legacy code (Java in this case) and for UML diagrams. Then we define the rules for the translation from Java programs to UML models in terms of the metamodels. After the rules are defined, the translation from Java programs to UML models can be performed mechanically by a developer and can potentially be automated. We have developed rules for translating the subset of Java that is relevant to our mobile computing and other applications of study. The metamodels and rules are included in Appendix C. Developing the rules for translating the full Java language is non-trivial; ongoing investigations are underway by other groups [68].

Verify local properties for assurance

We use the Hydra tool suite to transform the UML models Mi into Promela models. Then we use the Spin model checker [12] to verify the Promela models against the local properties specified in the requirements analysis. Violations of local properties by the models may indicate one or more of the following cases: (1) The legacy programs Pi were implemented incorrectly, (2) the UML models Mi have not been generated to actually reflect the legacy programs Pi, or (3) the local properties Ri are specified incorrectly. We then inspect the above artifacts to determine which one is at fault, and the corresponding erroneous artifacts must be revised until the models conform to the properties.

Design adaptation models

After the UML models are generated, we design an adaptation model Mi,j for each required adaptation.
Assume the program is required to adapt from running Pi (the source) to running Pj (the target), and the corresponding Statechart diagrams are Mi and Mj, respectively. We create an adaptation Statechart model Mi,j from Mi to Mj by adding adaptive states and transitions to Mi and Mj such that the global invariants are preserved before, during, and after adaptation in Mi,j. This task is achieved by applying the MASD process introduced in Chapter 5 to Statechart diagrams in three steps: (1) Identify the quiescent states in the source model Mi; (2) identify the entry states in the target model Mj; (3) determine the state transformation from the quiescent states to the entry states.

The adaptation model design is considered correct if and only if the model satisfies the global invariants specified in the requirements analysis. Although, theoretically, the quiescent states and the entry states can potentially be any states in the models, certain heuristics can be followed to keep the design clean and simple. First, since an adaptation can only start from a quiescent state, quiescent states must be on paths that the program frequently executes (the frequency may be monitored by instrumentation of the source code); otherwise, there may be a long delay before the adaptation may start. Second, the conditions for the quiescent states should be kept simple. For example, the conditions at the entry point of a loop are usually simpler than the conditions in the body of the loop. Example conditions include loop invariants, pre/post conditions, etc. Similarly, conditions for the entry states in the target program need to be simple as well. However, the entry states are not required to be on a frequently executed path. The adaptive states and transitions usually include saving the states of the source model, transforming states of the source model to states of the target model, and restoring the states in the target model. The quiescent
/entry state information that needs to be saved/restored usually consists of the values of the live variables, i.e., those that have been defined and may be used before they become dead (i.e., no longer used) or are redefined in the model. A state transformation defines a function from the set of variables in the quiescent state to the set of variables in the entry states. A necessary condition for a valid state transformation function is that the output must satisfy the conditions for the entry states given that the input satisfies the conditions for the quiescent states.

Verify global invariants for assurance. An adaptation model must be verified against the global invariants. The process is similar to that used for verifying the local properties. Violations of the global invariants indicate that the adaptation model or the global invariants are incorrect. In either case, we must return to previous steps to revise the corresponding artifacts until the global invariants are satisfied by the model.

6.2.3 Code Generation

After the adaptation model is constructed, we implement it in Java to enable the legacy code to be adaptive. We assume that each non-adaptive legacy program Pi is initially encapsulated in a Java class. First, we create an adaptive Java class such that the class implements the adaptive behavior described by the adaptation model. Second, we replace invocations to the constructors of the non-adaptive classes in the legacy code with those of the adaptive class using an aspect-oriented technique. The remainder of the legacy program remains unchanged. Next, we describe each step in further detail.

Construct adaptive classes. We implement adaptive programs by systematically following the adaptation models. Since the UML models were initially generated from the programs, we assume the traceability links between the UML models and the original Java programs are already established in the generation process.
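As a minimal sketch of the state-transformation contract stated above (the state classes, field names, and conditions here are hypothetical illustrations, not artifacts of our case studies), a transformation from a quiescent state of a buffered source program to the entry state of a target program might look like the following; the transformation is valid because whenever the input satisfies the quiescent-state condition, the output satisfies the entry-state condition by construction:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical state classes; in the approach, the quiescent and entry
// conditions come from loop invariants and pre/post conditions identified
// during adaptation model design.
class QuiescentState {                 // saved at the quiescent state
    int[] buffer;                      // condition: 0 <= count <= buffer.length
    int count;
}

class EntryState {                     // restored at the entry state
    List<Integer> pending;             // condition: pending holds the unread data
}

public class StateTransform {
    // A valid transformation: whenever the input satisfies the
    // quiescent-state condition, the output must satisfy the
    // entry-state condition.
    static EntryState transform(QuiescentState s) {
        if (s.count < 0 || s.count > s.buffer.length)
            throw new IllegalStateException("quiescent-state condition violated");
        EntryState t = new EntryState();
        t.pending = new ArrayList<>();
        for (int i = 0; i < s.count; i++)
            t.pending.add(s.buffer[i]);  // carry the unread data across
        return t;                        // entry-state condition holds by construction
    }
}
```

The explicit check on the input makes the precondition of the transformation visible; in the actual approach this condition is discharged by choosing the quiescent state so that its condition is simple to state and verify.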
Traceability links are the mappings between states in the UML models and statements in the programs. These links can be stored in the UML models as annotations recording the line numbers in the programs. We will introduce a concrete example of traceability links in Section 6.3. We identify the locations and conditions in the source (resp. target) program that correspond to the quiescent (resp. entry) states in the adaptation model. At the locations corresponding to the quiescent state, we insert code to test whether an adaptation request has been received. If so, then the source program execution will be suspended and a state object comprising the current state information will be created. The state object is then transferred to the location in the target program. During the transfer, the state object is transformed from a state object for the source program to a state object for the target program. At the location in the target program, the state of the target program is restored from the state object and the execution is resumed from that location.

Next, we describe a challenge that arises in the implementation of adaptive transitions from quiescent states to entry states and introduce a cascade adaptation mechanism to address this challenge.

The Challenge. A simple case of such transitions is illustrated in Figure 6.4, where the code segment to be changed is enclosed by the method bar() (lines 1-4), and the quiescent/entry state for adaptation is "outside" of the method bar() (line 8). A number of techniques have been proposed to address such types of simple transitions, including the code hot-swapping mechanism [5], the strategy design pattern approach [49], etc.

    source program S:

    01 bar( ){
    02
    03   //code block 2
    04 }
    05
    06 foo( ) {
    07   while (true) {
    08     // quiescent state
    09     bar();
    10
    11     // code block 1
    12   }
    13 }
    14
    15 main( ) {
    16   foo( );
    17 }
    target program T:

    01 bar'( ){
    02
    03   //code block 2'
    04 }
    05
    06 foo'( ) {
    07   while (true) {
    08     // entry state
    09     bar'();
    10
    11     // code block 1'
    12   }
    13 }
    14
    15 main( ) {
    16   foo'( );
    17 }

Figure 6.4: An example of the simple case of quiescent/entry states

However, in a more general case, the code segments that are to be changed by the adaptation may be scattered across the source and the target programs. As illustrated in Figure 6.5, the quiescent/entry states are within the code segment changed by the adaptation. We believe that, in legacy code, scattering is the norm rather than the exception, since we have encountered a number of adaptation scenarios of this nature in our research, including the adaptive Java pipeline [145]. The code hot-swapping or the strategy pattern approach will not work in the general case, since there is no location in the code where a code hot-swap will have the intended behavior.

    source program S:

    01 bar ( ){
    02
    03   while( condition ) {
    04     ..........
    05     //code block 3
    06     ..........
    07     //quiescent state
    08     ..........
    09     //code block 4
    10   }
    11 }
    12
    13 foo ( ) {
    14   while (true) {
    15     ..........
    16     // code block 1
    17
    18     bar ();
    19
    20     // code block 2
    21   }
    22 }
    23
    24 main ( ) {
    25   foo ( );
    26 }

    target program T:

    01 bar' ( ){
    02   while ( condition' ) {
    03     //code block 3'
    04     ..........
    05     //entry state
    06     ..........
    07     //code block 4'
    08   }
    09 }
    10
    11 foo' ( ){
    12   while (true) {
    13     ..........
    14     // code block 1'
    15
    16     bar' ();
    17
    18     ..........
    19     // code block 2'
    20   }
    21 }
    22
    23 main ( ){
    24   foo'( );
    25 }

Figure 6.5: An example of the general case of quiescent/entry states

The Cascade Adaptation Mechanism. In order to handle the general case, we propose a cascade adaptation mechanism in which the program control first cascades outwards (outbound cascade) from the quiescent state in the source program,
then cascades inwards (inbound cascade) to reach the entry state in the target program. As shown in Figure 6.6, the program control first cascades from the quiescent state (line 7) of the inner method outwards until it reaches a location (line 24) outside the code segment to be changed (lines 1-21). After a state transformation from the source program to the target program, the control cascades inwards until it reaches the entry state of the target program.

We implement the outbound cascade using the exception handling mechanism in Java. First, we insert code at the location for the quiescent state (line 7) to test for adaptation requests. If an adaptation request has been received, then the source program creates an exception object SourceState and stores the local state information of the method bar() in SourceState. Then SourceState is thrown to the method foo() (line 17), which catches SourceState and incrementally records its own local state information in the object. Finally, the main() method catches SourceState (line 24). After transforming SourceState to TargetState, the program control is transferred to the target program.

In order to implement the inbound cascade in the target program, we transform the program into a different target program T' such that its initial state is equivalent to the entry state of the target program. This is achieved by (1) constructing the control flow diagram for the target program, (2) designating the entry state as the initial state of the diagram, and (3) regenerating the program from the diagram. When T' is invoked, the program control will jump to the state equivalent to the entry state of the target program. The target state object is then passed inwards as a parameter (line 35 → line 18 → line 3 in Figure 6.6).

Enable adaptation in legacy code. To enable adaptation in legacy programs, we replace calls to the constructors of the non-adaptive classes with those of the adaptive class.
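The outbound and inbound cascades described above can be sketched in miniature as follows. Every name in this sketch is illustrative (it is not the generated code of any of our case studies), and the state transformation is reduced to combining two saved integers for brevity:

```java
// Outbound cascade via exceptions: the quiescent state throws a state
// object, each enclosing frame enriches it on the way out, and main()
// transforms it before re-entering the target with the state as a
// parameter (inbound cascade).
class SourceState extends Exception {
    int barLocal;    // saved at the quiescent state in bar()
    int fooLocal;    // added by foo() as the exception propagates outwards
}

class TargetState {
    int resumeValue; // produced by the state transformation in main()
}

public class Cascade {
    static boolean adapting = true;  // set when an adaptation request arrives

    static void bar(int x) throws SourceState {
        // ... quiescent state reached ...
        if (adapting) {
            SourceState s = new SourceState();
            s.barLocal = x;          // outbound cascade starts here
            throw s;
        }
    }

    static void foo() throws SourceState {
        int y = 42;
        try {
            bar(7);
        } catch (SourceState s) {
            s.fooLocal = y;          // record foo()'s local state
            throw s;                 // keep cascading outwards
        }
    }

    // State transformation: SourceState -> TargetState.
    static TargetState transform(SourceState s) {
        TargetState t = new TargetState();
        t.resumeValue = s.barLocal + s.fooLocal;
        return t;
    }

    // Transformed target T': its initial state is the entry state, and the
    // target state object is passed inwards as a parameter.
    static int barPrime(TargetState t) {
        return t.resumeValue;        // resume execution from the entry state
    }

    public static void main(String[] args) {
        try {
            foo();
        } catch (SourceState s) {    // caught outside the changed code segment
            System.out.println(barPrime(transform(s)));  // prints 49
        }
    }
}
```

Note that using a checked exception as the carrier forces every enclosing frame in the changed code segment to participate in the cascade, which is exactly the behavior the mechanism requires.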
Manually identifying the construction statements in the legacy code and modifying the code directly is undesirable. First, there may be numerous locations where the objects are constructed, making the manual approach tedious and error prone. Second, the adaptation concern will be entangled with the legacy code, making future maintenance difficult. Therefore, we apply the aspect-oriented technique to perform the code replacement. We define pointcuts to identify the calls to the constructors of the non-adaptive class. Then we use an around advice to replace them with calls to constructors of the adaptive class.

    source program S:

    01 bar ( ) {
    02
    03   while ( condition ) {
    04     ..........
    05     //code block 3
    06     ..........
    07     //quiescent state        <- SourceState thrown here (outbound cascade)
    08     ..........
    09     //code block 4
    10   }
    11 }
    12
    13 foo ( ) {
    14   while (true) {
    15     // code block 1
    16
    17     bar ();                  <- SourceState caught and re-thrown
    18
    19     // code block 2
    20   }
    21 }
    22
    23 main ( ) {
    24   foo ( );                   <- SourceState caught; transformed to TargetState
    25 }

    transformed target program T':

    01 bar'' (State TargetState) {
    02
    03   // entry state             <- inbound cascade ends here
    04   ..........
    05   //code block 4'
    06
    07   while ( condition' ) {
    08     ..........
    09     //code block 3'
    10     ..........
    11     //code block 4'
    12   }
    13 }
    14
    15
    16 foo'' (State TargetState) {
    17
    18   bar'' (TargetState);
    19   ..........
    20   // code block 2'
    21
    22
    23   while (true) {
    24     ..........
    25     // code block 1'
    26
    27     bar' ();
    28
    29     ..........
    30     // code block 2'
    31   }
    32 }
    33
    34 main ( ) {
    35   foo'' (TargetState);       <- inbound cascade begins after the state transformation
    36 }

Figure 6.6: The cascade adaptation mechanism

For example, we can define a pointcut FOO that identifies the constructor of a non-adaptive class Foo as follows:

    public pointcut FOO(): call(Foo.new(..
));

The following around advice definition replaces the constructor of class Foo identified by FOO with the constructor of an adaptive class Bar:

    Foo around(): FOO() {
        return new Bar();
    }

The objects of the adaptive class (Bar) can then be used throughout the legacy program in the same way as the non-adaptive objects (Foo), except that they are capable of performing the designed adaptation. By using the aspect-oriented approach, we do not directly modify the legacy code, thus separating the adaptation concerns from the non-adaptation concerns.

6.3 Case Study: Adaptive Java Pipeline Program

In this section, we demonstrate the process of re-engineering legacy programs using the Java pipeline example introduced in Section 5.4.

6.3.1 Requirements Analysis

In Section 5.4, we introduced the requirements analysis of the Java pipeline program with the MASD approach. The approach introduced in this chapter differs from the MASD approach in the way steady-state models and steady-state programs are obtained. For completeness of the example, we repeat the requirements analysis step here. We apply the goal-based requirements analysis to the Java pipeline program. The program is required to achieve the high-level goal: to enable efficient pipelined data transmission between a writer thread and a reader thread in Java. We create a top-level goal G in a goal model as shown in Figure 6.7.

Specify global invariants. We have discussed the global invariants for the adaptive Java pipeline program in Section 5.4. For convenience, we repeat the specifications here. The set of global invariants serve as a contract between the adaptive
Figure 6.7: Goal model for the adaptive Java pipeline

pipeline program and its clients, and therefore must hold regardless of adaptations. The global invariants are as follows:

o Safety invariant: no deadlock. The program must never deadlock, i.e., the pipeline should continue to read from the input data buffer until the input buffer becomes empty. Or, simply stated, the input buffer must finally become empty. In LTL,

    ◇ input_empty.    (6.1)

o Liveness invariant: no data loss. All data input to the pipeline must eventually be output from the pipeline. In LTL,

    □(input(x) ⇒ ◇ output(x)).    (6.2)

o Invariant: no erroneous output. The pipeline must not output any data that is not a part of the input data. In LTL,

    (¬output(x) U input(x)) ∨ □¬output(x).    (6.3)

o Invariant: data ordering. The order in which data are read from the pipeline must match the order in which data are written to the pipeline, i.e., no out-of-order transmission. In LTL,

    (¬input(x) U input(y)) ⇒ (¬output(x) U output(y)).    (6.4)

We create a requirement node under the top-level goal to include these global invariants in the goal model.

Specify local properties. Again, we repeat the analysis of the local properties of the adaptive Java pipeline program that we discussed in Section 5.4. The adaptive pipeline program executes in a high CPU workload domain and a low CPU workload domain. In the high CPU workload domain,
we require the program to operate in the synchronous mode, which has a lower level of concurrency but is less CPU intensive. The writer and the reader are not allowed to operate simultaneously. In addition to the global invariants, we have the following local property for this domain:

o Local property: mutual exclusion. The read and the write operations must not be enabled simultaneously at any time. In LTL,

    □ ¬(write_enabled ∧ read_enabled).    (6.5)

In the low CPU workload domain, we require the program to operate in the asynchronous mode, which is more CPU intensive but has higher concurrency. In this mode, we require maximal concurrency, i.e., when the buffer is not full (or empty), the write (or read) operation must be enabled.

o Local property: writer concurrency. The write operation must be enabled when the buffer is not full. In LTL,

    □(¬full ⇒ write_enabled).    (6.6)

o Local property: reader concurrency. The read operation must be enabled when the buffer is not empty. In LTL,

    □(¬empty ⇒ read_enabled).    (6.7)

As shown in Figure 6.7, we create an OR-refined subgoal for each domain in the goal model. Under each subgoal, we create a requirement node to include local properties and a condition node to include domain conditions.

Query the Code Base. In the existing code base, we have two different implementations for the Java pipeline: a synchronized implementation comprising the sync.PipedInput and sync.PipedOutput classes, and an asynchronous implementation comprising the async.PipedInput and async.PipedOutput classes. The descriptions for these two implementations indicate that they are appropriate candidates for the high CPU load domain and the low CPU load domain, respectively.

6.3.2 Design and Analysis

This section describes the model design and analysis for the adaptive Java pipeline example.

Generate state diagrams. We use the metamodel-based approach to translate the Java classes into Statechart diagrams.
The piped input and output classes in the legacy program essentially implement a read() method for input and a write() method for output. The implementations of these two methods are largely symmetric. For brevity, we use the read() method in the piped input classes (synchronized and asynchronous) as an example to illustrate the procedure. The translation includes two substeps: first, we translate each Java program into an equivalent Java program in a canonical form; then we apply the metamodel-based model translation technique to translate the canonical form Java program into a Statechart diagram. The same procedure is applied to the write() method and all other methods in the Java classes.

Substep 1: Convert Java code into canonical form. We translate a Java program into an equivalent program in a canonical form in order to reduce the number of different cases we have to handle in the metamodel-based translation. First, we translate all "while" loops and "for" loops in the program into "while(true)" loops and "if" blocks. Figure 6.8 (a) shows the translation of a while loop, where the loop body is copied from line 2 in the non-canonical form program to line 3 in the canonical form program. Figure 6.8 (b) shows the translation of a for loop, where the loop body is copied from line 2 in the non-canonical form program to line 4 in the canonical form program. Second, we perform an inline operation on method invocations in the program to flatten the syntactic structure of the program. Note that this step currently does not handle recursion. Figure 6.9 shows the canonical form conversion of the read() method of the sync.PipedInput class. The method invocation (line 3) in the non-canonical form is inlined in the canonical form (lines 4-21 (L1)). The while loop in the non-canonical
The while loop in the non-canonical 114 01 while (condition){ 01 while(true){ 02 [body block] 02 if (condition){ 03 l 03 [body block] 04 }else 05 break; 06 } non-canonical form canonical form (a) whi le loop conversion 01 for (x; y; z){ 01 X: 02 [body block] 02 *while (true){ 03 } 03 if (y){ 04 [body block] 05 z; 06 } else 07 break 08 } non-canonical form canonical form (D) f c-r loop conversion Figure 6.8: Java code canonical form conversion form (lines 6 -16) is converted to the. while (true) loop in the canonical form (lines 24— 39 (1.2)). substep 2: Translate Java code into Statechart diagrams. We use the metamodel—based approach to translate the canonical form Java programs into State— chart diagrams. The translation of the read() method of the sync . PipedInput class is illustrated in Figure 6.10, where Figure 6.10 (a) shows the canonical form Java. program that we constructed in the previous substep, and Figure 6.10 (b) shows the corresponding Statechart diagram created with IBM Rational XDE [62]. In the Statechart diagram. each state corresponds to a point in the program, where a point is defined to be the location between two lines of code. In Figure 6.10 (a) we. have labeled these points with bold comment lines prefixed with “//state:”. The dotted lines between Figure 6.10 (a) and Figure 6.10 (b) show the traceability links between 115 01.public synchronized int read(byte b[], 30 int ret = buffer[out++] & OxFF; 31 if (out >= buffer.1ength) { 32 out = O; 33 l 34 if (in == out) { 35 35 in = -1; 37 l 38 return ret; 39 } (a) non-canonical form 02 int off, int len) 03 c = read(); 04 b[off] = (byte) c: 05 int rlen = 1; 06 while ((in >= 0) 88 (--len > 0)) { O7 b[off + rlen] = buffer[out++] 08 rlen++; 09 if (out >= buffer.1ength) { 10 out = 0; 11 } i: if (in —- out) { loopL1 14 in = -1; 15 } 16 } 17 return rlen; 18 } 19 20 public synchronized byte read() 21 while (in < 0) { 22 ...... 23 notifyA11(); 24 try 1 // 25 wait(1000); 26 } catch ( ..) { ( 27 ...... 
    28     }
    29   }                          // loop L2
    30   int ret = buffer[out++] & 0xFF;
    31   if (out >= buffer.length) {
    32     out = 0;
    33   }
    34   if (in == out) {
    35     ......
    36     in = -1;
    37   }
    38   return ret;
    39 }

    (b) canonical form:

    01 public synchronized int read(byte b[],
    02         int off, int len) {
    03
    04   while (in < 0) {           // begin inlined read()  (L1)
    05     ......
    06     notifyAll();
    07     try {
    08       wait(1000);
    09     } catch (...) {
    10       ......
    11     }
    12   }
    13   int ret = buffer[out++] & 0xFF;
    14   if (out >= buffer.length) {
    15     out = 0;
    16   }
    17   if (in == out) {
    18     ......
    19     in = -1;
    20   }
    21   int c = ret;               // end inlined read()
    22   b[off] = (byte) c;
    23   int rlen = 1;
    24   while (true){              // converted while loop  (L2)
    25     len--;
    26     if ((in >= 0) && (len > 0)) {
    27       b[off + rlen] = buffer[out++];
    28       rlen++;
    29       if (out >= buffer.length) {
    30         out = 0;
    31       }
    32       if (in == out) {
    33         ......
    34         in = -1;
    35       }
    36     } else
    37       break;
    38   }
    39   return rlen;
    40 }

Figure 6.9: Before and after canonical form conversion of sync.PipedInput.read()

the points in the Java program and the states in the Statechart diagram. We have included more details of the metamodel-based model translation rules in Appendix C. We generate a Statechart diagram for each of the Java pipeline classes. Every model represents a certain kind of behavior, and thus is considered a steady-state model.

Verify steady-state models. We use the Hydra tool suite to transform the steady-state models to Promela models. Then we use the Spin model checker [12] to verify the Promela models against the local properties specified in Section 6.3.1. In this example, we did not find errors in the models, and therefore proceed to the next step.

Design adaptation models. Next, we design the Statechart adaptation models. Figure 6.11 shows the adaptation model for the read() method of the piped input, where Figure 6.11 (a) and Figure 6.11 (b) represent the synchronized and asynchronous steady-state models, respectively. The bold-lined states and transitions connecting Figure 6.11 (a) and Figure 6.11 (b) represent adaptive states and transitions. The shaded states in the steady-state models are the quiescent state and the entry state. We chose the state finished?
in the source model to be the quiescent state because (1) it is in a loop that is frequently executed, and (2) its condition is relatively simple to specify when compared to other states in the same loop. We chose the state more? in the target model to be the entry state because its position in the target is similar to that of the state finished? in the source, thus simplifying the state transformation. The adaptation procedure includes three transitions: (1) test the adaptation request and save the current state, (2) transform the saved state of the source model to a state of the target model, and (3) restore the state in the target model.
Figure 6.10: Statechart translation of sync.PipedInput.read(). (Figure 6.10 (a) shows the canonical form Java code with the points labeled //state: read_entered, read_entry, reader_sleeping, read_first, circle_out, reset_in, read_rest, finished?, read_more, and read_exit; Figure 6.10 (b) shows the corresponding Statechart diagram, whose transitions are labeled event [condition]/action; dotted lines indicate traceability links, and bolding indicates adaptive states and transitions.)

The adaptation model for the write() method of the piped output can be constructed similarly.

Verify adaptation models. We use the Hydra tool suite to transform the adaptation models to Promela models. Then we use the Spin model checker [12] to verify the Promela models against the global invariants specified in Section 6.3.1. In this example, we did not find errors in the models, and therefore successfully completed the model design and analysis for the adaptive Java pipeline program.

6.3.3 Code Generation

After the adaptation models are created, we generate Java code for the adaptation. Two techniques are used for code generation: the cascade adaptation mechanism and aspect-oriented adaptation enabling.

Cascade adaptation mechanism. We create two adaptive classes, AdaptivePipedInput and AdaptivePipedOutput, for the piped input and output, respectively. Figure 6.12 (a) shows the read() method of the AdaptivePipedInput class. The class comprises an object of the sync.PipedInput class (syncInput) and an object of the async.PipedInput class (asyncInput). The read() method of AdaptivePipedInput invokes the read() method of one of the objects based on its current state (lines 5 and 15). The invocation of syncInput.read() (line 8) (the source) is placed in a try block since we use the exception handling mechanism to implement the cascade adaptation.
When an adaptation occurs during the invocation of the read() method, a state object of the type SyncInputState will be thrown as an exception (line 8) and caught by the catch block (lines 10-13), where the state object is stored in the variable outgoingState.

Figure 6.11: Adaptation model for the piped input class. (Figure 6.11 (a) is the source model for the piped input (synchronized piped input); Figure 6.11 (b) is the target model (asynchronous piped input). The bold adaptive transitions test the adaptation request and save the state ([adapting]/SaveState), transform the state (/TransformState), and restore the state in the target; the shaded states finished? and more? are the quiescent and entry states.)

At lines 20-21, a state object, incomingStat, for the asyncInput is created based on outgoingState, and at line 22, incomingStat is passed to the read() method of asyncInput (the target) as an argument.
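The transforming constructor invoked at lines 20-21 is not shown in the figure; a plausible sketch is the following, where the field names follow the state objects of Figure 6.12 but the copying logic is our assumption rather than the generated code. Because the quiescent state finished? and the entry state more? sit at structurally similar points in the two read() loops, the transformation reduces to copying the live variables:

```java
// Sketch of the state-transforming constructor. SyncInputState is the
// state object thrown at the quiescent state (cf. Figure 6.12 (b));
// AsyncInputState is restored at the entry state (cf. Figure 6.12 (c)).
class SyncInputState extends Exception {
    int in, len, off, rlen;
    byte[] b;
}

class AsyncInputState {
    int len, off, rlen;
    byte[] b;

    AsyncInputState(SyncInputState s) {
        // The two loops are structurally similar, so the transformation
        // is essentially a field-by-field copy of the live variables.
        this.len  = s.len;
        this.off  = s.off;
        this.rlen = s.rlen;
        this.b    = s.b;
    }
}

public class StateObjects {
    public static void main(String[] args) {
        SyncInputState s = new SyncInputState();
        s.len = 5; s.rlen = 2; s.off = 1;
        s.b = new byte[]{10, 20};
        AsyncInputState t = new AsyncInputState(s);
        System.out.println(t.len + " " + t.rlen);  // prints "5 2"
    }
}
```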
    01 public synchronized int read(byte b[],
    02         int off, int len){
    03   InputState outgoingState = null;
    04   while(true){
    05     if(!adapting && in_sync){
    06       try{
    07         int ret =
    08           syncInput.read(b, off, len);
    09         return ret;
    10       }catch(SyncInputState stateObj){
    11         outgoingState = stateObj;
    12         adapting = true;
    13       }
    14     }else if(!adapting && in_async){
    15       int ret = asyncInput.read(b,off,len);
    16       return ret;
    17     } else if(adapting){
    18
    19
    20       AsyncInputState incomingStat =
    21         new AsyncInputState(outgoingState);
    22       int ret = asyncInput.read(incomingStat);
    23       return ret;
    24     }
    25   }
    26 }

    (a) Java code for AdaptivePipedInput.read()

    01 if(adapting()){
    02   SyncInputState stateObj
    03     = new SyncInputState();
    04   stateObj.in = in;
    05   stateObj.len = len;
    06   stateObj.off = off;
    07   stateObj.rlen = rlen;
    08   stateObj.b = b;
    09   throw stateObj;
    10 }

    (b) Java code at quiescent state of sync.PipedInput.read()

    01 public int read(
    02     AsyncInputState stateObj){
    03   byte b[] = stateObj.b;
    04   int len = stateObj.len;
    05   int rlen = stateObj.rlen;
    06   int off = stateObj.off;

    (c) Java code at entry state of async.PipedInput.read()

Figure 6.12: The read() method for the adaptive piped input class

Since we have chosen the state finished? in the synchronized input model as the quiescent state, we follow the traceability links in Figure 6.10 and locate the point corresponding to the quiescent state (line 34 in Figure 6.10 (a)). We insert Java code at this line to implement the adaptation transitions. The code inserted is shown in Figure 6.12 (b). If an adaptation request is present (line 1), then we create a state object stateObj (lines 2, 3) which saves the current state (lines 4-8). Then stateObj is thrown as an exception (line 9), which will be caught at line 10 of Figure 6.12 (a).

The entry point in the target is identified similarly. Figure 6.12 (c) shows the Java code inserted at the entry point of the read() method of async.PipedInput. It first restores the program state from the input state object
stateObj (lines 3-6), then continues with the other operations in the method thereafter. We follow the same steps introduced above to generate the code for the AdaptivePipedOutput class as well.

Enable adaptation in legacy code. In the legacy Java program that uses the Java pipeline, we use an aspect-oriented approach to replace calls to the constructors of the non-adaptive piped input/output classes with those of the adaptive piped input/output classes. Figure 6.13 shows an aspect definition in AspectJ that enables adaptation in legacy Java programs. The aspect defines two pointcuts, constructInput (lines 5-7) and constructOutput (lines 9-11), to identify the invocations to the constructors of the piped input and output classes, respectively. The aspect also defines two advices, PipedInput (lines 13-15) and PipedOutput (lines 17-19), which, respectively, replace the constructors of the non-adaptive classes with constructors of the adaptive classes. The aspect is then woven into the legacy program using the AspectJ compiler ajc [132]. After the above steps, the legacy Java program using the non-adaptive pipeline is transformed into a program using the adaptive pipeline, whose behavior has been verified against the set of requirements elicited in the requirements analysis.

6.4 Extensions

In this section, we discuss a number of extensions to the technique introduced in Section 6.2 to handle more general adaptation scenarios.

    01 package Main;
    02
    03 public aspect Adaptenabling {
    04
    05   public pointcut constructInput():
    06     call(sync.PipedInput.new(..)) ||
    07     call(async.PipedInput.new(..));
    08
    09   public pointcut constructOutput():
    10     call(sync.PipedOutput.new(..)) ||
    11     call(async.PipedOutput.new(..
    ));
    12
    13   PipedInput around(): constructInput() {
    14     return new AdaptivePipedInput();
    15   }
    16
    17   PipedOutput around(): constructOutput() {
    18     return new AdaptivePipedOutput();
    19   }
    20 }

Figure 6.13: AspectJ code for adaptation enabling

6.4.1 Collaborating Adaptive Components

We apply role-based design [135] to handle collaborations among adaptive components during adaptation. In Section 6.2, we restricted our discussion to independent adaptations of a single thread. In a distributed system, collaborating components in multiple threads are usually required to adapt in a coordinated fashion. Examples include the adaptation of an encoder filter and a decoder filter on the sender and receiver processes, respectively. We create an adaptation collaboration Statechart diagram (ACSD) with a number of predefined roles (e.g., encoder, decoder, etc.) to model each such kind of collaboration. Each collaborating adaptive component registers with the ACSD as a role, and the ACSD coordinates the adaptation of different roles in the collaboration by sending and receiving messages. The states in an ACSD represent different stages in the collaborative adaptation process; the transitions of an ACSD represent allowable adaptation sequences of the roles. The transitions are triggered by the receipt of messages from the roles. After both the adaptation models and the ACSD are designed, the ACSD is verified along with the adaptation models. The ACSD is then translated into an adaptation collaboration driver class (centralized) with three primitive methods: (1) registering a role, (2) getting the current state of the adaptation, and (3) triggering an adaptive transition to the next state of the adaptation.

6.4.2 Adapting to Multiple Target Programs

In Section 6.2, we restricted our discussion to adaptations from a source program to only one target program. Our approach can be extended to support a more general case where a source program may adapt to different target
programs under different triggering conditions. For each target program, we create a separate adaptation UML model and verify the models independently. The adaptations to multiple different target programs may not share the same quiescent states. To support multiple quiescent states in a source program, we create a different type of state object for each quiescent state. The corresponding state transformation routine is then determined based on the type of the state object and the target program.

6.5 Related Work

The work introduced in this chapter is directly related to two areas of research: using formal models for analysis and enabling adaptation in legacy code. The former has already been discussed in Section 5.5. Thus, we focus our discussion on the latter area. Numerous techniques have been proposed to enable dynamic adaptation in legacy, non-adaptive software [47, 121, 139]. Sadjadi et al [121] introduced the transparent shaping (TRAP) technique to generate adaptable programs (i.e., programs whose behavior can be changed at run time) from existing applications. Their approach comprises two steps. In the first step, occurring before run time (i.e., compile time or load time), they use static weaving techniques to weave hooks into the program. These hooks are interceptors inserted in the program that support run-time insertion and removal of adaptive code. In the second step, occurring at run time, a composer manages the insertion and removal of adaptive code in the program using the hooks embedded in the first step. They have developed a Java implementation of the TRAP technique, TRAP/J [122], that uses the AspectJ compiler to enable dynamic adaptation in legacy Java programs. Both our approach and theirs are rooted in the aspect-oriented adaptation enabling technique introduced by Yang et al [139]. However,
our focus is on assurance, while their focus is on the management of adaptation code, i.e., separating adaptation code from non-adaptive code. Their technique can be leveraged by ours to better manage the source code while providing assurance to adaptive software.

6.6 Discussion

In this chapter, we introduced an approach to transform non-adaptive legacy software into adaptive software with assurance. Our approach leverages UML diagrams and formal techniques to provide assurance in adaptation, i.e., satisfying local properties and global invariants. In order to enable assured adaptation, our approach introduces activities on three types of artifacts: the implementation in Java, the UML Statechart diagrams, and the formal models in Promela. The design of the adaptive programs is performed on the UML Statechart diagrams. The analysis for correctness is performed on the formal models. The final adaptive programs are generated for the implementation. The transformation of different types of artifacts is accomplished by using a combination of the metamodel-based technique, the cascade adaptation technique, and the aspect-oriented adaptation enabling technique.

We have made a few assumptions about a number of existing techniques that we leverage. These techniques must be more mature in order for the proposed approach to be practical in general. (1) Reverse engineering: In our approach, we require the extraction of a Statechart diagram from the legacy code. The metamodel-based technique introduced in this chapter handles only a small subset of all possible software. Reverse engineering for a general system is still an open research topic; one example is Bandera [28], which extracts finite-state models from Java source code. (2) Legacy code base: We require a legacy code base comprising a set of non-adaptive legacy programs and the properties associated with each program.
The code base and the properties may not be available in some legacy software systems. (3) Model checking tools: Spin [59] originated in the telecommunications industry [60], but has gained increasing use in other industrial domains involving distributed systems, such as flight systems [12], railway systems [25], and fault-tolerant systems [123]. (4) Code generation tools: We briefly sketched a technique that translates UML adaptation models into Java code using the cascade adaptation mechanism. However, this technique does not handle recursion or exception handling. We have developed guidelines for the transformation between different types of artifacts using the metamodel-based technique. Thus far, the guidelines are intended to be systematically followed by developers, though they are amenable to automation. The technique is not yet automated, and therefore the conformity between the models and the code still largely relies on the developers. The solutions to the above assumptions are much broader than the scope of this chapter or dissertation. Nonetheless, our approach provides a systematic approach to the development of adaptive software in which existing tools are applicable. This approach has been applied to a number of adaptive components for mobile computing applications that we have studied in our research, including the adaptive Java pipeline program and an adaptive forward error correction-based wireless communication program [16, 154]. We also hope that this chapter will help motivate more research on realizing the above assumptions.

Part III

Implementation of Adaptive Software

Chapter 7

Modular Model Checking for Adaptive Software

In this chapter, we introduce a sound approach [145] for modularly verifying whether an adaptive program satisfies its requirements specified in LTL/A-LTL, as a means to provide assurance to adaptive software in the implementation phase.
Compared to existing adaptive program model checking techniques [2, 72], our approach has the following advantages: (1) It reduces the verification cost of adaptive programs by a factor of n, where n is the number of steady-state programs encompassed by the adaptive program. (2) Our verification technique can be applied incrementally to verify an adaptive system, leveraging previously verified results.

In Chapter 5 and Chapter 6, we have proposed using model checking to verify design models against global invariants and local properties specified in LTL for newly designed and re-engineered legacy adaptive software, respectively. Critical properties of an adaptive program need to be verified, including properties local to each steady-state program (local properties), properties that apply during adaptation from one steady-state program to another (transitional properties), and invariant properties that must be held by the adaptive program throughout its execution (global invariants). Since the number n of steady-state programs in an n-plex adaptive program may be large, existing model checking approaches may be too expensive (in terms of time and space complexity) to verify large-scale adaptive programs. In addition, an adaptive program may be developed in a stepwise fashion as in extreme programming [56], where an (n+1)th steady-state program may be incrementally developed after an n-plex adaptive program has been developed and verified; the model checking approach should verify the adaptive program incrementally without repeating the model checking for the entire adaptive program.

This chapter proposes a modular model checking approach to address the above problems. We avoid directly model checking a large adaptive program by decomposing the task into smaller verification modules. First, we verify a
set of base conditions (i.e., the properties that can be verified locally in each steady-state program) using a traditional model checking approach [51, 103, 136, 138]. Second, for each steady-state program P_i, we calculate the guarantees, the necessary conditions for P_i to satisfy its base conditions. Third, for each steady-state program P_i, we calculate the assumptions, the sufficient conditions for P_i to satisfy transitional properties and/or global invariants. Then we prove that all the assumptions and guarantees can be directly or indirectly inferred from the set of base conditions, thus completing the overall verification process.

Compared to existing approaches, our approach reduces the complexity of verifying an adaptive program by a factor of n, where n is the number of steady-state programs in the adaptive program. Moreover, when one steady-state program P_i is modified or a new steady-state program P_{n+1} becomes available after the remainder of the adaptive program has been verified using our approach, the model checking that needs to be repeated is limited to only P_i or P_{n+1} (and/or related adaptations). Also, our approach verifies not only LTL properties, but also A-LTL properties, whose verification, to the best of our knowledge, has not been published elsewhere. In this chapter, we assume that the steady-state programs are given in the form of finite state machines (FSMs) introduced in Chapter 2.

We have proved the correctness of the proposed approach and implemented the algorithms in a prototype model checker AMOEBA (Adaptive program MOdular Analyzer) in C++. We have successfully applied AMOEBA to verify a number of adaptive programs developed for the RAPIDware project [95], including an adaptive TCP routing protocol [130] and the adaptive Java pipeline example [151].

The remainder of this chapter is organized as follows. In Section 7.1,
we introduce the verification problems, using an adaptive TCP routing protocol to illustrate the key verification challenges. Section 7.2 describes a number of basic algorithms and data structures, and their properties, that are used in our approach. Section 7.3 uses the TCP routing example to outline the basic idea of our approach. Section 7.4 formally describes the model checking algorithms and proves their soundness. Section 7.5 describes an example application of our technique. We discuss optimizations, complexity, and limitations of our approach in Section 7.6, and related work is overviewed in Section 7.7. Section 7.8 summarizes this chapter and briefly discusses possible extensions of our proposed approach.

7.1 Specifying Adaptive Systems

In this section, we introduce a simplified version of an adaptive TCP routing protocol [130] that provides a concrete demonstration of adaptive program verification challenges; it is also used to illustrate our proposed solution.

7.1.1 Adaptive TCP Routing

The adaptive TCP routing protocol is a network protocol involving adaptive middleware in which a router balances the network traffic by dynamically choosing the next hop for delivering packets. We consider two types of next-hop nodes: trusted and untrusted. The protocol works in two different modes: safe and normal. In the safe mode, only trusted nodes are selected for packet delivery; in the normal mode, both types are used in order to maximize throughput. Any packet must be encrypted before being transferred to an untrusted node. We consider the program running in the safe and normal modes to be two steady-state programs, P1 and P2, respectively. Figure 7.1 shows the FSM for the adaptive protocol program. For convenience, we assign a unique state name to each state. The upper rectangle in Figure 7.1 illustrates P1. Initially, P1 is in the ready1 state, in which P1 may receive a packet and move to state received1.
At this point, P1 searches for a trusted next hop and moves to state routed1. Then P1 sends the packet to the next hop, goes to state sent1, and returns to the ready1 state. The lower rectangle in Figure 7.1 illustrates P2. The ready2, received2, and sent2 states are similar to those in P1. In state received2, searching for the next hop may return a trusted or untrusted node, and thus P2 moves to the safe2 and unsafe2 states, respectively. From state unsafe2, P2 tests whether the input packet has been encrypted. If so, then P2 goes to state encrypted2; otherwise, it first goes to state unencrypted2, then encrypts the packet and goes to the encrypted2 state. Four adaptive transitions are defined between P1 and P2: a1, a2, a3, and a4. We annotate (in italics) each state with the conditions that are true for the state (only the relevant conditions are shown).

Critical properties of the adaptive program are specified as global invariants in LTL [111]. For the adaptive routing protocol, we require the program not to drop any packet throughout its execution, i.e., after it receives a packet, it should not receive the next packet before it sends the current packet. Formally in LTL:

inv = □(received ⇒ (¬ready U sent)). (7.1)

In addition to global invariants, we also specify local properties for each steady-state program.

[Figure 7.1: Case study: adaptive routing protocol. The figure shows the FSM of P1 (safe mode: states ready1, received1, routed1, sent1) and of P2 (normal mode: states ready2, received2, safe2, unsafe2, unencrypted2, encrypted2, sent2), connected by the adaptive transitions a1–a4; the safe2 state is labeled testsafety ∧ trusted.]

In this example, we require P1 to never use an untrusted (unsafe) next hop. Formally in LTL, we write

LP1 = □(¬unsafe), (7.2)

where unsafe ≡ testsafety ∧ ¬trusted, which is true only in state unsafe2. For P2, we require the system to encrypt a
packet before sending the packet if the next hop is unsafe. Formally in LTL, we write

LP2 = □(unsafe ⇒ (¬sent U encrypted)). (7.3)

For an execution of an adaptive program, if it adapts among the steady-state programs of the adaptive program, then the execution must satisfy the corresponding local properties of the steady-state programs sequentially, in the same order. We discussed this type of property (transitional properties) in Section 3.4.2 and proposed using A-LTL and sequential composition to specify transitional properties. The kind of transitional properties we discuss in this chapter is restricted to those based on one-point adaptation semantics (see Section 3.3.1). That is, we verify that, in an adaptive program with n steady-state programs P1, P2, …, Pn, any execution σ with a sequence of (k−1) steps of adaptations, starting from P_{j1} and going through P_{j2}, …, P_{jk}, satisfies

LP_{j1} ⇀^{Ω1} LP_{j2} ⇀^{Ω2} ⋯ ⇀^{Ω_{k−1}} LP_{jk}, where j_t ≠ j_{t+1}.

For simplicity, in the examples in this chapter we assume Ω ≡ true and write φ⇀ψ instead of φ⇀^{Ω}ψ. However, our approach also applies to cases where Ω can be an arbitrary LTL formula. In the adaptive routing protocol, we express the transitional property that must be satisfied by executions adapting from P1 to P2 with the A-LTL formula LP1⇀LP2, the transitional property that must be satisfied by executions adapting from P1 to P2 and then back to P1 with the A-LTL formula LP1⇀LP2⇀LP1, etc. Note that each different adaptation sequence corresponds to a different transitional property. Since in a general adaptive program there are infinitely many different possible adaptation sequences, the number of possible transitional properties is also infinite.

7.1.2 Verification Challenges

Regarding an adaptive program, we must achieve two verification goals: (1) the global invariants hold for the adaptive program regardless of adaptations,
and (2) when the program adapts within its steady-state programs, the corresponding transitional properties are satisfied.

Model checking techniques determine whether a program satisfies a given temporal logic formula by exploring the state space of the program. Numerous model checking techniques have been proposed to verify various properties of different types of programs. However, to the best of our knowledge, none of the existing verification approaches can be applied to verify adaptive programs efficiently (in terms of time and space complexity), as discussed next.

Global invariants. Allen et al [2] used model checking to verify that an adaptive program adapting between two steady-state programs satisfies certain global properties. While they do not explicitly address adaptations of n-plex adaptive programs (for n > 2), a straightforward extension could be to apply pairwise model checking between each pair of steady-state programs separately. The first drawback of the pairwise extension is that it requires n² iterations of model checking for a program with n steady-state programs. More importantly, this extension is theoretically unsound, since it verifies executions with only one adaptation step, and thus it does not guarantee the correctness of executions with more than one adaptation step. A sound solution proposed by Magee [93], called the monolithic approach [27], treats an adaptive program as a general program and directly verifies the adaptive program against its global invariants. The monolithic approach suffers from the state explosion problem: it is well known that the major limiting factor in model checking is the large amount of memory required by the computation [29]. Moreover, most known efficient model checking algorithms have space complexity O(n log n) [29], where n is the size (i.e., the number of states and transitions) of the program under verification.
Although the example adaptive routing protocol has only two steady-state programs, the number of steady-state programs in an adaptive program can easily become very large in practice, and so can the memory required by the model checking computation. Furthermore, the monolithic approach is not suited for incremental adaptive software development. For example, after the adaptive routing protocol with two steady-state programs is verified, if a third steady-state program for a different condition becomes available through incremental development, then the monolithic approach cannot leverage the existing verification results. Instead, the entire verification must be repeated for the adaptive program with three steady-state programs.

Transitional properties. The transitional property verification is even more challenging. Since executions of an adaptive program may adapt within its set of steady-state programs in an infinite number of different sequences, the number of different transitional properties is also infinite. Therefore, it is impossible to verify each transitional property separately. To the best of our knowledge, no existing approaches address the transitional property verification problem defined in this chapter.

In order to address the above problems, we propose a sound modular model checking approach for adaptive programs against their global invariants and transitional properties that not only reduces verification complexity by a factor of n, where n is the number of steady-state programs, but also reduces verification cost by supporting verification of incrementally developed adaptive software.

7.2 Preliminary Algorithms and Data Structures

This section introduces preliminary notations, algorithms, and basic data structures that are required by our model checking algorithm.
We define an obligation of a state s of a program P to be a necessary condition that the state must satisfy in order for the program to satisfy a given temporal logic formula p. We describe the Partitioned Normal Form (PNF) that is used to propagate the obligations for analysis. Then we overview the property automaton needed to support the processing of the obligations. Next, we introduce an algorithm that marks each state of a program with a set of obligations. Intuitively, the algorithm first marks the initial states of P with the obligation p; then the obligations of each state are propagated to its successor state(s) in a way that preserves the necessary conditions along the propagation paths. If a state is reachable from the initial states via multiple paths, then the obligations of the state are the conjunction of the necessary conditions propagated to the state along all these paths.

7.2.1 Partitioned Normal Form

The logic closest to A-LTL is the ITL studied by Bowman and Thompson [17]. They have introduced using the Partitioned Normal Form (PNF) [17] to support model checking ITL properties. In our work, we also use PNF to handle obligation propagation. We rewrite each A-LTL/LTL formula into its PNF as follows:

(pe ∧ empty) ∨ ⋁_i (p_i ∧ ○q_i). (7.4)

The expression (pe ∧ empty) depicts the condition when a sequence is empty, i.e., the current state is the last state of the sequence, where pe is a proposition that must be satisfied by the last state. The expression ⋁_i (p_i ∧ ○q_i) depicts the condition when the current state is not the last state. The propositions p_i partition true, and q_i is the corresponding condition that must hold next when p_i holds in the current state. Formally, pe, p_i, and q_i satisfy the following constraints:

- pe and p_i are all propositional formulae.
- The p_i partition true, i.e., ⋁_i p_i ≡ true and p_i ∧ p_j ≡ false for all i ≠ j.

All A-LTL/LTL formulae can be rewritten in PNF by applying PNF-preserving rewrite rules [17]. For any A-LTL formulae φ, ψ,
and Ω, the rewrite rules are defined as follows.

The negation rule:

¬φ = (empty ∧ ¬pe^φ) ∨ ⋁_i (p_i^φ ∧ ○(¬q_i^φ)). (7.5)

The conjunction rule:

φ ∧ ψ = (empty ∧ (pe^φ ∧ pe^ψ)) ∨ ⋁_i ⋁_j ((p_i^φ ∧ p_j^ψ) ∧ ○(q_i^φ ∧ q_j^ψ)). (7.6)

The disjunction rule:

φ ∨ ψ = (empty ∧ (pe^φ ∨ pe^ψ)) ∨ ⋁_i ⋁_j ((p_i^φ ∧ p_j^ψ) ∧ ○(q_i^φ ∨ q_j^ψ)). (7.7)

The next rule:

○φ = (empty ∧ false) ∨ (true ∧ ○φ). (7.8)

The global rule:

□φ = (empty ∧ pe^φ) ∨ ⋁_i (p_i^φ ∧ ○(q_i^φ ∧ □φ)). (7.9)

The eventuality rule:

◇φ = (empty ∧ pe^φ) ∨ ⋁_i (p_i^φ ∧ ○(q_i^φ ∨ ◇φ)). (7.10)

The until rule:

φ U ψ = (empty ∧ pe^ψ) ∨ ⋁_i ⋁_j ((p_i^φ ∧ p_j^ψ) ∧ ○(q_j^ψ ∨ (q_i^φ ∧ (φ U ψ)))). (7.11)

The adapt rule:

φ ⇀^Ω ψ = (empty ∧ false)
    ∨ ⋁_i ⋁_j ((p_i^φ ∧ pe^φ ∧ p_j^{Ω∧ψ}) ∧ ○(q_j^{Ω∧ψ} ∨ (q_i^φ ⇀^Ω ψ)))
    ∨ ⋁_i ((p_i^φ ∧ ¬pe^φ) ∧ ○(q_i^φ ⇀^Ω ψ)). (7.12)

We use superscripts on p_i, q_i, and pe to represent the formula from which they are constructed. Since pe and the p_i are all propositions, their truth values can be directly evaluated over the label of each single state. Therefore, the obligations of a given state can be expressed solely by a next-state formula, which will be the q_i part of a disjunct if the state has successor states, or empty if the state is a deadlock state.

7.2.2 Property Automaton

Bowman and Thompson's [17] tableau construction algorithm first creates a property automaton based on an initial formula φ, and then constructs the product automaton of the property automaton and the program. Their approach is suited for verifying that all initial states satisfy the same initial formula. However, in the assumption computation step, our model checking algorithm requires us to mark program states with necessary/sufficient conditions for different initial states to satisfy different initial formulae. We could create a different property automaton for each formula, but that would duplicate states. Instead, we extend their property automaton construction algorithm to support multiple initial formulae, as follows. A property automaton is a tuple (S, S0,
T, P, N), where

- S is a set of states.
- S0 is a set of initial states, where S0 ⊆ S.
- T : S → 2^S maps each state to a set of next states.
- P : S → proposition represents the propositional condition that must be satisfied by each state.
- N : S → formula represents the condition that must be satisfied by all the next states of a given state.

Property automaton construction algorithm: Given a set of A-LTL/LTL formulae Φ, we generate a property automaton PROP(Φ) with the following features:

- For each member φ ∈ Φ, create an initial state s ∈ S0 such that P(s) = true, N(s) = φ.
- For each state s ∈ S, let the PNF of N(s) be (pe ∧ empty) ∨ ⋁_i (p_i ∧ ○q_i); then
  - if the state s_i' = (p_i, q_i) does not exist in S, create s_i' and add s_i' to S;
  - make s_i' a successor state of s.

A path of a property automaton is an infinite sequence of states s_0, s_1, … such that s_0 ∈ S0, s_i ∈ S, and s_{i+1} ∈ T(s_i) for all i ≥ 0. We say a path s_0, s_1, … of a property automaton simulates an execution path σ_1, σ_2, … of a program if P(s_i) agrees with σ_i for all i (i > 0).¹ We say a property automaton accepts an execution path from an initial state s ∈ S0 if there is a path in the property automaton starting from s that simulates the execution path. The property automaton constructed above, from initial state s ∈ S0, accepts exactly the set of executions that satisfy N(s).²

7.2.3 Product Automaton Construction and Marking

Our algorithm handles the case when each initial state of a program P is required to satisfy a different A-LTL/LTL formula. Given a program P = (S^P, S0^P, T^P, L^P) and a formula mapping function Ψ : S0^P → A-LTL/LTL, we use the following algorithm to mark the states of P with sufficient/necessary conditions in order for P to satisfy Ψ. We can prove that the algorithm works for calculating both assumptions and guarantees.
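Before the formal construction below, the fixpoint flavor of the marking can be sketched in a few lines of Java. This is an illustration of ours, not the AMOEBA implementation: obligations are opaque formula strings, and the hypothetical `step` function stands in for the PNF expansion that would compute, for each disjunct, the next-state obligation q_i. Here `step` simply carries invariant-shaped formulae (prefixed with "[]") forward unchanged, mimicking the global rule; a real implementation would rewrite each formula into PNF and select the disjunct matching the successor's label.

```java
import java.util.*;

// Minimal sketch of obligation marking: propagate obligations from the
// initial states to successors until a fixpoint is reached.
class MarkingSketch {
    // Program transition relation: state name -> successor state names.
    static Map<String, List<String>> T = new HashMap<>();
    // Marking under construction: state name -> set of obligation formulae.
    static Map<String, Set<String>> MARK = new HashMap<>();

    // Hypothetical stand-in for the PNF step: the obligation that an
    // obligation f at the current state imposes on each successor.
    // Invariants "[]..." re-arise unchanged (global rule); everything
    // else is dropped in this toy version.
    static String step(String f) {
        return f.startsWith("[]") ? f : null;
    }

    static void mark(Map<String, String> initialFormulae) {
        for (String s : T.keySet()) MARK.put(s, new HashSet<>());
        initialFormulae.forEach((s, f) -> MARK.get(s).add(f));
        boolean changed = true;
        while (changed) {               // fixpoint iteration
            changed = false;
            for (String s : T.keySet()) {
                for (String f : new ArrayList<>(MARK.get(s))) {
                    String carried = step(f);
                    if (carried == null) continue;
                    for (String succ : T.get(s)) {
                        // Set.add reports whether the marking grew.
                        changed |= MARK.get(succ).add(carried);
                    }
                }
            }
        }
    }
}
```

For example, marking a two-state loop whose initial state carries "[]inv" reaches a fixpoint in which both states are marked with "[]inv". The loop terminates for the same reason given in Theorem 6 below: markings only grow, and the set of candidate formulae is finite.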
A product automaton Q is defined to be a tuple (S^Q, S0^Q, T^Q), where

- S^Q is a set of states with two fields: (pstate, nextform). The pstate field represents a state of program P; the nextform field contains an A-LTL/LTL formula declaring what should be true in the next state.
- S0^Q is a set of initial states, where S0^Q ⊆ S^Q.
- T^Q is a set of transitions, where T^Q ⊆ S^Q × S^Q.

Product automaton construction algorithm: Given a program automaton P and a mapping function Ψ from its initial states to a set of A-LTL/LTL formulae, we generate a product automaton PROD(P, Ψ) as follows:

1. Calculate the relational image Φ of the initial states of P under the mapping function Ψ:

Φ = {φ | ∃s ∈ S0 : φ = Ψ(s)}.

2. Construct a property automaton PROP(Φ) with the set of initial formulae Φ using the property automaton construction algorithm introduced above, and construct the product automaton of P and PROP(Φ) from the above initial states [17, 61, 103].

¹ p agrees with q if and only if p ∧ q ≢ false.
² We ignore the eventuality constraint [103] (a.k.a. self-fulfillment [88]) at this point. However, later steps will ensure eventuality to hold in our approach.

State marking algorithm: Given a program P and an initial formula mapping function Ψ, we first construct the product automaton PROD(P, Ψ); then we construct the marking of each program state, MARK(s), as the set of nextform fields of states in PROD(P, Ψ) that correspond to s. Our marking algorithm generates the function MARK over selected states. We also apply optimizations to the algorithm so that we do not need to store the entire product automaton.

Theorem 6: The marking algorithm will terminate.

Proof 6: The set of markings of each state is a subset of the formulae in the property automaton, i.e., the number of formulae in each marking is finite. Also, the number of states in the property automaton is finite. We repeatedly check all nodes to see whether there is an update to a node. For each iteration, we have two possible outcomes:
two possible outcomes: 1. If no change is made to the markings. then the process terminates. 14‘2 2. If at. least one formula is added to the marking of a state. then we will repeat the process. Since the number of nodes and the number of markings in each node are both finite, and the total number of formulae in all the state markings strictly increases by at least 1 in each update, the process has only a finite number of iterations before it e'uer'itually terminates. The markings generated by the marking algorithm contribute to the assumptions and guarantees in our model checking approach. We prove that the marking of a state contains the necessary conditions that the state must satisfy in order for the program to satisfy \IJ. Theorem 7: For a program P with initial states SO and an initial formula mapping function \11. let MARK be the marking for the states generated using the marking algorithm above, and let (9 be the conjururtion of the marking of a state 8, i.e., 9 r: /\ MARK (s), then P satisfies ‘11 implies that 3 satisfies 9. That is, (P l: x11) => (s e /\ MARK(3)). (7.13) Lemma 1: A path 7r of a property automaton A simulates an erecution 0, then a l: N(7r0) if and only if 01 l: N(7r1). (Note that 7r may or may not be a self- fulfilling path.3) This is a direct result of building the property automaton according to the PNF for a formula. Since 7r simulates a. we establish that 7n agrees with 00. Let N (no) be (pe/\empty)V Vi(p,-/\Oqi). W'ithout losing geI'ierality, we assume. P(7r1) = p1, and 3A path is self-fulfilling [88], if and only if it reaches accepting states infinitely often. 143 N(7r1) = q1. We have a l: N (no) 42> o l: le. Therefm'e. a l: N (no) if and onlyr if 01 ,2 17V(7T1). Lemma 2: A path 7r of a property automaton A simulates an ezrecution a, then o l: N(rr0) ¢:> of l: N(7rz-). (7.14) This is an inductive result of Lemma. 1. Now we prove Theorem 7. Proof 7: Prone by contradiction. Assume that s does not satisfy 6. 
There must exist a path σ from s such that σ ⊭ θ. Since θ = ⋀ MARK(s), there must exist a formula φ ∈ MARK(s) such that σ ⊭ φ. φ ∈ MARK(s) implies that (s, φ) is reachable by a path π in the product automaton starting from some s_0 ∈ S0 and Ψ(s_0). Let σ' be the finite execution corresponding to π. From Lemma 2, we have σ'σ ⊨ Ψ(s_0) ⇔ σ ⊨ φ. Given that the program satisfies Ψ, we have σ'σ ⊨ Ψ(s_0), and thus σ ⊨ φ: a contradiction.

In fact, we can also prove that for a state s, if s satisfies all the formulae in the marking of s, then all paths starting from s_i ∈ S0 going through s satisfy Ψ(s_i) [145].

Theorem 8: For a program P with initial states S0 and an initial formula mapping function Ψ, using the marking procedure above, for any state s of P, let θ be the conjunction of all the markings of s, θ = ⋀ MARK(s); then s satisfies θ implies that all paths of P starting from s_i ∈ S0 and going through s satisfy Ψ(s_i).

Proof 8: Prove by contradiction. Assume s ⊨ θ and there exists a path σ of P such that σ starts from s_i ∈ S0 and goes through s, and σ ⊭ Ψ(s_i). Let s be the jth state in σ, and let π be the path in the product automaton corresponding to σ. N(π_j) must be a conjunct in θ. Therefore, we have s ⊨ N(π_j), and thus σ^j ⊨ N(π_j). From Lemma 2, we have σ ⊨ N(π_0) ⇔ σ^j ⊨ N(π_j). Then we have σ ⊨ N(π_0), which implies σ ⊨ Ψ(s_i): a contradiction.

Now, we introduce a property of the markings that allows us to compose and reason about already verified steady-state programs without repeating the verification.

Theorem 9: For a program P_i with initial states S0 and an initial formula mapping function Ψ, using the marking algorithm above, for any state s of P_i, let θ be the conjunction of all the markings of s: θ = ⋀ MARK(s). Let P_j be a state machine. Let P be the state machine including P_i, P_j, and transitions connecting a state s_i of P_i with some states in P_j. Let P_{i,j} be the state machine constructed from s_i, P_j, and the transitions between s_i and P_j. The following two statements are equivalent:

- All states in P_i still satisfy their markings in P.
- s_i satisfies all its markings in P_{i,j}.

The theorem is illustrated in Figure 7.2.
P], and the transitions between. s. and Pj. The following two statements are equivalent: 0 All states in Pi still satisfy their markings in P. o 3. satisfies all its markings in PM. The theorem is illustrated in Figure 7.2. Proof 9: 0 We prove that if all states in. P1 still satisfy all their markings in P. then sz 145 Figure 7.2: Illustration for Theorem 9 satisfies its marking in Pig. Since 31 is one of those states, and PW- is a subset of P, the conclusion is a direct result of the condition. 0 We prone that if 31- satisfies all its markings in Pig}, then all states of P2- satisfy their markings in P. Let c be an arbitrary condition in the marking of an arbitrary state 3;; of Pi. For any self—fulfilling path a starting from sk, one of the following must be true: — The path is not adaptiue: Since this path does not include any adaptive transition from 31‘: it is also a path of Pi. Therefore, a l: c (Theorem 7). — The path goes through an adaptive transition through si: Let 0; be the last occurrence of si in a. The condition c’ propagated from c in sk to 31 along the path 00,01, - -- ,ot must be in the marking for si and we have at l: c". where o‘ is the tth suffix of 0. Thus, we have a l: c. Therefore, we have 0 t: c in either case. 146 7.2.4 Interface Definition We use an interface structure to record assumptions and guarantees. An interface I of a program is a function from a program state to a set of A-LTL/LTL formulae. 1’ : S(P) __) 2A-L'l‘L/L'I‘L' (715) We can compare two interfaces [1 and 12 with an :> operation, which returns true if and only if for all states s. the conjunction of the formulae corresponding to s in 11 implies the conjunction of the formulae corresponding to s in 12. 11:5 12 E Vs : S(P), 11(s) => 12(8). (7.16) \Ve also define a. special top interface T, which will serve as the initial value in the model checking algorithms presented in the next section. 
We use a simple λ-calculus [8] notation to define functions, where λ(x : X).f(x) denotes an anonymous function that maps each element x in domain X to its value f(x):

⊤ ≡ λ(x : S(P)).true.   (7.17)

The formula states that ⊤ is a function that maps all states to true.

7.3 Modular Verification

This section introduces a modular approach to the model checking of adaptive programs, where the verification result for an adaptive program can be derived from the model checking results of its individual steady-state programs. Our modular model checking uses assume/guarantee reasoning [27, 63, 64, 74, 101], where for a given program module (i.e., a steady-state program), the assumptions are the conditions of the running environment of the module that are assumed to be true, and the guarantees are the assurances provided by the module under the assumptions. We first verify a set of base conditions in each steady-state program locally by using a traditional model checking approach [51, 103, 136, 138]. Second, for each steady-state program Pᵢ, we calculate the guarantees, the necessary conditions for each steady-state program to satisfy its base conditions locally. Third, for each steady-state program Pᵢ, we calculate the assumptions, the sufficient conditions that each state of Pⱼ must satisfy in order for Pᵢ to satisfy transitional properties and/or global invariants. Fourth, we determine whether the guarantees logically imply the assumptions. If so, we conclude that all the assumptions and guarantees can be directly or indirectly inferred from the set of base conditions, thus completing the verification. Otherwise, a counterexample is generated. Next, we describe several preliminary algorithms and data structures used in our approach, and then illustrate the proposed approach on the adaptive routing protocol by verifying its global invariants and then its transitional properties.
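The interface structure of Section 7.2.4 and its ⇒ comparison can be sketched in Python. This is an illustrative sketch, not the AMOEBA data structure: formulae are plain strings, and formula-level implication is approximated by a syntactic check (a formula implies itself), anticipating the sound-but-incomplete sufficient condition discussed later in Section 7.6.1.

```python
# Illustrative sketch of an interface: a map from program states to sets of
# formulae (strings stand in for A-LTL/LTL formulae). States absent from the
# map implicitly carry "true", which models the top interface T.

class Interface:
    def __init__(self, mapping=None):
        # mapping: state -> set of formulae; missing states default to "true"
        self.mapping = dict(mapping or {})

    def formulas(self, state):
        return self.mapping.get(state, frozenset())

    def implies(self, other, entails=None):
        """I1 => I2: for every state s, conj(I1(s)) must imply conj(I2(s)).
        Without a theorem prover we use a syntactic sufficient condition:
        every formula required by `other` must appear verbatim in `self`.
        This is sound but incomplete."""
        entails = entails or (lambda have, need: need in have)
        states = set(self.mapping) | set(other.mapping)
        return all(
            all(entails(self.formulas(s), f) for f in other.formulas(s))
            for s in states
        )

# Toy example in the spirit of the routing protocol: the guarantee at
# received2 is stronger than the assumption, so I1 => I2 holds.
i1 = Interface({"received2": {"inv", "(!ready) U sent"}})
i2 = Interface({"received2": {"inv"}})
assert i1.implies(i2)
assert not i2.implies(i1)
```

The `entails` hook is where a real implementation would plug in a formula-level implication check; the default membership test is the weakest sound choice.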
7.3.1 Global Invariant Verification

The global invariant verification proceeds as follows:

1. Verify base conditions: verify each steady-state program against the global invariants individually.
2. Compute guarantees: mark each state of the steady-state programs with conditions that are satisfied by those states when there is no adaptation.
3. Compute assumptions: mark each state of the steady-state programs with conditions that must be satisfied by the state in order for the adaptive program to satisfy the global invariants.
4. Compare guarantees with assumptions: if the guarantees imply the assumptions, then the process returns success. Otherwise, it returns with a counterexample.

Next, we explain in further detail and illustrate each of these steps.

(1) Verify base conditions.
In this step, we verify each steady-state program against the global invariants individually. Since the global invariants are specified in LTL and we assume each steady-state program Pᵢ to be non-adaptive itself, we use an existing LTL model checking algorithm provided in Spin [61] to verify the base conditions. By model checking, we determine that both Pᵢ and Pⱼ individually satisfy the invariant inv.

(2) Compute guarantees.
Next, we use the marking algorithm introduced in Section 7.2.3 to mark each state of the steady-state programs with obligations, i.e., conditions satisfied by the state when there is no adaptation. These obligations hold in each state provided that the steady-state program comprising the state has passed the base-condition verification in step (1). According to the atomic propositions shown in Figure 7.1, since P₁ satisfies inv, we conclude that state ready1 satisfies inv; therefore, we mark ready1 with the obligation inv. Then we propagate this obligation to its successor state received1 as follows: first, it satisfies inv. Second, since received is true in the state, it must also satisfy ¬ready U sent. Therefore, we mark received1 with the obligations inv and ¬ready U sent. Similarly, the markings of states are repeatedly propagated to their successors. If an obligation is propagated to an already marked state through a different execution path, and the obligation is not in the existing marking of the state, then the state marking is updated with the conjunction of the existing marking and the new obligation; otherwise, the new obligation is ignored. This process eventually converges (Theorem 6), and at that point (the fixpoint), no state markings are updated any further. Figure 7.3 shows the result of applying the marking algorithm to P₁, where the guarantee markings are prefixed with "g.*". (The "a.*" prefix denotes assumptions, described in the next step.) Similarly, we apply the marking algorithm to P₂; the results are shown in the bottom portion of Figure 7.3.

(Figure 7.3: Markings for global invariant inv. The figure shows P1 (safe mode) and P2 (normal mode), with each state annotated with guarantee markings such as g.inv ∧ (¬ready U sent).)

In our algorithm, the obligations are propagated in a way that guarantees that each state satisfies all the obligations in its markings when there is no adaptation. We call these markings guarantee markings. We have proved in Theorem 9 that at the fixpoint, the markings of a steady-state program Pᵢ have the following property, which is a key insight of this approach: when a
state machine Pⱼ is connected to a state s in Pᵢ with transitions from s to states in Pⱼ, all the states of Pᵢ still satisfy their guarantees in the new program if and only if s still satisfies its guarantees in the new program.

(3) Compute assumptions.
Starting from the guarantee markings generated in step (2), for each state in each steady-state program Pᵢ (the source) with outgoing adaptive transitions, we propagate the obligations along the adaptive transitions until reaching states of a target steady-state program Pⱼ. Then we mark the reached states in Pⱼ with the propagated obligations, which we call the assumption markings (denoted by the prefix "a.*" in Figure 7.3). In the adaptive routing protocol, we propagate the markings of received1 to received2 along the adaptive transition a1, and mark received2 with inv and ¬ready U sent. Our process ensures that the assumption marking includes the exact set of conditions that received2 must satisfy in order for all executions starting from ready1, taking adaptive transition a1, and taking no further adaptations, to satisfy the global invariant inv. Similarly, we propagate the marking of routed1 to unsafe2, from received2 to received1, and from unsafe2 to routed1, respectively.

(4) Compare guarantees with assumptions.
Next, we compare the guarantees with the assumptions. For each state, if the conjunction of its guarantee marking logically implies the conjunction of its assumption marking (checked automatically), then the process returns success; otherwise, it returns with a counterexample. For example, the guarantee marking for received2 indeed implies the assumption marking for received2. This result implies that all executions starting from ready1, taking adaptive transition a1, with no adaptation thereafter, satisfy inv. We perform the comparison on every state of the steady-state programs with incoming adaptive transitions.
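The obligation propagation underlying steps (2) and (3) can be sketched as a worklist fixpoint. This is an illustrative Python sketch, not the AMOEBA implementation: obligations are opaque tokens, and the `propagate` stub is the identity, whereas the real marking algorithm rewrites an obligation as it crosses a transition (e.g., discharging part of a U-formula).

```python
# Minimal worklist sketch of marking propagation: obligations flow from each
# marked state to its successors; a state's marking only grows, so the loop
# converges (cf. Theorem 6). `propagate` is a stand-in for the real rewrite
# of an obligation across a transition.
from collections import defaultdict, deque

def mark(successors, initial_marks, propagate=lambda f, state: f):
    marks = defaultdict(set)
    work = deque()
    for state, fs in initial_marks.items():
        marks[state] |= set(fs)
        work.append(state)
    while work:
        s = work.popleft()
        for t in successors.get(s, ()):
            new = {propagate(f, t) for f in marks[s]} - marks[t]
            if new:                  # obligation not yet present: conjoin it
                marks[t] |= new
                work.append(t)       # re-propagate from the updated state
    return dict(marks)

# Toy cyclic steady-state program: ready1 -> received1 -> routed1 -> ready1.
succ = {"ready1": ["received1"], "received1": ["routed1"], "routed1": ["ready1"]}
m = mark(succ, {"ready1": {"inv"}})
assert m["received1"] == {"inv"} and m["routed1"] == {"inv"}
```

The same loop serves step (3) if the edge set is restricted to the adaptive transitions and the initial marks are the source states' guarantee markings.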
Successful comparisons guarantee that any execution starting from ready1 or ready2 and undergoing one step of adaptation satisfies inv. If the guarantee of a state does not imply its assumption, then we generate a counterexample showing the path violating the global invariant. The counterexample feature is illustrated in the discussion of transitional properties below.

7.3.2 Transitional Properties

The steps for transitional property verification are similar to those used for global invariants. The first two steps are the same as those used for global invariant verification, except that the LTL formulae we verify are the local properties LP₁ and LP₂ instead of inv. The last two steps are described in detail below.

(1′) Verify base conditions.
This step is the same as step (1) for global invariants, except that we model check the local properties instead.

(2′) Compute guarantees.
This step is the same as step (2) for global invariants, except that we compute guarantees by marking the initial states with the local properties. The guarantee markings after applying steps (1′) and (2′) are shown in Figure 7.4, prefixed with "g.*".

(3′) Compute assumptions.
In this step, we start from the guarantee markings generated in the previous step. For each state in a steady-state program Pᵢ with outgoing adaptive transitions going towards program Pⱼ, we generate an obligation φ ⇀ LPⱼ from each condition φ in its guarantee marking, where LPⱼ is the local property for Pⱼ.

(Figure 7.4: Markings for transitional properties. The figure annotates P1 (safe mode) with guarantee markings such as g.□¬unsafe, annotates P2 (normal mode) with markings such as g.LP₂ ∧ (¬sent U encrypted), and shows assumption markings such as a.(LP₂ ⇀ LP₁) ∧ ((¬sent U encrypted) ⇀ LP₁).)

Then we
propagate the generated obligations to the states in Pⱼ along the adaptive transitions to form their assumption markings. For example, the guarantee marking for unsafe2 is LP₂ and (¬sent U encrypted). From this marking, we generate the obligations LP₂ ⇀ LP₁ and (¬sent U encrypted) ⇀ LP₁, respectively. These obligations are propagated to the state routed1, and we generate the assumption markings LP₂ ⇀ LP₁ and (¬sent U encrypted) ⇀ LP₁ for routed1. We repeat this process for all states in P₁ and P₂ with outgoing adaptive transitions, and the resulting assumption markings are shown in Figure 7.4, prefixed with "a.*".

(4′) Compare guarantees with assumptions.
We compare the assumption markings with the guarantee markings of all states to see whether the assumptions are implied by the guarantees. If so, then the model checking returns success; otherwise, it returns with a counterexample. We will prove in Section 7.4 that if the process returns success, then all adaptive executions with finite steps of adaptation satisfy their corresponding transitional properties. In the adaptive routing protocol, we found that the guarantee for state routed1 (LP₁) did not imply the condition (¬sent U encrypted) ⇀ LP₁ in its assumption. This assumption condition required the obligation encrypted to be satisfied before the adaptation, while the guarantee did not ensure this obligation. Therefore, the model checking for the transitional property failed. As such, we generated a counterexample showing a path that violated the transitional properties by using a backtracking method. In this example, we returned the trace (ready2, received2, unsafe2, routed1). Clearly, the failure was caused by the adaptive transition a3 (from unsafe2 to routed1). We removed a3 from the adaptive program and re-performed steps (3′) and (4′). The algorithm returned success.

7.4 Details of Model Checking Algorithms

In this section,
we give the formal details of the model checking algorithms that we used in Section 7.3. The first algorithm checks whether a simple adaptive program satisfies its transitional property. The second algorithm extends the first in order to check the transitional properties of an n-plex adaptive program. The third algorithm checks the global invariants of an n-plex adaptive program. In all three algorithms, we use traditional LTL model checking to verify the base conditions. The basic idea behind all three algorithms is to first calculate the assumptions and guarantees of each steady-state program using the marking algorithm and store them in two interface structures I₁ and I₂, respectively. We then compare I₁ and I₂ to determine whether the adaptive program satisfies the corresponding requirements.

7.4.1 Simple Adaptive Programs

We first introduce the modular model checking procedure for a simple adaptive program. Given a source program Pᵢ, a target program Pⱼ, an adaptation set Aᵢⱼ, a source local property φᵢ, a target local property φⱼ, and an adaptation constraint Ωᵢⱼ, the algorithm determines whether the adaptation from Pᵢ to Pⱼ through Aᵢⱼ satisfies φᵢ ⇀(Ωᵢⱼ) φⱼ, that is, whether the program changes from satisfying φᵢ to satisfying φⱼ.

ALGORITHM 1: Transitional properties for simple adaptive programs
input Pᵢ, Pⱼ: EFSM
input Aᵢⱼ: FSM
input φᵢ, φⱼ, Ωᵢⱼ: LTL
output ret: Boolean
local I₁, I₂: Interface
begin
1. Initialize the two interfaces: I₁ := ⊤; I₂ := ⊤.
2. Verify programs Pᵢ and Pⱼ against properties φᵢ and φⱼ locally using traditional LTL model checking methods.
3. Construct marking MARK′ by running the marking algorithm on Pⱼ with initial formula φⱼ.
4. Calculate the state intersection tosᵢ of Aᵢⱼ and Pⱼ, where tos refers to target of outgoing adaptation state:
   tosᵢ := S(Aᵢⱼ) ∩ S(Pⱼ).   (7.18)
5. Construct interface I₂ such that the conditions associated with states in tosᵢ are
the same as their markings in MARK′, and the conditions associated with states not in tosᵢ are true:
   I₂ := λ(x : State). (if x ∈ tosᵢ then MARK′(x) else true endif).   (7.19)
6. Construct marking MARK by running the marking algorithm on Pᵢ with initial formula φᵢ.
7. Calculate the state intersection sosᵢ of Pᵢ and Aᵢⱼ, where sos refers to source of outgoing adaptation state.
8. Construct marking MARK″ by running the marking algorithm on Aᵢⱼ with the initial formula mapping function Ψ defined as follows:
   • For each s ∈ sosᵢ, a formula (x ⇀(Ωᵢⱼ) φⱼ) is a conjunct of Ψ(s) if and only if x ∈ MARK(s).
9. Construct interface I₁ such that
   I₁ := λ(x : State). (if x ∈ tosᵢ then MARK″(x) else true endif).   (7.20)
10. Compare I₂ and I₁ to see if I₂ implies I₁: ret := I₂ ⇒ I₁.
end

7.4.2 N-plex Adaptive Programs

This algorithm extends the algorithm for a simple adaptive program to a general n-plex adaptive program M. Given a set of steady-state programs Pᵢ, a set of adaptation sets Aᵢⱼ, a set of local properties φᵢ, and a set of adaptation constraints Ωᵢⱼ, the algorithm determines:

• For all i ≠ j, whether the adaptation from Pᵢ to Pⱼ through Aᵢⱼ satisfies φᵢ ⇀(Ωᵢⱼ) φⱼ; that is, whether the program changes from satisfying φᵢ to satisfying φⱼ.
• Whether any execution going through Pⱼ₁, Pⱼ₂, ⋯, Pⱼₖ (jᵢ ≠ jᵢ₊₁) satisfies φⱼ₁ ⇀(Ω) φⱼ₂ ⇀(Ω) ⋯ ⇀(Ω) φⱼₖ, i.e., sequentially satisfies φⱼ₁, φⱼ₂, ⋯, φⱼₖ.

This algorithm repeatedly applies Algorithm 1 to each single adaptation from Pᵢ to Pⱼ. As some of the marking and comparison operations overlap, the algorithm is optimized by removing the redundancies.

ALGORITHM 2: Transitional properties for n-plex adaptive programs
input Pᵢ (i = 1 ⋯ n): EFSM
input Aᵢⱼ (i, j = 1 ⋯ n): FSM
input φᵢ, Ωᵢⱼ (i, j = 1 ⋯ n): LTL
output ret: Boolean
local I₁, I₂: Interface
begin
1. Initialize the two interfaces: I₁ := ⊤; I₂ := ⊤.
2. For each program Pᵢ:
   (a) Verify program Pᵢ against
property φᵢ locally with traditional LTL model checking methods.
   (b) Generate markings MARK by running the marking algorithm on Pᵢ with initial formula φᵢ.
   (c) Calculate tisᵢ, the target states of all incoming adaptive transitions for all j ≠ i, i.e., tisᵢ = ∪(j≠i) (Pᵢ ∩ Aⱼᵢ), where tis refers to target of incoming adaptation state.
   (d) Update interface I₂ with I₂′ such that the conditions associated with states in tisᵢ are the conjunction of their values in I₂ and their markings in MARK, and the conditions associated with states not in tisᵢ are those in I₂:
      I₂′ := λ(x : State). (if x ∈ tisᵢ then MARK(x) ∧ I₂(x) else I₂(x) endif).   (7.21)
   (e) For each j ≠ i:
      (i) Calculate the state intersection sosᵢⱼ of Pᵢ and Aᵢⱼ.
      (ii) Construct marking MARK″ by running the marking algorithm on Aᵢⱼ with the initial formula mapping function Ψ defined as follows:
         • For each s ∈ sosᵢⱼ, a formula (x ⇀(Ωᵢⱼ) φⱼ) is a conjunct of Ψ(s) if and only if x ∈ MARK(s).
      (iii) Calculate the state intersection tosᵢⱼ of Pⱼ and Aᵢⱼ.
      (iv) Update interface I₁ with I₁′ such that
         I₁′ := λ(x : State). (if x ∈ tosᵢⱼ then MARK″(x) ∧ I₁(x) else I₁(x) endif).   (7.22)
3. Compare I₂ and I₁ to see if I₂ implies I₁: ret := I₂ ⇒ I₁.
end

7.4.3 Global Invariants

Given a set of steady-state programs Pᵢ, a set of adaptation sets Aᵢⱼ, and a global invariant INV, the global invariant model checking algorithm determines whether all executions of an n-plex adaptive program satisfy the global invariant INV.

ALGORITHM 3: Global invariants
input Pᵢ (i = 1 ⋯ n): EFSM
input Aᵢⱼ (i, j = 1 ⋯ n): FSM
input INV: LTL
output ret: Boolean
local I₁, I₂: Interface
begin
1. Initialize the two interfaces: I₁ := ⊤; I₂ := ⊤.
2. For each program Pᵢ:
   (a) Verify program Pᵢ against the global invariant INV with traditional LTL model checking methods.
   (b) Construct the program composition
      Cᵢ = comp(Pᵢ, union(Aᵢ₁, Aᵢ₂, ⋯, Aᵢₙ)).   (7.23)
   (c) Construct marking MARK by running the marking algorithm on Cᵢ with initial formula INV.
   (d) Calculate tisᵢ, the union of the target states of all incoming adaptive transitions, i.e., tisᵢ = ∪(j≠i) (Pᵢ ∩ Aⱼᵢ).
   (e) Update interface I₂ with I₂′ such that the conditions associated with states in tisᵢ are the conjunction of their values in I₂ and their markings in MARK, and the conditions associated with states not in tisᵢ are the values in I₂:
      I₂′ := λ(x : State). (if x ∈ tisᵢ then MARK(x) ∧ I₂(x) else I₂(x) endif).   (7.24)
   (f) Calculate tosᵢ, the union of the state intersections of Aᵢⱼ and Pⱼ for all j ≠ i, i.e., tosᵢ = ∪(j≠i) (Aᵢⱼ ∩ Pⱼ).
   (g) Update interface I₁ with I₁′ such that
      I₁′ := λ(x : State). (if x ∈ tosᵢ then MARK(x) ∧ I₁(x) else I₁(x) endif).   (7.25)
3. Compare I₂ and I₁ to see if I₂ implies I₁: ret := I₂ ⇒ I₁.
end

7.4.4 Claims

The following theorems capture the claims that the algorithms introduced in this section solve the problems they are intended to solve. Note that the claims in this section apply to the eventuality of A-LTL and LTL properties as well.

Theorem 10: For a simple adaptive program from Pᵢ to Pⱼ, if Algorithm 1 returns true, then:

• All non-adaptive executions within Pᵢ (or Pⱼ) satisfy the local property φᵢ (or φⱼ).
• All adaptive executions starting from Pᵢ and adapting to Pⱼ satisfy φᵢ ⇀(Ωᵢⱼ) φⱼ.

Proof 10: The claim that all non-adaptive executions within Pᵢ (or Pⱼ) satisfy the local property φᵢ (or φⱼ) is guaranteed by the traditional LTL model checking methods. Let σ be an adaptive execution from Pᵢ to Pⱼ. Let sᵢ ∈ S(Pᵢ) be the first state in the adaptation and sₗ ∈ S(Pⱼ) be the last state in the adaptation. Let ψᵢ and ψⱼ be the obligations for sᵢ and sₗ propagated through σ in markings M and M′, respectively. Since the algorithm returned true, we have I₂(sₗ)
⇒ ψⱼ; therefore σˡ ⊨ ψⱼ. There exists a finite path π in the property automaton PROPERTY(φᵢ ⇀(Ωᵢⱼ) φⱼ) that simulates sᵢ, sᵢ₊₁, ⋯, sₗ, with N(πₗ₋₁) ⇒ ψⱼ. From Lemma 2, we have σ ⊨ φᵢ ⇀(Ωᵢⱼ) φⱼ. As illustrated in Figure 7.5, since σˡ ⊨ ψⱼ, there exists sₖ such that sᵢ, sᵢ₊₁, ⋯, sₖ ⊨fin φᵢ and sₖ₊₁, sₖ₊₂, ⋯ ⊨ φⱼ. Therefore, σ ⊨ φᵢ ⇀(Ωᵢⱼ) φⱼ.

(Figure 7.5: Illustration for Proof 10: the prefix s₀, s₁, ⋯, sₖ satisfies φᵢ under the finite-path semantics (⊨fin), the suffix sₗ, sₗ₊₁, ⋯, sₖ, ⋯ satisfies φⱼ, and hence σ ⊨ φᵢ ⇀ φⱼ.)

Theorem 11: For an n-plex adaptive program M, if Algorithm 2 returns true, then:

1. All non-adaptive executions within a single steady-state program Pᵢ satisfy the local property of Pᵢ.
2. Any execution σ starting from Pⱼ₁ and going through Pⱼ₂, ⋯, Pⱼₖ (jᵢ ≠ jᵢ₊₁) satisfies
   φⱼ₁ ⇀(Ω) φⱼ₂ ⇀(Ω) ⋯ ⇀(Ω) φⱼₖ.   (7.26)

Proof 11:
1. The first item is a direct conclusion from Theorem 10.
2. For the second item, we prove by induction.

• Assume that all executions going through k programs satisfy the claim.
• As shown in Figure 7.6, let σ = s₀, s₁, ⋯ be an execution going through k + 1 programs Pⱼ₁, Pⱼ₂, ⋯, Pⱼₖ₊₁. Let sₗ and sₘ be the states exiting Pⱼₖ and entering Pⱼₖ₊₁, respectively. Let σ′ be a finite path in Pⱼₖ starting from an initial state of Pⱼₖ and ending in sₗ. Based on the assumption, we have

   σ′ ∧ sₗ₊₁, sₗ₊₂, ⋯ ⊨ φⱼₖ ⇀(Ωⱼₖ,ⱼₖ₊₁) φⱼₖ₊₁.   (7.27)

Therefore, there exists i (l < i < m) such that σ′ ∧ sₗ₊₁, sₗ₊₂, ⋯, sᵢ ⊨fin φⱼₖ. Any path starting in Pⱼ₁ and ending in Pⱼₖ satisfies

   φⱼ₁ ⇀(Ω) φⱼ₂ ⇀(Ω) ⋯ ⇀(Ω) φⱼₖ.   (7.28)

Thus,
   s₀, s₁, ⋯, sᵢ ⊨fin φⱼ₁ ⇀(Ω) φⱼ₂ ⇀(Ω) ⋯ ⇀(Ω) φⱼₖ.   (7.29)

Therefore,

   σ ⊨ φⱼ₁ ⇀(Ω) φⱼ₂ ⇀(Ω) ⋯ ⇀(Ω) φⱼₖ ⇀(Ω) φⱼₖ₊₁.   (7.30)

(Figure 7.6: Illustration for Proof 11.)

⋯ inv₂ = □(full ⇒ ⋯)

(Figure 7.7: Case study: adaptive Java pipeline. The figure shows two steady-state programs connected by adaptive transitions, with states such as (0, rlocked), (1, rlocked), (2, rlocked) and (1, unlocked), (2, unlocked), and the event legend: r?/r! start/end reading; w?/w! start/end writing; rlock?/rlock! acquiring/releasing the read lock; wlock?/wlock! acquiring/releasing the write lock.)

• Lock integrity: after locking (rlock?/wlock?) the buffer, the reader/writer should eventually unlock it (rlock!/wlock!) before it can be locked by the other type of lock:
   inv₃ = □(rlock? ⇒ ¬wlock? U rlock!),
   inv₄ = □(wlock? ⇒ ¬rlock? U wlock!).
• Read/write integrity: after the program starts reading/writing, it should complete reading/writing before the next reading/writing starts:
   inv₅ = □(r? ⇒ ○(¬r? U r!)),
   inv₆ = □(w? ⇒ ○(¬w? U w!)).

We have successfully verified all of the above properties with the AMOEBA model checker. In order to study the scalability of our approach, we synthesized a series of n-plex adaptive programs by duplicating the above steady-state programs n/2 times (where n ranges from 2 to 200) and connecting these duplicates with adaptive transitions. We measured the execution time and memory consumption and compared the results to the monolithic and the pairwise implementations introduced in Section 7.1. The experimental results were evaluated on a dual-processor 800 MHz Pentium III SMP with 32 KB L1 and 256 KB L2 caches and 256 MB main memory, running Red Hat Linux 7.2 (kernel version: Linux 2.4 smp).
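As a testing-style complement to the AMOEBA verification of inv₃–inv₆ above, the lock-integrity pattern can be checked on concrete finite event traces. This is an illustrative sketch, not the model checking performed by AMOEBA: the U-eventuality is approximated on a finite trace by requiring the lock to be released by the end of the trace, and the event names follow the legend of Figure 7.7.

```python
# Finite-trace check of the lock-integrity pattern
# [](take -> (!other) U release): once `take` occurs, `other` must not
# occur until `release` does, and `release` must eventually occur.

def lock_integrity(trace, take="rlock?", release="rlock!", other="wlock?"):
    held = False
    for ev in trace:
        if ev == take:
            held = True
        elif ev == release:
            held = False
        elif ev == other and held:
            return False          # competing lock taken while ours is held
    return not held               # finite-trace eventuality: lock released

# inv3 holds on a well-behaved read cycle...
assert lock_integrity(["rlock?", "r?", "r!", "rlock!", "wlock?", "wlock!"])
# ...and is violated if the write lock is taken inside the read-lock region.
assert not lock_integrity(["rlock?", "wlock?", "rlock!"])
```

Swapping the keyword arguments (`take="wlock?"`, `release="wlock!"`, `other="rlock?"`) checks inv₄ on the same traces.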
Figure 7.8(a) shows the experimental results for the average execution time of model checking the above six global invariants, where the x-axis represents the number of steady-state programs in the adaptive programs and the y-axis represents the time used for the executions. The assessment of the results indicates that the time consumed by our approach is linear in the number of steady-state programs. It is about n times faster than the pairwise approach and slightly slower than the monolithic approach. Figure 7.8(b) shows the memory usage, where the x-axis represents the number of steady-state programs in the adaptive programs and the y-axis represents the memory occupied by the programs, measured in the number of states stored by the algorithms during model checking. The assessment of the results indicates that our approach uses an almost constant amount of memory, approximately 1/n of the amount required by the monolithic approach and 1/2 of the amount required by the pairwise approach. The above results conform to the theoretical complexity analysis discussed in Section 7.6.2.

(Figure 7.8: Performance comparison between our approach and the alternative approaches: (a) execution time and (b) memory usage versus the number of steady-state programs, for AMOEBA, the monolithic approach, and the pairwise approach.)

7.6 Optimizations, Scalability, and Limitations

Next, we discuss performance issues, including optimizations, scalability, and limitations.

7.6.1 Optimizations

In our algorithms, we need to compare the guarantee interface with the assumption interface to determine whether the guarantees imply the assumptions. The guarantee and assumption of each state are both sets of A-LTL formulae. The implication between guarantees and assumptions is the implication between the conjunctions of the formulae, i.e., ∧ⱼ(Gⱼ) ⇒ ∧ᵢ(Aᵢ) ≡ true, where the Gⱼ are conditions in the guarantee and the Aᵢ are conditions in the assumption:

   ∧ⱼ(Gⱼ) ⇒ ∧ᵢ(Aᵢ) ≡ true  ⇔  ∧ᵢ(∧ⱼ Gⱼ ⇒ Aᵢ ≡ true).   (7.31)

A sufficient condition for this is

   ∧ᵢ ∨ⱼ (Gⱼ ⇒ Aᵢ ≡ true).   (7.32)

Therefore, we turn the comparison between conjunctions of formulae into comparisons between pairs of formulae, where each formula is the next formula of a node in the property automaton. To improve performance, we check the sufficient condition instead of the original formula. This is another reason why our approach is sound but incomplete.

The algorithm introduced in Section 7.3 stores the guarantee and assumption markings of all the states in the adaptive program, which may occupy a large amount of memory. However, in later steps, only the markings of a small portion of the states are used, i.e., the states with incoming and outgoing adaptive transitions. We call these states interface states, and the assumption/guarantee markings of these states assumption/guarantee interfaces, respectively. In the AMOEBA implementation, the required memory space is significantly reduced by storing the markings for only the interface states, instead of for all the states, during the marking computations.

Our approach can also be further optimized to verify incrementally developed software by utilizing existing verification results. Assume that an n-plex adaptive program has been successfully verified with our approach. When a steady-state program Pᵢ of the adaptive program is changed after a successful model checking has been performed, we only need to repeat Algorithm 2 and Algorithm 3 on Pᵢ (and/or the related adaptations) to determine whether the specifications are still satisfied.
Assume that after reapplying the marking algorithm on Pᵢ, we compute the new interfaces to be I₁′ and I₂′. Then:

• if I₂′ ⇒ I₂ and I₁ ⇒ I₁′, then we have I₂′ ⇒ I₁′ (since, implicitly, I₂ ⇒ I₁), and no further model checking is required;
• if I₁ ⇏ I₁′, then the model checking for the outgoing adaptations from Pᵢ needs to be repeated;
• if I₂′ ⇏ I₂, then the model checking for the incoming adaptations to Pᵢ needs to be repeated.

The model checking results for all other parts still apply and therefore are reused. When a new steady-state program Pₙ₊₁ is incrementally introduced to an n-plex adaptive program after the adaptive program has been successfully verified with our algorithms, we will need to model check all the adaptations going into and out of Pₙ₊₁. Incremental model checking significantly reduces the model checking overhead for incrementally developed software.

7.6.2 Complexities and Scalability

Our proposed modular model checking approach increases the scalability of model checking by reducing the time/space complexities of the model checking algorithms. We explain this point by comparing our approach to the alternative approaches described in Section 7.1, namely the pairwise approach [2] and the monolithic approach [27, 93]. Assume an n-plex adaptive program M contains steady-state programs P₁, P₂, ⋯, Pₙ. We denote the size of the steady-state program Pᵢ as |Pᵢ|, and we assume all steady-state programs are of similar size |P|: |P| ≈ |Pᵢ| for all i. We assume that the size of the adaptive states and transitions is significantly smaller than that of the steady-state programs (which we consider a key characteristic of adaptive software). Then we have |M| ≈ n · |P|, where |M| is the size of M.

The complexity of the global invariant model checking is the sum of the complexities of the steps introduced in Section 7.3. The most time/space consuming step is computing the assumption and guarantee interfaces.
We can prove that its time complexity is O(2^|INV| · |M|) and its space complexity is O(2^|INV| · |P|). The time and space complexities of the monolithic approach are both O(2^|INV| · |M|). That is, while the monolithic approach requires the same order of time, it requires n times more memory space than our approach. The time and space complexities of the pairwise approach are O(n · 2^|INV| · |M|) and O(2^|INV| · |P|), respectively. That is, while it requires the same order of space, it requires n times more execution time than our approach.

The time and space complexities of our transitional property model checking algorithm are O(2^|TP| · |M|) and O(2^|TP| · |P|), respectively. To the best of our knowledge, no alternative approach solves the same problem. The approach that handles the closest problem is the ITL model checking algorithm proposed by Thompson and Bowman [17], which can be applied to verify that executions of only a specific adaptation sequence satisfy their corresponding transitional property, instead of all possible adaptation sequences as with our approach. The memory consumption of their approach is linear in the size of the program automaton, which implies that our approach is at least n times more efficient in terms of space. In addition, their approach is non-elementary in the length of the equivalent A-LTL formulae,⁴ while our approach is exponential, which means that our approach is at least exponentially more efficient.

Based on the above analysis, we conclude that our approach reduces the verification space/time complexity by a factor of n, and it is more scalable for verifying incremental changes to the adaptive program at development time.

7.6.3 Limitations

The following assumption must be considered when applying our approach: we regard the essential characteristic of an adaptive program to be that the n
steady-state programs are loosely coupled, i.e., the size of the adaptive states and transitions is significantly smaller than the size of the steady-state programs. This assumption is reasonable, since it is generally desirable software engineering practice to minimize the coupling among program modules [104]. Our study of the adaptive Java pipeline example supports this assumption: the size of the adaptive states and transitions consistently accounts for 1% of the size of the overall adaptive program. Similar phenomena are also observed in other adaptive programs that we have developed for the RAPIDware project. More thorough studies are required to validate this assumption in other adaptation scenarios. This assumption ensures that both the number of markings that we need to store and the number of comparisons between interface markings that we need to perform are minimal.

⁴The level of exponents is linear in the length of the formula, i.e., 2^(2^(⋯)).

Although our approach is sound, it is incomplete, i.e., the algorithm may return a counterexample that actually satisfies the properties under verification. We consider such false positives to be an indication of a lack of clarity in the specification. The false positives can be discharged by specifying the adaptive program more precisely.

7.7 Related Work

In this section, we analyze the differences and relationships between our approach and other modular verification approaches.

Krishnamurthi et al. [74] introduced a modular verification technique for aspect-oriented programs. They model a program as a finite state machine (FSM), where an aspect is a mechanism used to change the structure of the program FSM. An aspect includes a point-cut designator, which defines a set of states in the program; an advice, which is an FSM itself; and an advice type, which determines how the program FSM and the advice FSM should be composed.
By model checking the program and the aspect individually, they verify the Computation Tree Logic (CTL) properties of the composition of the program and the aspect. In other work [44, 86], they introduced modular model checking techniques for cross-cutting features, which basically follow the same idea. Our approach is largely inspired by their approach. The two approaches share the following similarities: (1) Both have goals to decompose the model checking of a large system into the model checking of smaller modules. (2) We both use the assume/guarantee reasoning first formulated by Misra and Chandy [101] in 1981, while neither of us invented the concept. (3) We both define interface states and use conditions to mark these states. Then we reason about the conditions among these states to draw conclusions about the entire system. (4) There are behavioral changes involved in both approaches, although in different ways. (5) Neither of our approaches is complete.

However, despite the similarities, the two approaches also have significant differences, and their approach is not applicable to our verification problems. (1) The most prominent difference is the fundamental difference between the underlying structures for the temporal logics that we handle. CTL is evaluated on states. The classic model checking method for CTL is accomplished by using state marking [38]. Their approach reused the classic CTL state marking algorithm on interface states, which can be considered a natural extension of the basic CTL model checking idea. However, LTL/A-LTL is evaluated on execution paths. Model checking of LTL cannot be solved by marking its states. Rather, tableau-based and automaton-based methods are conventionally used for path-based logic model checking. The challenge overcome by our approach is to design a novel marking algorithm to be applied to states for a path-based logic, which has not been previously published in the literature [145].
Our algorithm also deals with eventuality, which is a challenge for LTL/A-LTL, but not for CTL, model checking. (2) Our approaches also differ in the way the system behavior may change. With their approach, one may consider the behavioral change from the base to the feature as an adaptation. They require the execution to change back to the base, which behaves analogously to a stack. In contrast, our approach allows arbitrary adaptation sequences, such as an adaptation through a sequence of programs A to B to C, etc. (3) Our approaches differ in the definition of modules. In their approach, a module refers to a physically separate piece of code, such as the base program or a feature. However, in our approach, each module refers to different behaviors. It may be exhibited by the same piece of code under different modes, or by different pieces of code. (4) Last but not least, our approach supports the verification of A-LTL, which has not been previously addressed.

Henzinger et al. [56] proposed the Berkeley Lazy Abstraction Software verification Tool (BLAST) to support extreme verification (XV). XV is modeled as a sequence of programs and specifications (P_i, Φ_i), where Φ_i is the specification for the i-th version of the program P_i, and the Φ_i are non-decreasing, i.e., Φ_i ⊆ Φ_{i+1}. In order to reduce the cost of each incremental verification when verifying the i-th program, they generate an abstract reachability tree T_i. When model checking P_{i+1}, they compare P_{i+1} to T_i to determine which part of P_{i+1}, if any, should be re-verified. Our approach differs from their approach in that they verify propositional, instead of temporal logic, properties. Also, their approach is for general programs, while our incremental verification is optimized specifically for adaptive programs. We consider their approach to be complementary to ours because, in practice, each steady-state program is developed incrementally from some common base program.
Their approach can verify the local properties and global invariants of each steady-state program locally for the base condition verification.

Alur et al. [4] introduced an approach to model checking a hierarchical state machine, where higher-level state machines contain lower-level state machines. As the same state machine may occur at different locations in the hierarchy, its model checking may be repeated if we flatten the hierarchical state machine before applying traditional model checking. Their objective is to reduce the verification redundancy when a lower-level state machine is shared by a number of higher-level state machines. Their approach can be applied to optimize our solution in that the steady-state programs may share parts of their behavior (sub-state machines); we can use their approach to reduce the redundancy when verifying the shared behavior.

Many others, including Kupferman and Vardi [80] and Flanagan and Qadeer [46], have also proposed modular model checking approaches for different program models. They focus on verifying concurrent programs, where modules are defined to be concurrent threads (processes). Their approaches essentially address a different perspective of model checking problems (e.g., data sharing among threads), and we consider their approaches to be orthogonal and complementary to our approach.

7.8 Discussion

In this chapter, we introduced a sound modular approach to model checking adaptive programs against their global invariants and transitional properties expressed in LTL/A-LTL. For an n-plex adaptive program, given that the size of the adaptive states and transitions is usually significantly smaller than the size of the overall adaptive program, our approach is at least n times more efficient than alternative approaches. Also, our approach is applicable to the transitional properties which, previously, had not been analyzable with other approaches.
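To make the factor-of-n comparison concrete, the following sketch evaluates the cost expressions from Section 7.6 for a hypothetical 10-plex adaptive program. The class, method names, and numbers are ours and purely illustrative; the expressions themselves are the ones derived above, with |M| ≈ n·|P|.

```java
// Illustrative comparison of the cost expressions from Section 7.6.
// Units are abstract "state visits"; the concrete numbers are hypothetical.
public class ComplexityComparison {
    // Time for both the monolithic and the modular approach: 2^|INV| * |M|,
    // with |M| = n * sizeP.
    static long time(int n, long sizeP, int invLen) {
        return (1L << invLen) * n * sizeP;
    }
    // Monolithic space: 2^|INV| * |M|.
    static long monolithicSpace(int n, long sizeP, int invLen) {
        return (1L << invLen) * n * sizeP;
    }
    // Modular (our) space: 2^|INV| * |P|, one steady-state program at a time.
    static long modularSpace(long sizeP, int invLen) {
        return (1L << invLen) * sizeP;
    }
    // Pairwise time: n * (2^|INV| * |M|).
    static long pairwiseTime(int n, long sizeP, int invLen) {
        return (long) n * time(n, sizeP, invLen);
    }
    public static void main(String[] args) {
        int n = 10; long sizeP = 1000; int inv = 4;
        // Monolithic uses n times more space than the modular approach...
        if (monolithicSpace(n, sizeP, inv) != n * modularSpace(sizeP, inv))
            throw new AssertionError();
        // ...and pairwise uses n times more time.
        if (pairwiseTime(n, sizeP, inv) != n * time(n, sizeP, inv))
            throw new AssertionError();
        System.out.println("n-fold comparison holds for n=" + n);
    }
}
```

For these illustrative figures, both n-fold ratios hold exactly, which is the content of the factor-of-n claim above.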
Our approach is highly scalable in that the time complexity is linear in the number of steady-state programs, and the space complexity is linear in the size of each steady-state program. More performance improvement can be achieved by using our approach to incrementally verify adaptive programs. For validation purposes, we have implemented our approach in a prototype model checker, AMOEBA, using C++, and used the tool to verify a number of adaptive programs.

The AMOEBA model checker still suffers from the state explosion problem of static model checking, and may not be sufficient to be directly applied to an adaptive system if (1) the size of a single steady-state program exceeds the capacity of the model checker, or (2) the steady-state programs are not loosely coupled, and thus the size of the interface states exceeds the capacity of the model checker. In order to address these cases, we have investigated run-time verification of adaptive programs against their global invariants and transitional properties, which will be introduced in Chapter 8.

As described in Section 7.7, our approach may be combined with existing techniques [2, 4, 44, 46, 56, 58, 72, 74, 80, 86] to further improve the performance of model checking adaptive software. In general, because our technique is complementary to many existing model checking techniques, investigating the combinations of our approach with these other techniques may yield interesting model checking capabilities for adaptive and non-adaptive systems.

Chapter 8

Run-Time Model Checking for Adaptive Software

In this chapter, we propose a run-time monitoring and verification technique to guarantee that executing adaptive software satisfies its adaptation requirements specified in A-LTL/LTL. In Chapter 7, we introduced the AMOEBA static model checker [145] to modularly verify A-LTL properties in adaptive software. However, despite the
effort, AMOEBA still suffers from the state explosion problem and remains insufficient to be directly applied to certain types of adaptive software, including those with exceedingly large and tightly coupled steady-state programs.

Run-time verification is an attractive complement to static verification as a means to verify formal properties in adaptive software. Run-time model checking monitors executions of a software system and verifies them against a set of formal specifications, all at run time. Since only one execution path is examined at a time, the state explosion problem is largely avoided. Existing run-time model checking projects include Java PathExplorer (JPaX) [55], the run-time monitoring and checking (MaC) architecture [83], temporal rover tools [35], EAGLE [9], etc. However, these projects do not verify A-LTL properties for adaptive software.

We introduce AMOEBA-RT, a run-time A-LTL model checker for adaptive software. In AMOEBA-RT, the run-time state information of an adaptive program is collected by instrumenting the program. Since the instrumentation is a crosscutting concern [21], we use an aspect-oriented approach [132] to capture this concern in an aspect file. The aspect-oriented approach is non-invasive, meaning that the source code for the adaptive software is not directly altered. At run time, the instrumentation code sends the collected state information to a run-time model checking server, running as a separate process. The run-time model checking server uses an automaton-based approach to determine whether the state information received from the adaptive program satisfies adaptation properties specified in A-LTL. AMOEBA-RT is a prototype system for run-time model checking of adaptive software, where an aspect-oriented technique [132] is used to instrument the software for analysis. We illustrate AMOEBA-RT with the adaptive Java pipeline program introduced in Chapter 5.
The remainder of this chapter is organized as follows. Section 8.1 briefly introduces the AMOEBA-RT model checker. In Section 8.2, we illustrate the run-time verification using the adaptive Java pipeline example. Related work is overviewed in Section 8.3, and Section 8.4 summarizes our approach and discusses limitations and extensions.

8.1 Run-Time Verification

AMOEBA-RT is a technique for the verification of A-LTL properties in adaptive software. AMOEBA-RT is an extension of the AMOEBA model checker introduced in Chapter 7, extended with the support of run-time monitoring and run-time verification. In general, run-time verification monitors the run-time behavior of a software system and checks its conformance to a requirements specification defined as a temporal logic property [9]. Run-time model checking can be considered a combination of testing and model checking, an effort to achieve the benefits of both approaches while avoiding the pitfalls of ad hoc testing and the complexity of model checking [55]. During the execution of the software, an execution trace is generated, which is fed to a model checker to verify its conformance to the formal specification. A common implementation of the model checker is to use a property automaton running in parallel with the adaptive software. Given a formal specification, the property automaton is constructed in such a way that it accepts exactly the set of executions that satisfy the specification. Therefore, the model checker accepts the execution trace if and only if the trace satisfies the specification.

The two key extensions in AMOEBA-RT are the run-time monitoring of the executing adaptive software, achieved by an aspect-oriented instrumentation technique, and the run-time verification of the A-LTL/LTL adaptation specifications, achieved by a run-time model checking server.
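The parallel-simulation idea can be sketched as a small automaton that consumes one monitored condition at a time and accepts exactly those finite traces that end in an accepting state. This is a minimal illustration, not the AMOEBA-RT implementation; the class structure is hypothetical, although the moveNext interface mirrors the simulation routine described later in Section 8.1.2.

```java
import java.util.*;

// Minimal property-automaton simulator (illustrative sketch only).
public class PropertyAutomaton {
    private final Map<Integer, Map<String, Integer>> delta = new HashMap<>();
    private final Set<Integer> accepting = new HashSet<>();
    private int curstate;

    PropertyAutomaton(int initial) { curstate = initial; }
    void addTransition(int from, String cond, int to) {
        delta.computeIfAbsent(from, k -> new HashMap<>()).put(cond, to);
    }
    void markAccepting(int s) { accepting.add(s); }

    // Returns false as soon as the trace can no longer satisfy the property.
    boolean moveNext(String cond) {
        if (cond.equals("EOE"))               // end of execution
            return accepting.contains(curstate);
        Integer next = delta.getOrDefault(curstate, Map.of()).get(cond);
        if (next == null) return false;       // no next state agrees with cond
        curstate = next;
        return true;
    }

    public static void main(String[] args) {
        // Accepts traces of the form high* low, terminated by EOE.
        PropertyAutomaton a = new PropertyAutomaton(0);
        a.addTransition(0, "high", 0);
        a.addTransition(0, "low", 1);
        a.markAccepting(1);
        if (!(a.moveNext("high") && a.moveNext("low") && a.moveNext("EOE")))
            throw new AssertionError("trace should be accepted");
    }
}
```

The checker accepts a trace if and only if the automaton reaches an accepting state when the end-of-execution marker arrives, which is exactly the acceptance condition described above.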
As shown in Figure 8.1, AMOEBA-RT includes a program instrumentation module and a run-time model checking server module.

[Figure 8.1: The dataflow diagram for AMOEBA-RT verification. On the instrumentation side, the AspectJ compiler weaves an instrumentation script into the adaptive Java program, producing instrumented bytecode that is executed on a JVM. On the server side, an A-LTL interpreter parses the A-LTL/LTL specification and generates a property automaton; the run-time model checker simulates the received state sequence on this automaton and reports the verification result (bug report) to the developer.]

8.1.1 Aspect-Oriented Instrumentation

As shown in Figure 8.1, the instrumentation module instruments adaptive software with instrumentation code that collects run-time state information in the software and transmits the information to the run-time model checking server. The instrumentation is achieved by using an aspect-oriented technique [132], where the source code for the program to be verified is not altered directly. The AspectJ compiler compiles the Java source files and an aspect file specifying the instrumentation, and generates instrumented Java bytecode files. For example, the following aspect code inserts an evaluation of the variable bar after every time it is changed, and sends a condition "high" or "low" to the checker based on its comparison with a threshold.

public pointcut BAR(): set(int bar);

after(): BAR(){
    if (bar > threshold)
        checker.send("high");
    else
        checker.send("low");
}

The Java bytecode files are then executed on a general JVM. During run time, the instrumentation code collects run-time state information and sends the information to the run-time model checking server in a sequence through an inter-process/host communication channel, such as remote procedure calls, etc. When the adaptive program terminates,
an "end of execution" message (EOE) is attached to the end of the sequence and sent to the run-time model checking server by calling

checker.send("EOE");

8.1.2 Run-Time Verification

As shown in Figure 8.1, the run-time model checking server checks the conformance of the sequence of state information received from the instrumentation module with the adaptation requirements specified in A-LTL/LTL. An A-LTL interpreter processes the adaptation requirements specifications in A-LTL/LTL, and outputs a property automaton, i.e., a finite state automaton that accepts the exact set of execution paths satisfying the specification. The property automaton construction algorithm is the same as that used in the AMOEBA model checker (Section 7.2.2). However, instead of constructing a product automaton as described in Section 7.2.3, we use the property automaton to simulate the sequence of run-time state information received from the instrumentation module, in parallel with the adaptive software. The simulation process is described as follows: We use a variable curstate to denote the current state of the property automaton, initialized with the initial state of the property automaton. Upon the receipt of a condition cond from the instrumentation module, we invoke a moveNext(cond) method of the property automaton, which does the following:

If cond = EOE and curstate is an accepting state, then return success.

If cond = EOE and curstate is not an accepting state, then return failure.

If cond ≠ EOE and curstate has a next state that agrees with the condition cond, then move curstate to the next state and return success.

If cond ≠ EOE and curstate does not have a next state that agrees with the condition cond, then return failure.

If the property automaton returns failure during or at the end of an execution, then the execution violates the A-LTL property. When a violation of the property automaton is encountered, the
run-time model checking server module returns false, and the state sequence (i.e., a counterexample) is recorded in a bug report. Otherwise, the model checking server returns true.

8.2 Case Study: Adaptive Java Pipeline Program

We use the adaptive Java pipeline program introduced in Section 5.4 as an example to illustrate the run-time model checking technique. The architecture of the adaptive Java pipeline is shown in Figure 8.2. A writer thread contains an adaptive piped output component. The writer writes data to the adaptive piped output through a write interface. The adaptive piped output includes a sync piped output component and an async piped output component. It delegates write requests received from the writer to either one of the piped output components based on its current configuration. The configuration can be switched between the synchronized and asynchronous modes by adaptation requests received from its adapt interface. Similarly, a reader thread contains an adaptive piped input component. The reader reads data from the adaptive piped input through a read interface. The adaptive piped input includes a sync piped input component and an async piped input component. It delegates read requests received from the reader to either one of the piped input components based on its current configuration. The reader configuration can be changed by adaptation requests received from its adapt interface. The data transmission from the output to the input is achieved by accessing shared buffers between the output and the input. A sync buffer and an async buffer are used for the synchronized and asynchronous pipeline components, respectively. Adaptations of the adaptive piped input/output components are coordinated by an adaptation driver component.
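The delegation behavior just described can be sketched in a few lines of Java. This is a simplification for illustration only; the actual component is the one introduced in Section 5.4, and every name below is ours.

```java
// Illustrative sketch of the adaptive piped output's delegation: write
// requests go to either the synchronized or the asynchronous output
// component, depending on the current configuration.
public class AdaptivePipedOutputSketch {
    interface Output { void write(int b); }

    enum Mode { SYNCHRONIZED, ASYNCHRONOUS }
    private Mode mode = Mode.SYNCHRONIZED;
    private final Output syncOut, asyncOut;

    AdaptivePipedOutputSketch(Output syncOut, Output asyncOut) {
        this.syncOut = syncOut;
        this.asyncOut = asyncOut;
    }
    // Delegate based on the current configuration.
    void write(int b) {
        (mode == Mode.SYNCHRONIZED ? syncOut : asyncOut).write(b);
    }
    // Adaptation request received on the adapt interface.
    void adapt(Mode target) { mode = target; }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        AdaptivePipedOutputSketch out = new AdaptivePipedOutputSketch(
                b -> log.append("sync:").append(b).append(' '),
                b -> log.append("async:").append(b).append(' '));
        out.write(1);
        out.adapt(Mode.ASYNCHRONOUS);
        out.write(2);
        if (!log.toString().equals("sync:1 async:2 "))
            throw new AssertionError(log.toString());
    }
}
```

The adaptive piped input delegates read requests in a symmetric way, and the adaptation driver coordinates when each side switches modes.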
[Figure 8.2: The adaptive Java pipeline. The adaptive piped output (StreamOutput, virtual write()) delegates the write method to the sync piped output or the async piped output based on its state; the adaptive piped input (StreamInput, virtual read()) delegates the read method to the sync piped input or the async piped input based on its state. The writer calls write() and adapt() on the adaptive piped output, the reader calls read() and adapt() on the adaptive piped input, and both adaptations are coordinated by the adaptation driver through its adapt interfaces. The sync components share the sync buffer, and the async components share the async buffer.]

We illustrate the interactions among these components during adaptation with a sequence diagram in Figure 8.3. The reader and the writer threads are omitted for brevity. In the figure, bold message lines are for adaptation control messages, and thin message lines are for data read/write messages. For convenience, we separate our discussion of the sequence diagram into seven different phases, marked on the right-hand side of the figure.

• Phase 1 illustrates the system behavior before adaptation, where the adaptive piped input and output components are both running in the synchronized mode. The adaptive input/output components delegate the read/write requests to the synchronized input/output components, which consequently access the sync buffer.

• Phase 2 shows the system behavior when the adaptation driver receives an adaptation request, adapt req. The adaptation driver sends an adapt output request to the adaptive piped output, which then terminates the sync piped output and starts the async piped output.

• In phase 3, after the adaptive piped output has adapted to the asynchronous mode, write requests to the adaptive piped output are delegated to the async piped output, and consequently, the data is written to the async buffer.
• In phase 4, after the adaptive piped output completes its adaptation, the adaptation driver sends an adapt input request to the adaptive piped input, which then attempts to terminate the sync piped input. However, the request cannot be executed immediately, since the sync piped input must clear the contents in the sync buffer before it terminates; otherwise, data in the buffer will be lost. Therefore, future read requests are still delegated to the sync piped input.

• In phase 5, upon receiving a read request read req, if the sync buffer is empty, the sync piped input returns a stop ack message to the adaptive piped input, and terminates.

• In phase 6, the adaptive piped input starts the async piped input and returns adapt done to the adaptation driver. The adaptation driver returns adapt done to its run-time environment, thus completing the adaptation process.

• In phase 7, the read req is re-sent to the async piped input, which accesses data in the async buffer.

8.2.1 Adaptation Requirements

We specify the adaptation requirements for the adaptive Java pipeline program in A-LTL as follows. Before adaptation, the system (i.e., the source program) is required to input data from the synchronized pipeline in response to the outputs. That is, for each output of data x in the synchronized mode, the system must eventually input data x. In LTL:

□(SyncOutput(x) → ◇SyncInput(x)). (8.1)

The program behavior after adaptation can be specified in a similar manner. The system (i.e., the target program) is required to input data from the asynchronous pipeline in response to the outputs. In LTL:

□(AsyncOutput(x) → ◇AsyncInput(x)
) (8.2)

[Figure 8.3: The sequence diagram for the adaptive Java pipeline, showing the run-time environment, the adaptation driver, the adaptive piped output/input, the sync and async piped output/input components, and the sync and async buffers across the seven phases of the adaptation.]

For both the synchronized and asynchronous pipelines, when an output event occurs, an input obligation is generated. The obligation will be fulfilled and discharged by a subsequent input event. Formulae (8.1) and (8.2) state that an execution must fulfill all input obligations before it terminates.

In the adaptation from the source program to the target program, we allow the write operation of the asynchronous pipeline to overlap with the read operation of the synchronized pipeline. Therefore, we apply the overlap adaptation semantics introduced in Section 3.3 to the specification of the adaptation. During the overlapped period, the synchronized pipeline should not output data, and the asynchronous pipeline should not input data. The requirement for the adaptation from the source to the target can be specified using the overlap adaptation semantics as follows, where ⇀ denotes the A-LTL adapt operator:

(((□(SyncOutput → ◇SyncInput) ∧ (◇AREQ → □¬SyncOutput)) ⇀ true)
∧ (◇AREQ → (□(AsyncOutput → ◇AsyncInput) ∧ (□¬AsyncInput ⇀ true)))). (8.3)

This formula states that the system should adapt from the source program (in the synchronized mode) to the target program (in the asynchronous mode) in response to the adaptation request AREQ. The source and target programs overlap. During the overlapped period, the source must not output data, and the target must not input data. The output obligation generated in the synchronized mode must be fulfilled before the adaptation completes. In our adaptive Java pipeline design, this obligation fulfillment requirement is achieved by the buffer empty check in the sync piped input component.
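The obligation-fulfillment reading of Formulae (8.1) and (8.2) can be mirrored by a simple run-time check over a finite trace: each output of x raises an obligation that a later input of x must discharge, and no obligation may remain open at the end of the execution. The sketch below is illustrative only; the class and method names are ours, not part of AMOEBA-RT.

```java
import java.util.*;

// Illustrative obligation tracker mirroring [](Output(x) -> <>Input(x))
// over a finite trace: every output obligation must be discharged by a
// matching input before the execution ends.
public class ObligationMonitor {
    private final Map<String, Integer> open = new HashMap<>();

    // An output event raises an obligation for data item x.
    void output(String x) { open.merge(x, 1, Integer::sum); }

    // A subsequent input event discharges one obligation for x.
    void input(String x) {
        open.computeIfPresent(x, (k, v) -> v == 1 ? null : v - 1);
    }

    // At end of execution, all obligations must have been fulfilled.
    boolean endOfExecution() { return open.isEmpty(); }

    public static void main(String[] args) {
        ObligationMonitor m = new ObligationMonitor();
        m.output("d1");
        m.input("d1");                              // obligation fulfilled
        if (!m.endOfExecution()) throw new AssertionError();
        m.output("d2");                             // left open: a violation
        if (m.endOfExecution()) throw new AssertionError();
    }
}
```

In the adaptive Java pipeline, the buffer empty check in the sync piped input plays exactly this role: it prevents the synchronized pipeline's open obligations from surviving the adaptation.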
8.2.2 Instrumentation and Model Checking

In order to monitor the run-time execution conditions of the adaptive Java pipeline program, we use the aspect-oriented programming tool AspectJ [132] to insert instrumentation code into the program. Currently, the AspectJ script for instrumentation is generated manually. In this example, the "instrumentation concern" is encapsulated in an Instrumentation aspect, saved in a file named Instrumentation.aj.

As shown in Figure 8.4, we define a pointcut Main to identify the main() method of the adaptive Java pipeline program (lines 1–2). In a before advice for the Main pointcut, i.e., at the very beginning of the entire program, we insert code to initialize the property automaton in the run-time model checking server by sending an A-LTL formula to the server (lines 6–16). The AmoebaChecker class implements a stub that is responsible for the communication with the model checking server. Its constructor method takes three parameters: the first two parameters specify the IP address and the port number for the model checking server, respectively; the third parameter specifies the A-LTL property to be verified. In an after advice of the Main pointcut, i.e., at the very end of each execution, we insert code to send an "end-of-execution" message to the run-time model checking server to terminate the model checking (lines 18–20).

Second, we use pointcuts to identify the locations of the adaptive Java pipeline program at which the sync and the async buffers are accessed. Figure 8.5 (a) illustrates the pointcut definition for the SyncOutput message. Line 2 defines that the point is within the receive() method of the synchronized input class. Line 3 defines that the point is at the location where the buffer is accessed.
When the buffers are accessed for read/write, an input/output message will be generated and sent to the run-time model checking server through network communication.

01 public pointcut Main():
02     execution(* mypackage.main(..));
03
04
05
06 before(): Main(){
07     AmoebaChecker.checker =
08         new AmoebaChecker("192.168.1.101", 2211,
09             "((([](SyncOutput-><>SyncInput) /\\ (<>AREQ
10             ->[]!SyncOutput))
11             ->true)
12             /\\ (<>AREQ
13             -> ([](AsyncOutput-><>AsyncInput) /\\ ([]!AsyncInput
14             ->true))))"
15         );
16 }
17
18 after(): Main(){
19     AmoebaChecker.checker.terminate();
20 }

Figure 8.4: Instrumentation aspect definition for the main() method

Figure 8.5 (b) shows the advice definition for SyncOutput. Line 1 defines that it is a before advice for the sync_output pointcut. Before each access to the sync buffer, the advice inserts an invocation of the nextState() method of the checker (line 2), which sends the SyncOutput message to the run-time model checking server.

We executed the instrumented adaptive Java program and verified the program against the overlap adaptation requirement in Formula (8.3) using AMOEBA-RT. The model checker reported success in all the executions, indicating that the program satisfied the requirement in these executions. In order to demonstrate that the model checker is actually effective in catching errors, in a second experiment, we partially removed the buffer empty condition check from the sync piped input. This time, AMOEBA-RT caught violations of the property in Formula (8.3) in some of the random executions. As a response to the violations, AMOEBA-RT recorded the execution paths in a bug report for future debugging purposes. The bug report in the above experiment showed that, during those executions, the obligation fulfillment requirement was violated.
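For reference, the checker stub invoked by the aspect could take roughly the following shape. This is only a sketch of the interface implied by Figure 8.4 (the constructor arguments, nextState(), and terminate()); in AMOEBA-RT the conditions are transmitted over a network channel to the model checking server, whereas here they are buffered locally for illustration.

```java
import java.util.*;

// Sketch of the client-side stub implied by Figure 8.4. The network
// transmission is replaced by a local buffer for illustration.
public class AmoebaChecker {
    public static AmoebaChecker checker;           // set by the Main advice
    private final String host, property;
    private final int port;
    final List<String> sent = new ArrayList<>();   // stands in for the channel

    public AmoebaChecker(String host, int port, String property) {
        this.host = host;
        this.port = port;
        this.property = property;
    }
    public void nextState(String cond) { sent.add(cond); }  // one condition
    public void terminate() { sent.add("EOE"); }            // end of execution

    public static void main(String[] args) {
        checker = new AmoebaChecker("192.168.1.101", 2211, "[](p -> <>q)");
        checker.nextState("SyncOutput");
        checker.terminate();
        if (!checker.sent.equals(List.of("SyncOutput", "EOE")))
            throw new AssertionError();
    }
}
```

On the server side, the received sequence would then be replayed on the property automaton as described in Section 8.1.2.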
1 public pointcut sync_output():
2     withincode(* sync.PipedInputStream.receive(..)) &&
3     get(byte[] buffer);

(a) Pointcut definition for SyncOutput

1 before(): sync_output(){
2     checker.nextState("SyncOutput");
3 }

(b) Advice definition for SyncOutput

Figure 8.5: Instrumentation aspect definition for SyncOutput

8.3 Related Work

A limited number of run-time model checking projects have been investigated, including Java PathExplorer (JPaX) [55], the run-time monitoring and checking (MaC) architecture [83], temporal rover tools [35], EAGLE [9], etc. For example, JPaX [55] is a run-time model checker for Java programs. It supports temporal logic-based verification, including future time and past time linear temporal logics. The conformance check between an execution and a logic expression is performed by a Maude rewrite engine. JPaX run-time model checking comprises three main modules: an instrumentation module, an observer module, and an interconnection module. The instrumentation module performs instrumentation of Java bytecode using an instrumentation script, woven in by a static instrumentation tool, Jtrek. At run time, the instrumented Java programs emit events (pertaining to their run-time conditions) to the interconnection module. The interconnection module ties the instrumentation module and the observer module together by further transmitting the events to the observer module. The observer module, which may be running on a different machine, verifies the received events against a temporal logic specification. Our AMOEBA-RT is inspired by JPaX. In the AMOEBA-RT architecture, the instrumentation module and the run-time model checking server module correspond to their instrumentation module and observer module, respectively. In our current implementation, we do not have the interconnection module, since we did not find it necessary for our proof-of-concept purpose so far. In the future,
the intercormection module may be helpful in AMOEBA-RT in order to further decouple the instrumentation module from the model checking server module. The major difference between AMOEBA-RT and existing run—time model checking approaches is its support for model checking adap— tation requirements specified in A—LTL. Run—time model checking has also been applied to monitor executions of adaptive software. As described in Section 4.3, Feather et (Ll [41] proposed using FLEA [31] to specify and monitor run—time 1')roperties in adaptive software. Again, as described in Section 4.3. the properties they specified and verified were all non-adaptive properties. since the FLEA temporal logic is not specifically designed for specifying adaptz-ition behavior. Our approach extends theirs by adding the ca.1_)ability of specifying and monitoring adaptation behavior specified in A—LTL. 8.4 Discussion In the software industry. testing is still the prevailing approach for software qual- ity assurance. However, in large scale. complex software. the execution paths that can be covered by pre—release testing constitute only a. small portion of all possi- ble paths. Static model checking promises exhaustive coverage of all execution paths. Unfortunately, it suffers from the notorious state explosion problem. Run-time model checking provides a means to continuously monitor and verify the post-release behav- ior of a software system. W hen an error is detected. the software may either file a. bug report, or try to repair the error automatically. 192 In this (‘TlIE-IJNBI‘, we introduced AMOEBA—RT, a run—time verification scheme for adaptive software. AMOEBA-RT includes two parts, an instrumentation module and a. run-time model checking server. We used an aspect-oriented technique to instrument adaptive Java programs in order to separate the run-time model checking concern from the program. 
The AMOEBA-RT model checking server interprets A-LTL specifications and verifies execution sequences against them at run time. We have applied AMOEBA-RT to the verification of the adaptive Java pipeline program and detected errors in our experiments. AMOEBA-RT currently supports filing bug reports. More sophisticated responses to run-time errors, such as self-healing and self-repairing, require feedback from the run-time model checking server to the adaptive program, which we plan to investigate in our future work.

There are several limitations that must be considered when using run-time model checking techniques. We next overview these limitations.

Performance overhead. Run-time verification occurs at run time. The overhead incurred by the instrumentation code includes two parts: (1) the overhead to initialize the model checking server at the beginning of each execution, and (2) the overhead to update state condition information on the model checking server during the execution. The first part includes sending an A-LTL formula to the model checking server and constructing the property automaton on the model checking server. The complexity of the automaton construction is exponential in the length of the formula. However, this overhead is incurred only once for every execution. Moreover, the automaton construction occurs on the server, which could be running on a different computer. Therefore, the impact of this part of the overhead on the run-time adaptive system performance is largely negligible. The overhead of the second part includes the evaluation of monitored state conditions and the transmission of these conditions to the run-time model checking server. Its effect on the run-time adaptive program depends on the density of the instrumentation points, the number of conditions to be monitored at each point, and the encoding of the conditions to be transmitted. In our experiment with the adaptive Java pipeline example,
the performance overhead incurred by run-time verification is largely imperceptible (< 1%).

Non-exhaustive verification. Unlike static model checking, run-time model checking only checks one execution path at a time. Successes of run-time model checking increase our confidence in the correctness of the software, but they can never prove the correctness of the program in general.

After-the-fact repair. Since run-time model checking occurs at run time, when errors are detected, damage (e.g., leakage of sensitive information) may have already been done to the run-time system. Although the run-time model checking server could potentially invoke damage control routines to recover or repair the system, some types of damage caused by unforeseeable errors, such as security leakage, are irreversible.

Run-time verification is a useful complement to testing and static model checking rather than a replacement. Adaptive software must be thoroughly tested before release in order to reduce the number of errors at run time. If the software is to be executed in safety critical domains, then static model checking, as described in Chapter 7, must be performed to ensure critical properties in the system.

Chapter 9

Safe Dynamic Adaptation Protocol

This chapter describes a safe adaptation protocol for providing assurance to dynamically adaptive programs during program execution [150, 152]. This protocol ensures that adaptation steps are performed in a way that does not violate dependency relationships among components in distributed adaptive software. We assume that the requirements and design described in previous chapters have been refined in the implementation and have been described with communications and dependency relationships among components. The properties discussed in this chapter are much more fine grained and are dependent on run-time information. Our approach provides centralized management
of adaptations, thereby enabling optimizations when more than one set of adaptive actions can be used to reach a target configuration. A rollback mechanism is provided if a failure is encountered during the adaptation process.

This chapter is organized as follows. In Section 9.1, we describe the theoretical foundation of our approach. Section 9.2 describes our proposed approach to safe adaptation in detail, and Section 9.3 describes its application in a video multicasting system. Related work is overviewed in Section 9.4, and Section 9.5 discusses limitations and extensions of the proposed approach.

9.1 Theoretical Foundations for Safe Adaptation

The adaptations that we consider in this chapter are component insertion, removal, replacement, and combinations thereof. A distributed component-based software system can be modeled as a set of communicating components running on one or more processes [77]. Components are considered to be communicating as long as there is some type of interaction, such as message exchange, function calls, network communication, and so on. A communication channel is the facility for communication, such as a TCP connection, an interface, etc. Communication channels are directed. A two-way communication between two components is represented with two channels with traffic traversing in opposite directions. A component can communicate with another as long as there exists a path of one or more channels connecting these two components.

In general, communication among components can be decomposed into multiple non-overlapping communication segments of various granularity. A coarse-grained segment can be divided into multiple finer-grained segments. For example, the communication between a video server and a video client can be divided into multiple transmit/receive sessions; each session can be divided into multiple frames, where each frame can be divided into multiple packets.
Communication can be either local or global. Local communication involves components of only one process, such as a local procedure call. Global communication involves components from more than one process. A UDP datagram transmission over a network is an example of global communication, involving both a sender and a receiver process.

Dynamic adaptations may interrupt ongoing communication. Communication whose interruption may cause errors in the system is termed a critical communication segment. We use a set of finite sequences of indivisible actions (named atomic actions) to model the set of critical communication segments CCS. The communication among components in a system is modeled as a (finite or infinite) sequence of (critical communication identifier, atomic action) pairs. Given a communication sequence, S, and a critical communication identifier, CID, we can extract from S the sequence of atomic actions with the same CID, denoted as S_CID. Given a set of critical communication segments CCS, we say that an adaptive system does not interrupt the critical communication segments if, for the communication sequence S of the adaptive system and for all critical communication identifiers CID, we have S_CID ∈ CCS. Unsafe adaptation occurs when adaptive actions involving communicating components disrupt normal functional communication between the adapted component and the rest of the system, thus introducing system inconsistencies.

The formal requirements specifications introduced in Chapters 3 and 4 can be used to identify critical communication segments. For example, the liveness property □(send ⇒ ◇ack) in the requirements specification implies that a "send" and "ack" action pair forms a critical communication segment, and therefore, adaptation may occur in the states after the acknowledgements for all the send operations have been received.

In a given system, multiple components may collaborate by communicating with each other.
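The non-interruption condition defined above can be sketched operationally: group a communication trace by CID and require each projected action sequence to be a complete critical segment. A minimal sketch, not taken from the dissertation's implementation; the trace format, segment set, and function names are illustrative:

```python
from collections import defaultdict

def no_interruption(trace, ccs):
    """trace: sequence of (CID, atomic_action) pairs.
    ccs: set of complete critical segments, each a tuple of atomic actions.
    Returns True iff every per-CID projection S_CID is a member of CCS,
    i.e., no critical segment would be left incomplete by an adaptation."""
    projections = defaultdict(list)
    for cid, action in trace:
        projections[cid].append(action)
    return all(tuple(seq) in ccs for seq in projections.values())

# Critical segments suggested by the liveness property: a "send" must be
# matched by its "ack" before adaptation may occur.
CCS = {("send", "ack")}

complete = [("m1", "send"), ("m2", "send"), ("m1", "ack"), ("m2", "ack")]
interrupted = [("m1", "send"), ("m2", "send"), ("m1", "ack")]  # m2 unacked

print(no_interruption(complete, CCS))     # True: safe to adapt here
print(no_interruption(interrupted, CCS))  # False: adapting would interrupt m2
```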
We use dependency relationships to model these communication patterns. The correct functionality of a component, c, may require the correct functionality of other component(s). The absence of other components may disrupt the normal functionality of c. Based on the discussion above, we define a safe dynamic adaptation process as follows:

Definition: A dynamic adaptation process is safe if and only if:

- It does not violate dependency relationships among components.
- It does not interrupt critical communication segments.

In the following, we present our safe dynamic adaptation process.

9.1.1 Dependency Relationships

In a given system, if the correct functionality of a component A depends on a condition Cond being true, then we say A depends on the condition, denoted as A => Cond, where "=>" denotes a dependency relationship. The condition takes the form of a logic expression over the components. For example, A => (B1 ⊕ B2) ∧ C means that the correct functionality of component A requires the correct functionality of either component B1 or B2, and C, where the operator "⊕" represents the logical "xor" operation and "∧" represents the logical "and" operation.

We use a special type of dependency relationship, the structural invariant, to specify correct conditions of the system structure. These structural invariants specify the software structure in the implementation, which can be considered a refinement of the global invariants described in Part I. For example, the structural invariant A ∧ B indicates that the correctness of the system (i.e., satisfaction of the global invariants described in Part I) depends on the correct functionality of both component A and component B. In a safe adaptation process, the dependency condition of a component should always be satisfied when the component is in its fully operational state.
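A dependency condition such as A => (B1 ⊕ B2) ∧ C can be evaluated directly against a candidate configuration by treating a component name as true when it is present. A minimal sketch; encoding the conditions as Python predicates is an illustrative choice, not the dissertation's representation:

```python
# A dependency relationship maps a component to a predicate over the set
# of components present in the configuration. xor models the "⊕" operator.
def xor(a, b):
    return a != b

dependencies = {
    # A => (B1 xor B2) and C
    "A": lambda up: xor("B1" in up, "B2" in up) and "C" in up,
}

def satisfied(config, deps):
    """A configuration satisfies the dependency relationships iff every
    component in it that has a condition finds that condition true over the
    configuration; components with no entry are trivially satisfied."""
    return all(deps[c](config) for c in config if c in deps)

print(satisfied({"A", "B1", "C"}, dependencies))        # True
print(satisfied({"A", "B1", "B2", "C"}, dependencies))  # False: xor violated
print(satisfied({"A", "B1"}, dependencies))             # False: C missing
```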
Since dependency relationships are based on communication, if we block the communication channels of a component, then we may temporarily relax the dependency relationships and perform necessary adaptive actions. Before the communication in these channels is resumed, the dependency relationships must be reinforced.

Safe Configurations and Safe Adaptation Paths. A system configuration comprises a set of components that work together to provide services. If a dependency relationship predicate dr evaluates to true when we assign true to all components in a configuration and false to all components not in the configuration, then we say the configuration satisfies the dependency relationship. If a configuration satisfies all the dependency relationships, then this configuration is considered to be safe; otherwise, it is unsafe. A system can operate correctly only when it is in one of its safe configurations. All safe configurations can be obtained from the dependency relationships and available components.

A system moves from one configuration to another by performing adaptive actions. An adaptive action is defined as a function from one configuration to another: adapt(config1) = config2, where config2 is the resulting system configuration when the adaptive action, adapt, is applied to config1.

A distributed adaptive action comprises multiple local adaptive actions of individual processes. Each local adaptive action is divided into three parts: the pre-action, the in-action, and the post-action. The pre-action is the preparation operation, such as initializing new components. The in-action alters the structure of the program. The post-action specifies tasks to be performed after the in-action, such as the destruction of old components. The pre-actions and post-actions do not interfere with the functional behavior of the adapting process. We assume that an adaptive action is atomic and isolated.
Atomicity of an adaptive action implies that the adaptive action should either not start or run to completion. Isolation of an adaptive action implies that the adaptive action is performed without interleaving with other operations.

An adaptation step is an ordered configuration pair: step = (config1, config2), where step represents a system configuration transition from config1 to config2. A safe adaptation process comprises a set of safe configurations connected by a set of adaptation steps. These configurations and adaptation steps, together, form a safe adaptation path that starts from the source configuration of the first step and ends at the target configuration of the last step.

We can construct a safe adaptation graph (SAG) for an adaptive system, where vertices represent safe configurations and arcs represent adaptation steps connecting safe configurations. A SAG can be constructed from a list of available adaptive actions. An adaptation step, (config1, config2), is in the SAG if and only if:

- Both config1 and config2 are safe configurations.
- There exists an adaptive action adapt such that adapt(config1) = config2.

9.1.2 Critical Communication Segments

Performing adaptive actions may disrupt communication among components. A safe adaptation process should maintain the integrity of critical communication segments. The system state in which an adaptive action does not interrupt any critical communication segments is called a global safe state for the action.

If a communication is local, then the integrity of its segments can be maintained by a local process. A local process is said to be in a local safe state for an adaptive action if the action does not interrupt local critical communication segments. The integrity of global critical communication segments is guaranteed by a global safe condition, meaning that the adaptive action does not interrupt global critical communication segments. For example,
the global safe condition for a UDP datagram transmission is that the receiver has received all the datagram packets that the sender has sent, where the transmission of each datagram packet is a critical communication segment. A system is in its global safe state if and only if:

- All the processes are in their local safe states.
- The global safe condition is satisfied.

9.1.3 Enabling Safe Adaptation

Next, we introduce the theoretical foundation for our safe adaptation process and prove that the process is safe.

Theorem 13: The following two statements are equivalent. (a) An adaptation process is safe. (b) The adaptation process executes along a safe adaptation path, where each adaptive action along the path is performed in its global safe state.

Proof 13 (sketch):

1. (b) → (a). If an adaptation process is performed along a safe adaptation path and each adaptive action is performed in a global safe state, then during the adaptation process, the system is either at a safe configuration or in a transition from one safe configuration to another. When the system is at a safe configuration, it does not violate dependency relationships (definition of safe configuration). Because no adaptive action is performed, critical communication segments will not be interrupted due to adaptation. Adaptive actions are performed in global safe states, which implies that no critical communication segments will be interrupted. Since adaptations start and end in safe configurations, dependency relationships will not be violated before and after the adaptive action. Adaptive actions are atomic, and thus we can assume there is no intermediate state during an adaptive action. Therefore, dependency relationships are not violated during adaptive actions.

2. Use proof by contradiction to establish (a) → (b). If (b) does not hold,
then there are two possibilities: (1) the process is not performed along a safe adaptation path, or (2) there is an adaptive action taking place in a state that is not globally safe. In the first situation, there must be a configuration on the adaptation path violating dependency relationships, and therefore, the adaptation process is unsafe. In the second situation, the adaptive action might interrupt a critical communication segment, and thus, the adaptation process is unsafe. Therefore, if (b) does not hold, (a) cannot hold.

9.2 Safe Adaptation Process

The safe adaptation is executed by an adaptation manager, typically a separate process that is responsible for managing adaptations for the entire system. The adaptation manager communicates with adaptation agents attached to processes involved in the adaptation. An agent receives adaptive commands from the adaptation manager, performs adaptive actions, and reports the status of the local process to the adaptation manager. Communication channels can be implemented to best match the communication patterns of the particular system. For example, both Arora [7] and Kulkarni [77] have used spanning trees, which are well suited to components organized hierarchically. In contrast, in a group communication system, multicast may be a better mechanism for the coordination between the adaptation manager and the agent processes.

Our approach comprises three phases: the analysis phase, the detection and setup phase, and the realization phase. The analysis phase occurs during development time. In this phase, the programmers prepare necessary information, such as determining dependency invariants, specifying critical communication segments, etc. The detection and setup phase occurs at run time. When the system detects a condition warranting an adaptation, the adaptation manager generates a safe adaptation path. In the realization phase,
the adaptation manager and the agents coordinate at run time to achieve the adaptation along the safe adaptation path established during the previous phase.

9.2.1 Analysis Phase

At development time, the adaptive software developers should prepare a data structure P = (S, I, T, R, A), where

- S is the set of all configurations.
- I (I: S → Bool) is the conjunction of the set of dependency relationship predicates evaluated upon configurations. I(s) is true if and only if the system satisfies all the dependency relationships in configuration s.
- T is a set of adaptive actions.
- R (R: T → PROGRAM) maps each adaptive action to its corresponding implementation code in the program, where PROGRAM represents the implementation. The reconfiguration is achieved by the execution of the implementation code.
- A (A: T → VALUE) is a cost function that maps each adaptive action in T to a cost value in VALUE. We associate a fixed cost with each adaptive action. Factors affecting cost values include system blocking time, adaptation duration, delay of packet delivery, resource usage, etc.

9.2.2 Detection and Setup Phase

Once the system detects a condition warranting adaptation, the adaptation manager obtains the target configuration and prepares for the adaptation. This phase contains three steps.

1. Construct Safe Configuration Set. Based on the source/target configurations of an adaptation request and the dependency relationships, this step produces a set of safe configurations.

2. Construct Safe Adaptation Graph. Next, we construct a safe adaptation graph (SAG) that depicts safe configurations as nodes and adaptation steps as edges.

3. Find Minimum Safe Adaptation Path (MAP). Finally, we apply Dijkstra's shortest path algorithm to the SAG to find a feasible solution with minimum weight, where the weight of a path is the sum of the costs of all the edges along the path.
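The last two steps above can be sketched directly: keep only those adaptive actions whose source and target configurations are both safe, then run Dijkstra's algorithm weighted by action cost. A minimal sketch, not the dissertation's implementation; representing configurations as frozensets and actions as triples is an illustrative choice:

```python
import heapq

def build_sag(safe_configs, actions):
    """actions: list of (source_config, target_config, cost) triples.
    An edge enters the SAG only if both endpoints are safe configurations."""
    sag = {c: [] for c in safe_configs}
    for src, dst, cost in actions:
        if src in safe_configs and dst in safe_configs:
            sag[src].append((dst, cost))
    return sag

def min_adaptation_path(sag, source, target):
    """Dijkstra's shortest path over the SAG; returns (total_cost, path)."""
    counter = 0  # tie-breaker so heapq never compares paths directly
    pq = [(0, counter, [source])]
    best = {}
    while pq:
        cost, _, path = heapq.heappop(pq)
        node = path[-1]
        if node == target:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nxt, w in sag[node]:
            counter += 1
            heapq.heappush(pq, (cost + w, counter, path + [nxt]))
    return None

# Tiny example: replace component X with Y either directly (one expensive,
# blocking action) or via an intermediate configuration running both.
A, B, C = frozenset({"X"}), frozenset({"X", "Y"}), frozenset({"Y"})
sag = build_sag({A, B, C}, [(A, C, 100), (A, B, 10), (B, C, 10)])
cost, path = min_adaptation_path(sag, A, C)
print(cost)  # 20: the two-step path is cheaper than the direct action
```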
9.2.3 Realization Phase

This phase requires the coordination of the adaptation manager and the agents at run time to carry out the actual adaptation according to the safe adaptation path. The adaptation manager should ensure that each adaptive action is performed in its global safe state. We use state diagrams to describe the behavior of each agent and of the adaptation manager, respectively.

The state diagram of an agent at each local process is shown in Figure 9.1, where the Courier font denotes message names. Before an adaptive action is performed, each agent is in a running state. In this state, every component in the process is running in its full operation. When the agent receives a reset message, it moves to a resetting state. The agent performs the local pre-action and initiates a reset of the process. In the resetting state, the process is only partially operating: some functionalities related to the adapted component are disabled. When the process reaches its local safe state, the agent performs a hold action to hold the process in the safe state, so that the local in-action can be performed safely. Then the agent sends the adaptation manager a reset done message, after which the process is in a safe state. In this state, the agent will perform its local in-action. When the in-action has finished, the agent sends the adaptation manager an adapt done message and reaches an adapted state. If the process is the only one involved in the adaptive action, then it can directly proceed to a resuming state from the adapted state without blocking. Otherwise, the process needs to remain blocked in the adapted state until it receives a resume message. When the agent receives a resume message, it knows that all processes have completed their adaptive in-actions, and thus proceeds to a resuming state, where the agent attempts to resume the full operation of the process. Finally,
when the full operation of the process is resumed, the agent sends the manager a resume done message and performs the local post-action of the adaptive action, returning to the running state.

Figure 9.1: State diagram of a local process during adaptation

The state diagram of the adaptation manager is shown in Figure 9.2. The adaptation manager starts from a running state, where the system is fully operational. When an adaptation request is received by the adaptation manager and a MAP is created after the planning phase, it sends reset messages to all the agents. Sending the first reset message brings the adaptation manager to an adapting state. In this state, the adaptation manager waits for the adapt done messages from all agents. When all adapt done messages are collected, the adaptation manager proceeds to an adapted state. Then the adaptation manager sends resume messages to the agents and proceeds to the resuming state. When the adaptation manager collects resume done messages from all agents, it transitions to the resumed state. If there are more adaptation steps remaining in the adaptation path, then the adaptation manager will repeat the traversal of preparing, adapting, adapted, resuming, and resumed states until the system configuration matches the target configuration. When the last adaptation step has finished, the adaptation manager returns to the running state.

9.2.4 Failures During Adaptation Process

We identify two major types of failures based on our experience. First, if the communication between the manager and the agents is unreliable, then the messages between them may be lost, causing loss-of-message failures. Second, when the
agent of a local process receives a reset message, the local process may not be able to reach a safe state in a reasonably short period of time, thus causing a fail-to-reset failure. Both types of failures can be detected by a time-out mechanism on the manager.

Figure 9.2: State diagram of the adaptation manager during adaptation

Loss-of-Message Failure. Loss-of-message failures caused by transient network failures can be handled by multiple attempts to send the messages. However, loss-of-message failures caused by permanent network failures may cause system inconsistencies if the system does not respond to this type of failure correctly. The general rule for handling loss-of-message failures is that if the failures occur before the manager sends out the first resume message, then the adaptation should be aborted. That is, the manager should stop sending any new reset and adapt messages, and all the participating processes should roll back to the state prior to the adaptation. If the failures occur after the manager has sent out a resume message, then the adaptation should run to completion. That is, all the participating processes should eventually complete adaptation and resume.

Fail-to-Reset Failure. In some cases, when an agent receives a reset message, the local process may be engaged in a long critical communication segment, which may prevent it
from reaching a safe state in a reasonably short period of time, thus causing a fail-to-reset failure. If a process cannot reach a safe state after it has received a reset message, then the adaptation process should be aborted, and all affected processes should roll back to the state prior to the adaptation.

Failure Handling Strategies. In the event that a failure occurs during an adaptation step, there are two possible outcomes: (1) the adaptation step succeeds and the system reaches the target safe configuration, or (2) the adaptation step fails and the system reaches a safe configuration prior to the adaptation. If the adaptation step succeeds, then the manager should continue processing the remaining adaptation steps, if there are any. If the adaptation step fails, then the manager has four options: (1) retry the same step, (2) try other adaptation paths, (3) attempt to return to the source configuration, or (4) remain in the current safe configuration and wait for user intervention. We suggest using a combination of all four options: the adaptation manager first tries the same step a second time. If it fails again, then it tries an alternative adaptation path from the current configuration to the target configuration. If all possible paths to the target configuration have been tried and have failed, then the adaptation manager tries to return to the source configuration. If this attempt also fails, then the adaptation manager notifies the users and waits for user intervention. The dashed arrows in Figures 9.1 and 9.2 show the failure handling transitions on both the manager and the agents.

We claim that the adaptation process is still safe in the presence of failures. During an adaptation step, a rollback is invoked only when no process has been resumed, which ensures that no side effect is produced before the rollback. Otherwise, the adaptation will run to completion, which has the same effect as if the adaptation had no failures.
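The combined strategy above amounts to a simple fallback chain over attempts to complete an adaptation step. A minimal sketch; the callable abstractions and function names are illustrative, not the dissertation's API:

```python
def handle_step_failure(retry_step, alternative_paths, return_to_source, notify_user):
    """Fallback chain for a failed adaptation step, in the suggested order:
    1. retry the same step once,
    2. try each alternative safe adaptation path to the target,
    3. try to return to the source configuration,
    4. otherwise notify the user and wait for intervention.
    The first three arguments are callables returning True on success."""
    if retry_step():
        return "step-succeeded"
    for try_path in alternative_paths:
        if try_path():
            return "alternative-path-succeeded"
    if return_to_source():
        return "rolled-back-to-source"
    notify_user()
    return "awaiting-user-intervention"

# Example: the retry fails, but the first alternative path succeeds.
result = handle_step_failure(
    retry_step=lambda: False,
    alternative_paths=[lambda: True],
    return_to_source=lambda: False,
    notify_user=lambda: None,
)
print(result)  # alternative-path-succeeded
```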
9.3 Case Study: Video Streaming

We use a video multicasting system to illustrate the safe adaptation process. The system setup is similar to the audio streaming example introduced in Chapter 5. However, in this section, we use different configurations on the clients. Figure 9.3 shows the initial configuration of the application, comprising a video server and one or more video clients. In this example, one client is a hand-held computer (e.g., iPAQ) with a short battery life and limited computing power, and the second client is a laptop computer (e.g., Toughbook) with reasonable computing power, but limited battery capacity. On the server, a web camera captures video input and a video processor encodes the stream. The encoded video, already packetized, is delivered to the network through a MetaSocket. After traversing a chain of zero or more (encoder) filters, the packets are eventually transmitted on a multicast socket. On each client, the packets are processed by a chain of decoder filters in a receiving MetaSocket. Subsequently, they are passed to the video processor, where they are unpacketized into video frames. Finally, the frames are displayed on video players.

In this example, two main encryption schemes are available for processing the data: DES 64-bit encoding/decoding and DES 128-bit encoding/decoding. The sender has two components: E1, a DES 64-bit encoder, and E2, a DES 128-bit encoder.

Figure 9.3: Configuration of the video streaming application

The hand-held client has three components: D1, a DES 64-bit decoder, D2, a DES 128/64-bit compatible decoder, and D3, a DES 128-bit decoder. The laptop client has two components: D4, a DES 64-bit decoder, and D5, a DES 128-bit decoder. In general, a DES encoder generates DES encrypted packets from plain packets and a DES decoder decrypts the DES encrypted packets.
Each decoder implements a "bypass" functionality: when it receives a packet not encoded by its corresponding encoder, it simply forwards the packet to the next filter in the chain. The available adaptive actions are: (1) inserting, removing, and replacing a single encoder or decoder; (2) inserting, removing, and replacing an encoder/decoder pair; and (3) inserting, removing, and replacing an encoder/decoder triple. The overall adaptation objective is to reconfigure the system from running the DES 64-bit encoder/decoders to running the DES 128-bit encoder/decoders to "harden" security at run time. We use a separate process to implement the adaptation manager and attach an agent thread to the server and to each client, respectively.

9.3.1 Safe Adaptation Path

By analyzing the communication patterns between the encoders and the decoders, we find that the correct functionality of a decoder does not require an encoder, but in order to decode a packet generated by an encoder, there must be a corresponding decoder for each encoder. We have the following invariants, where ⊕ represents "exclusively select one from a given set of elements":

- System invariants:
  - Resource constraint: ⊕(D1, D2, D3). The receivers allow only one DES decoder instance in each device at any given moment due to computing power constraints.
  - Security constraint: ⊕(E1, E2). The sender should have one encoder in the system so that all data are encoded during adaptation.
- Dependency invariants:
  - E1 => (D1 ∨ D2) ∧ D4. The E1 encoder requires the D1 or D2 decoder, together with the D4 decoder.
  - E2 => (D3 ∨ D2) ∧ D5. The E2 encoder requires the D3 or D2 decoder, together with the D5 decoder.

We input the source and target configurations and the above dependency invariants to the adaptation manager, which generates the safe configuration set.
For brevity and automatic processing purposes, we use a 7-bit vector (D5, D4, D3, D2, D1, E2, E1) to represent a configuration: if the corresponding bit is "1", then the component is in the configuration; otherwise, it is not. The source configuration is (0100101) and the target configuration is (1010010). The resulting safe configuration set is shown in Table 9.1.

The adaptive actions shown in Table 9.2 are input to the adaptation manager. Only relevant actions are listed. The cost column gives packet delay in milliseconds. Note that in order to perform some of the actions (e.g., A6-A9), the server has to be blocked until the last packet processed by the encoder has been decoded by the decoder(s) on the client(s). As a result, these actions are much more expensive than other actions.

bit vector | configuration    | bit vector | configuration
0100101    | D4, D1, E1       | 1100101    | D5, D4, D1, E1
1101001    | D5, D4, D2, E1   | 1101010    | D5, D4, D2, E2
1110010    | D5, D4, D3, E2   | 0101001    | D4, D2, E1
1001010    | D5, D2, E2       | 1010010    | D5, D3, E2

Table 9.1: Safe configuration set

Action | Operation                   | Cost (ms) | Description
A1     | E1 → E2                     | 10        | replace E1 with E2
A2     | D1 → D2                     | 10        | replace D1 with D2
A3     | D1 → D3                     | 10        | replace D1 with D3
A4     | D2 → D3                     | 10        | replace D2 with D3
A5     | D4 → D5                     | 10        | replace D4 with D5
A6     | (D1, E1) → (D2, E2)         | 100       | A1 and A2
A7     | (D1, E1) → (D3, E2)         | 100       | A1 and A3
A8     | (D2, E1) → (D3, E2)         | 100       | A1 and A4
A9     | (D4, E1) → (D5, E2)         | 100       | A1 and A5
A10    | (D1, D4) → (D2, D5)         | 50        | A2 and A5
A11    | (D1, D4) → (D3, D5)         | 50        | A3 and A5
A12    | (D2, D4) → (D3, D5)         | 50        | A4 and A5
A13    | (D1, D4, E1) → (D2, D5, E2) | 150       | A1 and A10
A14    | (D1, D4, E1) → (D3, D5, E2) | 150       | A1 and A11
A15    | (D2, D4, E1) → (D3, D5, E2) | 150       | A1 and A12
A16    | −D4                         | 10        | remove D4
A17    | +D5                         | 10        | insert D5

Table 9.2: Adaptive actions and corresponding costs

The adaptation manager creates the SAG shown in Figure 9.4 and uses Dijkstra's shortest path algorithm to obtain the shortest path (A2, A17, A1, A16, A4), which in this example has cost 50 ms.
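The safe configuration set of Table 9.1 can be reproduced mechanically from the invariants by enumerating all 7-bit vectors and filtering. A minimal sketch; the encoding of configurations as Python sets is an illustrative choice:

```python
from itertools import product

COMPONENTS = ("D5", "D4", "D3", "D2", "D1", "E2", "E1")  # bit order of Table 9.1

def exactly_one(*bits):
    """The exclusive-selection operator: exactly one component is present."""
    return sum(bits) == 1

def is_safe(cfg):
    has = lambda c: c in cfg
    return (
        exactly_one(has("D1"), has("D2"), has("D3"))   # resource constraint
        and exactly_one(has("E1"), has("E2"))          # security constraint
        # E1 => (D1 or D2) and D4
        and (not has("E1") or ((has("D1") or has("D2")) and has("D4")))
        # E2 => (D3 or D2) and D5
        and (not has("E2") or ((has("D3") or has("D2")) and has("D5")))
    )

safe = set()
for bits in product((0, 1), repeat=7):
    cfg = frozenset(c for c, b in zip(COMPONENTS, bits) if b)
    if is_safe(cfg):
        safe.add("".join(map(str, bits)))

print(sorted(safe))
print(len(safe))  # 8 safe configurations, matching Table 9.1
```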
Figure 9.4: Safe adaptation graph (SAG)

9.3.2 Performing Adaptive Actions Safely

The adaptation steps for the safe adaptation path are:

Step (1). Action A2: Replace D1 with D2.
Step (2). Action A17: Insert D5.
Step (3). Action A1: Replace E1 with E2.
Step (4). Action A16: Remove D4.
Step (5). Action A4: Replace D2 with D3.

Action A2 in step (1) only involves the process running the MetaSocket on the hand-held device. The adaptation manager sends a reset message to the agent on the hand-held device. The global safe state of this action is the same as the local safe state of the device: the DES decoder is not decoding a packet. When the agent receives the reset message, it sets a "resetting" flag in the MetaSocket. When the decoder finishes decoding a packet, it checks the "resetting" flag. If it is set, then it notifies the agent and blocks itself. The agent sends a reset done message to the adaptation manager and performs the (A2: D1 → D2) action. When the adaptive action is done, it sends an adapt done message to the adaptation manager. The agent directly resumes the hand-held's full operation and sends a resume done message to the manager. The other steps (2-5) can be performed in a manner similar to that used in step (1).

9.4 Related Work

Numerous techniques have been proposed to address dynamic adaptation at run time. Kulkarni et al [77] proposed a distributed approach to safely adapt (i.e., perform component replacement on) distributed fault-tolerance components at run time. In their work, a distributed component is defined as a composition of a number of component fractions, each of which is installed at a different process in a system.
There exist dependency relationships among these component fractions, and thus certain components need to be blocked before they can be safely removed from the system. Their goal is to achieve system-wide substitution of all the component fractions of a distributed component so that the dependency relationships are not violated and the blocking of the component fractions is minimized. Towards this end, they proposed a distributed reset algorithm, which works in two steps. In the first step, termed the initialization wave, initialization messages are sent from a designated process to its neighbors (directly connected processes). Upon receiving the initialization message, a process first relays the message to its own neighbors, and then resets its state to prepare for the component replacement. At the end of the initialization wave, the paths of the initialization messages form a spanning tree. In the second step, termed the replacement wave, replacement messages are passed along the spanning tree to all the processes in the tree in order to replace the component fractions running on these processes. Using this distributed reset, they achieved distributed component replacement with locally minimal blocking (i.e., a component fraction is blocked only when it is replaced). The safe adaptation protocol introduced in this chapter is inspired by their approach. However, we use centralized management to minimize the cost of adaptation. The cost in our approach includes blocking time, resource usage, and other factors. Also, the minimum cost achieved by our approach is global, i.e., among all possible safe adaptation paths, while their minimal blocking is local to a given adaptation path.

Chen et al introduced Cactus [22], a system for constructing highly configurable distributed services and protocols. In Cactus, a host is organized hierarchically into layers, where each layer comprises a number of adaptive components (ACs).
Each AC contains multiple alternative adaptation-aware algorithm modules (AAMs) providing alternative implementations of a service. During normal operation (i.e., not adapting), only one AAM is running in each AC. The ACs in each layer receive incoming messages from the layer below, process the messages, and then pass them up to the layer above. Similarly, they receive outgoing messages from the layer above, process the messages, and then push them down to the layer below. They proposed a graceful adaptation protocol to coordinate the adaptation of adaptive components (switching from running AAM_old to running AAM_new) across multiple hosts. The protocol comprises three key steps: preparation, outgoing switchover, and incoming switchover. During preparation, each AC prepares to receive and buffer incoming messages produced by other AAM_new instances. During the outgoing switchover, each AC that is responsible for outgoing messages switches from running AAM_old to running AAM_new. In the incoming switchover, triggered by receiving an incoming message generated by AAM_new, each AC responsible for incoming messages switches from running AAM_old to running AAM_new. The graceful adaptation protocol avoids in-flight message loss during adaptation and reduces the message blocking incurred by the adaptation. Their approach implicitly assumes a simple dependency rule among ACs, i.e., the senders depend on the receivers. However, in some systems, the dependency relations may be more complex than that, such as in the case introduced in this chapter. Therefore, their approach cannot be directly applied to address the safe adaptation problem in this chapter.

Appavoo et al [5] proposed a hot-swapping technique that supports run-time object replacement in K42, an open-source research operating system developed at IBM Research. In K42, every service is encapsulated in a component represented by an object.
The services of the operating system can be adapted by replacing old objects with new ones. The key infrastructure support in K42 for hot-swapping includes: (1) references to objects are made indirectly through an Object Translation Table (OTT); (2) a generation count scheme is used to determine whether an object has reached a state in which it is not being used by other objects; (3) object state transfer negotiation protocols are used to transfer state information from an old object to a new object. The hot-swapping process for an object includes three stages. In the first stage, the OTT entry of the object points to the old object. In the second stage, the OTT entry points to a mediator, and the mediator performs the hot-swapping from the old object to the new object. In the third stage, the OTT entry points to the new object. The second stage includes the following phases.

1. Preparation phase: The system inserts a mediator between the OTT and the object. The mediator forwards calls to the original object and begins to track new calls to the object.

2. Blocking phase: When all the calls initiated before the preparation phase have completed, the adaptation enters the blocking phase. New calls to the object are blocked until all the calls tracked by the mediator have completed.

3. Quiescence phase: When all the calls initiated in the preparation phase have finished, the adaptation is in the quiescence phase. The system performs the state transformation from the old object to the new one.

4. Complete phase: After the state transformation is done, the adaptation enters the complete phase. The mediator is removed, the OTT entries point to the new object, and blocked calls are delivered to the new object.

The hot-swapping technique is specifically designed for the K42 operating system.
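The four phases above can be sketched as a small, hypothetical mediator. This is not K42's actual implementation: the names (HotSwapMediator, handle, pre_swap_calls_done, tracked_calls_done) are illustrative, and calls are modeled synchronously, so phase transitions are driven explicitly rather than by a real generation count.

```python
class HotSwapMediator:
    """Toy model of the preparation/blocking/quiescence/complete phases."""
    def __init__(self, old_obj, new_obj):
        self.old, self.new = old_obj, new_obj
        self.phase = "preparation"
        self.blocked = []          # calls held during the blocking phase

    def call(self, request):
        if self.phase == "preparation":
            return self.old.handle(request)   # forward to the original object
        if self.phase == "blocking":
            self.blocked.append(request)      # hold new calls
            return None
        return self.new.handle(request)       # complete: go to the new object

    def pre_swap_calls_done(self):
        # all calls initiated before the preparation phase have completed
        self.phase = "blocking"

    def tracked_calls_done(self):
        # all tracked calls have completed: quiescence, state transfer,
        # then delivery of the blocked calls to the new object
        self.phase = "quiescence"
        self.new.state = self.old.state       # state transfer
        self.phase = "complete"
        results = [self.new.handle(r) for r in self.blocked]
        self.blocked.clear()
        return results

class Counter:
    """Toy swappable object whose 'service' accumulates request values."""
    def __init__(self):
        self.state = 0
    def handle(self, request):
        self.state += request
        return self.state
```

A call made during preparation is served by the old object, a call made during blocking is held, and after the swap the held call is replayed against the new object, which now carries the transferred state.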
Their approach assumes that all calls can be completed in a short period of time, so that object blocking is minimized. Their approach also assumes that there are no dependency relationships among the calls, since such dependencies may cause deadlocks in the adaptation process, which we explain as follows. For example, if the completion of a call C_i initiated in the preparation phase depends on the completion of another call C_j initiated in the blocking phase, then C_i will never complete since C_j is blocked, and thus a deadlock is incurred. Therefore, the hot-swapping technique is not applicable to the safe adaptation problem introduced in this chapter.

9.5 Discussion

This chapter presented an approach to safe dynamic adaptation. We use a centralized adaptation manager to schedule the adaptation process, which results in a globally minimum-cost solution. We block adapting components until the system has reached a new safe state and thus avoid unsafe adaptation. We also use timeout and rollback mechanisms to deal with possible failures during the adaptation process to ensure consistency in the system.

The interaction between the manager and the agents in our approach is similar to the two-phase commit protocol [53] if we combine the safe state with the adapted state in the agents. However, in this work, we consider it clearer to have two separate states. Moreover, our protocol handles multiple adaptation steps where failures may occur at various phases of each step, whereas the two-phase commit protocol only addresses a single adaptation step.

Scalability is a concern for our technique. Because our technique searches for the optimal path in a SAG, the computational complexity may be high when there are numerous adaptive components in the system (exponential in the number of components participating in an adaptation). To handle the complexity,
we divide the adaptive components of a system into multiple collaboration sets, where components collaborate with only those in the same set. The component adaptation of each set can then be handled independently, thereby reducing the complexity.

Chapter 10

Conclusions and Future Investigations

In this thesis, we introduced an approach to systematically apply formal methods to different phases in the development of adaptive software in order to gain assurance in adaptation. These techniques are summarized as follows: (1) In the requirements analysis stage, we analyze and formally specify the requirements for adaptive software. We start from the high-level goals for the software and elicit the set of possible domains in which the software executes [144]. In order to systematically analyze the requirements for adaptive software, we proposed the goal-based adaptation requirements analysis. We separately specify the requirements for each domain (local requirements), for adaptations from one domain to another (adaptation variant properties), and for the overall system (global requirements) [143, 146]. We proposed A-LTL [143, 146] to formally specify adaptation requirements and TA-LTL to formally specify real-time properties in adaptive software [153, 155]. (2) In the model design and analysis stage, we design and analyze formal design models for adaptive software. In the MASD approach [144], we first construct a steady-state model (e.g., a Petri net) for each domain, then design adaptations among steady-state programs using adaptation models. After the models are created, we perform model analyses, including simulation and model checking, to verify and validate the models against the requirements specified in the requirements stage. We also introduced a model-based technique to provide assurance in re-engineering legacy software for adaptation [149].
The adaptation design is performed upon UML Statechart models, which are initially extracted from legacy code using the metamodel-based language translation technique. We add adaptive states and transitions to these UML Statechart models to construct adaptation models. These models are automatically translated to formal specification languages, e.g., Promela, for formal analysis. We introduced the cascade adaptation technique and an aspect-oriented adaptation enabling technique to translate the UML models into adaptive implementations. (3) In the implementation stage, we provide assurance in the adaptive software implementation with formal analysis and safe adaptation protocols. We use the models created in the design stage as blueprints to implement the adaptive software, and verify the implementations against requirements [144, 149]. We use the safe dynamic adaptation protocol to orchestrate adaptive actions among distributed adaptive components so that they take place in safe states, along safe adaptation paths, thus ensuring that the dependencies among these components are not violated during execution [143, 146]. In order to ensure that the implementation of adaptive software satisfies critical properties elicited in the requirements analysis stage, we developed a modular model checking technique, AMOEBA, that statically verifies adaptive software against A-LTL properties and significantly reduces the complexity of model checking [144]. For programs that are too complex to be verified statically, we also extended AMOEBA to be applied at run time to monitor and verify conditions in running adaptive software.

10.1 Contributions

The following is a list of contributions made by the thesis.

Requirements Phase.

• We proposed a finite state machine model for adaptive software [144, 145]. This model enables existing analysis techniques for traditional, non-adaptive software to be directly applied to adaptive software. Also,
it enables us to exploit specific characteristics of adaptive software in order to optimize their analysis [144] (Chapter 2).

• We introduced A-LTL, the adapt operator-extended linear temporal logic. We used A-LTL to formally specify adaptation properties of adaptive software [143, 146] (Chapter 3).

• We generalized three adaptation semantics, namely the one-point adaptation, the guided adaptation, and the overlap adaptation, and formally specified the semantics in A-LTL [143, 146] (Chapter 3).

• We introduced a goal-based technique to systematically analyze the requirements for adaptive software and to generate formal requirements specifications in A-LTL [144, 149] (Chapter 4).

• We introduced TA-LTL, a real-time extension to A-LTL for specifying critical real-time constraints in adaptive software. Three types of critical properties are identified, namely safety, liveness, and stability properties [153, 155] (Appendix A).

Design Phase.

• We proposed a formal design modeling technique for adaptive software. We identified the key features of adaptive software design (i.e., quiescent states, adaptive states, and adaptive transitions) and introduced a state-based modeling approach (MASD) to capture these features with design models [144]. We also introduced rapid prototyping and model-based testing to carry these features in the models to their implementations [144] (Chapter 5).

• We proposed a model-based re-engineering technique to enable adaptation in legacy software with assurance by leveraging the MASD approach [144], the metamodel-based language translation technique [97], and the aspect-oriented adaptation enabling technique [139]. We also introduced the cascade adaptation mechanism to handle state transformation from a source program to a target program in an adaptation [149] (Chapter 6).

Implementation Phase.
• We developed AMOEBA, a modular model checker that modularly verifies adaptive software against both LTL and A-LTL properties. The proposed technique reduces the model checking complexity by a factor of n, where n is the number of steady-state programs encompassed by the adaptive program [145] (Chapter 7).

• We developed a run-time model checker, AMOEBA-RT, that monitors the run-time conditions in adaptive software and verifies these conditions against LTL and A-LTL properties (Chapter 8).

• We designed a safe software adaptation protocol that minimizes the adaptation cost and ensures consistency among adaptive components. A retry/rollback mechanism is employed to ensure the consistency of adaptive actions in the presence of failures [150, 152] (Chapter 9).

10.2 Future Investigations

We now discuss a number of future research directions for providing assurance in dynamically adaptive software. The following areas are natural extensions to the work presented in the dissertation.

• Integrated tool suites. This thesis introduces a number of techniques to provide assurance in adaptive software. These techniques are intended to be applied in a coordinated manner. Many parts of our techniques are automated, with the remaining parts amenable to automation. Our approach leverages existing tools, including Renew, Maria, Hydra, Spin, Rational XDE, etc. We have also developed prototype tools to support our techniques, including AMOEBA and AMOEBA-RT. However, the tool support for our proposed approach can be further improved in the following respects: (1) New tools are required for certain tasks, including the extraction of adaptation state machines from source code [26, 28, 117], automatic selection of off-the-shelf components to be used as/in steady-state programs [85, 87, 125], automatic translation from adaptation models (e.g., UML Statechart models) to adaptive programs (e.g., Java) [97], and so on.
(2) Existing tools are not sufficiently mature or easily integratable to be widely adopted. (3) Existing tools [78, 94] tend to use their own specific representations, making them difficult to use together in practice, i.e., they are incompatible with each other. Based on the above observations, we propose building an integrated tool suite in order to promote practical applications of our proposed formal techniques to the development of adaptive software.

• Integrate ADL and MASD. The MASD approach introduced in Chapter 5 focuses on the behavioral aspect of adaptive programs. We also realize the importance of expressing adaptations at the architecture level, where structural changes of adaptive software are modeled as disconnections and reconnections of connectors between components. We believe an integration of our approach with an appropriate ADL representation, such as Darwin [92] or Wright [2], will provide a comprehensive solution to both structural and behavioral changes in the development of dynamically adaptive software.

• Real-time adaptation model and model checking. Although we have proposed TA-LTL as a real-time temporal logic for adaptive software, the support for real-time adaptive software is still limited. A formal model for real-time adaptive programs needs to be defined, and a model checking algorithm is also required. We hypothesize that these can be accomplished by extending our adaptive program model (in Chapter 2) and the model checking algorithm for A-LTL (in Chapter 7) with real-time constructs introduced in other real-time formal representations [3, 30, 37, 48, 88, 89].

• Support for run-time changes of requirements. This thesis focuses on adaptation scenarios where the possible execution domains can be determined at development time, and the requirements for each domain are fixed at run time. In some autonomic computing systems, the
software may need to adapt to new execution domains that are unknown to the developers before run time. The requirements for a domain may also be changed at run time to respond to unanticipated changes in the domain, or to achieve different goals. Techniques are required to address this type of requirements change at run time.

• Incorporate the product line concept. The re-engineering technique introduced in Chapter 6 requires a legacy database comprising different implementations to be used in various execution domains. These implementations could be in the form of a product line or program families [108]. It is worthwhile to study the common properties of these programs and to encapsulate the differences among them. Accordingly, the adaptation design and implementation technique can be optimized.

• Integrate AMOEBA with modular verification of features. In the modular verification technique introduced in Chapter 7, we did not place any constraints on the structure of the different steady-state programs. In practice, they usually share significant commonalities in their behavior and, therefore, may be constructed using compositions of features, where a feature is a piece of code pluggable into another piece of code (called the base) at development time [86]. Fisler et al proposed a modular approach to verifying feature-oriented software [45]. It would be interesting to study a technique that leverages the modular feature verification technique in AMOEBA so that not only adaptive software, but also each steady-state program itself, can be verified modularly and incrementally. As a result, the complexity of model checking adaptive software can be further reduced.

10.3 Final Thoughts

As such, more rigorous software development techniques specifically focusing on adaptation needs are needed. This dissertation presents the first collection of techniques that addresses assurance for adaptive systems,
that starts with requirements and takes them all the way to implementation and testing, including re-engineering legacy systems for adaptation.

Appendix A

Timed A-LTL

In this appendix we introduce Timed A-LTL (TA-LTL) [153, 155], an extension to A-LTL that includes a metric for time [37, 153]. Specifically, TA-LTL enables temporal constraints to be expressed in a quantitative form, in addition to specifying relative temporal ordering. We assume that the adaptation from one program to another has been specified with an A-LTL formula. TA-LTL enables us to specify three types of properties of adaptation that involve absolute time, namely the safety, liveness, and stability properties. The remainder of this appendix is organized as follows. Section A.1 provides background information on an underlying logic, TPTL [3]. In Section A.2, we introduce the syntax and semantics of TA-LTL, and Section A.3 describes the use of TA-LTL in specifying the timing properties of the three commonly occurring adaptation semantics. Section A.4 illustrates the approach with a mobile communication example. Related work is overviewed in Section A.5. Section A.6 discusses future directions.

A.1 Background: TPTL

The TA-LTL language is based on two existing temporal logics: A-LTL and TPTL [3]. In this section, we briefly overview TPTL.

Temporal logics with a metric for time allow the definition of quantitative temporal relationships (in addition to qualitative ordering), such as the distance between events and the duration of events. A typical way to add a metric for time to a propositional temporal logic is to replace the unrestricted temporal operators by time-bounded operators. For example, the bounded operator ◇_[1,3](φ) states "φ will become true eventually within 1 to 3 time units from the current time." An alternative way to represent time quantitatively is to use freeze quantification,
proposed by Alur and Henzinger [3] in TPTL (Timed Propositional Temporal Logic). TPTL is a formalism for specifying real-time properties. In TPTL, a time variable x can be bound by a freeze quantifier ("x.") which freezes x to the time at which the temporal context is evaluated. Formally, given a supply of variables V = {x, y, z, ...} and a set of atomic propositions P = {p, q, ...}, the syntax of TPTL is inductively defined as follows:

  π ::= x + c | c,                                                  (A.1)
  φ ::= p | π1 ≤ π2 | π1 ≡_d π2 | ¬φ | φ1 ∧ φ2 | ○φ | φ1 U φ2 | x.φ,  (A.2)

for x ∈ V, p ∈ P, and c, d ∈ N, d ≠ 0. In TPTL, the timing constraints are of the form π1 ≤ π2 or π1 ≡_d π2. The "≤" operator is the traditional arithmetic inequality operator, and π1 ≡_d π2 means that π1 is congruent to π2 modulo the constant d [3]. More notably, the formula x.φ means that φ is true if all occurrences of x in φ are replaced by the current time T_0. Other propositional and temporal operators are defined as usual in LTL.

A.2 Timed Adapt Operator-Extended LTL

Next, we describe the syntax and semantics of TA-LTL.

A.2.1 Syntax

TA-LTL is intended to address the safety, liveness, and stability of adaptation behaviors. Safety and liveness are adaptation properties that restrict the duration of adaptation state transitions. Stability constrains the frequency of state changes to ensure that a system does not "thrash" by oscillating between states. To support real time in A-LTL, we adopt the real-time metric of TPTL and apply it to formulae in A-LTL. The formulae of TA-LTL are constructed from proposition symbols and timing constraints comprising inequality operators, temporal operators, and time reading variables (i.e., variables representing time values). Let P be a set of proposition symbols (p, q, r, ...), let N be the set of nonnegative integers, and let V be an infinite supply of time reading variables (x, y, z, ...).
The terms of time π and formulae φ of TA-LTL are inductively defined as follows:

  π ::= x + c | c
  φ ::= p | π1 ~ π2 | φ1 ∧ φ2 | ¬φ | ○φ | φ1 U φ2 | ◇φ | □φ | φ1 ⇀^Ω φ2 | x.φ

for x ∈ V, p ∈ P, and integer constant c ∈ N. In TA-LTL, the timing constraints are of the form π1 ~ π2, where π1 and π2 are two terms, and ~ represents one of the arithmetic inequality operators <, ≤, >, ≥, and =. The temporal operators ○ (next), U (until), ◇ (finally), and □ (always) are similar to the temporal operators defined in LTL. The operator ⇀^Ω is similar to the adapt operator defined in A-LTL, "x." is similar to the freeze operator in TPTL, and the other propositional operators, such as ∨ (or), ⇒ (imply), and ⇔ (co-imply), are defined similarly to those in propositional logic.

A.2.2 Semantics of TA-LTL

TA-LTL is evaluated over timed state sequences, where each state is an interpretation of a subset of P. A timed state sequence is an infinite sequence of states, each of which is labeled with a discrete time t (t ∈ N) [3]. Formally, a state sequence σ = σ0 σ1 σ2 ... is an infinite sequence of states, where σ_i ⊆ P for all i ≥ 0. A time sequence T = T0 T1 T2 ... is an infinite sequence of time readings, where T_i ∈ N for all i ≥ 0, and T satisfies

  Monotonicity: T_i ≤ T_{i+1} for all i ≥ 0;
  Progress: for all t ∈ N, there is some i (i ≥ 0) such that T_i > t.

Thus, a timed state sequence is denoted as a pair ρ = (σ, T), or it can also be considered an infinite sequence of pairs ρ = (σ0, T0), (σ1, T1), .... We use ρ^i to represent the i-th suffix of ρ, i.e., ρ^i = ρ_i, ρ_{i+1}, ....

An environment ε is an assignment to all the variables in V, which is also extended to apply to terms. For example, ε(x + c) is defined to be ε(x) + c. We use ρ ⊨_ε φ to represent the satisfaction relationship between a timed state sequence ρ and a TA-LTL formula φ under the environment ε.
Similar to A-LTL, we also define TA-LTL semantics in the domain of timed state sequences of finite length. We use λ ⊨_fin,ε φ to represent that a finite timed state sequence λ satisfies TA-LTL formula φ in the finite sequence domain. The TA-LTL semantics is formally defined as follows:

• λ ⊨_fin,ε φ if and only if ρ ⊨_ε φ, where ρ is the infinite timed state sequence obtained by extending λ with its last state infinitely many times. This rule relates the semantics in the infinite and finite timed state sequence domains.
• ρ ⊨_ε p, for p ∈ P, if and only if p ∈ σ0.
• ρ ⊨_ε π1 ~ π2 if and only if ε(π1) ~ ε(π2).
• ρ ⊨_ε φ ∧ ψ if and only if ρ ⊨_ε φ and ρ ⊨_ε ψ.
• ρ ⊨_ε ¬φ if and only if ρ ⊭_ε φ.
• ρ ⊨_ε ○φ if and only if ρ^1 ⊨_ε φ.
• ρ ⊨_ε φ U ψ if and only if there exists i (i ≥ 0) such that ρ^i ⊨_ε ψ, and for all j (0 ≤ j < i), ρ^j ⊨_ε φ.
• ρ ⊨_ε φ ⇀^Ω ψ if and only if there exists i (i > 0) such that ρ^i ⊨_ε ψ and ρ0 ρ1 ... ρ_{i−1} ⊨_fin,ε φ ∧ Ω.

Other operators (∨, ⇒, ◇, □, etc.) are defined similarly to those in LTL. In this chapter, we assume that all TA-LTL formulae are closed, i.e., any occurrence of a time variable x must be within the scope of a freeze operator "x.".

A.3 Specifying Adaptation Timing Properties

Figure A.1 depicts the three commonly used adaptation scenarios introduced in Chapter 3. Figure A.1(a) depicts the one-point adaptation (the source program adapts to the target program at a certain point during its execution); Figure A.1(b) shows the guided adaptation (the source program adapts to the target program under a restriction condition); and Figure A.1(c) depicts the overlap adaptation (under the restriction conditions, the target program behavior starts before the source program behavior stops). In this chapter, we focus on specifying the timing properties of these adaptation semantics with TA-LTL by binding time variables in the temporal formulae. Specifically, we investigate three real-time properties of adaptation semantics: safety, liveness, and stability.
Figure A.1: Three adaptation scenarios: (a) one-point adaptation, (b) guided adaptation, (c) overlap adaptation.

Safety asserts that properties that may invalidate the adaptation will not happen. The timing properties of safety enforce that the source/target program should enter the restriction condition or safe state within a certain time period after receiving the adaptation request. If this time constraint cannot be satisfied, then it might be too late for the system to react to the dynamically changing environment, attacks, user requirements, etc. Safety properties are most important to adaptations responding to security threats, in which late responses may cause irreversible damage to the system.

Liveness asserts properties that respond to the adaptation requests and asserts that the adaptation goals will eventually be satisfied in time. The liveness timing properties restrict the time periods allowed for the source program to adapt to the target program upon receiving adaptation requests. If this time constraint cannot be satisfied, then the adaptation needs may become invalid or the adaptation may cause the system to enter an inconsistent state [150, 152].

Stability asserts how long the system should remain in certain states. In order to avoid adaptation oscillation caused by frequent adaptation triggering, the timing properties of stability constrain the time interval between two successive adaptations and thus ensure that the system executes in a steady-state program for a certain amount of time before the next adaptation is allowed. If the stability time constraint is violated, then the system might enter an unstable state.
While each steady-state program may have its own real-time constraints, in this chapter we focus only on adaptation-related real-time constraints and assume that the source and the target steady-state programs have both been specified in LTL. We refer to these specifications as base specifications and denote them S_SPEC and T_SPEC, respectively. We specify the adaptation from the source program to the target program with A-LTL by extending the base specifications of the source and the target programs. For some adaptations, the source/target program behavior may need to be constrained during the adaptation. We use an LTL formula to specify the restriction condition, R_COND, during the adaptation.

We use a proposition A_REQ to indicate the receipt of an adaptation request to a target program. We identify the fulfillment states for adaptation to be those states where all the obligations required by the source specification S_SPEC are fulfilled by the source program, and thus it is safe to terminate the source behavior.

A.3.1 One-Point Adaptation

Under one-point adaptation semantics, after receiving an adaptation request, the source program adapts to the target program at a certain point during its execution. The prerequisite is that the source program should always eventually reach fulfillment states during its execution. This semantics is depicted in Figure A.1(a), where each circle represents a state. Solid lines represent state intervals, and the label of each solid line represents the property held by the interval. The arrow points to the state in which an adaptation request is received. This adaptation semantics can be specified in A-LTL as follows:

  SPEC_ONE-POINT = (S_SPEC ∧ ◇A_REQ) ⇀ T_SPEC.   (A.3)

This formula states that the program initially satisfies S_SPEC. When the program reaches a fulfillment state, i.e., all obligations demanded by S_SPEC are fulfilled, the program stops being obligated by S_SPEC and starts to satisfy T_SPEC.
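The adapt-operator reading of formula (A.3) can be illustrated with a toy checker (this is not the dissertation's AMOEBA checker): a finite trace satisfies the one-point semantics if it splits into a prefix that fulfills the source obligation and contains the request, followed by a suffix that fulfills the target obligation. Per-state predicates stand in for the full LTL specifications S_SPEC and T_SPEC, and the trace encoding (dicts with "enc"/"req" keys) is made up for the example.

```python
def satisfies_one_point(trace, source_ok, target_ok, is_request):
    """Try every split point: source prefix (with request), target suffix."""
    for split in range(1, len(trace)):
        prefix, suffix = trace[:split], trace[split:]
        if (all(source_ok(s) for s in prefix)
                and any(is_request(s) for s in prefix)
                and all(target_ok(s) for s in suffix)):
            return True
    return False

trace = [
    {"enc": "E1", "req": False},
    {"enc": "E1", "req": True},    # adaptation request received
    {"enc": "E2", "req": False},   # adapt point: target behavior begins
    {"enc": "E2", "req": False},
]
```

With source_ok/target_ok testing which encoder is running and is_request testing the request flag, the trace above satisfies the property, while a trace in which no request is ever received does not.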
In the context of adaptive processing of data streams, an example is to insert the encoder at the sender side, where each operation is applied independently to each packet in the outgoing stream.

We specify the liveness timing constraint for one-point adaptation as follows:

  LIVE_ONE-POINT = □ x.(A_REQ ⇒ ◇ y.(T_SPEC ∧ (y ≤ x + t_live))).   (A.4)

The formula states that once the source program receives an adaptation request (specified by A_REQ), it should adapt to the target program (specified by T_SPEC) within t_live, where t_live represents the upper bound of the constrained time frame.

The stability timing constraint for one-point adaptation is specified as follows:

  STABLE_ONE-POINT = x.((S_SPEC ∧ ◇A_REQ) ⇀ y.(T_SPEC ∧ (y ≥ x + t_freq))).   (A.5)

The formula states that the source program should continue to satisfy S_SPEC for at least t_freq before it can start to satisfy T_SPEC. The timing constraints for one-point adaptation are illustrated in Figure A.2, where the time difference constrained by each formula is denoted by Δt.

Figure A.2: Real-time constraints for one-point adaptation (Δt_live ≤ t_live; t_freq ≤ Δt_freq).

A.3.2 Guided Adaptation

Under guided adaptation semantics (visually depicted in Figure A.1(b)), after receiving an adaptation request, the system first applies a restriction condition to the source program behavior, and then adapts to the target program when it reaches a fulfillment state. This semantics is suited for adaptations where the source program cannot otherwise guarantee reaching a fulfillment state within a given amount of time. The restriction condition ensures that the source program will reach a fulfillment state. This situation arises when the system needs to provide continued service of the old system before switching to the new system.
For example, if we are applying encryption encoding/decoding to a sender and a receiver, the receiver needs to process any unencoded packets that are in-flight or buffered at the receiver before the insertion of the decoder. This adaptation semantics can be specified in A-LTL as follows:

  $SPEC_{GUIDED} = (S_{SPEC} \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} R_{COND})) \stackrel{\Omega}{\rightharpoonup} T_{SPEC}.$    (A.6)

The specification states that initially S_SPEC is satisfied. After an adaptation request, A_REQ, is received, the program should satisfy a restriction condition R_COND (denoted by the first adapt operator). When the program reaches a fulfillment state of the source, the program stops being constrained by S_SPEC and starts to satisfy T_SPEC (denoted by the second adapt operator). The hot-swapping technique introduced by Appavoo et al. [5] and the safe adaptation protocol [150, 152] introduced in our previous work use the guided adaptation semantics.

The safety timing property of SPEC_GUIDED can be specified in TA-LTL as follows:

  $SAFE_{GUIDED} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} \Diamond y.(R_{COND} \wedge (x + t_{safe} \ge y))) \stackrel{\Omega}{\rightharpoonup} T_{SPEC}.$    (A.7)

This formula states that upon receiving the adaptation request, the source program should satisfy the restriction condition, R_COND, within t_safe, otherwise the response is too late to ensure safety in the system, where t_safe represents the upper bound of the constrained time frame.

The liveness property LIVE_GUIDED for the guided adaptation is specified as follows:

  $LIVE_{GUIDED} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} (R_{COND} \wedge \Diamond y.(T_{SPEC} \wedge (y \le x + t_{live})))).$    (A.8)

This formula has a similar meaning to the liveness property for the one-point adaptation. It states that once the source program receives an adaptation request (specified by A_REQ), it should adapt to the target program (specified by T_SPEC) within t_live, where t_live represents the upper bound of the constrained time frame.
The stability property STABLE_GUIDED for the guided adaptation is as follows:

  $STABLE_{GUIDED} = x.((S_{SPEC} \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} R_{COND})) \stackrel{\Omega}{\rightharpoonup} y.(T_{SPEC} \wedge (y \ge x + t_{freq}))).$    (A.9)

The formula states that the source program should continue to satisfy S_SPEC for at least t_freq before it can start to satisfy T_SPEC. The real-time constraints for guided adaptation are illustrated in Figure A.3, where Δt_safe ≤ t_safe, Δt_live ≤ t_live, and t_freq ≤ Δt_freq.

Figure A.3: Real-time constraints for guided adaptation.

A.3.3 Overlap Adaptation

Under overlap adaptation semantics, depicted in Figure A.1(c), the target steady-state program behavior starts before the source steady-state program behavior stops. During the overlap of the source and the target behavior, a restriction condition, R_COND, is applied to safeguard the correct behavior of the program. This adaptation semantics is suited for the case when continuous service from the adaptive program is required. The restriction condition ensures that the source program reaches a fulfillment state. For example, in an adaptive communication program, a receiver may need to be able to handle both unencoded and encoded packets for a period of time after the filter has been reconfigured. The overlap adaptation semantics is formally specified in A-LTL as follows:

  $SPEC_{OVERLAP} = ((S_{SPEC} \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} S_{RCOND})) \stackrel{\Omega}{\rightharpoonup} true) \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} (T_{SPEC} \wedge (T_{RCOND} \stackrel{\Omega}{\rightharpoonup} true))).$    (A.10)

This formula states that the entire adaptation process comprises two parts. Initially only S_SPEC is satisfied. After an adaptation request, A_REQ, is received, the system should start to satisfy a source restriction condition, S_RCOND (denoted by the first adapt operator). When the system reaches a fulfillment state of the source program, the adaptive program stops being obliged by S_SPEC and S_RCOND (denoted by the second adapt operator).
The second part of the formula specifies the target program behavior, which starts to satisfy T_SPEC and the target restriction condition T_RCOND upon receiving the adaptation request A_REQ (denoted by the third adapt operator). Finally only T_SPEC continues to be satisfied (denoted by the fourth adapt operator). The graceful adaptation protocol introduced by Chen et al. [22] and the distributed reset protocol introduced by Kulkarni et al. [77] use the overlap adaptation semantics.

The real-time properties of SPEC_OVERLAP are shown in Formulae A.11–A.16 as follows:

  $SAFE_{REQ\text{-}ADAPT} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} \Diamond y.(S_{RCOND} \wedge (x + t_{s\text{-}ra} \ge y))) \stackrel{\Omega}{\rightharpoonup} true,$    (A.11)

  $LIVE_{REQ\text{-}FULFILL} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} (S_{RCOND} \stackrel{\Omega}{\rightharpoonup} \Diamond y.(x + t_{l\text{-}rf} \ge y))),$    (A.12)

  $LIVE_{REQ\text{-}TSTART} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} \Diamond y.((T_{SPEC} \wedge T_{RCOND}) \wedge (x + t_{l\text{-}rt} \ge y))),$    (A.13)

  $LIVE_{TSTART\text{-}COMPLETE} = \Box (A_{REQ} \stackrel{\Omega}{\rightharpoonup} x.((T_{SPEC} \wedge T_{RCOND}) \stackrel{\Omega}{\rightharpoonup} \Diamond y.(x + t_{l\text{-}tc} \ge y))),$    (A.14)

  $LIVE_{REQ\text{-}COMPLETE} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} ((T_{SPEC} \wedge T_{RCOND}) \stackrel{\Omega}{\rightharpoonup} \Diamond y.(x + t_{l\text{-}rc} \ge y))),$    (A.15)

  $STABLE_{OVERLAP} = x.((S_{SPEC} \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} S_{RCOND})) \stackrel{\Omega}{\rightharpoonup} \Diamond y.(x + t_{freq} \le y)).$    (A.16)

In Formula A.11, SAFE_REQ-ADAPT states that upon receiving the adaptation request, the source program should satisfy the source restriction condition S_RCOND within t_s-ra; otherwise the response time is too long to be effective. In Formula A.12, LIVE_REQ-FULFILL states that once the source program receives an adaptation request, it should complete the whole adaptation process within t_l-rf. Similarly, in Formula A.13, LIVE_REQ-TSTART indicates that once the adaptation request is received, the target program must start within t_l-rt. In Formula A.14, LIVE_TSTART-COMPLETE states that once the target behavior starts, the entire adaptation process must complete within t_l-tc. In Formula A.15, LIVE_REQ-COMPLETE states that the time from receiving the adaptation request to being fully functioning must be within t_l-rc.
And finally, in Formula A.16, STABLE_OVERLAP states that the source program should continue to satisfy S_SPEC for at least t_freq before it may stop satisfying S_SPEC. The real-time constraints for overlap adaptation are illustrated in Figure A.4, where Δt_s-ra ≤ t_s-ra, Δt_l-rf ≤ t_l-rf, Δt_l-rt ≤ t_l-rt, Δt_l-tc ≤ t_l-tc, Δt_l-rc ≤ t_l-rc, and t_freq ≤ Δt_freq.

Figure A.4: Real-time constraints for overlap adaptation.

A.4 Case Study: Live Audio Streaming Program

To illustrate the use of TA-LTL in specifying adaptation timing properties, we use TA-LTL to specify the safety, liveness, and stability of the adaptation behavior of an adaptive audio streaming system. The interconnection of the system is depicted in Figure A.5. A live audio stream is multicast from a wired desktop computer to multiple mobile devices via an 802.11b WLAN. Effectively, the receivers are used as multicast-capable Internet "phones" participating in a conferencing application. We expect the system to operate in environments with different packet loss rates. The loss rate of the wireless connection changes over time, and the program should adapt its behavior accordingly: when the loss rate is low, the sender/receiver should use a low-loss-tolerance and low-bandwidth-consuming encoder/decoder or should remove the encoder/decoder completely; when the loss rate is high, they should use a high-loss-tolerance and consequently high-bandwidth-consuming encoder/decoder.
Figure A.5: Audio streaming system connection.

A.4.1 Forward Error Correction (FEC) Filters

The adaptive behavior in this system is realized using MetaSockets [120] (Section 3.5). In this case study, we use two different FEC filters to respond to the dynamically changing loss rate. One filter uses block erasure codes [116], and the other uses the GSM 06.10 encoding algorithm, also used for cellular telephones [34]. The FEC-encoded data are different from the original audio data and can be played only after they are decoded by the FEC decoder. A run-time scenario is to have all collected audio samples at the sender side stored in a data buffer, waiting to be encoded by the FEC encoder. The encoded data packets are then multicast to the receivers and stored in a data buffer, waiting to be decoded by the FEC decoder. The decoded audio data will eventually be played by the receivers. Here, we use the guided adaptation semantics to specify the adaptation of inserting an FEC filter at the receiver side.

The use of (n, k) block erasure codes for error correction was popularized by Rizzo [116] and is now used in many wired and wireless distributed systems. Figure A.6 depicts the basic operation of these codes. An encoder converts k source packets into n encoded packets, such that any k of the n encoded packets can be used to reconstruct the k source packets [116]. In this chapter, we use only systematic codes, which means that the first k of the n encoded packets are identical to the k source packets. We refer to the first k packets as data packets, and the remaining n - k packets as parity packets. Each set of n encoded packets is referred to as a group. The advantage of using block erasure codes for multicasting is that any x (x ≤ n - k) parity packets can be used to correct x independent packet losses among different receivers [116].
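The reconstruction property of a systematic (n, k) group can be stated compactly: a group is fully recoverable exactly when at least k of its n encoded packets arrive. A minimal sketch (the class name and the boolean-array encoding of arrivals are illustrative assumptions):

```java
// Illustrative check for systematic (n, k) block erasure codes:
// a group of n encoded packets can be fully reconstructed iff
// at least k of them (data or parity, in any combination) arrive.
public final class BlockErasure {
    public static boolean recoverable(boolean[] received, int k) {
        int count = 0;
        for (boolean r : received) if (r) count++;
        return count >= k;
    }
}
```

For instance, with FEC (6, 4) a group survives the loss of any two of its six packets, but not three.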
In the remainder of the chapter, we refer to the block-oriented FEC simply as "FEC (n, k)". While block-oriented FEC approaches are effective in improving the quality of interactive audio streams on wireless networks, the group sizes must be relatively small in order to reduce playback delays.

Figure A.6: Operation of block erasure code.

An alternative approach with lower delay and lower overhead is the GSM-oriented FEC encoding (also known as signal processing-based FEC (SFEC)) [16], in which a lossy, compressed encoding of each packet p_i is piggybacked onto one or more subsequent packets. If packet p_i is lost but one of the encodings of packet p_i arrives at the receiver, then at least a lower-quality version of the packet can be played to the listener. The parameter θ is the offset between the original packet and its compressed version. Figure A.7 shows two different examples, one with θ = 1 and the other with θ = 2. It is also possible to place multiple encodings of the same packet in multiple subsequent packets, such as using both θ1 = 1 and θ2 = 3. Although GSM is a CPU-intensive coding algorithm [16], the bandwidth overhead is very small. In the remainder of the chapter, we refer to the GSM-oriented FEC simply as "GSM (θ, c)", which means c copies of the coded packet p are placed in c successive packets, beginning from the θth packet after p.

Figure A.7: GSM encoding on a packet stream (d_i: data, g_i: copy).

A.4.2 Specifying QoS Constraints with TA-LTL

In previous research [154], Zhou et al. have investigated the quality-of-service (QoS) properties (such as packet delivery rate and delay) of these two FEC protocols. Table A.1 shows the loss rate perceived by the receiving application, that is, after FEC decoding,
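Under the GSM (θ, c) scheme, a lost packet is still playable at reduced quality if any packet carrying one of its compressed copies arrives. A sketch of this recoverability check (the class name and array encoding are illustrative assumptions; copies of packet i are assumed to ride in packets i+θ through i+θ+c-1, per the placement rule above):

```java
// Illustrative GSM(theta, c) playability check: packet i can be played
// (possibly at lower quality) iff it arrives itself, or any of the c
// successive packets starting at offset theta that carry its copy arrive.
public final class GsmFec {
    public static boolean playable(boolean[] arrived, int i, int theta, int c) {
        if (arrived[i]) return true;                       // original arrived
        for (int j = i + theta; j < i + theta + c && j < arrived.length; j++) {
            if (arrived[j]) return true;                   // a piggybacked copy arrived
        }
        return false;
    }
}
```

For example, with θ = 1 a lost packet is recovered from its immediate successor, whereas with θ = 2 that same loss pattern is not recoverable until the packet two positions later arrives.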
for different FEC (n, k) or GSM (θ, c) settings. Another factor important to real-time communication is the additional delay introduced into the packet stream by the encoding and decoding filters. Table A.2 shows the worst-case delay introduced by different FEC codes while waiting for the encoded packets. For example, considering FEC (8, 4) and GSM (3, 1), if the first data packet is lost, then the receiver will need to wait for at least 3 packets until the first parity packet or piggybacked packet arrives to recover the loss.

Table A.1: Loss rate comparison of different FEC codes

  Code           Raw    GSM(1,1)  GSM(1,2)  GSM(1,3)  GSM(2,1)  GSM(2,2)  GSM(2,3)
  Loss rate (%)  28     11.05     6.28      3.53      13.88     6.8       3.78

  Code                  GSM(3,1)  GSM(3,2)  GSM(3,3)  FEC(4,4)  FEC(6,4)  FEC(8,4)
  Loss rate (%)         10.27     4.8       2.75      28.20     16.29     9.16

Table A.2: Delay comparison of different FEC codes

  Code        Raw    GSM(1,1)  GSM(1,2)  GSM(1,3)  GSM(2,1)  GSM(2,2)  GSM(2,3)
  Delay (ms)  0      19.95     39.95     59.9      40.83     60.37     80.10

  Code               GSM(3,1)  GSM(3,2)  GSM(3,3)  FEC(4,4)  FEC(6,4)  FEC(8,4)
  Delay (ms)         62.55     82.52     102.87    17.79     37.73     52.33

From the QoS comparison we know that, for a certain wireless environment (e.g., the 28% raw loss rate shown in Table A.1), different FEC codes provide different loss-recovery performance while exhibiting different timing characteristics. For example, if the raw loss rate of the wireless environment changes to 28% and the expected loss rate after FEC decoding is ≤ 10%, then FEC (8, 4), GSM (1, 2), GSM (1, 3), GSM (2, 2), GSM (2, 3), GSM (3, 2), and GSM (3, 3) are possible filter candidates for adaptation. However, in order to satisfy the real-time interactive audio streaming requirements, the accumulated delay after adaptation should not exceed 150 milliseconds [118]. For this example, in the guided adaptation, the restriction condition is that data is blocked from transmission during the adaptation. Thus the accumulated delay comprises two parts: the adaptation delay and the FEC delay.
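The filter-selection argument above can be mechanized: keep exactly those codes whose post-FEC loss rate meets the target and whose FEC delay, added to the adaptation delay, stays within the interactive-audio budget (150 ms in this study). A sketch (the class name is an illustrative assumption; the sample loss and delay figures below are taken from Tables A.1 and A.2):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative candidate-filter selection: a code qualifies when its
// post-FEC loss rate meets the target AND adaptationDelay + fecDelay
// fits within the end-to-end delay budget.
public final class CodecSelector {
    public static List<String> candidates(String[] names, double[] lossPct,
                                          double[] delayMs, double maxLossPct,
                                          double adaptationDelayMs, double budgetMs) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < names.length; i++) {
            if (lossPct[i] <= maxLossPct && adaptationDelayMs + delayMs[i] <= budgetMs) {
                out.add(names[i]);
            }
        }
        return out;
    }
}
```

For instance, with a hypothetical 60 ms adaptation delay, GSM (3, 3) meets the 10% loss target but exceeds the 150 ms budget, so it is eliminated, while GSM (1, 2) and FEC (8, 4) survive both tests.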
If there is no additional operation (e.g., dropping packets, fast playback) to reduce the delay, this accumulated delay will remain in the post-adaptation communication. Thus we can use TA-LTL to specify the real-time constraint, and the selection of FEC filters and the implementation of the adaptation should satisfy this constraint.

This adaptation semantics and its timing properties can be specified in A-LTL and TA-LTL as shown in Figure A.8. In Formula A.17, SPEC_FEC states that initially the system is running without FEC filters (the source program), i.e., it satisfies NoFEC_SPEC. After an adaptation request, A_REQ, is received, the system should satisfy a restriction condition R_COND, which requires the receiver to temporarily stop receiving incoming encoded FEC data. When the source program reaches a fulfillment state, i.e., all original audio data in the buffer have been played, the system stops satisfying NoFEC_SPEC and starts to satisfy WithFEC_SPEC. In Formula A.18, SAFE_FEC states that upon receiving the FEC decoder insertion adaptation request, the source program should stop receiving incoming packets within t_safe; otherwise encoded FEC data will be received and stored in the data buffer, causing a failure since they cannot be played directly. In Formula A.19, LIVE_FEC specifies the real-time constraint of the adaptation for inserting FEC filters discussed above. Specifically, t_live is the criterion for ensuring real-time audio communication (150 milliseconds in this study). In Formula A.20, STABLE_FEC prevents the system from inserting/removing the FEC filter too frequently. This example illustrates the use of TA-LTL in specifying quantitative timing requirements for adaptive communication software.

  $SPEC_{FEC} = (NoFEC_{SPEC} \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} R_{COND})) \stackrel{\Omega}{\rightharpoonup} WithFEC_{SPEC}.$    (A.17)

  $SAFE_{FEC} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} \Diamond y.(R_{COND} \wedge (x + t_{safe} \ge y))) \stackrel{\Omega}{\rightharpoonup} WithFEC_{SPEC}.$    (A.18)
  $LIVE_{FEC} = \Box x.(A_{REQ} \stackrel{\Omega}{\rightharpoonup} (R_{COND} \stackrel{\Omega}{\rightharpoonup} \Diamond y.(WithFEC_{SPEC} \wedge (y \le x + t_{live})))).$    (A.19)

  $STABLE_{FEC} = x.((NoFEC_{SPEC} \wedge (\Diamond A_{REQ} \stackrel{\Omega}{\rightharpoonup} R_{COND})) \stackrel{\Omega}{\rightharpoonup} \Diamond y.(WithFEC_{SPEC} \wedge (y \ge x + t_{freq}))).$    (A.20)

Figure A.8: Specifying adaptation semantics with TA-LTL.

A.5 Related Work

Temporal logics have been extended in many different ways to support the specification of quantitative timing properties [3, 31, 48, 107, 113], including the TPTL [3] introduced in Section A.1 and FLEA [31] introduced in Section 3.6. Most of the discussion in the related work section for A-LTL (Section 3.6) applies to TA-LTL as well. As with A-LTL, none of the above real-time logics is designed specifically for specifying real-time adaptation properties, as the TA-LTL described in this chapter is. TA-LTL differs from all other real-time temporal logics in its convenience for specifying real-time constraints in adaptation semantics.

A.6 Discussion

In this section, we discuss the expressiveness, decidability, and model checking algorithms of TA-LTL.

A.6.1 Expressiveness

The TA-LTL introduced in this chapter can be considered a combination of A-LTL and TPTL. A-LTL is a non-real-time temporal logic, and thus it does not reference T, the time sequence, in a timed state sequence. Therefore, TA-LTL is strictly more expressive than A-LTL.

Theorem 14: TA-LTL is strictly more expressive than A-LTL.

Proof 14: We prove that TA-LTL is more expressive than A-LTL but not vice versa.

1. TA-LTL is at least as expressive as A-LTL: TA-LTL ≥ A-LTL. The proof of this statement is straightforward. Since TA-LTL is a superset of A-LTL, for any formula φ of A-LTL there is also a formula φ′ of exactly the same form in TA-LTL, and φ′ = φ. Therefore, we have TA-LTL ≥ A-LTL.

2. A-LTL is not as expressive as TA-LTL: A-LTL ≱ TA-LTL.
For the timed state sequences σ = (p, 0), (p, 1), (p, 2), ... and σ′ = (p, 0), (p, 5), (p, 10), ..., there is a formula in TA-LTL, $\phi = \bigcirc x.(p \wedge (x < 5))$, that distinguishes these two sequences. That is, σ ⊨ φ, while σ′ ⊭ φ. However, for any formula ψ in A-LTL, we have σ ⊨ ψ if and only if σ′ ⊨ ψ. Therefore, there is no formula in A-LTL that accepts exactly the same set of sequences as φ does. Thus we have A-LTL ≱ TA-LTL.

Therefore, TA-LTL > A-LTL.

A.6.2 Decision Procedure and Model Checking

For a given logic, a formula φ of the logic is satisfiable if and only if there exists a model σ such that σ ⊨ φ. A decision procedure for a logic determines whether each formula of the given logic is satisfiable. Tableaux-based decision procedures have been adopted by researchers to address the satisfiability problem of temporal logics, including LTL [88], the choppy logic [119], etc. In Chapter 7 we also proposed using a tableaux-based decision procedure for the satisfiability problem of A-LTL [146]. Alur et al. [3] extended the traditional tableaux-based approach with time-difference propositions to address the real-time properties in the TPTL satisfiability problem. The same technique can be used to extend the A-LTL decision procedure to TA-LTL.

Model checking techniques determine whether a program satisfies a given temporal logic formula by exploring the state space of the program. We adopt the timed state graph [3] as the formal model for real-time programs. A timed state graph is a tuple T = (L, μ, ν, L0, Ψ), where

- L is a finite set of locations;
- μ : L → 2^P is a labeling function that labels each location l ∈ L with a state P_l ⊆ P;
- ν is a time-difference function that labels each location with the time difference from its predecessor location;
- L0 ⊆ L is a set of initial locations;
- Ψ ⊆ L² is a set of transitions.
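Since ν stores only the time difference from each location's predecessor, the absolute timestamps τ of a computation are prefix sums of the differences along the chosen path. A small sketch of this bookkeeping (class and method names are illustrative assumptions, not the dissertation's tooling):

```java
// Illustrative helper: recover absolute timestamps of a timed state
// sequence from the per-location time differences nu along a path.
// The difference recorded at the initial location is ignored, since
// tau_{i+1} - tau_i = nu(l_{i+1}) only constrains successor locations.
public final class TimedStateGraph {
    public static long[] absoluteTimes(long[] timeDiffs, long startTime) {
        long[] times = new long[timeDiffs.length];
        if (timeDiffs.length == 0) return times;
        times[0] = startTime;
        for (int i = 1; i < timeDiffs.length; i++) {
            times[i] = times[i - 1] + timeDiffs[i];  // prefix sum of differences
        }
        return times;
    }
}
```

With the differences {0, 2, 3} and a start time of 10, the recovered sequence of timestamps is 10, 12, 15, against which freeze-quantified bounds such as y ≤ x + t can then be evaluated.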
A computation of the timed state graph is a timed state sequence ρ = (σ, τ) if and only if there is an infinite path l = l0, l1, ... of T such that σ_i = μ(l_i) and the time difference ν(l_{i+1}) equals (τ_{i+1} - τ_i).

The model checking of a timed state graph T against a TA-LTL formula φ proceeds as follows:

1. calculate the negation ¬φ of the formula φ;
2. construct the tableau T(¬φ) for ¬φ;
3. calculate the product P = T × T(¬φ);
4. determine whether the product P contains an accepting path α starting from a state that contains an initial state of T and ¬φ.

The program model T satisfies the formula φ if and only if there does not exist such a path α in P.

A.6.3 Summary

This chapter introduced the specification of adaptation timing properties in autonomic systems in terms of TA-LTL. After an adaptation temporal specification is constructed, it may serve as guidance for adaptation developers to clarify the intent of the adaptive program. It can also be used to check for consistency in the temporal logic specifications. By using run-time verification techniques, we may also automatically identify the safe states for an adaptation and insert adaptation logic at appropriate points in the program. We may even perform model checking to verify the correctness of the program model against the temporal logic specifications.

Appendix B

Supporting Material for MASD

B.1 Rapid Prototyping for Adaptive GSM-Oriented Protocol

We include the Java code for the rapid prototyping example introduced in Section 5.3.1. The prototype uses sockets to transmit packets from the sender to the receiver. Prototype.java instantiates the sender, receiver, and internet, and defines the relationships among them. Sender.java defines the functions to be invoked by the GSM sender adaptation net in Chapter 5 (Figure 5.10). Receiver.java defines the functions to be invoked by the GSM receiver adaptation net in Chapter 5 (Figure 5.11). Internet.java defines the functions to be invoked by the GSM lossy network net (environment model) in Chapter 5 (Figure 5.7).
Prototype.java

/********************************************
 * Instantiates the sender, receiver, and
 * internet, and defines the relationships
 * among them.
 ********************************************/
package gsm;

public class Prototype {
    static public Sender sender;
    static public Receiver receiver;
    static public Internet inet;

    static public void main(String[] args){
        inet = new Internet();
        receiver = new Receiver(inet);
        inet.initReceiver();
        receiver.receive();
    }

    // initialize sender and receiver
    static public void init(){
        inet = new Internet();
        sender = new Sender(inet);
        receiver = new Receiver(inet);
    }

    // initialize sender object
    static public void initSender(){
        inet = new Internet();
        sender = new Sender(inet);
        inet.initSender();
    }

    // initialize receiver object
    static public void initReceiver(){
        inet = new Internet();
        receiver = new Receiver(inet);
        inet.initReceiver();
    }

    // initialize the internet object
    static public void initInet(){
        inet = new Internet();
    }
}

Sender.java

/********************************************
 * The functions implemented in this class
 * are to be invoked by the GSM sender
 * adaptation net in Chapter 5 (Figure 5.10).
 ********************************************/
package gsm;

import java.util.*;

public class Sender {
    final int SIZE = 10;
    int index = 1;
    byte[] dataInput;
    LinkedList gstata = new LinkedList();
    ArrayList gsmPacket = new ArrayList();
    Internet inet;
    Random ran = new Random();
    int simulateInput = 1;
    int para;
    SenderNet net = new SenderNet();

    static public void main(String[] args){
        Prototype.initSender();
        Prototype.sender.start();
    }

    static public void test(){
        SenderNet net = new SenderNet();
        net.init();
        net.readdata();
        net.S_encode();
        net.S_send();
    }

    public void start(){
        para = 2;
        while(true){
            readData();
            packetGen(1, para);
            if(ran.nextFloat() > 0.95)
                break;
            sendData();
        }
        adapt();
        para = 3;
        sendData();
        while(true){
            readData();
            packetGen(1, para);
            sendData();
        }
    }

    public Sender(Internet internet){
        net.init();
        inet = internet;
        gstata.addLast(null);
        gstata.addLast(null);
    }

    public void readData(){
        net.readdata();
        simulateInput++;
        gstata.addLast(GSM.encode(dataInput));
        try{
            Thread.sleep(300);
        }catch(Exception e){
            System.out.println("exception during sleep");
        }
    }

    public void packetGen(int i, int j){
        if(para == 2){
            net.S_encode();
        } else {
            net.T_encode();
        }
        gsmPacket = new ArrayList();
        gsmPacket.add(new Integer(index));
        gsmPacket.add(dataInput);
        // if j == 2 then we should add 1, 0
        for (int k = 0; k < j - 1; k++){
            gsmPacket.add(gstata.get(k));
        }
        index++;
    }

    public void sendData(){
        if(ran.nextFloat() > 0.6) {
            if(para == 2) {
                net.S_lose();
            } else {
                net.T_lose();
            }
            System.out.println("losing " + (index - 1));
        } else {
            if(para == 2) {
                net.S_send();
            } else {
                net.T_send();
            }
            inet.send(gsmPacket);
        }
    }

    public void adapt(){
        gstata.addFirst(gsmPacket.get(1));
        gsmPacket.add(null);
        net.adapt();
    }
}

Receiver.java

/********************************************
 * The functions implemented in this class
 * are to be invoked by the GSM receiver
 * adaptation net in Chapter 5 (Figure 5.11).
 ********************************************/
package gsm;

import java.util.*;

public class Receiver {
    ArrayList receiveDataBuffer = new ArrayList(16);
    ArrayList gsmPacket = null;
    int[] index = new int[16];
    Internet inet;

    static public void main(String[] args){
        int n;
        Sample.initReceiver();
        n = 2;
        Sample.receiver.receive();
        while (true) {
            while(true) {
                Sample.receiver.decode(0);
                Sample.receiver.decode(1);
                Sample.receiver.putData();
                Sample.receiver.decodeLast(2);
                if (Sample.receiver.getPacketIndex() == 0)
                    break;
            }
            Sample.receiver.receive();
            if(Sample.receiver.packetLength() != 4)
                break;
        }
        if (Sample.receiver.packetLength() == 5)
            Sample.receiver.adapt(3);
        n = 3;
        while (true) {
            while(true) {
                Sample.receiver.decode(0);
                Sample.receiver.decode(1);
                Sample.receiver.decode(2);
                Sample.receiver.putData();
                Sample.receiver.decodeLast(3);
                if (Sample.receiver.getPacketIndex() == 0)
                    break;
            }
            Sample.receiver.receive();
            if(Sample.receiver.packetLength() != 5)
                break;
        }
    }

    public Receiver(Internet internet){
        for(int i = 0; i < 16; i++){
            index[i] = i - 2;
            receiveDataBuffer.add(null);
        }
        inet = internet;
    }

    public void receive(){
        gsmPacket = inet.receive();
    }

    public int getPacketIndex(){
        if (gsmPacket == null)
            return 0;
        else
            return ((Integer) gsmPacket.get(0)).intValue();
    }

    public void putData(){
        byte[] data = (byte[]) receiveDataBuffer.get(0);
        if(data == null)
            System.out.println("losing " + index[0] + " " + gsmPacket.size());
        else
            System.out.println(new String(data) + " " + gsmPacket.size());
    }

    public void decode(int j){
        int k, i;
        byte[] s, r;
        k = index[j + 1];
        s = (byte[]) receiveDataBuffer.get(j + 1);
        if (gsmPacket == null) {
            r = s;
        } else {
            i = ((Integer) gsmPacket.get(0)).intValue();
            if(k > i)
                r = s;
            else
                r = (byte[]) GSM.decode((byte[]) gsmPacket.get(i - k + 1));
        }
        receiveDataBuffer.set(j, r);
        index[j] = k;
    }

    public void decodeLast(int j){
        int k, i;
        byte[] s, r;
        k = index[j] + 1;
        s = (byte[]) receiveDataBuffer.get(j + 1);
        if (gsmPacket == null) {
            r = s;
            i = 0;
        } else {
            i = ((Integer) gsmPacket.get(0)).intValue();
            if(i > k + 2 || k > i)
                r = s;
            else
                r = (byte[]) GSM.decode((byte[]) gsmPacket.get(i - k + 1));
        }
        receiveDataBuffer.set(j, r);
        index[j] = k;
        if(i <= k)
            gsmPacket = null;
    }

    public int packetLength(){
        if (gsmPacket == null)
            return 0;
        else
            return gsmPacket.size();
    }

    public void adapt(int par){
        if (par == 3){
            for(int i = par; i > 0; i--)
                index[i] = index[i - 1];
            index[0] = index[1] - 1;
            receiveDataBuffer.add(0, null);
        }
    }
}

Internet.java

/********************************************
 * The functions implemented in this class
 * are to be invoked by the lossy network
 * net in Chapter 5 (Figure 5.7).
 ********************************************/
package gsm;

import java.util.*;
import java.net.*;
import java.io.*;

public class Internet {
    LinkedList buffer = new LinkedList();
    public Socket senderSocket;
    public Socket receiverSocket;
    public ServerSocket receiverServerSocket;
    public ObjectOutputStream senderOut = null;
    public ObjectInputStream senderIn = null;
    public ObjectOutputStream receiverOut = null;
    public ObjectInputStream receiverIn = null;

    public synchronized void send(ArrayList gsmPacket){
        try{
            senderOut.writeObject(gsmPacket);
        }catch(Exception e){
            System.out.println("exception occurs when write");
        }
    }

    public synchronized ArrayList receive(){
        ArrayList ret = null;
        try{
            ret = (ArrayList) receiverIn.readObject();
        } catch(Exception e){
            System.out.println("exception occurs when receive");
        }
        return ret;
    }

    public synchronized void dataLoss(){
    }

    public void initSender(){
        try{
            senderSocket = new Socket("localhost", 1234);
            senderOut = new ObjectOutputStream(senderSocket.getOutputStream());
            senderIn = new ObjectInputStream(senderSocket.getInputStream());
        }catch(Exception e){
            System.out.println("exception happens when initiating sender");
        }
    }

    public void initReceiver(){
        try{
            receiverServerSocket = new ServerSocket(1234);
        }catch(Exception e){
            System.out.println("exception creating receiver server socket");
        }
        try{
            receiverSocket = receiverServerSocket.accept();
        }catch(Exception e){
            System.out.println("exception when accepting a connection");
        }
        try{
            receiverOut = new ObjectOutputStream(receiverSocket.getOutputStream());
            receiverIn = new ObjectInputStream(receiverSocket.getInputStream());
        }catch(Exception e){
            System.out.println("exception creating output input streams");
        }
    }
}

B.2 Stub Files for Model-based Testing

In this section, we include the Renew stub files used to generate handler methods for the transitions in the Petri net adaptation models introduced in Section 5.3.2.
The file ReceiverNet.stub defines handler functions for the transitions in the receiver adaptation net in Figure 5.11. The file SenderNet.stub defines handler functions for the transitions in the sender adaptation net in Figure 5.10.

ReceiverNet.stub

/*********************************************************
 * The ReceiverNet stub file defines handler functions
 * for the transitions in the receiver adaptation net
 * in Figure 5.11. The handler functions will be
 * generated in ReceiverNet.java.
 *********************************************************/
package gsm;

class ReceiverNet for net gsmreceiver {
    void init() { this:init(); }
    void writedata() { this:writedata(); }
    void S_decode() { this:S_decode(); }
    void S_receive() { this:S_receive(); }
    void adapt() { this:adapt(); }
    void T_decode() { this:T_decode(); }
    void T_receive() { this:T_receive(); }
}

SenderNet.stub

/*********************************************************
 * The SenderNet stub file defines handler functions
 * for the transitions in the sender adaptation net
 * in Figure 5.10. The handler functions will be
 * generated in SenderNet.java.
 *********************************************************/
package gsm;

class SenderNet for net gsmsender {
    void init() { this:init(); }
    void readdata() { this:readdata(); }
    void S_encode() { this:S_encode(); }
    void S_lose() { this:S_lose(); }
    void S_send() { this:S_send(); }
    void adapt() { this:adapt(); }
    void T_lose() { this:T_lose(); }
    void T_encode() { this:T_encode(); }
    void T_send() { this:T_send(); }
}

Appendix C

Obtaining Statechart Diagrams from Java Code

In this appendix, we include the metamodels and the translation rules for the Java-to-UML translation used in Chapter 6. Figure C.1 and Figure C.2, respectively, show the metamodels for the subset of Java legacy programs and the subset of UML Statechart diagrams that we currently support. The metamodel-based translation rules are summarized in Table C.1. We now describe
the translation of each feature in Java programs.

- Java legacy program: we translate a Java legacy program into a UML Statechart model.
- Java class: each Java class is represented as a state machine describing its behavior. Note that for each instance of the class, we need to create a separate instance of the state machine.
- Constructor method: the constructor method of a class is modeled as a transition from the initial state of the state machine to a state labeled new object. The new object state represents the state in which no method of the class is being invoked by others.

Java Program Feature                  | UML Statechart Diagram Feature
--------------------------------------|------------------------------------------------------------
Java legacy program                   | UML Statechart model
Java class                            | State machine
Constructor method                    | Transition from the initial state to a state labeled "new object"
Method                                | Submachine
Attributes                            | Attribute
Condition                             | Condition
Simple block                          | Transition with local action
If-else block                         | A combination of states and branching transitions
Loop block                            | A combination of states and transitions with a transition going back to the entry state
Sequential block                      | A sequence of states and transitions
Statement                             | Local action
Entry point of simple block           | Incoming state of the transition for the block
Exit point of simple block            | Outgoing state of the transition for the block
Entry point of method call with object Obj and method M | A state with an outgoing transition T, where T includes a send event action with target Obj and event M
Exit point of method call with method M | A state with an outgoing transition triggered by event M_RETURN
Entry point of a synchronized block   | A substatemachine that waits for a monitor and sets the monitor
Exit point of a synchronized block    | A state with an outgoing transition action releasing the monitor

Table C.1: Rules for the metamodel-based Java to UML translation.

Figure C.1: Metamodel for subset of Java legacy programs.

Figure C.2: Metamodel for subset of UML Statechart diagrams.

- Method definition: a method M definition of a class corresponds to a submachine connected to the new object state with an incoming and an outgoing transition from and to the new object state, respectively. The incoming transition is labeled with an event M(). The outgoing transition notifies the invoking state machine with an M_done message.
[Figure: (a) Java feature and (b) Statechart feature for a method definition]

• Method invocation: The invocation of a method M of an object Obj is modeled as a transition that sends a message M to the object Obj.

• Method return: The return from a method invocation M is modeled as a transition labeled with an event M_return.

[Figure: (a) Java feature and (b) Statechart feature for method invocation and return]

• Attributes: The attributes of a Java class correspond to the attributes of the state machine for the class.

• Conditions: A condition in Java is a logic expression evaluated over attributes. Conditions in Java correspond to conditions in the Statechart diagram.

• Simple block: A simple block is a Java statement that only accesses local attributes. A simple block is modeled as a transition with local actions, i.e., actions that only access local attributes.

• If-else block: An if-else block corresponds to a branching structure in the state machine, where the two branches correspond to the if and else blocks, respectively.

[Figure: (a) Java feature and (b) Statechart feature for a loop block]

• Sequential block: A sequential block contains a sequence of blocks. It is modeled as a sequence of state machines connected by transitions.

[Figure: (a) Java feature and (b) Statechart feature for a sequential block]

• Synchronized block: At the entry point of a synchronized block, the program must acquire the monitor for the object. At the exit point of a synchronized
block, the program must release the monitor. These are modeled as transitions with actions setting and resetting the value of the attribute lock.

[Figure: (a) Java feature and (b) Statechart feature for a synchronized block]
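The translation rules above can be seen together in one small class. The sketch below is hypothetical (the Counter class and its member names are illustrative, not taken from the dissertation's case studies); the comments indicate which Statechart element each Java construct would map to under the rules of Table C.1.

```java
// Hypothetical example class; each comment names the Statechart element
// produced for that construct by the Table C.1 translation rules.
public class Counter {                      // Java class -> one state machine per instance
    private int value;                      // attribute -> state-machine attribute
    private final Object monitor = new Object();

    public Counter(int start) {             // constructor -> transition from the initial
        value = start;                      // state to the "new object" state
    }

    public int bump(int times) {            // method -> submachine entered via a bump() event
        for (int i = 0; i < times; i++) {   // loop block -> states and transitions with a
            synchronized (monitor) {        // transition back to the entry state; synchronized
                if (value < 100) {          // entry/exit -> transitions that set and release
                    value = value + 1;      // the lock attribute; if-else block -> branching
                } else {                    // transitions guarded by the condition
                    value = 0;              // [value < 100] over local attributes
                }
            }
        }
        return value;                       // method return -> bump_return event to the caller
    }

    public static void main(String[] args) {
        System.out.println(new Counter(98).bump(3)); // prints 0 (wraps at 100)
    }
}
```

Under the translation, this class would yield one state machine per Counter instance, with bump() entered as a submachine and the monitor acquisition modeled by a guard on the lock attribute such as [lock == 0 || lock == pid].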