NONSMOOTH OPTIMAL CONTROL FOR COUPLED SWEEPING PROCESSES WITH JOINT ENDPOINT CONSTRAINTS UNDER MINIMAL ASSUMPTIONS

By

Samara Chamoun

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Mathematics—Doctor of Philosophy

2025

ABSTRACT

A sweeping process typically refers to a dynamical system represented by a differential inclusion in which the set-valued map is the normal cone to a “nicely” moving closed set called the sweeping set. Although the sweeping process was originally developed for elastoplasticity applications, it has been widely recognized for its application in many other fields, including hysteresis, ferromagnetism, electric circuits, phase transitions, traffic equilibrium, economics, population motion in confined spaces, and other areas of applied sciences and operations research. Due to the nonstandard differential inclusions involved—with unbounded and discontinuous right-hand sides produced by the normal cone—classical results from the literature on differential inclusions are not applicable. In this dissertation, the study of nonsmooth optimal control problems (P) involving a controlled sweeping process with three main characteristics is launched. First, the sweeping sets are nonsmooth, time-dependent, and uniformly prox-regular. Second, the sweeping process is coupled with a controlled differential equation. Third, a joint-state endpoints constraint set S is present. This general model incorporates various significant controlled submodels, such as a class of second-order sweeping processes and coupled evolution variational inequalities. A full form of the nonsmooth Pontryagin maximum principle for strong local minimizers in (P) is derived for bounded or unbounded moving sweeping sets satisfying local constraint qualifications (CQ) without any additional restriction.
The existence and uniqueness of a Lipschitz solution for the Cauchy problem of our dynamic is established, and the existence of an optimal solution for (P) is obtained. Two of the novelties in achieving the first goal are (i) the construction of a problem over truncated sweeping sets and a truncated joint endpoints constraint set preserving the same strong local minimizer of (P) while automatically satisfying (CQ), and (ii) the complete redesign of the exponential-penalty approximation technique for problems with moving sweeping sets that does not require any assumption on the sets, their corners, or the gradients of their generators. The utility of the optimality conditions is illustrated with an example.

Copyright by
SAMARA CHAMOUN
2025

To my mom, dad, Antonio and Chase.

ACKNOWLEDGEMENTS

First and foremost, I would like to express my deepest gratitude to my advisor, Dr. Vera Zeidan. Thank you for your generosity with time and resources, and for your willingness to spend countless hours working through mathematics with me. I have learned so much from your diligence, hard work, creativity, and thoroughness. You have helped me learn to think outside the box and to trust my mathematical instincts. One phrase of yours will always stay with me: “Everything has a solution”—in mathematics and in life. I hope I’ve made you proud as your first student.

I am also grateful to my committee members, both past and present—Dr. Jun Kitagawa, Dr. Baisheng Yan, Dr. Gábor Francsics, Dr. Ignacio Uriarte-Tuero and Dr. Matthew Hirn—for their support throughout this journey.

Thank you to Dr. Chadi Nour, who introduced me to the field of control theory and nonsmooth analysis, made the connection with Dr. Vera, and encouraged me to apply to the U.S.—even though it wasn’t the standard path. I owe much of where I am today to your guidance and encouragement.

Thank you to Dr. Boris Mordukhovich for his encouraging words and for writing letters of recommendation on my behalf.
I am also deeply grateful to all my math teachers in middle school and high school, as well as to the professors at the Lebanese University, the Lebanese American University, and Michigan State University, who ignited my passion for mathematics and shaped my journey as a mathematical scholar. Your guidance has left a lasting impact. I would especially like to thank Dr. Charbel Klaiany, Dr. Rony Touma, Dr. Nader El Khatib, Dr. Carole El Bacha, and Dr. Leila Issa for their support and inspiration along the way.

Graduate school was not only a time of learning and doing mathematics, but also a time when I truly discovered my passion for teaching. I grew into the role of an educator with the help of incredible mentors and leaders who supported me, challenged me, and believed in me.

Andy Krause and Tsveta Sendova, my experience in graduate school would not have been the same without your professional and personal support. What you are doing in the mathematics department is nothing short of amazing. Through your training and the personalized care you offer, you are cultivating a cohort of outstanding and competitive educators. Thank you for all the experiences and opportunities you have given me, for the advice, the resources, the conversations, and for always rooting for me.

Rachael Lund, thank you for being an incredible mentor and leader. I truly enjoyed working with you on the support program and the FAST Fellowship—these were among the most rewarding experiences of my graduate journey and deeply shaped my teaching philosophy. Your kind and compassionate mentorship is something I admire and hope to emulate.

Stefanie Baier, thank you for your fierce and unwavering advocacy for graduate students, and for supporting both my professional and personal aspirations. I am so grateful for the opportunity to be part of the GREAT Office—a space that connected me with an incredible community of educators, both inside and outside MSU.
Being part of this office—and the engaging conversations, fun outings, and meaningful friendships it offered—has been one of the highlights of my graduate journey. Thank you to the AMAZING advisory group—Hima Rawal, Gloria Ashaolu, Sewwandi Abeywardana, Arya Gupta, Tianyi (Titi) Kou-Herrema, Seth Hunt, Saviour Kitcher, and Ellen Searle—for your support and friendships. And thank you to the GREAT Fellows—Qi Huang, Amie Musselman, Tara Mesyn and Tianyi (Titi) Kou-Herrema—for sharing this journey with me.

Thank you to the FAST Fellowship community for giving me the opportunity to study the impact of the support program on students’ attitudes toward mathematics. I truly enjoyed this work. I am grateful to Dr. Rique Campa, Sevan Chanakian, and the FAST committee and cohort for their support, guidance, and leadership throughout the process.

Thank you to Dr. Jenny Green for being an incredible mentor—always encouraging and supportive of our ideas. It was in your class that I first engaged with the concept of compassionate teaching, an idea that has since taken me so many places. I will continue to share my ideas with you because I know they’ll be met with support, curiosity, and care.

Thank you to Dr. Marybeth Heeder for being a powerful example of a kind, compassionate mentor, leader, and researcher in academia. I have deeply appreciated our conversations, your insights, and the resources you’ve so generously shared with me.

Thank you to Dr. Filomena Nunes for teaching the Women in STEM course, which provided us with the tools and language to advocate for ourselves in STEM fields. Your course offered not only valuable resources, but also a space of encouragement and empowerment.

Thank you to Dr. Amy Martin for connecting me with the Campus Student Success Group. I truly enjoyed engaging with this community and appreciated the opportunity to share resources, ideas, and conversations.
Most importantly, thank you to my students—you have given me some of the most joyful moments of graduate school. Your curiosity, hard work, engagement, kind words, smiles, and vulnerability will always be an inspiration to me. I vividly remember many days when I walked into the classroom feeling tired, and left filled with energy and gratitude. You have my deepest thanks.

To all medical professionals, physicians, mental, spiritual, and physical health providers, and emergency service workers—thank you. Your care, presence, and dedication have made a real difference. I would not be in the same physical and mental headspace without your help. A special thank you to Corey Decker, Elizabeth Malsheske, Dr. Jabir Sawaya, and Brittany Jurek.

Thank you to Dana Carley for always being in my corner. You have stood by me through the toughest and happiest moments of my life. You’ve seen my vulnerability and insecurities, held my hand through hard times, and helped me come out the other side a better person. I am forever grateful that our paths crossed.

Thank you to Fr. Mike, who helped me reconnect with my spirituality in the U.S. and encouraged me to lean on both God and the arts in moments of uncertainty. Because of you, I started painting.

Thank you to all the unseen helpers—the MSU staff whose work so often goes unrecognized but is deeply appreciated. To the Mathematics Department staff, thank you for your constant support. A special thank you to Taylor Alvarado for always being on top of things and for helping us navigate department requirements on our way to graduation.

Thank you to the Office for International Students and Scholars (OISS), and especially to Ismail Adawe, for your support with the OPT process and other visa-related matters. You’ve been the best OISS advisor I could have asked for—thank you for your guidance and care.
Thank you to Deanne Arking, who supported me during my time as Vice President for International Affairs with the Council of Graduate Students.

I couldn’t have made it through this PhD without the love and support of my friends and family. To the friends I met in the U.S., you made living away from home not only more bearable, but also more joyful and fun. Thank you for the coffee dates, study sessions, sleepovers, for helping me through tough times, and for celebrating the big moments with me. A heartfelt thank you to Marena Haidar, Charif Yassine, Elliott Haddad, Zuebida (Zee), Jamie Kimble, Nicole Hayes, Valeri Jean-Pierre, Jinting Liang, Shikha Bhutani, Astrid Olave-Herrera, Eloy Moreno Nadales, Rob McConkey, Chamila Malagoda Gamage, Lubashan Pathirana Karunarathna, Hitesh Gakhar, Reshma Menon, Brady Tyburski, Sofia Abreu, Saul Barbosa, Anthony Dickson, Hope Lewis, Sewwandi Abeywardana, Gloria Ashaolu, Arya Gupta, Sevan Chanakian, Ken Bundy, and Hannah Jeffery.

Thank you to Ayesha Farheen for the sleepovers, for being there during the hard times, for opening your home to me, and for being such an incredibly kind and thoughtful human being.

Thank you to Ayesha Bundy for your friendship, for making me feel at home in your space, for the study sessions, the delicious meals, the sense of community, and your constant check-ins and support.

Thank you to Chloe Lewis for being one of the most generous people I know—for sharing resources, helping me navigate the job market, and for all our rich and inspiring teaching conversations.

Thank you to Danika Van Niel for the fun trips and hangouts, for visiting Chicago together for the first time, for traveling to Lebanon and meeting my family, and for being such a supportive and encouraging friend.

Thank you to Hima Rawal for being a trusted friend. I’ve deeply appreciated the way we’ve shared our vulnerabilities with one another, without judgment and with so much compassion.
Thank you to Rahaf Ahmad for being a safe space for voicing my anxiety. You were one of the first close friends I made in the U.S., you always show up, and I know I can always count on you.

Thank you to my childhood friend, Perla Alam. We will always be there for each other. Thank you for your constant presence—always being there for me, rooting for me, and walking through life with me. You have been a steady and loving part of my world since childhood.

Thank you to Monica El Khoury and Marina El Khoury for 13 years of friendship. I’m lucky to have you—celebrating my happiness and feeling my sadness as if it were your own. My life wouldn’t have been the same without you.

Thank you to my extended family: my grandma, uncles, aunts, cousins, Chase’s family, and all the other family members who have supported me along the way. You have given me so much love throughout the years, and I feel incredibly lucky to have so many people in my corner.

Finally, to my family: Mom, Dad, Antonio, and Chase. Simply put, I wouldn’t be where I am without you. I have the best support system in you. Mom, thank you for your unconditional love—for asking for nothing but for me to be okay. I have been able to achieve everything I have because of you. You never asked me for anything in return—just to be well—and that kind of love is everything. Dad, thank you for talking things through with me, for being excited about my successes and wanting to celebrate them loudly, and for being there for me during the lows. To my brother Antonio, thank you for being there through the scariest and most anxious moments of my life. You calm me down in a way no one else can. To my fiancé Chase, you lived it all with me—the scary, the bad, the happy, the sad, the anger, the joy, the boredom, and the love. I am the luckiest person to have such a smart, sweet, kind, and supportive partner walking through life by my side. Meeting you as a graduate student will always be the most fortunate part of my PhD.
Thank you to the four of you for being my safe place and my home.

TABLE OF CONTENTS

LIST OF ABBREVIATIONS
CHAPTER 1  INTRODUCTION
  1.1 Intersectionality of different fields
  1.2 Sweeping process
  1.3 Results and outline of this dissertation
CHAPTER 2  PRELIMINARIES
  2.1 Basic notions and concepts
  2.2 Nonsmooth analysis
  2.3 Differential equations, set-valued analysis, and control theory
  2.4 Functional analysis
CHAPTER 3  STUDY OF A COUPLED SWEEPING PROCESS DYNAMIC (D)
  3.1 Study of the dynamic (D) under local assumptions
  3.2 Development and study of a new truncated dynamic (D̄) under local assumptions
  3.3 Study of the dynamic (D) under global assumptions
CHAPTER 4  OPTIMAL CONTROL PROBLEM (P) OVER A COUPLED SWEEPING PROCESS DYNAMIC (D)
  4.1 Existence of optimal solution for (P) under global assumptions
  4.2 Pontryagin maximum principle for (P) under local assumptions
CHAPTER 5  VALIDATING THEORETICAL RESULTS USING AN EXAMPLE
CHAPTER 6  CONCLUSION AND POSSIBLE FUTURE DIRECTIONS
  6.1 Conclusion
  6.2 Future directions
BIBLIOGRAPHY
APPENDIX  APPENDIX TO CHAPTERS 3-4

LIST OF ABBREVIATIONS

∥ · ∥             Euclidean norm
⟨·, ·⟩            Usual inner product
Rn                n-dimensional Euclidean space
R+                Set of positive real numbers
Ba(x)             Open ball centered at x and of radius a
B̄a(x)             Closed ball centered at x and of radius a
B                 Open unit ball
B̄                 Closed unit ball
Mm×n[a, b]        Set of m × n-matrix functions on [a, b]
Ir×r              The identity matrix in Mr×r
int S             Interior of a set S ⊂ Rn
bdry S            Boundary of a set S ⊂ Rn
cl S              Closure of a set S ⊂ Rn
conv S            Convex hull of a set S ⊂ Rn
Sc                Complement of a set S ⊂ Rn
Gr S(·)           Graph of the set-valued map S(·)
σ(·, S)           The support function of a set S ⊂ Rn
dS(x)             The distance from x to a set S ⊂ Rn
projS(x)          The closest point or projection of x onto S ⊂ Rn
N^P_S(s)          The proximal normal cone to S at s
N^L_S(s)          The limiting normal cone to S at s
N_S(s)            The Clarke normal cone to S at s
dom f             Effective domain of f
epi f             Epigraph of f
Gr f              Graph of f
∇f                Gradient of f
∂P f(x)           Proximal subdifferential of f at x
∂Lf(x)            Limiting subdifferential of f at x
∂f(x)             Clarke generalized gradient of f : Rn → R ∪ {∞} at x
∂2f(x)            Clarke generalized Hessian of f at x
∂g(x)             Clarke generalized Jacobian of g : Rn → Rn at x
C([a, b]; Rn)     Space of continuous functions from [a, b] to Rn
AC([a, b]; Rn)    Space of absolutely continuous functions from [a, b] to Rn
BV([a, b]; Rn)    Space of bounded variation functions from [a, b] to Rn
V^b_a(f)          Total variation of f
Lp([a, b]; Rn)    Lebesgue space of p-integrable functions from [a, b] to Rn
W1,p([a, b]; Rn)  Sobolev space of continuous functions f whose derivative ḟ ∈ Lp
M(S)              The set of Radon measures on S
M+(S)             The set of positive Radon measures on S
M1+(S)            The set of probability measures on S
C∗([a, b]; Rn)    The dual space of C([a, b]; Rn) equipped with the supremum norm
∥ · ∥T.V          The induced norm on C∗([a, b]; Rn)
C⊕(a, b)          The
set of elements in C∗([a, b]; R) taking non-negative values on nonnegative-valued functions in C([a, b]; R)

CHAPTER 1
INTRODUCTION

1.1 Intersectionality of different fields

The research in this thesis centers around different fields of mathematics: control theory, optimization, dynamical systems, nonsmooth analysis, set-valued analysis, and functional analysis.

1.1.1 Control theory

If you are reading this thesis as a non-mathematician or as a mathematician from a different field, this section provides everything you need to know about control theory, including its foundational concepts and its applications in everyday life. A friend shared a fascinating map of mathematics with me (see Figure 1.1), and I invite you to take a moment to explore it and see where control theory fits within the mathematical landscape.

Figure 1.1 Placement of key areas mentioned above on the map of mathematics

Control theory is one of the most interdisciplinary areas of research, serving as a critical intersection between mathematics and engineering. It is a subfield of mathematics that focuses on using feedback to influence the behavior of dynamical systems—whether physical, biological, or otherwise—to achieve specific goals. Before emerging as a distinct field in the late 1950s and early 1960s, control theory was deeply connected to other areas of mathematics, such as the calculus of variations and differential equations. Early research often adapted classical theories and techniques from these fields to address control problems, laying the groundwork for the development of modern control theory. I discovered a map of control theory itself, which I invite you to explore as it highlights the different structures and connections within this field (see Figure 1.2).

Figure 1.2 Map of control theory

This field can be broadly divided into two branches: linear control systems and nonlinear control systems.
While linear control systems are foundational and often easier to analyze, all real-world control systems exhibit nonlinear behavior, making nonlinear control systems more applicable to practical scenarios.

Control system theory can contribute to (this list is adapted from educational materials presented by Brian Douglas on his YouTube channel):

• Developing mathematical models to describe system dynamics.
• Simulating and predicting system behavior under various scenarios.
• Analyzing and understanding dynamic interactions within complex systems.
• Filtering and rejecting noise to enhance signal clarity and system accuracy.
• Selecting and designing appropriate hardware to implement control strategies.
• Testing and validating system performance in unpredictable environments.
• Gaining foundational insights into system behavior and functionality.

A controller (see Figure 1.3) operates through different types of feedback loops. As discussed in an article on the difference between open-loop and closed-loop systems (https://www.ntchip.com/electronics-news/difference-between-open-loop-and-closed-loop), an open-loop controller, also known as feedforward, does not use any information about the current state or output of the system to influence its control actions. In contrast, a closed-loop controller, also known as a feedback controller, incorporates feedback into its decision-making process. Closed-loop controllers can be further categorized based on the type of feedback they use: state feedback controllers, which rely on feedback from the internal state of the system, and output feedback controllers, which utilize feedback from the system’s output.

Figure 1.3 Open loop system versus closed loop system

Open-loop control systems are typically used for simple processes with well-defined input-output relationships. For instance, consider a dishwasher. The objective of the dishwasher (the plant) is to clean dishes (the output).
Once the user sets the wash time (the input), the dishwasher will operate for the specified duration, regardless of the actual cleanliness of the dishes. If the dishes were already clean at the start, the dishwasher would still run for the full prescribed time. Similarly, a dryer operates on the same principle. The user sets the drying time (the input), which determines how long the dryer runs. This duration is fixed and unaffected by whether the clothes are already dry.

On the other hand, a closed-loop control system dynamically adjusts its operation based on feedback from its output. The system continuously monitors the output, compares it to the desired outcome, and adjusts the input accordingly to minimize any discrepancies. For example, consider a dryer equipped with a sensor that measures the dryness of the clothes. This sensor provides feedback that is compared to a reference signal representing the desired dryness level (set by the manufacturer or the user). The difference between the measured and desired levels generates an error term, which is sent to a controller. The controller uses this feedback to determine when to shut off the dryer, ensuring the clothes are dried to the desired level (see Figure 1.4).

Figure 1.4 Closed loop system in a dryer

Control theory finds extensive applications across diverse fields, including biology (e.g., optimal vaccination strategies), physics (e.g., spacecraft control), engineering (e.g., robotics), economics (e.g., economic growth models), medicine (e.g., drug target identification in cancer research), and finance (e.g., risk management).

It is important to note that a mathematical solution to a control problem may not always exist. In the late 1950s, rigorous conditions for existence were established, with controllability being a key criterion, ensuring that some form of control is possible. Optimal control focuses on finding a control law for a given system that satisfies a specified optimality criterion. It involves a cost functional, which depends on the state and control variables. An optimal control solution consists of differential equations describing the evolution of the control variables that minimize the cost functional. Such solutions can be derived using Pontryagin’s Maximum Principle or by solving the Hamilton-Jacobi-Bellman equation.

1.1.2 Nonsmooth analysis

Nonsmooth analysis, which can be considered a subdomain of nonlinear analysis, refers to differential analysis without differentiability. It concerns the local description of nondifferentiable functions and of sets lacking smooth boundaries, in terms of generalizations of the classical concepts of derivatives, normals, and tangents. Although this subject has traditional roots, it is only over the last few decades that it has developed rapidly. The reason behind this progress is the acknowledgment of the importance of the nondifferentiable setting, its ubiquity, and its direct relation to unusual behaviors such as chaos and catastrophes. It can be viewed, within differential (functional) analysis, as a topic in itself. However, it has also come to play a major role in several applications, such as optimization and control theory. Alongside F. Clarke and R. T. Rockafellar, many others, such as J. Borwein, A. D. Ioffe, B. Mordukhovich, and R. B. Vinter, have contributed to its development. The need for nonsmooth analysis in control theory is connected to finding proofs of necessary conditions for optimal control, in particular through the Pontryagin Maximum Principle. In general, nonsmooth analysis intervenes when considering nonlinear problems (studying the sensitivity of the problems, deriving necessary conditions, or applying sufficient conditions).

1.2 Sweeping process

As mentioned above, optimal control theory involves minimizing an objective function subject to a given control system. The specific system I focus on in this thesis is known as the sweeping process.
My work centers on studying the dynamics of the sweeping process and addressing optimal control problems governed by such systems. For readers unfamiliar with the sweeping process, this section provides a brief introduction to its background.

1.2.1 Definition, interpretation, and applications

J. J. Moreau introduced the sweeping process as a differential inclusion in which the set-valued map is the normal cone to a nicely moving non-empty closed set C(t), called the sweeping set (see [49, 50, 51]). The simplest form of the sweeping process is given by

    ẋ(t) ∈ −N_{C(t)}(x(t)),  a.e. t ∈ [0, T].    (1.1)

When the set C(t) is convex, N_{C(t)} corresponds to the normal cone of convex analysis. However, when C(t) is non-convex, it is taken to be uniformly prox-regular, in which case N_{C(t)} is the Clarke normal cone. When we add a perturbation or external force f to (1.1), we call the dynamic a perturbed sweeping process, and when f depends on a control u, we call it a perturbed controlled sweeping process.

To understand what the word “sweeping” means, we can think of a large ring moving while containing a small ball inside. The ring starts moving at t = 0, and the movement of the ball depends on how the ring interacts with it. If the ball is not hit by the ring, it remains stationary. However, if the ring hits the ball, the ball is “swept” towards the inside of the ring. The main idea here is that the velocity of the ball must point inwards so that the ball does not escape the ring’s bounds (see Figure 1.5).

Figure 1.5 Sweeping process interpretation

Sweeping processes have various applications in different fields, including elastoplasticity, hysteresis, ferromagnetism, electric circuits, phase transitions, and traffic equilibrium (see, for example, [1, 4, 7, 45, 67]). In the past decade, interest in sweeping processes has grown due to their significant role in emerging applications such as mobile robot models [27] and pedestrian traffic flow models [27].
In these contexts, the primary goal is to efficiently control the state of events by optimizing a specific objective function over the controlled sweeping process. One of the most fascinating applications of the sweeping process is the crowd motion model for emergency evacuation [14, 10]. In case of an emergency evacuation, we want to find the most effective way to leave the room. While we would prefer to move at our “desired” velocity, we need to take into account the direct contact between each other, as well as our contact with different objects and obstacles present in the room. Thus, our “actual” velocity—the closest achievable velocity to our desired one while accounting for direct contact with others—is determined by a sweeping process dynamic.

Figure 1.6 Emergency evacuation

1.2.2 Theoretical results

Due to the unboundedness and discontinuity of the normal cone in (1.1), standard results involving differential inclusions cannot be used for sweeping processes. Extensive literature exists on the question of existence and uniqueness of an absolutely continuous or Lipschitz solution for the Cauchy problem associated with different forms of the following perturbed controlled sweeping process

    ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t))  a.e. t ∈ [0, T],  x(0) = x0 ∈ C(0),    (1.2)

in which the constraint x(t) ∈ C(t) is implicit. Initially, such results commonly required the absolute or Lipschitz continuity of the set-valued map C(·) (see, e.g., [36]). However, motivated by the need to consider set-valued maps C(·) for which these conditions are too strong (see [65]), similar results are derived by merely assuming the same conditions on the ρ-truncated set-valued map C(·) ∩ ρB̄ (see, e.g., [52, 64, 65]). In [41], when C(t) is polyhedral, a constraint qualification is shown to be sufficient for those conditions to be satisfied on the ρ-truncated polyhedral sets.
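Much of this existence theory is constructive at heart: Moreau's catching-up scheme discretizes time and, at each step, projects the previous state onto the current sweeping set. The sketch below (my own one-dimensional illustration, not taken from the dissertation) applies the scheme to the unperturbed inclusion (1.1) with C(t) a moving interval, reproducing the ring-and-ball picture: the point stays put until the set's boundary reaches it and is then swept along.

```python
def project(x, lo, hi):
    """Projection of x onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def catching_up(x0, center, radius, T=1.0, n=1000):
    """Moreau's catching-up scheme for  dx/dt in -N_{C(t)}(x),
    with C(t) = [center(t) - radius, center(t) + radius]:
    each step projects the previous state onto the current set."""
    dt = T / n
    x = x0
    traj = [x]
    for k in range(1, n + 1):
        c = center(k * dt)
        x = project(x, c - radius, c + radius)
        traj.append(x)
    return traj

# "Ring" of radius 1 whose center moves right at speed 2; "ball" starts at 0.
traj = catching_up(x0=0.0, center=lambda t: 2.0 * t, radius=1.0)
```

Here the trailing boundary 2t − 1 catches the point at t = 0.5; afterwards the discrete trajectory tracks that boundary, so the scheme exhibits exactly the expected sweeping behavior.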
Numerous efforts have been made to derive existence theory for optimal solutions and/or optimality conditions, in terms of the Euler-Lagrange equation or a Pontryagin-type maximum principle, for optimal control problems driven by variants of (1.2). The main approach used to solve different versions of such an optimal control problem is the method of approximation, either discrete (see, e.g., [12, 13, 10, 11, 23, 25, 26, 28]) or continuous (see [6, 30, 33, 34, 55, 57, 58, 70]). Our focus in this dissertation is on the latter, and more specifically, on the exponential penalty type.

1.2.2.1 Selected results for constant sweeping set C

Work of de Pinho et al. in [30, 31]

The exponential penalization technique was first used in [30, 31] to derive existence of a solution of (1.2), existence of an optimal solution, and a Pontryagin-type maximum principle for global minimizers of a Mayer problem over (1.2), in which:

• f is smooth and convex,
• C is a constant compact set defined as the zero-sublevel set of a C²-convex function ψ satisfying a constraint qualification on Rn,
• the initial state-constraint set is C0 ⊂ C and the final state is free.

The novelty of this technique resides in approximating N_C(·) by the exponential penalty term γk e^{γk ψ(·)} ∇ψ(·), such that the so-obtained approximating dynamic is a standard control system without state constraints for which C is shown to be invariant:

    ẋ(t) = f(t, x(t), u(t)) − γk e^{γk ψ(x(t))} ∇ψ(x(t))  a.e.,  x(0) = x0 ∈ C.    (1.3)

The absence in (1.3) of the explicit state constraint, x(t) ∈ C, that is implicitly present in (1.2), has also been shown to be instrumental in constructing numerical algorithms for controlled sweeping processes (see [32, 56, 59]). In summary, the exponential penalization technique works as follows: rather than deriving necessary conditions for optimal solutions of a problem (P), governed by (1.2), directly, we approximate (P) with a sequence of standard optimal control problems (Pk) governed by (1.3).
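The mechanism behind (1.3) is easy to observe numerically. In the sketch below (my own illustration; the generator ψ(x) = (x² − 1)/2, so that C = {ψ ≤ 0} = [−1, 1], the constant force f ≡ 1, and the values of γk are chosen purely for demonstration), a forward Euler integration of (1.3) shows the trajectory settling strictly inside C, with the rest point creeping toward the boundary as γk grows, imitating the reaction of the normal cone.

```python
import math

def simulate_penalized(gamma, f=1.0, x0=0.0, dt=1e-3, steps=5000):
    """Forward Euler for  dx/dt = f - gamma * exp(gamma * psi(x)) * psi'(x),
    with psi(x) = (x**2 - 1) / 2, so that C = {psi <= 0} = [-1, 1]."""
    x = x0
    for _ in range(steps):
        psi = (x * x - 1.0) / 2.0
        grad = x                      # psi'(x)
        x += dt * (f - gamma * math.exp(gamma * psi) * grad)
    return x

# A constant force f = 1 pushes the state toward the right endpoint of C.
# For each gamma the trajectory settles strictly inside C, closer and closer
# to the boundary x = 1 as gamma grows.
for gamma in (5.0, 10.0, 20.0):
    x_end = simulate_penalized(gamma)
    assert -1.0 < x_end < 1.0         # the set C remains invariant
```

The rest point solves γ e^{γψ(x)} x = 1, which drifts toward x = 1 as γ → ∞; this is the one-dimensional shadow of the invariance property that the penalization technique establishes in general.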
Using existing results, we determine necessary conditions for (Pk), and by analyzing the limit as k → ∞, we then obtain necessary conditions for (P) (see Figure 1.7).

Figure 1.7 Exponential penalization technique

Work of Zeidan et al. in [70, 55, 58]

The domain of applicability of the exponential penalization technique for the results in [30, 31] was later enlarged in [70, 55, 58] to include strong local minimizers for controlled sweeping processes having:

• a nonsmooth perturbation f,
• a constant sweeping set C that is nonsmooth prox-regular (i.e., C is the intersection of a finite number of zero-sublevel sets of C^{1,1}-generators ψ1(x), · · · , ψr(x) near C), where the functions ψi satisfy a constraint qualification on the set C,
• a final state constraint set CT ⊂ Rn, and a cost depending on both state endpoints.

Furthermore, therein:

• The normal cone, N_C, in (1.2) is replaced by a subdifferential, ∂φ, of a function φ with domain C, and it is shown that such a system is equivalent to (1.2) with a different f. Indeed, the function φ is extended to a function ϕ that is C^{1,1} on Rn and that enjoys a globally Lipschitz gradient. Using a formula recently established in [35] for the Clarke subdifferential of an amenable function, the following formula is obtained:

    ∂φ(x) = {∇ϕ(x)} + N_C(x),  ∀x ∈ C.

Using this formula, the dynamic can be rephrased as the original sweeping process

    ẋ(t) ∈ fϕ(t, x(t), u(t)) − N_C(x(t)),    (1.4)

where fϕ(t, x, u) = f(t, x, u) − ∇ϕ(x).

• When CT ⊊ Rn, the convexity of the sets f(t, x, U(t)) is required in [55, 58].
• When C is unbounded, a restrictive assumption, (A2.4), is imposed in [58] on the set C and is shown to hold for convex, compact-boundary, or polyhedral sets, but not for general prox-regular sets.
• Note that the nontriviality condition in the maximum principle of [55] is simply λ + ∥p(T)∥ = 1 and does not invoke the total variation of the measure.
• New subdifferentials are used that are strictly smaller than the Clarke and Mordukhovich subdifferentials. The key to this surprising result is the design of an approximating problem whose optimal state remains entirely in the interior of the set C.
• Note that in the case r > 1, the invariance of C itself is not always valid, but requires extra restrictive hypotheses (see [34]). However, as shown in [58], the invariance of C itself is not essential for the success of this method: it suffices to establish the invariance of certain ingenious approximations of C from its interior, namely C^{γ_k}, the zero-sublevel set of a special single function ψ^{γ_k} approximating ψ := max{ψ_i}, and the corresponding C^{γ_k}(k) ⊂ C^{γ_k}. Furthermore, the uniform bounded variation property for the adjoint variables p^{γ_k} of the approximating problems was cleverly established by employing the strict diagonal dominance condition on the Gramian matrix for the gradients of the active constraints at the prescribed optimal solution ¯x of the original problem.

1.2.2.2 Selected results for a time-dependent sweeping set C(t).

Work of de Pinho et al. in [33, 34]

In [33], and later in [34] (independently from [58]), the authors extended their previous smooth Pontryagin principle for global minimizers in [30], using the exponential penalty-type technique, to the case where:
• the perturbation f(t, ·, u) is smooth and f(t, x, U) is convex,
• the sweeping set C(t) is time-dependent and nonsmooth,
• Gr C(·) is compact,
• the sweeping sets are assumed to have C^2-generators (ψ_i(t, x))_{i=1}^r satisfying a global constraint qualification, and a global diagonal dominance condition on the Gramian matrix for the gradients of the active constraints is imposed,
• other demanding conditions are assumed on the set C(t): ∇_x ψ_i(t, ·) = 0 on the complement in C(t) of a uniform band around the boundary of C(t), and ⟨∇_x ψ_i(t, x), ∇_x ψ_j(t, x)⟩ ≥ 0 in a band around the boundary of C(t), that is, all the corners of C(t) must have obtuse angles. In particular, this last assumption excludes many important sets, including simple ones such as triangles, polytopes, and sets with one or more acute angles,
• the initial and final state sets are compact.
• In [33], the exponential penalty technique deviated from (1.3) by using, instead of ψ(t, x), the function ψ(t, x) − σ_k, where σ_k ↘ 0; hence, C(t) is approximated by sets C^k(t) ⊃ C(t) from the outside, and not from the interior of C(t).
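The geometric content of the condition ⟨∇_x ψ_i, ∇_x ψ_j⟩ ≥ 0 can be checked on a hypothetical wedge {x : ψ_1(x) ≤ 0, ψ_2(x) ≤ 0} built from two affine generators ψ_i(x) = ⟨n_i, x⟩ (an illustration of why acute corners are excluded, not a construction from [33, 34]): the interior angle at the corner equals π minus the angle between the outward normals n_1, n_2, so an acute corner forces ⟨n_1, n_2⟩ < 0, violating the condition, while a right-angled corner (e.g., of a square) sits exactly at the borderline ⟨n_1, n_2⟩ = 0.

```python
import numpy as np

def corner_normals(interior_angle):
    """Outward normals n1, n2 of the wedge {<n1,x> <= 0, <n2,x> <= 0} whose
    interior angle at the origin is `interior_angle` (radians)."""
    # place n1 along the x-axis; the angle between the normals is pi - interior_angle
    theta = np.pi - interior_angle
    return np.array([1.0, 0.0]), np.array([np.cos(theta), np.sin(theta)])

# right-angled corner (square): <grad psi_1, grad psi_2> >= 0 holds (borderline case)
n1, n2 = corner_normals(np.pi / 2)
dot_square = n1 @ n2

# acute 60-degree corner (e.g., a vertex of an equilateral triangle): condition fails
n1, n2 = corner_normals(np.pi / 3)
dot_acute = n1 @ n2

print(dot_square, dot_acute)   # ~0.0 and -0.5
```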
Work of Hermosilla–Palladino in [42]

In [42], a different approach is used to establish a variant of the nonsmooth Pontryagin-type maximum principle for strong local minimizers in a controlled sweeping process when:
• the moving set C(t) is, as in [58, 34], nonsmooth and non-convex,
• the set C(t) is uniformly prox-regular,
• the generating functions h_i together with ∇h_i are Lipschitz on a neighborhood of Gr ¯x, and (∇h_i)_{i=1}^r satisfy a positive linear independence constraint qualification,
• the multifunction C(·) is Lipschitz continuous,
• the initial state is fixed and the final state is free,
• unlike the expected nontriviality condition (λ = 1 in their case), an atypical nondegeneracy condition is obtained, which would require further understanding,
• the results involve the standard Clarke and Mordukhovich subdifferentials.

The authors constructed a sequence of standard optimal control problems having auxiliary controls and explicit state constraints emanating from the sweeping set, such that all admit the same optimal solution as the original problem.

1.3 Results and outline of this dissertation

1.3.1 Gaps in the literature and answering open questions

We summarize key results from the existing literature in the following comparison tables. These tables help identify gaps in past research that will be addressed once the dissertation results are presented. Table 1.1 and Table 1.2 serve as a foundation for identifying open questions and demonstrating how this dissertation contributes to filling those gaps.
Table 1.1 Comparison of data in [58], [34], and [42] (here ¯x denotes the prescribed optimal solution of the original problem)

[58]:
• Assumptions on the perturbation f: f(t, ·, u) is Lipschitz on a neighborhood of ¯x; when C_T ⊊ R^n, the convexity of the sets f(t, x, U(t)) is required.
• Assumptions on the sweeping set: C is constant, nonsmooth, prox-regular, and the generators ψ_i are C^{1,1} on a neighborhood of C and satisfy a constraint qualification on the set C. When C is unbounded, a restrictive assumption is imposed on the set C; it is shown to hold for convex, compact-boundary, or polyhedral sets, but not for general prox-regular sets. A strict local diagonal dominance condition on the Gramian matrix for the gradients of the active constraints is imposed at ¯x.
• Other assumptions on the data: the initial state set C_0 is closed; the final state set C_T is closed, and the cost depends on both state endpoints; U(t) is time-dependent, closed, and uniformly bounded in t.

[34]:
• Assumptions on the perturbation f: f(t, ·, u) is C^1 and f(t, x, U) is convex.
• Assumptions on the sweeping set: C(t) is time-dependent, nonsmooth, prox-regular (implied), and the generators ψ_i are C^2 and satisfy a constraint qualification; Gr C(·) is compact. A global diagonal dominance condition on the Gramian matrix for the gradients of the active constraints is imposed; ∇_x ψ_i(t, ·) = 0 on the complement in C(t) of a uniform band around the boundary of C(t); ⟨∇_x ψ_i(t, x), ∇_x ψ_j(t, x)⟩ ≥ 0 in a band around the boundary of C(t), that is, all the corners of C(t) must have obtuse angles.
• Other assumptions on the data: the initial state set C_0 is compact; the final state set C_T is compact; U is constant and compact.

[42]:
• Assumptions on the perturbation f: f(t, ·, u) is Lipschitz on a neighborhood of ¯x; f(t, x, U(t)) is not necessarily convex.
• Assumptions on the sweeping set: C(t) is time-dependent, nonsmooth, prox-regular, and the generators h_i are locally C^{1,1} and satisfy a local constraint qualification; the set-valued map C(·) is Lipschitz.
• Other assumptions on the data: the initial state is fixed; the final state is free; U(t) is time-dependent and not necessarily uniformly bounded in t.

Table 1.2 Comparison of results in [58], [34], and [42]

[58]:
• Pontryagin maximum principle: via the exponential penalty approximation method; typical nontriviality condition; subdifferentials smaller than the standard ones are used.
• Existence results: existence of a solution of the sweeping process, and existence of an optimal solution.

[34]:
• Pontryagin maximum principle: via the exponential penalty approximation method; typical nontriviality condition; standard subdifferentials are used.
• Existence results: none.

[42]:
• Pontryagin maximum principle: via a different approximation method; atypical nontriviality condition; standard subdifferentials are used.
• Existence results: none.

Conclusion I. Therefore, the question of establishing a Pontryagin maximum principle in its expected form (i.e., standard nontriviality condition, adjoint equation, transversality condition, and the maximality condition on the Hamiltonian) for optimal control problems over the sweeping process (1.2) remains open in each of the following settings: (i) when the nonsmooth moving sweeping sets C(t) are bounded and general (no restriction); (ii) when the general nonsmooth sweeping sets are unbounded (constant or moving); (iii) when a joint state endpoints constraint set is present, the convexity of f(t, x, U(t)) is absent, or the global constraint qualification is only local, for all types of sweeping sets: smooth, nonsmooth, constant, moving, bounded, or unbounded.

In addition to the open problems in Conclusion I, new challenges arise when coupling (1.2) with a standard controlled differential equation, and when the joint endpoints constraint is on both states.
So, throughout this thesis, we work on the optimal control problem (P), introduced in Chapter 4, governed by the following coupled dynamic (D), where x(t) ∈ R^n, y(t) ∈ R^l, and u(t) ∈ U(t) a.e.:

(D)  ẋ(t) ∈ f(t, x(t), y(t), u(t)) − N_{C(t)}(x(t)), a.e. t ∈ [0, T],
     ẏ(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T],
     (x(0), y(0), x(T), y(T)) ∈ S.

Our model incorporates different controlled submodels as particular cases:
• coupled evolution variational inequalities (see [1], [3], [6]),
• a subclass of integro-differential sweeping processes of Volterra type (see [5]),
• second-order sweeping processes, in which the sweeping set is solely time-dependent (see, e.g., [53] for the general setting),
• and Bolza-type problems associated to (P).

In other words, optimal control problems governed by any of these four submodels can readily be formulated as special cases of (P), to which all the results of this thesis are applicable. In [23], necessary conditions in the form of a weak maximum principle are derived for a certain form of a Bolza problem over a sweeping process. Excluding the part of their integrand involving ẋ, which is not covered in our setting, the remaining problem therein can be phrased as a special form of our problem (P) over a coupled sweeping process (D), where the sweeping set is a constant polyhedron and the state endpoints are at most periodic. On the other hand, in [6], a smooth Pontryagin maximum principle in its expected form is derived for a special case of our problem (P), namely, where the sweeping set is constant, smooth, and strictly convex, the perturbation f is linear in u, the function g = (g_1, g_2) in the coupled controlled differential equations has g_1 linear in u and g_2 quadratic and convex in u, the initial state is fixed, and the final state is free.
The authors of [6] clearly noted that their method of standard smooth penalization does not apply even in the case of a constant polyhedron (which is a particular case of our general sweeping sets), and that including an “additional terminal constraint”, a fortiori a joint endpoints constraint, causes issues that are not treated therein.

Conclusion II. Therefore, all the problems stated in Conclusion I are open when replacing the sweeping process (1.2) by (D), even when the sweeping set is a constant polyhedron.

1.3.2 Findings and results of this thesis

In Chapters 3–4, we resolve all the aforementioned open problems in Conclusions I and II, while also establishing existence results for solutions to (D) and (P). In Chapter 5, we illustrate these theoretical results with an example and present several models that our findings can help solve.

1.3.2.1 Chapter 3

Chapter 3 is divided into local and global sections. The local sections focus on analyzing the dynamic (D) and the sweeping set C(t), as well as developing and studying a truncated dynamic ( ¯D) and a truncated sweeping set C(t) ∩ ¯B_{¯ε}(¯x(t)) under local assumptions on the data. Two key local results in this chapter are Theorem 3.2.14, which approximates the truncated dynamic ( ¯D) by a sequence of standard control systems ( ¯D^{γ_k}), and Corollary 3.2.16, which establishes the existence and uniqueness of Lipschitz solutions to the Cauchy problem associated with ( ¯D). The main result of the global section, Theorem 3.3.7, proves the existence and uniqueness of a Lipschitz solution for the Cauchy problem corresponding to our dynamic (D) without requiring any Lipschitz behavior of the nonsmooth moving sets C(t). Instead, we assume that Gr C(·) is bounded and that the gradients of the active generators are positively linearly independent ((A3.2)_G). Note that this is the first result of its kind for general nonsmooth moving sweeping sets, even for system (1.2), that is based on the method of exponential penalty approximation.
It is essential for developing a numerical algorithm to solve optimal control problems over such sweeping processes, which is the topic of our forthcoming project.

1.3.2.2 Chapter 4

The only global result of Chapter 4 is Theorem 4.1.1, which establishes the existence of a global optimal solution for our problem (P) over (D) with joint endpoints constraint set S. This result justifies the pursuit of a Pontryagin maximum principle for an optimal solution of (P). The main result of Chapter 4, which is local, answers collectively all the open questions displayed above and generalizes all previously known results on the Pontryagin maximum principle in multiple ways. More specifically, in Theorem 4.2.11 we derive, under minimal assumptions on the data, a complete set of necessary conditions in the form of a nonsmooth Pontryagin maximum principle for a strong local minimizer ((¯x, ¯y), ¯u) of the Mayer problem (P) governed by the coupled sweeping system (D) together with the joint endpoints constraint set S. The moving sweeping sets C(t) are general, nonsmooth, bounded or unbounded, uniformly prox-regular, and defined as the intersection of a finite number of zero-sublevel sets of the generators (ψ_i(t, ·))_{i=1}^r. Note the following.
• The optimal control problems studied in [34, 58] are over (1.2) and not over the general system (D).
• Notably, unlike the result derived in [34], where the sweeping sets are not only assumed to be bounded but must also satisfy restrictive assumptions on their corners (obtuse angles) and on the gradients of their generators (∇_x ψ_i(t, ·) = 0 in a zone in C(t)), no such restrictive assumptions are required in our result over (D), whether the nonsmooth moving sweeping sets C(t) are bounded or unbounded. While this corner assumption of [34] was removed in [58] when C(t) ≡ C is a constant set, its removal is far more intricate when the C(t) are moving sets (see Section 3.2.2 and Theorem 3.2.14).
• In contrast to the result in [58], established for a restrictive class of constant unbounded sweeping sets, our result here is valid for general unbounded, moving, and prox-regular sets that do not necessarily satisfy the restrictive assumption (A2.4) of [58].
• In addition, the convexity assumption on the sets f(t, x, U(t)) in [34] and [58] is now discarded, not only for the separable endpoints case treated therein, but also for general joint endpoints constraints.
• Furthermore, as opposed to the global constraint qualification on the generators of the sweeping set C in [58] and of C(t) in [34], our constraint qualifications are required to hold only at ¯x(t) (see (A3.2) and (A3.3)), where (A3.3) is vacuous in the smooth case (r = 1).
• Our nontriviality condition is simply λ + ∥p(T)∥ = 1 and does not invoke the measure corresponding to x(t) ∈ C(t). This is the expected form in a Pontryagin maximum principle for problems over controlled sweeping processes (see [55, 58, 34]).
• In our adjoint inclusion and transversality condition we employ the recently introduced subdifferentials of [55], which are strictly smaller than the Clarke and Mordukhovich subdifferentials.

1.3.2.3 Chapter 5

In this chapter, we provide an example that highlights the significance of our initial model and the practical utility of our results.

1.3.3 Novelty of the methods employed.

There are three separate matters to tackle when establishing a Pontryagin maximum principle for a ¯δ-minimizer ((¯x, ¯y), ¯u) of our problem (P).
1. The first matter is the possible unboundedness of the moving sweeping sets C(t) and of the joint endpoints set S, and the unboundedness of R^l (the sweeping set for the coupled controlled ODE).
2. The second, which is present even if C(t) is bounded and/or the sweeping process is taken to be (1.2) instead of (D), is that the constraint qualification (A3.2) on the active generators of C(t) is only valid at ¯x.
3.
The third is the absence of a Pontryagin maximum principle in its expected form for a Mayer problem over (1.2), where the nonsmooth moving sweeping sets are general and bounded.

The two diagrams in the following pages (see Figures 1.8–1.9) outline the key steps of our approach for the maximum principle, illustrating how we transition from working on the problem (P), to defining a new truncated problem ( ¯P_{δ,δ}), to establishing a nonsmooth Pontryagin maximum principle for this truncated problem. We encourage the reader to first examine the two diagrams before proceeding to the following paragraphs. In addition to the techniques shown in the diagrams, we also employ the following:
• To avoid imposing ∇_x ψ_i(t, ·) = 0 on the complement in C(t) of a uniform band around the boundary of C(t), and to establish the uniform bounded variation property of the adjoint variable for the approximating problem, we construct a modified version ˆψ_i of ψ_i that preserves the original constraint set C(t) and the properties of ψ_i, and whose gradient is zero in certain areas.
• Another useful technique for the uniform bounded variation property of the adjoint variable for the approximating problem is to construct another transformation ˜ψ_i of ψ_i such that the Gramian matrix of the gradients of the ˜ψ_i is strictly diagonally dominant, a condition stronger than the local strict diagonal dominance of the Gramian matrix corresponding to the gradients of the active constraints assumed for the ψ_i at ¯x ((A3.3)). After formulating the maximum principle in terms of ˆψ_i and ˜ψ_i, we then translate the conditions so that they are formulated in terms of ψ_i.
• To remove the convexity assumption on (f, g)(t, x, y, U(t)), we extend the relaxation technique from [70] to address: (a) strong local minimizers, (b) time-dependent sweeping sets C(t), not necessarily moving in an absolutely continuous way, and (c) a general joint state endpoints constraint set S.
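For intuition about what a solution of a sweeping dynamic such as (D) looks like, the classical catching-up discretization (a standard scheme going back to Moreau, distinct from the exponential penalty approach developed in this thesis) can be sketched on a hypothetical one-dimensional example: C(t) = [t, ∞) sweeps the state forward, f ≡ 0, and the coupled state obeys ẏ = x. The swept solution is x(t) = t, and hence y(t) = t²/2, obtained by alternating an Euler step with a projection onto the moving set.

```python
def catching_up(T=1.0, h=1e-3):
    """Moreau catching-up scheme for x' in -N_{C(t)}(x) with C(t) = [t, inf),
    coupled with y' = x; initial data x(0) = 0, y(0) = 0."""
    x, y, t = 0.0, 0.0, 0.0
    for _ in range(int(T / h)):
        y += h * x        # Euler step for the coupled ODE y' = g(t, x, y, u) = x
        t += h
        x = max(x, t)     # projection of x onto the moving set C(t) = [t, inf)
    return x, y

x_T, y_T = catching_up()
print(x_T, y_T)   # x(1) ~ 1 (the state is dragged along the boundary), y(1) ~ 1/2
```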
In our case, obtaining the necessary conditions via the penalty-type approximation technique can be summarized in Figure 1.10. Using our approach to the exponential penalty method without truncating C(t), we prove in Section 3.3.2 the existence and uniqueness of a Lipschitz solution to the Cauchy problem associated with (D), Theorem 3.3.7. The existence of an optimal solution for the problem (P), Theorem 4.1.1, employs general results developed in the Appendix.

We first introduce a new dynamic of sweeping processes, ( ¯D) (see (3.29)), obtained from (D) by truncating the given prox-regular nonsmooth moving sweeping sets C(t) and the space R^l using the balls ¯B_{¯ε}(¯x(t)) and ¯B_{¯δ}(¯y(t)). Note that the system ( ¯D) takes the form of (1.2), where the state's dimension is n + l. Next, in (4.9), we define for any δ ∈ (0, ¯ε) a compact truncation S_{δ,δ} of S, and we show that the same ((¯x, ¯y), ¯u) is a δ-strong local minimizer for the problem ( ¯P_{δ,δ}), whose objective function is the same as that of (P), but whose dynamic is ( ¯D) and whose joint endpoint set is S_{δ,δ} (see Remark 4.2.6). At this point, our attention completely shifts away from ((¯x, ¯y), ¯u) being a ¯δ-strong local minimizer for (P) over (D) and S, to it being a δ-strong local minimizer for the problem ( ¯P_{δ,δ}) over ( ¯D) and S_{δ,δ}, where the moving sweeping sets ¯N^{(¯ε,¯δ)}(t) := [C(t) ∩ ¯B_{¯ε}(¯x(t))] × ¯B_{¯δ}(¯y(t)) are bounded and nonsmooth, and S_{δ,δ} is compact.

The radius ¯ε ∈ (0, ¯δ) is chosen via (A3.2) so that:
• the bounded truncated sets C(t) ∩ ¯B_{¯ε}(¯x(t)) are uniformly prox-regular,
• and their (r + 1) generators satisfy a uniform constraint qualification (see (3.34)).
Here, ψ_{r+1}(t, x) (see (3.30)) is the generator of the ball ¯B_{¯ε}(¯x(t)).
When restricting our domain to the truncated sweeping sets, several properties are established, such as uniqueness and Lipschitz properties of the solutions to (D) and ( ¯D), and explicit forms of the normal cones N_{C(t)} and N_{C(t) ∩ ¯B_{¯ε}(¯x(t))} in terms of the generators of the corresponding sweeping sets. Observe that our truncation approach always leads to moving sweeping sets, even when C(t) ≡ C is constant; hence, it cannot be employed if we only admit constant sweeping sets, as in [58]. This fact is behind the need to impose in [58] the unnatural assumption (A2.4), which is not needed here due to our form of the truncation approach.

Our main objective now is to obtain the nonsmooth Pontryagin maximum principle for (P) via that for ( ¯P_{δ,δ}), for one δ ∈ (0, ¯ε). However, in the literature, there is no Pontryagin maximum principle in its expected form that applies to problems like ( ¯P_{δ,δ}). This is so not only because of the presence of the joint endpoints constraint set, but because such a result does not exist for sweeping processes in the form of (1.2), where C(t) is a general, nonsmooth, bounded, and moving sweeping set (see Conclusion I). This is the third matter stated above.

Figure 1.8 Flowchart of addressing the first and second matters above

To resolve this matter, for a specific choice δ_o of δ, we establish a nonsmooth Pontryagin maximum principle for the δ_o-strong local minimizer ((¯x, ¯y), ¯u) of ( ¯P_{δ_o,δ_o}) that does not require any demanding conditions like the ones in [34] described prior to Conclusion I. We then produce a carefully crafted sequence of smooth sets, ( ¯C^{γ_k}(t) × ¯B_{¯ρ_k}(¯y(t)))_k (see (3.59)), approximating our nonsmooth moving sweeping set ¯N^{(¯ε,¯δ)}(t) := [C(t) ∩ ¯B_{¯ε}(¯x(t))] × ¯B_{¯δ}(¯y(t)) from its interior, and we show that this smooth sequence (as opposed to the sweeping set itself) is actually invariant (see Remark 3.2.15) for our approximating dynamic ( ¯D^{γ_k}).
To obtain the boundedness of the multipliers for the approximated normal cones in our approximating dynamic, we construct another smooth approximation ¯A(t, k) of ¯N^{(¯ε,¯δ)}(t) from its interior, where ¯A(t, k) := ¯C^{γ_k}(t, k) × ¯B_{¯ρ_k}(¯y(t)) ⊂ int ¯C^{γ_k}(t) × ¯B_{¯ρ_k}(¯y(t)). We then show that ¯A(t, k) is invariant for our approximated dynamic, that the multipliers therein are bounded, and that its solutions approximate the solutions of the original dynamic.

To address the presence of joint endpoint constraints, we pick carefully the value δ_o of δ so that we can successfully craft an approximation S^{γ_k}(k) of S_{δ_o,δ_o} such that, for k large, any solution of ( ¯D^{γ_k}) with state endpoints in S^{γ_k}(k) remains at all times in the invariant set ¯A(t, k).

As such, we derive this result for problems over (1.2) with general nonsmooth, bounded, and moving sweeping sets, including the joint state endpoints constraints. This is accomplished by developing a completely new design of the method of exponential penalty approximation that drastically differs from the one in [34] and generalizes the one in [58] to the complex setting of time-dependent sweeping sets and to joint endpoint constraints. This is different from [34] and far more intricate than [58]. Observe that in [34], the stringent condition on the corners is imposed so that the exponential penalty method works for nonsmooth bounded moving sweeping sets in the same manner as it did for smooth sets in [33], that is, by forcing their sweeping set C(t) itself to be invariant for their approximating dynamic; indeed, the authors in [34] approximated C(t) from the outside by a sequence of invariant sets (as they did for the smooth bounded sets in [33]). As our sweeping sets and their approximations are time-dependent, the derivation of these results turns out to be quite involved and challenging in comparison with the constant sweeping set C treated in [58] (see Section 3.2.2 and Theorem 3.2.14).
At this point, we are ready to design for ( ¯P_{δ_o,δ_o}) an approximating problem (P^{α,β}_{γ_k}), defined over ( ¯D^{β}_{γ_k}) (closely related to ( ¯D^{γ_k})) and the joint endpoints constraint set S^{γ_k}(k), whose optimal solution converges to ((¯x, ¯y), ¯u); see Proposition 4.2.8. Using intricate calculations, we show that the corresponding adjoint variable sequence is uniformly of bounded variation, without assuming any restrictive condition like ∇_x ψ_i(t, ·) = 0 in a zone of C(t). A careful analysis of the limit, as k → ∞, of the standard maximum principle for (P^{α,β}_{γ_k}) leads to our nonsmooth Pontryagin principle.

Figure 1.9 Flowchart of addressing the third matter above

Figure 1.10 Exponential penalization technique in our setting

CHAPTER 2 PRELIMINARIES

In this chapter, we review foundational concepts, definitions, and key theorems from functional analysis, nonsmooth analysis, and control theory, as presented in the literature, which we will use throughout this thesis.

2.1 Basic notions and concepts

In this first section, we present the basic notation and concepts used in the thesis.
• We denote by ∥ · ∥ and ⟨·, ·⟩ the Euclidean norm and the usual inner product, respectively.
• For x ∈ R^n and a > 0, we denote by B_a(x) and ¯B_a(x) the open and closed balls centered at x and of radius a, respectively. In particular, B and ¯B represent the open unit ball and the closed unit ball, respectively.
• A vector function f = (f_1, ..., f_n) : [0, T] → R^n is said to be positive if f_i is positive for each i = 1, ..., n.
• R_+ denotes the set of positive real numbers.
• We use M_{m×n}[a, b] to indicate the set of m × n matrix functions on [a, b]. For r ∈ N, we denote the identity matrix in M_{r×r} by I_{r×r}.
• The interior, boundary, closure, convex hull, and complement of a set S ⊂ R^n are denoted by int S, bdry S, cl S, conv S, and S^c, respectively.
• We note that the gradient ∇f of a function f is taken here to be a column vector, that is, the transpose of the standard gradient vector.
• For a set-valued map S(·) : [0, T] ⇝ R^n, Gr S(·) denotes its graph.

Definition 2.1.1. A matrix A = (a_{ij}) of size n × n is said to be strictly diagonally dominant if it satisfies the following condition:

|a_{ii}| > Σ_{j ≠ i} |a_{ij}| for all i = 1, 2, ..., n.

Lemma 2.1.2. By the Levy–Desplanques theorem, any strictly diagonally dominant matrix is nonsingular. Hence, its rows and columns form a basis of R^n.

Limits of sets

Definition 2.1.3 (Limit of sets in the Kuratowski sense). Let (S_k)_k be a sequence of nonempty subsets of R^n. We say that (S_k)_k converges in the Kuratowski sense, or simply converges, to S whenever lim inf_{k→∞} S_k = lim sup_{k→∞} S_k = S.

The following lemma can be found in [62, Exercise 4.3].

Lemma 2.1.4 (Limits of monotone and sandwiched sequences). We have that:
(a) lim_k S_k = cl ∪_{k∈N} S_k whenever S_k ↗, meaning S_k ⊂ S_{k+1} ⊂ ···;
(b) lim_k S_k = ∩_{k∈N} cl S_k whenever S_k ↘, meaning S_k ⊃ S_{k+1} ⊃ ···;
(c) S_k → S whenever S^1_k ⊂ S_k ⊂ S^2_k with S^1_k → S and S^2_k → S.

The following definition can be found in [62, Example 4.13].

Definition 2.1.5 (Pompeiu–Hausdorff distance). For C, D ⊂ R^n closed and nonempty, the Pompeiu–Hausdorff distance between C and D is the quantity

d_∞(C, D) := sup_{x ∈ R^n} |d_C(x) − d_D(x)|,

where the supremum could equally be taken just over C ∪ D, yielding the alternative formula

d_∞(C, D) = inf { η ≥ 0 : C ⊂ D + ηB, D ⊂ C + ηB }.

Definition 2.1.6 (Limit of sets in the Hausdorff sense). Let (S_k)_k be a sequence of nonempty closed subsets of R^n. We say that (S_k)_k converges with respect to the Pompeiu–Hausdorff distance to S when d_∞(S_k, S) → 0.

Support function of a set

We now introduce definitions and results related to the support function of a set S.

Definition 2.1.7 (Support function of a set S). Let S ⊂ R^n be nonempty. The support function of S is σ(·, S) : R^n → R ∪ {∞} defined by

σ(s*, S) := sup{⟨s*, s⟩ : s ∈ S}.

Lemma 2.1.8. Let S be a closed and convex subset of R^n. Then

s ∈ S ⟺ ⟨s*, s⟩ ≤ σ(s*, S) for all s* ∈ R^n.
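Lemma 2.1.8 can be checked numerically on a hypothetical example (a minimal sketch, not part of the thesis's development): for the box S = [−1, 1]², the support function is σ(s*, S) = |s*_1| + |s*_2|; a point of S satisfies ⟨s*, s⟩ ≤ σ(s*, S) in every direction, while a point outside S violates the inequality for some direction.

```python
import numpy as np

def sigma_box(s_star):
    """Support function of the box S = [-1, 1]^2: sup over s in S of <s_star, s>."""
    return np.abs(s_star).sum()

# sample directions s* on a grid of angles (direction 0 included exactly)
dirs = [np.array([np.cos(a), np.sin(a)])
        for a in np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)]

s_in = np.array([0.5, -0.5])   # a point of S
s_out = np.array([2.0, 0.0])   # a point outside S

gap_in = max(float(d @ s_in - sigma_box(d)) for d in dirs)    # <= 0 by Lemma 2.1.8
gap_out = max(float(d @ s_out - sigma_box(d)) for d in dirs)  # > 0: a separating direction
print(gap_in, gap_out)
```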
Lemma 2.1.9 (Limits of sets and their support functions). Let (S_k)_k be a sequence of nonempty compact convex subsets of R^n. Then

(S_k)_k converges to S as k → ∞ ⟺ σ(s*, S_k) → σ(s*, S) as k → ∞, for all s* ∈ R^n. (2.3)

Lemma 2.1.10 (Hausdorff limits of sets and their support functions). Let (S_k)_k be a sequence of nonempty closed convex subsets of R^n. We have, by [63, Theorem 6], that

S_k → S in the Hausdorff sense as k → ∞ ⟺ σ(s*, S_k) → σ(s*, S) as k → ∞, uniformly in s* over ∥s*∥ ≤ 1. (2.4)

2.2 Nonsmooth analysis

Normal cones: proximal, limiting, and Clarke

Some of the definitions and results in this section are adapted from [17]. For standard references, see the monographs [18, 22, 48, 62, 66].

Proximal normal cone

Let S ⊂ R^n be a nonempty closed set. For x ∈ R^n, we recall that the distance from x to S is defined by

d_S(x) := inf_{s ∈ S} ∥x − s∥.

One can verify that d_S(·) is 1-Lipschitz on R^n, and that there exists at least one point s ∈ S such that d_S(x) = ∥x − s∥. Such a point s is called a closest point, or projection, of x onto S. The set of all closest points is denoted by proj_S(x); see Figure 2.1. For x ∈ R^n \ S and s ∈ proj_S(x), we have:
• The vector x − s is called a proximal normal direction to S at s.
• For all t > 0, the vector ζ = t(x − s) is called a proximal normal vector to S at s.

Figure 2.1 Proximal normal cone (image generated by Dr. Chadi Nour)

Definition 2.2.1. The set of all nonnegative multiples ζ of x − s is called the proximal normal cone to S at s and is denoted by N^P_S(s); see Figure 2.1. Thus

N^P_S(s) := {t(x − s) : s ∈ proj_S(x) and t ≥ 0}.

We can also characterize the proximal normal cone analytically and geometrically through the following representations. Let s ∈ S. Then

ζ ∈ N^P_S(s) ⟺ ∃ λ > 0 such that proj_S(s + λζ) = {s}
⟺ ∃ σ = σ(ζ, s) ≥ 0 such that ⟨ζ, s′ − s⟩ ≤ σ∥s′ − s∥² for all s′ ∈ S
⟺ ∃ σ = σ(ζ, s) ≥ 0 and η > 0 such that ⟨ζ, s′ − s⟩ ≤ σ∥s′ − s∥² for all s′ ∈ B_η(s) ∩ S
⟺ ∃ r = r(ζ, s) > 0 such that B_r(s + r ζ/∥ζ∥) ∩ S = ∅, i.e.
ζ is realized by an r-sphere (ball characterization; see Figure 2.2).

Figure 2.2 Ball characterization (image generated by Dr. Chadi Nour)

Remark 2.2.2. We have the following:
• N^P_S(s) = {0} if s ∈ S is not the projection of any point x ∉ S onto S. Hence, N^P_S(s) = {0} when s ∈ int S.
• The proximal normal cone is a convex cone. It is not necessarily open nor closed.

Proposition 2.2.3 (Convex cone). Let S ⊂ R^n be a nonempty, closed, and convex set. Then

ζ ∈ N^P_S(s) ⟺ ⟨ζ, s′ − s⟩ ≤ 0 for all s′ ∈ S. (2.5)

In this case:
• If s ∈ bdry S, then N^P_S(s) ≠ {0}.
• For s ∈ bdry S,

0 ≠ ζ ∈ N^P_S(s) ⟺ ζ is realized by an r-sphere for all r > 0. (2.6)

Lemma 2.2.4 (Local property of the proximal normal cone). We deduce from the third equivalence in Definition 2.2.1 that P-normality is a local property, meaning that the proximal normal cones satisfy N^P_{S_1}(s) = N^P_{S_2}(s) whenever S_1 and S_2 coincide in a neighborhood of s.

Limiting or Mordukhovich normal cone

Definition 2.2.5. The limiting, or Mordukhovich, normal cone to S at s, N^L_S(s), is defined as

N^L_S(s) := {v ∈ R^n : ∃ s_i →_S s, ∃ v_i → v, v_i ∈ N^P_S(s_i)},

where s_i →_S s means that s_i → s and s_i ∈ S for all i.

Figure 2.3 Limiting normal cone (image taken from [66])

Remark 2.2.6. We have the following:
• If s ∈ bdry S, then N^L_S(s) ≠ {0}.
• The limiting normal cone is a closed cone. It is not necessarily convex.

Clarke normal cone

Definition 2.2.7. The Clarke normal cone to S at s, N_S(s), is defined as

N_S(s) := conv {v ∈ R^n : ∃ s_i →_S s, ∃ v_i → v, v_i ∈ N^P_S(s_i)}.

Remark 2.2.8. We have the following:
• The Clarke normal cone is a closed convex cone.

Lemma 2.2.9 (Monotonicity of the normal cone operator). If S ⊆ C, then the normal cones satisfy the inclusion

N_S(x) ⊇ N_C(x) for all x ∈ S.

This means that if S is a subset of C, then any normal vector to C at a point of S is also a normal vector to S.

Definition 2.2.10.
We say that a given set of vectors {x_i : i = 1, 2, ..., k} in X is positively linearly independent if the following implication holds:

Σ_{i=1}^k λ_i x_i = 0 with λ_i ≥ 0 ⟹ λ_i = 0 for all i ∈ {1, 2, ..., k}.

Lemma 2.2.11 ([19, Corollary 10.44]). Consider a set S ⊂ R^n given by S = {x ∈ R^n : f_i(x) ≤ 0, i = 1, 2, ..., k}, where each function f_i is C^1 (locally, at least). Let x ∈ S, and assume that I(x) := {i : f_i(x) = 0} is nonempty and that {f_i′(x) : i ∈ I(x)} is positively linearly independent (we say that the active constraints are positively linearly independent). Then

N_S(x) = { Σ_{i ∈ I(x)} λ_i f_i′(x) : λ_i ≥ 0 }.

Proximal, limiting, and Clarke subdifferentials

We start with some definitions and assumptions on an extended real-valued function f.

Definition 2.2.12. Let X ⊂ R^n and f : X → (−∞, ∞].
• f is lower semicontinuous (lsc) at x_0 ∈ X iff f(x_0) ≤ lim inf_{n→∞} f(x_n) for all sequences (x_n)_n ⊂ X with x_n → x_0.
• f is upper semicontinuous (usc) at x_0 ∈ X iff −f is lsc at x_0.
• The effective domain of f is the set dom f := {x ∈ X : f(x) < +∞}.
• The epigraph of f is the subset of R^{n+1} given by epi f := {(x, w) ∈ X × R : x ∈ dom f, f(x) ≤ w}.
• The graph of f is the subset of R^{n+1} given by Gr f := {(x, f(x)) ∈ X × R : x ∈ dom f}.

Proximal subgradient

In the following, X ⊂ R^n is an open set, f : R^n → R ∪ {+∞} is lsc on X with X ∩ dom f ≠ ∅, and x ∈ X ∩ dom f.

Definition 2.2.13. We denote by ∂_P f(x) the proximal subdifferential of f at x. We have

ζ ∈ ∂_P f(x) ⟺ (ζ, −1) ∈ N^P_{epi f}(x, f(x))
⟺ ∃ σ ≥ 0 and η > 0 such that for all y ∈ B_η(x) ∩ X, f(y) ≥ f(x) + ⟨ζ, y − x⟩ − σ∥y − x∥².

Such a ζ is said to be a proximal subgradient of f at x; see Figure 2.4.

Figure 2.4 Proximal subgradient (image generated by Dr. Chadi Nour)

Remark 2.2.14. Note the following properties of the proximal subdifferential.
• ∂_P f(x) is convex; however, it is not necessarily open, closed, or nonempty.
• For all c > 0, we have ∂_P(cf)(x) = c ∂_P f(x).
• ∂_P f(x) + ∂_P g(x) ⊂ ∂_P(f + g)(x).

Remark 2.2.15. Let U ⊂ X be open.
• If f is Gateaux differentiable at x ∈ U, then ∂_P f(x) ⊂ {f′_G(x)}.
• If f ∈ C²(U), then ∂_P f(x) = {f′(x)} for all x ∈ U.

Proposition 2.2.16. Assume, in addition, that X is a convex set. Then f is K-Lipschitz on X iff

‖ζ‖ ≤ K ∀ ζ ∈ ∂_P f(x), ∀ x ∈ X.

Limiting subgradient

In the following, X ⊂ R^n is an open set, f : R^n → R ∪ {+∞} is lsc on X with X ∩ dom f ≠ ∅, and x ∈ X ∩ dom f.

Definition 2.2.17. We denote by ∂_L f(x) the limiting subdifferential of f at x. We have

ζ ∈ ∂_L f(x) ⟺ (ζ, −1) ∈ N^L_{epi f}(x, f(x)).

Such a ζ is said to be a limiting subgradient of f at x. Equivalently, we have

∂_L f(x) := { lim_{i→+∞} ζ_i : ζ_i ∈ ∂_P f(x_i), x_i →_f x },

where x_i →_f x means that x_i → x and f(x_i) → f(x).

Remark 2.2.18. Note the following properties of the limiting subdifferential.
• ∂_L f(x) is closed for every x, and the multifunction ∂_L f(·) has a closed graph.
• For all c > 0, we have ∂_L(cf)(x) = c ∂_L f(x).
• If one of f, g is Lipschitz near x, then ∂_L(f + g)(x) ⊂ ∂_L f(x) + ∂_L g(x).

Clarke generalized gradient, Hessian and Jacobian

Definition 2.2.19. Assume that f : R^n → R ∪ {∞} has dom f closed with nonempty interior, and that f is locally Lipschitz on int(dom f). We denote by ∂f(x) the Clarke subdifferential, or Clarke generalized gradient, of f at x ∈ int(dom f). We have

ζ ∈ ∂f(x) ⟺ (ζ, −1) ∈ N_{epi f}(x, f(x)) ⟺ ⟨ζ, v⟩ ≤ f°(x; v) ∀ v ∈ R^n,

where

f°(x; v) := lim sup_{y→x, h↓0} [f(y + hv) − f(y)]/h.

Equivalently, we have

∂f(x) = conv ∂_L f(x),

and

∂f(x) = conv { lim_{i→+∞} ∇f(x_i) : x_i →_O x, ∇f(x_i) exists ∀ i },

where O is any full-measure subset of int(dom f).

Proposition 2.2.20. Take a lower semicontinuous function f : R^n → R ∪ {+∞} and a point x̄ ∈ R^n. Assume that f is Lipschitz continuous on a neighborhood of x̄ with Lipschitz constant K. Then ∂f(x̄) ⊂ K B.

Definition 2.2.21. Assume that f is C^{1,1} on int(dom f).
We denote by ∂²f(x) the Clarke generalized Hessian of f at x ∈ int(dom f). We have

∂²f(x) = conv { lim_{i→+∞} ∇²f(x_i) : x_i →_O x, ∇²f(x_i) exists ∀ i },

where O is any full-measure subset of int(dom f).

Remark 2.2.22. Notice that
• ∂f(·) and ∂²f(·) are locally bounded and measurable multifunctions with closed graph, and their values are nonempty, compact and convex.

Definition 2.2.23. Let g : R^n → R^n be a Lipschitz function near x ∈ R^n, i.e., Lipschitz on an open set Ω containing x. We denote by ∂g(x) the Clarke generalized Jacobian of g at x. We have

∂g(x) = conv { lim_{i→+∞} Jg(x_i) : x_i →_O x, Jg(x_i) exists ∀ i },

where O is any full-measure subset of Ω, and J is the Jacobian operator.

Remark 2.2.24. Notice that
• The multifunction ∂g(·) is measurable and has closed graph. Its values are nonempty, convex, and compact in the space of n × n matrices.

Non-standard notions of subdifferentials

In [70], and later in [55], Zeidan, Nour and Saoud extended the notions of limiting subdifferential and of Clarke generalized gradient, Hessian and Jacobian to nonstandard notions of subdifferentials that are strictly smaller than their standard counterparts.

Extended limiting subdifferential, Clarke generalized gradient, Hessian and Jacobian

Definition 2.2.25. Let f : R^n → R ∪ {∞} be a lsc function and S ⊂ dom f a closed set with int S ≠ ∅. For x ∈ cl(int S), we denote by ∂^L_ℓ f(x) the limiting subdifferential of f relative to int S at x, and we have:

∂^L_ℓ f(x) := { lim_{i→+∞} ζ_i : ζ_i ∈ ∂_P f(x_i), x_i ∈ int S, x_i →_f x }.

Remark 2.2.26. We have that
• The multifunction ∂^L_ℓ f(·) has closed graph and closed values.
• If f is Lipschitz on int S, then for any x ∈ cl(int S), ∂^L_ℓ f(x) is nonempty and compact.
• For all x ∈ S, we have ∂^L_ℓ f(x) ⊂ ∂_L f(x), and equality holds when x ∈ int S.

Definition 2.2.27. Assume that f : R^n → R ∪ {∞} is locally Lipschitz on int(dom f) ≠ ∅.
We denote by ∂_ℓ f(x) the extended Clarke generalized gradient of f at x ∈ cl(int(dom f)). We have

∂_ℓ f(x) = conv { lim_{i→+∞} ∇f(x_i) : x_i →_O x, ∇f(x_i) exists ∀ i },

where O is any full-measure subset of int(dom f).

Definition 2.2.28. Assume that f : R^n → R ∪ {∞} is C^{1,1} on int(dom f) ≠ ∅. We denote by ∂²_ℓ f(x) the extended Clarke generalized Hessian of f at x ∈ cl(int(dom f)). We have

∂²_ℓ f(x) = conv { lim_{i→+∞} ∇²f(x_i) : x_i →_O x, ∇²f(x_i) exists ∀ i },

where O is any full-measure subset of int(dom f).

Remark 2.2.29. Notice that
• ∂_ℓ f(·) and ∂²_ℓ f(·) are measurable multifunctions with closed graph, and their values are nonempty, compact and convex.
• We have ∂_ℓ f(x) ⊂ ∂f(x) and ∂²_ℓ f(x) ⊂ ∂²f(x), with equalities holding when x ∈ int(dom f).

Definition 2.2.30. Let g : R^n → R^n be a Lipschitz function on a closed set S ⊂ R^n. We denote by ∂_ℓ g(x) the extended Clarke generalized Jacobian of g at x ∈ S; it extends the Clarke generalized Jacobian to the boundary of S. We have

∂_ℓ g(x) = conv { lim_{i→+∞} Jg(x_i) : x_i →_O x, Jg(x_i) exists ∀ i },

where O is any full-measure subset of int S, and J is the Jacobian operator.

Remark 2.2.31. Notice that
• The multifunction ∂_ℓ g(·) is measurable and has closed graph. Its values are nonempty, convex, and compact in the space of n × n matrices.
• We have ∂_ℓ g(x) ⊂ ∂g(x), with equality holding when x ∈ int S.

Definition 2.2.32. Let h : R^n → R be C^{1,1} on an open set containing a closed set S ⊂ R^n. We denote by ∂²_ℓ h(x) the Clarke generalized Hessian of h relative to int S at x ∈ S. We have

∂²_ℓ h(x) = conv { lim_{i→+∞} ∇²h(x_i) : x_i →_O x, ∇²h(x_i) exists ∀ i },

where O is any full-measure subset of int S.

Remark 2.2.33. Notice that
• ∂²_ℓ h(·) is a measurable multifunction with closed graph, and its values are nonempty, compact and convex.
• We have ∂²_ℓ h(x) ⊂ ∂²h(x), with equality holding when x ∈ int S.
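As a concrete illustration of the gradient-limit formula for the Clarke generalized gradient, the following minimal Python sketch (the choice f(x) = |x|, the sampling radius 1e-6, and the sample count are illustrative assumptions, not taken from the text) samples ∇f at nearby points where the gradient exists; the limits of these gradients are ±1, so their convex hull gives ∂f(0) = [−1, 1], consistent with the support-function characterization ⟨ζ, v⟩ ≤ f°(0; v) = |v|.

```python
import random

def grad_f(x):
    # Gradient of f(x) = |x|, which exists off the null set {0}:
    # grad_f(x) = sign(x).
    return 1.0 if x > 0 else -1.0

random.seed(0)
# Sample gradients at points x_i -> 0 taken from a full-measure set
# (almost every point avoids {0}); per the gradient-limit formula,
# conv of the limit gradients is the Clarke generalized gradient at 0.
samples = [grad_f(random.uniform(-1e-6, 1e-6)) for _ in range(1000)]
lo, hi = min(samples), max(samples)
print(lo, hi)  # -1.0 1.0, so conv{-1, 1} = [-1, 1] = ∂f(0)

# Sanity check of the support-function characterization
# <ζ, v> <= f°(0; v) = |v| for several ζ in [-1, 1] and directions v:
ok = all(z * v <= abs(v) + 1e-12
         for z in (-1.0, -0.5, 0.0, 0.5, 1.0)
         for v in (-2.0, -1.0, 0.3, 1.7))
print(ok)  # True
```

The same sampling idea applies verbatim to the extended objects above: restricting the points x_i to int S instead of all of R^n yields the relative (extended) versions.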
Prox-regular sets

We proceed to define the φ0-convexity property and the prox-regularity of a set. A detailed analysis of this may be found in [21, 29]. For other related properties, we refer the reader to [60, 61, 62, 8, 9] and the references therein.

Definition 2.2.34. Suppose S ⊂ R^n is closed. S is said to be φ-convex, where φ is a continuous function from S to [0, +∞), if

⟨ζ, y − x⟩ ≤ φ(x)‖ζ‖‖y − x‖²

for all x ∈ bdry S, y ∈ S and 0 ≠ ζ ∈ N^P_S(x).

Definition 2.2.35. Suppose S ⊂ R^n is closed. S is said to be φ0-convex if we can find φ0 ≥ 0 such that

⟨ζ, y − x⟩ ≤ φ0 ‖ζ‖‖y − x‖², (2.7)

for all x ∈ bdry S, y ∈ S and 0 ≠ ζ ∈ N^P_S(x).

Remark 2.2.36. We remark that S is φ-convex iff S is locally φ0-convex.

Definition 2.2.37. Let S ⊂ R^n be a closed set. We say that S is r-proximally smooth, or r-prox-regular, iff there exists r > 0 such that, for all x ∈ bdry S and ζ ∈ N^P_S(x),

⟨ζ, y − x⟩ ≤ (1/2r) ‖ζ‖‖y − x‖² ∀ y ∈ S.

Equivalently, S is r-proximally smooth if and only if, for all x ∈ bdry S and 0 ≠ ζ ∈ N^P_S(x), ζ is realized by an r-sphere, i.e.,

B(x + r ζ/‖ζ‖; r) ∩ S = ∅.

Remark 2.2.38. Notice the following:
• For φ0 > 0, S is φ0-convex ⟺ S is 1/(2φ0)-prox-regular (or has positive reach with radius 1/(2φ0)); see Figure 2.5.
• S is convex ⟺ S is 0-convex ⟺ S is r-prox-regular for all r > 0.

Figure 2.5 φ0-convexity (image generated by Dr. Chadi Nour)

Proposition 2.2.39. Let S be an r-prox-regular set in R^n, with r > 0. Then we have:
(i) [21, Corollary 4.15] For all x ∈ S, N_S(x) = N^P_S(x) = N^L_S(x), and, for all x ∈ bdry S, we have N^P_S(x) ≠ {0}.
(ii) [21, Theorem 4.8] Let r′ ∈ (0, r). Then π_S(·) is Lipschitz of rank r/(r − r′) on {u ∈ R^n : 0 < d_S(u) < r′}, where π_S(·) is the projection map onto S.
(iii) The normal cone N^P_S(·) is hypomonotone, i.e.,
for every x1, x2 ∈ S and every pair of unit vectors ξ1 ∈ N^P_S(x1), ξ2 ∈ N^P_S(x2), we have

⟨ξ2 − ξ1, x2 − x1⟩ ≥ −(1/r) ‖x2 − x1‖². (2.8)

The following result holds for uniformly prox-regular sets S(t). It is an extension of a special case of [55, Lemma 3.2] from a constant compact set S to the case of set-valued maps S(·) with non-compact values. It requires N_{S(·)}(·) to have closed graph.

Lemma 2.2.40. Let S(·) : [0, T] ⇝ R^n be such that, for all t ∈ [0, T], S(t) is nonempty, closed, and uniformly ρ*-prox-regular, for some ρ* > 0. Let x̄ ∈ C([0, T], R^n) with x̄(t) ∈ S(t) for all t ∈ [0, T]. Assume that, for some δ ∈ (0, ρ*), the map (t, y) → N_{S(t)}(y) has closed graph on the domain Gr(S(·) ∩ B̄_δ(x̄(·))). Then, the following holds.
(i) Let t ∈ [0, T] and y ∈ S(t) ∩ B̄_δ(x̄(t)). Then N^P_{S(t)}(y) ∩ −N^P_{B̄_δ(x̄(t))}(y) = {0}.
(ii) There exists ρ_δ > 0 such that, for all t ∈ [0, T], the set S(t) ∩ B̄_δ(x̄(t)) is ρ_δ-prox-regular.
(iii) For t ∈ [0, T], π(t, ·) := π_{S(t) ∩ B̄_δ(x̄(t))}(·) is well-defined on (S(t) ∩ B̄_δ(x̄(t))) + ρ_δ B and 2-Lipschitz on (S(t) ∩ B̄_δ(x̄(t))) + (ρ_δ/2) B̄.

Proof. (i)-(ii): The results are derived by following the proof of [55, Lemma 3.2], where, for t ∈ [0, T], we take S := S(t) and x := x̄(t); hence, N_x and ρ_x there are, respectively, N_t := N_{x̄(t)} and ρ_t := ρ_{x̄(t)}. It follows that ρ there is now ρ̂ := inf{ρ_t : t ∈ [0, T]}. Following the rest of the proof there, and employing that the map (t, y) → N_{S(t)}(y) has closed graph on the domain Gr(S(·) ∩ B̄_δ(x̄(·))), we conclude that ρ̂ > 0. Thus, for all t ∈ [0, T], we deduce that S(t) ∩ B̄_δ(x̄(t)) is ρ_δ-prox-regular, where ρ_δ := ρ* ρ̂.
(iii): It follows from (ii) and Proposition 2.2.39(ii). □

We now present Theorem 9.1 in [2].

Theorem 2.2.41. Let g_k : [0, T] × R^n → R, with k = 1, …, m, be functions such that, for each t ∈ [0, T], the set

S(t) = {x ∈ R^n : g_1(t, x) ≤ 0, …, g_m(t, x) ≤ 0} (2.9)

is nonempty.
Assume that there exists some ρ ∈ ]0, +∞] such that:
(i) for all t ∈ [0, T] and all k ∈ {1, …, m}, g_k(t, ·) is of class C¹ on {x ∈ R^n : d(x, S(t)) < ρ};
(ii) there exists a real γ > 0 such that, for all t ∈ [0, T], all x ∈ bdry S(t), all y ∈ {y ∈ R^n : d(y, S(t)) < ρ}, and all k ∈ {1, …, m} with g_k(t, x) = 0,

⟨∇g_k(t, ·)(y) − ∇g_k(t, ·)(x), y − x⟩ ≥ −γ ‖y − x‖².

Assume also that there is a real δ > 0 such that, for any (t, x) ∈ [0, T] × R^n with x ∈ bdry S(t) and any ζ ∈ conv{∇g_k(t, ·)(x) : k ∈ K(t, x)}, where K(t, x) := {k ∈ {1, …, m} : g_k(t, x) = 0}, there exists v(t, x, ζ) ∈ B̄ satisfying ⟨ζ, v(t, x, ζ)⟩ ≤ −δ. Then, for all t ∈ [0, T], the set S(t) is r-prox-regular with r = min{ρ, δ/γ}.

Amenable and epi-Lipschitzian sets

The following definitions and properties can be found in [22, 62].

Definition 2.2.42. Let S ⊂ R^n. The set S is amenable at one of its points x̄ if there exist an open neighborhood V of x̄, a C¹ mapping F from V into a space R^m, and a closed convex set D ⊂ R^m such that

S ∩ V = {x ∈ V | F(x) ∈ D}, (2.10)

with

the only vector y ∈ N_D(F(x̄)) with ∇F(x̄)^T y = 0 being y = 0. (2.11)

Lemma 2.2.43. Let S = {x ∈ X | F(x) ∈ D} for closed convex sets X, D and a C¹ mapping F. The set S is amenable at any of its points x̄ where the constraint qualification holds, namely, that the only y ∈ N_D(F(x̄)) with −∇F(x̄)^T y ∈ N_X(x̄) is y = 0.

Definition 2.2.44. For a strictly continuous function f : R^n → R, let S = {x | f(x) ≤ ᾱ} and consider a point x̄ of S with f(x̄) = ᾱ. S is said to have an epi-Lipschitzian boundary at x̄ if

0 ∉ conv ∂f(x̄). (2.12)

Lemma 2.2.45. Let S := {x : f(x) ≤ 0}, where f : R^n → R is Lipschitz near x and 0 ∉ ∂f(x). Then, S is epi-Lipschitzian at x.

Lemma 2.2.46. (i) A set S ⊂ R^n with boundary point x̄ is epi-Lipschitzian at x̄ if and only if S is locally closed at x̄ and the normal cone N_S(x̄) is pointed, i.e., N_S(x̄) ∩ −N_S(x̄) = {0}.
(ii) If the set S is epi-Lipschitzian at every x ∈ S, then S = cl int S.

Lemma 2.2.47. [57, Remark 4.8(ii)] If the lower semicontinuous multifunction F has closed and r-prox-regular values, for some r > 0 (as opposed to convex), then

conv( N̄^L_{F(t)}(·) ) = N^P_{F(t)}(·) = N^L_{F(t)}(·) = N_{F(t)}(·),

and this cone is pointed at x ∈ F(t) if and only if F(t) is epi-Lipschitzian at x. Here, N̄^L_{F(t)}(y) stands for the graphical closure at (t, y) of the multifunction (t, y) ↦ N^L_{F(t)}(y); that is, the graph of N̄^L_{F(·)}(·) is the closure of the graph of N^L_{F(·)}(·).

Sub-level sets of a function

The following is adapted from Lemmas 3.3-3.4 and Theorem 3.1 in [55], and Proposition 3.1 in [70].

Lemma 2.2.48. Let S be a nonempty set given by

S := {x ∈ R^n : ψ(x) ≤ 0}, (2.13)

where ψ is C^{1,1} on S + ρB for some ρ > 0, ψ is coercive (i.e., lim_{‖x‖→∞} ψ(x) = +∞) or S is bounded, and there is a constant η > 0 such that

ψ(x) = 0 ⟹ ‖∇ψ(x)‖ > 2η.

Part I. Let 2M_ψ be the Lipschitz constant of ∇ψ(·) over the compact set S + (ρ/2)B̄, where M_ψ ≥ 4η/ρ. Then,
(i) bdry S ≠ ∅ and bdry S = {x ∈ R^n : ψ(x) = 0}.
(ii) int S ≠ ∅ and int S = {x ∈ R^n : ψ(x) < 0}.
(iii) The nonempty set S is compact, amenable (in the sense of [62]), epi-Lipschitzian,

S = cl(int S), (2.14)

and S is η/M_ψ-prox-regular.
(iv) For all x ∈ bdry S we have

N_S(x) = N^P_S(x) = N^L_S(x) = {λ∇ψ(x) : λ ≥ 0}. (2.15)

Part II. For k ∈ N, we define the set S(k) by

S(k) := {x ∈ S : ψ(x) ≤ −α_k}, (2.16)

where (α_k)_k is the real sequence defined by

α_k := (1/γ_k) ln( ηγ_k/(2M) ), k ∈ N,

M > 0 is a positive constant, and (γ_k)_k is a sequence satisfying γ_k > 2M/η for all k ∈ N and γ_k → ∞ as k → ∞.
Then, we have
(i) For all k, the set S(k) ⊂ int S and S(k) is compact.
(ii) bdry S(k) = {x ∈ R^n : ψ(x) = −α_k} and int S(k) = {x ∈ R^n : ψ(x) < −α_k} for k sufficiently large.
(iii) int S(k) is nonempty, S(k) is amenable, epi-Lipschitzian, η/(2M_ψ)-prox-regular, S(k) = cl(int S(k)), and, for all x ∈ bdry S(k),

N_{S(k)}(x) = N^P_{S(k)}(x) = N^L_{S(k)}(x) = {λ∇ψ(x) : λ ≥ 0}. (2.17)

(iv) There exist r_o > 0 and k̄ ∈ N such that

[S ∩ B̄_{r_o}(c)] − ρ_k ∇ψ(c)/‖∇ψ(c)‖ ⊂ int S(k), ∀ k ≥ k̄ and ∀ c ∈ bdry S. (2.18)

In particular, we have

c − ρ_k ∇ψ(c)/‖∇ψ(c)‖ ∈ int S(k), ∀ k ≥ k̄ and ∀ c ∈ bdry S. (2.19)

Other important results

The following theorem can be found in [66, Theorem 3.3.1].

Theorem 2.2.49 (Ekeland variational principle). Take a complete metric space (X, d(·, ·)), a lower semicontinuous function f : X → R ∪ {+∞}, a point x_0 ∈ dom f, and numbers α > 0 and λ > 0. Assume that

f(x_0) ≤ inf_{x∈X} f(x) + λα.

Then there exists x̄ ∈ X such that
(i) f(x̄) ≤ f(x_0),
(ii) d(x_0, x̄) ≤ λ,
(iii) f(x̄) ≤ f(x) + α d(x, x̄) for all x ∈ X.

The following result can be found in [47].

Lemma 2.2.50. Let

F(x) = max_{1≤i≤m} f_i(x), for x ∈ X, (2.20)

and define the smooth approximation

F_p(x) = (1/p) ln( Σ_{i=1}^{m} exp(p f_i(x)) ). (2.21)

Then, for x ∈ X, F_p(x) is monotonically decreasing in p, and the following inequality holds:

F(x) ≤ F_p(x) ≤ F(x) + ln(m)/p. (2.22)

2.3 Differential equations, set-valued analysis, and control theory

Existence of ODE

Theorem 2.3.1 (Existence and Uniqueness for ODE). Consider the (IVP) system

ẋ(t) = f(t, x), x(t_0) = x_0.

If f is continuous in (t, x) on the rectangle D = {(t, x) : t_0 − δ < t < t_0 + δ, x_0 − b < x < x_0 + b}, and f(t, x) is Lipschitz with respect to x on R = {(t, x) : t_0 − a < t < t_0 + a, x_0 − b < x < x_0 + b, a < δ}, then the solution of the (IVP) in R exists and is unique.

The following is found in [39, Theorem 5.3].

Definition 2.3.2 (Carathéodory function). Suppose D is an open set in R^{n+1}.
The function f : D → R^n is said to satisfy the Carathéodory conditions on D if:
• f(t, x) is measurable in t for each fixed x;
• f(t, x) is continuous in x for each fixed t;
• for each compact set U ⊂ D, there exists an integrable function m_U(t) such that

|f(t, x)| ≤ m_U(t), (t, x) ∈ U. (2.23)

Theorem 2.3.3 (Existence and Uniqueness for ODE). Suppose D is an open set in R^{n+1}. Assume that the function f : D → R^n satisfies the Carathéodory conditions on D (see Definition 2.3.2). Additionally, assume that, for each compact set U ⊂ D, there exists an integrable function k_U(t) such that

|f(t, x) − f(t, y)| ≤ k_U(t)|x − y|, (t, x), (t, y) ∈ U. (2.24)

Then, for any (t_0, x_0) ∈ D, there exists a unique solution x(t, t_0, x_0) of the initial value problem

ẋ = f(t, x), x(t_0) = x_0, (2.25)

passing through (t_0, x_0). The domain E of definition of x(t, t_0, x_0) in R^{n+2} is open, and x(t, t_0, x_0) is continuous in E.

Existence of optimal solutions for optimal control problems

The following is Theorem 23.10 in [19].

Theorem 2.3.4 (Existence of an optimal solution for an optimal control problem).

Minimize J(x, u) = ℓ(x(a), x(b))
subject to ẋ(t) = f(t, x(t), u(t)) a.e.,
u(t) ∈ U(t) a.e.,
(t, x(t)) ∈ Q ∀ t ∈ [a, b],
(x(a), x(b)) ∈ E. (OC1)

Let the data of (OC1) satisfy the following hypotheses:
(a) f(t, x, u) is continuous in (x, u) and measurable in t;
(b) U(·) is measurable and compact-valued;
(c) f has linear growth on Q: there is a summable function M such that

(t, x) ∈ Q, u ∈ U(t) ⟹ |f(t, x, u)| ≤ M(t)(1 + |x|);

(d) for each (t, x) ∈ Q, the set f(t, x, U(t)) is convex;
(e) the sets Q and E are closed, and ℓ : R^n × R^n → R is lower semicontinuous;
(f) the following set is bounded: {α ∈ R^n : (α, β) ∈ E for some β ∈ R^n}.

Then, if there is at least one admissible process for the problem, the problem admits a solution.

Other results

We now present the Filippov Selection Theorem (see Theorem 2.3.13 in [66]).

Theorem 2.3.5 (Filippov Selection Theorem). Let T > 0.
Consider a nonempty multifunction X : [0, T] ⇝ R^s, a function H : [0, T] × R^s → R^d, and a function v(·) : [0, T] → R^d satisfying:
(i) the set Gr X is L × B_s measurable;
(ii) the function H is L × B_s measurable;
(iii) the function v(·) is a measurable function such that v(t) ∈ {H(t, λ) : λ ∈ X(t)} a.e.

Then, there exists a measurable function λ : [0, T] → R^s such that

λ(t) ∈ X(t) a.e. and H(t, λ(t)) = v(t) a.e.

2.4 Functional analysis

We first start with general notations and concepts in functional analysis.
• For S ⊂ R^n compact, C(S; R^n) denotes the set of continuous functions from S to R^n.
• The class of all functions Lipschitz on S with Lipschitz constant k ≥ 0 is denoted by Lip_k(S).
• C^{1,1}([a, b]; R^n) is the space of continuously differentiable functions f whose derivative ḟ is Lipschitz continuous.
• The set of all absolutely continuous functions f : [a, b] → R^n is denoted by AC([a, b]; R^n).
• We say that a function f is absolutely continuous on [a, b] if, for every ε > 0, there exists δ > 0 such that, whenever a finite sequence of pairwise disjoint sub-intervals (a_i, b_i) of [a, b] with a_i < b_i satisfies Σ_{i=1}^{N} (b_i − a_i) < δ, then

Σ_{i=1}^{N} ‖f(b_i) − f(a_i)‖ < ε.

• Equivalently, we say that f is absolutely continuous if f has a derivative ḟ a.e., ḟ is Lebesgue integrable, and

f(t) = f(a) + ∫_a^t ḟ(s) ds, ∀ t ∈ [a, b].

• The set of all functions f : [a, b] → R^n of bounded variation is denoted by BV([a, b]; R^n).
• The total variation of f is given by

V_a^b(f) = sup_{P∈𝒫} Σ_{i=0}^{n_P − 1} ‖f(x_{i+1}) − f(x_i)‖,

where the supremum is taken over the set 𝒫 = {P = {x_0, …, x_{n_P}} : P is a partition of [a, b] satisfying x_i ≤ x_{i+1} for 0 ≤ i ≤ n_P − 1} of all partitions of the interval considered.
• If f is differentiable and its derivative is Riemann-integrable, then its total variation is

V_a^b(f) = ∫_a^b |f′(x)| dx.

• For a function f, we say that f ∈ BV([a, b]; R^n) ⟺ V_a^b(f) < +∞.
• The Lebesgue space of p-integrable functions f : [a, b] → R^n is denoted by L^p([a, b]; R^n). The norms in L^p([a, b]; R^n) and L^∞([a, b]; R^n) (or C([a, b]; R^n)) are written as ‖·‖_p and ‖·‖_∞, respectively, where for f ∈ L^p([a, b]; R^n),

‖f‖_p = ( ∫_a^b |f|^p dx )^{1/p},

and for f ∈ L^∞([a, b]; R^n),

‖f‖_∞ = inf{C ≥ 0 : |f(x)| ≤ C a.e. x ∈ [a, b]}.

• The Sobolev space W^{1,p}([a, b]; R^n) denotes the set of continuous functions f : [a, b] → R^n having ḟ ∈ L^p([a, b]; R^n). More specifically:
  • If f ∈ W^{1,1}([a, b]; R^n), then f is continuous and ḟ ∈ L^1([a, b]; R^n). Hence, W^{1,1}([a, b]; R^n) is the set of all absolutely continuous functions from [a, b] to R^n.
  • If f ∈ W^{1,2}([a, b]; R^n), then f is continuous and ḟ ∈ L^2([a, b]; R^n). The norm on W^{1,2}([a, b]; R^n) is ‖f(·)‖_{W^{1,2}} := ‖f(·)‖_∞ + ‖ḟ(·)‖_2.
• Denote by M(S), M^+(S), and M^1_+(S), respectively, the set of Radon, positive Radon, and probability measures on S. Note that, by a Radon measure on S, we mean a finite regular measure on B(S), the σ-algebra generated by the Borel subsets of S.
• The space C*([a, b]; R^n) denotes the dual of C([a, b]; R^n) equipped with the supremum norm. C*([a, b]; R^n) consists of all bounded linear functionals from C([a, b]; R^n) to R.
• We denote by ‖·‖_{T.V.} the induced norm on C*([a, b]; R^n).
• For ν ∈ C*([a, b]; R^n), its support is denoted by supp{ν}; it is the smallest closed subset A ⊂ [a, b] with the property that, for all relatively open subsets B ⊂ [a, b] \ A, we have ν(B) = 0.
• By the Riesz representation theorem, each element of C*([a, b]; R) can be interpreted as an element of M([a, b]), the space of finite signed Radon measures on [a, b] equipped with the weak* topology. In other words, every bounded linear functional Λ on C([a, b]; R) is represented as an integral against a finite signed Radon measure ν:

Λ(f) = ∫_a^b f(x) dν(x), and ‖Λ‖ = ‖ν‖_{T.V.}
• The set of elements of C*([a, b]; R) taking non-negative values on nonnegative-valued functions in C([a, b]; R) is denoted by C⊕(a, b).
• For ν ∈ C⊕(a, b), ‖ν‖_{T.V.}, as defined above, coincides with the total variation of ν, i.e.,

‖ν‖_{T.V.} = ∫_{[a,b]} ν(ds).

Important results

We start with Gronwall's Lemma; see [66, Lemma 2.4.4].

Lemma 2.4.1. Take an absolutely continuous function z : [S, T] → R^n. Assume that there exist nonnegative integrable functions k and v such that

| (d/dt) z(t) | ≤ k(t)|z(t)| + v(t) a.e. t ∈ [S, T].

Then

|z(t)| ≤ exp( ∫_S^t k(σ) dσ ) [ |z(S)| + ∫_S^t exp( −∫_S^τ k(σ) dσ ) v(τ) dτ ]

for all t ∈ [S, T].

The following lemma can be found in [69, equation (3.1)].

Lemma 2.4.2. If the function W(·, ·) is Lipschitz and x(·) is an absolutely continuous arc, then W(·, x(·)) is absolutely continuous, and we have

(d/dt) W(t, x(t)) ∈ ∂W(t, x(t)) · (1, ẋ(t)) a.e.

The following can be found in [43, Theorem 1]; it states that a function that is Lipschitz on S ⊂ E can be extended to the whole space E while preserving the Lipschitz constant.

Theorem 2.4.3. Let S ⊂ E be non-empty. If f ∈ Lip_k(S), then f_{S,k} ∈ Lip_k(E) and coincides with f on S, where

f_{S,k}(x) = inf_{u∈S} {f(u) + k‖x − u‖} for all x ∈ E. (2.26)

Lemma 2.4.4. Let S ⊂ R^n be a compact set and f : S → R ∪ {∞} a lower semicontinuous function, and assume there exists x_0 ∈ S such that f(x_0) < ∞. Then inf_{x∈S} f(x) exists and is finite.

We now present the Arzelà–Ascoli theorem.

Theorem 2.4.5 (Arzelà–Ascoli theorem). Let {f_k} be a sequence of continuous functions on [0, T]. If {f_k} is uniformly bounded and uniformly equicontinuous, then there exists a subsequence of {f_k} (we do not relabel) that converges uniformly to a function f.

Theorem 2.4.6 (Helly theorems). Let {f_k} be a sequence of functions of bounded variation on [a, b].
Assume there is a constant M such that V_a^b(f_k) ≤ M and ‖f_k‖_∞ ≤ M for all k. Then:
(i) Helly's first theorem. There is a subsequence of {f_k} which converges pointwise everywhere to a function f of bounded variation, with V_a^b(f) ≤ M and ‖f‖_∞ ≤ M.
(ii) Helly's second theorem. We have:

∫_a^b g df_k → ∫_a^b g df for all g ∈ C([a, b]).

Strong convergence, weak convergence, weak* convergence

A significant portion of this section is adapted from Evans' lecture notes [37], with further reference to his textbook [38].

Definition 2.4.7. Let p ∈ [1, ∞]. We say that a sequence {f_k} converges strongly to f in L^p if ‖f_k − f‖_p → 0 as k → ∞.

Definition 2.4.8 (When p ∈ [1, ∞)). Let U be an open, bounded, smooth subset of R^n, with n ≥ 2. We assume that 1 ≤ p < ∞, and let q be the conjugate exponent, i.e., 1/p + 1/q = 1 (q := ∞ when p = 1). A sequence {f_k} ⊂ L^p(U) converges weakly to f ∈ L^p(U), in which case we write f_k ⇀ f in L^p(U), if

∫_U f_k g dx → ∫_U f g dx, ∀ g ∈ L^q(U).

Definition 2.4.9 (When p = ∞). Let U be an open, bounded, smooth subset of R^n, with n ≥ 2. A sequence {f_k} ⊂ L^∞(U) converges weakly* to f ∈ L^∞(U), in which case we write f_k ⇀* f in L^∞(U), if

∫_U f_k g dx → ∫_U f g dx, ∀ g ∈ L^1(U).

Theorem 2.4.10 (Boundedness of weakly converging sequences). Suppose 1 ≤ p < ∞ and f_k ⇀ f in L^p(Ω) (⇀* in L^∞(Ω) if p = ∞). Then {f_k} is bounded in L^p(Ω) and

‖f‖_{L^p(Ω)} ≤ lim inf_{k→∞} ‖f_k‖_{L^p(Ω)}.

Theorem 2.4.11 (Weak convergence in L^p). Suppose 1 < p < ∞ and the sequence {f_k}_{k≥1} is bounded in L^p(U). Then there is a subsequence, still denoted by {f_k}_{k≥1}, and a function f ∈ L^p(U) such that f_k ⇀ f in L^p(U). If p = ∞, the result still holds with ⇀ replaced by ⇀*.

Theorem 2.4.12. Let {f_k} be a sequence of functions that converges pointwise to f and is uniformly bounded in L^∞, i.e., there exists M > 0 such that ‖f_k‖_{L^∞} ≤ M for all k. Suppose that {A_k} is a sequence in L^2 that converges weakly to A in L^2, i.e., A_k ⇀ A in L^2.
Then the sequence {A_k f_k} converges weakly to Af in L^2, i.e., A_k f_k ⇀ Af in L^2.

We now prove the following theorem.

Theorem 2.4.13. Let {f_k}_k be a sequence of functions in W^{1,2}([0, T]; R^n) (respectively W^{1,∞}([0, T]; R^n)) such that ‖f_k‖_∞ ≤ M and ‖ḟ_k‖_2 ≤ M (respectively ‖ḟ_k‖_∞ ≤ M). Then, along a subsequence (we do not relabel), there exists a function f ∈ W^{1,2}([0, T]; R^n) (respectively f ∈ W^{1,∞}([0, T]; R^n)) such that f_k(·) → f(·) uniformly and ḟ_k(·) ⇀ ḟ(·) weakly in L^2 (respectively ḟ_k(·) ⇀* ḟ(·) weakly* in L^∞).

Proof. Let ε > 0 and let 0 ≤ t_1 ≤ t_2 ≤ T be such that t_2 − t_1 ≤ ε²/M² (respectively ≤ ε/M). Hence, for every k, we have that

‖f_k(t_2) − f_k(t_1)‖ = ‖ ∫_{t_1}^{t_2} ḟ_k(s) ds ‖ ≤ ∫_{t_1}^{t_2} ‖ḟ_k(s)‖ ds ≤ √(t_2 − t_1) ( ∫_{t_1}^{t_2} ‖ḟ_k(s)‖² ds )^{1/2} ≤ √(t_2 − t_1) M ≤ ε

(respectively ≤ (t_2 − t_1) ‖ḟ_k‖_∞ ≤ (t_2 − t_1) M ≤ ε).

This shows that (f_k(·))_k is equicontinuous. In addition, (f_k(·))_k is uniformly bounded. We deduce from the Arzelà–Ascoli theorem (Theorem 2.4.5) that, along a subsequence of f_k(·) (we do not relabel), f_k converges uniformly to a continuous function f. Since (ḟ_k(·))_k is uniformly bounded in L^2 (respectively in L^∞), we can extract a further subsequence along which ḟ_k converges weakly in L^2 to a limit g (respectively weakly* in L^∞); see Theorem 2.4.11. Now, for such a subsequence (we do not relabel), we have

f_k(t) = f_k(0) + ∫_0^t ḟ_k(s) ds → f(0) + ∫_0^t g(s) ds as k → ∞,

and we deduce that

f(t) = f(0) + ∫_0^t g(s) ds,

that is, f(·) is absolutely continuous and ḟ(t) = g(t) a.e. t ∈ [0, T]. □

The following is [66, Proposition 9.2.1].

Theorem 2.4.14 (Convergence of measures). Take a weak* convergent sequence {µ_i} in C⊕(S, T), a sequence of Borel measurable functions γ_i : [S, T] → R^n, and a sequence of closed sets {A_i} in [S, T] × R^n. Take also a closed set A in [S, T] × R^n and a measure µ ∈ C⊕(S, T).
Assume that A(t) is convex for each t ∈ dom A(·) and that the sets A and A_1, A_2, … are uniformly bounded. Assume further that

lim sup_{i→∞} A_i ⊂ A, γ_i(t) ∈ A_i(t) µ_i-a.e. for i = 1, 2, …, and µ_i ⇀* µ_0 weakly*.

Define η_i ∈ C*([S, T]; R^k) by

η_i(dt) := γ_i(t) µ_i(dt).

Then, along a subsequence,

η_i ⇀* η_0 weakly*,

for some η_0 ∈ C*([S, T]; R^k) such that η_0(dt) = γ_0(t) µ_0(dt), in which γ_0 is a Borel measurable function that satisfies γ_0(t) ∈ A(t) µ_0-a.e.

The following is Theorem 6.39 in [19].

Theorem 2.4.15 (Weak-closure theorem). Let [a, b] be an interval in R and Q a closed subset of [a, b] × R^n. Let Γ(t, u) be a multifunction mapping Q to the closed convex subsets of R^n. We assume that:
(a) for each t ∈ [a, b], the set G(t) = {(u, z) : (t, u, z) ∈ Q × R^n, z ∈ Γ(t, u)} is closed and nonempty;
(b) for every measurable function u on [a, b] satisfying (t, u(t)) ∈ Q a.e. and every p ∈ R^n, the support function map t → H_{Γ(t,u(t))}(p) = sup{⟨p, v⟩ : v ∈ Γ(t, u(t))} is measurable;
(c) for a summable function k, we have Γ(t, u) ⊂ B(0, k(t)) for all (t, u) ∈ Q.

Let u_i be a sequence of measurable functions on [a, b] having (t, u_i(t)) ∈ Q a.e. and converging almost everywhere to u_*, and let z_i : [a, b] → R^n be a sequence of functions satisfying |z_i(t)| ≤ k(t) a.e. whose components converge weakly in L^1(a, b) to those of z_*. Suppose that, for certain measurable subsets Ω_i of [a, b] satisfying lim_{i→∞} meas Ω_i = b − a, we have

z_i(t) ∈ Γ(t, u_i(t)) + B(0, r_i(t)), t ∈ Ω_i a.e.,

where r_i is a sequence of nonnegative functions converging in L^1(a, b) to 0. Then we have in the limit

z_*(t) ∈ Γ(t, u_*(t)), t ∈ [a, b] a.e.

CHAPTER 3
STUDY OF A COUPLED SWEEPING PROCESS DYNAMIC (D)

In this chapter, we study the following dynamic (D), given by a sweeping process coupled with a differential equation:

(D)  ẋ(t) ∈ f(t, x(t), y(t), u(t)) − N_{C(t)}(x(t)), a.e. t ∈ [0, T],
     ẏ(t) = g(t, x(t), y(t), u(t)), a.e.
t ∈ [0, T], (3.1)

where T > 0 is fixed, f : [0, T] × R^n × R^l × R^m → R^n, g : [0, T] × R^n × R^l × R^m → R^l, C(t) is the intersection of the zero-sublevel sets of a finite family of functions ψ_i(t, ·), where ψ_i : [0, T] × R^n → R, i = 1, …, r, N_{C(t)} is the Clarke normal cone to C(t), U(·) : [0, T] ⇝ R^m is a nonempty, closed, Lebesgue-measurable set-valued map, and the set of control functions U is defined by

U := {u : [0, T] → R^m : u is measurable and u(t) ∈ U(t), a.e. t ∈ [0, T]}. (3.2)

We first introduce the following assumptions on C(·) and U(·), which will be used at different points of the chapter.

(A1) Assumption on U(·): The measurable set-valued map U(·) has compact images.

(A2) Assumption on C(·): For t ∈ [0, T], the set C(t) is nonempty, closed, uniformly ρ-prox-regular for some ρ > 0, and is given by

C(t) := ∩_{i=1}^{r} C_i(t), where C_i(t) := {x ∈ R^n : ψ_i(t, x) ≤ 0} ⊂ R^n, (3.3)

where (ψ_i)_{1≤i≤r} is a family of continuous functions ψ_i : [0, T] × R^n → R.

We shall use the following notations. For x(·) ∈ C([0, T]; R^n) such that x(t) ∈ C(t) for all t ∈ [0, T], and for (τ, z) ∈ Gr C(·), we define

I^-_i(x) := {t ∈ [0, T] : x(t) ∈ int C_i(t)} and I^0_i(x) := [0, T] \ I^-_i(x), ∀ i = 1, …, r, (3.4)

I^-(x) := ∩_{i=1}^{r} I^-_i(x) = {t ∈ [0, T] : x(t) ∈ int C(t)}, (3.5)

I^0(x) := {t ∈ [0, T] : x(t) ∈ bdry C(t)} = [0, T] \ I^-(x) = {t : I^0_{(t,x(t))} ≠ ∅}, (3.6)

where

I^0_{(τ,z)} := {i ∈ {1, …, r} : ψ_i(τ, z) = 0}. (3.7)

We now introduce some local assumptions on C(·), f and g. For a given pair (x̄, ȳ) ∈ C([0, T]; R^n × R^l) such that x̄(t) ∈ C(t) for all t ∈ [0, T], and for a constant δ̄ > 0, we say that the following assumptions hold true at ((x̄, ȳ); δ̄) if the corresponding conditions hold true.
(A3) Local assumptions on the functions ψ_i at (x̄; δ̄):

(A3.1) There exist ρ_o > 0 and L_ψ > 0 such that, for each i, ∇_x ψ_i(·, ·) exists on Gr(C(·) ∩ B̄_δ̄(x̄(·))) + {0} × ρ_o B, and ψ_i(·, ·) and ∇_x ψ_i(·, ·) satisfy, for all (t_1, x_1), (t_2, x_2) ∈ Gr(C(·) ∩ B̄_δ̄(x̄(·))) + {0} × (ρ_o/2) B̄,

max{ |ψ_i(t_1, x_1) − ψ_i(t_2, x_2)|, ‖∇_x ψ_i(t_1, x_1) − ∇_x ψ_i(t_2, x_2)‖ } ≤ L_ψ( |t_1 − t_2| + ‖x_1 − x_2‖ ).

(A3.2) For every t ∈ I^0(x̄), the following constraint qualification holds at x̄(t):

[ Σ_{i∈I^0_{(t,x̄(t))}} λ_i ∇_x ψ_i(t, x̄(t)) = 0, with each λ_i ≥ 0 ] ⟹ [ λ_i = 0, ∀ i ∈ I^0_{(t,x̄(t))} ].

For the given δ̄ and for any a, b > 0, we introduce the following sets:

C_x̄ := ∪_{t∈[0,T]} [C(t) ∩ B̄_δ̄(x̄(t))], B_ȳ := ∪_{t∈[0,T]} B̄_δ̄(ȳ(t)), U := ∪_{t∈[0,T]} U(t), (3.8)

N̄_{(a,b)}(t) := [C(t) ∩ B̄_a(x̄(t))] × B̄_b(ȳ(t)), for t ∈ [0, T]. (3.9)

(A4) Local assumptions on h(t, x, y, u) := (f, g)(t, x, y, u) at ((x̄, ȳ); δ̄):

(A4.1) For (x, y, u) ∈ C_x̄ × B_ȳ × U, h(·, x, y, u) is Lebesgue-measurable and, for a.e. t ∈ [0, T], h(t, ·, ·, ·) is continuous on N̄_{(δ̄,δ̄)}(t) × U(t). There exist M_h > 0 and L_h ∈ L^2([0, T]; R^+) such that, for a.e. t ∈ [0, T], for all (x, y), (x′, y′) ∈ N̄_{(δ̄,δ̄)}(t) and u ∈ U(t),

‖h(t, x, y, u)‖ ≤ M_h and ‖h(t, x, y, u) − h(t, x′, y′, u)‖ ≤ L_h(t) ‖(x, y) − (x′, y′)‖.

(A4.2) The set h(t, x, y, U(t)) is convex for all (x, y) ∈ N̄_{(δ̄,δ̄)}(t) and a.e. t ∈ [0, T].¹
¹This condition is not needed for Theorem 4.2.11.

3.1 Study of the dynamic (D) under local assumptions

We start by presenting some properties pertaining to the sweeping set C(t) and the sweeping process (D). For the reader's convenience, Table 3.1 at the end of this subsection summarizes all the results presented here. The following lemma provides an equivalent condition to (A3.2), which allows us to obtain the formula for the normal cone to C(t) at points x in C(t) near x̄(t) (Lemma 3.1.3).

Lemma 3.1.1 (Assumption (A3.2)). Let C(·) satisfy (A2) for ρ > 0.
Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) holds at (x̄; δ̄). Then, the validity of assumption (A3.2) at x̄ is equivalent to the existence of 0 < εo < δ̄ and ηo > 0 such that

∥ Σ_{i∈I⁰_{(t,c)}} λi∇xψi(t, c) ∥ > 2ηo, ∀(t, c) ∈ {(τ, x) ∈ Gr(C(·) ∩ B̄_{εo}(x̄(·))) : I⁰_{(τ,x)} ≠ ∅}, (3.10)

where I⁰_{(τ,x)} is defined in (3.7) and (λi)_{i∈I⁰_{(t,c)}} is any sequence of nonnegative numbers satisfying Σ_{i∈I⁰_{(t,c)}} λi = 1.

Proof. It suffices to show that (A3.2) implies (3.10). If not, then there exist sequences tn ∈ [0, T ], cn ∈ C(tn) ∩ B̄_{1/n}(x̄(tn)) with I⁰_{(tn,cn)} ≠ ∅, and (λ^n_i)_{i∈I⁰_{(tn,cn)}} with λ^n_i ≥ 0 for all i ∈ I⁰_{(tn,cn)} and Σ_{i∈I⁰_{(tn,cn)}} λ^n_i = 1, such that

∥ Σ_{i∈I⁰_{(tn,cn)}} λ^n_i ∇xψi(tn, cn) ∥ ≤ 2/n, ∀n ∈ N.

As, up to a subsequence, (tn, cn) → (to, co) := (to, x̄(to)), Lemma .0.1 yields the existence of ∅ ≠ Jo ⊂ {1, . . . , r} and a subsequence of (tn, cn)n, which we do not relabel, such that I⁰_{(tn,cn)} = Jo ⊂ I⁰_{(to,co)} for all n ∈ N. It follows that

∥ Σ_{i∈Jo} λ^n_i ∇xψi(tn, cn) ∥ ≤ 2/n, Σ_{i∈Jo} λ^n_i = 1 (∀n ∈ N), and λ^n_i ≥ 0 (∀i ∈ Jo, ∀n ∈ N). (3.11)

Hence, after going to a subsequence if necessary, it follows that, for all i ∈ Jo, λ^n_i → λ^o_i ≥ 0 with Σ_{i∈Jo} λ^o_i = 1. Upon taking the limit as n → ∞ in (3.11) and defining λ^o_i = 0 for all i ∈ I⁰_{(to,co)} \ Jo, (A3.1) implies Σ_{i∈I⁰_{(to,co)}} λ^o_i ∇xψi(to, co) = 0, which contradicts (A3.2).

Remark 3.1.2. We can prove that (A3.1) and equation (3.10) imply that, for all (t, x) ∈ Gr(C(·) ∩ B̄_{εo}(x̄(·))) such that I⁰_{(t,x)} ≠ ∅, the family of vectors {∇xψi(t, x)}_{i∈I⁰_{(t,x)}} is positively linearly independent.
Indeed, assume there exist (t, x) ∈ Gr(C(·) ∩ B̄_{εo}(x̄(·))) such that I⁰_{(t,x)} ≠ ∅ and (λi)_{i∈I⁰_{(t,x)}} ≥ 0 such that Σ_{i∈I⁰_{(t,x)}} λi∇xψi(t, x) = 0. If there exists i such that λi ≠ 0, then Σ_{i∈I⁰_{(t,x)}} λi ≠ 0, and we have Σ_{i∈I⁰_{(t,x)}} (λi / Σ_{j∈I⁰_{(t,x)}} λj) ∇xψi(t, x) = 0. This contradicts (3.10).

Lemma 3.1.3. Let C(·) satisfy (A2) for ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) and (A3.2) hold at (x̄; δ̄). Let εo be the constant from Lemma 3.1.1. Then, we have

N_{C(t)}(x) = N^P_{C(t)}(x) = N^L_{C(t)}(x), ∀x ∈ C(t),

and, for all (t, x) ∈ Gr(C(·) ∩ B̄_{εo}(x̄(·))),

N_{C(t)}(x) = { Σ_{i∈I⁰_{(t,x)}} λi∇xψi(t, x) : λi ≥ 0 } ≠ {0}, if x ∈ bdry C(t),
N_{C(t)}(x) = {0}, if x ∈ int C(t). (3.12)

Proof. Notice that C(t) is prox-regular. By applying Proposition 2.2.39(i) ([21, Corollary 4.15]), we conclude that the limiting, Clarke, and proximal normal cones are all equal to each other. Now, to prove equation (3.12), we apply Lemma 2.2.11 ([19, Corollary 10.44]) and Remark 3.1.2.

Lemma 3.1.4 (Equivalence). Let C(·) satisfy (A2) for ρ > 0. Consider (x̄, ȳ) ∈ C([0, T ]; Rn × Rl) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) holds at (x̄; δ̄) and (A4) is satisfied by (f, g) at ((x̄, ȳ); δ̄). Let (x, y) ∈ W^{1,1}([0, T ]; R^{n+l}) be a pair such that (x(t), y(t)) ∈ N̄_{(δ̄,δ̄)}(t) ∀t ∈ [0, T ]. The following equivalences hold true.

There exists u ∈ U such that

ẋ(t) ∈ f(t, x(t), y(t), u(t)) − N_{C(t)}(x(t)) a.e. t ∈ [0, T ],
ẏ(t) = g(t, x(t), y(t), u(t)) a.e. t ∈ [0, T ]. (3.13)

(I) ⟺ There exist u ∈ U and nonnegative measurable functions (λ1(·), · · · , λr(·)) such that, for every i ∈ {1, · · · , r}, λi(t) = 0 for t ∈ I⁻_i(x) and

ẋ(t) = f(t, x(t), y(t), u(t)) − Σ_{i=1}^{r} λi(t)∇xψi(t, x(t)) a.e. t ∈ [0, T ], (3.14)
ẏ(t) = g(t, x(t), y(t), u(t)) a.e.
t ∈ [0, T ].

(II) ⟺ There exist nonnegative measurable functions (λ1(·), · · · , λr(·)) such that, for every i ∈ {1, · · · , r}, λi(t) = 0 for t ∈ I⁻_i(x) and

(ẋ(t), ẏ(t)) ∈ h(t, x(t), y(t), U(t)) − ( Σ_{i=1}^{r} λi(t)∇xψi(t, x(t)), 0 ). (3.15)

(III) ⟺ There exist nonnegative measurable functions (λ1(·), · · · , λr(·)) such that, for every i ∈ {1, · · · , r}, λi(t) = 0 for t ∈ I⁻_i(x) and, ∀z ∈ Rn × Rl,

⟨z, (ẋ(t), ẏ(t))⟩ ≤ σ(z, h(t, x(t), y(t), U(t))) − ⟨z, ( Σ_{i=1}^{r} λi(t)∇xψi(t, x(t)), 0 )⟩ a.e. (3.16)

Proof. Equivalences (I) and (II) hold true by applying the Filippov selection theorem, see Theorem 2.3.5 ([66, Theorem 2.3.13]), and using equation (3.12) for (I). Equivalence (III) holds true by applying the support property in (2.2) on the compact and convex set S = h(t, x(t), y(t), U(t)).

An important consequence of Lemma 3.1.1 and Lemma 3.1.3 is manifested in the following result, which establishes the Lipschitz continuity and the uniqueness of the solutions near (x̄, ȳ) for the Cauchy problem of (D) via its equivalent form. We note that, under global assumptions, the existence of a solution for the Cauchy problem of (D) is given in Theorem 3.3.7, which will be established in Section 3.3.2. First, define µ to be

µ := Lψ(1 + Mh). (3.17)

Lemma 3.1.5. Let C(·) satisfy (A2) for ρ > 0. Consider (x̄, ȳ) ∈ C([0, T ]; Rn × Rl) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) and (A3.2) hold at (x̄; δ̄), and (A4.1) is satisfied by (f, g) at ((x̄, ȳ); δ̄). Let u ∈ U and (x0, y0) ∈ N̄_{(εo,δ̄)}(0) be fixed. Then, a pair (x, y) ∈ W^{1,1}([0, T ]; R^{n+l}), such that (x(t), y(t)) ∈ N̄_{(εo,δ̄)}(t) ∀t ∈ [0, T ], is a solution of (D) corresponding to ((x0, y0), u) if and only if there exist measurable functions (λ1, · · · , λr) such that, for all i = 1, · · · , r, λi(t) = 0 for t ∈ I⁻_i(x), and ((x, y), u) together with (λ1, · · · , λr) satisfies

ẋ(t) = f(t, x(t), y(t), u(t)) − Σ_{i∈I⁰_{(t,x(t))}} λi(t)∇xψi(t, x(t)) a.e.
t ∈ [0, T ],
ẏ(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T ],
(x(0), y(0)) = (x0, y0). (3.18)

Furthermore, we have the following bounds:

∥λi∥∞ ≤ ∥Σ_{i=1}^{r} λi∥∞ ≤ µ/(4ηo²), ∀i = 1, · · · , r; ∥ẋ∥∞ ≤ Mh + (µ/(4ηo²))Lψ; ∥ẏ∥∞ ≤ Mh. (3.19)

Consequently, (x, y) is the unique solution of (D) in N̄_{(εo,δ̄)}(·) corresponding to ((x0, y0), u). In particular, if ((x̄, ȳ), ū) solves (D), then (x̄, ȳ) is Lipschitz and is the unique solution of (D) corresponding to ((x̄(0), ȳ(0)), ū).

Proof. The equivalence in the first part of this lemma follows immediately from the Filippov selection theorem and the normal cone formula in (3.12) (see Lemma 3.1.4). Now, we proceed to prove the bounds in (3.19). Since, for all i = 1, · · · , r, ψi(·, x(·)) ∈ W^{1,1} (because ψi(·, ·) is Lipschitz and x(·) is absolutely continuous), (d/dt)ψi(t, x(t)) exists for almost all t ∈ [0, T ]. Using assumption (A3.1) and Lemma 2.4.2 (see [69, equation (3.1)]), we deduce that, ∀i = 1, · · · , r,

(d/dt)ψi(t, x(t)) ⊂ ∂_{(t,x)}ψi(t, x(t)) · (1, ẋ(t)).

But

∂_{(t,x)}ψi(t, x(t)) = conv{ lim ∇_{(t,x)}ψi(tj, xj) : (tj, xj) →_O (t, x(t)) }
= conv{ lim (∇tψi(tj, xj), ∇xψi(tj, xj)) : (tj, xj) →_O (t, x(t)) }
= ∂̂tψi(t, x(t)) × ∇xψi(t, x(t)), (3.20)

where O is a full-measure subset of a neighborhood of (t, x(t)), and, for (t, z) ∈ Gr(C(·) ∩ B̄_δ̄(x̄(·))),

∂̂tψi(t, z) := conv{ lim_{j→∞} ∇tψi(tj, zj) : (tj, zj) → (t, z) }. (3.21)

Hence,

(d/dt)ψi(t, x(t)) ⊂ ∂̂tψi(t, x(t)) + ⟨∇xψi(t, x(t)), ẋ(t)⟩, t ∈ [0, T ] a.e.

Thus, there exist measurable θi(·) ∈ ∂̂tψi(·, x(·)) a.e. such that

(d/dt)ψi(t, x(t)) = θi(t) + ⟨∇xψi(t, x(t)), ẋ(t)⟩ a.e. t ∈ [0, T ], ∀i = 1, · · · , r. (3.22)

Note that, by (A3.1), we have, for t ∈ [0, T ] a.e., for all θi(t) ∈ ∂̂tψi(t, x(t)), and for all i = 1, · · · , r,

|θi(t) + ⟨∇xψi(t, x(t)), f(t, x(t), y(t), u(t))⟩| ≤ Lψ(1 + ∥f(t, x(t), y(t), u(t))∥).
(3.23)

Define in [0, T ] the set of full measure

T := {t ∈ (0, T) : ẋ(t) and (d/dt)ψi(t, x(t)) exist, ∀i = 1, · · · , r}. (3.24)

Let t ∈ I⁻(x) ∩ T. Then I⁰_{(t,x(t))} = ∅, and hence λi(t) = 0 for all i = 1, · · · , r. This implies that ẋ(t) = f(t, x(t), y(t), u(t)), and hence ∥ẋ(t)∥ ≤ Mh.

Let t ∈ I⁰(x) ∩ T with Σ_{i∈I⁰_{(t,x(t))}} λi(t) ≠ 0; otherwise we join the conclusion of the previous case. Since, for all i ∈ I⁰_{(t,x(t))}, we have ψi(t, x(t)) = 0 and x(s) ∈ C(s) ∀s ∈ [0, T ], it follows that (d/dt)ψi(t, x(t)) = 0 for all i ∈ I⁰_{(t,x(t))}. Hence, for the finite sequence (θi)_{i=1}^{r} in (3.22), we have

0 = θi(t) + ⟨∇xψi(t, x(t)), ẋ(t)⟩. (3.25)

Multiplying (3.25) by λi(t), and using the fact that x(·) satisfies the first equation of (3.18), we get that

0 = λi(t)θi(t) + λi(t)⟨∇xψi(t, x(t)), f(t, x(t), y(t), u(t)) − Σ_{j∈I⁰_{(t,x(t))}} λj(t)∇xψj(t, x(t))⟩. (3.26)

Summing (3.26) over all i ∈ I⁰_{(t,x(t))} and using (3.23), we deduce that

∥Σ_{i∈I⁰_{(t,x(t))}} λi(t)∇xψi(t, x(t))∥² = Σ_{i∈I⁰_{(t,x(t))}} λi(t)(θi(t) + ⟨∇xψi(t, x(t)), f(t, x(t), y(t), u(t))⟩)
≤ Lψ(1 + ∥f(t, x(t), y(t), u(t))∥) Σ_{i∈I⁰_{(t,x(t))}} λi(t).

Hence, utilizing (3.10) on the term on the left-hand side, and then dividing the last inequality by Σ_{i∈I⁰_{(t,x(t))}} λi(t) ≠ 0, we deduce from (3.17) that

Σ_{i∈I⁰_{(t,x(t))}} λi(t) ≤ (Lψ/(4ηo²))(1 + ∥f(t, x(t), y(t), u(t))∥) ≤ µ/(4ηo²), (3.27)

where the last inequality uses (A4.1). Therefore, ∥Σ_{i=1}^{r} λi∥∞ ≤ µ/(4ηo²). Finally, employing (A4.1) for f and g, along with (3.18), the bounds on ∥ẋ∥∞ and ∥ẏ∥∞ follow.

For the uniqueness, let X := (x, y) and X̃ := (x̃, ỹ) in N̄_{(εo,δ̄)}(·) be two solutions of (D) corresponding to ((x0, y0), u), and let (λi)_{i=1}^{r}, (λ̃i)_{i=1}^{r} be their corresponding multipliers satisfying (3.18).
Using the hypomonotonicity of the normal cone to the ρ-prox-regular sets C(t) (see Proposition 2.2.39(iii)), the Lh-Lipschitz property of h(t, ·, ·, u(t)), and the bounds in (3.19) for the multipliers, we deduce that

(1/2)(d/dt)∥X(t) − X̃(t)∥² = ⟨Ẋ(t) − Ẋ̃(t), X(t) − X̃(t)⟩ ≤ (Lh(t) + (µ/(4ρηo²))Lψ)∥X(t) − X̃(t)∥² =: κ(t)∥X(t) − X̃(t)∥². (3.28)

Hence, using Gronwall's lemma (see Lemma 2.4.1), we deduce that

∥X(t) − X̃(t)∥² ≤ e^{2∫₀ᵗ κ(s)ds} ∥X(0) − X̃(0)∥² = 0.

Then, X(t) = X̃(t) ∀t ∈ [0, T ], and the uniqueness is proved.

Now, we arrive at the table promised earlier, summarizing all the results from this subsection.

Table 3.1 Summary of results from Subsection 3.1.

Lemma 3.1.1: We provide an equivalent condition to (A3.2) that allows us to obtain the formula for the normal cone to C(t) at points x in C(t) near x̄(t).
Remark 3.1.2: We prove that, for all (t, x) ∈ Gr(C(·) ∩ B̄_{εo}(x̄(·))) such that I⁰_{(t,x)} ≠ ∅, the family of vectors {∇xψi(t, x)}_{i∈I⁰_{(t,x)}} is positively linearly independent.
Lemma 3.1.3: We use Lemma 3.1.1 to obtain the formula for the normal cone to C(t) at points x in C(t) near x̄(t).
Lemma 3.1.4: We prove an equivalence between the system (D) and three other systems of equations.
Lemma 3.1.5: We use Lemma 3.1.1 and Lemma 3.1.3 to establish the Lipschitz continuity and the uniqueness of the solutions near (x̄, ȳ) for the Cauchy problem of (D) via its equivalent form.

3.2 Development and study of a new truncated dynamic (D̄) under local assumptions

3.2.1 Preliminary results

To avoid imposing the boundedness of Gr C(·) and a global constraint qualification on the sweeping sets C(t) of (D), we shall truncate C(t) by a ball around x̄(t) of a specific radius ε̄ (that will be determined in Remark 3.2.2), so that the uniform prox-regularity of C(t) ∩ B̄_ε̄(x̄(t)) is ensured, its constraint qualification is satisfied, and its normal cone explicit formula is valid (see Remark 3.2.2 and Lemmas 3.2.4-3.2.6).
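Throughout, the proximal normal cone is characterized via projections: v ∈ N^P_C(x) exactly when x is the projection onto C of x + sv for some s > 0. A minimal numerical check, on hypothetical data (C the closed unit disk, generated by ψ(x) = (∥x∥² − 1)/2, so ∇ψ(x) = x), illustrates the normal-cone formula of the type (3.12):

```python
import numpy as np

# Hypothetical example: C = {x : psi(x) <= 0} with psi(x) = (|x|^2 - 1)/2, the
# closed unit disk; grad psi(x) = x. A formula of the type (3.12) then says
# that, at a boundary point x, N_C(x) = {lam * grad psi(x) : lam >= 0}.
def proj_disk(z):
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

x = np.array([0.6, 0.8])                 # boundary point: |x| = 1
v_normal = 0.3 * x                       # candidate normal: lam * grad psi(x)
v_tangent = np.array([-0.8, 0.6])        # orthogonal to grad psi(x), not a normal

# v in N^P_C(x)  <=>  proj_C(x + s v) = x for some s > 0
assert np.allclose(proj_disk(x + 0.1 * v_normal), x)
assert not np.allclose(proj_disk(x + 0.1 * v_tangent), x)
```

The same projection test applies verbatim to the truncated sets C(t) ∩ B̄_ε̄(x̄(t)) once their prox-regularity is established.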
After establishing certain properties of the truncated sweeping set C(t) ∩ B̄_ε̄(x̄(t)), we then turn our focus to the associated truncated dynamic (D̄). Our goal is to derive results analogous to those presented in Section 3.1, but now in the context of the truncated sweeping set C(t) ∩ B̄_ε̄(x̄(t)) and the truncated dynamic (D̄). See Table 3.2 for a summary of the results.

A key element in proving the uniform prox-regularity of the truncated sweeping set C(t) ∩ B̄_ε̄(x̄(t)) is the following lemma, which uses Lemma 3.1.1 to prove the closed-graph property of N_{C(·)}(·) on the domain where (3.12) is valid.

Lemma 3.2.1. Let C(·) satisfy (A2) for ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) and (A3.2) hold at (x̄; δ̄). Then, for εo obtained in Lemma 3.1.1, the set-valued map (t, y) ⇝ N_{C(t)}(y) has closed graph on the set Gr(C(·) ∩ B̄_{εo}(x̄(·))).

Proof. Let vn ∈ N_{C(tn)}(yn) be such that vn → vo and (tn, yn) → (to, yo) in Gr(C(·) ∩ B̄_{εo}(x̄(·))). We shall prove that vo ∈ N_{C(to)}(yo). If vo = 0, then obviously vo ∈ N_{C(to)}(yo). Now, let vo ≠ 0; then, for n large enough, vn ≠ 0, and hence equation (3.12) implies that yn ∈ bdry C(tn) and vn = Σ_{i∈I⁰_{(tn,yn)}} λ^n_i ∇xψi(tn, yn) for some (λ^n_i)_i ≥ 0. By Lemma .0.1, we deduce the existence of ∅ ≠ Jo ⊂ {1, . . . , r} and a subsequence of (tn, yn)n, which we do not relabel, such that I⁰_{(tn,yn)} = Jo ⊂ I⁰_{(to,yo)} for all n ∈ N. Hence, for n large enough, vn = Σ_{i∈Jo} λ^n_i ∇xψi(tn, yn) and Σ_{i∈Jo} λ^n_i > 0 (since vn ≠ 0). Define, for each i ∈ Jo, the bounded sequence (β^n_i)_n, where β^n_i := λ^n_i / Σ_{j∈Jo} λ^n_j ≥ 0. Since also Σ_{i∈Jo} β^n_i = 1 for all n, then, for each i ∈ Jo, along a subsequence (we do not relabel), β^n_i → βi ≥ 0 with Σ_{i∈Jo} βi = 1. Using (A3.1) and Lemma 3.1.1, we have

0 ≠ Σ_{i∈Jo} β^n_i ∇xψi(tn, yn) → Σ_{i∈Jo} βi∇xψi(to, yo) ≠ 0.

By writing

vn = (Σ_{j∈Jo} λ^n_j)(Σ_{i∈Jo} β^n_i ∇xψi(tn, yn)),

and using the fact that vn → vo ≠ 0, we deduce that Σ_{j∈Jo} λ^n_j converges to a limit βo > 0. Hence, vo = Σ_{i∈Jo} βoβi∇xψi(to, yo). Now, define

αi := βoβi if i ∈ Jo; αi := 0 if i ∈ I⁰_{(to,yo)} \ Jo.

Then, vo = Σ_{i∈I⁰_{(to,yo)}} αi∇xψi(to, yo) ∈ N_{C(to)}(yo).

Combining Lemma 3.2.1 with Lemma 2.2.40 immediately produces a range of ε̄ > 0 ensuring the uniform prox-regularity of the truncated sets C(t) ∩ B̄_ε̄(x̄(t)).

Remark 3.2.2. Let C(·) satisfy (A2) for ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) and (A3.2) hold at (x̄; δ̄). Then, for ε̄ ∈ (0, ρ) ∩ (0, εo], where εo is given in Lemma 3.1.1, there exists ρ_ε̄ > 0, obtained from Lemma 2.2.40, such that, for all t ∈ [0, T ], C(t) ∩ B̄_ε̄(x̄(t)) is ρ_ε̄-prox-regular.

Introducing the new truncated sweeping process (D̄)

Now, our attention shifts from the dynamic (D) to working on the dynamic (D̄) obtained from (D) by replacing the sweeping set C(t) by the truncated sweeping set C(t) ∩ B̄_ε̄(x̄(t)), where ε̄ ∈ (0, ρ) ∩ (0, εo], and by adding −N_{B̄_δ̄(ȳ(t))} to the right-hand side of the differential equation, which becomes a differential inclusion as a result. Denote by (D̄) the aforementioned truncated system obtained from (D) by localizing C(·) around x̄ and Rl around ȳ, that is,

(D̄)  ẋ(t) ∈ f(t, x(t), y(t), u(t)) − N_{C(t)∩B̄_ε̄(x̄(t))}(x(t)), a.e. t ∈ [0, T ],
     ẏ(t) ∈ g(t, x(t), y(t), u(t)) − N_{B̄_δ̄(ȳ(t))}(y(t)), a.e. t ∈ [0, T ]. (3.29)

Notice that the truncated sweeping set for x, C(t) ∩ B̄_ε̄(x̄(t)), is the sub-level set of ψ1(t, ·), · · · , ψr(t, ·), and ψr+1(t, ·), where ψr+1 is given by

ψr+1(t, x) = ψr+1(t, x; x̄, ε̄) := (1/2)[∥x − x̄(t)∥² − ε̄²].
(3.30)

Therefore, for Cr+1(t) := B̄_ε̄(x̄(t)) = {x ∈ Rn : ψr+1(t, x) ≤ 0},

C(t) ∩ B̄_ε̄(x̄(t)) = C(t) ∩ Cr+1(t) = ⋂_{i=1}^{r+1} {x ∈ Rn : ψi(t, x) ≤ 0},

and hence it is always generated by at least two functions. On the other hand, the truncated sweeping set for y, B̄_δ̄(ȳ(t)), is generated by a single function φ : [0, T ] × Rl −→ R, where

φ(t, y) = φ(t, y; ȳ, δ̄) := (1/2)[∥y − ȳ(t)∥² − δ̄²], (3.31)

i.e.,

B̄_δ̄(ȳ(t)) = {y ∈ Rl : φ(t, y) ≤ 0}. (3.32)

The following remark shows the relation between pairs that are admissible for (D) and those admissible for (D̄).

Remark 3.2.3. We have:

• Any admissible pair ((x, y), u) for (D) such that (x(t), y(t)) ∈ N̄_{(ε̄,δ̄)}(t) for all t ∈ [0, T ] is also admissible for (D̄). This is due to Lemma 2.2.9.

• On the other hand, any admissible pair ((x, y), u) for (D̄) such that (x(t), y(t)) ∈ N̄_{(δ1,δ2)}(t) with δ1 < ε̄ and δ2 < δ̄ is also admissible for (D). This is due to the fact that if, ∀t ∈ [0, T ], (x(t), y(t)) ∈ B_ε̄(x̄(t)) × B_δ̄(ȳ(t)), then, using the local property of the proximal normal cone, we have N^P_{C(t)∩B̄_ε̄(x̄(t))}(x(t)) = N^P_{C(t)}(x(t)) and N^P_{B̄_δ̄(ȳ(t))}(y(t)) = {0}.

• In particular, ((x̄, ȳ), ū) solves (D) if and only if it solves (D̄).

For x(·) ∈ C([0, T ]; Rn) such that x(t) ∈ C(t) ∩ B̄_ε̄(x̄(t)) ∀t ∈ [0, T ], and (τ, z) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))), we define the following sets, obtained by adding to those in (3.4)-(3.7) the extra constraint produced by ψr+1:

I⁻_{r+1}(x) := {t ∈ [0, T ] : x(t) ∈ B_ε̄(x̄(t))} and I⁰_{r+1}(x) := [0, T ] \ I⁻_{r+1}(x),

Ī⁻(x) := ⋂_{i=1}^{r+1} I⁻_i(x) = {t ∈ [0, T ] : x(t) ∈ int C(t) ∩ B_ε̄(x̄(t))} = I⁻(x) ∩ {t ∈ [0, T ] : x(t) ∈ B_ε̄(x̄(t))},

Ī⁰(x) := [0, T ] \ Ī⁻(x) = I⁰(x) ∪ {t ∈ [0, T ] : ∥x(t) − x̄(t)∥ = ε̄} = {t ∈ [0, T ] : Ī⁰_{(t,x(t))} ≠ ∅},

where Ī⁰_{(τ,z)} := {i ∈ {1, . . . , r, r+1} : ψi(τ, z) = 0}. (3.33)

Since x̄(t) ∈ B_ε̄(x̄(t)), then ψr+1(t, x̄(t)) < 0 and hence Ī⁰(x̄) = I⁰(x̄) and, for t ∈ Ī⁰(x̄), Ī⁰_{(t,x̄(t))} = I⁰_{(t,x̄(t))}.
The following lemma provides a second condition, (3.34), equivalent to (A3.2), which, unlike (3.10), validates the formula for the normal cone to the uniformly prox-regular truncated sets C(t) ∩ B̄_ε̄(x̄(t)) obtained in Remark 3.2.2 (see Lemma 3.2.6 stated below). Note that, since ψr+1, given by (3.30), is a function of ε̄, this lemma is of a different nature than Lemma 3.1.1. Observe that, for any given ε̄ > 0, ψr+1(t, x) and ∇xψr+1(t, x) := x − x̄(t) exist and are continuous everywhere.

Lemma 3.2.4 (Assumption (A3.2)). Let C(·) satisfy (A2) for ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) holds at (x̄; δ̄). Then, (A3.2) is satisfied at x̄ if and only if, for ε̄ ∈ (0, ρ) ∩ (0, εo] and its corresponding ψr+1 given by (3.30), there exists η̄ ∈ (0, ηo) (without loss of generality η̄ ≤ ε̄/2) such that

∥ Σ_{i∈Ī⁰_{(t,c)}} λi∇xψi(t, c) ∥ > 2η̄, ∀(t, c) ∈ {(τ, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))) : Ī⁰_{(τ,x)} ≠ ∅}, (3.34)

where (λi)_{i∈Ī⁰_{(t,c)}} is any sequence of nonnegative numbers satisfying Σ_{i∈Ī⁰_{(t,c)}} λi = 1, and Ī⁰_{(τ,x)} is given by (3.33).

Proof. We only need to show that (A3.2) yields (3.34). For this, assume (A3.2) is valid and let ε̄ ∈ (0, ρ) ∩ (0, εo]. From Lemma 3.1.1, it follows that, for any η̄ ∈ (0, ηo), (3.34) holds for all (t, c) ∈ {(τ, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))) : Ī⁰_{(τ,x)} ≠ ∅} such that (r+1) ∉ Ī⁰_{(t,c)}. It remains to prove that (3.34) is valid for all (t, c) such that (r+1) ∈ Ī⁰_{(t,c)}, that is, when Ī⁰_{(t,c)} = I⁰_{(t,c)} ∪ {r+1} and λr+1 ≠ 0. Arguing by contradiction, there exist sequences tn ∈ [0, T ], cn ∈ C(tn) with ∥cn − x̄(tn)∥ = ε̄, and (λ^n_i)_{i∈Ī⁰_{(tn,cn)}} with λ^n_i ≥ 0 for all i ∈ I⁰_{(tn,cn)}, λ^n_{r+1} > 0, and

(Σ_{i∈I⁰_{(tn,cn)}} λ^n_i) + λ^n_{r+1} = 1, (3.35)

such that

∥ Σ_{i∈I⁰_{(tn,cn)}} λ^n_i ∇xψi(tn, cn) + λ^n_{r+1}(cn − x̄(tn)) ∥ ≤ 2/n, ∀n ∈ N.

Using the compactness of [0, T ], (A3.1), and the continuity of x̄, it follows that, up to subsequences, tn → to ∈ [0, T ] and cn → co ∈ C(to) with ∥co − x̄(to)∥ = ε̄. Note that I⁰_{(tn,cn)} ≠ ∅, since otherwise (3.35) yields λ^n_{r+1} = 1, and in this case the above inequality becomes ∥cn − x̄(tn)∥ ≤ 2/n, which is invalid for n large. Thus, by Lemma .0.1, for some ∅ ≠ Jo ⊂ {1, . . . , r}, I⁰_{(tn,cn)} = Jo ⊂ I⁰_{(to,co)} for n large. This implies that, for n large enough,

∥ Σ_{i∈Jo} λ^n_i ∇xψi(tn, cn) + λ^n_{r+1}(cn − x̄(tn)) ∥ ≤ 2/n, Σ_{i∈Jo} λ^n_i + λ^n_{r+1} = 1, λ^n_{r+1} > 0, and λ^n_i ≥ 0 ∀i ∈ Jo. (3.36)

Hence, up to a subsequence, λ^n_i → λ^o_i ≥ 0 for all i ∈ Jo, and λ^n_{r+1} → λ^o_{r+1} ≥ 0. Upon taking the limit as n → ∞ in (3.36), (A3.1) yields that

Σ_{i∈Jo} λ^o_i ∇xψi(to, co) + λ^o_{r+1}(co − x̄(to)) = 0, Σ_{i∈Jo} λ^o_i + λ^o_{r+1} = 1, λ^o_i ≥ 0 ∀i ∈ Jo ∪ {r+1}. (3.37)

From (3.37) and Lemma 3.1.1, we get that λ^o_{r+1} > 0. As ∥co − x̄(to)∥ = ε̄, (3.37) translates to saying

0 ≠ v := Σ_{i∈Jo} λ^o_i ∇xψi(to, co) = −λ^o_{r+1}(co − x̄(to)),

and hence, per (3.12), 0 ≠ v ∈ N^P_{C(to)}(co) ∩ −N^P_{B̄_ε̄(x̄(to))}(co). As ε̄ ∈ (0, ρ), this inclusion contradicts Lemma 2.2.40.

Remark 3.2.5. We can prove that (A3.1) and equation (3.34) imply that, for all (t, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))) such that Ī⁰_{(t,x)} ≠ ∅, the family of vectors {∇xψi(t, x)}_{i∈Ī⁰_{(t,x)}} is positively linearly independent.

Important consequences of Lemma 3.2.4 are the following explicit formulae for the normal cone to the truncated sets C(t) ∩ B̄_ε̄(x̄(t)) and for their prox-regularity constant, which shall replace ρ_ε̄.
Assume, without loss of generality, that Lψ ≥ 4η̄/ρo, where ρo is the constant from (A3.1).

Lemma 3.2.6. Let C(·) satisfy (A2) for some ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) and (A3.2) hold at (x̄; δ̄). Let ε̄ ∈ (0, ρ) ∩ (0, εo] with its corresponding ψr+1, given by (3.30), and η̄ from Lemma 3.2.4. Let ρ_ε̄ be the uniform prox-regularity constant of C(t) ∩ B̄_ε̄(x̄(t)) obtained from Remark 3.2.2. For all (t, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))),

N_{C(t)∩B̄_ε̄(x̄(t))}(x) = N^P_{C(t)∩B̄_ε̄(x̄(t))}(x) = N^L_{C(t)∩B̄_ε̄(x̄(t))}(x),

and

N_{C(t)∩B̄_ε̄(x̄(t))}(x) = { Σ_{i∈Ī⁰_{(t,x)}} λi∇xψi(t, x) : λi ≥ 0 } ≠ {0}, if x ∈ bdry(C(t) ∩ B̄_ε̄(x̄(t))),
N_{C(t)∩B̄_ε̄(x̄(t))}(x) = {0}, if x ∈ int(C(t) ∩ B̄_ε̄(x̄(t))). (3.38)

Furthermore, C(t) ∩ B̄_ε̄(x̄(t)) is uniformly (2η̄/Lψ)-prox-regular, C(t) ∩ B̄_ε̄(x̄(t)) is epi-Lipschitzian at every x ∈ C(t) ∩ B̄_ε̄(x̄(t)), and

cl(int(C(t) ∩ B̄_ε̄(x̄(t)))) = C(t) ∩ B̄_ε̄(x̄(t)). (3.39)

Proof. Since C(t) ∩ B̄_ε̄(x̄(t)) is prox-regular, we apply Proposition 2.2.39(i) ([21, Corollary 4.15]) and conclude that the limiting, Clarke, and proximal normal cones are all equal to each other. To prove equation (3.38), we apply Lemma 2.2.11 ([19, Corollary 10.44]) and Remark 3.2.5. Now, we prove that C(t) ∩ B̄_ε̄(x̄(t)) is uniformly (2η̄/Lψ)-prox-regular using Theorem 2.2.41 (see [2, Theorem 9.1]). Indeed, in Theorem 2.2.41, take m := r+1, gi := ψi, S(t) := C(t) ∩ B̄_ε̄(x̄(t)), and γ := Lψ. Notice that ∇xψr+1(t, x) = x − x̄(t) and condition (A3.1) is satisfied, hence conditions (i)-(ii) of Theorem 2.2.41 are satisfied for ρ := ρo/2. Finally, Lemma 3.2.4 implies that the last condition of Theorem 2.2.41 is satisfied by translating [58, Lemma 6.1] to our setting. As a result, for all t ∈ [0, T ], C(t) ∩ B̄_ε̄(x̄(t)) is prox-regular with constant min{ρo/2, 2η̄/Lψ} = 2η̄/Lψ (since Lψ ≥ 4η̄/ρo). To prove that C(t) ∩ B̄_ε̄(x̄(t)) is epi-Lipschitzian at every x ∈ C(t) ∩ B̄_ε̄(x̄(t)) and that (3.39) is satisfied, we use Lemma 2.2.46 and equations (3.34)-(3.38).

We now prove an equivalence between the system (D̄) and three other systems.

Lemma 3.2.7 (Equivalence). Let C(·) satisfy (A2) for ρ > 0. Consider (x̄, ȳ) ∈ C([0, T ]; Rn × Rl) with x̄(t) ∈ C(t) for all t ∈ [0, T ]. Let δ̄ > 0 be such that (A3.1) and (A3.2) hold at (x̄; δ̄) and (A4) is satisfied by (f, g) at ((x̄, ȳ); δ̄), and let ε̄ ∈ (0, ρ) ∩ (0, εo] with its corresponding ψr+1 given by (3.30). Let (x, y) ∈ W^{1,1}([0, T ]; R^{n+l}) be a pair such that (x(t), y(t)) ∈ N̄_{(ε̄,δ̄)}(t) ∀t ∈ [0, T ]. The following equivalences hold true.

There exists u ∈ U such that

ẋ(t) ∈ f(t, x(t), y(t), u(t)) − N_{C(t)∩B̄_ε̄(x̄(t))}(x(t)), a.e. t ∈ [0, T ],
ẏ(t) ∈ g(t, x(t), y(t), u(t)) − N_{B̄_δ̄(ȳ(t))}(y(t)), a.e. t ∈ [0, T ]. (3.40)

(I) ⟺ There exist u ∈ U and measurable functions (λ1, · · · , λr+1) and ζ such that, ∀i = 1, · · · , r+1, λi(t) = 0 ∀t ∈ I⁻_i(x), ζ(t)φ(t, y(t)) = 0 ∀t ∈ [0, T ], and ((x, y), u), (λi)_{i=1}^{r+1}, and ζ satisfy

ẋ(t) = f(t, x(t), y(t), u(t)) − Σ_{i=1}^{r+1} λi(t)∇xψi(t, x(t)), a.e. t ∈ [0, T ],
ẏ(t) = g(t, x(t), y(t), u(t)) − ζ(t)∇yφ(t, y(t)), a.e. t ∈ [0, T ]. (3.41)

(II) ⟺ There exist measurable functions (λ1, · · · , λr+1) and ζ such that, ∀i = 1, · · · , r+1, λi(t) = 0 ∀t ∈ I⁻_i(x), ζ(t)φ(t, y(t)) = 0 ∀t ∈ [0, T ], and

(ẋ(t), ẏ(t)) ∈ h(t, x(t), y(t), U(t)) − ( Σ_{i=1}^{r+1} λi(t)∇xψi(t, x(t)), ζ(t)∇yφ(t, y(t)) ). (3.42)

(III) ⟺ There exist measurable functions (λ1, · · · , λr+1) and ζ such that, ∀i = 1, · · · , r+1, λi(t) = 0 ∀t ∈ I⁻_i(x), ζ(t)φ(t, y(t)) = 0 ∀t ∈ [0, T ], and, ∀z ∈ Rn × Rl,

⟨z, (ẋ(t), ẏ(t))⟩ ≤ σ(z, h(t, x(t), y(t), U(t))) − ⟨z, ( Σ_{i=1}^{r+1} λi(t)∇xψi(t, x(t)), ζ(t)∇yφ(t, y(t)) )⟩ a.e.
(3.43)

Parallel to Lemma 3.1.5, and based on Lemma 3.2.4 and Lemma 3.2.6, we shall obtain here the Lipschitz continuity and the uniqueness of the solutions of the Cauchy problem corresponding to the truncated system (D̄), defined in (3.29). We note that the existence of a solution for this more general Cauchy problem is obtained in Corollary 3.2.16.

Lemma 3.2.8. Let C(·) satisfy (A2) for ρ > 0. Consider (x̄, ȳ) ∈ C([0, T ]; Rn × Rl) with x̄(t) ∈ C(t) for all t ∈ [0, T ] and (x̄(·), ȳ(·)) L(x̄,ȳ)-Lipschitz on [0, T ] for some constant L(x̄,ȳ) ≥ 1. Let δ̄ > 0 be such that (A3.1) and (A3.2) hold at (x̄; δ̄) and (A4.1) is satisfied by (f, g) at ((x̄, ȳ); δ̄), and let ε̄ ∈ (0, ρ) ∩ (0, εo] with its corresponding ψr+1 given by (3.30). Fix u ∈ U as well as (x0, y0) ∈ N̄_{(εo,δ̄)}(0). Then, a pair (x, y) ∈ W^{1,1}([0, T ]; R^{n+l}), such that (x(t), y(t)) ∈ N̄_{(ε̄,δ̄)}(t) ∀t ∈ [0, T ], solves the system (D̄) associated with ((x0, y0), u) if and only if there exist measurable functions (λ1, · · · , λr+1) and ζ such that, ∀i = 1, · · · , r+1, λi(t) = 0 ∀t ∈ I⁻_i(x), ζ(t)φ(t, y(t)) = 0 ∀t ∈ [0, T ], and ((x, y), u), (λi)_{i=1}^{r+1}, and ζ satisfy

ẋ(t) = f(t, x(t), y(t), u(t)) − Σ_{i∈Ī⁰_{(t,x(t))}} λi(t)∇xψi(t, x(t)) a.e. t ∈ [0, T ],
ẏ(t) = g(t, x(t), y(t), u(t)) − ζ(t)∇yφ(t, y(t)), a.e. t ∈ [0, T ],
(x(0), y(0)) = (x0, y0). (3.44)

Furthermore, we have the following bounds:

max{∥Σ_{i=1}^{r+1} λi∥∞, ∥ζ∥∞} ≤ µ̄/(4η̄²), max{∥ẋ∥∞, ∥ẏ∥∞} ≤ Mh + (µ̄/(4η̄²))L̄, (3.45)

max{∥ẋ(t) − f(t, x(t), y(t), u(t))∥, ∥ẏ(t) − g(t, x(t), y(t), u(t))∥} ≤ (µ̄/(4η̄²))L̄, t ∈ [0, T ] a.e., (3.46)

where

L̄ := max{Lψ, δ̄L(x̄,ȳ)} ≥ δ̄, µ̄ := L̄(1 + Mh) ≥ µ. (3.47)

Consequently, (x, y) is the unique solution of (3.29) corresponding to ((x0, y0), u).

Proof. The equivalence follows from the Filippov selection theorem, the normal cone formula in (3.38), and the fact that N_{B̄_δ̄(ȳ(t))}(y) equals {0} if φ(t, y) < 0 and equals {λ(y − ȳ(t)) : λ ≥ 0} if φ(t, y) = 0 (see Lemma 3.2.7).

For the bounds pertaining to ∥Σ_{i=1}^{r+1} λi∥∞ and ∥ẋ∥∞, we follow the same steps as in the proof of Lemma 3.1.5; the main difference here is that we add an extra constraint to C(t), namely ψr+1(t, x) ≤ 0. For this reason, it suffices to show that (3.22) and (3.23), where Lψ is enlarged to L̄, are also valid for i = r+1, and that the set T can be modified to take into account the addition of ψr+1. Once these goals are achieved, the proof follows from that of Lemma 3.1.5, where Ī⁰_{(t,x(t))}, Lemma 3.2.4, η̄, and µ̄ are used instead of I⁰_{(t,x(t))}, Lemma 3.1.1, ηo, and µ, respectively.

Note that, by (3.30), ∇xψr+1(t, z) = z − x̄(t) exists for all (t, z) ∈ [0, T ] × Rn. Furthermore, as x̄ is L(x̄,ȳ)-Lipschitz, we have, on Gr(C(·) ∩ B̄_ε̄(x̄(·))), that ψr+1(·, ·) is ε̄L(x̄,ȳ)-Lipschitz, ∇xψr+1(·, ·) is bounded by ε̄ ≤ L̄, and ∂̂tψr+1(·, ·), defined via (3.21) for i = r+1, satisfies

∂̂tψr+1(t, z) = ⟨z − x̄(t), −∂x̄(t)⟩ = ∂tψr+1(t, z), ∀(t, z) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))), (3.48)

and hence, ∀θr+1 ∈ ∂̂tψr+1(t, z), |θr+1| ≤ ε̄L(x̄,ȳ) ≤ L̄. Thus, for t ∈ [0, T ] a.e. and for all θr+1(t) ∈ ∂̂tψr+1(t, x(t)), we have

|θr+1(t) + ⟨∇xψr+1(t, x(t)), f(t, x(t), y(t), u(t))⟩| ≤ L̄(1 + ∥f(t, x(t), y(t), u(t))∥). (3.49)

Therefore, (3.23) and (3.49) yield that (3.23) holds up to i = r+1, that is, for t ∈ [0, T ] a.e. and for all θi(t) ∈ ∂̂tψi(t, x(t)), we have, for i = 1, · · · , r+1,

|θi(t) + ⟨∇xψi(t, x(t)), f(t, x(t), y(t), u(t))⟩| ≤ L̄(1 + ∥f(t, x(t), y(t), u(t))∥). (3.50)

On the other hand, from (3.30), (3.48), and the fact that ˙x̄(t) ∈ ∂x̄(t) a.e., we have

(d/dt)ψr+1(t, x(t)) = ⟨x(t) − x̄(t), −˙x̄(t) + ẋ(t)⟩, t ∈ [0, T ] a.e., (3.51)
= θr+1(t) + ⟨∇xψr+1(t, x(t)), ẋ(t)⟩, t ∈ [0, T ] a.e., (3.52)

where θr+1(t) := ⟨x(t) − x̄(t), −˙x̄(t)⟩ ∈ ∂̂tψr+1(t, x(t)) a.e.
Therefore, (3.22) holds up to i = r+1, that is, for each i there is a measurable θi(·) ∈ ∂̂tψi(·, x(·)) a.e. such that

(d/dt)ψi(t, x(t)) = θi(t) + ⟨∇xψi(t, x(t)), ẋ(t)⟩, a.e., ∀i = 1, · · · , r+1. (3.53)

Instead of the set T given in (3.24) in the proof of Lemma 3.1.5, we use the following modified set T̄, which involves ˙x̄ and on which (d/dt)ψr+1(·, x(·)) readily exists:

T̄ := {t ∈ (0, T) : ẋ(t), ˙x̄(t), and (d/dt)ψi(t, x(t)) exist, ∀i = 1, · · · , r}.

Therefore, similarly to (3.27), we obtain

Σ_{i∈Ī⁰_{(t,x(t))}} λi(t) ≤ (L̄/(4η̄²))(1 + ∥f(t, x(t), y(t), u(t))∥) ≤ µ̄/(4η̄²), (3.54)

where the last inequality uses (A4.1), implying, via the first equation of (3.44) and (A4.1), the required bound in (3.45) for ∥ẋ∥∞ and the first bound in (3.46).

For the bounds on ζ and ẏ in (3.45), we use the full-measure set Ā := {t ∈ (0, T) : ˙ȳ(t) and ẏ(t) exist}. If t ∈ Ā and φ(t, y(t)) < 0, then ζ(t) = 0 and the bound on ẏ follows using (A4.1). If t ∈ Ā and φ(t, y(t)) = 0, then ∥y(t) − ȳ(t)∥ = δ̄ and, since φ(·, y(·)) ≤ 0, (d/dt)φ(t, y(t)) = 0. Hence, as (3.31) implies that

(d/dt)φ(t, y(t)) = ⟨y(t) − ȳ(t), −˙ȳ(t) + ẏ(t)⟩, t ∈ [0, T ] a.e., (3.55)

then, using η̄ ≤ ε̄/2 < δ̄/2 (by Lemma 3.2.4), the second equation of (3.44), L(x̄,ȳ) ≥ 1, and δ̄L(x̄,ȳ) ≤ L̄ (by (3.47)), we get that, for t ∈ [0, T ] a.e.,

4η̄²ζ(t) ≤ δ̄²ζ(t) = ⟨y(t) − ȳ(t), g(t, x(t), y(t), u(t)) − ˙ȳ(t)⟩ ≤ L̄(1 + ∥g(t, x(t), y(t), u(t))∥). (3.56)

Therefore, by (A4.1), we have ∥ζ∥∞ ≤ µ̄/(4η̄²), which, when combined with the second equation of (3.44), yields the bound on ∥ẏ∥∞ in (3.45) and the second bound in (3.46).

The uniqueness proof for (x, y) is similar to that in Lemma 3.1.5, where the system (D) is replaced by (D̄), the ρ-prox-regularity of C(t) is replaced by the (2η̄/Lψ)-prox-regularity of C(t) ∩ B̄_ε̄(x̄(t)) obtained in Lemma 3.2.6, and (3.18)-(3.19), µ, ηo, and Lψ are replaced by (3.44)-(3.45), µ̄, η̄, and L̄, respectively. The ∞-prox-regularity of B̄_δ̄(ȳ(t)) keeps the inequality in (3.28) valid.

The following table summarizes the results of this subsection.

Table 3.2 Summary of results from Subsection 3.2.1.

Lemma 3.2.1: We use Lemma 3.1.1 to prove the closed-graph property of N_{C(·)}(·) on the domain where (3.12) is valid.
Remark 3.2.2: We combine Lemma 3.2.1 with Lemma 2.2.40 to produce a range for ε̄ > 0 ensuring the uniform prox-regularity of the truncated sets C(t) ∩ B̄_ε̄(x̄(t)).
Remark 3.2.3: We show the relation between pairs that are admissible for (D) and those admissible for (D̄).
Lemma 3.2.4: We provide a second condition equivalent to (A3.2) which validates the formula for the normal cone to the uniformly prox-regular truncated sets C(t) ∩ B̄_ε̄(x̄(t)).
Remark 3.2.5: We prove that, for (t, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))) such that Ī⁰_{(t,x)} ≠ ∅, the family of vectors {∇xψi(t, x)}_{i∈Ī⁰_{(t,x)}} is positively linearly independent.
Lemma 3.2.6: We use Lemma 3.2.4 to derive explicit formulae for the normal cone to the truncated sets C(t) ∩ B̄_ε̄(x̄(t)) and for their prox-regularity constant.
Lemma 3.2.7: We prove an equivalence between the system (D̄) and three other systems of equations.
Lemma 3.2.8: We use Lemma 3.2.4 and Lemma 3.2.6 to obtain the Lipschitz continuity and the uniqueness of the solutions of the Cauchy problem corresponding to the truncated system (D̄), defined in (3.29). This is parallel to Lemma 3.1.5.

3.2.2 Exponential penalty approximation for the system (D̄)

This section aims to establish the relationship between (D̄) and its approximating standard control system (D̄^{γk}), as well as the existence and uniqueness of Lipschitz solutions to the Cauchy problem associated with (D̄). Throughout this whole section, we assume that C(·) satisfies (A2) for ρ > 0, that (x̄, ȳ) ∈ C([0, T ]; Rn × Rl) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and that (x̄(·), ȳ(·)) is L(x̄,ȳ)-Lipschitz on [0, T ] for some L(x̄,ȳ) ≥ 1.
Let δ̄ > 0 be such that (A3.1) and (A3.2) hold at (x̄; δ̄) and (A4.1) is satisfied by (f, g) at ((x̄, ȳ); δ̄). Fix 0 < ε̄ < δ̄, its corresponding ψr+1 given by (3.30), and η̄ ∈ (0, ε̄/2), such that ε̄, ψr+1, and η̄ satisfy Lemma 3.2.4. Assuming that Lψ ≥ 4η̄/ρo, set ρ̄ := 2η̄/Lψ, the prox-regularity constant for the sets C(t) ∩ B̄ε̄(x̄(t)) from Lemma 3.2.6.

We start by extending the function h(t, x, ·, u) from B̄δ̄(ȳ(t)) to Rl so that this extension satisfies (A4.1) for all y ∈ Rl, and also (A4.2) whenever the latter is satisfied by h. This extension shall be later used in Theorem 3.2.14.

Remark 3.2.9 (Extension). For t ∈ [0, T] a.e., x ∈ [C(t) ∩ B̄δ̄(x̄(t))], and for u ∈ U(t), it is possible to extend the function h(t, x, ·, u) := (f, g)(t, x, ·, u) so that, whenever h satisfies (A4) (including (A4.2)), its extension also satisfies (A4) for all y ∈ Rl. Indeed, the convexity of B̄δ̄(ȳ(t)) for all t ∈ [0, T] yields that π(t, ·) := π_{B̄δ̄(ȳ(t))}(·) is well-defined and 1-Lipschitz on Rl. Define for a.e. t ∈ [0, T], and (x, y, u) ∈ [C(t) ∩ B̄δ̄(x̄(t))] × Rl × U(t),

h̄(t, x, y, u) := h(t, x, π(t, y), u).

Whenever h satisfies (A4) at ((x̄, ȳ), δ̄), arguments similar to those in [55, Remark 4.1] show that h̄ (whose name we keep as h) also satisfies (A4), where N̄(δ̄,δ̄)(t), which is [C(t) ∩ B̄δ̄(x̄(t))] × B̄δ̄(ȳ(t)), is now replaced by [C(t) ∩ B̄δ̄(x̄(t))] × Rl.

The following notations, which depend on (x̄; ε̄) and (ȳ; δ̄), will be used in the proofs of the results that follow, as well as in the proof of Theorem 4.2.11. They are instrumental in constructing a dynamic (D̄γk) that approximates (D̄) and has rich properties.

• Let L̄ and μ̄ be the constants given in (3.47). Define a sequence (γk)k such that, for all k ∈ N, γk > (2μ̄/η̄²)e (> e/δ̄) and γk → ∞ as k → ∞, and the real sequences (ᾱk)k, (σ̄k)k, and (ρ̄k)k by

ᾱk := (1/γk) ln(η̄²γk/2μ̄);  σ̄k := ((r + 1)L̄/2η̄²)(ln(r + 1)/γk) + ᾱk;  ρ̄k := √(δ̄² − 2ᾱk). (3.57)

Our choice of γk, together with the fact that μ̄ > δ̄ > η̄, yields that δ̄² > 2ᾱk, and

γk e^{−γk ᾱk} = 2μ̄/η̄²,  (ᾱk, σ̄k, ρ̄k) > 0 ∀k ∈ N,  ᾱk ↘ 0,  σ̄k ↘ 0, and ρ̄k ↗ δ̄. (3.58)

• For each t ∈ [0, T] and k ∈ N, we define the compact sets

C̄γk(t) := {x ∈ Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} ≤ 1} ⊂ int C(t) ∩ Bε̄(x̄(t)), (3.59)

C̄γk(t, k) := {x ∈ Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} ≤ 2μ̄/(η̄²γk) = e^{−γk ᾱk}} ⊂ int C̄γk(t). (3.60)

• For u ∈ U, the approximating dynamic (D̄γk) of (D̄) is defined by

(D̄γk)  ˙x(t) = f(t, x(t), y(t), u(t)) − Σ_{i=1}^{r+1} γk e^{γkψi(t,x(t))} ∇xψi(t, x(t)), a.e. t ∈ [0, T],
        ˙y(t) = g(t, x(t), y(t), u(t)) − γk e^{γkφ(t,y(t))} ∇yφ(t, y(t)), a.e. t ∈ [0, T]. (3.61)

Using Lemma 3.2.4, a translation of [58, equation (8)], and arguments parallel to those used in the proofs of [58, Propositions 4.4 & 4.6] and [59, Proposition 5.3], it is not difficult to derive the following properties for our sets C̄γk(t) and C̄γk(t, k), knowing that the sets C(t) ∩ B̄ε̄(x̄(t)) are (2η̄/Lψ)-prox-regular. Notice, from (3.59) and (3.60), that the sets here are time-dependent, uniformly localized near x̄(t), and are defined not only via ψ1, · · · , ψr but also via the extra function ψr+1. For completeness, we provide the adjusted proofs below.

Proposition 3.2.10. The following holds true.

(i) There exist k1 ∈ N and r1 ∈ (0, ρo/2], such that ∀k ≥ k1, ∀(t, x) ∈ {(t, x) ∈ [0, T] × Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} = 1}, and ∀(τ, z) ∈ B_{2r1}(t, x), we have

‖Σ_{i=1}^{r+1} e^{γkψi(τ,z)} ∇xψi(τ, z)‖ > 2η̄ Σ_{i=1}^{r+1} e^{γkψi(τ,z)}. (3.62)

(ii) There exist k2 ≥ k1 and ϵ̄o > 0 such that for all k ≥ k2 we have

[x ∈ C̄γk(t) & ‖Σ_{i=1}^{r+1} e^{γkψi(t,x)} ∇xψi(t, x)‖ ≤ η̄ Σ_{i=1}^{r+1} e^{γkψi(t,x)}] =⇒ [Σ_{i=1}^{r+1} e^{γkψi(t,x)} < e^{−ϵ̄oγk}].
(3.63)

(iii) For all t ∈ [0, T] and all k, C̄γk(t) ⊂ int(C(t) ∩ B̄ε̄(x̄(t))) and C̄γk(t, k) ⊂ int C̄γk(t), and these sets are uniformly compact. Moreover, there exists k3 ∈ N such that for k ≥ k3, for all t ∈ [0, T], we have

C̄γk(t) = cl(int C̄γk(t)),  C̄γk(t, k) = cl(int C̄γk(t, k)),

bdry C̄γk(t) := {x ∈ Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} = 1} ≠ ∅,
int C̄γk(t) := {x ∈ Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} < 1} ≠ ∅,
bdry C̄γk(t, k) := {x ∈ Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} = 2μ̄/(η̄²γk) = e^{−γk ᾱk}} ≠ ∅,
int C̄γk(t, k) := {x ∈ Rn : Σ_{i=1}^{r+1} e^{γkψi(t,x)} < 2μ̄/(η̄²γk) = e^{−γk ᾱk}} ≠ ∅.

Furthermore, C̄γk(t) and C̄γk(t, k) are amenable, epi-Lipschitz, and are respectively (η̄/Lψ)- and (η̄/2Lψ)-prox-regular.

(iv) (C̄γk(t))k and (C̄γk(t, k))k are nondecreasing sequences whose Painlevé-Kuratowski limit is C(t) ∩ B̄ε̄(x̄(t)) and satisfy

int(C(t) ∩ B̄ε̄(x̄(t))) = ∪_{k∈N} int C̄γk(t) = ∪_{k∈N} C̄γk(t) = ∪_{k∈N} int C̄γk(t, k) = ∪_{k∈N} C̄γk(t, k). (3.64)

(v) For c ∈ bdry(C(0) ∩ B̄ε̄(x̄(0))), there exist kc ≥ k3, rc > 0, and a vector dc ≠ 0 such that

[(C(0) ∩ B̄ε̄(x̄(0))) ∩ B̄rc(c)] + σ̄k (dc/‖dc‖) ⊂ int C̄γk(0, k), ∀k ≥ kc. (3.65)

In particular, for k ≥ kc we have

c + σ̄k (dc/‖dc‖) ∈ int C̄γk(0, k). (3.66)

Proof. (i). If this statement is not true, then there exist (γkn)n with kn ≥ n, (tγkn, xγkn) ∈ [0, T] × Rn with Σ_{i=1}^{r+1} e^{γkn ψi(tγkn, xγkn)} = 1, and (τγkn, zγkn) ∈ B_{2/n}(tγkn, xγkn) such that for all n > 2/ρo we have

‖Σ_{i=1}^{r+1} e^{γkn ψi(τγkn, zγkn)} ∇xψi(τγkn, zγkn)‖ ≤ 2η̄ Σ_{i=1}^{r+1} e^{γkn ψi(τγkn, zγkn)}. (3.67)

Now, let ψ̄(t, x) := max_{1≤i≤r+1} {ψi(t, x)}. Using Lemma 2.2.50, we have that

ψ̄(tγkn, xγkn) ≤ (1/γkn) ln(Σ_{i=1}^{r+1} e^{γkn ψi(tγkn, xγkn)}) = 0 ≤ ψ̄(tγkn, xγkn) + ln(r + 1)/γkn. (3.68)

Using the fact that ψi(tγkn, xγkn) ≤ 0, for all i = 1, · · · , r + 1, we deduce that the sequence (tγkn, xγkn)n lies in Gr(C(·) ∩ B̄ε̄(x̄(·))), and hence there exists a subsequence of (γkn)n, we do not relabel, along which the sequences (tγkn, xγkn)n and (τγkn, zγkn)n converge to the same element (to, zo) ∈ Gr(C(·) ∩ B̄ε̄(x̄(·))). Taking n → ∞ in (3.67)-(3.68) and using the fact that e^{γkn ψi(τγkn, zγkn)} → 0 whenever ψi(to, zo) < 0, we get the existence of a sequence of nonnegative numbers (λi)_{i∈Ī0(to,zo)} such that ψ̄(to, zo) = 0 and

‖Σ_{i∈Ī0(to,zo)} λi ∇xψi(to, zo)‖ ≤ 2η̄ with Σ_{i∈Ī0(to,zo)} λi = 1.

This contradicts Lemma 3.2.4, since ψ̄(to, zo) = 0 is equivalent to Ī0(to,zo) ≠ ∅.

(ii). If this statement is not true, there exist (γkn)n with kn ≥ n and (tγkn, xγkn) ∈ [0, T] × Rn such that

e^{−γkn/n} ≤ Σ_{i=1}^{r+1} e^{γkn ψi(tγkn, xγkn)} ≤ 1, (3.69)

‖Σ_{i=1}^{r+1} e^{γkn ψi(tγkn, xγkn)} ∇xψi(tγkn, xγkn)‖ ≤ η̄ Σ_{i=1}^{r+1} e^{γkn ψi(tγkn, xγkn)}. (3.70)

This yields that (tγkn, xγkn) ∈ Gr(C(·) ∩ B̄ε̄(x̄(·))). Since Gr(C(·) ∩ B̄ε̄(x̄(·))) is compact, we can assume that (tγkn, xγkn) → (to, xo) ∈ Gr(C(·) ∩ B̄ε̄(x̄(·))). Using (3.68)-(3.69), we deduce that ψ̄(tγkn, xγkn) → 0, and hence ψ̄(to, xo) = 0. Taking n → ∞ in (3.70) and using that e^{γkn ψi(tγkn, xγkn)} → 0 whenever ψi(to, xo) < 0, we get the existence of a sequence of nonnegative numbers (λi)_{i∈Ī0(to,xo)} such that

‖Σ_{i∈Ī0(to,xo)} λi ∇xψi(to, xo)‖ ≤ η̄ and Σ_{i∈Ī0(to,xo)} λi = 1.

This contradicts Lemma 3.2.4, since ψ̄(to, xo) = 0 implies that Ī0(to,xo) ≠ ∅.

(iii). To prove this part, we define, for every k ∈ N, the function ψγk
: [0, T] × Rn → R, defined by

ψγk(t, x) := (1/γk) ln(Σ_{i=1}^{r+1} e^{γkψi(t,x)}).

In that case,

∇xψγk(t, x) = (Σ_{i=1}^{r+1} e^{γkψi(t,x)} ∇xψi(t, x)) / (Σ_{i=1}^{r+1} e^{γkψi(t,x)}).

Notice that, for each t ∈ [0, T] and each k, ψγk(t, ·) is C^{1,1} on C̄γk(t) + ρoB. Translating (i) and applying it to a particular case, we deduce that, for every (t, x) ∈ bdry C̄γk(t), we have

‖∇xψγk(t, x)‖ > 2η̄. (3.71)

So, in summary, for each t ∈ [0, T], we apply Lemma 2.2.48 (part I.), for S := C̄γk(t) and ψ(·) := ψγk(t, ·). This proves all the properties in (iii) pertaining to C̄γk(t), except the uniform constant for the prox-regularity. To prove that, we follow the same steps of the proof of the second part of (c) in [59, Proposition 5.3]. Now, to prove the properties pertaining to C̄γk(t, k), we use Lemma 2.2.48 (part II.), for S(k) := C̄γk(t, k). We use arguments similar to those used in the proof of the second part of (c) in [59, Proposition 5.3] to show that the prox-regularity constant of C̄γk(t, k) is uniform and equal to η̄/2Lψ.

(iv). Fix t ∈ [0, T]. Let x ∈ int(C(t) ∩ B̄ε̄(x̄(t))); then ψ̄(t, x) < 0. Since ᾱk → 0 and γk → ∞, there exists kx ∈ N such that for all k ≥ kx, we have ᾱkx + ln(r + 1)/γkx < −ψ̄(t, x). Then, using Lemma 2.2.50, we have that

ψ̄(t, x) ≤ (1/γkx) ln(Σ_{i=1}^{r+1} e^{γkx ψi(t,x)}) ≤ ψ̄(t, x) + ln(r + 1)/γkx < −ᾱkx. (3.72)

Hence, Σ_{i=1}^{r+1} e^{γkx ψi(t,x)} < e^{−γkx ᾱkx}, and hence x ∈ int C̄γk(t, k). Then,

int(C(t) ∩ B̄ε̄(x̄(t))) ⊂ ∪_{k∈N} int C̄γk(t, k) ⊂ ∪_{k∈N} C̄γk(t, k) ⊂ ∪_{k∈N} int C̄γk(t) ⊂ ∪_{k∈N} C̄γk(t) ⊂ int(C(t) ∩ B̄ε̄(x̄(t))).

This proves that (3.64) is satisfied. Using Lemma 2.2.50, we notice that, for each (t, x), the function ψγk(t, x) is non-increasing in k, and hence, for each t, the sequence (C̄γk(t))k is non-decreasing. As a result, using Lemma 2.1.4, we show that the Painlevé-Kuratowski limit is

lim_{k→∞} C̄γk(t) = cl(∪_{k∈N} C̄γk(t)).

However, using (3.64) and (3.39), we deduce that

cl(∪_{k∈N} C̄γk(t)) = cl(int(C(t) ∩ B̄ε̄(x̄(t)))) = C(t) ∩ B̄ε̄(x̄(t)).

On the other side, since for each (t, x) the function ψγk(t, x) is non-increasing in k and the sequence ᾱk is decreasing, (C̄γk(t, k))k is nondecreasing. Then, using Lemma 2.1.4, we show that the Painlevé-Kuratowski limit is

lim_{k→∞} C̄γk(t, k) = cl(∪_{k∈N} C̄γk(t, k)) = C(t) ∩ B̄ε̄(x̄(t)).

(v). We follow the same steps of the proof of Proposition 4.6(iii) in [58], replacing C by C(0) ∩ B̄ε̄(x̄(0)), I0_c by Ī0(0,c), r by r + 1, αk by ᾱk, σk by σ̄k, ψi(·) by ψi(0, ·), M̄ψ by L̄, and η by η̄.

Remark 3.2.11. We deduce, from Proposition 3.2.10, that for any c ∈ C(0) ∩ B̄ε̄(x̄(0)), there exists a sequence (cγk)k such that, for k large enough, cγk ∈ int C̄γk(0, k) ⊂ int C̄γk(0), and cγk → c. Indeed:

(i) For c ∈ bdry(C(0) ∩ B̄ε̄(x̄(0))), we choose cγk := c + σ̄k (dc/‖dc‖) for all k. For k ≥ kc, we have from (3.66) that cγk ∈ int C̄γk(0, k). Moreover, since σ̄k → 0, we have cγk → c.

(ii) For c ∈ int(C(0) ∩ B̄ε̄(x̄(0))), Proposition 3.2.10(iv) yields the existence of k̂c ∈ N such that c ∈ int C̄γk(0, k) for all k ≥ k̂c. Hence, there exists r̂c > 0 satisfying c ∈ B̄r̂c(c) ⊂ int C̄γk(0, k), ∀k ≥ k̂c. In this case, we take the sequence cγk ≡ c ∈ int C̄γk(0, k), which converges to c.

On the other hand, for the ball B̄δ̄(ȳ(0)) generated by the single function φ(0, ·) in (3.31)-(3.32), we have the following property.

Proposition 3.2.12. There exists ko ∈ N such that

B̄δ̄(ȳ(0)) ∩ B̄_{δ̄/4}(d) − (2ᾱk/δ̄) V(d) ⊂ Bρ̄k(ȳ(0)), ∀k ≥ ko and ∀d ∈ bdry B̄δ̄(ȳ(0)), (3.73)

where V(d) := ∇yφ(0, d)/‖∇yφ(0, d)‖ = (d − ȳ(0))/δ̄.

Proof.
This property follows by applying Lemma 2.2.48 (Part II)(iv) (or Theorem 3.1(iii) of [55]) to S := B̄δ̄(ȳ(0)), ro := δ̄/4, and η := δ̄/2, and by noting that the triangle inequality with ‖∇yφ(0, d)‖ = δ̄ gives

‖∇yφ(0, z)‖ > δ̄/2 and ⟨∇yφ(0, z), V(d)⟩ > δ̄/2, ∀d ∈ bdry B̄δ̄(ȳ(0)) and ∀z ∈ B_{δ̄/2}(d).

Parallel to Remark 3.2.11 and using Proposition 3.2.12, we deduce the following.

Remark 3.2.13. For any d ∈ B̄δ̄(ȳ(0)), there exists a sequence (dγk)k such that, for k large enough, dγk ∈ int B̄ρ̄k(ȳ(0)), and dγk → d. Indeed:

(i) As ρ̄k ↗ δ̄, we deduce from (3.73) that for any d ∈ bdry B̄δ̄(ȳ(0)), there exists a sequence (dγk)k such that, for k large enough, dγk ∈ Bρ̄k(ȳ(0)) ⊂ Bδ̄(ȳ(0)), and dγk → d.

(ii) For d ∈ int B̄δ̄(ȳ(0)), there exists kd ∈ N such that d ∈ int B̄ρ̄k(ȳ(0)) for all k ≥ kd. Hence, there exists rd > 0 satisfying d ∈ B̄rd(d) ⊂ int B̄ρ̄k(ȳ(0)), ∀k ≥ kd. In this case, we take the sequence dγk ≡ d ∈ int B̄ρ̄k(ȳ(0)), which converges to d.

The next theorem is fundamental for the thesis, as it illustrates two key ideas. First, it highlights the invariance for (D̄γk) of C̄γk(·, k) × B̄ρ̄k(ȳ(·)) ⊂ int C̄γk(·) × Bδ̄(ȳ(·)). More precisely, for k large, if the initial condition is in C̄γk(0, k) × B̄ρ̄k(ȳ(0)), then (D̄γk) has a unique solution which is uniformly Lipschitz and remains in C̄γk(t, k) × B̄ρ̄k(ȳ(t)) ∀t ∈ [0, T]. This result extends that in [55, 58] in two directions: (i) when the original problem has coupled sweeping processes, and (ii) when the sweeping set is time-dependent and localized near (x̄, ȳ). Second, it shows that the solution of (D̄γk) uniformly approximates that of (D̄).

Theorem 3.2.14. Let (cγk, dγk)k be a given sequence such that (cγk, dγk) ∈ C̄γk(0, k) × B̄ρ̄k(ȳ(0)) for every k, and (cγk, dγk) → (x0, y0) ∈ N̄(ε̄,δ̄)(0). Let uγk be a given sequence in U. The following results hold:

(I). [Existence of solution to (D̄γk) and Invariance] For k large enough, the Cauchy problem of the system (D̄γk) corresponding to (x(0), y(0)) = (cγk, dγk) and u = uγk has a unique solution (xγk, yγk) ∈ W^{1,∞}([0, T]; Rn × Rl) such that

(xγk(t), yγk(t)) ∈ C̄γk(t, k) × B̄ρ̄k(ȳ(t)) ∀t ∈ [0, T], (3.74)

max{‖ξγk‖∞, ‖ζγk‖∞} ≤ 2μ̄/η̄²,  max{‖ ˙xγk‖∞, ‖ ˙yγk‖∞} ≤ Mh + (2μ̄/η̄²)L̄, (3.75)

where ξγk(·) and ζγk(·) are the positive continuous functions on [0, T] corresponding respectively to the solutions xγk and yγk via the formulae

ξγk(·) := Σ_{i=1}^{r+1} ξiγk(·);  ξiγk(·) := γk e^{γkψi(·,xγk(·))} (i = 1, . . . , r + 1);  and ζγk(·) := γk e^{γkφ(·,yγk(·))}. (3.76)

(II). [Solution of (D̄γk) converges to a unique solution of (D̄)] There exist (x, y) ∈ W^{1,∞}([0, T]; Rn × Rl) and (ξ1, · · · , ξr, ξr+1, ζ) ∈ L∞([0, T]; R^{r+2}_+) such that a subsequence of ((xγk, yγk), (ξ1γk, · · · , ξrγk, ξ(r+1)γk, ζγk)) (we do not relabel) satisfies

(xγk, yγk) → (x, y) uniformly,  ( ˙xγk, ˙yγk) → ( ˙x, ˙y) weakly* in L∞,  ξiγk → ξi weakly* in L∞ (∀i),  ζγk → ζ weakly* in L∞, (3.77)

and ξγk converges weakly* in L∞([0, T]; R+) to ξ := Σ_{i=1}^{r+1} ξi. Moreover,

(x(t), y(t)) ∈ N̄(ε̄,δ̄)(t) := (C(t) ∩ B̄ε̄(x̄(t))) × B̄δ̄(ȳ(t)) ∀t ∈ [0, T], (3.78)

max{‖ξ‖∞, ‖ζ‖∞} ≤ 2μ̄/η̄²,  max{‖ ˙x‖∞, ‖ ˙y‖∞} ≤ Mh + (2μ̄/η̄²)L̄, (3.79)

ξi(t) = 0 for all t ∈ I-i(x), i ∈ {1, · · · , r, r + 1};  ξ(t) = 0 for all t ∈ Ī-(x);  and ζ(t) = 0 for all t such that φ(t, y(t)) < 0. (3.80)

If uγk admits a subsequence that converges a.e. to some u ∈ U, or if (A1) and (A4.2) hold, then there exists u ∈ U such that (x, y) is the unique solution of (D̄) corresponding to ((x0, y0), u), and, for almost all t ∈ [0, T],

˙x(t) = f(t, x(t), y(t), u(t)) − Σ_{i=1}^{r+1} ξi(t) ∇xψi(t, x(t)) (3.81)
     = f(t, x(t), y(t), u(t)) − Σ_{i∈Ī0(t,x(t))} ξi(t) ∇xψi(t, x(t)), (3.82)
˙y(t) = g(t, x(t), y(t), u(t)) − ζ(t) ∇yφ(t, y(t)). (3.83)

Proof. Part (I). Step I.1.
A unique solution (xγk, yγk) of (D̄γk) exists on a certain interval [0, T̂). Recall that in Remark 3.2.9, for t ∈ [0, T] a.e., x ∈ [C(t) ∩ B̄ε̄(x̄(t))], and for u ∈ U(t), we extended h(t, x, ·, u) := (f, g)(t, x, ·, u) so that (A4.1) holds true for all y ∈ Rl. Hence, for fixed k ∈ N and for u := uγk, the system (D̄γk) is well defined on the set

D := {(t, x, y) ∈ [0, T] × Rn × Rl : x ∈ int(C(t) ∩ B̄ε̄(x̄(t))), y ∈ B_{2δ̄}(ȳ(t))}. (3.84)

As (0, cγk, dγk) ∈ D, standard local existence and uniqueness results from ordinary differential equations (see Theorem 2.3.3 or [39, Theorem 5.3]) confirm that, for some T1 ∈ (0, T], the Cauchy problem (D̄γk) with (x(0), y(0)) = (cγk, dγk) has a unique solution (xγk, yγk) ∈ W^{1,1}([0, T1]; Rn × Rl) such that (s, xγk(s), yγk(s)) ∈ D for all s ∈ [0, T1]. Set

T̂ := sup{T1 : (x, y) solves (D̄γk) on [0, T1] with (x(0), y(0)) = (cγk, dγk) and (t, x(t), y(t)) ∈ D ∀t ∈ [0, T1]}. (3.85)

The uniqueness of the solution yields that a solution (xγk, yγk) exists on the interval [0, T̂), and we have (t, xγk(t), yγk(t)) ∈ D, ∀t ∈ [0, T̂).

Step I.2. On [0, T̂], (xγk(t), yγk(t)) ∈ C̄γk(t) × B̄δ̄(ȳ(t)), and T̂ = T.

Notice that xγk(0) = cγk ∈ C̄γk(0, k) ⊂ int C̄γk(0) implies that the function ∆(·) given by ∆(τ) := Σ_{i=1}^{r+1} e^{γkψi(τ,xγk(τ))} − 1 has ∆(0) < 0. If for some t1 ∈ (0, T̂), ∆(t1) = 0, let t > t1 close enough to t1 so that t ∈ (0, T̂). Then, from (3.53) and (3.50), we deduce that for i = 1, · · · , r + 1, there exists θiγk(·) ∈ ˆ∂sψi(·, xγk(·)) a.e., such that

(d/ds) ψi(s, xγk(s)) = θiγk(s) + ⟨∇xψi(s, xγk(s)), ˙xγk(s)⟩, s ∈ [t1, t] a.e., (3.86)

|θiγk(s) + ⟨∇xψi(s, xγk(s)), f(s, xγk(s), yγk(s), uγk(s))⟩| ≤ L̄(1 + Mh) = μ̄, a.e. s ∈ [t1, t]. (3.87)

Then, using the first equation of (D̄γk), we obtain

∆(t) − ∆(t1) = Σ_{i=1}^{r+1} ∫_{t1}^{t} γk e^{γkψi(s,xγk(s))} (d/ds) ψi(s, xγk(s)) ds
 = ∫_{t1}^{t} [Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} (θiγk(s) + ⟨∇xψi(s, xγk(s)), f(s, xγk(s), yγk(s), uγk(s))⟩)
   − ⟨Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} ∇xψi(s, xγk(s)), Σ_{j=1}^{r+1} γk e^{γkψj(s,xγk(s))} ∇xψj(s, xγk(s))⟩] ds  (by (3.86))
 ≤ ∫_{t1}^{t} [Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} μ̄ − ‖Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} ∇xψi(s, xγk(s))‖²] ds  (by (3.87)) (3.88)
 ≤ ∫_{t1}^{t} Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} (μ̄ − 4η̄²γk Σ_{j=1}^{r+1} e^{γkψj(s,xγk(s))}) ds  (by (3.62))
 ≤ ∫_{t1}^{t} Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} (μ̄ − 2η̄²γk) ds < 0,

where the third and the second-to-last inequalities are due to the fact that we can choose t close enough to t1 so that, for s ∈ [t1, t], xγk(s) ∈ B_{2r1}(t1, xγk(t1)) (so we apply (3.62)) and Σ_{j=1}^{r+1} e^{γkψj(s,xγk(s))} > 1/2, and the last inequality follows from γk > μ̄/2η̄². This shows that, ∀t1 ∈ (0, T̂) with ∆(t1) = 0, ∆(t) < 0 for all t > t1 close enough to t1. Whence, the continuity of ∆(·) on [0, T̂) and ∆(0) < 0 yield that ∆(t) ≤ 0 for all t ∈ [0, T̂), that is,

xγk(t) ∈ C̄γk(t) ⊂ int(C(t) ∩ B̄ε̄(x̄(t))), ∀t ∈ [0, T̂).

On the other hand, as yγk(0) = dγk ∈ B̄ρ̄k(ȳ(0)) ⊂ Bδ̄(ȳ(0)), we have φ(0, yγk(0)) < 0. If for some t1 ∈ (0, T̂), φ(t1, yγk(t1)) = 0, that is, ‖yγk(t1) − ȳ(t1)‖ = δ̄, choose t > t1 close enough to t1 so that, ∀s ∈ [t1, t], e^{γkφ(s,yγk(s))} > 1/2 and ‖yγk(s) − ȳ(s)‖ > δ̄/2. Hence, (3.55), (D̄γk), (A4.1), (3.47), yγk(·) ∈ B_{2δ̄}(ȳ(·)), and η̄ < δ̄/2 (by Lemma 3.2.4) yield that, for s ∈ [t1, t] a.e.,

(d/ds) φ(s, yγk(s)) = ⟨yγk(s) − ȳ(s), − ˙ȳ(s) + g(s, xγk(s), yγk(s), uγk(s))⟩ − γk e^{γkφ(s,yγk(s))} ‖yγk(s) − ȳ(s)‖²
 ≤ ‖yγk(s) − ȳ(s)‖ L(x̄,ȳ)(1 + Mh) − γk e^{γkφ(s,yγk(s))} ‖yγk(s) − ȳ(s)‖² (3.89)
 < 2μ̄ − (γk/2) η̄² < 0,

where the last inequality follows from γk > (2μ̄/η̄²)e.
Hence, for all t close enough to t1, we have

φ(t, yγk(t)) − φ(t1, yγk(t1)) = ∫_{t1}^{t} (d/ds) φ(s, yγk(s)) ds < 0,

so that φ(t, yγk(t)) < 0. This shows that yγk(t) ∈ B̄δ̄(ȳ(t)) for all t ∈ [0, T̂). Since, for t ∈ [0, T̂), (t, xγk(t), yγk(t)) remains in the compact set Gr(C̄γk(·) × B̄δ̄(ȳ(·))), it is possible to extend in this compact set the solution (xγk, yγk) to the whole interval [0, T̂]. If T̂ < T, the local existence of a solution starting at T̂ contradicts the definition of T̂, proving that T̂ = T. This completes Step I.2.

Step I.3. Invariance of C̄γk(t, k) × B̄ρ̄k(ȳ(t)), i.e., (3.74) is valid.

As cγk ∈ C̄γk(0, k), we have Σ_{i=1}^{r+1} e^{γkψi(0,cγk)} ≤ 2μ̄/(η̄²γk). Since γk → ∞, there exists k4 ∈ N large enough such that for all k ≥ k4, we have that

2μ̄/(η̄²γk) ≥ max{e^{−γk ϵ̄o/2}, Σ_{i=1}^{r+1} e^{γkψi(0,cγk)}}, (3.90)

where ϵ̄o is the constant from (3.63). Fix k ≥ k4. Let t1 ∈ [0, T) be such that Σ_{i=1}^{r+1} e^{γkψi(t1,xγk(t1))} = 2μ̄/(η̄²γk). Let ϵ̄k := min{ϵ̄o/2, ln 2/(2γk)}. Using the continuity of xγk and ψi(·, ·), we can choose t close enough to t1 such that for all s ∈ [t1, t],

Σ_{i=1}^{r+1} e^{γkψi(s,xγk(s))} ≥ Σ_{i=1}^{r+1} e^{γkψi(t1,xγk(t1))} e^{−γkϵ̄k} = (2μ̄/(γkη̄²)) e^{−γkϵ̄k} ≥ e^{−γkϵ̄o/2} e^{−γkϵ̄k} ≥ e^{−γkϵ̄o/2} e^{−γkϵ̄o/2} = e^{−γkϵ̄o}, (3.91)

where we used (3.90). Hence, by Proposition 3.2.10(ii), and the fact that xγk(τ) ∈ C̄γk(τ) for all τ ∈ [0, T] (see Step I.2), we have

‖Σ_{i=1}^{r+1} e^{γkψi(s,xγk(s))} ∇xψi(s, xγk(s))‖ > η̄ Σ_{i=1}^{r+1} e^{γkψi(s,xγk(s))}, ∀s ∈ [t1, t]. (3.92)

Thus, for ∆̄(·) := Σ_{j=1}^{r+1} e^{γkψj(·,xγk(·))} − 2μ̄/(η̄²γk), we have

∆̄(t) − ∆̄(t1) = Σ_{i=1}^{r+1} e^{γkψi(t,xγk(t))} − Σ_{i=1}^{r+1} e^{γkψi(t1,xγk(t1))}
 ≤ ∫_{t1}^{t} [Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} μ̄ − ‖Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} ∇xψi(s, xγk(s))‖²] ds  (by (3.88))
 ≤ ∫_{t1}^{t} Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} (μ̄ − η̄²γk Σ_{j=1}^{r+1} e^{γkψj(s,xγk(s))}) ds  (by (3.92))
 ≤ ∫_{t1}^{t} Σ_{i=1}^{r+1} γk e^{γkψi(s,xγk(s))} μ̄ (1 − 2e^{−γkϵ̄k}) ds < 0,  (by (3.91))

where the last inequality follows from the definition of ϵ̄k. This proves that xγk(t) ∈ C̄γk(t, k) for all t > t1 close enough to t1. Whence, similarly to Step I.2, the continuity of ∆̄(·) and ∆̄(0) ≤ 0 imply that xγk(t) ∈ C̄γk(t, k), ∀t ∈ [0, T].

On the other hand, having yγk(0) = dγk ∈ B̄ρ̄k(ȳ(0)), where ρ̄k is given in (3.57), means that φ(0, yγk(0)) ≤ −ᾱk. Since ᾱk → 0 and ᾱk > 0 for all k, we can find k5 ≥ k4 such that

ᾱk ≤ min{δ̄²/4, −φ(0, dγk)} for all k ≥ k5.

Define ϵ̂k := min{δ̄²/8, ln 2/(2γk)}. If for some t1 ∈ [0, T), φ(t1, yγk(t1)) = −ᾱk, let t > t1 close enough to t1 such that

φ(s, yγk(s)) ≥ −ᾱk − ϵ̂k, ∀s ∈ [t1, t]. (3.93)

Then, for all s ∈ [t1, t], we have

‖yγk(s) − ȳ(s)‖² ≥ δ̄² − 2ᾱk − 2ϵ̂k ≥ δ̄²/4 ≥ η̄². (3.94)

Hence, using, respectively, (3.89), yγk(·) ∈ B̄δ̄(ȳ(·)), (3.47), (3.93), the first equation in (3.58), and (3.94), we deduce that

φ(t, yγk(t)) − φ(t1, yγk(t1)) = ∫_{t1}^{t} (d/ds) φ(s, yγk(s)) ds
 ≤ ∫_{t1}^{t} (μ̄ − γk e^{−γkᾱk} e^{−γkϵ̂k} ‖yγk(s) − ȳ(s)‖²) ds
 ≤ ∫_{t1}^{t} (μ̄ − (2μ̄/η̄²) e^{−γkϵ̂k} η̄²) ds
 = ∫_{t1}^{t} μ̄ (1 − 2e^{−γkϵ̂k}) ds < 0,

proving that φ(t, yγk(t)) < −ᾱk. Thus, the continuity of φ(·, yγk(·)) yields yγk(t) ∈ B̄ρ̄k(ȳ(t)), ∀t ∈ [0, T].

Step I.4. (xγk, yγk, ξγk, ζγk) satisfy equation (3.75).

So far, we proved that a solution (xγk, yγk) of the Cauchy problem of (D̄γk) exists and satisfies (3.74). Hence, the definitions of C̄γk(t, k) and ξγk, given in (3.60) and (3.76), respectively, yield that ‖ξγk‖∞ ≤ 2μ̄/η̄². On the other hand, the definition of B̄ρ̄k(ȳ(t)) yields that φ(t, yγk(t)) ≤ −ᾱk, and thus the same bound is immediately obtained for the norm of ζγk defined in (3.76). Whence, the first inequality in (3.75) is satisfied. Employing this latter in (D̄γk) and then calling on the definition of L̄ in (3.47), we obtain that the second inequality in (3.75) is valid.
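Steps I.2-I.3 can be observed numerically on the y-equation alone. The following toy sketch uses illustrative data only (ȳ ≡ 0, δ̄ = 1, a constant outward drift g, and a moderate γ), and assumes, as the formulas (3.55) and (3.73) indicate, that φ has the form φ(y) = (‖y − ȳ‖² − δ̄²)/2. It integrates ˙y = g − γ e^{γφ(y)} ∇yφ(y) and checks that the trajectory never leaves the ball B̄δ̄(ȳ), mirroring the invariance established above.

```python
# Toy invariance check for the penalized y-equation (illustrative data, not from the text):
# y' = g - gamma * exp(gamma * phi(y)) * grad_phi(y),  phi(y) = (|y|^2 - delta^2) / 2,
# with ybar = 0, delta = 1, and a constant drift g pushing the state outward.
import math

def simulate(gamma, g=(1.0, 0.5), delta=1.0, dt=1e-3, T=3.0):
    y = [0.0, 0.0]
    radii = []
    for _ in range(int(T / dt)):
        phi = (y[0]**2 + y[1]**2 - delta**2) / 2.0
        pen = gamma * math.exp(gamma * phi)          # penalty weight gamma * e^{gamma*phi}
        y = [y[j] + dt * (g[j] - pen * y[j]) for j in range(2)]  # grad phi(y) = y
        radii.append(math.hypot(y[0], y[1]))
    return radii

radii = simulate(gamma=50.0)
print(max(radii))   # stays strictly below delta = 1
```

As γ increases, the equilibrium radius moves closer to δ̄, consistent with ρ̄k ↗ δ̄ in (3.58).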
Part (II). Step II.1. Existence of (ξ1, · · · , ξr+1, ζ) and (x, y) satisfying (3.77)-(3.80).

Using (3.74)-(3.75), it follows that (.1) holds for R := r + 1 and (xk, yk, ξik, ζk) := (xγk, yγk, ξiγk, ζγk). Hence, by Lemma .0.2(i), there is a subsequence (not relabeled) of ((xγk, yγk), (ξ1γk, · · · , ξ(r+1)γk, ζγk)) that converges, respectively, to some (x, y) ∈ W^{1,∞}([0, T]; R^{n+l}) and (ξ1, · · · , ξr+1, ζ) ∈ L∞([0, T]; R^{r+2}_+) such that (3.77) and (3.79) are satisfied. Moreover, (3.78) follows from (3.74), (3.58), and Proposition 3.2.10(iv). Now, we show that (3.80) holds. Let i ∈ {1, · · · , r + 1} and t ∈ I-i(x), that is, ψi(t, x(t)) < 0. Then, by (A3.1) and the uniform convergence of xγk to x, there exist kt ∈ N, αt > 0, and τt > 0 such that, ∀k ≥ kt, we have

ψi(s, xγk(s)) < −αt/2, ∀s ∈ (t − τt, t + τt) ∩ [0, T].

Hence, ξiγk(s) < γk e^{−γk αt/2} → 0 as k → ∞, uniformly on (t − τt, t + τt) ∩ [0, T], and ξi(t) = 0. Let t ∈ Ī-(x); then t ∈ I-i(x) ∀i ∈ {1, · · · , r + 1}, and hence ξi(t) = 0 ∀i ∈ {1, · · · , r + 1}, implying that also ξ(t) = 0. Similarly, let t ∈ [0, T] be such that φ(t, y(t)) < 0. The same arguments now applied to φ(t, y(t)) yield the existence of k̂t ∈ N, α̂t > 0, and τ̂t > 0 such that, ∀s ∈ (t − τ̂t, t + τ̂t) ∩ [0, T],

ζγk(s) := γk e^{γkφ(s,yγk(s))} < γk e^{−γk α̂t/2} → 0 uniformly as k → ∞,

and hence ζ(t) = 0.

Step II.2. Existence of u ∈ U: ((x, y), u) & (ξi, ζ) satisfy (D̄) and (3.81)-(3.83), and (x, y) is unique.

Whether uγk admits a subsequence that converges to some u ∈ U for t ∈ [0, T] a.e., or assumptions (A1) and (A4.2) are satisfied, apply in each of the two cases the corresponding result in Lemma .0.2(ii) to Q(·) := C(·) ∩ B̄ε̄(x̄(·)), R := r + 1, qi(·, ·) := ψi(·, ·), to the sequences ((xk, yk), uk) := ((xγk, yγk), uγk), ξik := ξiγk, and ζk := ζγk in (3.76), and to their respective limits (x, y), ξi, and ζ. Then, there exists u(·) such that ((x, y), u), ξi (i = 1, · · · , r + 1), and ζ satisfy (3.81)-(3.83). The facts that (x, y) is a solution of (D̄) corresponding to ((x0, y0), u) and is unique follow now directly from Lemma 3.2.8. This completes the proof of this theorem.

Remark 3.2.15. Similar arguments to Steps I.2-3 in the proof of Theorem 3.2.14 also show the invariance of the larger sets C̄γk(t) × B̄ρ̄k(ȳ(t)); this means that if (cγk, dγk) is taken in C̄γk(0) × B̄ρ̄k(ȳ(0)), then for all t ∈ [0, T], (xγk(t), yγk(t)) ∈ C̄γk(t) × B̄ρ̄k(ȳ(t)).

The following corollary is an immediate consequence of Theorem 3.2.14, in which we take uγk = u for all k, and hence neither (A1) nor (A4.2) is required. It also consists of a Lipschitz-existence and uniqueness result for the Cauchy problem of (D̄) via the solution of the Cauchy problem of (D̄γk), whose initial condition is carefully chosen.

Corollary 3.2.16. For given (x0, y0) ∈ N̄(ε̄,δ̄)(0) and u ∈ U, the system (D̄) corresponding to ((x0, y0); u) has a unique solution (x, y), and hence it is Lipschitz and satisfies (3.44)-(3.46). This solution is the uniform limit of a subsequence (not relabeled) of (xγk, yγk)k, which is obtained via Theorem 3.2.14 as the solution of (D̄γk) with ((x(0), y(0)); u) = ((cγk, dγk); u), where cγk and dγk are the sequences from Remarks 3.2.11 and 3.2.13 corresponding to c = x0 and d = y0, respectively. Hence, for k sufficiently large, we have that (xγk(t), yγk(t)) ∈ C̄γk(t, k) × B̄ρ̄k(ȳ(t)) ∀t ∈ [0, T], (xγk, yγk)k is uniformly Lipschitz, and all conclusions of Theorem 3.2.14 hold.

We now present the table summarizing the results of Subsection 3.2.2.

Table 3.3 Summary of results from Subsection 3.2.2.

Remark 3.2.9: We extend the function h(t, x, ·, u) from B̄δ̄(ȳ(t)) to Rl so that this extension satisfies (A4.1) for all y ∈ Rl, and also (A4.2) whenever it is satisfied by h. This extension shall be later used in Theorem 3.2.14.
Proposition 3.2.10: We derive properties for the sets C̄γk(t) and C̄γk(t, k).
Remark 3.2.11: We approximate any c ∈ C(0) ∩ B̄ε̄(x̄(0)) by a sequence cγk ∈ int C̄γk(0, k) ⊂ int C̄γk(0) such that cγk → c.
Proposition 3.2.12: We derive properties for the ball B̄δ̄(ȳ(0)) generated by the single function φ(0, ·).
Remark 3.2.13: We approximate any d ∈ B̄δ̄(ȳ(0)) by a sequence dγk ∈ int B̄ρ̄k(ȳ(0)) such that dγk → d.
Theorem 3.2.14: We highlight the invariance for (D̄γk) of C̄γk(·, k) × B̄ρ̄k(ȳ(·)) ⊂ int C̄γk(·) × Bδ̄(ȳ(·)), and show that the solution of (D̄γk) uniformly approximates that of (D̄).
Remark 3.2.15: We highlight the invariance of the larger sets C̄γk(t) × B̄ρ̄k(ȳ(t)).
Corollary 3.2.16: This is an immediate consequence of Theorem 3.2.14 and consists of a Lipschitz-existence and uniqueness result for the Cauchy problem of (D̄) via the solution of the Cauchy problem of (D̄γk), whose initial condition is carefully chosen.

3.3 Study of the dynamic (D) under global assumptions

We now introduce the following global versions of the previous assumptions that shall be used for the global results in this section. For completeness and the reader's convenience, we include them here. (A3.1)G and (A4)G are, respectively, assumptions (A3.1) and (A4) when satisfied for δ̄ = ∞ (the same constants' labels are kept), that is, x̄, ȳ and the balls around them are not involved therein, and (A3.2)G is a global version of (A3.2) which will imply the uniform prox-regularity of C(t).

(A3)G Global assumptions on the functions ψi:

(A3.1)G There exist ρo > 0 and Lψ > 0 such that, for each i, ∇xψi(·, ·) exists on Gr(C(·)) + {0} × ρoB, and ψi(·, ·) and ∇xψi(·, ·) satisfy, for all (t1, x1), (t2, x2) ∈ Gr(C(·)) + {0} × (ρo/2)B̄,

max{|ψi(t1, x1) − ψi(t2, x2)|, ‖∇xψi(t1, x1) − ∇xψi(t2, x2)‖} ≤ Lψ(|t1 − t2| + ‖x1 − x2‖).
(A3.2)G For every (t, x) ∈ Gr C(·) with I0(t,x) ≠ ∅, we have

[Σ_{i∈I0(t,x)} λi ∇xψi(t, x) = 0, with each λi ≥ 0] =⇒ [λi = 0, ∀i ∈ I0(t,x)].

(A4)G Global assumptions on h(t, x, y, u) := (f, g)(t, x, y, u):

(A4.1)G For (x, y, u) ∈ ∪_{t∈[0,T]} C(t) × Rl × U, h(·, x, y, u) is Lebesgue-measurable and, for a.e. t ∈ [0, T], h(t, ·, ·, ·) is continuous on C(t) × Rl × U(t). There exist Mh > 0 and Lh ∈ L²([0, T]; R+) such that, for a.e. t ∈ [0, T], for all (x, y), (x′, y′) ∈ C(t) × Rl and u ∈ U(t),

‖h(t, x, y, u)‖ ≤ Mh and ‖h(t, x, y, u) − h(t, x′, y′, u)‖ ≤ Lh(t)‖(x, y) − (x′, y′)‖.

(A4.2)G The set h(t, x, y, U(t)) is convex for all (x, y) ∈ C(t) × Rl and t ∈ [0, T] a.e.²

² This condition is only needed for the existence of an optimal solution, Theorem 4.1.1, and not for Theorem 3.3.7.

3.3.1 Preliminary results

The compactness of Gr C(·) assumed in the following lemma allows us to easily imitate the proof of Lemma 3.1.1 and produce the following equivalence between (A3.2)G and a global version of condition (3.10), namely (3.95), in which x̄ and the localization around it are absent.

Lemma 3.3.1. Assume that ψi(·, ·) is continuous and, for all t ∈ [0, T], the set C(t) is nonempty, closed, and given by (3.3). Assume that (A3.1)G holds and that Gr C(·) is compact. Then (A3.2)G is equivalent to the existence of a constant η > 0 such that

‖Σ_{i∈I0(t,c)} λi ∇xψi(t, c)‖ > 2η, ∀(t, c) ∈ {(τ, x) ∈ Gr C(·) : I0(τ,x) ≠ ∅}, (3.95)

where I0(τ,x) is defined in (3.7) and (λi)_{i∈I0(t,c)} is any sequence of nonnegative numbers satisfying Σ_{i∈I0(t,c)} λi = 1.

As a consequence of Lemma 3.3.1, we obtain the uniform prox-regularity of C(t), as well as a formula for the normal cone to C(t). For Lψ, the common bound of {‖∇xψi(·, ·)‖}_{i=1}^{r} and the common Lipschitz constant of {∇xψi(·, ·)}_{i=1}^{r} on the compact set Gr C(·) + {0} × (ρo/2)B̄, we assume without loss of generality that Lψ ≥ 8η/ρo.

Lemma 3.3.2. Assume that ψi(·, ·) is continuous and, for all t ∈ [0, T], the set C(t) is nonempty, closed, and given by (3.3). Assume that (A3.1)G and (A3.2)G hold, and that Gr C(·) is compact. Then, for all t ∈ [0, T], C(t) is amenable (in the sense of [62]), epi-Lipschitzian, C(t) = cl(int C(t)), and is uniformly (2η/Lψ)-prox-regular. In this global setting, the normal cone formula (3.12) is now valid for all (t, x) ∈ Gr C(·). In particular,

N_{C(t)}(x) = N^P_{C(t)}(x) = N^L_{C(t)}(x) = {Σ_{i∈I0(t,x)} λi ∇xψi(t, x) : λi ≥ 0} ≠ {0}, ∀x ∈ bdry C(t). (3.96)

Proof. We use condition (3.95), Lemma 2.2.41 ([2, Theorem 9.1]) (with min{ρo, 2η/Lψ} = 2η/Lψ), Lemma 2.2.11, Lemma 2.2.46, and Lemma 2.2.43.

Remark 3.3.3. Since C(t) is (2η/Lψ)-prox-regular, each point in C(t) + (2η/Lψ)B has a unique projection on C(t). Define, for a.e. t ∈ [0, T] and (x, y, u) ∈ [C(t) + (η/Lψ)B] × Rl × U(t),

ĥ(t, x, y, u) := h(t, π1(t, x), y, u), where π1(t, ·) := π_{C(t)}(·).

Notice that ĥ is well-defined, and π1(t, ·) is 2-Lipschitz on C(t) + (η/Lψ)B (see Proposition 2.2.39(ii)). This means that the function ĥ (which we relabel h) satisfies (A4.1)G, where C(t) is now replaced by C(t) + (η/Lψ)B, and Lh(t) is now replaced by 2Lh(t). On the other hand, we note that, since η/Lψ < ρo/2, the functions ψ1, · · · , ψr satisfy (A3.1)G on Gr(C(·)) + {0} × (η/Lψ)B̄.

The following lemma, which requires Gr C(·) bounded, is a global version of Lemma 3.1.5.

Lemma 3.3.4. Assume that ψi(·, ·) is continuous and, for all t ∈ [0, T], the set C(t) is nonempty, closed, and given by (3.3). Assume that (A3.1)G, (A3.2)G, and (A4.1)G hold, and that Gr C(·) is compact. Let u ∈ U, (x0, y0) ∈ C(0) × Rl be fixed, and (x(·), y(·)) ∈ W^{1,1}([0, T]; Rn × Rl) with (x(0), y(0)) = (x0, y0) and x(t) ∈ C(t), ∀t ∈ [0, T].
Then, (x, y) solves (D) corresponding to ((x0, y0), u) if and only if there exist measurable functions (λ1(·), · · · , λr(·)) such that, for all i = 1, · · · , r, λi(t) = 0 for t ∈ I⁻ᵢ(x), and ((x, y), u) together with (λ1, · · · , λr) satisfies

ẋ(t) = f (t, x(t), y(t), u(t)) − Σ_{i∈I⁰_{(t,x(t))}} λi(t)∇xψi(t, x(t)), a.e. t ∈ [0, T ],
ẏ(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T ],
(x(0), y(0)) = (x0, y0), (3.97)

and we have the following bounds

∥λi∥∞ ≤ ∥Σᵢ₌₁ʳ λi∥∞ ≤ µ/(4η²), ∀i = 1, · · · , r; ∥ẋ∥∞ ≤ Mh + (µ/(4η²))Lψ; ∥ẏ∥∞ ≤ Mh. (3.98)

Furthermore, (x, y) is the unique solution of (D) corresponding to ((x0, y0), u).

Proof. We follow the same proof of Lemma 3.1.5, using the normal cone formula in (3.96) instead of (3.12), Lemma 3.3.1 instead of Lemma 3.1.1, and the prox-regularity constant 2η/Lψ provided by Lemma 3.3.2 instead of ρ.

We now introduce the following notations that will be used in our proofs.

• Recall from (3.17) that µ := Lψ(1 + Mh). Define a sequence (γk)k such that, for all k ∈ N, γk > (2µ/η²)e and γk → ∞ as k → ∞, and the real sequences (αk)k and (σk)k by

αk := (1/γk) ln(η²γk/(2µ)); σk := (rLψ/(2η²)) ( ln(r)/γk + αk ). (3.99)

Our choice of γk yields that

γk e^{−γkαk} = 2µ/η², (αk, σk) > 0 ∀k ∈ N, αk ↘ 0, σk ↘ 0. (3.100)

• For each t ∈ [0, T ] and k ∈ N, we define

C^{γk}(t) := { x ∈ Rn : Σᵢ₌₁ʳ e^{γkψi(t,x)} ≤ 1 } ⊂ int C(t) for r > 1, & C^{γk}(t) := C(t) for r = 1, (3.101)

C^{γk}(t, k) := { x ∈ Rn : Σᵢ₌₁ʳ e^{γkψi(t,x)} ≤ e^{−αkγk} = 2µ/(η²γk) } ⊂ int C^{γk}(t). (3.102)

Under the assumptions of Lemma 3.3.2, Proposition 3.2.10 and Remark 3.2.11 hold true in a global setting, that is, the ball around x̄ is now omitted in those statements. More precisely, the summations therein are only up to r instead of r + 1 terms, with C^{γk}(t, k), C^{γk}(t), and C(t) replacing C̄^{γk}(t, k), C̄^{γk}(t), and C(t) ∩ B̄_ε̄(x̄(t)), respectively. For the convenience of our readers, we present those results here.

Proposition 3.3.5.
Assume that ψi(·, ·) is continuous and, for all t ∈ [0, T ], the set C(t) is nonempty, closed, and given by (3.3). Assume that (A3.1)G and (A3.2)G hold, and that Gr C(·) is compact. The following holds true.

(i) There exist k1 ∈ N and r1 ∈ (0, ρo/2] such that, ∀k ≥ k1, ∀(t, x) ∈ {(t, x) ∈ [0, T ] × Rn : Σᵢ₌₁ʳ e^{γkψi(t,x)} = 1}, and ∀(τ, z) ∈ B_{2r1}(t, x), we have

∥ Σᵢ₌₁ʳ e^{γkψi(τ,z)}∇xψi(τ, z) ∥ > 2η Σᵢ₌₁ʳ e^{γkψi(τ,z)}. (3.103)

(ii) There exist k2 ≥ k1 and ϵo > 0 such that for all k ≥ k2 we have

[ x ∈ C^{γk}(t) & ∥Σᵢ₌₁ʳ e^{γkψi(t,x)}∇xψi(t, x)∥ ≤ η Σᵢ₌₁ʳ e^{γkψi(t,x)} ] =⇒ [ Σᵢ₌₁ʳ e^{γkψi(t,x)} < e^{−ϵoγk} ]. (3.104)

(iii) For all t ∈ [0, T ] and all k, C^{γk}(t) ⊂ int C(t) for r > 1 and C^{γk}(t, k) ⊂ int C^{γk}(t), and these sets are uniformly compact. Moreover, there exists k3 ∈ N such that for k ≥ k3, these sets are the closure of their interiors, their boundaries and interiors are non-empty, and the formulae for their respective boundaries and interiors are obtained from their own definitions in (3.101) and (3.102) by replacing the inequalities therein by equalities and strict inequalities, respectively. Furthermore, C^{γk}(t) and C^{γk}(t, k) are amenable, epi-Lipschitz, and are respectively (η/Lψ)- and (η/(2Lψ))-prox-regular.

(iv) For every t ∈ [0, T ], (C^{γk}(t))k and (C^{γk}(t, k))k are nondecreasing sequences whose Painlevé–Kuratowski limit is C(t) and satisfy

int C(t) = ⋃_{k∈N} int C^{γk}(t) = ⋃_{k∈N} C^{γk}(t) = ⋃_{k∈N} int C^{γk}(t, k) = ⋃_{k∈N} C^{γk}(t, k). (3.105)

(v) For c ∈ bdry C(0), there exist kc ≥ k3, rc > 0, and a vector dc ≠ 0 such that

[ C(0) ∩ B̄_{rc}(c) ] + σk dc/∥dc∥ ⊂ int C^{γk}(0, k), ∀k ≥ kc.

In particular, for k ≥ kc we have

c + σk dc/∥dc∥ ∈ int C^{γk}(0, k). (3.106)

Remark 3.3.6. We deduce, from Proposition 3.3.5, that for any c ∈ C(0), there exists a sequence (c^{γk})k such that, for k large enough, c^{γk} ∈ int C^{γk}(0, k) ⊂ int C^{γk}(0), and c^{γk} → c. Indeed:

(i) For c ∈ bdry C(0), we choose c^{γk} := c + σk dc/∥dc∥ for all k. For k ≥ kc, we have from (3.106) that c^{γk} ∈ int C^{γk}(0, k). Moreover, since σk → 0, we have c^{γk} → c.

(ii) For c ∈ int C(0), Proposition 3.3.5(iv) yields the existence of k̂c ∈ N such that c ∈ int C^{γk}(0, k) for all k ≥ k̂c. Hence, there exists r̂c > 0 satisfying c ∈ B̄_{r̂c}(c) ⊂ int C^{γk}(0, k), ∀k ≥ k̂c. In this case, we take the sequence c^{γk} ≡ c ∈ int C^{γk}(0, k), which converges to c.

3.3.2 Existence and uniqueness of solution corresponding to (D)

We now prove Theorem 3.3.7, which says that, under global assumptions, the Cauchy problem corresponding to (D) has a unique solution that is Lipschitz. Similar to the proof of Corollary 3.2.16, the proof of Theorem 3.3.7 follows closely the arguments used to prove Theorem 3.2.14 after removal of the truncation on C(t) and Rl. However, doing so requires important modifications. For instance, removing the truncation on C(t) in the set D defined in (3.84) makes it unsuitable for the global setting, and hence it will have to be redefined (see (3.108)). This discrepancy is due to having C(t) ∩ B̄_ε̄(x̄(t)) always generated by at least two functions, and hence C̄^{γk}(t) ⊂ int C(t) is always valid. In the global setting, however, for the case r = 1, (3.101) yields that C^{γk}(t) = C(t), and hence D must be modified to include Gr C(·).

Theorem 3.3.7 (Existence & uniqueness of Lipschitz solutions for (D)). Assume that ψi(·, ·) is continuous and, for t ∈ [0, T ], C(t) is non-empty, closed, and given by (3.3). Assume that (A3.1)G, (A3.2)G and (A4.1)G are satisfied, and that Gr C(·) is bounded.
Given (x0, y0) ∈ C(0) × Rl and u ∈ U, the Cauchy problem corresponding to (D) and ((x(0), y(0)); u) = ((x0, y0); u) has a unique solution (x, y), which is Lipschitz and is the uniform limit of a subsequence (not relabeled) of (x^{γk}, y^{γk})k, where (x^{γk}, y^{γk}) is the solution of a standard control system corresponding to u with x^{γk}(t) ∈ int C(t), for all t ∈ [0, T ].

Proof. We denote by MC the bound of Gr C(·). Consider the Cauchy problem (D) corresponding to ((x(0), y(0)); u) = ((x0, y0); u) ∈ (C(0) × Rl) × U. The existence of a solution that is Lipschitz and unique will be shown by approximating (D) with (D^{γk}), defined below as the global version of (D̄^{γk}). Let c^{γk} ∈ C^{γk}(0, k) be the sequence from Remark 3.3.6 corresponding (and converging) to c = x0. We now proceed by imitating the same steps of the proof of Theorem 3.2.14, in which we employ (c^{γk}, d^{γk}) := (c^{γk}, y0), and we make the following notable modifications.

Using Remark 3.3.3, we can extend h = (f, g)(t, ·, y, u) from C(t) to C(t) + (η/Lψ)B such that h satisfies (A4.1)G, where C(t) is now replaced by C(t) + (η/Lψ)B. For fixed k ∈ N large enough, we consider the system (D^{γk}) corresponding to ((c^{γk}, y0), u) to be

(D^{γk})  ẋ(t) = f (t, x(t), y(t), u(t)) − Σᵢ₌₁ʳ γk e^{γkψi(t,x(t))}∇xψi(t, x(t)), a.e. t ∈ [0, T ],
          ẏ(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T ]. (3.107)

This system is well defined on the following modified version of the set D, given in (3.84),

DG := {(t, x, y) ∈ [0, T ] × Rn × Rl : x ∈ C(t) + (η/Lψ)B}. (3.108)

As (0, c^{γk}, y0) ∈ DG, we follow steps similar to the ones used to reach (3.85), and we deduce that a solution (x^{γk}, y^{γk}) of (D^{γk}) with (x(0), y(0)) = (c^{γk}, y0) exists on the interval [0, T̂G) ⊂ [0, T ], and (t, x^{γk}(t), y^{γk}(t)) ∈ DG, ∀t ∈ [0, T̂G), where T̂G is given by

T̂G := sup{T1 : (x, y) solves (D^{γk}) on [0, T1] with (x(0), y(0)) = (c^{γk}, y0) and (t, x(t), y(t)) ∈ DG ∀t ∈ [0, T1]}. (3.109)

Similarly to Step I.2 in the proof of Theorem 3.2.14, we conclude that x^{γk}(t) ∈ C^{γk}(t) for all t ∈ [0, T̂G). On the other hand, since φ is absent in (D^{γk}) and g is bounded by Mh (from (A4.1)G), we immediately obtain that, for t ∈ [0, T̂G), y^{γk}(t) ∈ M0B̄, where M0 := ∥y0∥ + MhT. The boundedness of Gr C(·) by MC guarantees that the solution remains in the bounded set MC B̄ × M0B̄, and hence T̂G = T. By mimicking Step I.3 of the proof of Theorem 3.2.14, we obtain the invariance of C^{γk}(t, k) in x for (D^{γk}), and hence our solution (x^{γk}, y^{γk}) for the Cauchy problem (D^{γk}) with ((c^{γk}, y0), u) also satisfies (x^{γk}(t), y^{γk}(t)) ∈ C^{γk}(t, k) × M0B̄ for all t ∈ [0, T ]. Thus, the definitions of C^{γk}(t, k) and ξ^i_{γk} (i = 1, · · · , r), given in (3.102) and (3.76), respectively, with ξ_{γk}(·) := Σᵢ₌₁ʳ ξ^i_{γk}(·), yield that ∥ξ_{γk}∥∞ ≤ 2µ/η². Employing this bound of ξ_{γk} in (D^{γk}), we obtain that

max{∥ẋ^{γk}∥∞, ∥ẏ^{γk}∥∞} ≤ Mh + (2µ/η²)Lψ.

It follows that (.1) holds for R := r and (xk, yk, ξ^i_k, ζk) := (x^{γk}, y^{γk}, ξ^i_{γk}, 0). Whence, Lemma .0.2(i) together with Proposition 3.3.5(iv) implies that a subsequence of (x^{γk}, y^{γk}) and ξ^i_{γk} (∀i = 1, · · · , r) converge respectively to some (x, y) ∈ W 1,∞([0, T ]; Rn × Rl) and (ξ1, · · · , ξr) ∈ L∞([0, T ]; Rʳ₊), satisfying

(x^{γk}, y^{γk}) → (x, y) uniformly, (ẋ^{γk}, ẏ^{γk}) → (ẋ, ẏ) weakly* in L∞, ξ^i_{γk} → ξi weakly* in L∞ (∀i = 1, · · · , r),

(x(t), y(t)) ∈ C(t) × M0B̄ ∀t ∈ [0, T ]; max{∥ẋ∥∞, ∥ẏ∥∞} ≤ Mh + (2µ/η²)Lψ; ∥Σᵢ₌₁ʳ ξi∥∞ ≤ 2µ/η²,

ξi(t) = 0 for all t ∈ I⁻ᵢ(x), i ∈ {1, · · · , r}; ξ(t) := Σᵢ₌₁ʳ ξi(t) = 0 for all t ∈ I⁻(x),

where the validation of the last equations is similar to that for (3.80). We now apply the dominated convergence theorem to (D^{γk}) at ((x^{γk}, y^{γk}), u^{γk} := u) (as done in the proof of Case 1 in Lemma .0.2(ii)), and we deduce that ((x, y), u) and λi = ξi satisfy (3.97). By means of Lemma 3.3.4, we conclude that (x, y) is the unique solution of (D) corresponding to ((x0, y0), u) and is Lipschitz.
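To visualize the exponential-penalty system (D^{γk}) in (3.107), the following self-contained Python sketch integrates ẋ = f − γ e^{γψ(x)}∇xψ(x) by an explicit Euler scheme. All data here are illustrative and not from the dissertation: a single constraint ψ(x) = ∥x∥² − 1 (so C is the closed unit disk, time-independent, r = 1), a constant drift standing in for f, and no y-component or control. The trajectory remains in int C and settles closer to bdry C as γ grows, mirroring the property x^{γk}(t) ∈ int C(t).

```python
import numpy as np

def simulate_penalized(gamma, T=1.0, n_steps=20000):
    """Euler scheme for the penalized dynamic (cf. (3.107)) with one
    constraint psi(x) = |x|^2 - 1 (C = closed unit disk in R^2) and a
    constant drift f pushing the state toward the boundary (toy data)."""
    dt = T / n_steps
    x = np.array([0.0, 0.0])        # x0 in int C(0)
    f = np.array([3.0, 0.0])        # illustrative constant drift
    for _ in range(n_steps):
        psi = x @ x - 1.0
        grad = 2.0 * x              # grad_x psi
        x = x + dt * (f - gamma * np.exp(gamma * psi) * grad)
    return x

for gamma in [10.0, 50.0, 200.0]:
    print(gamma, np.linalg.norm(simulate_penalized(gamma)))
```

The printed norms stay strictly below 1 and increase with γ: the penalty term is negligible deep inside C and blows up near the boundary, which is exactly the mechanism that keeps x^{γk} in int C(t) in the proof above.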
The following table summarizes the results of Section 3.3.

Table 3.4 Summary of results from Section 3.3.

Lemma 3.3.1 — We use the compactness of Gr C(·) to produce an equivalence between (A3.2)G and a global version of condition (3.10), namely (3.95).
Lemma 3.3.2 — We use Lemma 3.3.1 to obtain the uniform prox-regularity of C(t), as well as a formula for the normal cone to C(t).
Remark 3.3.3 — We extend h to a function that satisfies (A4.1)G on C(t) + (η/Lψ)B.
Lemma 3.3.4 — We use Lemma 3.3.1 and Lemma 3.3.2 to establish the Lipschitz continuity and the uniqueness of the solutions for the Cauchy problem of (D) via its equivalent form.
Proposition 3.3.5 — We derive properties for our sets C^{γk}(t) and C^{γk}(t, k).
Remark 3.3.6 — We approximate any c ∈ C(0) by a sequence c^{γk} ∈ int C^{γk}(0, k) ⊂ int C^{γk}(0) such that c^{γk} → c.
Theorem 3.3.7 — We prove that the Cauchy problem corresponding to (D) has a unique solution that is Lipschitz.

CHAPTER 4

OPTIMAL CONTROL PROBLEM (P ) OVER A COUPLED SWEEPING PROCESS DYNAMIC (D)

The aim of this chapter is to derive global existence of optimal solutions and necessary conditions in the form of a maximum principle for a strong local minimizer of the fixed-time Mayer problem (P ) given by the following:

(P )  minimize J(x(0), y(0), x(T ), y(T ))
      over ((x, y), u) ∈ W 1,1([0, T ], Rn × Rl) × U such that
      (D)   ẋ(t) ∈ f (t, x(t), y(t), u(t)) − NC(t)(x(t)), a.e. t ∈ [0, T ],
            ẏ(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T ],
      (B.C.) (x(0), y(0), x(T ), y(T )) ∈ S,

where T > 0 is fixed, J : Rn × Rl × Rn × Rl → R ∪ {∞}, f : [0, T ] × Rn × Rl × Rm → Rn, g : [0, T ] × Rn × Rl × Rm → Rl, C(t) is the intersection of the zero-sublevel sets of a finite sequence of functions ψi(t, ·), where ψi : [0, T ] × Rn → R, i = 1, . . .
, r, NC(t) is the Clarke normal cone to C(t), S ⊂ C(0) × Rl × Rn × Rl is closed, U (·) : [0, T ] ⇝ Rm is a Lebesgue-measurable set-valued map with nonempty closed values, and the set of control functions U is defined by

U := {u : [0, T ] → Rm : u is measurable and u(t) ∈ U (t), a.e. t ∈ [0, T ]}. (4.1)

A pair ((x, y), u) is admissible for (P ) if ((x, y), u) ∈ W 1,1([0, T ]; Rn × Rl) × U satisfies the dynamic (D) and the boundary conditions (B.C.). An admissible pair ((x̄, ȳ), ū) is said to be a δ̄-strong local minimizer for (P ), for some δ̄ > 0, if for all ((x, y), u) admissible for (P ) and satisfying ∥(x, y) − (x̄, ȳ)∥∞ ≤ δ̄, we have

J(x̄(0), ȳ(0), x̄(T ), ȳ(T )) ≤ J(x(0), y(0), x(T ), y(T )).

4.1 Existence of optimal solution for (P ) under global assumptions

In this section, we demonstrate the global existence of an optimal solution for (P ) when the global assumptions are satisfied; see Theorem 4.1.1. Recall assumptions (A1), (A3.1)G, (A3.2)G, and (A4)G from Chapter 3.

Theorem 4.1.1 (Global existence of optimal solutions for (P )). Assume that (A1) holds, ψi(·, ·) is continuous, and, for t ∈ [0, T ], C(t) is non-empty, closed, and given by (3.3). Assume that (A3.1)G, (A3.2)G and (A4)G are satisfied, and that Gr C(·) and π2(S) are bounded, where π2 is the projection of S onto the second component. Let J : Rn × Rl × Rn × Rl → R ∪ {∞} be merely lower semicontinuous. Then, (P ) has a global optimal solution if and only if it has at least one admissible pair ((x, y), u) with (x(0), y(0), x(T ), y(T )) ∈ dom J.

Proof. Let MC and Mπ2(S) be the bounds of Gr C(·) and π2(S), respectively. Observe that ((x, y), u) being admissible for (P ) is equivalent to ((x, y), u) solving (D) and

(x(0), y(0), x(T ), y(T )) ∈ SG := S ∩ (C(0) × Mπ2(S)B̄ × C(T ) × M B̄), where M := Mπ2(S) + T Mh.

Since (P ) has an admissible solution with (x(0), y(0), x(T ), y(T )) ∈ dom J, J is lower semicontinuous, and SG is compact, the infimum of J over ((x, y), u) satisfying (D) and (B.C.) exists (see Lemma 2.4.4). Let ((xn, yn), un) be a minimizing sequence for (P ). Then, for each n ∈ N, ((xn, yn), un) satisfies (D), starting at (x0, y0) := (xn(0), yn(0)), and (B.C.). Hence, by Lemma 3.3.4, there exists a sequence (λⁿᵢ)ᵢ₌₁ʳ with λⁿᵢ ∈ L∞([0, T ]; R+) and λⁿᵢ = 0 on I⁻ᵢ(xn), such that ((xn, yn), un) together with (λⁿᵢ)ᵢ₌₁ʳ satisfies (3.97) and the bounds in (3.98). Apply Lemma .0.2(i) to R := r, (xn, yn), ξⁿᵢ := λⁿᵢ, ζn := 0, ζ := 0, M1 := max{MC, M}, M2 := Mh + (µ/(4η²))Lψ, and M3 := µ/(4η²). We obtain (x̂, ŷ) ∈ W 1,∞, (λ1, · · · , λr) ∈ L∞([0, T ]; Rʳ₊), and subsequences (not relabeled) of (xn, yn)n and (λⁿᵢ)n such that

(xn, yn) → (x̂, ŷ) uniformly, (ẋn, ẏn) → (ẋ̂, ẏ̂) weakly* in L∞, λⁿᵢ → λi weakly* in L∞, for all i = 1, · · · , r,

and (ẋ̂, ẏ̂) and (λ1, · · · , λr) satisfy the bounds in (.2). Furthermore, we have (x̂(0), ŷ(0), x̂(T ), ŷ(T )) ∈ S.

On the other hand, as ((xn, yn), un) and (λⁿᵢ)ᵢ₌₁ʳ satisfy (3.97), they solve (.3) for ζ := 0, qi := ψi, and Q(t) := C(t). Noting that (A1) and (A4.2)G hold, then by applying the global version of Lemma .0.2(ii) (see Remark .0.3), we obtain û ∈ U such that ((x̂, ŷ), û) and (λi)ᵢ₌₁ʳ also satisfy (.3), which is (3.97). Thus, to prove that ((x̂, ŷ), û) is admissible for (D), it suffices, by the equivalence in Lemma 3.3.4, to show that, for all i = 1, · · · , r, λi is supported on I⁰ᵢ(x̂), knowing that, for all i = 1, · · · , r, λⁿᵢ(t) = 0 for t ∈ I⁻ᵢ(xn). Fix t ∈ I⁻ᵢ(x̂); then ψi(t, x̂(t)) < 0. Since xn converges uniformly to x̂ and ψi(·, ·) is continuous, we can find δ̂ > 0 and n̂ ∈ N such that, ∀s ∈ (t − δ̂, t + δ̂) ∩ [0, T ] and ∀n ≥ n̂, we have ψi(s, xn(s)) < 0 and hence λⁿᵢ(s) = 0. Thus, as n → ∞, λⁿᵢ(·) → 0 on (t − δ̂, t + δ̂) ∩ [0, T ], and so λi(t) = 0. Therefore, Lemma 3.3.4 yields that ((x̂, ŷ), û) is admissible for (D).
Using the lower semicontinuity of J, we deduce that

J(x̂(0), ŷ(0), x̂(T ), ŷ(T )) ≤ lim_{n→∞} J(xn(0), yn(0), xn(T ), yn(T )) = inf_{((x,y),u) admissible for (P )} J(x(0), y(0), x(T ), y(T )),

showing that ((x̂, ŷ), û) is optimal for (P ) over all admissible pairs ((x, y), u).

Table 4.1 Summary of results from Section 4.1.

Theorem 4.1.1 — We demonstrate the global existence of an optimal solution for (P ).

4.2 Pontryagin maximum principle for (P ) under local assumptions

In this section, we present the maximum principle for the problem (P ). We employ a modification of the exponential penalization technique used in [30, 70, 55] for special cases of (P ). We first approximate the given optimal solution of (P ) with optimal solutions for some approximating problems having joint-endpoint constraints, (P^{α,β}_{γk}), which are standard optimal control problems involving exponential penalty terms (Proposition 4.2.8). Then, we find necessary conditions for the approximating problems (Proposition 4.2.9), and we finally conclude the necessary conditions for (P ) by taking the limit of the necessary conditions for (P^{α,β}_{γk}).

For a given pair (x̄, ȳ) ∈ C([0, T ]; Rn × Rl) such that x̄(t) ∈ C(t) ∀t ∈ [0, T ], and for a constant δ̄ > 0, we adopt all the local assumptions introduced at the beginning of Chapter 3 and introduce two additional ones. We say that the following assumptions hold true at ((x̄, ȳ); δ̄) if the corresponding conditions hold true.

(A3.3) There exists a positive Lipschitz function β̄(·) = (β̄1(·), · · · , β̄r(·)) : [0, T ] → Rr such that

Σ_{j∈I⁰_{(t,x̄(t))}, j≠i} β̄j(t)|⟨∇xψj(t, x̄(t)), ∇xψi(t, x̄(t))⟩| < β̄i(t)∥∇xψi(t, x̄(t))∥², ∀t ∈ I⁰(x̄), ∀i ∈ I⁰_{(t,x̄(t))}.

For a > 0, b > 0 given, we recall the following set, given in (3.9),

N̄_{(a,b)}(t) := [C(t) ∩ B̄a(x̄(t))] × B̄b(ȳ(t)), for t ∈ [0, T ],

and we introduce a new set

B̄a := B̄a(x̄(0)) × B̄a(ȳ(0)) × B̄a(x̄(T )) × B̄a(ȳ(T )).
(4.2)

(A5) Local assumption on J at ((x̄, ȳ); δ̄): There exist ρ1 > 0 and LJ > 0 such that J is LJ-Lipschitz on S(δ̄), where

S(δ̄) := ([S ∩ B̄_δ̄] + ρ1B̄) ∩ (N̄_{(δ̄,δ̄)}(0) × N̄_{(δ̄,δ̄)}(T )).

4.2.1 Preliminary results

We start the first subsection by presenting consequences of (A3.3) that shall be crucial for our approximating problem and the proof of the maximum principle. For this subsection, let C(·) satisfy (A2) for ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) holds at (x̄; δ̄). The next remark discusses the significance of (A3.3) in the proof of the maximum principle. In particular, it highlights why it is sufficient to prove the maximum principle (Theorem 4.2.11) under a stronger assumption.

Remark 4.2.1 (Assumption (A3.3)). Note that when r = 1, the sets C(t) are smooth and condition (A3.3) trivially holds. Let r > 1; then the sets C(t) are nonsmooth. In this case, a condition closely related to (A3.3), see [46, Theorem 1.3.1], was first mentioned in [40] to be useful when sweeping (or reflected) processes over nonsmooth sets are studied. For t ∈ I⁰(x̄), denote by Gψ(t) the Gramian matrix of the vectors {∇xψi(t, x̄(t)) : i ∈ I⁰_{(t,x̄(t))}}, i.e., (Gψ(t))ij = ⟨∇xψi(t, x̄(t)), ∇xψj(t, x̄(t))⟩.

• If (A3.3) holds with β̄i(t) ≡ 1 for all i ∈ I⁰_{(t,x̄(t))}, then the matrix Gψ(t) is strictly diagonally dominant (see Definition 2.1.1).

• For the general case, (A3.3) says that, for some positive Lipschitz vector function β̄(·), the matrix Gψ(t)D_{β̄}(t) is strictly diagonally dominant for all t ∈ I⁰(x̄), where D_{β̄}(t) is the diagonal matrix whose diagonal entries are (β̄i(t))_{i∈I⁰_{(t,x̄(t))}}, and (Gψ(t)D_{β̄}(t))ij = β̄j(t)⟨∇xψi(t, x̄(t)), ∇xψj(t, x̄(t))⟩.

Thus,

(i) (A3.3) yields that the vectors {∇xψi(t, x̄(t)) : i ∈ I⁰_{(t,x̄(t))}} are linearly independent, and hence, when (A3.3) is assumed to hold, (A3.2) is automatically satisfied;

(ii) setting ψ̃i(t, x) := β̄i(t)ψi(t, x), it easily follows that C(t) is also the intersection of the zero-sublevel sets of (ψ̃i(t, ·))ᵢ₌₁ʳ, that, for some L_ψ̃ > 0, ψ̃i satisfies (A3.1) for all i = 1, · · · , r, and that condition (A3.3) is equivalent to saying that, for t ∈ I⁰(x̄), the Gramian matrix G_ψ̃(t) of the vectors {∇xψ̃i(t, x̄(t)) : i ∈ I⁰_{(t,x̄(t))}} is strictly diagonally dominant; a fact that shall be used in the proof of the maximum principle;

(iii) from parts (i)-(ii) of this remark, ψ̃1, · · · , ψ̃r satisfy (A3.2), and hence (3.34) of Lemma 3.2.4 is valid at ψ̃1, · · · , ψ̃r, ψr+1 when replacing η̄ by η̃ := η̄ b_β̄, where b_β̄ := min{1, min{β̄i(t) : t ∈ [0, T ], i = 1, · · · , r}}.

Equivalent forms for the strict diagonal dominance of Gψ(t) are given in the following lemma.

Lemma 4.2.2. The following assertions are equivalent:

(i) For all t ∈ I⁰(x̄), the Gramian matrix Gψ(t) of the vectors {∇xψi(t, x̄(t)) : i ∈ I⁰_{(t,x̄(t))}} is strictly diagonally dominant.

(ii) There exists b ∈ (0, 1) such that, for all t ∈ I⁰(x̄) and for all i ∈ I⁰_{(t,x̄(t))}, we have

Σ_{j∈I⁰_{(t,x̄(t))}, j≠i} |⟨∇xψj(t, x̄(t)), ∇xψi(t, x̄(t))⟩| ≤ b∥∇xψi(t, x̄(t))∥². (4.3)

(iii) There exist c̄ > 0, b̄ ∈ (0, 1), and ā > 0 such that, ∀(t, x) ∈ Gr C(·) ∩ B̄_c̄(x̄(·)) with I^ā_{(t,x)} ≠ ∅ and ∀i ∈ I^ā_{(t,x)}, we have

Σ_{j∈I^ā_{(t,x)}, j≠i} |⟨∇xψj(t, x), ∇xψi(t, x)⟩| ≤ b̄∥∇xψi(t, x)∥², (4.4)

where

I^ā_{(t,x)} := {i ∈ {1, . . . , r} : −ā ≤ ψi(t, x) ≤ 0}. (4.5)

Proof. (i) =⇒ (ii): For t ∈ I⁰(x̄) and i ∈ I⁰_{(t,x̄(t))}, we define

b(t, i) := (1/∥∇xψi(t, x̄(t))∥²) Σ_{j∈I⁰_{(t,x̄(t))}, j≠i} |⟨∇xψj(t, x̄(t)), ∇xψi(t, x̄(t))⟩| < 1, (4.6)

and set b := sup{b(t, i) : t ∈ I⁰(x̄) and i ∈ I⁰_{(t,x̄(t))}}. Then, (4.3) holds true.
To show that b < 1, use an argument by contradiction, together with Lemma .0.1 and inequality (4.6).

(ii) =⇒ (iii): We fix b̄ ∈ (b, 1) and use an argument by contradiction in conjunction with Lemma .0.1.

(iii) =⇒ (i): Follows directly by taking ā := c̄ := 0, x = x̄(t), and using b̄ < 1.

The following result, which will be used in the proof of the maximum principle, is an immediate consequence of Lemma 3.2.4, obtained via a simple argument by contradiction and the continuity in (A3.1) of (ψi)_{1≤i≤r} and (∇xψi)_{1≤i≤r} on the compact set Gr(C(·) ∩ B̄_ε̄(x̄(·))).

Lemma 4.2.3. Let C(·) satisfy (A2) for some ρ > 0. Consider x̄ ∈ C([0, T ]; Rn) with x̄(t) ∈ C(t) for all t ∈ [0, T ], and δ̄ > 0 such that (A3.1) and (A3.2) hold at (x̄; δ̄). Then, for ε̄ ∈ (0, ρ) ∩ (0, εo], and its corresponding ψr+1 and η̄ from Lemma 3.2.4, there exists ao > 0 such that for all i ∈ {1, . . . , r + 1} we have

[ (t, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))) and ∥∇xψi(t, x)∥ ≤ η̄ ] =⇒ ψi(t, x) < −ao. (4.7)

Table 4.2 Summary of results from Section 4.2.1.

Remark 4.2.1 — We discuss the significance of (A3.3) in the proof of the maximum principle. In particular, it highlights why it is sufficient to prove the maximum principle (Theorem 4.2.11) under a stronger assumption.
Lemma 4.2.2 — We provide equivalent forms for the strict diagonal dominance of the Gramian matrix Gψ(t) of the vectors {∇xψi(t, x̄(t)) : i ∈ I⁰_{(t,x̄(t))}}.
Lemma 4.2.3 — We prove that there exists ao such that [(t, x) ∈ Gr(C(·) ∩ B̄_ε̄(x̄(·))) and ∥∇xψi(t, x)∥ ≤ η̄] =⇒ ψi(t, x) < −ao.

4.2.2 Study of approximating problems for (P )

Assume that (A1)-(A2) are satisfied, and ((x̄, ȳ), ū) is an admissible solution for (P ) with δ̄ > 0 such that (A3.1), (A3.2), (A4) and (A5) are satisfied at ((x̄, ȳ); δ̄). Throughout the rest of this chapter, let ε̄ ∈ (0, δ̄), ψr+1, η̄ and ρ̄ = 2η̄/Lψ be fixed as in Subsection 3.2.2, with Lψ ≥ 4η̄/ρo. Let L(x̄,ȳ) denote the Lipschitz constant of (x̄, ȳ), which, by Lemma 3.1.5, is Lipschitz and uniquely solves (D) corresponding to ((x̄(0), ȳ(0)), ū). Without loss of generality, we assume L(x̄,ȳ) ≥ 1. Therefore, all the results of Subsection 3.2.2 are valid for the systems (D̄) and (D̄_{γk}), given respectively by (3.29) and (3.61).

For given δ ∈ (0, ε̄], define the problem (P̄δ) to be the problem (P ), in which (D) is replaced by (D̄), and S is replaced by Sδ, where

Sδ := S ∩ B̄δ, and B̄δ is defined in (4.2). (4.8)

When Sδ is replaced by the following set Sδ,δ,

Sδ,δ := S ∩ [N̄_{(δ,δ)}(0) × N̄_{(δ,δ)}(T )] ⊂ S(δ̄) ⊂ domain of J, (4.9)

the resulting problem is named (P̄δ,δ). For clarity and better visualization, we present the problems below in a structured form.

(P )    minimize J(x(0), y(0), x(T ), y(T ))
        over ((x, y), u) ∈ W 1,1([0, T ], Rn × Rl) × U such that
        (D)   ẋ(t) ∈ f (t, x(t), y(t), u(t)) − NC(t)(x(t)), a.e. t ∈ [0, T ],
              ẏ(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T ],
        (B.C.) (x(0), y(0), x(T ), y(T )) ∈ S.

(P̄δ)   minimize J(x(0), y(0), x(T ), y(T ))
        over ((x, y), u) ∈ W 1,1([0, T ], Rn × Rl) × U such that
        (D̄)   ẋ(t) ∈ f (t, x(t), y(t), u(t)) − N_{C(t)∩B̄_ε̄(x̄(t))}(x(t)), a.e. t ∈ [0, T ],
              ẏ(t) ∈ g(t, x(t), y(t), u(t)) − N_{B̄_δ̄(ȳ(t))}(y(t)), a.e. t ∈ [0, T ],
        (x(0), y(0), x(T ), y(T )) ∈ Sδ = S ∩ B̄δ.

(P̄δ,δ) minimize J(x(0), y(0), x(T ), y(T ))
        over ((x, y), u) ∈ W 1,1([0, T ], Rn × Rl) × U such that
        (D̄)   ẋ(t) ∈ f (t, x(t), y(t), u(t)) − N_{C(t)∩B̄_ε̄(x̄(t))}(x(t)), a.e. t ∈ [0, T ],
              ẏ(t) ∈ g(t, x(t), y(t), u(t)) − N_{B̄_δ̄(ȳ(t))}(y(t)), a.e. t ∈ [0, T ],
        (x(0), y(0), x(T ), y(T )) ∈ Sδ,δ = S ∩ [N̄_{(δ,δ)}(0) × N̄_{(δ,δ)}(T )] ⊂ S(δ̄).

Notice that (P̄δ,δ) and (P̄δ) have the same sets of admissible and optimal solutions.
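The strict diagonal dominance in (A3.3) and Lemma 4.2.2(ii) is easy to test numerically: with β̄ ≡ 1, one only needs Σ_{j≠i}|⟨gj, gi⟩| ≤ b∥gi∥² with some b < 1 for the active gradients gi. The following minimal Python sketch (the gradient vectors are made-up data standing in for ∇xψi(t, x̄(t)); this is an illustration, not part of the dissertation's method) computes the smallest such b from the Gramian.

```python
import numpy as np

def diag_dominance_bound(grads):
    """Rows grads[i] stand in for grad_x psi_i(t, xbar(t)) over the active
    index set. Returns the smallest b with
        sum_{j != i} |<g_j, g_i>| <= b * ||g_i||^2  for every i
    (cf. Lemma 4.2.2(ii)); b < 1 means the Gramian G_psi(t) is strictly
    diagonally dominant."""
    G = grads @ grads.T                              # Gramian matrix
    off = np.abs(G).sum(axis=1) - np.abs(np.diag(G)) # row sums off the diagonal
    return float(np.max(off / np.diag(G)))

# Two nearly orthogonal constraint gradients (illustrative data):
g_ok = np.array([[1.0, 0.1], [0.0, 1.0]])
# Two nearly parallel gradients, where dominance fails:
g_bad = np.array([[1.0, 0.0], [0.9, 0.1]])
print(diag_dominance_bound(g_ok))    # < 1: dominance holds
print(diag_dominance_bound(g_bad))   # >= 1: dominance fails
```

As Remark 4.2.1(i) predicts, a returned bound b < 1 forces linear independence of the gradients, while nearly parallel active gradients (a "thin corner" of C(t)) push the bound past 1.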
The following is an existence result of an optimal solution for (P̄δ) (and, hence, of (P̄δ,δ)) without requiring (A5).

Theorem 4.2.4 (Global existence of optimal solution for (P̄δ)). Assume that all the aforementioned assumptions in the beginning of this subsection are satisfied except for (A5). Let J : Rn × Rl × Rn × Rl → R ∪ {∞} be merely lower semicontinuous on S(δ̄), with the domain of J containing (x̄(0), ȳ(0), x̄(T ), ȳ(T )). Then, for any δ ∈ (0, ε̄], (P̄δ) has a global optimal solution.

Proof. Fix δ ∈ (0, ε̄]. Being admissible for (P ), ((x̄, ȳ), ū) is also admissible for (P̄δ), due to Remark 3.2.3 and the fact that (x̄(0), ȳ(0), x̄(T ), ȳ(T )) ∈ Sδ. As any admissible ((x, y), u) for (P̄δ) also satisfies (x(0), y(0), x(T ), y(T )) ∈ Sδ,δ ⊂ S(δ̄), the lower semicontinuity of J on S(δ̄) and the compactness of Sδ,δ yield that the infimum of J over ((x, y), u) satisfying (D̄) and having its state endpoints in Sδ is finite. Let ((xn, yn), un) be a minimizing sequence for (P̄δ). The proof from this point on continues as done in the proof of Theorem 4.1.1, in which (D) and S are now (D̄) and Sδ, respectively, and we use Lemma 3.2.8, system (3.44), and the bounds in (3.45) instead of Lemma 3.3.4, system (3.97), and the bounds in (3.98), respectively, and we apply Lemma .0.2 itself, where R = r + 1, Q(t) := C(t) ∩ B̄_ε̄(x̄(t)), and ζn and ζ are present, instead of its global version that was used for R := r, Q(t) := C(t), and ζn = ζ = 0. We deduce the existence of ((x̃δ, ỹδ), ũδ) optimal for (P̄δ).

Remark 4.2.5. Note that Theorem 4.2.4 remains valid if we replace the objective function of (P̄δ), J(x(0), y(0), x(T ), y(T )), by J(x(0), y(0), x(T ), y(T )) + ∫₀ᵀ L(t, x(t), y(t)) dt, where L is a Carathéodory function (see Definition 2.3.2) satisfying, for some σ ∈ L1([0, T ], R+),

|L(t, x, y)| ≤ σ(t), ∀(x, y) ∈ N̄_{(ε̄,δ̄)}(t), and t ∈ [0, T ]. (4.10)

This is so because, for any u ∈ U, the solution (x, y) of (D̄) belongs to the uniformly bounded set-valued map N̄_{(ε̄,δ̄)}(·), and L is a Carathéodory function satisfying (4.10) that does not explicitly depend on the control. Indeed, in the proof of Theorem 4.2.4, the existence of a minimizing sequence ((xn, yn), un) for (P̄δ), in which this change is implemented, remains valid, and the limit as n → ∞ of the added term is ∫₀ᵀ L(t, x̃δ(t), ỹδ(t)) dt, by the dominated convergence theorem.

The following remark establishes a connection between a strong local minimizer for (P ) and a strong local minimizer for (P̄δ).

Remark 4.2.6. Using Theorem 4.2.4 and Remark 3.2.3, we have the following.

(i) If ((x̄, ȳ), ū) is a δ̄-strong local minimizer for (P ), then, for any δ ∈ (0, ε̄), ((x̄, ȳ), ū) is a δ-strong local minimizer for (P̄δ), and hence for (P̄δ,δ). This fact motivates formulating in Proposition 4.2.8 the approximating problem for (P ) near ((x̄, ȳ), ū) as being that for (P̄δo,δo) instead of (P ), where δo is chosen strictly less than ε̄. It also plays a key role in Step 4 of the proof of Theorem 4.2.11 by relaxing the problem (P̄), which is (P̄δo,δo) with extended J and added L.

(ii) Conversely, given δ ∈ (0, ε̄], if ((x̄, ȳ), ū) is a δ̂-strong local minimum for (P̄δ) for δ̂ ∈ (0, δ/2], then ((x̄, ȳ), ū) is a δ̂-strong local minimum for (P ).

For the rest of the chapter, ((x̄, ȳ), ū) is taken to be a δ̄-strong local minimum for (P ). We shall employ the following notations.

• If x̄(0) ∈ int C(0), then x̄(0) ∈ int (C(0) ∩ B̄_ε̄(x̄(0))), and hence, taking c := x̄(0) in Remark 3.2.11(ii), we deduce that there exist k̂_{x̄(0)} ∈ N and r̂_{x̄(0)} ∈ (0, ε̄) satisfying

x̄(0) ∈ B̄_{r̂_{x̄(0)}}(x̄(0)) ⊂ int C̄^{γk}(0, k), ∀k ≥ k̂_{x̄(0)}.
(4.11)

If x̄(0) ∈ bdry C(0), then x̄(0) ∈ bdry (C(0) ∩ B̄_ε̄(x̄(0))), and hence, taking c := x̄(0) in Proposition 3.2.10(v), we deduce that there exist a vector d_{x̄(0)} ≠ 0, k_{x̄(0)} ≥ k3, and r_{x̄(0)} ∈ (0, ε̄), such that

(C(0) ∩ B̄_{r_{x̄(0)}}(x̄(0))) + σ̄k d_{x̄(0)}/∥d_{x̄(0)}∥ ⊂ int C̄^{γk}(0, k), ∀k ≥ k_{x̄(0)}. (4.12)

• Since ȳ(0) ∈ int B̄_δ̄(ȳ(0)), taking d := ȳ(0) in Remark 3.2.13(ii), we deduce that there exist k_{ȳ(0)} ∈ N and r_{ȳ(0)} > 0 satisfying

ȳ(0) ∈ B̄_{r_{ȳ(0)}}(ȳ(0)) ⊂ int B̄_{ρ̄k}(ȳ(0)), ∀k ≥ k_{ȳ(0)}. (4.13)

• Motivated by Remark 4.2.6(i) and equations (4.11)-(4.13), let δo > 0 be the fixed constant

δo := min{ε̄/2, r̂_{x̄(0)}, r_{ȳ(0)}} if x̄(0) ∈ int C(0); δo := min{ε̄/2, r_{x̄(0)}, r_{ȳ(0)}} if x̄(0) ∈ bdry C(0). (4.14)

• For β ∈ (0, 1], we define, for a.e. t ∈ [0, T ] and (x, y, u) ∈ N̄_{(δ̄,δ̄)}(t) × U (t):

f^β(t, x, y, u) := (1 − β)f (t, x, y, ū(t)) + βf (t, x, y, u),
g^β(t, x, y, u) := (1 − β)g(t, x, y, ū(t)) + βg(t, x, y, u).

Note that h^β := (f^β, g^β) satisfies (A4) as h does, and hence all the results of Section 3.2 of Chapter 3 hold true for (D̄^β) and (D̄^β_{γk}), which are respectively obtained from (D̄) and (D̄_{γk}) by replacing h by h^β. Observe that h^β(t, x, y, ū(t)) = h(t, x, y, ū(t)).

• Let (x̄^{γk}, ȳ^{γk}) be the solution of (D̄^β_{γk}) corresponding to ((c̄^{γk}, ȳ(0)), ū), where, for k large enough, c̄^{γk} ∈ int C̄^{γk}(0, k) is the sequence corresponding (and converging) to c = x̄(0) via Remark 3.2.11, namely,

c̄^{γk} := x̄(0) if x̄(0) ∈ int C(0); c̄^{γk} := x̄(0) + σ̄k d_{x̄(0)}/∥d_{x̄(0)}∥ if x̄(0) ∈ bdry C(0),

where d_{x̄(0)} is the vector from Proposition 3.2.10(v) corresponding to x̄(0). Then, by Corollary 3.2.16, along a subsequence, (x̄^{γk}, ȳ^{γk}) converges uniformly to (x̄, ȳ) and satisfies all conclusions of Theorem 3.2.14. In particular, we have that (x̄^{γk}(t), ȳ^{γk}(t)) ∈ C̄^{γk}(t, k) × B̄_{ρ̄k}(ȳ(t)) ∀t ∈ [0, T ], and (x̄^{γk}, ȳ^{γk})k is uniformly Lipschitz.

• We define for all k ∈ N

S^{γk}(k) := [Sδo + (0, 0, ē^{γk}, ω̄^{γk})] ∩ [N̄_{(ε̄,δ̄)}(0) × N̄_{(ε̄,δ̄)}(T )] if x̄(0) ∈ int C(0);
S^{γk}(k) := [Sδo + (σ̄k d_{x̄(0)}/∥d_{x̄(0)}∥, 0, ē^{γk}, ω̄^{γk})] ∩ [N̄_{(ε̄,δ̄)}(0) × N̄_{(ε̄,δ̄)}(T )] if x̄(0) ∈ bdry C(0), (4.15)

where Sδo is defined in (4.8), and

(ē^{γk}, ω̄^{γk}) := (x̄^{γk}(T ) − x̄(T ), ȳ^{γk}(T ) − ȳ(T )) → (0, 0) as k → ∞.

Remark 4.2.7. Our sets S^{γk}(k) satisfy the following properties:

∀k ∈ N, S^{γk}(k) is closed, and S^{γk}(k) ⊂ S(δ̄) for k sufficiently large, (4.16)

{(c, d) : (c, d, e, ω) ∈ S^{γk}(k)} ⊂ int C̄^{γk}(0, k) × int B̄_{ρ̄k}(ȳ(0)) ⊂ int N̄_{(ε̄,δ̄)}(0) for k large, (4.17)

lim_{k→∞} S^{γk}(k) = Sδo,δo, (4.18)

(c̄^{γk}, ȳ(0), x̄^{γk}(T ), ȳ^{γk}(T )) ∈ S^{γk}(k), for k sufficiently large. (4.19)

Using the local property of the limiting normal cone (see Lemma 2.2.4), we know that, for any element (c, d, e, ω) ∈ S^{γk}(k) with (e, ω) ∈ int N̄_{(ε̄,δ̄)}(T ) = (int C(T ) ∩ B_ε̄(x̄(T ))) × B_δ̄(ȳ(T )), we have

N^L_{S^{γk}(k)}(c, d, e, ω) = N^L_S(c, d, e − ē^{γk}, ω − ω̄^{γk}), if x̄(0) ∈ int C(0) and (c, d, e − ē^{γk}, ω − ω̄^{γk}) ∈ int B̄δo;

N^L_{S^{γk}(k)}(c, d, e, ω) = N^L_S(c − σ̄k d_{x̄(0)}/∥d_{x̄(0)}∥, d, e − ē^{γk}, ω − ω̄^{γk}), if x̄(0) ∈ bdry C(0) and (c − σ̄k d_{x̄(0)}/∥d_{x̄(0)}∥, d, e − ē^{γk}, ω − ω̄^{γk}) ∈ int B̄δo. (4.20)

This next proposition provides a sequence of optimal control problems with specific joint endpoint constraints that approximates our initial problem (P ) near ((x̄, ȳ), ū), that is, the problem (P̄δo,δo).

Proposition 4.2.8 (Approximating problems for (P )).
For all α > 0 and β ∈ (0, 1], there exists a subsequence of (γk)k (we do not relabel) and a sequence (cγk, dγk, eγk, ωγk ; uγk ) ∈ Sγk(k) × U such that the associated problem (P α,β γk ) defined by: (P α,β γk ) : Minimize J(x(0), y(0), x(T ), y(T )) + α∥u − uγk∥1 + α ∥ (x(0), y(0), x(T ), y(T )) − (cγk, dγk, eγk, ωγk ) ∥, over ((x, y), u) such that u(·) ∈ U and    ˙x(t) = f β(t, x(t), y(t), u(t)) − Pr+1  ( ¯Dβ γk ) ˙y(t) = gβ(t, x(t), y(t), u(t)) − γkeγkφ(t,y(t))∇yφ(t, y(t)) a.e. t ∈ [0, T ], i=1 γkeγkψi(t,x(t))∇xψi(t, x(t)) a.e. t ∈ [0, T ],   (x(t), y(t)) ∈ ¯Bδo (¯x(t), ¯y(t)) ∀t ∈ [0, T ], (S.C), (x(0), y(0), x(T ), y(T )) ∈ Sγk(k), has an optimal solution ((xγk, yγk ), uγk ) such that (xγk (0), yγk (0), xγk (T ), yγk (T )) = (cγk, dγk, eγk, ωγk ) 110 and (xγk )k and (yγk )k are uniformly Lipschitz. Moreover, (xγk (0), yγk (xγk (t), yγk (T )) ∈ (Sδo (T ), yγk (0), xγk (t)) ∈ ¯C γk(t, k) × ¯B¯ρk (¯y(t)), ∀t ∈ [0, T ], + ρ1B) ∩ int (cid:16) ¯N (¯ε,¯δ) (0) × ¯N (¯ε,¯δ) (T )(cid:17) ⊂ int S(¯δ), (xγk, yγk ) unif−−→ (¯x, ¯y), uγk strongly −−−−→ in L1 ¯u, and ( ˙xγk, ˙yγk ) w∗−−−→ in L∞ ( ˙¯x, ˙¯y). (4.21) (4.22) (4.23) The functions ξi γk (i = 1, · · · , r + 1) and ζγk , corresponding to xγk and yγk via (3.76), satisfy (3.75) and there exists (ξ1, · · · , ξr) ∈ L∞([0, T ], Rr + ) such that ξi γk w∗−−−→ in L∞ ξi, ξi = 0 on I - i (¯x) (∀i = 1, · · · , r), ∥ r X i=1 ξi∥∞ ≤ 2¯µ ¯η2 , (γkξr+1 γk , γkζγk ) unif−−→ 0,(4.24) and ((¯x, ¯y), ¯u) together with (ξ1, · · · , ξr) satisfies    ˙¯x(t) = f (t, ¯x(t), ¯y(t), ¯u(t)) − Pr i=1 ξi(t)∇xψi(t, ¯x(t)) a.e. t ∈ [0, T ], ˙¯y(t) = g(t, ¯x(t), ¯y(t), ¯u(t)) a.e. t ∈ [0, T ], (4.25) ψi(t, ¯x(t)) ≤ 0, ∀t ∈ [0, T ], ∀i ∈ {1, · · · , r}. ) admits an optimal solution ((ˆxγk, ˆyγk Proof. Step 1: (P 0,β γk ) is the solution of ( ¯Dβ Given that (¯xγk, ¯yγk γk (4.19) holds, and (¯xγk, ¯yγk is admissible for (P α,β γk ). 
Step 1: $(P^{0,\beta}_{\gamma_k})$ admits an optimal solution $((\hat x_{\gamma_k},\hat y_{\gamma_k}),\hat u_{\gamma_k})$.

Given that $(\bar x_{\gamma_k},\bar y_{\gamma_k})$ is the solution of $(\bar D^\beta_{\gamma_k})$ corresponding to $((\bar c_{\gamma_k},\bar y(0)),\bar u)$, the inclusion (4.19) holds, and $(\bar x_{\gamma_k},\bar y_{\gamma_k})\to(\bar x,\bar y)$, it follows that, for $k$ large enough, the sequence $((\bar x_{\gamma_k},\bar y_{\gamma_k}),\bar u)$ is admissible for $(P^{\alpha,\beta}_{\gamma_k})$ for every $\alpha\ge 0$ and $\beta\in(0,1]$. In particular, for $k$ large enough, $((\bar x_{\gamma_k},\bar y_{\gamma_k}),\bar u)$ is admissible for $(P^{0,\beta}_{\gamma_k})$. We fix $k$ large enough and $\beta\in(0,1]$. We then apply Theorem 2.3.4 ([19, Theorem 23.10]) to $(P^{0,\beta}_{\gamma_k})$, with $Q = \operatorname{Gr}\big(\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))\cap C(\cdot)\times\mathbb R^l\big)$ and $E = S^{\gamma_k}(k)$. Notice that conditions (a), (b), (c), (d), (e), and (f) of this theorem are satisfied due to the validity of assumptions (A1), (A3.1), (A4), and (A5), along with the properties of $S^{\gamma_k}(k)$. Hence, $(P^{0,\beta}_{\gamma_k})$ admits an optimal solution $((\hat x_{\gamma_k},\hat y_{\gamma_k}),\hat u_{\gamma_k})$. Using equations (4.16) and (4.18), we deduce that there exists $(c,d,e,\omega)$ such that, up to a subsequence,
$$(\hat x_{\gamma_k}(0),\hat y_{\gamma_k}(0),\hat x_{\gamma_k}(T),\hat y_{\gamma_k}(T)) \longrightarrow (c,d,e,\omega)\in S_{\delta_o,\delta_o}.$$

Step 2: Convergence of $(\hat x_{\gamma_k},\hat y_{\gamma_k})$ to an admissible solution for $(\bar P_{\delta_o,\delta_o})$ with distance at most $\delta_o$ to $(\bar x,\bar y)$.

As $(\hat x_{\gamma_k}(0),\hat y_{\gamma_k}(0))\in\bar C^{\gamma_k}(0,k)\times\bar B_{\bar\rho_k}(\bar y(0))$ (see equation (4.17)) and its limit $(c,d)\in\bar N_{(\bar\varepsilon,\bar\delta)}(0)$, then, applying Theorem 3.2.14(I) to $((\hat x_{\gamma_k}(0),\hat y_{\gamma_k}(0)),\hat u_{\gamma_k})$, we deduce that the resulting $(\hat x_{\gamma_k},\hat y_{\gamma_k})$ is the unique solution of $(\bar D^\beta_{\gamma_k})$ and satisfies (3.74)–(3.76); hence, by (A1), (A4.2), and Theorem 3.2.14(II), there exists $((\hat x,\hat y),u)$ such that, along a subsequence of $(\hat x_{\gamma_k},\hat y_{\gamma_k})$ (we do not relabel), we have $(\hat x_{\gamma_k},\hat y_{\gamma_k})\xrightarrow{\text{unif}}(\hat x,\hat y)$, $(\hat x(t),\hat y(t))\in\bar N_{(\bar\varepsilon,\bar\delta)}(t)$ for all $t\in[0,T]$, and $((\hat x,\hat y),u)$ uniquely solves $(\bar D^\beta)$ starting at $(c,d)$. It follows that $(\hat x(T),\hat y(T)) = (e,\omega)$. Moreover, as $(\hat x_{\gamma_k},\hat y_{\gamma_k})$ satisfies (S.C), we have $(\hat x(t),\hat y(t))\in\bar B_{\delta_o}(\bar x(t),\bar y(t))$ for all $t\in[0,T]$. Using (A4.2) and the Filippov Selection Theorem (see Theorem 2.3.5), we can find $\hat u\in\mathcal U$ such that $((\hat x,\hat y),\hat u)$ satisfies $(\bar D)$; hence, $((\hat x,\hat y),\hat u)$ is admissible for $(\bar P_{\delta_o,\delta_o})$ with $\|(\hat x,\hat y)-(\bar x,\bar y)\|_\infty\le\delta_o$.
Step 3: $(P^{\alpha,\beta}_{\gamma_{k_n}})$, defined by means of $(c_{\gamma_{k_n}},d_{\gamma_{k_n}},e_{\gamma_{k_n}},\omega_{\gamma_{k_n}}; u_{\gamma_{k_n}})$, has $((x_{\gamma_{k_n}},y_{\gamma_{k_n}}),u_{\gamma_{k_n}})$ as optimal solution.

Since $((\bar x,\bar y),\bar u)$ is a $\bar\delta$-strong local minimizer for $(P)$, then, by Remark 4.2.6(i) and $\delta_o<\bar\varepsilon$, $((\bar x,\bar y),\bar u)$ is a $\delta_o$-strong local minimizer for $(\bar P_{\delta_o})$, and hence for $(\bar P_{\delta_o,\delta_o})$; thus, we have
$$J(\bar x(0),\bar y(0),\bar x(T),\bar y(T)) \le J(\hat x(0),\hat y(0),\hat x(T),\hat y(T)).$$
On the other hand, $((\hat x_{\gamma_k},\hat y_{\gamma_k}),\hat u_{\gamma_k})$ is an optimal solution for $(P^{0,\beta}_{\gamma_k})$ for which $((\bar x_{\gamma_k},\bar y_{\gamma_k}),\bar u_{\gamma_k})$ is admissible; we deduce that
$$J(\hat x_{\gamma_k}(0),\hat y_{\gamma_k}(0),\hat x_{\gamma_k}(T),\hat y_{\gamma_k}(T)) \le J(\bar x_{\gamma_k}(0),\bar y_{\gamma_k}(0),\bar x_{\gamma_k}(T),\bar y_{\gamma_k}(T)).$$
Combining the above two inequalities and using the continuity of $J(\cdot,\cdot,\cdot,\cdot)$, we deduce that
$$\lim_{k\to\infty}\Big[J(\bar x_{\gamma_k}(0),\bar y_{\gamma_k}(0),\bar x_{\gamma_k}(T),\bar y_{\gamma_k}(T)) - J(\hat x_{\gamma_k}(0),\hat y_{\gamma_k}(0),\hat x_{\gamma_k}(T),\hat y_{\gamma_k}(T))\Big] = 0.$$
Thus, for fixed $\alpha>0$, there exists an increasing sequence $(k_n)_n$ with $k_n>n$ such that, for all $n\ge 1$,
$$J(\bar x_{\gamma_{k_n}}(0),\bar y_{\gamma_{k_n}}(0),\bar x_{\gamma_{k_n}}(T),\bar y_{\gamma_{k_n}}(T)) \le J(\hat x_{\gamma_{k_n}}(0),\hat y_{\gamma_{k_n}}(0),\hat x_{\gamma_{k_n}}(T),\hat y_{\gamma_{k_n}}(T)) + \frac{\alpha}{n}.$$
The rest of the proof follows from imitating the proof of [55, Proposition 6.2] and applying Ekeland's Variational Principle (Theorem 2.2.49 or [66, Theorem 3.3.1]) to the following version of the data corresponding to our problem:

• $X = \{(c,d,e,\omega;u)\in S^{\gamma_{k_n}}(k_n)\times\mathcal U :$ the unique solution $((x,y),u)$ of $(\bar D^\beta_{\gamma_{k_n}})$ with $(x(0),y(0))=(c,d)$ satisfies $(x(T),y(T))=(e,\omega)$ and $(x(t),y(t))\in\bar B_{\delta_o}(\bar x(t),\bar y(t))\ \forall t\}$.

• For $(c,d,e,\omega;u),(c',d',e',\omega';u')\in X$, we define the distance
$$D\big((c,d,e,\omega;u),(c',d',e',\omega';u')\big) := \|u-u'\|_{L^1} + \|(c,d,e,\omega)-(c',d',e',\omega')\|.$$

• For $(c,d,e,\omega;u)\in X$, $\mathcal F(c,d,e,\omega;u) := J(c,d,e,\omega)$.

• $\alpha := \alpha$ and $\lambda := \frac{1}{n}$.

Notice that $(X,D)$ is a non-empty complete metric space, and $\mathcal F$ is continuous on $X$.
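The mechanism behind Ekeland's Variational Principle can be sketched directly on a finite metric space. The following Python sketch uses entirely hypothetical data (a four-point space, a cost `F`, and the metric `d = |i - j|`) mimicking the triple $(X, D, \mathcal F)$ above: starting from an $\varepsilon$-minimizer `u0`, a greedy descent on the perturbed functional $w\mapsto F(w) + (\varepsilon/\lambda)d(v,w)$ terminates at a point $v$ with $F(v)\le F(u_0)$, $d(u_0,v)\le\lambda$, and no strict improver of the perturbed functional left.

```python
# Toy illustration of Ekeland's Variational Principle on a finite metric space.
# All data below are hypothetical and unrelated to the dissertation's (X, D, F).
def ekeland_point(points, F, d, eps, lam, u0):
    v = u0
    while True:
        improvers = [w for w in points
                     if w != v and F(w) + (eps / lam) * d(v, w) < F(v)]
        if not improvers:
            return v                    # v satisfies the Ekeland conclusions
        v = min(improvers, key=F)       # F strictly decreases, so this terminates

# Hypothetical data: F on {0, 1, 2, 3} with metric |i - j|; u0 = 1 is an
# eps-minimizer since F(1) = 1.0 <= inf F + eps = 0.5 + 0.5.
values = {0: 5.0, 1: 1.0, 2: 0.5, 3: 3.0}
F = values.__getitem__
d = lambda a, b: abs(a - b)
v = ekeland_point([0, 1, 2, 3], F, d, eps=0.5, lam=2.0, u0=1)
assert F(v) <= F(1) and d(1, v) <= 2.0
```

The greedy walk terminates because each move strictly decreases $F$; on the hypothetical data it stops at the point `2`, where no perturbed improvement remains.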
Therefore, we deduce the existence of $(c_{\gamma_{k_n}},d_{\gamma_{k_n}},e_{\gamma_{k_n}},\omega_{\gamma_{k_n}}; u_{\gamma_{k_n}})\in X$ such that, for $(x_{\gamma_{k_n}},y_{\gamma_{k_n}})$ the solution of $(\bar D^\beta_{\gamma_{k_n}})$ corresponding to $((c_{\gamma_{k_n}},d_{\gamma_{k_n}}),u_{\gamma_{k_n}})$, $(x_{\gamma_{k_n}},y_{\gamma_{k_n}})$ satisfies $(x_{\gamma_{k_n}}(T),y_{\gamma_{k_n}}(T)) = (e_{\gamma_{k_n}},\omega_{\gamma_{k_n}})$ and $(x_{\gamma_{k_n}}(t),y_{\gamma_{k_n}}(t))\in\bar B_{\delta_o}(\bar x(t),\bar y(t))\ \forall t$, and the following holds:
$$J(x_{\gamma_{k_n}}(0),y_{\gamma_{k_n}}(0),x_{\gamma_{k_n}}(T),y_{\gamma_{k_n}}(T)) \le J(\bar x_{\gamma_{k_n}}(0),\bar y_{\gamma_{k_n}}(0),\bar x_{\gamma_{k_n}}(T),\bar y_{\gamma_{k_n}}(T)), \tag{4.26}$$
$$\|u_{\gamma_{k_n}}-\bar u\|_{L^1} + \big\|(c_{\gamma_{k_n}},d_{\gamma_{k_n}},e_{\gamma_{k_n}},\omega_{\gamma_{k_n}}) - \big(\bar c_{\gamma_{k_n}},\bar y(0),\bar x_{\gamma_{k_n}}(T),\bar y_{\gamma_{k_n}}(T)\big)\big\| \le \frac{1}{n}, \tag{4.27}$$
and, for all $((c,d,e,\omega);u)\in X$, we have
$$J(x_{\gamma_{k_n}}(0),y_{\gamma_{k_n}}(0),x_{\gamma_{k_n}}(T),y_{\gamma_{k_n}}(T)) \le J(x(0),y(0),x(T),y(T)) + \alpha\big(\|u-u_{\gamma_{k_n}}\|_{L^1} + \|(c,d,e,\omega)-(c_{\gamma_{k_n}},d_{\gamma_{k_n}},e_{\gamma_{k_n}},\omega_{\gamma_{k_n}})\|\big), \tag{4.28}$$
where $((x,y),u)$ is the unique solution of $(\bar D^\beta_{\gamma_{k_n}})$ starting with $(x(0),y(0))=(c,d)$ and satisfying $(x(T),y(T))=(e,\omega)$ and $(x(t),y(t))\in\bar B_{\delta_o}(\bar x(t),\bar y(t))\ \forall t\in[0,T]$. Hence, for $n$ large, the problem $(P^{\alpha,\beta}_{\gamma_{k_n}})$, defined by means of $(c_{\gamma_{k_n}},d_{\gamma_{k_n}},e_{\gamma_{k_n}},\omega_{\gamma_{k_n}}; u_{\gamma_{k_n}})$, has $((x_{\gamma_{k_n}},y_{\gamma_{k_n}}),u_{\gamma_{k_n}})$ as optimal solution satisfying
$$(x_{\gamma_{k_n}}(0),y_{\gamma_{k_n}}(0),x_{\gamma_{k_n}}(T),y_{\gamma_{k_n}}(T)) = (c_{\gamma_{k_n}},d_{\gamma_{k_n}},e_{\gamma_{k_n}},\omega_{\gamma_{k_n}}),$$
$(\bar x(0),\bar y(0),\bar x(T),\bar y(T))\in S$, $u_{\gamma_{k_n}}\xrightarrow[L^1]{\text{strongly}}\bar u$, $(x_{\gamma_{k_n}},y_{\gamma_{k_n}})\xrightarrow[n\to\infty]{\text{unif}}(\bar x,\bar y)$, and all conclusions of Theorem 3.2.14. Hence, (4.23) is valid, and, for $(\xi^i_{\gamma_k})_{i=1}^{r+1}$ and $\zeta_{\gamma_k}$ corresponding to $(x_{\gamma_k},y_{\gamma_k})$ via (3.76), there exist $(\xi^i)_{i=1}^{r+1}$ and $\zeta$ such that (3.75), (3.77), (3.79), (3.80), and (3.81)–(3.83) hold. Notice that, as $\psi_{r+1}(t,\bar x(t)) = -\frac{\bar\varepsilon^2}{2} < 0$ and $\varphi(t,\bar y(t)) = -\frac{\bar\delta^2}{2} < 0$ for all $t\in[0,T]$, we have that $\xi^{r+1}\equiv 0$, $\zeta\equiv 0$, and, for some $\tilde k\in\mathbb N$,
$$\psi_{r+1}(t,x_{\gamma_k}(t)) \le -\frac{\bar\varepsilon^2}{4} \quad\text{and}\quad \varphi(t,y_{\gamma_k}(t)) \le -\frac{\bar\delta^2}{4},\quad \forall k\ge\tilde k,\ \forall t\in[0,T],$$
and hence
$$\gamma_k\xi^{r+1}_{\gamma_k}(t) \le \gamma_k^2 e^{-\gamma_k\frac{\bar\varepsilon^2}{4}} \quad\text{and}\quad \gamma_k\zeta_{\gamma_k}(t) \le \gamma_k^2 e^{-\gamma_k\frac{\bar\delta^2}{4}},\quad \forall k\ge\tilde k,\ \forall t\in[0,T]. \tag{4.29}$$
That is, $(\gamma_k\xi^{r+1}_{\gamma_k},\gamma_k\zeta_{\gamma_k})\xrightarrow{\text{unif}}0$, and thus (4.24) holds.
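The decay in (4.29) rests on the elementary fact that $\gamma^2 e^{-\gamma c}\to 0$ as $\gamma\to\infty$ for any fixed $c>0$: the exponential eventually overwhelms the polynomial factor. A quick numerical sketch, with a hypothetical constant $c = \bar\varepsilon^2/4$ for $\bar\varepsilon = 0.25$:

```python
# Numerical illustration that gamma^2 * exp(-gamma * c) -> 0 for c > 0.
# The constant c below is hypothetical, standing in for epsilon_bar^2 / 4.
import math

c = 0.25 ** 2 / 4
gammas = (10.0, 100.0, 1000.0, 10000.0)
bounds = [g * g * math.exp(-g * c) for g in gammas]

# The sequence is not monotone at first, but it collapses once gamma*c dominates.
assert bounds[-1] < bounds[-2]
assert bounds[-1] < 1e-12
```

The same observation justifies the uniform limits in (4.67)–(4.68) below, where the factor $e^{\gamma_k\psi_{r+1}}$ is uniformly of order $e^{-\gamma_k\bar\varepsilon^2/4}$.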
Furthermore, since $h^\beta(t,\bar x(t),\bar y(t),\bar u(t)) = h(t,\bar x(t),\bar y(t),\bar u(t))$, it follows that $((\bar x,\bar y),\bar u)$ and $(\xi^1,\cdots,\xi^r)$ satisfy (4.25). Finally, (4.21) is also valid, due to having $\bar\sigma_{k_n}\to 0$, $(\bar e_{\gamma_{k_n}},\bar\omega_{\gamma_{k_n}})\to(0,0)$ (as $n\to\infty$), $(x_{\gamma_{k_n}}(0),y_{\gamma_{k_n}}(0),x_{\gamma_{k_n}}(T),y_{\gamma_{k_n}}(T))\in S^{\gamma_{k_n}}(k_n)$, and $(x_{\gamma_{k_n}}(t),y_{\gamma_{k_n}}(t))\in\bar C^{\gamma_{k_n}}(t,k_n)\times\bar B_{\bar\rho_{k_n}}(\bar y(t))\subset\operatorname{int}\bar N_{(\bar\varepsilon,\bar\delta)}(t)$.

The next result is obtained as a direct application of the nonsmooth Pontryagin maximum principle for state-constrained problems to each of the approximating problems $(P^{\alpha,\beta}_{\gamma_k})$ defined in Proposition 4.2.8.

Proposition 4.2.9 (Maximum principle for the approximating problems $(P^{\alpha,\beta}_{\gamma_k})$). Let $\alpha>0$ and $\beta\in(0,1]$ be fixed. Let $((x_{\gamma_k},y_{\gamma_k}),u_{\gamma_k})$ be the sequence from Proposition 4.2.8, which is optimal for $(P^{\alpha,\beta}_{\gamma_k})$ and satisfies $\lim_{k\to\infty}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)) = (\bar x(0),\bar y(0),\bar x(T),\bar y(T))$. Then, for each $k\in\mathbb N$, there exist $p_{\gamma_k} = (q_{\gamma_k},v_{\gamma_k})\in W^{1,1}([0,T];\mathbb R^n\times\mathbb R^l)$ and a scalar $\lambda_{\gamma_k}\ge 0$ such that:

(i) Nontriviality condition. For all $k\in\mathbb N$, we have
$$\|p_{\gamma_k}\|_\infty + \lambda_{\gamma_k} = 1. \tag{4.30}$$

(ii) Transversality equation.
$$(p_{\gamma_k}(0),-p_{\gamma_k}(T)) \in \lambda_{\gamma_k}\partial_L J(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)) + \alpha\bar B + N^L_{S^{\gamma_k}(k)}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)). \tag{4.31}$$

(iii) Maximization condition.
$$\max_{u\in U(t)}\Big\{\langle(q_{\gamma_k}(t),v_{\gamma_k}(t)),(f,g)(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u)\rangle - \frac{\lambda_{\gamma_k}\alpha}{\beta}\|u-u_{\gamma_k}(t)\|\Big\} \tag{4.32}$$
is attained at $u = u_{\gamma_k}(t)$ a.e. $t\in[0,T]$.

(iv) Adjoint equation. For almost all $t\in[0,T]$,
$$-\dot p_{\gamma_k}(t) = \begin{pmatrix}-\dot q_{\gamma_k}(t)\\ -\dot v_{\gamma_k}(t)\end{pmatrix} \in (1-\beta)\big(\partial_{(x,y)}f(t,x_{\gamma_k}(t),y_{\gamma_k}(t),\bar u(t))\big)^T q_{\gamma_k}(t) + \beta\big(\partial_{(x,y)}f(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u_{\gamma_k}(t))\big)^T q_{\gamma_k}(t)$$
$$+ (1-\beta)\big(\partial_{(x,y)}g(t,x_{\gamma_k}(t),y_{\gamma_k}(t),\bar u(t))\big)^T v_{\gamma_k}(t) + \beta\big(\partial_{(x,y)}g(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u_{\gamma_k}(t))\big)^T v_{\gamma_k}(t)$$
$$- \begin{pmatrix}\Big(\partial_x\big(\sum_{i=1}^{r+1}\gamma_k e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\nabla_x\psi_i(t,x_{\gamma_k}(t))\big)\Big)^T q_{\gamma_k}(t)\\[4pt] \Big(\nabla_y\big(\gamma_k e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\nabla_y\varphi(t,y_{\gamma_k}(t))\big)\Big)^T v_{\gamma_k}(t)\end{pmatrix}, \tag{4.33}$$
where
$$\partial_x\Big(\sum_{i=1}^{r+1}\gamma_k e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\nabla_x\psi_i(t,x_{\gamma_k}(t))\Big) \subset \sum_{i=1}^{r+1}\gamma_k e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\partial_{xx}\psi_i(t,x_{\gamma_k}(t)) + \sum_{i=1}^{r+1}\gamma_k^2 e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\nabla_x\psi_i(t,x_{\gamma_k}(t))\nabla_x\psi_i(t,x_{\gamma_k}(t))^T,$$
$$\nabla_y\Big(\gamma_k e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\nabla_y\varphi(t,y_{\gamma_k}(t))\Big) = \gamma_k e^{\gamma_k\varphi(t,y_{\gamma_k}(t))} I_{l\times l} + \gamma_k^2 e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\nabla_y\varphi(t,y_{\gamma_k}(t))\nabla_y\varphi(t,y_{\gamma_k}(t))^T.$$

Proof. As $(P^{\alpha,\beta}_{\gamma_k})$ is a standard optimal control problem with implicit state constraints, we shall apply [66, Theorem 9.3.1 and p. 332] to the optimal solution $((x_{\gamma_k},y_{\gamma_k}),u_{\gamma_k})$ of $(P^{\alpha,\beta}_{\gamma_k})$ obtained in Proposition 4.2.8. The proof is obtained by translating the conditions of [66, Theorem 9.3.1] to our data and using the standard state augmentation technique.

Step 1. All assumptions of [66, Theorem 9.3.1 and p. 332] are satisfied.

Applying the state augmentation technique, our optimal solution is $(x_{\gamma_k},y_{\gamma_k},z_{\gamma_k})$, where $(x_{\gamma_k},y_{\gamma_k})$ is the optimal solution from Proposition 4.2.8, the augmented state is $z(t) := \int_0^t\|u(s)-u_{\gamma_k}(s)\|\,ds$ (so that $z_{\gamma_k}\equiv 0$ along the optimal control), and $u_{\gamma_k}$ is the optimal control. Assumptions (H1), (H2), and (H3) of [66, Theorem 9.3.1] are satisfied because assumptions (A2), (A3), (A4), and (A5) hold true, $(x_{\gamma_k},y_{\gamma_k})$ converges uniformly to $(\bar x,\bar y)$, and (4.21) is satisfied. Note that for $k$ large enough, the required constraint qualification (CQ) in [66, p. 332] is satisfied by the multifunction $\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))$ at $(x_{\gamma_k}(t),y_{\gamma_k}(t))$. In other words, we need to show that:

1. $\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))$ is a lower semicontinuous multifunction;

2. $\operatorname{conv}\big(\bar N^L_{\bar B_{\delta_o}(\bar x(t),\bar y(t))}(x_{\gamma_k}(t),y_{\gamma_k}(t))\big)$ is pointed for all $t\in[0,T]$, where the graph of $\bar N^L_{\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))}(\cdot)$ is defined to be the closure of the graph of $N^L_{\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))}(\cdot)$.

This is due to $(x_{\gamma_k},y_{\gamma_k})$ converging uniformly to $(\bar x,\bar y)$ and to $\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))$ being lower semicontinuous, with closed, convex, and nonempty-interior values (hence epi-Lipschitz) (see Lemma 2.2.47 or [57, Remark 4.8(ii)]).

Step 2: The measure corresponding to the state constraint (S.C) is null.
Notice that the measure $\eta_{\gamma_k}\in C^*([0,T],\mathbb R^{n+l})$ corresponding to the state constraint (S.C) produced by [66, Theorem 9.3.1] is actually null. This is due to the fact that its support satisfies
$$\operatorname{supp}\{\eta_{\gamma_k}\} \subset \{t\in[0,T] : (t,x_{\gamma_k}(t),y_{\gamma_k}(t))\in\operatorname{bdry}\operatorname{Gr}\bar B_{\delta_o}(\bar x(\cdot),\bar y(\cdot))\} = \Big\{t\in[0,T] : (t,x_{\gamma_k}(t),y_{\gamma_k}(t))\in\bigcup_{t\in[0,T]}\{t\}\times S_{\delta_o}(\bar x(t),\bar y(t))\Big\} = \emptyset,$$
where $S_{\delta_o}(\bar x(t),\bar y(t)) = \{(x,y) : \|(x,y)-(\bar x(t),\bar y(t))\| = \delta_o\}$. The last equality follows from the uniform convergence of $(x_{\gamma_k}(t),y_{\gamma_k}(t))$ to $(\bar x,\bar y)$, (4.23).

Step 3. Deriving the transversality condition.

Let $q_{\gamma_k}$, $v_{\gamma_k}$, and $e_{\gamma_k}$ be the adjoint vectors corresponding to the optimal states $x_{\gamma_k}$, $y_{\gamma_k}$, and $z_{\gamma_k}$, respectively. We translate equation (iii) in [66, Theorem 9.3.1] to our data. First, notice that
$$e_{\gamma_k}(T) = -\lambda_{\gamma_k}\alpha.$$
In addition, we have, for $p_{\gamma_k} = (q_{\gamma_k},v_{\gamma_k})$, that
$$(p_{\gamma_k}(0),-p_{\gamma_k}(T)) \in \lambda_{\gamma_k}\partial_L J(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)) + \alpha\bar B + N^L_{S^{\gamma_k}(k)}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)).$$

Step 4. Deriving the adjoint equation.

We note that the Hamiltonian is given by
$$H(t,(x,y,z),(q,v,e),u) = \Big\langle q,\, f^\beta(t,x,y,u) - \sum_{i=1}^{r+1}\gamma_k e^{\gamma_k\psi_i(t,x)}\nabla_x\psi_i(t,x)\Big\rangle + \Big\langle v,\, g^\beta(t,x,y,u) - \gamma_k e^{\gamma_k\varphi(t,y)}\nabla_y\varphi(t,y)\Big\rangle + \langle e,\,\|u-u_{\gamma_k}(t)\|\rangle.$$
Using equation (ii) in [66, Theorem 9.3.1], we deduce that equation (4.33) is satisfied, and $\dot e_{\gamma_k}(t) = 0$ for a.e. $t\in[0,T]$. Now, we use the transversality condition to deduce that, for a.e. $t\in[0,T]$, $e_{\gamma_k}(t) = e_{\gamma_k}(T) = -\lambda_{\gamma_k}\alpha$.

Step 5. Deriving the maximization condition.

Applying equation (iv) in [66, Theorem 9.3.1] to our data, with the fact that $e_{\gamma_k}(t) = -\alpha\lambda_{\gamma_k}$ a.e. $t\in[0,T]$, we deduce that
$$\max_{u\in U(t)}\Big\{\langle q_{\gamma_k}(t), f(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u)\rangle + \langle v_{\gamma_k}(t), g(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u)\rangle - \frac{\lambda_{\gamma_k}\alpha}{\beta}\|u-u_{\gamma_k}(t)\|\Big\} \tag{4.34}$$
is attained at $u = u_{\gamma_k}(t)$ a.e. $t\in[0,T]$.

Step 6. Nontriviality condition.

Since $\eta_{\gamma_k}$ is null everywhere, we deduce from the nontriviality condition of Theorem 9.3.1 that $(p_{\gamma_k}, e_{\gamma_k}, \lambda_{\gamma_k}) \neq 0$.
But $e_{\gamma_k} = -\alpha\lambda_{\gamma_k}$, so the nontriviality condition translates to $\|p_{\gamma_k}\|_\infty + \lambda_{\gamma_k} \neq 0$, which, after normalization, yields (4.30).

Remark 4.2.10. We note the following.

• We will prove in the maximum principle (see equation (4.41)) that there exists $\tilde M_p > 0$ such that
$$\|p_{\gamma_k}(t)\| \le \tilde M_p\|p_{\gamma_k}(T)\|,\quad \forall t\in[0,T],\ \forall k\in\mathbb N. \tag{4.35}$$
Hence, we can replace the nontriviality condition (i) by
$$\|p_{\gamma_k}(T)\| + \lambda_{\gamma_k} = 1. \tag{4.36}$$
This is particularly useful for us when taking the limit of the nontriviality condition in the proof of the maximum principle. As we will see, $p_{\gamma_k}$ converges pointwise to a function $p$, allowing us to take the limit in (4.36).

• In addition, if $S = C_0\times\mathbb R^{n+l}$ for a closed $C_0\subset C(0)\times\mathbb R^l$, then $\lambda_{\gamma_k}\neq 0$; it is taken to be 1, and the nontriviality condition (i) is eliminated. Indeed, if $\lambda_{\gamma_k} = 0$, then using transversality condition (ii), we deduce that $p_{\gamma_k}(T) = 0$. Thus, using equation (4.35), we deduce that $p_{\gamma_k}$ is null. Hence, $(p_{\gamma_k},\lambda_{\gamma_k}) = 0$, which contradicts the nontriviality condition.

Table 4.3 Summary of results from Subsection 4.2.2.

Result — Description
Theorem 4.2.4 — We provide an existence result of an optimal solution for the truncated optimal control problem $(\bar P_\delta)$.
Remark 4.2.5 — We provide an existence result of an optimal solution for a truncated optimal control problem, which is identical to $(\bar P_\delta)$ except for the addition of an integral term involving a Carathéodory function in its objective function.
Remark 4.2.6 — We establish a connection between a strong local minimizer for $(P)$ and a strong local minimizer for $(\bar P_\delta)$.
Remark 4.2.7 — We provide properties for the sets $S^{\gamma_k}(k)$.
Proposition 4.2.8 — We provide a sequence of optimal control problems with specific joint endpoint constraints that approximates our initial problem $(P)$ near $((\bar x,\bar y),\bar u)$, that is, the problem $(\bar P_{\delta_o,\delta_o})$.
Proposition 4.2.9 — We provide necessary conditions for each of the approximating problems $(P^{\alpha,\beta}_{\gamma_k})$ defined in Proposition 4.2.8.
Remark 4.2.10 — We provide conditions that could replace the nontriviality condition.
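Once the control set $U(t)$ is discretized, the maximization condition (4.34) reduces to a pointwise finite maximization of the penalized Hamiltonian. The following sketch uses entirely hypothetical scalar data ($q$, $v$, a grid for $U(t)$, and toy dynamics $f(u)=u$, $g(u)=u^2$) to illustrate the shape of that pointwise problem; it is not the dissertation's model.

```python
# Pointwise maximization of <q, f(u)> + <v, g(u)> - (lambda*alpha/beta)*|u - u_k|
# over a discretized control set. All data below are hypothetical.
q, v = 2.0, -1.0
lam, alpha, beta = 1.0, 0.1, 0.5
u_k = 0.5                                   # current approximate control value
U = [i / 10 for i in range(11)]             # hypothetical grid for U(t)

def f(u):                                   # toy scalar dynamics
    return u

def g(u):
    return u * u

def H(u):                                   # penalized Hamiltonian of (4.34)
    return q * f(u) + v * g(u) - (lam * alpha / beta) * abs(u - u_k)

u_star = max(U, key=H)
assert all(H(u) <= H(u_star) + 1e-12 for u in U)
```

On this toy data the penalty term pulls the maximizer from the unconstrained optimum $u=1$ back toward $u_k$, landing at $u^\ast = 0.9$; as $\alpha\to 0$ the penalty disappears, mirroring the passage to the clean maximization condition (iv) of Theorem 4.2.11.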
4.2.3 Maximum principle for (P)

The following result provides necessary conditions, in the form of an extended Pontryagin maximum principle, for a $\bar\delta$-strong local minimizer $((\bar x,\bar y),\bar u)$ of the problem $(P)$. We start by proving the theorem under the temporary assumption (A4.2), and without assuming any uniform bound on the sets $U(t)$ (Step I). In Step II, we show that, when the compact sets $U(t)$ are uniformly bounded, the convexity assumption (A4.2) can be removed.

First, we introduce the following nonstandard notions of subdifferentials that shall be used in Theorem 4.2.11.

• $\partial^{(x,y)}_\ell h(t,\cdot,\cdot,u)$ denotes the extended Clarke generalized Jacobian of $h(t,\cdot,\cdot,u)$ that extends, from the interior to the boundary of $\bar N_{(\bar\delta,\bar\delta)}(t) := \big[C(t)\cap\bar B_{\bar\delta}(\bar x(t))\big]\times\bar B_{\bar\delta}(\bar y(t))$, the notion of the Clarke generalized Jacobian (see Definition 2.2.30 or [55, Equation (11)]).

• $\partial^{xx}_\ell\psi_i(t,\cdot)$ is the Clarke generalized Hessian of $\psi_i(t,\cdot)$ relative to $\operatorname{int}\big[C(t)\cap\bar B_{\bar\delta}(\bar x(t))\big]$ (see Definition 2.2.32 or [55, Equation (12)]).

• $\partial^L_\ell J(\cdot,\cdot,\cdot,\cdot)$ is the limiting subdifferential of $J(\cdot,\cdot,\cdot,\cdot)$ relative to $\operatorname{int}S(\bar\delta)$ (see Definition 2.2.25 or [55, Equation (8)]).

Theorem 4.2.11 (Generalized Pontryagin principle for $(P)$). Assume that (A1)–(A2) are satisfied. Let $((\bar x,\bar y),\bar u)$ be a $\bar\delta$-strong local minimizer for $(P)$ such that (A3.1), (A3.3), (A4.1), and (A5) are satisfied at $((\bar x,\bar y);\bar\delta)$.
Then, whenever (A4.2) holds true, or if the sets $U(t)$ are uniformly bounded, there exist an adjoint vector $p = (q,v)$ with $q\in BV([0,T];\mathbb R^n)$ and $v\in W^{1,2}([0,T];\mathbb R^l)$, finite signed Radon measures $(\nu^i)_{i=1}^r$ on $[0,T]$, nonnegative functions $(\xi^i)_{i=1}^r$ in $L^\infty([0,T];\mathbb R_+)$, $L^2$-measurable functions $\bar A(\cdot)$ in $M_{n\times n}([0,T])$, $\bar E(\cdot)$ in $M_{n\times l}([0,T])$, $\underline{\bar A}(\cdot)$ in $M_{l\times n}([0,T])$, and $\underline{\bar E}(\cdot)$ in $M_{l\times l}([0,T])$, $L^\infty$-measurable functions $(\vartheta^i(\cdot))_{i=1}^r$ in $M_{n\times n}([0,T])$, and a scalar $\lambda\ge 0$, satisfying the following:

(i) Primal-dual admissible equation
$$\begin{cases}\dot{\bar x}(t) = f(t,\bar x(t),\bar y(t),\bar u(t)) - \sum_{i=1}^r\xi^i(t)\nabla_x\psi_i(t,\bar x(t)) & \text{a.e. } t\in[0,T],\\ \dot{\bar y}(t) = g(t,\bar x(t),\bar y(t),\bar u(t)) & \text{a.e. } t\in[0,T],\\ \psi_i(t,\bar x(t))\le 0,\ \forall t\in[0,T],\ \forall i\in\{1,\cdots,r\}.\end{cases}$$

(ii) Nontriviality condition
$$\lambda + \|p(T)\| = 1.$$

(iii) Adjoint equations. For any $z(\cdot)\in C([0,T],\mathbb R^n)$,
$$\int_{[0,T]}\langle z(t),dq(t)\rangle = \int_0^T\langle z(t),-\bar A(t)^T q(t)\rangle dt + \int_0^T\langle z(t),-\underline{\bar A}(t)^T v(t)\rangle dt + \sum_{i=1}^r\Big(\int_0^T\xi^i(t)\langle z(t),\vartheta^i(t)q(t)\rangle dt + \int_{[0,T]}\langle z(t),\nabla_x\psi_i(t,\bar x(t))\rangle d\nu^i(t)\Big),$$
$$\dot v(t) = -\bar E(t)^T q(t) - \underline{\bar E}(t)^T v(t),$$
where, for a.e. $t\in[0,T]$,
$$(\bar A(t),\bar E(t))\in\partial^{(x,y)}_\ell f(t,\bar x(t),\bar y(t),\bar u(t)),\quad (\underline{\bar A}(t),\underline{\bar E}(t))\in\partial^{(x,y)}_\ell g(t,\bar x(t),\bar y(t),\bar u(t)),\quad \vartheta^i(t)\in\partial^{xx}_\ell\psi_i(t,\bar x(t)),\ i=1,\cdots,r.$$

(iv) Maximization condition
$$\max_{u\in U(t)}\big\{\langle q(t),f(t,\bar x(t),\bar y(t),u)\rangle + \langle v(t),g(t,\bar x(t),\bar y(t),u)\rangle\big\}$$
is attained at $u = \bar u(t)$ for a.e. $t\in[0,T]$.

(v) Complementary slackness condition. For $i=1,\cdots,r$, we have
$$\xi^i(t) = 0\ \forall t\in I^-_i(\bar x),\quad\text{and}\quad \xi^i(t)\langle\nabla_x\psi_i(t,\bar x(t)),q(t)\rangle = 0\ \text{a.e. } t\in[0,T].$$

(vi) Measure properties. For $i=1,\cdots,r$, we have $\operatorname{supp}\{\nu^i\}\subset I^0_i(\bar x)$, and the measure $\langle q(t),\nabla_x\psi_i(t,\bar x(t))\rangle d\nu^i(t)$ is nonnegative.

(vii) Transversality condition
$$((q,v)(0),-(q,v)(T)) \in \lambda\,\partial^L_\ell J((\bar x,\bar y)(0),(\bar x,\bar y)(T)) + N^L_S((\bar x,\bar y)(0),(\bar x,\bar y)(T)).$$

In addition, if $S = C_0\times\mathbb R^{n+l}$, for a closed $C_0\subset C(0)\times\mathbb R^l$, then $\lambda = 1$, and the nontriviality condition is discarded.

Proof. Step I.
Assume for now that the temporary assumption (A4.2) holds true. All the previous results, including the consequences in Subsection 4.2.2, are valid. In particular, $(\bar x,\bar y)$ is $L_{(\bar x,\bar y)}$-Lipschitz with $L_{(\bar x,\bar y)}\ge 1$. Assume as well that the additional assumption, (A3.3)′, is satisfied:

(A3.3)′ $\forall t\in I^0(\bar x)$, $G_\psi(t)$, the Gramian matrix of the vectors $\{\nabla_x\psi_i(t,\bar x(t)) : i\in I^0_{(t,\bar x(t))}\}$, is strictly diagonally dominant.

Since $\{\psi_i\}_{i=1}^r$ satisfy (A3.3)′, then by Lemma 4.2.2 there exist $0<\bar a\le 2a_o$, $0<\bar b<1$, and $\bar c>0$ such that (4.4) is satisfied, where $a_o$ is the constant in Lemma 4.2.3.

We begin our proof by introducing the function $\hat\psi_i$, which we will work with in place of $\psi_i$, in order to establish in Step I.2 that the function $\hat q_{\gamma_k}$ has uniformly bounded variation. After formulating the Pontryagin maximum principle in terms of $\hat\psi_i$, we will translate the necessary conditions back in terms of $\psi_i$ (see Step I.3.4). Define the following function $\hat\psi_i(\cdot,\cdot)$ on the same domain as $\psi_i(\cdot,\cdot)$:
$$\hat\psi_i(t,x) := \begin{cases}\psi_i(t,x) & \text{if } -\frac{\bar a}{2}\le\psi_i(t,x)\le 0 \text{ or } \psi_i(t,x)>0,\\ s(\psi_i(t,x)) & \text{if } -\bar a\le\psi_i(t,x)<-\frac{\bar a}{2},\\ s(-\bar a) & \text{if } \psi_i(t,x)<-\bar a,\end{cases}$$
where $s(z) := -\frac{3}{4}\bar a + \frac{1}{\bar a}(z+\bar a)^2$, for $-\bar a\le z\le-\frac{\bar a}{2}$. Notice that $s(\cdot)$ is a quadratic function with:

• $s(-\bar a) = -\frac{3}{4}\bar a$ and $s(-\frac{\bar a}{2}) = -\frac{\bar a}{2}$;
• $s'(-\bar a) = 0$ and $s'(-\frac{\bar a}{2}) = 1$;
• $0\le s'(z)\le 1$ for all $-\bar a\le z\le-\frac{\bar a}{2}$.

We also have
$$\nabla_x\hat\psi_i(t,x) := \begin{cases}\nabla_x\psi_i(t,x) & \text{if } -\frac{\bar a}{2}\le\psi_i(t,x)\le 0 \text{ or } \psi_i(t,x)>0,\\ s'(\psi_i(t,x))\,\nabla_x\psi_i(t,x) & \text{if } -\bar a\le\psi_i(t,x)<-\frac{\bar a}{2},\\ 0 & \text{if } \psi_i(t,x)<-\bar a.\end{cases}$$

Notice the following:

• $\{x\in\mathbb R^n : \hat\psi_i(t,x)\le 0,\ \forall i=1,\cdots,r\} = \{x\in\mathbb R^n : \psi_i(t,x)\le 0,\ \forall i=1,\cdots,r\} = C(t)$.

• Since $\{\psi_i\}_{i=1}^r$ satisfy (A3.1) and (A3.2), then $\{\hat\psi_i\}_{i=1}^r$ satisfy (A3.1) and (A3.2) with $L_{\hat\psi} = L_\psi\big(1+\frac{2}{\bar a}L_\psi\big)$ replacing $L_\psi$.

• All results of Subsection 4.2.2, including Proposition 4.2.8 and Proposition 4.2.9, can now be formulated in terms of $\hat\psi_i$ ($i=1,\cdots,r$) instead of $\psi_i$ ($i=1,\cdots,r$).
• Since $\{\psi_i\}_{i=1}^r$ satisfy (A3.3)′, and equation (4.4) is satisfied, we deduce that, for all $(t,x)\in\operatorname{Gr}\big(C(\cdot)\cap\bar B_{\bar c}(\bar x(\cdot))\big)$ with $I^{\frac{\bar a}{2}}_{(t,x)}\neq\emptyset$, and for all $i\in I^{\frac{\bar a}{2}}_{(t,x)}$, we have
$$\sum_{\substack{j\in I^{\bar a}_{(t,x)}\\ j\neq i}}\big|\langle\nabla_x\hat\psi_j(t,x),\nabla_x\hat\psi_i(t,x)\rangle\big| \le \bar b\,\|\nabla_x\hat\psi_i(t,x)\|^2. \tag{4.37}$$
This is due to the fact that $\hat\psi_i(t,x) = \psi_i(t,x)$ for $i\in I^{\frac{\bar a}{2}}_{(t,x)}$, and $s'(z)\le 1$ for all $-\bar a\le z\le-\frac{\bar a}{2}$.

Step I.1. Results from Proposition 4.2.8 and Proposition 4.2.9, and formulating the primal-dual admissible equation for fixed $(\alpha,\beta)$.

Fix $\alpha>0$ and $\beta\in(0,1]$. Recall from Proposition 4.2.8 that there exist a subsequence of $(\gamma_k)_k$ (we do not relabel), an optimal solution $((x_{\gamma_k},y_{\gamma_k}),u_{\gamma_k})$ for $(P^{\alpha,\beta}_{\gamma_k})$ with corresponding $(\hat\xi^1_{\gamma_k},\cdots,\hat\xi^{r+1}_{\gamma_k},\zeta_{\gamma_k})$ via (3.76), and $(\hat\xi^1,\cdots,\hat\xi^r)\in L^\infty([0,T];\mathbb R^r_+)$, such that (4.21)–(4.25) hold and $((\bar x,\bar y),\bar u)$ together with $(\hat\xi^1,\cdots,\hat\xi^r)$ satisfies the primal-dual admissible equation
$$\begin{cases}\dot{\bar x}(t) = f(t,\bar x(t),\bar y(t),\bar u(t)) - \sum_{i=1}^r\hat\xi^i(t)\nabla_x\hat\psi_i(t,\bar x(t)) & \text{a.e. } t\in[0,T],\\ \dot{\bar y}(t) = g(t,\bar x(t),\bar y(t),\bar u(t)) & \text{a.e. } t\in[0,T],\\ \hat\psi_i(t,\bar x(t))\le 0,\ \forall t\in[0,T],\ \forall i\in\{1,\cdots,r\}.\end{cases} \tag{4.38}$$
Moreover, Proposition 4.2.9 produces, for all $k\in\mathbb N$, $\hat p_{\gamma_k} = (\hat q_{\gamma_k},\hat v_{\gamma_k})\in W^{1,1}([0,T];\mathbb R^n\times\mathbb R^l)$ and $\hat\lambda_{\gamma_k}\ge 0$ such that equations (4.30)–(4.33) are valid. For simplicity, the $(\alpha,\beta)$-dependency shall only be made visible at the stage when the limit in $(\alpha,\beta)$ is performed.

Since $(x_{\gamma_k}(t),y_{\gamma_k}(t))\in\operatorname{int}\big(C(t)\cap\bar B_{\bar\delta}(\bar x(t))\big)\times B_{\bar\delta}(\bar y(t))$ for all $t\in[0,T]$, then
$$\partial_{(x,y)}(f,g)(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u_{\gamma_k}(t)) = \partial^{(x,y)}_\ell(f,g)(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u_{\gamma_k}(t)),$$
$$\partial_{(x,y)}(f,g)(t,x_{\gamma_k}(t),y_{\gamma_k}(t),\bar u(t)) = \partial^{(x,y)}_\ell(f,g)(t,x_{\gamma_k}(t),y_{\gamma_k}(t),\bar u(t)),$$
$$\partial_{xx}\hat\psi_i(t,x_{\gamma_k}(t)) = \partial^{xx}_\ell\hat\psi_i(t,x_{\gamma_k}(t))\ \text{ for } i=1,\cdots,r,\qquad \partial_{xx}\psi_{r+1}(t,x_{\gamma_k}(t)) = \partial^{xx}_\ell\psi_{r+1}(t,x_{\gamma_k}(t)).$$
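The listed properties of the quadratic glue $s$, and the near-orthogonality estimate (4.37) that the bound $0\le s'\le 1$ feeds into, are elementary to check numerically. The following sketch uses hypothetical data throughout (a value $\bar a = 2$, and two toy "active gradients"); it merely illustrates the two mechanisms, not the dissertation's actual constants.

```python
# (1) Check the quadratic glue s(z) = -(3/4)*abar + (z + abar)^2 / abar:
# matching values and slopes at the endpoints, and 0 <= s' <= 1 in between.
abar = 2.0   # hypothetical value of a_bar

def s(z):
    return -0.75 * abar + (z + abar) ** 2 / abar

def s_prime(z):
    return 2.0 * (z + abar) / abar

assert abs(s(-abar) - (-0.75 * abar)) < 1e-12        # s(-abar)   = -3*abar/4
assert abs(s(-abar / 2) - (-abar / 2)) < 1e-12       # s(-abar/2) = -abar/2
assert abs(s_prime(-abar)) < 1e-12                   # s'(-abar)   = 0
assert abs(s_prime(-abar / 2) - 1.0) < 1e-12         # s'(-abar/2) = 1
zs = [-abar + i * (abar / 2) / 100 for i in range(101)]
assert all(0.0 <= s_prime(z) <= 1.0 for z in zs)

# (2) Check the shape of (4.37) on hypothetical near-orthogonal active gradients:
# off-diagonal inner products dominated by b_bar * ||grad_i||^2.
grads = [(1.0, 0.1), (0.1, 1.0)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def condition_4_37(grads, b_bar):
    return all(
        sum(abs(dot(gj, gi)) for j, gj in enumerate(grads) if j != i)
        <= b_bar * dot(gi, gi)
        for i, gi in enumerate(grads)
    )

assert condition_4_37(grads, b_bar=0.5)   # |<g1, g2>| = 0.2 <= 0.5 * 1.01
```

Because $0\le s'\le 1$, passing from $\psi_i$ to $\hat\psi_i$ can only shrink the off-diagonal inner products in (4.37), which is why the estimate survives the gluing.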
Also, $(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))\in\operatorname{int}S(\bar\delta)$ yields
$$\partial_L J(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)) = \partial^L_\ell J(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)).$$
Using (A4.1), the first equation of (4.23), and the Filippov Selection Theorem (Theorem 2.3.5), equation (4.33) yields the existence of measurable $\hat{\bar A}_{\gamma_k}(\cdot)$, $\hat A_{\gamma_k}(\cdot)$ in $M_{n\times n}[0,T]$, $\hat{\bar E}_{\gamma_k}(\cdot)$, $\hat E_{\gamma_k}(\cdot)$ in $M_{n\times l}[0,T]$, $\hat{\underline{\bar A}}_{\gamma_k}(\cdot)$, $\hat{\underline A}_{\gamma_k}(\cdot)$ in $M_{l\times n}[0,T]$, $\hat{\underline{\bar E}}_{\gamma_k}(\cdot)$, $\hat{\underline E}_{\gamma_k}(\cdot)$ in $M_{l\times l}[0,T]$, and $\hat\vartheta^i_{\gamma_k}(\cdot)$, $\vartheta^{r+1}_{\gamma_k}(\cdot)$ in $M_{n\times n}[0,T]$ such that, for almost all $t\in[0,T]$,
$$(\hat{\bar A}_{\gamma_k},\hat{\bar E}_{\gamma_k})(t)\in\partial^{(x,y)}_\ell f(t,x_{\gamma_k}(t),y_{\gamma_k}(t),\bar u(t)),\qquad (\hat A_{\gamma_k},\hat E_{\gamma_k})(t)\in\partial^{(x,y)}_\ell f(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u_{\gamma_k}(t));$$
$$(\hat{\underline{\bar A}}_{\gamma_k},\hat{\underline{\bar E}}_{\gamma_k})(t)\in\partial^{(x,y)}_\ell g(t,x_{\gamma_k}(t),y_{\gamma_k}(t),\bar u(t)),\qquad (\hat{\underline A}_{\gamma_k},\hat{\underline E}_{\gamma_k})(t)\in\partial^{(x,y)}_\ell g(t,x_{\gamma_k}(t),y_{\gamma_k}(t),u_{\gamma_k}(t));$$
$$\hat\vartheta^i_{\gamma_k}(t)\in\partial^{xx}_\ell\hat\psi_i(t,x_{\gamma_k}(t))\ \text{ for } i=1,\cdots,r;\qquad \vartheta^{r+1}_{\gamma_k}(t) = I_{n\times n};$$
$$\max\big\{\|(\hat{\bar A}_{\gamma_k},\hat{\bar E}_{\gamma_k})\|_2,\ \|(\hat A_{\gamma_k},\hat E_{\gamma_k})\|_2,\ \|(\hat{\underline{\bar A}}_{\gamma_k},\hat{\underline{\bar E}}_{\gamma_k})\|_2,\ \|(\hat{\underline A}_{\gamma_k},\hat{\underline E}_{\gamma_k})\|_2\big\} \le \|L_h\|_2;$$
$$\|\hat\vartheta^i_{\gamma_k}\|_\infty \le L_{\hat\psi}\ \text{ for } i=1,\cdots,r,\qquad \|\vartheta^{r+1}_{\gamma_k}\|_\infty = 1;$$
and
$$\dot{\hat q}_{\gamma_k}(t) = \underbrace{-\big[(1-\beta)\hat{\bar A}^T_{\gamma_k}(t)+\beta\hat A^T_{\gamma_k}(t)\big]\hat q_{\gamma_k}(t) - \big[(1-\beta)\hat{\underline{\bar A}}_{\gamma_k}(t)^T+\beta\hat{\underline A}^T_{\gamma_k}(t)\big]\hat v_{\gamma_k}(t)}_{Q_{\gamma_k}(t)}$$
$$+ \underbrace{\sum_{i=1}^r\gamma_k e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\hat\vartheta^i_{\gamma_k}(t)\hat q_{\gamma_k}(t) + \gamma_k e^{\gamma_k\psi_{r+1}(t,x_{\gamma_k}(t))}\hat q_{\gamma_k}(t)}_{X_{\gamma_k}(t)}$$
$$+ \underbrace{\sum_{i=1}^r\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\nabla_x\hat\psi_i(t,x_{\gamma_k}(t))\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle}_{Y_{\gamma_k}(t)}$$
$$+ \underbrace{\gamma_k^2 e^{\gamma_k\psi_{r+1}(t,x_{\gamma_k}(t))}\nabla_x\psi_{r+1}(t,x_{\gamma_k}(t))\langle\nabla_x\psi_{r+1}(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle}_{Z_{\gamma_k}(t)}, \tag{4.39}$$
$$\dot{\hat v}_{\gamma_k}(t) = -\big[(1-\beta)\hat{\bar E}_{\gamma_k}(t)^T+\beta\hat E_{\gamma_k}(t)^T\big]\hat q_{\gamma_k}(t) - \big[(1-\beta)\hat{\underline{\bar E}}_{\gamma_k}(t)^T+\beta\hat{\underline E}_{\gamma_k}(t)^T\big]\hat v_{\gamma_k}(t) + \gamma_k e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\hat v_{\gamma_k}(t) + \gamma_k^2 e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\nabla_y\varphi(t,y_{\gamma_k}(t))\langle\nabla_y\varphi(t,y_{\gamma_k}(t)),\hat v_{\gamma_k}(t)\rangle. \tag{4.40}$$

Step I.2. Uniform boundedness of $\{\hat p_{\gamma_k}\}$, $\{\|\dot{\hat v}_{\gamma_k}\|_2\}$, and $\{\|\dot{\hat q}_{\gamma_k}\|_1\}$.

The proof of this step is a generalization, to our general setting, of the proof of the corresponding step in [58, Theorem 3.1]. We first start by proving that $\{\hat p_{\gamma_k}\}$ is uniformly bounded.
We have
$$\frac{1}{2}\frac{d}{dt}\|\hat p_{\gamma_k}(t)\|^2 = \langle\hat q_{\gamma_k}(t),\dot{\hat q}_{\gamma_k}(t)\rangle + \langle\hat v_{\gamma_k}(t),\dot{\hat v}_{\gamma_k}(t)\rangle,$$
which, by (4.39) and (4.40), equals
$$\Big\langle\hat q_{\gamma_k}(t),\ -\big[\beta\hat A_{\gamma_k}(t)^T+(1-\beta)\hat{\bar A}_{\gamma_k}(t)^T\big]\hat q_{\gamma_k}(t) - \big[\beta\hat{\underline A}_{\gamma_k}(t)^T+(1-\beta)\hat{\underline{\bar A}}_{\gamma_k}(t)^T\big]\hat v_{\gamma_k}(t)\Big\rangle$$
$$+ \sum_{i=1}^r\gamma_k e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\Big(\langle\hat q_{\gamma_k}(t),\hat\vartheta^i_{\gamma_k}(t)\hat q_{\gamma_k}(t)\rangle + \underbrace{\gamma_k\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_i(t,x_{\gamma_k}(t))\rangle\big|^2}_{\text{positive term}}\Big)$$
$$+ \underbrace{\gamma_k e^{\gamma_k\psi_{r+1}(t,x_{\gamma_k}(t))}\|\hat q_{\gamma_k}(t)\|^2 + \gamma_k^2 e^{\gamma_k\psi_{r+1}(t,x_{\gamma_k}(t))}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\psi_{r+1}(t,x_{\gamma_k}(t))\rangle\big|^2}_{\text{positive terms}}$$
$$+ \Big\langle\hat v_{\gamma_k}(t),\ -\big[\beta\hat E_{\gamma_k}(t)^T+(1-\beta)\hat{\bar E}_{\gamma_k}(t)^T\big]\hat q_{\gamma_k}(t) - \big[\beta\hat{\underline E}_{\gamma_k}(t)^T+(1-\beta)\hat{\underline{\bar E}}_{\gamma_k}(t)^T\big]\hat v_{\gamma_k}(t)\Big\rangle$$
$$+ \underbrace{\gamma_k e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\|\hat v_{\gamma_k}(t)\|^2 + \gamma_k^2 e^{\gamma_k\varphi(t,y_{\gamma_k}(t))}\big|\langle\hat v_{\gamma_k}(t),\nabla_y\varphi(t,y_{\gamma_k}(t))\rangle\big|^2}_{\text{positive terms}}$$
$$\ge \Big[-2L_h(t) - L_{\hat\psi}\frac{2\bar\mu}{\bar\eta^2}\Big]\|\hat p_{\gamma_k}(t)\|^2 := -L_p(t)\|\hat p_{\gamma_k}(t)\|^2,$$
where (3.75) is employed and $L_p(\cdot)\in L^2([0,T],\mathbb R_+)$. Using Gronwall's Lemma (Lemma 2.4.1), we deduce that there exists a constant $M_p>0$ such that
$$\|\hat p_{\gamma_k}(t)\| \le e^{\|L_p(\cdot)\|_1}\|\hat p_{\gamma_k}(T)\| \le M_p,\quad \forall t\in[0,T],\ \forall k\in\mathbb N, \tag{4.41}$$
where the last inequality is due to the uniform boundedness of $\|\hat p_{\gamma_k}(T)\|$, obtained from the nontriviality condition (4.36) when $S$ has a general form, and from the transversality condition (4.31), $\hat\lambda_{\gamma_k}=1$, and equation (4.20), when $S = C_0\times\mathbb R^{n+l}$.

We proceed to prove the uniform boundedness of $\{\|\dot{\hat v}_{\gamma_k}\|_2\}$ and $\{\|\dot{\hat q}_{\gamma_k}\|_1\}$. From (4.40), (4.41), (3.75), and (4.29), there exist $L_v(\cdot)\in L^2([0,T],\mathbb R_+)$ and $k_v\in\mathbb N$ such that, for $k\ge k_v$, we have
$$\|\dot{\hat v}_{\gamma_k}(t)\| \le \Big\|\big[(1-\beta)\hat{\bar E}_{\gamma_k}(t)^T+\beta\hat E_{\gamma_k}(t)^T\big]\hat q_{\gamma_k}(t) + \big[(1-\beta)\hat{\underline{\bar E}}_{\gamma_k}(t)^T+\beta\hat{\underline E}_{\gamma_k}(t)^T\big]\hat v_{\gamma_k}(t)\Big\| + \frac{2\bar\mu}{\bar\eta^2}M_p + \gamma_k^2 e^{-\gamma_k\frac{\bar\delta^2}{4}}\bar\delta^2 M_p \le L_v(t)M_p,\quad \forall t\in[0,T].$$
Thus, for all $k\ge k_v$, $(\dot{\hat v}_{\gamma_k})$ is uniformly bounded in $L^2$ by a constant $M_v$.

We now proceed to prove that $(\dot{\hat q}_{\gamma_k})$ is uniformly bounded in $L^1$. Observe that (4.29), together with (4.22) and (4.41), yields that, for some $\bar k_1\in\mathbb N$ with $\bar k_1\ge k_v$, we have
$$\|Q_{\gamma_k}(t)\| \le 2L_h(t)M_p;\qquad \|X_{\gamma_k}(t)\| \le \frac{2\bar\mu}{\bar\eta^2}\max\{1,L_{\hat\psi}\}M_p;\qquad \|Z_{\gamma_k}(t)\| \le \gamma_k^2 e^{-\gamma_k\frac{\bar\varepsilon^2}{4}}\bar\varepsilon^2 M_p. \tag{4.42}$$
Hence, using (4.39) and (4.42), we can see that $\{\hat q_{\gamma_k}\}$ is of uniformly bounded variation once we prove that
$$\int_0^T\|Y_{\gamma_k}(t)\|\,dt = \int_0^T\sum_{i=1}^r\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\|\nabla_x\hat\psi_i(t,x_{\gamma_k}(t))\|\,\big|\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big|\,dt$$
is uniformly bounded. Denote by
$$I^{\bar a}_k = I^{\bar a}_{(t,x_{\gamma_k}(t))} = \{i\in\{1,\ldots,r\} : -\bar a\le\psi_i(t,x_{\gamma_k}(t))\le 0\} \tag{4.43}$$
and define
$$I^{\bar a}(x_{\gamma_k}) := \{t\in[0,T] : I^{\bar a}_{(t,x_{\gamma_k}(t))}\neq\emptyset\}. \tag{4.44}$$
Using the definitions of $I^{\bar a}(x_{\gamma_k})$, $I^{\bar a}_k$, and $I^{\frac{\bar a}{2}}_k$, we deduce that
$$\forall t\in[I^{\bar a}(x_{\gamma_k})]^c,\ \forall i=1,\cdots,r:\quad \hat\psi_i(t,x_{\gamma_k}(t)) = -\tfrac{3\bar a}{4},\quad \nabla_x\hat\psi_i(t,x_{\gamma_k}(t)) = 0, \tag{4.45}$$
$$\forall t\in I^{\bar a}(x_{\gamma_k}),\ \forall i\in[I^{\bar a}_k]^c:\quad \hat\psi_i(t,x_{\gamma_k}(t)) = -\tfrac{3\bar a}{4},\quad \nabla_x\hat\psi_i(t,x_{\gamma_k}(t)) = 0, \tag{4.46}$$
$$\forall t\in I^{\bar a}(x_{\gamma_k}),\ \forall i\in I^{\frac{\bar a}{2}}_k:\quad \hat\psi_i(t,x_{\gamma_k}(t)) = \psi_i(t,x_{\gamma_k}(t)),\quad \nabla_x\hat\psi_i(t,x_{\gamma_k}(t)) = \nabla_x\psi_i(t,x_{\gamma_k}(t)), \tag{4.47}$$
$$\forall t\in I^{\bar a}(x_{\gamma_k}),\ \forall i\in I^{\bar a}_k\setminus I^{\frac{\bar a}{2}}_k:\quad \hat\psi_i(t,x_{\gamma_k}(t)) < -\tfrac{\bar a}{2},\quad \nabla_x\hat\psi_i(t,x_{\gamma_k}(t)) = s'(\psi_i(t,x_{\gamma_k}(t)))\,\nabla_x\psi_i(t,x_{\gamma_k}(t)). \tag{4.48}$$
As a result of (4.45)–(4.48) and the fact that $0\le s'(z)\le 1$ for all $-\bar a\le z\le-\frac{\bar a}{2}$, to prove that $\int_0^T\|Y_{\gamma_k}(t)\|\,dt$ is uniformly bounded, it remains to prove that
$$I_1 := \int_{I^{\bar a}(x_{\gamma_k})}\sum_{i\in I^{\frac{\bar a}{2}}_k}\gamma_k^2 e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\|\nabla_x\psi_i(t,x_{\gamma_k}(t))\|\,\big|\langle\nabla_x\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big|\,dt \le M_1, \tag{4.49}$$
for a certain constant $M_1>0$. For that, it is sufficient to prove that there exists $M_2>0$ such that
$$I_2 := \int_{I^{\bar a}(x_{\gamma_k})}\sum_{i\in I^{\frac{\bar a}{2}}_k}\gamma_k^2 e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\|\nabla_x\psi_i(t,x_{\gamma_k}(t))\|^2\,\big|\langle\nabla_x\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big|\,dt \le M_2. \tag{4.50}$$
Indeed, for $t\in I^{\bar a}(x_{\gamma_k})$ and $i\in I^{\frac{\bar a}{2}}_k$, we have $\psi_i(t,x_{\gamma_k}(t))\ge-\frac{\bar a}{2}\ge -a_o$, and hence the uniform convergence of $x_{\gamma_k}$ to $\bar x$ and Lemma 4.2.3 yield the existence of $\bar k_2\in\mathbb N$ such that, for all $k\ge\bar k_2$, we have $\|\nabla_x\psi_i(t,x_{\gamma_k}(t))\| > \bar\eta$. Thus, if $I_2$ is uniformly bounded by a constant $M_2$, it follows that $I_1\le\frac{M_2}{\bar\eta}$ for $k$ large enough. We proceed to prove that (4.50) holds true.
Using Lemma 2.4.2, we first calculate, for each $j=1,\cdots,r$ and a.e. $t\in[0,T]$:
$$\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big| = \langle\dot{\hat q}_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t) + \langle\hat q_{\gamma_k}(t),\Theta^j_{\gamma_k}(t).(1,\dot x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t), \tag{4.51}$$
where $s^j_{\gamma_k}(t)$ is the sign of $\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle$ and $\Theta^j_{\gamma_k}(t)\in\partial_{(t,x)}\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))$. Using equation (4.39) in (4.51), we get, a.e.,
$$\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big| = \langle Q_{\gamma_k}(t)+X_{\gamma_k}(t)+Z_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t) + \langle\hat q_{\gamma_k}(t),\Theta^j_{\gamma_k}(t).(1,\dot x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t)$$
$$+ \sum_{i=1}^r\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\langle\nabla_x\hat\psi_i,\nabla_x\hat\psi_j\rangle\big|_{(t,x_{\gamma_k}(t))}\,\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\, s^j_{\gamma_k}(t). \tag{4.52}$$
Let $t\in I^{\bar a}(x_{\gamma_k})$. Summing the previous equality over $j\in I^{\bar a}_k$, we obtain that, a.e.,
$$J_1 := \sum_{j\in I^{\bar a}_k}\sum_{i=1}^r\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\langle\nabla_x\hat\psi_i,\nabla_x\hat\psi_j\rangle\big|_{(t,x_{\gamma_k}(t))}\,\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\, s^j_{\gamma_k}(t)$$
$$= \sum_{j\in I^{\bar a}_k}\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big| - \sum_{j\in I^{\bar a}_k}\langle Q_{\gamma_k}(t)+X_{\gamma_k}(t)+Z_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t) - \sum_{j\in I^{\bar a}_k}\langle\hat q_{\gamma_k}(t),\Theta^j_{\gamma_k}(t).(1,\dot x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t). \tag{4.53}$$
On the other hand, splitting, in the definition of $J_1$, the summation over $i$ and switching the order of summation between $i$ and $j$, we have
$$J_1 = \sum_{i=1}^r\sum_{j\in I^{\bar a}_k}\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\langle\nabla_x\hat\psi_i,\nabla_x\hat\psi_j\rangle\big|_{(t,x_{\gamma_k}(t))}\,\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\, s^j_{\gamma_k}(t)$$
$$= \sum_{i\in I^{\bar a}_k}\sum_{j\in I^{\bar a}_k}\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\langle\nabla_x\hat\psi_i,\nabla_x\hat\psi_j\rangle\big|_{(t,x_{\gamma_k}(t))}\,\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\, s^j_{\gamma_k}(t)$$
$$= \sum_{i\in I^{\frac{\bar a}{2}}_k}\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\Big(\|\nabla_x\hat\psi_i(t,x_{\gamma_k}(t))\|^2 + \sum_{\substack{j\in I^{\bar a}_k\\ j\neq i}} s^j_{\gamma_k}(t)s^i_{\gamma_k}(t)\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\Big)\big|\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big|$$
$$\quad + \sum_{i\in I^{\bar a}_k\setminus I^{\frac{\bar a}{2}}_k}\sum_{j\in I^{\bar a}_k}\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\langle\nabla_x\hat\psi_i,\nabla_x\hat\psi_j\rangle\big|_{(t,x_{\gamma_k}(t))}\,\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\, s^j_{\gamma_k}(t).$$
Using the fact that $x_{\gamma_k}$ converges uniformly to $\bar x$, we deduce from equation (4.37) that there exists $\bar k_3\in\mathbb N$ such that, for $k\ge\bar k_3$, we have, for $i\in I^{\frac{\bar a}{2}}_k$,
$$\sum_{\substack{j\in I^{\bar a}_k\\ j\neq i}} s^j_{\gamma_k}(t)s^i_{\gamma_k}(t)\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle \ge -\bar b\,\|\nabla_x\hat\psi_i(t,x_{\gamma_k}(t))\|^2.$$
Then,
$$J_1 \ge (1-\bar b)\sum_{i\in I^{\frac{\bar a}{2}}_k}\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\|\nabla_x\hat\psi_i(t,x_{\gamma_k}(t))\|^2\big|\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big| + \underbrace{\sum_{i\in I^{\bar a}_k\setminus I^{\frac{\bar a}{2}}_k}\sum_{j\in I^{\bar a}_k}\gamma_k^2 e^{\gamma_k\hat\psi_i(t,x_{\gamma_k}(t))}\langle\nabla_x\hat\psi_i,\nabla_x\hat\psi_j\rangle\big|_{(t,x_{\gamma_k}(t))}\,\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\, s^j_{\gamma_k}(t)}_{J_2}.$$
Hence (recalling (4.47)), we have
$$\sum_{i\in I^{\frac{\bar a}{2}}_k}\gamma_k^2 e^{\gamma_k\psi_i(t,x_{\gamma_k}(t))}\|\nabla_x\psi_i(t,x_{\gamma_k}(t))\|^2\big|\langle\nabla_x\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big| \le \frac{1}{1-\bar b}(J_1 - J_2). \tag{4.54}$$
Integrating the last inequality over $I^{\bar a}(x_{\gamma_k})$, we deduce from the definition of $I_2$ that
$$0\le I_2 \le \frac{1}{1-\bar b}\int_{I^{\bar a}(x_{\gamma_k})}(J_1-J_2)\,dt \le \frac{1}{1-\bar b}\Big|\int_{I^{\bar a}(x_{\gamma_k})}J_1\,dt\Big| + \frac{1}{1-\bar b}\int_{I^{\bar a}(x_{\gamma_k})}|J_2|\,dt.$$
Using (4.48), we deduce that there exist $\bar k_4\in\mathbb N$ and a constant $M_3>0$ such that, for all $k\ge\bar k_4$, we have
$$\int_{I^{\bar a}(x_{\gamma_k})}|J_2|\,dt \le \frac{\gamma_k^2 e^{-\gamma_k\frac{\bar a}{2}}L^3_\psi r^2 M_p T}{1-\bar b} \le M_3. \tag{4.55}$$
Hence,
$$0\le I_2 \le \frac{1}{1-\bar b}\Big|\int_{I^{\bar a}(x_{\gamma_k})}J_1\,dt\Big| + M_3,\quad \forall k\ge\bar k_4. \tag{4.56}$$
Note that (4.53) yields that the uniform boundedness of $\big|\int_{I^{\bar a}(x_{\gamma_k})}J_1\,dt\big|$ is equivalent to that of
$$\Big|\int_{I^{\bar a}(x_{\gamma_k})}\frac{d}{dt}\sum_{j\in I^{\bar a}_k}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big|,$$
since the remaining two terms of (4.53) have uniformly bounded integrals: their integrands satisfy
$$\Big|\sum_{j\in I^{\bar a}_k}\langle\hat q_{\gamma_k}(t),\Theta^j_{\gamma_k}(t).(1,\dot x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t)\Big| \le M_p L_\psi\Big(1+M_h+\frac{2\bar\mu}{\bar\eta^2}\bar L\Big)r,$$
$$\Big|\sum_{j\in I^{\bar a}_k}\langle Q_{\gamma_k}(t)+X_{\gamma_k}(t)+Z_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\, s^j_{\gamma_k}(t)\Big| \le \Big[2L_h(t)M_p + \frac{2\bar\mu}{\bar\eta^2}\max\{L_\psi,1\}M_p + \gamma_k^2 e^{-\gamma_k\frac{\bar\varepsilon^2}{4}}\bar\varepsilon^2 M_p\Big]L_\psi r.$$
We now proceed to prove the boundedness of
$$\Big|\int_{I^{\bar a}(x_{\gamma_k})}\frac{d}{dt}\sum_{j\in I^{\bar a}_k}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big|.$$
Using the Fundamental Theorem of Calculus, we have that
$$\Big|\int_0^T\sum_{j=1}^r\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big| \le 2rL_\psi M_p. \tag{4.57}$$
Using (4.45), (4.46), (4.52), and the uniform boundedness of $(\dot x_{\gamma_k})$, we deduce that there exists a constant $M_4>0$ such that
$$\Big|\int_{[I^{\bar a}(x_{\gamma_k})]^c}\sum_{j=1}^r\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big| \le M_4, \tag{4.58}$$
$$\Big|\int_{I^{\bar a}(x_{\gamma_k})}\sum_{j\in[I^{\bar a}_k]^c}\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big| \le M_4. \tag{4.59}$$
Hence, combining (4.57) and (4.58), we conclude that there exists a constant $M_5>0$ such that
$$\Big|\int_{I^{\bar a}(x_{\gamma_k})}\sum_{j=1}^r\frac{d}{dt}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big| \le M_5.$$
This last inequality with (4.59) yields that there exists a constant $M_6>0$ such that
$$\Big|\int_{I^{\bar a}(x_{\gamma_k})}\frac{d}{dt}\sum_{j\in I^{\bar a}_k}\big|\langle\hat q_{\gamma_k}(t),\nabla_x\hat\psi_j(t,x_{\gamma_k}(t))\rangle\big|\,dt\Big| \le M_6.$$
Hence, $\big|\int_{I^{\bar a}(x_{\gamma_k})}J_1\,dt\big|$ is uniformly bounded, and, by (4.56), $I_2$ is uniformly bounded. Therefore, $\{\|\dot{\hat q}_{\gamma_k}\|_1\}$ is uniformly bounded by a constant $M_q$.

Step I.3. Construction of $p=(q,v)$, $\lambda\ge 0$, $\vartheta^i$ (for each $i$), $\nu^i$ (for each $i$), $\bar A$, $A$, $\underline{\bar A}$, $\underline A$, $\bar E$, $\underline{\bar E}$, $E$, $\underline E$, for each fixed $(\alpha,\beta)$, satisfying some necessary conditions.

In Step I.1, we proved the existence of $(\hat\xi^i)_{i=1}^r$ in $L^\infty([0,T];\mathbb R_+)$ such that condition (i) is satisfied. We now follow steps similar to Steps 3–10 in the proof of [55, Theorem 6.1].

Step I.3.1. Construction of $\hat p = (\hat q,\hat v)$.

From Step I.2, we find that $\hat q_{\gamma_k}\in W^{1,1}$ satisfies, for $k$ large enough,
$$\|\hat q_{\gamma_k}\|_\infty \le M_p \quad\text{and}\quad V^1_0(\hat q_{\gamma_k}) = \|\dot{\hat q}_{\gamma_k}\|_1 \le M_q. \tag{4.60}$$
Hence, by Helly's first theorem (see Theorem 2.4.6(i)), we deduce that $\hat q_{\gamma_k}$ admits a pointwise convergent subsequence, whose limit $\hat q\in BV([0,T];\mathbb R^n)$ satisfies
$$\|\hat q\|_\infty \le M_p \quad\text{and}\quad V^1_0(\hat q) \le M_q. \tag{4.61}$$
By Helly's second theorem (see Theorem 2.4.6(ii)), we deduce that, for any $z\in C([0,T];\mathbb R^n)$, we have
$$\lim_{k\to\infty}\int_0^T\langle z(t),\dot{\hat q}_{\gamma_k}(t)\rangle\,dt = \int_{[0,T]}\langle z(t),d\hat q(t)\rangle. \tag{4.62}$$
By Step I.2, we also find that $\hat v_{\gamma_k}\in W^{1,2}$ satisfies, for $k$ large enough,
$$\|\hat v_{\gamma_k}\|_\infty \le M_p \quad\text{and}\quad \|\dot{\hat v}_{\gamma_k}\|_2 \le M_v. \tag{4.63}$$
(4.62) (4.63) Hence, by Theorems 2.4.10-2.4.13, we deduce that ˆvγk quence to a function ˆv(·) ∈ W 1,2([0, T ]; Rl) such that admits a pointwise convergent subse- ˆvγk (·) unif−−→ ˆv(·), ∥ˆv∥∞ ≤ Mp, ˙ˆv(·), (·) w−→ L2 ˙ˆvγk ∥ ˙ˆv∥2 ≤ Mv, and for any z(·) ∈ C([0, T ], Rl), we have lim k→∞ Z T 0 ⟨z(t), ˙ˆvγk (t)⟩dt = Z [0,T ] ⟨z(t), ˙ˆv(t)⟩dt. (4.64) Step I.3.2 Construction of ˆ¯A, ˆA, i = 1, · · · , r) and formulating adjoint equations for fixed (α, β). ˆ¯A, ˆA, ˆ¯E, ˆ¯E, ˆE, ˆE, ˆϑi (for i = 1, · · · , r), ˆνi (for 130 It follows from (4.62), that, for any z(·) ∈ C([0, T ], Rn), we have Z [0,T ] ⟨z(t), dˆq(t)⟩ = lim k→∞ = lim k→∞ + lim k→∞ Z T 0 Z T 0 Z T 0 ⟨z(t), ˙ˆqγk (t)⟩dt ⟨z(t), Qγk (t)⟩dt + lim k→∞ ⟨z(t), Yγk (t)⟩dt + lim k→∞ We will work on each of these limits above separately. Since Z T 0 Z T 0 ⟨z(t), Xγk (t)⟩dt ⟨z(t), Zγk (t)⟩dt. max (cid:26) ∥( ˆ¯Aγk, ˆ¯Eγk )∥2, ∥( ˆAγk, ˆEγk )∥2, ∥( ˆ¯Aγk, ˆ¯Eγk )∥2, ∥( ˆAγk, ˆEγk )∥2 (cid:27) ≤ ∥Lh∥2 then, using Theorem 2.4.11, along a subsequence, we do not relabel, ( ˆ¯Aγk, ˆ¯E), ( ˆA, ˆE), ( ˆ¯A, ) converge weakly in L2 to some ( ˆ¯A, ( ˆ¯Aγk, tively. Using Theorem 2.4.15, we conclude that ), ( ˆAγk, ˆEγk ˆ¯Eγk ˆ¯Eγk ), ( ˆAγk, ˆEγk ), ˆ¯E), ( ˆA, ˆE) respec- ( ˆ¯A, ( ˆ¯A, ˆ¯E)(t) ∈ ∂(x,y) ˆ¯E)(t) ∈ ∂(x,y) ℓ ℓ f (t, ¯x(t), ¯y(t), ¯u(t)), g(t, ¯x(t), ¯y(t), ¯u(t)). We also know that ˆqγk and ˆv(·) respectively. We then conclude using Theorem 2.4.12 that and ˆvγk are uniformly bounded in L∞ and converge pointwise to ˆq(·) ˆ¯Aγk ˆAγk ˆ¯Aγk ˆAγk (t)T ˆqγk (t)T ˆqγk (t) weakly−−−→ L2 (t) weakly−−−→ L2 (t)T ˆvγk (t) weakly−−−→ L2 (t)T ˆvγk (t) weakly−−−→ L2 ˆ¯A(t)T ˆq(t) ˆA(t)T ˆq(t) ˆ¯A(t)T ˆv(t) ˆA(t)T ˆv(t). (4.65) Then, for any z(·) ∈ C([0, 1], Rn), we have Z T lim k→∞ Z T = 0 ⟨z(t), Qγk (t)⟩dt z(t), − h(1 − β) ˆ¯AT (t) + β ˆAT (t)iˆq(t) − (cid:21) (cid:20) (1 − β) ˆ¯A(t)T + β ˆAT (t) (cid:29) ˆv(t) . 
0 (cid:28) Now, for each i, the sequence of positive and continuous functions ˆξi γk produces a sequence of bounded linear functionals in C ⊕(0; T ) to which it corresponds a sequence of finite positive 131 Radon measure ˆµi γk ∈ M+([0, T ]) such that for all B ∈ B([0, T ]) and for all z ∈ C([0, T ], R), we have ˆµi γk (B) = Z B ˆξi γk (t) dt, Z [0,T ] zdˆµi γk = Z T 0 z(t) ˆξi γk (t) dt. Using the fact that ˆξi γk uniformly bounded in L∞ and converges weakly* in L∞ to ˆξi, we conclude from the second equation of (4.66) that ˆµi γk M+([0, T ]) corresponding to ˆξi. Now, using the fact that ∥ ˆϑi apply Theorem 2.4.14 and we follow the same arguments as those used in Step 3 of the proof ˆψi(t, ¯x(t)) of Theorem 5.1 in [70] to deduce that there exist ( ˆϑi(·))r converges weakly* to ˆµi o such that ˆϑi(t) ∈ ∂xx (for i = 1, · · · , r), we , the element in ∥∞ ≤ L ˆψ γk i=1 l a.e. t ∈ [0, 1] and for any z(·) ∈ C([0, 1]; Rn), we have lim k→∞ Z T r X 0 i=1 γkeγk ˆψi(t,xγk (t))⟨z(t), ˆϑi γk (t)ˆqγk (t)⟩dt = Z T r X 0 i=1 ˆξi(t)⟨z(t), ˆϑi(t)ˆq(t)⟩dt. (4.66) Using (4.24) in which we have γkξr+1 γk −→ 0 uniformly, we deduce that, for all z(·) ∈ C([0, T ]; Rn), we have lim k→∞ lim k→∞ Z T 0 Z T 0 ⟨z(t), γkeγkψr+1(t,xγk (t)) ˆqγk (t)⟩dt = 0, (4.67) keγkψr+1(t,xγk (t))⟨∇xψr+1(t, xγk γ2 (t)), ˆqγk (t)⟩⟨z(t), ∇xψr+1(t, xγk (t))⟩dt = 0. (4.68) This means that for any z(·) ∈ C([0, 1], Rn), we have lim k→∞ lim k→∞ Z T 0 Z T 0 ⟨z(t), Xγk (t)⟩dt = Z T r X 0 i=1 ⟨z(t), Zγk (t)⟩dt = 0. ˆξi(t)⟨z(t), ˆϑi(t)ˆq(t)⟩dt, We now work on the last term of our limit taking process: lim k→∞ Z T r X 0 i=1 keγk γ2 ˆψi(t,xγk (t))⟨z(t), ∇x ˆψi(t, xγk (t))⟩⟨∇x ˆψi(t, xγk (t)), ˆqγk (t)⟩dt. Let ˆνi γk the finite signed Radon measure on [0, T ], corresponding to the bounded linear ˆψi(t, xγk (t)⟩, t ∈ [0, 1], i.e. functional on C([0, 1]; R) defined by γk (t)), ˆqγk (t)⟨∇x ˆξi γk dˆνi γk (t) := γk ˆξi γk (t)⟨∇x ˆψi(t, xγk (t)), ˆqγk (t)⟩dt, i = 1, · · · , r. 
(4.69) 132 This means that, for all z ∈ C([0, T ]; R), we have ⟨ˆνi γk , z⟩ = Z [0,T ] z dˆνi γk = Z T 0 z(t)γk ˆξi γk (t)⟨∇x ˆψi(t, xγk (t)), ˆqγk (t)⟩dt. Using steps similar to step above, we can prove that there exists a constant M7 > 0 such that for k large enough Thus, Z T 0 γk ˆξi γk (t)⟨∇x ˆψi(t, xγk (t)), ˆqγk (t)⟩dt ≤ M7. (4.70) ∥ˆνi γk ∥T.V. ≤ M7. Hence, along a subsequence (we do not relabel), the sequence (ˆνi γk )k converges weakly* to a finite signed Radon measure ˆνi supported in {t ∈ [0, T ] : ˆψi(t, ¯x(t)) = 0} = {t ∈ [0, T ] : ψi(t, ¯x(t)) = 0} = I 0 i (¯x) and ∥ˆνi∥T.V. ≤ M7. Using Theorem 2.4.14, and the fact that ∇x ˆψi(t, ¯x), we deduce that ∇x ˆψi(t, xγk ˆψi(t, xγk which means that for all z(·) ∈ C([0, 1], Rn), we have uniformly to ∇x ) is uniformly bounded and converges ˆψi(t, ¯x)ˆνi, converges weakly* to ∇x )ˆνi γk lim k→∞ = lim k→∞ Z T 0 Z T 0 ⟨z(t), Yγk (t)⟩dt r X i=1 keγk γ2 ˆψi(t,xγk (t))⟨z(t), ∇x ˆψi(t, xγk (t))⟩⟨∇x ˆψi(t, xγk (t)), ˆqγk (t)⟩dt Z T = r X ⟨z(t), ∇x ˆψi(t, ¯x(t))⟩dˆνi(t). 0 i=1 Hence, Z [0,T ] ⟨z(t), dˆq(t)⟩ Z T (cid:28) z(t), − h(1 − β) ˆ¯AT (t) + β ˆAT (t)iˆq(t) − (cid:21) (cid:20) (1 − β) ˆ¯A(t)T + β ˆAT (t) ˆv(t) (cid:29) ˆξi(t)⟨z(t), ˆϑi(t)ˆq(t)⟩dt + r X i=1 Z T r X 0 i=1 ⟨z(t), ∇x ˆψi(t, ¯x(t))⟩dˆνi(t). (4.71) 133 = + 0 Z T 0 Now, notice from (4.64), that for any z(·) ∈ C([0, T ]; Rl), we have Z [0,T ] ⟨z(t), ˙ˆv(t)⟩dt = lim k→∞ = lim k→∞ + lim k→∞ + lim k→∞ + lim k→∞ Z T 0 Z T 0 Z T 0 Z T 0 Z T 0 (t)⟩dt ⟨z(t), ˙ˆvγk (cid:20) ⟨z(t), − ⟨z(t), − (1 − β) ˆ¯Eγk (1 − β) ˆ¯Eγk (cid:20) (t)T + β ˆEγk (t)T (cid:21) ˆqγk (t)⟩dt (t)T + β ˆEγk (cid:21) (t)T ˆvγk (t)⟩dt γkeγkφ(t,yγk (t))⟨z(t), ˆvγk (t)⟩dt keγkφ(t,yγk (t))⟨∇yφ(t, yγk γ2 (t)), ˆvγk (t)⟩⟨z(t), ∇yφ(t, yγk (t))⟩dt. Using (4.24), we have γkζγk −→ 0 uniformly. Hence, for all z(·) ∈ C([0, T ]; Rl), we have lim k→∞ lim k→∞ Z T 0 Z T 0 ⟨z(t), γkeγkφ(t,yγk (t))ˆvγk (t)⟩dt = 0, keγkφ(t,yγk (t))⟨∇yφ(t, yγk γ2 (t)), ˆvγk (t)⟩⟨z(t), ∇yφ(t, yγk (t))⟩dt = 0. 
(4.72) (4.73) We also know that ˆqγk and ˆv(·) respectively. We then conclude that and ˆvγk are uniformly bounded in L∞ and converge pointwise to ˆq(·) ˆ¯Eγk ˆEγk ˆ¯Eγk ˆEγk (t)T ˆqγk (t)T ˆqγk (t)T ˆvγk (t)T ˆvγk (t) weakly−−−→ L2 (t) weakly−−−→ L2 (t) weakly−−−→ L2 (t) weakly−−−→ L2 ˆ¯E(t)T ˆq(t) ˆE(t)T ˆq(t) ˆ¯E(t)T ˆv(t) ˆE(t)T ˆv(t) Hence, we have that Z [0,T ] ⟨z(t), ˙ˆv(t)⟩dt = + Z T 0 Z T 0 ⟨z(t), − (cid:20) (1 − β) ˆ¯E(t)T + β ˆE(t)T (cid:21) ˆq(t)⟩dt ⟨z(t), − (cid:20) (1 − β) ˆ¯E(t)T + β ˆE(t)T (cid:21) ˆv(t)⟩dt. (4.74) Step I.3.3 Formulating non-triviality condition, maximization condition, com- plementary slackness, measure properties, and transversality condition for fixed (α, β). For condition (vi), equation (4.69) yields the following Dˆqγk (t), ∇x ˆψi(t, xγk (t))E dˆνi γk (t) = γk ˆξi γk (t)⟨∇x ˆψi(t, xγk (t)), ˆqγk (t)⟩2 ≥ 0, 134 and hence, upon taking the limit, we get Dˆq(t), ∇x ˆψi(t, ¯x(t))E dˆνi(t) ≥ 0. For condition (ii), since ˆλγk ∈ [0, 1] then, along a subsequence, ˆλγk limit ˆλ ∈ [0, 1]. Taking the limit of (4.36), we deduce that converges pointwise to a ˆλ + ∥ˆp(T )∥ = 1. For condition (iv), we know by (4.32) that for t ∈ [0, T ], u ∈ U (t), ⟨ˆqγk (t), f (t, xγk (t), yγk (t), u)⟩ + ⟨ˆvγk (t), g(t, xγk (t), yγk (t), u)⟩ − ≤ ⟨ˆqγk (t), f (t, xγk (t), yγk (t), uγk (t))⟩ + ⟨ˆvγk (t), g(t, xγk (t), yγk (t), uγk ∥u − uγk ˆλγkα β (t))⟩ a.e. t ∈ [0, 1]. (t)∥ Taking the limit when k → ∞ of this last inequality, we conclude that for t ∈ [0, T ], u ∈ U (t), ⟨ˆq(t), f (t, ¯x(t), ¯y(t), u)⟩ + ⟨ˆv(t), g(t, ¯x(t), ¯y(t), u)⟩ − ˆλα β ∥u − ¯u(t)∥ ≤ ⟨ˆq(t), f (t, ¯x(t), ¯y(t), ¯u(t))⟩ + ⟨ˆv(t), g(t, ¯x(t), ¯y(t), ¯u(t))⟩ a.e. t ∈ [0, 1]. This is equivalent to saying that ( max u∈U (t) ⟨ˆq(t), f (t, ¯x(t), ¯y(t), u)⟩ + ⟨ˆv(t), g(t, ¯x(t), ¯y(t), u)⟩ − ) ∥u − ¯u(t)∥ ˆλα β is attained at u = ¯u(t) for a.e. t ∈ [0, T ]. For condition (v), we have ˆξi ≥ 0 (i = 1, · · · , r), and ˆξi(t) = 0 ∀t ∈ I - i (¯x). 
We also have, using equation (4.70), that
$$\int_0^T\hat\xi^i_{\gamma_k}(t)\big|\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big|\,dt\le\frac{1}{\gamma_k}\int_0^T\gamma_k\hat\xi^i_{\gamma_k}(t)\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\,dt\le\frac{M_7}{\gamma_k}.\qquad(4.75)$$
Hence,
$$\lim_{k\to\infty}\int_0^T\hat\xi^i_{\gamma_k}(t)\big|\langle\nabla_x\hat\psi_i(t,x_{\gamma_k}(t)),\hat q_{\gamma_k}(t)\rangle\big|\,dt=0,$$
and thus,
$$\int_0^T\hat\xi^i(t)\big|\langle\nabla_x\hat\psi_i(t,\bar x(t)),\hat q(t)\rangle\big|\,dt=0.$$
We conclude that $\hat\xi^i(t)\langle\nabla_x\hat\psi_i(t,\bar x(t)),\hat q(t)\rangle=0$ a.e. $t\in[0,T]$.

Finally, for condition (vii), by equation (4.31), we have that
$$(\hat p_{\gamma_k}(0),-\hat p_{\gamma_k}(T))\in\hat\lambda_{\gamma_k}\,\partial^L_\ell J(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))+\alpha\bar B+N^L_{S_{\gamma_k}(k)}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)).\qquad(4.76)$$
This is equivalent to saying that there exist
$$(z^1_{\gamma_k},z^2_{\gamma_k},s^1_{\gamma_k},s^2_{\gamma_k})\in\partial^L_\ell J(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)),\qquad(w^1_{\gamma_k},w^2_{\gamma_k},m^1_{\gamma_k},m^2_{\gamma_k})\in N^L_{S_{\gamma_k}(k)}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T)),\qquad o_{\gamma_k}\in\bar B,$$
such that
$$(\hat p_{\gamma_k}(0),-\hat p_{\gamma_k}(T))=\hat\lambda_{\gamma_k}(z^1_{\gamma_k},z^2_{\gamma_k},s^1_{\gamma_k},s^2_{\gamma_k})+\alpha o_{\gamma_k}+(w^1_{\gamma_k},w^2_{\gamma_k},m^1_{\gamma_k},m^2_{\gamma_k}).\qquad(4.77)$$
• As we have seen before, since $\hat\lambda_{\gamma_k}\in[0,1]$, then, along a subsequence, $\hat\lambda_{\gamma_k}$ converges to a limit $\hat\lambda\in[0,1]$. We also have $\|(z^1_{\gamma_k},z^2_{\gamma_k},s^1_{\gamma_k},s^2_{\gamma_k})\|\le L_g$; then, along a subsequence, $(z^1_{\gamma_k},z^2_{\gamma_k},s^1_{\gamma_k},s^2_{\gamma_k})\longrightarrow(z^1,z^2,s^1,s^2)$. Since $\partial^L_\ell J(\cdot,\cdot,\cdot,\cdot)$ has closed graph with nonempty and compact values, then, using the fact that $(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))\longrightarrow(\bar x(0),\bar y(0),\bar x(T),\bar y(T))$, we get $(z^1,z^2,s^1,s^2)\in\partial^L_\ell J(\bar x(0),\bar y(0),\bar x(T),\bar y(T))$.
• Since $\|o_{\gamma_k}\|\le1$, then, along a subsequence, we have that $o_{\gamma_k}\longrightarrow o\in\bar B$.
• We also have $(\hat p_{\gamma_k}(0),-\hat p_{\gamma_k}(T))\longrightarrow(\hat p(0),-\hat p(T))$.
• We deduce from (4.77) that $(w^1_{\gamma_k},w^2_{\gamma_k},m^1_{\gamma_k},m^2_{\gamma_k})$ must converge to some $(w^1,w^2,m^1,m^2)$.
We now show that $(w^1,w^2,m^1,m^2)\in N^L_S(\bar x(0),\bar y(0),\bar x(T),\bar y(T))$. Indeed,
$$(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))\in S_{\gamma_k}(k),\qquad(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))\in(S_{\delta_o}+\rho_1B)\cap\operatorname{int}\big(\bar N^{(\bar\varepsilon,\bar\delta)}(0)\times\bar N^{(\bar\varepsilon,\bar\delta)}(T)\big)\subset\operatorname{int}S(\bar\delta).$$
We now have two cases:
Case 1: $\bar x(0)\in\operatorname{int}C(0)$.
Since $(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T)-\bar e_{\gamma_k},y_{\gamma_k}(T)-\bar\omega_{\gamma_k})\in\operatorname{int}\bar B_{\delta_o}$, then
$$N^L_{S_{\gamma_k}(k)}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))=N^L_S(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T)-\bar e_{\gamma_k},y_{\gamma_k}(T)-\bar\omega_{\gamma_k}).$$
Case 2: $\bar x(0)\in\operatorname{bdry}C(0)$. Since $\big(x_{\gamma_k}(0)-\bar\sigma_k\frac{d_{\bar x(0)}}{\|d_{\bar x(0)}\|},y_{\gamma_k}(0),x_{\gamma_k}(T)-\bar e_{\gamma_k},y_{\gamma_k}(T)-\bar\omega_{\gamma_k}\big)\in\operatorname{int}\bar B_{\delta_o}$, then
$$N^L_{S_{\gamma_k}(k)}(x_{\gamma_k}(0),y_{\gamma_k}(0),x_{\gamma_k}(T),y_{\gamma_k}(T))=N^L_S\Big(x_{\gamma_k}(0)-\bar\sigma_k\tfrac{d_{\bar x(0)}}{\|d_{\bar x(0)}\|},y_{\gamma_k}(0),x_{\gamma_k}(T)-\bar e_{\gamma_k},y_{\gamma_k}(T)-\bar\omega_{\gamma_k}\Big).$$
In both cases, since $(w^1_{\gamma_k},w^2_{\gamma_k},m^1_{\gamma_k},m^2_{\gamma_k})\to(w^1,w^2,m^1,m^2)$, and $N^L_S(\cdot)$ has closed values and closed graph, then $(w^1,w^2,m^1,m^2)\in N^L_S(\bar x(0),\bar y(0),\bar x(T),\bar y(T))$. Consequently, the limit of (4.76) is
$$(\hat p(0),-\hat p(T))\in\hat\lambda\,\partial^L_\ell J(\bar x(0),\bar y(0),\bar x(T),\bar y(T))+\alpha\bar B+N^L_S(\bar x(0),\bar y(0),\bar x(T),\bar y(T)).$$

Step I.3.4 Formulating the necessary conditions for each fixed $(\alpha,\beta)$ in terms of $\psi_i$. Notice that $\hat\nu^i$ and $\hat\xi^i$ are supported in $\{t\in[0,T]:\hat\psi_i(t,\bar x(t))=0\}=\{t\in[0,T]:\psi_i(t,\bar x(t))=0\}=I^0_i(\bar x)$, and on this set, $\nabla_x\hat\psi_i(t,\bar x(t))=\nabla_x\psi_i(t,\bar x(t))$. Hence, all the previous necessary conditions can be formulated in terms of $\psi_i$ by simply taking $q:=\hat q$, $v:=\hat v$, $p:=\hat p$, $\lambda:=\hat\lambda$, $\bar A:=\hat{\bar A}$, $A:=\hat A$, $\bar{\mathcal A}:=\hat{\bar{\mathcal A}}$, $\mathcal A:=\hat{\mathcal A}$, $\bar E:=\hat{\bar E}$, $\bar{\mathcal E}:=\hat{\bar{\mathcal E}}$, $E:=\hat E$, $\mathcal E:=\hat{\mathcal E}$, $\xi^i:=\hat\xi^i$ (for $i=1,\cdots,r$), $\vartheta^i:=\hat\vartheta^i$ (for $i=1,\cdots,r$), and $\nu^i:=\hat\nu^i$ (for $i=1,\cdots,r$).

Step I.4. Taking $\alpha\to0$. All the boxed equations above depend on $\alpha$ and $\beta$. As the first step, we take the limit $\alpha\to0$ while keeping $\beta$ fixed. To explicitly indicate the dependence on $\alpha$ in our notation, we introduce a subscript $\alpha_j$, where $\alpha_j\in(0,1]$ and $\alpha_j\to0$. First, for each $j$, there exists $(\xi^1_{\alpha_j},\cdots,\xi^r_{\alpha_j})\in L^\infty([0,T];\mathbb R^r_+)$ such that
$$\xi^i_{\alpha_j}=0\ \text{on}\ I^-_i(\bar x)\ (\forall i=1,\cdots,r),\qquad\Big\|\sum_{i=1}^r\xi^i_{\alpha_j}\Big\|_\infty\le\frac{2\bar\mu}{\bar\eta^2},\qquad(4.78)$$
and
$$\begin{cases}\dot{\bar x}(t)=f(t,\bar x(t),\bar y(t),\bar u(t))-\sum_{i=1}^r\xi^i_{\alpha_j}(t)\nabla_x\psi_i(t,\bar x(t))&\text{a.e. }t\in[0,T],\\ \dot{\bar y}(t)=g(t,\bar x(t),\bar y(t),\bar u(t))&\text{a.e. }t\in[0,T],\\ \psi_i(t,\bar x(t))\le0,\quad\forall t\in[0,T],\ \forall i\in\{1,\cdots,r\}.\end{cases}\qquad(4.79)$$
Thus, for each $i=1,\cdots,r$, there exists a subsequence of $\xi^i_{\alpha_j}$ (we do not relabel) that converges weakly* (and hence weakly in $L^2$) to a non-negative function $\xi^i\in L^\infty([0,T];\mathbb R)$, with $\xi^i=0$ on $I^-_i(\bar x)$. Moreover, using the fact that, for each $i\in\{1,\cdots,r\}$,
$$\int_0^T\xi^i_{\alpha_j}(t)\nabla_x\psi_i(t,\bar x(t))\,dt\longrightarrow\int_0^T\xi^i(t)\nabla_x\psi_i(t,\bar x(t))\,dt,$$
we deduce that condition (i) of our theorem is satisfied (with no dependence on $\alpha$). We now show the dependence on $\alpha_j$ in the adjoint equation. For each $j$, we have
$$\int_{[0,T]}\langle z(t),dq_{\alpha_j}(t)\rangle=\int_0^T\Big\langle z(t),-\big[(1-\beta)\bar A^{\mathsf T}_{\alpha_j}(t)+\beta A^{\mathsf T}_{\alpha_j}(t)\big]q_{\alpha_j}(t)-\big[(1-\beta)\bar{\mathcal A}_{\alpha_j}(t)^{\mathsf T}+\beta\mathcal A^{\mathsf T}_{\alpha_j}(t)\big]v_{\alpha_j}(t)\Big\rangle\,dt+\int_0^T\sum_{i=1}^r\xi^i_{\alpha_j}(t)\langle z(t),\vartheta^i_{\alpha_j}(t)q_{\alpha_j}(t)\rangle\,dt+\sum_{i=1}^r\int_0^T\langle z(t),\nabla_x\psi_i(t,\bar x(t))\rangle\,d\nu^i_{\alpha_j}(t),$$
$$\int_{[0,T]}\langle z(t),\dot v_{\alpha_j}(t)\rangle\,dt=\int_0^T\Big\langle z(t),-\big[(1-\beta)\bar E_{\alpha_j}(t)^{\mathsf T}+\beta E_{\alpha_j}(t)^{\mathsf T}\big]q_{\alpha_j}(t)\Big\rangle\,dt+\int_0^T\Big\langle z(t),-\big[(1-\beta)\bar{\mathcal E}_{\alpha_j}(t)^{\mathsf T}+\beta\mathcal E_{\alpha_j}(t)^{\mathsf T}\big]v_{\alpha_j}(t)\Big\rangle\,dt.$$
Using the results of Steps I.3.1 and I.3.2 with the subscript $\gamma_k$ replaced by the subscript $\alpha_j$, we deduce that there exist a function $q(\cdot)$ of bounded variation and an absolutely continuous function $v(\cdot)$ such that the previous two equations are satisfied with no $\alpha_j$-dependence. By Step I.3.3, we also have $\lambda_{\alpha_j}+\|p_{\alpha_j}(T)\|=1$. Then, along a subsequence, $\lambda_{\alpha_j}$ converges to $\lambda\in[0,1]$ and $\lambda+\|p(T)\|=1$. We also have that
$$\max_{u\in U(t)}\Big\{\langle q_{\alpha_j}(t),f(t,\bar x(t),\bar y(t),u)\rangle+\langle v_{\alpha_j}(t),g(t,\bar x(t),\bar y(t),u)\rangle-\frac{\lambda_{\alpha_j}\alpha_j}{\beta}\|u-\bar u(t)\|\Big\}\qquad(4.80)$$
is attained at $u=\bar u(t)$ for a.e. $t\in[0,T]$. Hence, taking the limit as $\alpha_j\to0$, we deduce that
$$\max_{u\in U(t)}\Big\{\langle q(t),f(t,\bar x(t),\bar y(t),u)\rangle+\langle v(t),g(t,\bar x(t),\bar y(t),u)\rangle\Big\}\qquad(4.81)$$
is attained at $u=\bar u(t)$ for a.e. $t\in[0,T]$. As, for each $i=1,\cdots,r$, $\xi^i_{\alpha_j}$ converges weakly* in $L^\infty$ to $\xi^i$ and $\xi^i_{\alpha_j}(t)\langle\nabla_x\psi_i(t,\bar x(t)),q_{\alpha_j}(t)\rangle=0$ a.e. $t\in[0,T]$, then
$$0=\lim_{j\to\infty}\int_0^T\xi^i_{\alpha_j}(t)\big|\langle\nabla_x\psi_i(t,\bar x(t)),q_{\alpha_j}(t)\rangle\big|\,dt=\int_0^T\xi^i(t)\big|\langle\nabla_x\psi_i(t,\bar x(t)),q(t)\rangle\big|\,dt.$$
Hence, $\xi^i(t)\langle\nabla_x\psi_i(t,\bar x(t)),q(t)\rangle=0$ a.e. $t\in[0,T]$. Finally, for the transversality condition, we have
$$(p_{\alpha_j}(0),-p_{\alpha_j}(T))\in\lambda_{\alpha_j}\,\partial^L_\ell J(\bar x(0),\bar y(0),\bar x(T),\bar y(T))+\alpha_j\bar B+N^L_S(\bar x(0),\bar y(0),\bar x(T),\bar y(T)).$$
Then, using steps similar to those used to derive the transversality condition for fixed $(\alpha,\beta)$ in Step I.3.3, we deduce that
$$(p(0),-p(T))\in\lambda\,\partial^L_\ell J(\bar x(0),\bar y(0),\bar x(T),\bar y(T))+N^L_S(\bar x(0),\bar y(0),\bar x(T),\bar y(T)).$$

Step I.5. Taking $\beta\to0$. In this step, we explicitly indicate the dependence on $\beta$ in our notation by introducing a subscript $\beta_j$, where $\beta_j\in(0,1]$ and $\beta_j\to0$. Deriving all the conditions except for the adjoint equation follows a process similar to Step I.4, replacing the subscript $\alpha_j$ by the subscript $\beta_j$. Below, we present our derivation for the adjoint equation. For each $j$, we have
$$\int_{[0,T]}\langle z(t),dq_{\beta_j}(t)\rangle=\int_0^T\Big\langle z(t),-\big[(1-\beta_j)\bar A^{\mathsf T}_{\beta_j}(t)+\beta_jA^{\mathsf T}_{\beta_j}(t)\big]q_{\beta_j}(t)-\big[(1-\beta_j)\bar{\mathcal A}_{\beta_j}(t)^{\mathsf T}+\beta_j\mathcal A^{\mathsf T}_{\beta_j}(t)\big]v_{\beta_j}(t)\Big\rangle\,dt+\int_0^T\sum_{i=1}^r\xi^i_{\beta_j}(t)\langle z(t),\vartheta^i_{\beta_j}(t)q_{\beta_j}(t)\rangle\,dt+\sum_{i=1}^r\int_0^T\langle z(t),\nabla_x\psi_i(t,\bar x(t))\rangle\,d\nu^i_{\beta_j}(t),\qquad(4.82)$$
$$\int_{[0,T]}\langle z(t),\dot v_{\beta_j}(t)\rangle\,dt=\int_0^T\Big\langle z(t),-\big[(1-\beta_j)\bar E_{\beta_j}(t)^{\mathsf T}+\beta_jE_{\beta_j}(t)^{\mathsf T}\big]q_{\beta_j}(t)\Big\rangle\,dt+\int_0^T\Big\langle z(t),-\big[(1-\beta_j)\bar{\mathcal E}_{\beta_j}(t)^{\mathsf T}+\beta_j\mathcal E_{\beta_j}(t)^{\mathsf T}\big]v_{\beta_j}(t)\Big\rangle\,dt,\qquad(4.83)$$
where
$$\max\big\{\|(\bar A_{\beta_j},\bar E_{\beta_j})\|_2,\ \|(A_{\beta_j},E_{\beta_j})\|_2,\ \|(\bar{\mathcal A}_{\beta_j},\bar{\mathcal E}_{\beta_j})\|_2,\ \|(\mathcal A_{\beta_j},\mathcal E_{\beta_j})\|_2\big\}\le\|L_h\|_2,$$
$$(\bar A_{\beta_j},\bar E_{\beta_j})(t)\in\partial^{(x,y)}_\ell f(t,\bar x(t),\bar y(t),\bar u(t)),\qquad(\bar{\mathcal A}_{\beta_j},\bar{\mathcal E}_{\beta_j})(t)\in\partial^{(x,y)}_\ell g(t,\bar x(t),\bar y(t),\bar u(t)),$$
$$\|q_{\beta_j}\|_\infty\le M_p\ \text{and}\ V^1_0(q_{\beta_j})=\|\dot q_{\beta_j}\|_1\le M_q,\qquad\|v_{\beta_j}\|_\infty\le M_p\ \text{and}\ \|\dot v_{\beta_j}\|_2\le M_v,$$
$$\|\nu^i_{\beta_j}\|_{T.V.}\le M_7\ \text{and}\ \|\vartheta^i_{\beta_j}\|_\infty\le L_\psi,\quad\text{for }i=1,\cdots,r.$$
Taking $\beta_j\to0$ and following steps similar to Steps I.3.1 and I.3.2, we obtain the adjoint equations (condition (iii) of our theorem).

Step II. We have concluded proving the theorem under the temporary assumptions (A4.2) and (A3.3)′. The goal of Step II is to remove those two temporary assumptions.

Step II.1. Removing assumption (A3.3)′.
In this step, we remove (A3.3)′, and we simply assume that (A3.3) is satisfied for some positive $\bar\beta(\cdot)$. We use arguments similar to those at the last step of the proof of [58, Theorem 3.1], as well as Remark 4.2.1(ii)-(iii). We first define $\tilde\psi_i(t,x):=\bar\beta_i(t)\psi_i(t,x)$. Notice that:
• $C(t)$ is also the intersection of the zero-sublevel sets of $(\tilde\psi_i(t,\cdot))_{i=1}^r$.
• For some $L_{\tilde\psi}>0$, $\tilde\psi_i$ satisfies (A3.1) for all $i=1,\cdots,r$.
• Condition (A3.3) is equivalent to saying that for $t\in I^0(\bar x)$, the Gramian matrix $G_{\tilde\psi}(t)$ of the vectors $\{\nabla_x\tilde\psi_i(t,\bar x(t)):i\in I^0_{(t,\bar x(t))}\}$ is strictly diagonally dominant.
• $\tilde\psi_1,\cdots,\tilde\psi_r$ satisfy (A3.2), and hence, (3.34) of Lemma 3.2.4 is valid at $\tilde\psi_1,\cdots,\tilde\psi_r,\psi_{r+1}$ when replacing $\bar\eta$ by $\tilde\eta:=\bar\eta\,b_{\bar\beta}$, where $b_{\bar\beta}:=\min\big\{1,\min\{\bar\beta_i(t):t\in[0,T],\ i=1,\cdots,r\}\big\}$.
We denote by $(\tilde P)$ the version of $(P)$ in which the functions $\psi_i$ are replaced by $\tilde\psi_i$. Note that $(P)$ and $(\tilde P)$ coincide, and $((\bar x,\bar y),\bar u)$ is a strong local minimizer for $(\tilde P)$. Furthermore, the data of $(\tilde P)$ satisfy the assumptions required for the version of the maximum principle already proven. Therefore, we apply that version of the maximum principle to $(\tilde P)$, and we get the existence of an adjoint vector $\tilde p=(\tilde q,\tilde v)$ with $\tilde q\in BV([0,T];\mathbb R^n)$ and $\tilde v\in W^{1,2}([0,T];\mathbb R^l)$, finite signed Radon measures $(\tilde\nu^i)_{i=1}^r$ on $[0,T]$, nonnegative functions $(\tilde\xi^i)_{i=1}^r$ in $L^\infty([0,T];\mathbb R_+)$, $L^2$-measurable functions $\tilde{\bar A}(\cdot)$ in $M^{n\times n}([0,T])$, $\tilde{\bar E}(\cdot)$ in $M^{n\times l}([0,T])$, $\tilde{\bar{\mathcal A}}(\cdot)$ in $M^{l\times n}([0,T])$, and $\tilde{\bar{\mathcal E}}(\cdot)$ in $M^{l\times l}([0,T])$, $L^\infty$-measurable functions $(\tilde\vartheta^i(\cdot))_{i=1}^r$ in $M^{n\times n}([0,T])$, and a scalar $\tilde\lambda\ge0$ that satisfy conditions (i)-(vii). To express those conditions in terms of the original data of $(P)$, we replace $\tilde\psi_i(t,x)$ by $\bar\beta_i(t)\psi_i(t,x)$, and we take $p:=\tilde p$, $q:=\tilde q$, $v:=\tilde v$, $\xi^i(\cdot):=\bar\beta_i(\cdot)\tilde\xi^i(\cdot)$, $\lambda:=\tilde\lambda$, $\bar A(\cdot):=\tilde{\bar A}(\cdot)$, $\bar E(\cdot):=\tilde{\bar E}(\cdot)$, $\bar{\mathcal A}(\cdot):=\tilde{\bar{\mathcal A}}(\cdot)$, $\bar{\mathcal E}(\cdot):=\tilde{\bar{\mathcal E}}(\cdot)$, $d\nu^i(\cdot):=\bar\beta_i(\cdot)\,d\tilde\nu^i(\cdot)$, and $\vartheta^i(\cdot):=\frac{1}{\bar\beta_i(\cdot)}\tilde\vartheta^i(\cdot)$.

Step II.2. Removing assumption (A4.2) when the sets $U(t)$ are uniformly bounded.

In this step, we remove (A4.2) (so $h$ is no longer assumed to satisfy (A4.2)), and we assume that the sets $U(t)$ are uniformly bounded. To remove (A4.2), that is, the convexity assumption on $h(t,x,y,U(t))$ for $(x,y)\in\bar N^{(\bar\delta,\bar\delta)}(t)$ and a.e. $t\in[0,T]$, we shall extend the relaxation technique in [70, Section 5.2], developed for global minimizers of Mayer optimal control problems over sweeping processes having constant compact sweeping sets and a constant control set $U$, to the case of strong local minimizers, where the sweeping sets are $\bar N^{(\bar\varepsilon,\bar\delta)}(t)$, which are time-dependent and not necessarily moving in an absolutely continuous way, $U$ is time-dependent, and a joint-endpoints constraint set $S_{\frac{\delta}{2}}$ is present, where $\delta\in(0,\bar\varepsilon)$ is fixed.

Step II.2.1. $(\bar X:=(\bar x,\bar y),\bar u)$ is a $\delta$-strong local minimizer for $(\bar P_\delta)$ with extended $J$. Fix $\delta\in(0,\bar\varepsilon)$. Using Theorem 2.4.3, there is an $L_J$-Lipschitz function $\bar J:\mathbb R^{n+l}\times\mathbb R^{n+l}\to\mathbb R$ that extends $J$ to $\mathbb R^{2(n+l)}$ from $S(\bar\delta)$. By Remark 4.2.6(i), $((\bar x,\bar y),\bar u)$ being a $\bar\delta$-strong local minimizer for $(P)$, it is also a $\delta$-strong local minimum for $(\bar P_\delta)$, in which we use the extension $\bar J$ instead of $J$.

Step II.2.2. $(\bar X,\bar u)$ is a global minimum for a problem $(\bar{\mathcal P})$. Performing appropriate modifications to the technique presented in the proof of [55, Theorem 6.2], we are then able to formulate the following problem $(\bar{\mathcal P})$ associated with $(\bar P_\delta)$ for which the same solution $(\bar X,\bar u)$ is a global minimum:
$$(\bar{\mathcal P})\quad\begin{cases}\text{minimize }\bar J(X(0),X(T))+\bar K\displaystyle\int_0^TL(t,X(t))\,dt\\ \text{over }X:=(x,y)\in W^{1,1}([0,T],\mathbb R^{n+l}),\ u\in\mathcal U,\text{ such that}\\ (\bar D)\ \begin{cases}\dot X(t)\in h(t,x(t),y(t),u(t))-N_{\bar N^{(\bar\varepsilon,\bar\delta)}(t)}(X(t)),&\text{a.e. }t\in[0,T],\\ (X(0),X(T))\in S_{\frac{\delta}{2}}=S\cap\bar B_{\frac{\delta}{2}},\end{cases}\end{cases}$$
where $L:[0,T]\times\mathbb R^{n+l}\to\mathbb R$ and $\bar K>0$ are defined by
$$L(t,X)=L(t,x,y):=\max\Big\{\|x-\bar x(t)\|^2-\tfrac{\delta^2}{4},\ \|y-\bar y(t)\|^2-\tfrac{\delta^2}{4},\ 0\Big\},\qquad(4.84)$$
$$\bar K:=\frac{512\,\bar M_\ell M_J}{5\delta^3}>0,\quad\text{where}\quad\bar M_\ell:=\max\Big\{L_{(\bar x,\bar y)},\ M_h+\tfrac{\bar\mu}{4\bar\eta^2}\bar L\Big\},\quad M_J:=\max_{S(\bar\delta)}|J(X_1,X_2)|,\qquad(4.85)$$
and hence, as $L(t,\bar X(t))\equiv0$, we deduce $\min(\bar{\mathcal P})=J(\bar X(0),\bar X(T))$. We now show that $(\bar X,\bar u)$ is a global minimum for $(\bar{\mathcal P})$. Indeed, let $(X,u)$ be admissible for $(\bar{\mathcal P})$.
Case 1: $\|X-\bar X\|_\infty\le\delta$. Then, $(X,u)$ being admissible for $(\bar P_\delta)$, and $(\bar X,\bar u)$ being a $\delta$-strong local minimum for $(\bar P_\delta)$, yield that
$$\bar J(X(0),X(T))+\bar K\int_0^TL(t,X(t))\,dt=J(X(0),X(T))+\bar K\int_0^TL(t,X(t))\,dt\ge J(X(0),X(T))\ge J(\bar X(0),\bar X(T))=\bar J(\bar X(0),\bar X(T))+\bar K\int_0^TL(t,\bar X(t))\,dt.$$
Case 2: $\|X-\bar X\|_\infty>\delta$. Given that $(X(0),X(T))\in S_{\frac{\delta}{2}}$, there exists $\bar t\in[0,T]$ such that $\|X(\bar t)-\bar X(\bar t)\|=\delta$. Using that the function $t\mapsto\|X(t)-\bar X(t)\|$ is Lipschitz continuous with Lipschitz constant $4\bar M_\ell$ (see equation (3.45)), and the fact that $\|X(0)-\bar X(0)\|\le\frac{\delta}{2}$, we get that the Lebesgue measure of $\big\{t\in[0,T]:\|X(t)-\bar X(t)\|\ge\frac{3\delta}{4}\big\}$ is at least $\frac{\delta}{16\bar M_\ell}$. Hence,
$$\bar J(X(0),X(T))+\bar K\int_0^TL(t,X(t))\,dt\ge-M_J+\bar K\,\frac{\delta}{16\bar M_\ell}\Big[\Big(\frac{3\delta}{4}\Big)^2-\frac{\delta^2}{4}\Big]=M_J\ge J(\bar X(0),\bar X(T))=\bar J(\bar X(0),\bar X(T))+\bar K\int_0^TL(t,\bar X(t))\,dt.$$
This proves that $(\bar X,\bar u)$ is a global minimum for $(\bar{\mathcal P})$.

Step II.2.3. $(\bar X,\bar w):=\big(\bar X,((\underbrace{\bar u,\ldots,\bar u}_{n+l+1}),(\underbrace{1,0,\ldots,0}_{n+l+1}))\big)$ is a global minimum for $(\tilde{\mathcal P})$. Define the problem $(\tilde{\mathcal P})$:
$$(\tilde{\mathcal P})\quad\begin{cases}\text{minimize }\bar J(X(0),X(T))+\bar K\displaystyle\int_0^TL(t,X(t))\,dt\\ \text{over }X:=(x,y)\in W^{1,1}([0,T],\mathbb R^{n+l}),\ w(\cdot):=\big((u_0(\cdot),\cdots,u_{n+l}(\cdot)),(\lambda_0(\cdot),\cdots,\lambda_{n+l}(\cdot))\big)\in\mathcal W,\text{ such that}\\ (\tilde{\mathcal D})\ \begin{cases}\dot X(t)\in\tilde h(t,X(t),w(t))-N_{\bar N^{(\bar\varepsilon,\bar\delta)}(t)}(X(t)),&\text{a.e. }t\in[0,T],\\ (X(0),X(T))\in S_{\frac{\delta}{2}},\end{cases}\end{cases}$$
where $\tilde h:\operatorname{Gr}\big[\bar N^{(\bar\delta,\bar\delta)}(\cdot)\times(U(\cdot))^{n+l+1}\big]\times\Lambda\to\mathbb R^{n+l}$ is defined as
$$\tilde h(t,X,w):=\sum_{i=0}^{n+l}\lambda_ih(t,X,u_i),\qquad(4.86)$$
$$\Lambda:=\Big\{(\lambda_0,\cdots,\lambda_{n+l})\in\mathbb R^{n+l+1}:\lambda_i\ge0\text{ for }i=0,\ldots,n+l\ \text{and}\ \sum_{i=0}^{n+l}\lambda_i=1\Big\},$$
$$\mathcal W:=\Big\{w:[0,T]\longrightarrow\mathbb R^{(m+1)(n+l+1)}\ \text{measurable}:w(t)\in W(t):=(U(t))^{n+l+1}\times\Lambda\ \text{a.e.}\Big\}.$$
First, we note the following two facts that are going to be useful for our goal:
• Notice that $\tilde h$ satisfies (A4.1), and hence, Corollary 3.2.16 yields that for $X_0=(x_0,y_0)\in\bar N^{(\bar\varepsilon,\bar\delta)}(0)$, and for $w\in\mathcal W$, $(\tilde{\mathcal D})$ has a unique solution $X(\cdot)$ corresponding to $(X_0,w)$, which is $(M_h+\frac{\bar\mu}{4\bar\eta^2}\bar L)$-Lipschitz and satisfies (3.44)-(3.46).
• Using that $0<\delta<\bar\varepsilon<\bar\delta$ and that $\bar X:=(\bar x,\bar y)$ is $L_{(\bar x,\bar y)}$-Lipschitz, the function $L$, defined in (4.84), is Lipschitz on $\operatorname{Gr}\bar N^{(\bar\delta,\bar\delta)}(\cdot)$ and satisfies
$$L\equiv0\ \text{on}\ \operatorname{Gr}\bar N^{(\frac{\delta}{2},\frac{\delta}{2})}(\cdot),\quad\text{and}\quad|L|\le\bar\delta^2\ \text{on}\ \operatorname{Gr}\bar N^{(\bar\delta,\bar\delta)}(\cdot).\qquad(4.87)$$
Hence, by the convexity of $\tilde h(t,X,W(t))$ (so $\tilde h$ satisfies (A4)), and by Remark 4.2.5, where $\mathcal L:=\bar K L$, it follows that $(\tilde{\mathcal P})$ admits a global optimal minimizer $(\tilde X,\tilde w)$. We show that $\min(\tilde{\mathcal P})=\min(\bar{\mathcal P})$ and that $(\bar X,\bar w)$ is optimal for $(\tilde{\mathcal P})$. Let $\mathcal U$ be defined in (3.8), let $\mathcal V:=\operatorname{cl}U$ be the compact set, and let
$$\mathcal R:=\big\{\sigma:[0,T]\to M^1_+(\mathcal V):\sigma\ \text{is measurable and}\ \sigma(t)(U(t))=1,\ t\in[0,T]\big\}.$$
This set of relaxed controls satisfies $\mathcal R\subset L^1([0,T],C(\mathcal V;\mathbb R))^*$, which is endowed with the weak* topology. Each regular control function $u\in\mathcal U$ is identified with its associated Dirac relaxed control $\sigma(\cdot)=\delta_{u(\cdot)}$, and thereby $\mathcal U\subset\mathcal R$ (see, e.g., [68]). Define $h_\sigma(t,X)$ and the problem $(\mathcal P)_r$ by
$$h_\sigma(t,X):=\int_{U(t)}h(t,X,u)\,\sigma(t)(du),\quad\forall(t,X)\in\operatorname{Gr}\bar N^{(\bar\delta,\bar\delta)}(\cdot),\ \sigma\in\mathcal R,$$
$$(\mathcal P)_r\quad\begin{cases}\text{minimize }\bar J(X(0),X(T))+\bar K\displaystyle\int_0^TL(t,X(t))\,dt\\ \text{over }X:=(x,y)\in W^{1,1}([0,T],\mathbb R^{n+l}),\ \sigma\in\mathcal R,\text{ such that}\\ (\mathcal D)_r\ \begin{cases}\dot X(t)\in h_\sigma(t,X(t))-N_{\bar N^{(\bar\varepsilon,\bar\delta)}(t)}(X(t)),&\text{a.e. }t\in[0,T],\\ (X(0),X(T))\in S_{\frac{\delta}{2}}.\end{cases}\end{cases}$$
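The constant $\bar K$ of (4.85) is calibrated precisely so that the Case 2 estimate of Step II.2.2 closes: $\bar K\cdot\frac{\delta}{16\bar M_\ell}\big[(\frac{3\delta}{4})^2-\frac{\delta^2}{4}\big]=2M_J$, which turns the lower bound $-M_J+2M_J$ into $M_J$. A quick numerical sanity check of this identity, using arbitrary placeholder values for $\delta$, $\bar M_\ell$, and $M_J$ (the function name below is ours):

```python
# Sanity check (with placeholder values) that K := 512*Ml*MJ/(5*delta^3) turns the
# Case 2 lower bound  -MJ + K*(delta/(16*Ml))*((3*delta/4)^2 - delta^2/4)  into +MJ.
def case2_bound(delta, Ml, MJ):
    K = 512 * Ml * MJ / (5 * delta**3)            # the constant K-bar of (4.85)
    excess = (3 * delta / 4) ** 2 - delta**2 / 4  # value of L on the large-deviation set
    measure = delta / (16 * Ml)                   # lower bound on its Lebesgue measure
    return -MJ + K * measure * excess

for delta, Ml, MJ in [(0.4, 3.0, 5.0), (0.05, 11.0, 2.5)]:
    assert abs(case2_bound(delta, Ml, MJ) - MJ) < 1e-9
```

Any positive choice of the three parameters returns exactly $M_J$, which is how the penalized cost of a trajectory straying by more than $\delta$ dominates $J(\bar X(0),\bar X(T))$.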
Since $h$ satisfies (A4.1) and $\sigma(t)(U(t))=1$ ($\forall t\in[0,T]$), $h_\sigma(t,X)$ is uniformly bounded by $M_h$, a Carathéodory function in $(t,X)$, and $L_h(t)$-Lipschitz in $X$, for all $t$; that is, $h_\sigma(t,X)$ satisfies (A4.1). Using Corollary 3.2.16 for $X_0=(x_0,y_0)\in\bar N^{(\bar\varepsilon,\bar\delta)}(0)$, $\sigma\in\mathcal R$, and $(f,g)(t,X,u)=h(t,X,u):=h_\sigma(t,X)$, the Cauchy problem of $(\mathcal D)_r$ corresponding to $(X_0,\sigma)$ admits a unique solution, which is Lipschitz and satisfies (3.44)-(3.46). It follows that the results in [70, Lemmas 5.1 & 5.2] remain valid for the systems $(\tilde{\mathcal D})$ and $(\mathcal D)_r$ defined here, and also for the corresponding $(\mathcal D)_c$, where
$$(\mathcal D)_c\quad\dot X(t)\in\operatorname{conv}h(t,X(t),U(t))-N_{\bar N^{(\bar\varepsilon,\bar\delta)}(t)}(X(t)),\quad\text{a.e. }t\in[0,T].$$
Therefore, for $X:=(x,y)\in W^{1,1}([0,T],\mathbb R^{n+l})$ with $X(0)\in\bar N^{(\bar\varepsilon,\bar\delta)}(0)$, we have:
$(X,w)$ satisfies $(\tilde{\mathcal D})$ for some $w\in\mathcal W$ $\Longleftrightarrow$ $(X,\sigma)$ satisfies $(\mathcal D)_r$ for some $\sigma\in\mathcal R$ $\Longleftrightarrow$ $X$ satisfies $(\mathcal D)_c$.
Furthermore, due to having (3.46) satisfied by the solutions of $(\mathcal D)_r$ and due to the hypomonotonicity property of the uniformly prox-regular sets $\bar N^{(\bar\varepsilon,\bar\delta)}(t)$ (which we recall to be the product of the uniformly $\frac{2\bar\eta}{L_\psi}$-prox-regular set $C(t)\cap\bar B_{\bar\varepsilon}(\bar x(t))$ with $\bar B_{\bar\delta}(\bar y(t))$), it follows that [36, Theorem 2] (also [15, Proposition 3.5]) is valid. Hence, using that $(\tilde X,\tilde w)$ is optimal for $(\tilde{\mathcal P})$, the proof of [70, Proposition 5.2] holds true for our setting, and therefore, as $(\bar X,\bar u)$ is optimal for $(\bar{\mathcal P})$, we conclude that
$$\min(\mathcal P)_r=\min(\tilde{\mathcal P})=\min(\bar{\mathcal P})=J(\bar X(0),\bar X(T)).\qquad(4.88)$$
Now, since $(\bar X,\bar w)$ is admissible for $(\tilde{\mathcal P})$ at which the objective value is $J(\bar X(0),\bar X(T))$, we deduce that $(\bar X,\bar w)$ is a global minimum for $(\tilde{\mathcal P})$. This terminates proving Key Step 4(c).

Step II.2.4. $((\bar x,\bar y),\bar w)$ is a $\frac{\delta}{2}$-strong local minimum for $(\tilde P)$ to which we apply Theorem 4.2.11. As $(\bar X,\bar w)$ is a global minimizer for $(\tilde{\mathcal P})$, it follows that it is also a $\frac{\delta}{2}$-strong local minimum for $(\tilde{\mathcal P})$, which, by the first equation of (4.87), now has $\bar J(X(0),X(T))$ as objective function. Hence, we conclude that $((\bar x,\bar y),\bar w)$ is a $\frac{\delta}{2}$-strong local minimum for the problem
$$(\tilde P)\quad\begin{cases}\text{minimize }\bar J(x(0),y(0),x(T),y(T))\\ \text{over }X:=(x,y)\in W^{1,1}([0,T],\mathbb R^{n+l}),\ w(\cdot):=\big((u_0(\cdot),\cdots,u_{n+l}(\cdot)),(\lambda_0(\cdot),\cdots,\lambda_{n+l}(\cdot))\big)\in\mathcal W,\text{ such that}\\ (\tilde D)\ \begin{cases}\dot x(t)\in\tilde f(t,x(t),y(t),w(t))-N_{C(t)}(x(t)),&\text{a.e. }t\in[0,T],\\ \dot y(t)=\tilde g(t,x(t),y(t),w(t)),&\text{a.e. }t\in[0,T],\\ (x(0),y(0),x(T),y(T))\in S_{\frac{\delta}{2}},\end{cases}\end{cases}$$
where $(\tilde f,\tilde g)=\tilde h$ is defined in (4.86), that is,
$$\tilde f(t,x,y,w):=\sum_{i=0}^{n+l}\lambda_if(t,x,y,u_i)\quad\text{and}\quad\tilde g(t,x,y,w):=\sum_{i=0}^{n+l}\lambda_ig(t,x,y,u_i).$$
Clearly $(\tilde P)$ is of the form of $(P)$, where $f(t,x,y,u):=\tilde f(t,x,y,w)$, $g(t,x,y,u):=\tilde g(t,x,y,w)$, $S:=S_{\frac{\delta}{2}}$, $U(t):=W(t)$, and $J:=\bar J$. Furthermore, the associated $\tilde h(t,x,y,w)=(\tilde f,\tilde g)(t,x,y,w)$ satisfies that $\tilde h(t,x,y,W(t))$ is convex for each $(t,x,y)\in\operatorname{Gr}\bar N^{(\bar\delta,\bar\delta)}(\cdot)$. Thus, assumptions (A1)-(A5) hold at the strong local minimizer $((\bar x,\bar y),\bar w)$ for $(\tilde P)$, to which the already proven conditions (i)-(vii) of Theorem 4.2.11 apply. Doing so, and noticing these facts:
• $\bar J=J$ on $S(\bar\delta)$, and hence, $\partial^L_\ell\bar J(\bar x(0),\bar y(0),\bar x(T),\bar y(T))=\partial^L_\ell J(\bar x(0),\bar y(0),\bar x(T),\bar y(T))$;
• $\tilde h(t,\bar x(t),\bar y(t),\bar w(t))=h(t,\bar x(t),\bar y(t),\bar u(t))$;
• $\partial^{(x,y)}_\ell\tilde f(t,\bar x(t),\bar y(t),\bar w(t))\subset\partial^{(x,y)}_\ell f(t,\bar x(t),\bar y(t),\bar u(t))$;
• $\partial^{(x,y)}_\ell\tilde g(t,\bar x(t),\bar y(t),\bar w(t))\subset\partial^{(x,y)}_\ell g(t,\bar x(t),\bar y(t),\bar u(t))$;
• $\langle\tilde h(t,\bar x(t),\bar y(t),w),p(t)\rangle=\langle h(t,\bar x(t),\bar y(t),u),p(t)\rangle$, $\forall w=((u,\ldots,u),(1,0,\ldots,0))\in U^{n+l+1}\times\Lambda$;
• $N^L_{S_{\frac{\delta}{2}}}(\bar x(0),\bar y(0),\bar x(T),\bar y(T))=N^L_S(\bar x(0),\bar y(0),\bar x(T),\bar y(T))$;
we conclude that Theorem 4.2.11 holds for $(P)$ without assumption (A4.2).

Step II.3 Proof of the "In addition" part of the theorem. When $S=C_0\times\mathbb R^{n+l}$, for $C_0\subset C(0)\times\mathbb R^l$ closed, Remark 4.2.10 yields that $\lambda=1$. This completes the proof of the theorem.

Table 4.4 Summary of results from Section 4.2.3.
Result: Theorem 4.2.11. Description: We provide necessary conditions, in the form of an extended Pontryagin maximum principle, for a $\bar\delta$-strong local minimizer $((\bar x,\bar y),\bar u)$ for the problem $(P)$.

CHAPTER 5
VALIDATING THEORETICAL RESULTS USING AN EXAMPLE

Consider the problem $(P)$ with the following data.
• The perturbation mappings $f:[0,\frac{\pi}{2}]\times\mathbb R^3\times\mathbb R\times\mathbb R\longrightarrow\mathbb R^3$ and $g:[0,\frac{\pi}{2}]\times\mathbb R^3\times\mathbb R\times\mathbb R\longrightarrow\mathbb R$ are defined by
$$f(t,(x_1,x_2,x_3),y,u)=(x_1-x_2-u+y^2,\ x_1+x_2+u+y^3,\ x_3+t-\pi-1);$$
$$g(t,(x_1,x_2,x_3),y,u)=x_1^2+x_2^2-16+u+y.$$
• The two functions $\psi_1,\psi_2:[0,\frac{\pi}{2}]\times\mathbb R^3\longrightarrow\mathbb R$ are defined by
$$\psi_1(t,x_1,x_2,x_3):=x_1^2+x_2^2+\tfrac{32}{\pi}x_3+\tfrac{32}{\pi}t-48,\qquad\psi_2(t,x_1,x_2,x_3):=x_1^2+x_2^2-\tfrac{32}{\pi}x_3-\tfrac{32}{\pi}t+16,$$
and hence, for each $t\in[0,\frac{\pi}{2}]$, the set $C(t)$ is the nonsmooth, convex and bounded set (see Figure 5.1)
$$C(t)=C_1(t)\cap C_2(t):=\{(x_1,x_2,x_3):\psi_1(t,x_1,x_2,x_3)\le0\}\cap\{(x_1,x_2,x_3):\psi_2(t,x_1,x_2,x_3)\le0\}.$$
• The objective function $J:\mathbb R^8\longrightarrow\mathbb R\cup\{\infty\}$ is defined by
$$J(x_1,x_2,x_3,y_1,x_4,x_5,x_6,y_2):=\begin{cases}-x_4^2-x_5^2+16+\big|\tfrac{\pi}{2}-x_6\big|&\text{if }(x_4,x_5,x_6)\in C(\tfrac{\pi}{2}),\\ \infty&\text{otherwise}.\end{cases}$$
• The control multifunction is the constant $U(t):=[0,1]$ for all $t\in[0,\frac{\pi}{2}]$.
• The set $S$ is given by
$$S:=\Big\{(x_1,x_2,x_3,y_1,x_4,x_5,x_6,y_2)\in\mathbb R^8:\ x_1^2+x_2^2=16,\ x_3=\pi,\ x_2+x_6^2=\tfrac{\pi^2}{4},\ \tfrac{x_1^2}{8}+x_4=2,\ y_1+x_2^2=0\Big\}.$$

Figure 5.1 The sweeping set $C(t_o)$ at a certain time $t_o\in(0,\frac{\pi}{2})$.

Define, for each $t\in[0,\frac{\pi}{2}]$, the curve
$$\Gamma(t):=\{(x_1,x_2,x_3):x_1^2+x_2^2=16\ \text{and}\ x_3=\pi-t\}=(\operatorname{bdry}C_1(t)\cap\operatorname{bdry}C_2(t))\subset\operatorname{bdry}C(t).$$
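Before seeking a candidate on $\Gamma(t)$, one can check numerically that $\Gamma(t)$ indeed lies on both generator boundaries, i.e., that $\psi_1$ and $\psi_2$ vanish along $x_1^2+x_2^2=16$, $x_3=\pi-t$. A short sketch (the Python names are ours, not the dissertation's):

```python
import math

def psi1(t, x1, x2, x3):
    return x1**2 + x2**2 + (32 / math.pi) * x3 + (32 / math.pi) * t - 48

def psi2(t, x1, x2, x3):
    return x1**2 + x2**2 - (32 / math.pi) * x3 - (32 / math.pi) * t + 16

# Sample points of Gamma(t): x1^2 + x2^2 = 16 and x3 = pi - t.
for t in (0.0, 0.3, math.pi / 2):
    for s in (0.0, 1.0, 2.5):
        x1, x2, x3 = 4 * math.cos(s), 4 * math.sin(s), math.pi - t
        assert abs(psi1(t, x1, x2, x3)) < 1e-12  # on bdry C1(t)
        assert abs(psi2(t, x1, x2, x3)) < 1e-12  # on bdry C2(t)
```

The $t$-dependent terms cancel against $x_3=\pi-t$, so the curve $\Gamma(t)$ stays in the corner $\operatorname{bdry}C_1(t)\cap\operatorname{bdry}C_2(t)$ at every time.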
Since $S\subset\Gamma(0)\times\mathbb R\times\mathbb R^3\times\mathbb R$ and $J$ vanishes on $\mathbb R^3\times\mathbb R\times\Gamma(\frac{\pi}{2})\times\mathbb R$ and is strictly positive elsewhere in $\mathbb R^3\times\mathbb R\times C(\frac{\pi}{2})\times\mathbb R$, we may seek for $(P)$ a candidate $((\bar x,\bar y),\bar u)$ for optimality with $\bar x(t):=(\bar x_1(t),\bar x_2(t),\bar x_3(t))$ belonging to $\Gamma(t)$ for every $t$, if possible, and hence we have
$$\begin{cases}\bar x_1^2(t)+\bar x_2^2(t)=16\ \text{and}\ \bar x_3(t)=\pi-t\quad\forall t\in[0,\tfrac{\pi}{2}],\\ \bar x_1(t)\dot{\bar x}_1(t)+\bar x_2(t)\dot{\bar x}_2(t)=0\quad\text{a.e.},\\ (\bar x(0)^{\mathsf T},\bar y(0)^{\mathsf T},\bar x(\tfrac{\pi}{2})^{\mathsf T},\bar y(\tfrac{\pi}{2})^{\mathsf T})\in\big\{(4,0,\pi,0,0,4,\tfrac{\pi}{2},a),\ (-4,0,\pi,0,0,4,\tfrac{\pi}{2},b),\ (4,0,\pi,0,0,-4,\tfrac{\pi}{2},c),\ (-4,0,\pi,0,0,-4,\tfrac{\pi}{2},d);\ a,b,c,d\in\mathbb R\big\}.\end{cases}\qquad(5.1)$$
One can readily verify that all assumptions of Theorem 4.2.11 are satisfied for any choice of $(\bar x,\bar y)$ such that $\bar x(t)\in\Gamma(t)$ for all $t$, with (A3.3) being satisfied for $\bar\beta=(1,1)$. Applying¹ Theorem 4.2.11 to such a candidate $((\bar x,\bar y),\bar u)$, we obtain the existence of an adjoint vector $p=(q,v)$, where $q:=(q_1,q_2,q_3)\in BV([0,\frac{\pi}{2}];\mathbb R^3)$ and $v\in W^{1,1}([0,\frac{\pi}{2}];\mathbb R)$, two finite signed Radon measures $\nu_1,\nu_2$ on $[0,\frac{\pi}{2}]$, functions $\xi_1,\xi_2\in L^\infty([0,\frac{\pi}{2}];\mathbb R_+)$, and $\lambda\ge0$, such that, when incorporating equations (5.1) into Theorem 4.2.11(i)-(vii), we obtain:
(a) $\|p(\frac{\pi}{2})\|+\lambda=1$.
(b) The admissibility equation holds, that is, for a.e. $t\in[0,\frac{\pi}{2}]$,
$$\begin{cases}\dot{\bar x}_1(t)=\bar x_1(t)-\bar x_2(t)-\bar u(t)+\bar y^2(t)-2\bar x_1(t)(\xi_1(t)+\xi_2(t)),\\ \dot{\bar x}_2(t)=\bar x_1(t)+\bar x_2(t)+\bar u(t)+\bar y^3(t)-2\bar x_2(t)(\xi_1(t)+\xi_2(t)),\\ \dot{\bar x}_3(t)=\bar x_3(t)+t-\pi-1-\tfrac{32}{\pi}(\xi_1(t)-\xi_2(t)),\\ \dot{\bar y}(t)=\bar x_1^2(t)+\bar x_2^2(t)-16+\bar u(t)+\bar y(t).\end{cases}$$
(c) The adjoint equation is satisfied, that is, for $t\in[0,\frac{\pi}{2}]$,
$$dq(t)=\begin{pmatrix}-1&-1&0\\1&-1&0\\0&0&-1\end{pmatrix}q(t)\,dt+\begin{pmatrix}-2\bar x_1(t)\\-2\bar x_2(t)\\0\end{pmatrix}v(t)\,dt+(\xi_1(t)+\xi_2(t))\begin{pmatrix}2&0&0\\0&2&0\\0&0&0\end{pmatrix}q(t)\,dt+\begin{pmatrix}2\bar x_1(t)\\2\bar x_2(t)\\\tfrac{32}{\pi}\end{pmatrix}d\nu_1+\begin{pmatrix}2\bar x_1(t)\\2\bar x_2(t)\\-\tfrac{32}{\pi}\end{pmatrix}d\nu_2,$$
$$\dot v(t)=\begin{pmatrix}0&0&0\end{pmatrix}q(t)-v(t).$$
(d) The complementary slackness condition is valid, that is, for a.e. $t\in[0,\frac{\pi}{2}]$,
$$\xi_1(t)\Big(2q_1(t)\bar x_1(t)+2q_2(t)\bar x_2(t)+\tfrac{32}{\pi}q_3(t)\Big)=0,\qquad\xi_2(t)\Big(2q_1(t)\bar x_1(t)+2q_2(t)\bar x_2(t)-\tfrac{32}{\pi}q_3(t)\Big)=0.$$

¹Note that for $(x_1,x_2,x_3)\in\Gamma(t)$ with $-\frac{\sqrt3}{2}<x_1<\frac{\sqrt3}{2}$, we have $\langle\nabla\psi_1(x_1,x_2,x_3),\nabla\psi_2(x_1,x_2,x_3)\rangle=4x_1^2-3<0$, and hence, the maximum principle of [34] cannot be applied to this sweeping set $C(t)$.
Hence, from (d), we deduce that q3(t) = 0 for t ∈ [0, π 2 ] a.e., and cos t q1(t) + sin t q2(t) = 0, ∀t ∈ [0, π 2 ] a.e., (5.3) and the adjoint equation (c) simplifies to the following ˙v(t) = −v(t), dq1(t) = (−q1(t) − q2(t))dt − 8 cos t v(t)dt + q1(t) dt + 8 cos t (dν1 + dν2), (5.4) dq2(t) = (q1(t) − q2(t))dt − 8 sin t v(t)dt + q2(t) dt + 8 sin t (dν1 + dν2),    dq3(t) = −q3(t) dt + 32 π (dν1 − dν2). 2Note that another possible choice for ¯x(·) is ¯x(t)T = (−4 cos t, −4 sin t, π − t). 150 Since v( π 2 ) = 0 then v(t) = 0 ∀t ∈ [0, π 2 ]. Using (a), (5.3), (e), and (5.4), one can get the following    λ = √ 2π 1+(16π)2 2π+ and A = √ 1 1+(16π)2 , 2π+ q(t)T = (A sin t, −A cos t, 0) on [0, π 2 ), q( π 2 )T = (A, 16Aπ, 0), dν1 = dν2 = Aπδnπ o, 2 where δ{a} denotes the unit measure concentrated on the point a. Note that for all t ∈ [0, π 2 we have −q1(t) + q3(t) + v(t) < 0, and hence, the temporary assumption (5.2) is satisfied. ], Therefore, the above analysis, realized via Theorem 4.2.11, produces an admissible pair ((¯x, ¯y), ¯u), where ¯x(t)T = (4 cos t, 4 sin t, π − t), ¯y(t) = 0, and ¯u(t) = 0, ∀t ∈ [0, π 2 ], which is optimal for (P ). Figure 5.2 The solution ¯x(t) (in green) evolving on the set C(t) = C1(t) ∩ C2(t) over different time instances. 151 CHAPTER 6 CONCLUSION AND POSSIBLE FUTURE DIRECTIONS 6.1 Conclusion In this dissertation, we employ the exponential penalty-type approximation method to launch the study of a general model (P ) given by: (P )    minimize J(x(0), y(0), x(T ), y(T )) over ((x, y), u) ∈ W 1,1([0, T ], Rn × Rl) × U such that (D)    ˙x(t) ∈ f (t, x(t), y(t), u(t)) − NC(t)(x(t)), a.e. t ∈ [0, T ], ˙y(t) = g(t, x(t), y(t), u(t)), a.e. t ∈ [0, T ], (x(0), y(0), x(T ), y(T )) ∈ S, where, for t ∈ [0, T ], the set C(t) is defined as the intersection of a finite number of zero sub-level sets of (ψi(t, ·))r i=1 , referred to as generators. 
One of the main results of our work, which is global, encompasses the existence and uniqueness of a Lipschitz solution for the Cauchy problem corresponding to our dynamic (D) without requiring any Lipschitz property on C(·), a condition commonly required in the literature (see, e.g., [36]). Instead, we assume that Gr C(·) is bounded and that the gradients of the active generators are positively linearly independent. Note that this is the first such result based on the method of exponential penalty approximation for general nonsmooth moving sweeping sets, even for the uncoupled sweeping process. Another global main result is the existence of an optimal solution for our problem (P) under global assumptions. We note that this constitutes the first existence result for optimal solutions over general time-dependent sweeping sets. The main local result consists of deriving, under minimal assumptions on the data, a complete set of necessary conditions in the form of a nonsmooth Pontryagin maximum principle for strong local minimizers of the problem (P) via the development of the exponential penalization technique. Our Pontryagin maximum principle generalizes previously known Pontryagin maximum principle results ([30, 31, 33, 34, 70, 55, 58]).
In fact, we establish a Pontryagin maximum principle in its expected form (i.e., standard nontriviality condition, adjoint equation, transversality condition, and maximality condition on the Hamiltonian) for optimal control problems over the sweeping process (1.2) in each of the following settings: (i) when the nonsmooth moving sweeping sets C(t) are bounded and general (no restriction on the corners); (ii) when the general nonsmooth sweeping sets are unbounded (constant or moving); (iii) when a joint state endpoints constraint set is present, the convexity of f(t, x, U(t)) is absent, or the constraint qualification holds only locally rather than globally, for all types of sweeping sets: smooth, nonsmooth, constant, moving, bounded, or unbounded; (iv) when the sweeping process is coupled with a differential equation.

6.2 Future directions

In this section, we outline several promising future directions that stem from our current work on optimal control problems over sweeping processes. We focus on five key areas: extending the model to include state constraints, developing a numerical algorithm to solve our model, incorporating control into the sweeping set, exploring the bilateral minimal time function in the context of sweeping processes, and applying these results to real-world scenarios.

Project 1: Adding a state constraint

We are currently working on extending the techniques discussed earlier to address problems that include explicit external state constraints: ω(t, x(t), y(t)) ≤ 0. This implies that our approximating problems differ from those in Chapter 4 due to the presence of an additional explicit state constraint. This introduces challenges when attempting to prove the boundedness of the adjoint vector for the approximating problem, which subsequently complicates the limit-taking process.
It is worth noting that adding a state constraint to the sweeping process has been addressed in the literature, as seen in [44] for example, but only for a special case of our model.

Project 2: Numerical algorithm

We are interested in constructing a numerical algorithm to solve our Mayer problem (P), as in [32, 56, 59]. We plan to expand the domain of applicability of the numerical method to:

• a time-dependent sweeping set C(t),
• an initial state set C0 instead of a fixed x0,
• a final endpoint set CT instead of a free final endpoint.

Project 3: The sweeping set is controlled and is of the form C(t) + u(t)

A potential future direction for this work would involve exploring the effects of introducing a control function into the sweeping set. Specifically, one could investigate how our results would change when the sweeping set is defined as C(t) := C + v(t), where v(·) is a control function belonging to W^{1,2}.

Project 4: Finding the bilateral minimal time function for the sweeping process

The bilateral minimal time function, introduced by Clarke and Nour in [20], defines T(α, β) as the minimum time taken by a trajectory to go from α to β. In my master's thesis, I studied the variational analysis and the sensitivity relations of the bilateral minimal time function in order to study the regularity of this function for nonlinear control systems. The results we obtained, published in [16], extend the main result of [54], where a similar result is obtained for the linear case. We can integrate the study of the bilateral minimal time function with the sweeping process. More specifically, we can study the bilateral minimal time function when the set-valued map that defines the trajectory is given as a sweeping process. This would build on the work done in [24], where the authors studied the unilateral minimal time function within the context of sweeping processes.
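To give a feel for what T(α, β) measures, here is a hedged toy illustration of our own (not taken from the thesis, and not involving a sweeping process): for the simple dynamics ẋ = u, ∥u∥ ≤ 1, confined to a square with an internal wall, T(α, β) is the geodesic travel time around the obstacle, and it can be approximated by Dijkstra's algorithm on an 8-neighbor grid. The wall, grid size, and endpoints are arbitrary choices.

```python
import heapq
import numpy as np

# Grid over [0,1]^2 with a vertical wall at x = 0.5 for 0 <= y <= 0.75.
n = 81
hgrid = 1.0 / (n - 1)

def blocked(i, j):
    x, y = i * hgrid, j * hgrid
    return abs(x - 0.5) < hgrid / 2 and y <= 0.75

def min_time(src, dst):
    # Dijkstra over the 8-neighbor grid graph; edge cost = travel time at speed 1.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == dst:
            return d
        if d > dist.get((i, j), np.inf):
            continue
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n and not blocked(ni, nj):
                    nd = d + hgrid * (di * di + dj * dj) ** 0.5
                    if nd < dist.get((ni, nj), np.inf):
                        dist[(ni, nj)] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
    return np.inf

alpha, beta = (20, 20), (60, 20)  # points left and right of the wall
print(min_time(alpha, beta))      # detour time: well above the straight-line 0.5
print(min_time(beta, alpha))      # equal, since these dynamics are reversible
```

For a sweeping process the trajectory map is no longer reversible in general, which is precisely why the bilateral function T(α, β) becomes a nontrivial object to study in that setting.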
Project 5: Real-life applications of the sweeping process

Another promising future direction involves validating both the numerical and theoretical results of optimal control problems governed by sweeping processes using real-life case study models and experimental setups, such as crowd motion models in emergency evacuations, robotics models, marine surface vehicle modeling, and nanoparticle modeling.

BIBLIOGRAPHY

[1] L. Adam and J. Outrata. On optimal control of a sweeping process coupled with an ordinary differential equation. Discrete Contin. Dyn. Syst. B, 19:2709–2738, 2014.

[2] A. Adly, F. Nacry, and L. Thibault. Discontinuous sweeping process with prox-regular sets. ESAIM COCV, 23(4):1293–1329, 2017.

[3] A. Bensoussan, K. Chandrasekaran, and J. Turi. Optimal control of variational inequalities. Commun. Inf. Syst., 10(4):203–220, 2010.

[4] A. Bergqvist. Magnetic vector hysteresis model with dry friction-like pinning. Phys. B Condens. Matter, 233(4):342–347, 1997.

[5] A. Bouach, T. Haddad, and B.S. Mordukhovich. Optimal control of nonconvex integro-differential sweeping processes. J. Differ. Equ., 329:255–317, 2022.

[6] M. Brokate and P. Krejčí. Optimal control of ODE systems involving a rate-independent variational inequality. Discrete Contin. Dyn. Syst. B, 18:331–348, 2013.

[7] M. Brokate and J. Sprekels. Hysteresis and Phase Transitions, volume 121. Springer, New York, 1996.

[8] C. Nour, R. J. Stern, and J. Takche. Proximal smoothness and the exterior sphere condition. Journal of Convex Analysis, 16(2):501–514, 2009.

[9] A. Canino. On p-convex sets and geodesics. Journal of Differential Equations, 75(1):118–157, 1998.

[10] T.H. Cao and B. Mordukhovich. Optimality conditions for a controlled sweeping process with applications to the crowd motion model. Disc. Cont. Dyn. Syst. B, 22:267–306, 2017.

[11] T.H. Cao and B. Mordukhovich. Optimal control of a nonconvex perturbed sweeping process. Journal of Differential Equations, 266:1003–1050, 2019.

[12] T.H.
Cao, G. Colombo, B. Mordukhovich, and D. Nguyen. Optimization of fully controlled sweeping processes. Journal of Differential Equations, 295:138–186, 2021.

[13] T.H. Cao, G. Colombo, B. Mordukhovich, and D. Nguyen. Optimization and discrete approximation of sweeping processes with controlled moving sets and perturbations. Journal of Differential Equations, 274:461–509, 2021.

[14] T.H. Cao, B.S. Mordukhovich, D. Nguyen, and T. Nguyen. Applications of controlled sweeping processes to nonlinear crowd motion models with obstacles. IEEE Control Systems Letters, 6:740–745, 2021.

[15] C. Castaing, A. Salvadori, and L. Thibault. Functional evolution equations governed by nonconvex sweeping process. J. Nonlinear Conv. Anal., 2:217–241, 2001.

[16] S. Chamoun and C. Nour. A nonlinear φ0-convexity result for the bilateral minimal time function. Journal of Convex Analysis, 28(1):81–102, 2021.

[17] S. S. Chamoun. Variational analysis and sensitivity relations of the bilateral minimal time function for nonlinear differential inclusions. Master's thesis, Lebanese American University, 2019.

[18] F. H. Clarke. Optimization and Nonsmooth Analysis. John Wiley, New York, 1983.

[19] F. H. Clarke. Functional Analysis, Calculus of Variations and Optimal Control, volume 264 of Graduate Texts in Mathematics. Springer, London, 2013.

[20] F. H. Clarke and C. Nour. The Hamilton-Jacobi equation of minimal time control. J. Convex Anal., 11(2):413–436, 2004.

[21] F. H. Clarke, R. J. Stern, and P. R. Wolenski. Proximal smoothness and the lower-C2 property. J. Convex Anal., 2:117–144, 1995.

[22] F. H. Clarke, Yu. Ledyaev, R. J. Stern, and P. R. Wolenski. Nonsmooth Analysis and Control Theory, volume 178 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.

[23] G. Colombo and P. Gidoni. On the optimal control of rate-independent soft crawlers. Journal de Mathématiques Pures et Appliquées, 2020.

[24] G. Colombo and M. Palladino.
The minimum time function for the controlled Moreau's sweeping process. SIAM Journal on Control and Optimization, 54(4):2036–2062, 2016.

[25] G. Colombo, R. Henrion, N.D. Hoang, and B.S. Mordukhovich. Optimal control of the sweeping process. Dyn. Contin. Discrete Impuls. Syst. B, 19:117–159, 2012.

[26] G. Colombo, R. Henrion, N.D. Hoang, and B.S. Mordukhovich. Optimal control of the sweeping process over polyhedral controlled sets. Journal of Differential Equations, 260(4):3397–3447, 2016.

[27] G. Colombo, B. Mordukhovich, and D. Nguyen. Optimal control of sweeping processes in robotics and traffic flow models. J. Optim. Theory Appl., 182(2):439–472, 2019.

[28] G. Colombo, B. Mordukhovich, and D. Nguyen. Optimization of a perturbed sweeping process by constrained discontinuous controls. SIAM Journal on Control and Optimization, 58(4):2678–2709, 2020.

[29] G. Colombo and A. Marigonda. Differentiability properties for a class of non-convex functions. Calculus of Variations, 25:1–31, 2006. doi: 10.1007/s00526-005-0352-7.

[30] M.d.R. de Pinho, M.M.A. Ferreira, and G.V. Smirnov. Optimal control involving sweeping processes. Set-Valued Var. Anal., 27(2):523–548, 2019.

[31] M.d.R. de Pinho, M.M.A. Ferreira, and G.V. Smirnov. Correction to: Optimal control involving sweeping processes. Set-Valued Var. Anal., 27:1025–1027, 2019.

[32] M.d.R. de Pinho, M.M.A. Ferreira, and G.V. Smirnov. Optimal control with sweeping processes: Numerical method. J. Optim. Theory Appl., 185:845–858, 2020.

[33] M.d.R. de Pinho, M.M.A. Ferreira, and G.V. Smirnov. Necessary conditions for optimal control problems with sweeping systems and end point constraints. Optimization, 71(11):3363–3381, 2021.

[34] M.d.R. de Pinho, M.M.A. Ferreira, and G.V. Smirnov. A maximum principle for optimal control problems involving sweeping processes with a nonsmooth set. J. Optim. Theory Appl., 199:273–297, 2023.

[35] D. Drusvyatskiy, A.D. Ioffe, and A.S. Lewis.
Clarke subgradients of directionally Lipschitzian stratifiable functions. Mathematics of Operations Research, 40(2):328–349, 2015.

[36] J.F. Edmond and L. Thibault. Relaxation of an optimal control problem involving a perturbed sweeping process. Mathematical Programming, Series B, 104:347–373, 2005.

[37] L. C. Evans. Weak convergence methods for nonlinear partial differential equations, volume 74 of CBMS Regional Conference Series in Mathematics. Conference Board of the Mathematical Sciences, Washington, DC, 1990.

[38] L. C. Evans. Partial Differential Equations. American Mathematical Society, Providence, RI, 1998.

[39] J.K. Hale. Ordinary Differential Equations. Krieger Publishing Company, 2nd edition, 1980.

[40] J.M. Harrison and M.I. Reiman. Reflected Brownian motion on an orthant. Annals of Probability, 9:302–308, 1981.

[41] R. Henrion, A. Jourani, and B. S. Mordukhovich. Controlled polyhedral sweeping processes: Existence, stability, and optimality conditions. Journal of Differential Equations, 366:408–443, 2023.

[42] C. Hermosilla and M. Palladino. Optimal control of the sweeping process with a nonsmooth moving set. SIAM Journal on Control and Optimization, 60(5):2811–2834, 2022.

[43] J.-B. Hiriart-Urruty. Extension of Lipschitz functions. Journal of Mathematical Analysis and Applications, 77:539–554, 1980.

[44] N.T. Khalil and F.L. Pereira. A maximum principle for state-constrained optimal sweeping control problems. IEEE Control Systems Letters, 7:43–48, 2023.

[45] P. Krejčí and J. Sprekels. Temperature-dependent hysteresis in one-dimensional thermovisco-elastoplasticity. Applied Mathematics, 43(3):173–205, 1998.

[46] B. Li. Generalizations of Diagonal Dominance in Matrix Theory. PhD thesis, University of Saskatchewan, Regina, 1997.

[47] X.-S. Li and S.-C. Fang. On the entropic regularization method for solving min-max problems with applications. Mathematical Methods of Operations Research, 46:119–130, 1997.

[48] B. S. Mordukhovich.
Variational Analysis and Generalized Differentiation, I: Basic Theory. Springer, Berlin, 2006.

[49] J.J. Moreau. Rafle par un convexe variable, I. Travaux Séminaire d'Analyse Convexe, Montpellier 1, Exposé 15:36, 1971.

[50] J.J. Moreau. Rafle par un convexe variable, II. Travaux Séminaire d'Analyse Convexe, Montpellier 2, Exposé 3:43, 1972.

[51] J.J. Moreau. Evolution problem associated with a moving convex set in a Hilbert space. Journal of Differential Equations, 26:347–374, 1977.

[52] F. Nacry and L. Thibault. Regularization of sweeping process: old and new. Pure and Applied Functional Analysis, 4(1):59–117, 2019.

[53] F. Nacry, J. Noel, and L. Thibault. On first and second order state-dependent prox-regular sweeping process. Pure and Applied Functional Analysis, 6(6):1453–1493, 2021.

[54] C. Nour. Proximal subdifferential of the bilateral minimal time function and some regularity applications. Journal of Convex Analysis, 20(4):1095–1112, 2013.

[55] C. Nour and V. Zeidan. Optimal control of nonconvex sweeping processes with separable endpoints: Nonsmooth maximum principle for local minimizers. Journal of Differential Equations, 318:113–168, 2022.

[56] C. Nour and V. Zeidan. Numerical solution for a controlled nonconvex sweeping process. IEEE Control Systems Letters, 6:1190–1195, 2022.

[57] C. Nour and V. Zeidan. A control space ensuring the strong convergence of continuous approximation for a controlled sweeping process. Set-Valued and Variational Analysis, 31(3), 2023.

[58] C. Nour and V. Zeidan. Pontryagin-type maximum principle for a controlled sweeping process with nonsmooth and unbounded sweeping set. Journal of Convex Analysis, 31(3):787–825, 2024.

[59] C. Nour and V. Zeidan. Numerical method for a controlled sweeping process with nonsmooth sweeping set. Journal of Optimization Theory and Applications, 203(2):1385–1412, 2024.

[60] R. A. Poliquin and R. T. Rockafellar. Prox-regular functions in variational analysis.
Transactions of the American Mathematical Society, 348(5):1805–1838, 1996.

[61] R. A. Poliquin, R. T. Rockafellar, and L. Thibault. Local differentiability of distance functions. Transactions of the American Mathematical Society, 352(11):5231–5249, 2000.

[62] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 1998.

[63] G. Salinetti and R. J.-B. Wets. On the convergence of sequences of convex sets in finite dimensions. SIAM Review, 21:18–33, 1979.

[64] L. Thibault and J. J. Moreau. Sweeping process with bounded truncated retraction. Journal of Convex Analysis, 23(4):1051–1098, 2016.

[65] A. A. Tolstonogov. Polyhedral sweeping processes with unbounded nonconvex-valued perturbation. Journal of Differential Equations, 263:7965–7983, 2017.

[66] R. Vinter. Optimal Control, volume 1. Springer, 2017.

[67] A. Visintin. Differential Models of Hysteresis, volume 111. Springer Berlin Heidelberg, 1994.

[68] J. Warga. Optimal Control of Differential and Functional Equations. Academic Press, New York and London, 1972.

[69] V. Zeidan. A modified Hamilton-Jacobi approach in the generalized problem of Bolza. Appl. Math. Optim., 11:97–109, 1984.

[70] V. Zeidan, C. Nour, and H. Saoud. A nonsmooth maximum principle for a controlled nonconvex sweeping process. Journal of Differential Equations, 269(11):9531–9582, 2021.

APPENDIX

APPENDIX TO CHAPTERS 3-4

Translating Lemma 6.2 in [58] to our setting gives us the following lemma.

Lemma .0.1. Assume that ψi is continuous for all i = 1, . . . , r. Let αn ≥ 0, for all n ∈ N, with αn → αo, and let (tn, cn) ∈ Gr C(·) be a sequence such that I^{αn}_{(tn,cn)} ≠ ∅ for all n ∈ N and (tn, cn) → (to, co). Then, I^{αo}_{(to,co)} ≠ ∅, and there exist ∅ ≠ Jo ⊂ {1, . . . , r} and a subsequence of (αn, tn, cn)n, which we do not relabel, such that I^{αn}_{(tn,cn)} = Jo ⊂ I^{αo}_{(to,co)} for all n ∈ N.
In particular, for all a ≥ 0 and for any continuous function x : [0, T] → Rn such that x(t) ∈ C(t) for all t ∈ [0, T], the set I^a(x) is closed, and hence compact. This result shall be used in different places of the thesis.

Lemma .0.2. (i) Let (xn, yn) ∈ W^{1,∞}([0, T]; R^{n+d}) and (ξ^1_n, · · · , ξ^R_n, ζn) ∈ L^∞([0, T]; R^{R+1}_+) be such that, for some positive constants M1, M2, M3, we have, ∀n ∈ N and ∀i ∈ {1, · · · , R},

∥(xn, yn)∥∞ ≤ M1, ∥(ẋn, ẏn)∥∞ ≤ M2, ∥(ξ^i_n, ζn)∥∞ ≤ M3. (.1)

Then, there exist (x, y) ∈ W^{1,∞}([0, T]; R^{n+d}) and (ξ^1, · · · , ξ^R, ζ) ∈ L^∞([0, T]; R^{R+1}_+) such that (xn, yn) and (ξ^1_n, · · · , ξ^R_n, ζn) admit a subsequence (not relabeled) satisfying, ∀i ∈ {1, · · · , R},

(xn, yn) → (x, y) uniformly, (ẋn, ẏn) → (ẋ, ẏ) weakly* in L^∞, (ξ^i_n, ζn) → (ξ^i, ζ) weakly* in L^∞, (.2)

∥(x, y)∥∞ ≤ M1, ∥(ẋ, ẏ)∥∞ ≤ M2, ∥(ξ^i, ζ)∥∞ ≤ M3.

(ii) For given qi : [0, T] × Rn → R, let Q(t) := ∩^R_{i=1} {x ∈ Rn : qi(t, x) ≤ 0}, and let (¯x, ¯y) ∈ C([0, T]; Rn × Rl) be such that ¯x(t) ∈ Q(t) for all t ∈ [0, T]. Assume that (A2) is satisfied by C(t) := Q(t) and that, for some ¯δ > 0, (A3.1) and (A4.1) hold at ((¯x, ¯y); ¯δ), respectively, by ψi := qi and h = (f, g) : [0, T] × Rn × Rl × Rm → Rn × Rl. Let (xn, yn) and (ξ^1_n, · · · , ξ^R_n, ζn) be such that (xn(t), yn(t)) ∈ [Q(t) ∩ B̄_{¯δ}(¯x(t))] × B̄_{¯δ}(¯y(t)) for all t ∈ [0, T] and (.1) is satisfied, and let (x, y, ξ^i, ζ) be their corresponding limits via (.2). Consider un ∈ U such that, for all n ∈ N, ((xn, yn), un) and (ξ^1_n, · · · , ξ^R_n, ζn) satisfy the system

ẋ(t) = f(t, x(t), y(t), u(t)) − Σ^R_{i=1} ξ^i(t) ∇x qi(t, x(t)), a.e. t ∈ [0, T], (.3)
ẏ(t) = g(t, x(t), y(t), u(t)) − ζ(t) ∇y φ(t, y(t)), a.e. t ∈ [0, T],

where φ is given by (3.31). Then, in either of the following cases, there exists u ∈ U such that ((x, y), u) and (ξ^1, · · · , ξ^R, ζ) also satisfy system (.3).

Case 1. There exists a subsequence of un that converges pointwise a.e. to some u ∈ U.

Case 2. (A1) and (A4.2) are satisfied.

Proof.
(i): By (.1), the sequence (xn, yn)n is equicontinuous and uniformly bounded. Hence, using the Arzelà-Ascoli theorem and the fact that (ẋn, ẏn) is uniformly bounded in L^∞, it follows that there exists (x, y) ∈ W^{1,∞}([0, T]; R^{n+d}) such that, along a subsequence (we do not relabel) of (xn, yn), we have (xn, yn) → (x, y) uniformly and (ẋn, ẏn) → (ẋ, ẏ) weakly* in L^∞, with (x, y) and (ẋ, ẏ) satisfying the bounds in (.2) (see Theorem 2.4.13). As ∥(ξ^i_n, ζn)∥∞ ≤ M3 for all i = 1, · · · , R and all n ∈ N, some subsequences of (ξ^1_n, · · · , ξ^R_n, ζn) converge in the weak*-topology to some (ξ^1, · · · , ξ^R, ζ) ∈ L^∞ which satisfy the required bound in (.2) (see Theorem 2.4.11).

(ii) Case 1. Let t ∈ [0, T) be a Lebesgue point of ẋ(·), ẏ(·), f(·, x(·), y(·), u(·)), g(·, x(·), y(·), u(·)), ξ^i(·) for all i = 1, · · · , R, and ζ(·), and let τ ∈ (0, T − t). Then, (.3) implies

(xn(t+τ) − xn(t))/τ = (1/τ) ∫_t^{t+τ} [f(s, xn(s), yn(s), un(s)) − Σ^R_{i=1} ξ^i_n(s) ∇x qi(s, xn(s))] ds, (.4)
(yn(t+τ) − yn(t))/τ = (1/τ) ∫_t^{t+τ} [g(s, xn(s), yn(s), un(s)) − ζn(s) ∇y φ(s, yn(s))] ds.

Using the Dominated Convergence Theorem and taking the limit as n → ∞ in (.4), we deduce that

(x(t+τ) − x(t))/τ = (1/τ) ∫_t^{t+τ} [f(s, x(s), y(s), u(s)) − Σ^R_{i=1} ξ^i(s) ∇x qi(s, x(s))] ds, (.5)
(y(t+τ) − y(t))/τ = (1/τ) ∫_t^{t+τ} [g(s, x(s), y(s), u(s)) − ζ(s) ∇y φ(s, y(s))] ds.

Now, letting τ → 0 in (.5), we get that (.3) is satisfied at every such Lebesgue point t, and hence it holds for a.e. t ∈ [0, T].

(ii) Case 2. For s ∈ [0, T] a.e., define in Rn × Rl the sets Sn(s) := h(s, xn(s), yn(s), U(s)) and S(s) := h(s, x(s), y(s), U(s)). Using (A1), the continuity of h(s, ·, ·, ·) in (A4.1), and the convexity assumption (A4.2), it follows that Sn(s) and S(s) are nonempty closed convex sets and that Sn(s) Hausdorff-converges to S(s).
Hence, the Filippov Selection Theorem yields that ((xn, yn), un) and (ξ^i_n, ζn) satisfying (.3) is equivalent to: for all z ∈ R^{n+d} and s ∈ [0, T] a.e.,

⟨z, (ẋn(s), ẏn(s))⟩ ≤ σ(z, h(s, xn(s), yn(s), U(s))) − ⟨z, (Σ^R_{i=1} ξ^i_n(s) ∇x qi(s, xn(s)), ζn(s) ∇y φ(s, yn(s)))⟩. (.6)

Furthermore, by (2.4) and the positive homogeneity of σ(·, Sn) and σ(·, S), we deduce that

σ(z, h(s, xn(s), yn(s), U(s))) → σ(z, h(s, x(s), y(s), U(s))) as n → ∞, for all z ∈ R^{n+d} and s ∈ [0, T] a.e.,

and the bound on h in (A4.1) gives that, for z ∈ R^{n+d},

|σ(z, h(s, xn(s), yn(s), U(s)))| ≤ 2∥z∥ Mh.

Thus, for t ∈ [0, T) a Lebesgue point of ξ^i(·), ζ(·), ẋ(·), ẏ(·), and σ(z, h(·, x(·), y(·), U(·))), and for τ ∈ (0, T − t), integrating (.6) on [t, t + τ] and then taking the limit as n → ∞, the Dominated Convergence Theorem yields that, for all z = (z1, z2) ∈ Rn × Rl,

∫_t^{t+τ} ⟨z, (ẋ(s), ẏ(s))⟩ ds ≤ ∫_t^{t+τ} [σ(z, h(s, x(s), y(s), U(s))) − ⟨z, (Σ^R_{i=1} ξ^i(s) ∇x qi(s, x(s)), ζ(s) ∇y φ(s, y(s)))⟩] ds.

Dividing the last inequality by τ and taking the limit as τ → 0, we get that, for all z ∈ Rn × Rl and every such Lebesgue point t,

⟨z, (ẋ(t), ẏ(t))⟩ ≤ σ(z, h(t, x(t), y(t), U(t))) − ⟨z, (Σ^R_{i=1} ξ^i(t) ∇x qi(t, x(t)), ζ(t) ∇y φ(t, y(t)))⟩,

and hence this inequality is valid for t ∈ [0, T] a.e. Therefore, by means of the Filippov Selection Theorem, there exists u ∈ U such that ((x, y), u) and (ξ^1, · · · , ξ^R, ζ) satisfy system (.3).

Remark .0.3. When ¯δ = ∞, Lemma .0.2 remains valid, with (¯x, ¯y) and the assumptions involving them now being superfluous. In this case, recall that (A3.1), (A4.1), and (A4.2) are replaced by (A3.1)G, (A4.1)G, and (A4.2)G, respectively.
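As a closing numeric illustration of the weak* compactness underlying Lemma .0.2(i), the script below shows, for an artificial example of our own choosing (not an object from the thesis), how a bounded sequence in L^∞ can converge weak* without converging pointwise: the multipliers oscillate ever faster, yet their pairings with a fixed L^1 test function converge to the pairing of the weak* limit.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1_000_001)
dt = t[1] - t[0]

def xi(n):
    # Bounded multipliers oscillating faster and faster; weak* limit is 1.
    return 1.0 + np.sign(np.sin(2.0**n * np.pi * t))

phi = np.exp(-t)                 # a fixed L1 test function
target = np.sum(1.0 * phi) * dt  # pairing of the weak* limit with phi

for n in (2, 6, 10):
    gap = abs(np.sum(xi(n) * phi) * dt - target)
    print(n, gap)                # gaps shrink as n grows
```

No subsequence of these functions converges pointwise a.e., which is exactly why the lemma extracts only weak* limits for the multipliers while reserving uniform convergence for the states.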