BLOW-UP PROBLEMS FOR THE HEAT EQUATION WITH LOCAL NONLINEAR NEUMANN BOUNDARY CONDITIONS

By

Xin Yang

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Mathematics — Doctor of Philosophy

2017

ABSTRACT

This thesis studies the blow-up problem for the heat equation u_t = ∆u in a C^2 bounded open subset Ω of R^n (n ≥ 2) with positive initial data u0 and a local nonlinear Neumann boundary condition: ∂u/∂n = u^q on a part Γ1 of the boundary ∂Ω for some q > 1, and ∂u/∂n = 0 on the rest of the boundary. The motivation for the study is the partial damage to the insulation on the surface of space shuttles caused by high-speed flying objects.

First, we establish the local existence and uniqueness of the classical solution for such a problem. Secondly, we show the finite-time blowup of the solution and estimate both upper and lower bounds of the blow-up time T*. In addition, the asymptotic behaviour of T* on q, M0 (the maximum of the initial data) and |Γ1| (the surface area of Γ1) is studied.

• As q → 1+, the order of T* is exactly (q − 1)^{−1}.

• As M0 → 0, the order of T* is at least ln(M0^{−1}); if the region near Γ1 is convex, then the order of T* is at least M0^{−(q−1)}/ln(M0^{−1}); if Ω is convex, then the order of T* is at least M0^{−(q−1)}. On the other hand, if the initial data u0 does not oscillate too much, then the order of T* is at most M0^{−(q−1)}.

• As |Γ1| → 0, the order of T* is at least ln(|Γ1|^{−1}) and at most |Γ1|^{−1}. If the region near Γ1 is convex, then the order of T* is at least |Γ1|^{−1/(n−1)}/ln(|Γ1|^{−1}) for n ≥ 3 and |Γ1|^{−1}/(ln(|Γ1|^{−1}))^2 for n = 2. If Ω is convex, then the order of T* is at least |Γ1|^{−1/(n−1)} for n ≥ 3 and |Γ1|^{−1}/ln(|Γ1|^{−1}) for n = 2.
Finally, we provide two strategies from an engineering point of view (that is, by changing the setup of the original problem) to prevent the finite-time blowup. Moreover, if the region near Γ1 is convex, then one of the strategies can be applied to bound the solution from above by M1 for any M1 > M0. For the space shuttle mentioned in the motivation of this thesis, Γ1 is on the left wing of the shuttle, so the region near Γ1 is indeed convex. In addition, the relation between T* and small surface area |Γ1| is of particular interest for this problem. As an application of the above estimates to this problem, let n = 3 and |Γ1| → 0; then the order of T* is between |Γ1|^{−1/2}/ln(|Γ1|^{−1}) and |Γ1|^{−1}. On the other hand, one of the strategies can be applied to prevent the temperature from becoming too high.

This thesis seems to be the first to systematically study the heat equation with piecewise continuous Neumann boundary conditions. It also seems to be the first to investigate the relation between T* and |Γ1|, especially when |Γ1| → 0. The key innovative part of this thesis is Chapter 4. First, the new method developed in Chapter 4 is able to derive a lower bound for T* without the convexity assumption on the domain, which was a common requirement in the historical works. Secondly, even for convex domains, the lower bound estimate obtained by this new method improves the previous results significantly. Thirdly, this method does not involve any differential inequality argument, which was an essential technique in past work on blow-up time estimates.

ACKNOWLEDGMENTS

I would like to express my deepest gratitude to my thesis adviser, Dr. Zhengfang Zhou, for his inspiring guidance and tremendous support during my graduate study. He not only introduced such an interesting thesis problem to me, but also provided technical suggestions on crucial steps. Moreover, his consistent encouragement gave me the confidence to conquer all the difficulties. I would like to thank Dr.
Keith Promislow, Dr. Jeffrey Schenker, Dr. Willie Wong and Dr. Baisheng Yan for serving as my doctoral committee members and providing invaluable suggestions. I am grateful to Dr. Gabriel Nagy, Dr. Keith Promislow, Dr. Benjamin Schmidt and Dr. Baisheng Yan for supporting my postdoctoral applications. I also would like to thank the professors who supported my research in one way or another: Dr. Keith Promislow, for discussions on the math model of this thesis; Dr. Benjamin Schmidt, for discussions on the geometric properties of the boundary; Dr. Baisheng Yan, for suggestions on my research from time to time. In addition, I appreciate the research assistantships provided by Dr. Patricia Lamm and Dr. Zhengfang Zhou, which facilitated my research progress.

During these years of graduate study, I have broadened my view by taking courses in various areas. Special thanks go to Dr. Gabor Francsics, Dr. Jun Kitagawa, Dr. Keith Promislow, Dr. Willie Wong and Dr. Zhengfang Zhou on PDEs; Dr. Jeffrey Schenker, Dr. Ignacio Uriarte-Tuero and Dr. Alexander Volberg on analysis; Dr. Yingda Cheng, Dr. Di Liu and Dr. Jianliang Qian on numerical analysis; and Dr. Thomas Parker and Dr. Benjamin Schmidt on geometric analysis. I also want to take this opportunity to thank all my friends for their help and for sharing challenges and happiness together. Last but not least, I owe a debt of gratitude to my family for their whole-hearted love, understanding and support.

TABLE OF CONTENTS

LIST OF FIGURES . . . viii

Chapter 1 Introduction . . . 1
1.1 Motivation and mathematical model . . . 1
1.2 Historical works . . . 3
1.2.1 Blow-up phenomenon for Cauchy problems . . . 3
1.2.2 Parabolic blow-up problems in bounded domains . . . 5
1.3 Difficulties and main ideas . . . 7
1.4 Notations . . . 8
1.5 Main results . . . 11

Chapter 2 Existence and Uniqueness . . . 14
2.1 Main theorem and outline of the approach . . . 14
2.2 Jump relation . . . 15
2.2.1 Heat potentials with continuous density . . . 15
2.2.2 Heat potentials with piecewise continuous density . . . 18
2.3 Linear case . . . 32
2.3.1 Definition of the local solution . . . 32
2.3.2 Auxiliary lemmas . . . 32
2.3.3 Existence . . . 38
2.3.4 Comparison principles, uniqueness and global solution . . . 44
2.4 Nonlinear case . . . 47
2.4.1 Definition of the local solution . . . 47
2.4.2 Comparison principles and uniqueness . . . 48
2.4.3 Existence . . . 50
2.4.4 Maximal solution and application to the target problem . . . 62

Chapter 3 Upper Bound Estimate of the Blow-up Time . . . 65
3.1 Main theorem and outline of the approach . . . 65
3.2 Approximation for C^2 domain from inside . . . 67
3.3 Approximation for the solution by smoother functions . . . 74
3.4 Upper bound on life span: case of C^2 domain . . . 79
Chapter 4 Lower Bound Estimate of the Blow-up Time . . . 83
4.1 Main theorems and outline of the approach . . . 83
4.2 Weak solution and representation formula . . . 87
4.2.1 Weak solution . . . 87
4.2.2 Representation formula . . . 89
4.2.3 Time-shifted representation formula . . . 92
4.3 Lower bound on life span: case of C^2 domain . . . 95
4.3.1 A traditional way by Gronwall-type technique . . . 95
4.3.2 Better estimate by a new method . . . 102
4.4 Lower bound on life span: case of convex C^2 domain . . . 109
4.4.1 Main idea . . . 109
4.4.2 Auxiliary lemmas . . . 112
4.4.3 Proof of Theorem 4.1.4 . . . 115
4.5 Lower bound on life span: case of C^2 domain with local convexity near Γ1 . . . 124
4.5.1 Estimates for boundary integrals . . . 125
4.5.2 Proof of Theorem 4.1.3 . . . 139
4.6 Comparison with previous works . . . 146

Chapter 5 Prevention of Blowup . . . 148
5.1 Main theorems and outline of the approach . . . 148
5.2 Repairing the broken part . . . 152
5.2.1 Prevention of finite-time blowup . . . 153
5.2.2 Control of the solution under a given value for convex domains . . . 158
5.2.3 Control of the solution under a given value for the domain with local convexity near Γ1 . . . 161
5.3 Adding a pump . . . 165
BIBLIOGRAPHY . . . 171

LIST OF FIGURES

Figure 1.1: Mathematical Model . . . 2
Figure 3.1: Γ1,j . . . 75
Figure 5.1: Model with a pump . . . 151

Chapter 1

Introduction

1.1 Motivation and mathematical model

This thesis is partially motivated by the Space Shuttle Columbia disaster in 2003. When the space shuttle was launched, a piece of foam broke off from its external tank and struck the left wing, damaging the insulation there. As a result, the shuttle disintegrated during its re-entry into the atmosphere due to the enormous heat generated near the damaged part. Actually, such damage on the wings had also been found on previous shuttles. But the engineers suspected that the previous damage was so small that the shuttle managed to land safely before the temperature became too high. Motivated by this, this thesis intends to study the relation between the blow-up time of the temperature inside the shuttle and the area of the broken part on the left wing. The goal is to rigorously verify the engineers' conjecture from a mathematical perspective. In addition, some strategies that could prevent the blowup will also be explored.

In Figure 1.1, let u be the inside temperature of the space shuttle. During the re-entry of the shuttle into the atmosphere, the air was compressed at a very high speed. Then many chemical reactions happened and produced an enormous radiative heat flux, which was the main source of the heat. In physics, the radiative heat flux is proportional to the fourth power of the temperature.
Due to this nonlinear effect, we consider a simplified model as follows.

Figure 1.1: Mathematical Model (the domain Ω with u_t − ∆u = 0 inside, ∂u/∂n = u^q on Γ1 and ∂u/∂n = 0 on Γ2)

On the broken part Γ1, ∂u/∂n = H(u) ∼ u^q for some q > 1; on the other part Γ2, ∂u/∂n = 0, since the insulation there is intact. Finally, the inside temperature of the shuttle is supposed to satisfy the heat equation. Thus, the following math model is adopted (see more descriptions in Section 1.4):

    u_t(x,t) = ∆u(x,t)            in Ω × (0,T],
    ∂u(x,t)/∂n(x) = u^q(x,t)      on Γ1 × (0,T],
    ∂u(x,t)/∂n(x) = 0             on Γ2 × (0,T],
    u(x,0) = u0(x)                in Ω,                  (1.1.1)

where

    q > 1,  Γ1 ≠ ∅,  u0 ∈ C^1(Ω̄),  u0(x) ≥ 0,  u0(x) ≢ 0.    (1.1.2)

1.2 Historical works

1.2.1 Blow-up phenomenon for Cauchy problems

In the seminal work [9], Fujita studied the Cauchy problem

    u_t(x,t) − ∆u(x,t) = u^p(x,t)   in R^n × (0,∞),
    u(x,0) = ψ(x)                   in R^n,              (1.2.1)

where n ≥ 1, ψ ∈ C^2(R^n) is nonnegative and ψ, D_iψ, D_{ij}ψ are all bounded on R^n. It is shown that if 1 < p < 1 + 2/n, then the only nonnegative global solution is u ≡ 0, and if p > 1 + 2/n, then there exist positive global solutions for positive and sufficiently small ψ. Since then, a vast literature has studied nonlinear blow-up phenomena. The number 1 + 2/n is called the critical power in the sense that when p < 1 + 2/n, any positive solution blows up in finite time, while when p > 1 + 2/n, there exist positive global solutions. The existence of such a critical power is a feature of this kind of blow-up problem. The study of the borderline case is more involved and is usually carried out separately after the subcritical and supercritical cases are established. In the works [14] and [19], it is shown that the critical power p = 1 + 2/n belongs to the blow-up regime. Similar questions were also asked for the nonlinear wave equations, and the situation there is more complicated.
    u_tt(x,t) − ∆u(x,t) = |u|^p(x,t)   in R^n × (0,∞),
    u(x,0) = ψ0(x)                     in R^n,
    u_t(x,0) = ψ1(x)                   in R^n,            (1.2.2)

where n ≥ 1, ψ0 and ψ1 are nonnegative and compactly supported, and at least one of them is positive somewhere. In the pioneering work [17], John showed that when n = 3, the critical power for (1.2.2) is p = 1 + √2. Again this means that if 1 < p < 1 + √2, then any solution blows up in finite time; if p > 1 + √2, then there exist global solutions for suitably small initial data. For general dimensions, the problem is also called the Strauss conjecture, and the critical power is conjectured to be the positive root of the quadratic equation below for n ≥ 2 (and infinity for n = 1):

    (n − 1)p^2 − (n + 1)p − 2 = 0.

This conjecture has now been confirmed after several decades' work. The subcritical cases can be found in [13, 39]. The supercritical cases are dealt with in [10, 12, 26]. Finally, the borderline cases are also proved to be in the blow-up regime, see [37, 46]. The one-dimensional case was discussed in [13] and [18]. It is also interesting to notice a problem which combines both nonlinear heat and wave equations:

    u_t(x,t) + u_tt(x,t) − ∆u(x,t) = |u|^p(x,t)   in R^n × (0,∞),
    u(x,0) = ψ0(x)                                in R^n,
    u_t(x,0) = ψ1(x)                              in R^n,

where ψ0 and ψ1 are compactly supported. See [42, 47] for more details.

1.2.2 Parabolic blow-up problems in bounded domains

Now let us focus on the parabolic type of nonlinear equations. Besides the Cauchy problems, people also study the boundary value problems, including both Dirichlet and Neumann types:

    u_t(x,t) − ∆u(x,t) = f(x, t, u(x,t))   in Ω × (0,T],
    F(x, t, u(x,t)) = 0                    on ∂Ω × (0,T],
    u(x,0) = ψ(x)                          in Ω,          (1.2.3)

or

    u_t(x,t) − ∆u(x,t) = f(x, t, u(x,t))   in Ω × (0,T],
    ∂u(x,t)/∂n(x) = F(x, t, u(x,t))        on ∂Ω × (0,T],
    u(x,0) = ψ(x)                          in Ω.          (1.2.4)
For detailed discussions on the history, we refer the readers to the surveys [5, 24] and the books [7, 15, 35]. The typical examples are

    u_t(x,t) − ∆u(x,t) = u^p(x,t)   in Ω × (0,T],
    u(x,t) = 0                      on ∂Ω × (0,T],
    u(x,0) = ψ(x)                   in Ω,                 (1.2.5)

and

    u_t(x,t) − ∆u(x,t) = 0          in Ω × (0,T],
    ∂u(x,t)/∂n(x) = u^q(x,t)        on ∂Ω × (0,T],
    u(x,0) = ψ(x)                   in Ω,                 (1.2.6)

where p > 1, q > 1 and the initial data is positive. But the blow-up properties of (1.2.5) and (1.2.6) are quite different from those of (1.2.1). More precisely, for the problem (1.2.5), there exist some positive global solutions, see [28, 38]. On the other hand, for the problem (1.2.6), any solution blows up in finite time, see [16, 36, 44].

For the more general problems (1.2.3) and (1.2.4), the research topics include the local and global existence and uniqueness of the solutions [1–4, 21, 27, 44]; nonexistence of global solutions and upper bounds for the blow-up time [16, 21–23, 25, 27, 31, 36, 44]; lower bounds for the blow-up time [31–34, 43]; and blow-up sets, blow-up rates and the asymptotic behaviour of the solutions near the blow-up time [8, 11, 16, 21, 27, 29, 36, 45].

When considering the bounds of the blow-up time, the upper bound is usually related to the nonexistence of global solutions, and this area has attracted extensive attention over several decades. Various methods on this issue have been developed, such as the comparison method, the concavity method, the Green's function method, the energy method, and the unbounded Fourier coefficient method (see [23]). Most methods have in common that they first consider some nonlinear functional of the solution and try to establish a first order differential inequality for that functional; then it is shown that such a differential inequality cannot hold beyond some finite time T, which serves as an upper bound. The lower bound was not studied as much in the past but has received more attention in recent years.
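A prototype of this differential-inequality mechanism is the following standard computation (a textbook model, not a result of this thesis): once a nonnegative functional E of the solution satisfies a superlinear first order inequality, it must blow up by an explicit time.

```latex
% Model computation behind the differential-inequality method.
\text{Suppose } E'(t) \ge c\,E(t)^{p} \text{ with } p > 1,\ c > 0,\ E(0) = E_0 > 0.
\text{Then } \frac{d}{dt}\,E(t)^{1-p} = (1-p)\,E(t)^{-p}E'(t) \le -c\,(p-1),
\text{so } 0 < E(t)^{1-p} \le E_0^{\,1-p} - c\,(p-1)\,t .
\text{Hence } E \text{ cannot stay finite beyond } T \le \frac{E_0^{\,1-p}}{c\,(p-1)},
```

which is exactly the kind of explicit upper bound for the blow-up time that such arguments produce; the lower-bound variants run the analogous inequality in the opposite direction.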
However, the lower bound can be argued to be more useful in practice, since it provides an estimate of the safe time. In contrast to the upper bound case, not many methods have been developed to deal with the lower bound. But the existing methods again have in common that they first consider some nonlinear functional of the solution and try to establish a first order differential inequality for that functional; then it is shown that such a differential inequality will hold for at least some time T, which serves as a lower bound.

1.3 Difficulties and main ideas

Generally speaking, there are two main difficulties. First, although there has been a vast literature on blow-up problems of parabolic type, few works deal with discontinuous Neumann boundary conditions. Second, the existing works on the estimate of the lower bound of the blow-up time only work for convex domains, but by examining Figure 1.1, the domain is clearly not convex. In the following, we discuss these difficulties and the corresponding strategies in more detail.

First, for the theory on existence and uniqueness of the solution to (1.2.4), the key tool is the jump relation of the single-layer potentials (see Theorem 2.2.1). In order to generalize the theory to the linear problem (2.3.1) with piecewise continuous Neumann boundary conditions, we establish an adapted version of the jump relation in Theorem 2.2.6. Taking advantage of this adapted relation, a classical solution can be constructed to satisfy (2.3.1) pointwise. In addition, such a solution also fits the condition (2.3.2). By imposing the condition (2.3.2) in the definition of the solution (see Definition 2.3.1), the uniqueness follows from the maximum principle and the Hopf lemma. After the theory is established for the linear problem, the existence and uniqueness for the nonlinear case (2.4.1) are justified by applying iterative arguments and fixed point theorems.
Secondly, for the estimate of the upper bound for the blow-up time, we adopt the idea in [36], which introduces a suitable energy function and shows the finite-time blowup of this energy function. But due to the discontinuity of the normal derivative, the original argument does not carry through directly. So we additionally introduce a sequence of approximated solutions {v_j}_{j≥1} to u, see (3.3.5), to justify the argument.

Thirdly, for the estimate of the lower bound for the blow-up time, our method is new, in order to deal with more general domains. In Subsection 4.3.2, the lower bound of T* is derived without any convexity assumption on the domain, which is a common requirement in the previous works. Let M(t) denote the supremum of the solution on Ω̄ × [0,t]. The idea is to chop the range of M(t) into suitable pieces and find a lower bound for the time spent in each piece by analysing the representation formula of the solution. Then adding all these lower bounds together yields a lower bound for T*. This strategy does not introduce any differential inequalities, which often appeared in the historical works on blow-up time estimates. The proofs in Subsection 4.4.3 and Subsection 4.5.2 adopt a similar idea but with some convexity assumptions on the domain. These assumptions make it possible to chop the range of M(t) in a more delicate way and improve the estimate. Finally, for the strategies to prevent the finite-time blowup, the ideas are similar to those in Chapter 4.

1.4 Notations

In this thesis, unless stated otherwise,

• Ω represents a bounded open subset of R^n (n ≥ 2) with C^2 boundary ∂Ω.

• Γ1 and Γ2 denote two disjoint relatively open subsets of ∂Ω whose common boundary ∂Γ1 = ∂Γ2, denoted Γ, is C^1. Moreover, Γ1 ≠ ∅ and ∂Ω = Γ1 ∪ Γ ∪ Γ2.

• |Ω| and |Γ1| represent the volume of Ω and the surface area of Γ1 respectively. That is, |Ω| = ∫_Ω dx and |Γ1| = ∫_{Γ1} dS(x).
The normal derivative in (1.1.1) is understood in the following way: for any (x,t) ∈ ∂Ω × (0,T],

    ∂u(x,t)/∂n(x) := lim_{h→0+} (Du)(x_h, t) · n(x),      (1.4.1)

where n(x) denotes the exterior unit normal vector at x and x_h := x − h n(x) for x ∈ ∂Ω. Since ∂Ω is C^2, x_h belongs to Ω when h is positive and sufficiently small.

In the above notations, Γ1 is not allowed to be empty, since otherwise the blowup will not occur. On the other hand, Γ2 is allowed to be empty, and in that case problem (1.1.1) has been studied extensively in the past. In Section 4.6, the results obtained in this thesis will be compared with the previous results when Γ2 is empty. Finally, in some extreme situations, Γ may also be empty. For example, let Ω = {x ∈ R^n : 1/2 < |x| < 1}, Γ1 = {x ∈ R^n : |x| = 1/2} and Γ2 = {x ∈ R^n : |x| = 1}. It is worth mentioning that all the results in this thesis also apply to this situation.

For any function f : A → R, we follow the convention of denoting the supremum norm of f by ||f||_{∞,A} = sup_{x∈A} |f(x)|. When there is no ambiguity, ||f||_∞ will be used as shorthand for ||f||_{∞,A}. For any T > 0, define

    A_T = C^{2,1}(Ω × (0,T]) ∩ C(Ω̄ × [0,T])

and

    B_T = {g : (Γ1 ∪ Γ2) × (0,T] → R : for i = 1 or 2, g|_{Γi×(0,T]} is uniformly continuous}.

For any g ∈ B_T, the restriction g|_{Γi×(0,T]} (i = 1 or 2) has a unique continuous extension to Γ̄i × [0,T], which will be denoted by g_i. But one should notice that g may not extend to a continuous function on ∂Ω × (0,T], since it may have a jump between Γ1 and Γ2. We endow B_T with the supremum norm

    ||g||_{∞,(Γ1∪Γ2)×(0,T]} = sup_{(x,t)∈(Γ1∪Γ2)×(0,T]} |g(x,t)|.

It is readily seen that B_T is a Banach space. We write

    M0 = max_{x∈Ω̄} u0(x)      (1.4.2)

and denote by M(t) the supremum of the solution u to (1.1.1) on Ω̄ × [0,t]:

    M(t) = sup_{(x,τ)∈Ω̄×[0,t]} u(x,τ).      (1.4.3)

Φ always refers to the fundamental solution to the heat equation:

    Φ(x,t) = (4πt)^{−n/2} exp(−|x|^2/(4t)),  for all (x,t) ∈ R^n × (0,∞).
(1.4.4)

The surface integral with respect to the variable x will be denoted by dS(x). In addition, C = C(a, b, ...) and C_i = C_i(a, b, ...) represent positive constants which only depend on the parameters a, b, .... One should also note that C and C_i may stand for different constants from line to line.

1.5 Main results

The solution to (1.1.1) is understood in the following way.

Definition 1.5.1. For any T > 0, a solution to (1.1.1) on Ω̄ × [0,T] means a function u ∈ C^{2,1}(Ω × (0,T]) ∩ C(Ω̄ × [0,T]) that satisfies (1.1.1) pointwise and such that for any (x,t) ∈ Γ × (0,T], ∂u(x,t)/∂n(x) exists and

    ∂u(x,t)/∂n(x) = (1/2) u^q(x,t).      (1.5.1)

Definition 1.5.2. The maximal existence time T* for (1.1.1) is defined as

    T* = sup{T ≥ 0 : there exists a solution to (1.1.1) on Ω̄ × [0,T]}.

When T* > 0, a function u ∈ C^{2,1}(Ω × (0,T*)) ∩ C(Ω̄ × [0,T*)) is called a maximal solution if u|_{Ω̄×[0,T]} is a solution to (1.1.1) on Ω̄ × [0,T] for any T ∈ (0,T*).

Based on these two definitions, the existence and uniqueness theory of the maximal solution u is established in Theorem 2.1.1. In addition, Theorem 2.1.1 also claims the positivity of u at any positive time. For convenience, we will simply call u the solution instead of the maximal solution. Actually, we deal with the existence and uniqueness theory in much more general settings. First, the key tool, the jump relation of the single-layer potential with piecewise continuous density, is set up in Subsection 2.2.2. Secondly, the theory for the linear case is built in Section 2.3. Finally, the theory for the nonlinear case in a very general form is framed in Section 2.4. The target problem (1.1.1) is just a special case of Section 2.4.

Theorem 3.1.1 concludes that the solution u does not exist globally. Moreover, it is the supremum norm M(t) that blows up first. As a convention, we simply call T* the blow-up time. If the initial data u0 is strictly positive, then an explicit formula for an upper bound of T* is given in Theorem 3.1.1.
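The finite-time blowup asserted in Theorem 3.1.1 can also be observed numerically. The sketch below is not from the thesis: it uses a one-dimensional caricature of (1.1.1) on the interval (0,1), with the insulated condition u_x(0,t) = 0 playing the role of Γ2 and the nonlinear flux u_x(1,t) = u(1,t)^q playing the role of Γ1, discretized by explicit finite differences with ghost points; the function name, grid sizes and the 10^6 blow-up threshold are all illustrative choices.

```python
def simulate(q=3.0, N=50, t_max=2.0, threshold=1e6):
    """Explicit finite differences for a 1D caricature of problem (1.1.1):
    u_t = u_xx on (0, 1),  u_x(0, t) = 0        (insulated part, "Gamma_2"),
    u_x(1, t) = u(1, t)**q (damaged part, "Gamma_1"),  u(x, 0) = 1 (so M0 = 1).
    Returns the first time the maximum exceeds `threshold` (a numerical proxy
    for the blow-up time T*), or None if that never happens before t_max."""
    h = 1.0 / N
    dt = 0.4 * h * h              # explicit-Euler stability requires dt < h^2/2
    lam = dt / (h * h)
    u = [1.0] * (N + 1)
    t = 0.0
    while t < t_max:
        # ghost values encode the Neumann conditions via centered differences
        ghost_left = u[1]                             # u_x(0) = 0
        ghost_right = u[N - 1] + 2 * h * u[N] ** q    # u_x(1) = u(1)^q
        new = [0.0] * (N + 1)
        new[0] = u[0] + lam * (ghost_left - 2 * u[0] + u[1])
        for i in range(1, N):
            new[i] = u[i] + lam * (u[i + 1] - 2 * u[i] + u[i - 1])
        new[N] = u[N] + lam * (ghost_right - 2 * u[N] + u[N - 1])
        u, t = new, t + dt
        if max(u) > threshold:
            return t
    return None
```

With u0 ≡ 1 and q = 3 the computed maximum passes the threshold well before t = 2, and a larger q drives the recorded blow-up time down, consistent with the blow-up time shrinking as the nonlinearity strengthens.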
In Theorem 4.1.1, a lower bound for T* is obtained for any C^2 domain. If the domain is locally convex near Γ1, then Theorem 4.1.3 improves the lower bound estimate significantly. Combining these estimates, the asymptotic behaviour of T* on q, M0 and |Γ1| is understood quite well. The following is a summary of the conclusions.

• As q → 1+, the order of T* is exactly (q − 1)^{−1}.

• As M0 → 0, the order of T* is at least ln(M0^{−1}); if the region near Γ1 is convex, then the order of T* is at least M0^{−(q−1)}/ln(M0^{−1}); if Ω is convex, then the order of T* is at least M0^{−(q−1)}. On the other hand, if the initial data u0 does not oscillate too much, then the order of T* is at most M0^{−(q−1)}.

• As |Γ1| → 0, the order of T* is at least ln(|Γ1|^{−1}) and at most |Γ1|^{−1}. If the region near Γ1 is convex, then the order of T* is at least |Γ1|^{−1/(n−1)}/ln(|Γ1|^{−1}) for n ≥ 3 and |Γ1|^{−1}/(ln(|Γ1|^{−1}))^2 for n = 2. If Ω is convex, then the order of T* is at least |Γ1|^{−1/(n−1)} for n ≥ 3 and |Γ1|^{−1}/ln(|Γ1|^{−1}) for n = 2.

In order to compare with the previous results on the lower bound estimate of T* for convex domains with Γ1 = ∂Ω, we also derive a result for convex domains in Section 4.4, see Theorem 4.1.4. Since the results are compared under the assumption that Γ1 = ∂Ω, the order of the lower bound in |Γ1| as |Γ1| → 0 is not of interest there. Instead, the order of the lower bound in M0 as M0 → 0 or M0 → ∞ is more important. The lower bound in Theorem 4.1.4 has a better order in M0, whether M0 → 0 or M0 → ∞, than those in Theorem 4.1.3 and Remark 4.5.11, but it loses order in |Γ1|^{−1} as |Γ1| → 0.

Finally, we investigate two strategies to prevent the blowup: repairing the broken part and adding a pump. For the method of repairing the broken part, it is shown in Theorem 5.1.2 that if the area of the broken part decreases at an exponential rate, then the solution will not blow up in finite time.
Furthermore, let the region near Γ1 be convex and let M0 denote the maximum of the initial data. Then Theorem 5.1.4 claims that for any M1 > M0, the solution will be bounded by M1 if the area of the broken part decreases at a super-exponential rate. For the method of adding a pump, it is shown in Theorem 5.1.5 that by adding a suitable pump near the broken part, the solution will not blow up in finite time.

Chapter 2

Existence and Uniqueness

2.1 Main theorem and outline of the approach

The goal of this chapter is to establish the existence and uniqueness for the target problem (1.1.1). The main theorem is as follows.

Theorem 2.1.1. The maximal existence time T* for (1.1.1) is positive and there exists a unique maximal solution u ∈ C^{2,1}(Ω × (0,T*)) ∩ C(Ω̄ × [0,T*)) to (1.1.1). Moreover, u(x,t) > 0 for any (x,t) ∈ Ω̄ × (0,T*).

Since (1.1.1) is a nonlinear problem, the strategy is to first deal with the linear case and then build the nonlinear case by fixed point theorems. The arguments for the nonlinear problem are standard; the main difficulty lies in the linear case. The key technique is a generalization of the jump relation for the heat potentials. Previously, the jump relation of the heat potentials with continuous density was well known, but due to the discontinuity of the normal derivative in (1.1.1), we need to extend the result to heat potentials with piecewise continuous density.

The organization of this chapter is as follows. Section 2.2 discusses the jump relation of the heat potentials with piecewise continuous density. In Section 2.3, we study the linear problem (2.3.1) and prove the global existence and uniqueness of the classical solution. In Section 2.4, we first set up the local existence and uniqueness for the nonlinear problem (2.4.1) and then discuss the maximal solution. As a corollary, Theorem 2.1.1 is justified.
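The fixed-point strategy mentioned above can be illustrated outside the thesis's setting on the simplest caricature, the ODE u' = u^q with u(0) = M0, whose solution is a fixed point of the integral map u ↦ M0 + ∫_0^t u(s)^q ds; iterating this map on a short time interval exhibits the contraction that underlies local (as opposed to global) existence. The function name, grid and iteration count below are illustrative choices, not anything from Chapter 2.

```python
def picard(q=2.0, u0=1.0, T=0.1, n=1000, iters=12):
    """Picard iteration u_{k+1}(t) = u0 + integral_0^t u_k(s)^q ds on a grid.
    Returns the final iterate (a list over the grid) and the sup-norm gaps
    between consecutive iterates, which shrink geometrically when T is small
    enough -- the contraction behind local existence via fixed point theorems."""
    dt = T / n
    u = [u0] * (n + 1)
    gaps = []
    for _ in range(iters):
        integral, cumulative = [0.0], 0.0
        for i in range(1, n + 1):
            cumulative += 0.5 * dt * (u[i - 1] ** q + u[i] ** q)  # trapezoid rule
            integral.append(cumulative)
        new = [u0 + v for v in integral]
        gaps.append(max(abs(a - b) for a, b in zip(new, u)))
        u = new
    return u, gaps
```

For q = 2 and u0 = 1 the limit is u(t) = 1/(1 − t), which exists only up to time 1; on [0, 0.1] the iterates converge to it, while no iteration scheme can continue past the blow-up time, matching the local nature of the existence result.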
2.2 Jump relation

The key tool in the proof of the existence of solutions to parabolic equations with Neumann boundary conditions is the jump relation of the single-layer and double-layer potentials. Historically, people have studied these potentials with continuous density, but in order to adapt to the current problem, we generalize the results to potentials with piecewise continuous density in this section.

2.2.1 Heat potentials with continuous density

Let g ∈ C(∂Ω × [0,T]). The single-layer heat potential with density g is given by

    U(x,t) = ∫_0^t ∫_{∂Ω} Φ(x − y, t − τ) g(y,τ) dS(y) dτ,  for all (x,t) ∈ Ω × (0,T],      (2.2.1)

where Φ is defined as in (1.4.4) and dS means the surface integral. The double-layer heat potential is given by

    V(x,t) = ∫_0^t ∫_{∂Ω} [∂Φ(x − y, t − τ)/∂n(y)] g(y,τ) dS(y) dτ,  for all (x,t) ∈ Ω × (0,T],      (2.2.2)

where n(y) denotes the exterior unit normal vector at y. The following Theorems 2.2.1 and 2.2.2 are two fundamental properties of the jump relations.

Theorem 2.2.1. Let g ∈ C(∂Ω × [0,T]) and let U be the single-layer heat potential defined in (2.2.1). Then for any (x,t) ∈ ∂Ω × (0,T],

    lim_{h→0+} DU(x_h, t) · n(x) = ∫_0^t ∫_{∂Ω} [∂Φ(x − y, t − τ)/∂n(x)] g(y,τ) dS(y) dτ + (1/2) g(x,t),      (2.2.3)

where x_h = x − h n(x). Moreover, the convergence in the above limit is uniform on ∂Ω × [τ0, T] for any τ0 > 0.
But if restricted to the heat kernel Φ, C 2 domain case and convergence from the normal direction, then the uniform convergence of (2.2.3) can be established through the same proof. Theorem 2.2.2. Let g ∈ C ∂Ω × [0, T ] and V be the double-layer heat potential defined in (2.2.2). Then lim V (xh , t) = V (x, t) − h→0+ 1 g(x, t) 2 ∀ (x, t) ∈ ∂Ω × (0, T ], (2.2.4) − where xh = x − h→ n (x). Moreover, the convergence in the above limit is uniform on ∂Ω × [τ0 , T ] for any τ0 > 0. In particular, V can be continuously extended from Ω × (0, T ] into Ω × (0, T ]. Proof. We refer the readers to the proof of Theorem 9.5, page 176, Section 2, Chapter 9 − − of [20], note that the author in [20] uses → ν (y), rather than → n (y), to denote exterior unit 16 normal direction at y. In that proof, it only deals with the dimension n = 2 or 3, but the proof for arbitrary dimensions can be carried out almost identically. Proposition 2.2.3. Theorem 2.2.1 and Theorem 2.2.2 are equivalent. Proof. By the explicit formulas (2.2.1) and (2.2.2), it suffices to show the uniform convergence on ∂Ω × (0, T ] of the following limit. t lim h→0+ 0 t ∂Ω − − DΦ xh − y, t − τ · → n (y) − → n (x) g(y, τ ) dS(y) dτ − − DΦ(x − y, t − τ ) · → n (y) − → n (x) g(y, τ ) dS(y) dτ. = 0 (2.2.5) ∂Ω By Corollary 3.2.2, there exists σ > 0 such that for any h < σ and for any x ∈ ∂Ω, there exists an interior ball touching x with radius h. As a result, |xh − x| ≤ |xh − y| for any y ∈ ∂Ω. Thus, |x − y| ≤ |x − xh | + |xh − y| ≤ 2 |xh − y|. Now using the fact ∂Ω ∈ C 2 again, we get → − − n (y) − → n (x) ≤ C|y − x| ≤ C |xh − y|. Consequently, C |xh − y|2 |x − y|2 − − DΦ xh − y, t − τ · → n (y) − → n (x) g(y, τ ) ≤ exp − h 4(t − τ ) (t − τ )n/2+1 ≤ ≤ 17 |x − y|2 exp − h 8(t − τ ) (t − τ )n/2 C C (t − τ )n/2 exp − |x − y|2 . 
32(t − τ ) Since the right hand side of the above inequality is integrable on ∂Ω × [0, t], if we split the integral as following t t = 0 ∂Ω + 0 ∂Ω = I + II, ∂Ω then the integral I is uniformly small and the uniform convergence on II is clear. As a result, the uniform convergence of (2.2.5) is established. 2.2.2 Heat potentials with piecewise continuous density The previous subsection introduces jump relation of the single-layer and double-layer potentials with continuous density. But Theorem 2.2.1 and Theorem 2.2.2 are still not enough for our problems. For example, in order to show the existence of the solution to (2.3.1), the boundary functions β and g are only assumed in BT , not in C ∂Ω × [0, T ] . Thus we need to adapt this jump relation to our case. So in this section, we will establish a similar jump relation for the single-layer heat potential with piecewise continuous density as that in Theorem 2.2.1. This jump relation for the single-layer heat potential with discontinuous density in Theorem 2.2.6 will be mainly applied to show the existence and uniqueness of the solution to (2.3.1) and (2.4.1). First, we extend the definition of the single-layer heat potential to include the function space BT in which the function is only piecewise continuous on the boundary. For any ϕ ∈ BT , define the single-layer heat potential with density ϕ to be t Φ(x − y, t − τ )ϕ(y, τ ) dS(y) dτ, U (x, t) = 0 ∂Ω 18 ∀ (x, t) ∈ Ω × (0, T ]. (2.2.6) Then we introduce a similar definition in which the integration is on Γi (i=1 or 2): t Ui (x, t) = Φ(x − y, t − τ )ϕ(y, τ ) dS(y) dτ, 0 ∀ (x, t) ∈ Ω × (0, T ]. (2.2.7) Γi Recalling that the interface between Γ1 and Γ2 is defined to be Γ = Γ1 ∩ Γ2 . Thus, the surface measure of Γ is 0, which implies that U (x, t) = U1 (x, t) + U2 (x, t). Recalling also the notations in Section 1.4: for i = 1 or 2, ϕ|Γ ×(0,T ] is uniformly coni tinuous, thus we can extend this function to a continuous function ϕi on Γi × [0, T ]. 
In the following, when $x \in \Gamma$ and $t > 0$, Lemma 2.2.5 will establish the jump relation for $U_i$ ($i = 1$ or $2$) at $(x, t)$ with jump $\varphi_i(x, t)/4$; when $x \in \Gamma_i$ ($i = 1$ or $2$) and $t > 0$, Lemma 2.2.4 states the jump relation for $U_i$ at $(x, t)$ with jump $\varphi_i(x, t)/2$ (here $\varphi_i(x, t)/2 = \varphi(x, t)/2$, since $x \in \Gamma_i$). The essence of the proofs is the same as that of Theorem 2.2.1.

Before the proofs, we introduce some notation that will be used in the proofs and in later arguments. We write $0$ and $\tilde 0$ for the origins in $\mathbb{R}^n$ and $\mathbb{R}^{n-1}$ respectively, and $e_n$ denotes the point $(0, 0, \dots, 0, 1)$ in $\mathbb{R}^n$. For any point $y = (y_1, y_2, \dots, y_n) \in \mathbb{R}^n$, we write $\tilde y = (y_1, y_2, \dots, y_{n-1})$. For any $r > 0$, $B_r \triangleq B(0, r)$ denotes the ball in $\mathbb{R}^n$ with radius $r$ and $\widetilde B_r \triangleq B(\tilde 0, r)$ denotes the ball in $\mathbb{R}^{n-1}$ with radius $r$. $\Gamma$ is also used to denote the Gamma function, i.e. $\Gamma : (0, \infty) \to \mathbb{R}$ defined by

$$\Gamma(a) = \int_0^\infty t^{a-1} e^{-t}\, dt.$$

It should not be confused with the partial boundaries $\Gamma_1$, $\Gamma_2$ and $\Gamma$.

Lemma 2.2.4. Assume $\varphi \in B_T$ and $i = 1$ or $2$. Define $U_i$ as in (2.2.7). Then for any $x \in \Gamma_i$ and $t \in (0, T]$,

$$\lim_{h \to 0^+} DU_i(x_h, t) \cdot \vec{n}(x) = \int_0^t \int_{\Gamma_i} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi(x, t), \tag{2.2.8}$$

where $x_h = x - h\vec{n}(x)$.

Proof. The proof is almost the same as that of Theorem 2.2.1, since $\Gamma_i$ is a relatively open subset of $\partial\Omega$ and $\varphi$ is continuous on $\Gamma_i \times (0, T]$.

The situation when $x \in \Gamma$ is different from that in Lemma 2.2.4, since $\Gamma$ is the boundary of $\Gamma_i$. The following lemma claims that in this situation, the jump is only half of that in Lemma 2.2.4. The proof of Lemma 2.2.5 is also similar to that of Theorem 2.2.1, but for completeness, we include a detailed proof.

Lemma 2.2.5. Assume $\varphi \in B_T$ and $i = 1$ or $2$. Define $U_i$ as in (2.2.7). Then for any $x \in \Gamma \triangleq \overline{\Gamma_1} \cap \overline{\Gamma_2}$ and $t \in (0, T]$,

$$\lim_{h \to 0^+} DU_i(x_h, t) \cdot \vec{n}(x) = \int_0^t \int_{\Gamma_i} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{4}\, \varphi_i(x, t), \tag{2.2.9}$$

where $x_h = x - h\vec{n}(x)$ and $\varphi_i$ denotes the continuous extension of $\varphi|_{\Gamma_i \times (0,T]}$ to $\overline{\Gamma_i} \times [0, T]$.

Proof.
We assume $i = 1$ (the case $i = 2$ is similar). By (1.4.4), (2.2.9) becomes

$$\lim_{h \to 0^+} \int_0^t \int_{\Gamma_1} -\frac{(x_h - y) \cdot \vec{n}(x)}{(t - \tau)^{n/2+1}} \exp\Big(-\frac{|x_h - y|^2}{4(t - \tau)}\Big)\, \varphi_1(y, \tau)\, dS(y)\, d\tau = -\int_0^t \int_{\Gamma_1} \frac{(x - y) \cdot \vec{n}(x)}{(t - \tau)^{n/2+1}} \exp\Big(-\frac{|x - y|^2}{4(t - \tau)}\Big)\, \varphi_1(y, \tau)\, dS(y)\, d\tau + \frac{(4\pi)^{n/2}}{2}\, \varphi_1(x, t). \tag{2.2.10}$$

Without loss of generality, we assume $x = 0$; otherwise we perform a translation. We further assume $\vec{n}(0) = -e_n$; otherwise we perform a rotation, which preserves dot products and distances. With these two simplifications, $x = 0$ and $\vec{n}(x) = -e_n$, so $x_h = h e_n$ and (2.2.10) reduces to

$$\lim_{h \to 0^+} \int_0^t \int_{\Gamma_1} \frac{h - y_n}{(t - \tau)^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4(t - \tau)}\Big)\, \varphi_1(y, \tau)\, dS(y)\, d\tau = -\int_0^t \int_{\Gamma_1} \frac{y_n}{(t - \tau)^{n/2+1}} \exp\Big(-\frac{|y|^2}{4(t - \tau)}\Big)\, \varphi_1(y, \tau)\, dS(y)\, d\tau + \frac{(4\pi)^{n/2}}{2}\, \varphi_1(0, t).$$

By a change of variable in $\tau$, this is equivalent to

$$\lim_{h \to 0^+} \int_0^t \int_{\Gamma_1} \frac{h - y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big)\, \varphi_1(y, t - \tau)\, dS(y)\, d\tau = -\int_0^t \int_{\Gamma_1} \frac{y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|y|^2}{4\tau}\Big)\, \varphi_1(y, t - \tau)\, dS(y)\, d\tau + \frac{(4\pi)^{n/2}}{2}\, \varphi_1(0, t). \tag{2.2.11}$$

Because $\partial\Omega \in C^2$ and $\Gamma = \partial\Gamma_1 \in C^1$, we can straighten the boundary. More specifically, after relabeling the coordinates, there exist $\phi_1 \in C^2(\mathbb{R}^{n-1}; \mathbb{R})$, $\phi_2 \in C^1(\mathbb{R}^{n-2}; \mathbb{R})$, $\eta_0 > 0$ and a neighborhood $S_{\eta_0} \subset \partial\Omega$ of $0$ such that $S_{\eta_0}$ can be parametrized as $S_{\eta_0} = \{(\tilde y, \phi_1(\tilde y)) : \tilde y \in \widetilde B_{\eta_0}\}$, and for any $y \in \Gamma \cap S_{\eta_0}$ we have not only $y_n = \phi_1(\tilde y)$ but also $y_{n-1} = \phi_2(y_1, y_2, \dots, y_{n-2})$. Fixing $\eta_0$, for any $\eta < \eta_0$ we define $S_\eta = \{(\tilde y, \phi_1(\tilde y)) : \tilde y \in \widetilde B_\eta\}$, a subset of $S_{\eta_0}$ and a small neighborhood of $0$. We then denote

$$S_{\eta,1} = S_\eta \cap \Gamma_1, \qquad \widetilde B_{\eta,1} = \{\tilde y \in \widetilde B_\eta : (\tilde y, \phi_1(\tilde y)) \in S_{\eta,1}\}, \qquad S_\eta' = S_\eta \cap \Gamma, \qquad P_\eta = \{\tilde y \in \widetilde B_\eta : (\tilde y, \phi_1(\tilde y)) \in S_\eta'\}.$$

After these preparations, we begin the technical proof. Given any $\epsilon > 0$, we want to find $\delta = \delta(\epsilon) > 0$ such that for any $0 < h < \delta$, the difference between the two sides of (2.2.11) is within $C\epsilon$ for some constant $C$. For any $\eta \in (0, \eta_0)$, to be determined later, we split the integral over $\Gamma_1$ in (2.2.11) into two parts: $\int_{\Gamma_1} = \int_{S_{\eta,1}} + \int_{\Gamma_1 \setminus S_{\eta,1}}$. Since $\Gamma_1 \setminus S_{\eta,1}$ stays away from $0$, it is easy to see that there exists $\delta_1 = \delta_1(\eta, \epsilon)$ such that when $0 < h < \delta_1$,

$$\bigg| \int_0^t \int_{\Gamma_1 \setminus S_{\eta,1}} \frac{h - y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau + \int_0^t \int_{\Gamma_1 \setminus S_{\eta,1}} \frac{y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau \bigg| < \epsilon. \tag{2.2.12}$$

Next, since $\vec{n}(0) = -e_n$, we have $D\phi_1(\tilde 0) = \tilde 0$. As a result, for any $y \in S_{\eta,1}$,

$$|y_n| = |\phi_1(\tilde y)| = |\phi_1(\tilde y) - \phi_1(\tilde 0)| = |D\phi_1(\theta \tilde y) \cdot \tilde y| \le |D\phi_1(\theta \tilde y) - D\phi_1(\tilde 0)|\, |\tilde y| \le C\, |\tilde y|^2, \tag{2.2.13}$$

where, by the mean value theorem, $\theta$ is some number between $0$ and $1$. By (2.2.13), together with the fact $|he_n - y| \ge |\tilde y|$, we attain

$$|y_n| \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big) \le C\, |\tilde y|^2 \exp\Big(-\frac{|\tilde y|^2}{4\tau}\Big).$$

Noticing

$$\int_0^t \int_{S_{\eta,1}} \frac{|\tilde y|^2}{\tau^{n/2+1}} \exp\Big(-\frac{|\tilde y|^2}{4\tau}\Big)\, dS(y)\, d\tau < \infty,$$

it follows from Lebesgue's dominated convergence theorem that

$$\lim_{h \to 0^+} \int_0^t \int_{S_{\eta,1}} \frac{y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau = \int_0^t \int_{S_{\eta,1}} \frac{y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau.$$

As a result, there exists $\delta_2 = \delta_2(\eta, \epsilon)$ such that when $0 < h < \delta_2$,

$$\bigg| \int_0^t \int_{S_{\eta,1}} \frac{y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau - \int_0^t \int_{S_{\eta,1}} \frac{y_n}{\tau^{n/2+1}} \exp\Big(-\frac{|y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau \bigg| < \epsilon. \tag{2.2.14}$$

Now it suffices to verify that $|I_\eta(h,t) - \frac{1}{2}(4\pi)^{n/2}\, \varphi_1(0,t)| < C\epsilon$, where

$$I_\eta(h,t) \triangleq \int_0^t \int_{S_{\eta,1}} \frac{h}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, dS(y)\, d\tau. \tag{2.2.15}$$

Recalling that $y_n = \phi_1(\tilde y)$, (2.2.15) can be rewritten as

$$I_\eta(h,t) = \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - y|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, \sqrt{1 + |D\phi_1(\tilde y)|^2}\, d\tilde y\, d\tau = \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}} \exp\Big(-\frac{|\tilde y|^2 + |h - y_n|^2}{4\tau}\Big)\, \varphi_1(y, t-\tau)\, \sqrt{1 + |D\phi_1(\tilde y)|^2}\, d\tilde y\, d\tau, \tag{2.2.16}$$

where $y = (\tilde y, \phi_1(\tilde y))$. Since $I_\eta$ is hard to compute directly, we approximate it by a simpler function. We define $\widetilde I_\eta(h,t)$ as

$$\widetilde I_\eta(h,t) = \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}} \exp\Big(-\frac{|he_n - (\tilde y, 0)|^2}{4\tau}\Big)\, \varphi_1(0, t-\tau)\, d\tilde y\, d\tau = \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}} \exp\Big(-\frac{|\tilde y|^2 + h^2}{4\tau}\Big)\, \varphi_1(0, t-\tau)\, d\tilde y\, d\tau. \tag{2.2.17}$$

Our strategy is to show that $\widetilde I_\eta(h,t)$ is close to both $\frac{1}{2}(4\pi)^{n/2}\, \varphi_1(0,t)$ and $I_\eta(h,t)$.
Based on (2.2.17), and noticing that $h$ is strictly positive, the integrand is absolutely integrable, so we can reverse the order of integration. Then, using the change of variable $\tau \to \sigma \triangleq \frac{|\tilde y|^2 + h^2}{4\tau}$, we attain

$$\widetilde I_\eta(h,t) = \int_{\widetilde B_{\eta,1}} \int_0^t \frac{h}{\tau^{n/2+1}} \exp\Big(-\frac{|\tilde y|^2 + h^2}{4\tau}\Big)\, \varphi_1(0, t-\tau)\, d\tau\, d\tilde y = \int_{\widetilde B_{\eta,1}} \frac{4^{n/2}\, h}{(|\tilde y|^2 + h^2)^{n/2}} \int_{\frac{|\tilde y|^2 + h^2}{4t}}^\infty \sigma^{n/2-1} e^{-\sigma}\, \varphi_1\Big(0,\, t - \frac{|\tilde y|^2 + h^2}{4\sigma}\Big)\, d\sigma\, d\tilde y = 4^{n/2} \int_{\widetilde B_{\eta,1}} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, H\big(|\tilde y|^2 + h^2,\, t\big)\, d\tilde y, \tag{2.2.18}$$

where

$$H(\lambda, t) \triangleq \int_{\frac{\lambda}{4t}}^\infty \sigma^{n/2-1} e^{-\sigma}\, \varphi_1\Big(0,\, t - \frac{\lambda}{4\sigma}\Big)\, d\sigma.$$

It is readily seen that

$$\lim_{\lambda \to 0} H(\lambda, t) = \Gamma\Big(\frac{n}{2}\Big)\, \varphi_1(0, t). \tag{2.2.19}$$

Consequently, there exists $\delta_3 = \delta_3(\epsilon)$ such that when $\eta < \delta_3$ and $0 < h < \delta_3$,

$$\Big| H\big(|\tilde y|^2 + h^2,\, t\big) - \Gamma\Big(\frac{n}{2}\Big)\, \varphi_1(0, t) \Big| < \epsilon, \qquad \forall\, \tilde y \in \widetilde B_{\eta,1}. \tag{2.2.20}$$

Having taken care of the $H$ term in (2.2.18), let us consider the integral

$$\int_{\widetilde B_{\eta,1}} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y,$$

whose integrand $h\,(|\tilde y|^2 + h^2)^{-n/2}$ is radial in $\tilde y \in \mathbb{R}^{n-1}$ and positive when $h > 0$. Since $\Gamma = \partial\Gamma_1 \in C^1$, $P_\eta$ almost bisects $\widetilde B_\eta$ when $\eta$ is small, which means $\widetilde B_{\eta,1}$ is close to a half-ball and

$$\lim_{\eta \to 0} \frac{|\widetilde B_{\eta,1}|}{|\widetilde B_\eta|} = \frac{1}{2}.$$

This limit is the essential reason why the jump in (2.2.9) is half of the jump $\frac{1}{2}\varphi(x,t)$ in (2.2.8). As a result, we can find $\delta_4 = \delta_4(\epsilon)$ such that for any $\eta < \delta_4$,

$$1 - \epsilon < \frac{\displaystyle\int_{\widetilde B_{\eta,1}} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y}{\displaystyle\frac{1}{2} \int_{\widetilde B_\eta} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y} < 1 + \epsilon, \quad \text{i.e.} \quad \bigg| \int_{\widetilde B_{\eta,1}} \frac{h\, d\tilde y}{(|\tilde y|^2 + h^2)^{n/2}} - \frac{1}{2} \int_{\widetilde B_\eta} \frac{h\, d\tilde y}{(|\tilde y|^2 + h^2)^{n/2}} \bigg| < \epsilon \int_{\widetilde B_\eta} \frac{h\, d\tilde y}{(|\tilde y|^2 + h^2)^{n/2}}. \tag{2.2.21}$$

Next, we estimate $\int_{\widetilde B_\eta} h\,(|\tilde y|^2 + h^2)^{-n/2}\, d\tilde y$. Making the change of variable $\tilde y \to \tilde z \triangleq \tilde y / h$,

$$\int_{\widetilde B_\eta} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y = \int_{\widetilde B_{\eta/h}} \frac{d\tilde z}{(|\tilde z|^2 + 1)^{n/2}}.$$

On one hand,

$$\int_{\widetilde B_{\eta/h}} \frac{d\tilde z}{(|\tilde z|^2 + 1)^{n/2}} \le \int_{\mathbb{R}^{n-1}} \frac{d\tilde z}{(|\tilde z|^2 + 1)^{n/2}} = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2})},$$

while on the other hand,

$$\lim_{h \to 0} \int_{\widetilde B_{\eta/h}} \frac{d\tilde z}{(|\tilde z|^2 + 1)^{n/2}} = \int_{\mathbb{R}^{n-1}} \frac{d\tilde z}{(|\tilde z|^2 + 1)^{n/2}} = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2})}.$$

Thus, there exists $\delta_5 = \delta_5(\eta, \epsilon)$ such that for any $0 < h < \delta_5$,

$$\bigg| \int_{\widetilde B_\eta} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y - \frac{\pi^{n/2}}{\Gamma(\frac{n}{2})} \bigg| < \epsilon \tag{2.2.22}$$

and, by noticing (2.2.21),

$$\bigg| \int_{\widetilde B_{\eta,1}} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y - \frac{\pi^{n/2}}{2\,\Gamma(\frac{n}{2})} \bigg| < C\epsilon. \tag{2.2.23}$$

It then follows from (2.2.18), (2.2.20) and (2.2.23) that

$$\Big| \widetilde I_\eta(h,t) - \frac{(4\pi)^{n/2}}{2}\, \varphi_1(0,t) \Big| < C\epsilon. \tag{2.2.24}$$

Now it suffices to show that $\widetilde I_\eta(h,t)$ is close to $I_\eta(h,t)$. First, because of (2.2.13), $|\tilde y|^2 + |h - y_n|^2$ is comparable to $|\tilde y|^2 + h^2$. More precisely, there exist positive constants $m_1 < 1$ and $M_1 > 1$ such that

$$m_1\big(|\tilde y|^2 + h^2\big) \le |\tilde y|^2 + |h - y_n|^2 \le M_1\big(|\tilde y|^2 + h^2\big). \tag{2.2.25}$$

Equivalently,

$$m_1\, |he_n - (\tilde y, 0)|^2 \le |he_n - y|^2 \le M_1\, |he_n - (\tilde y, 0)|^2. \tag{2.2.26}$$

Consequently, it follows from (2.2.16) and (2.2.17) that

$$\begin{aligned} |I_\eta(h,t) - \widetilde I_\eta(h,t)| \le{}& \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}} \Big| e^{-\frac{|he_n - y|^2}{4\tau}} - e^{-\frac{|he_n - (\tilde y, 0)|^2}{4\tau}} \Big|\, \big|\varphi_1(y, t-\tau)\big|\, \sqrt{1 + |D\phi_1(\tilde y)|^2}\, d\tilde y\, d\tau \\ &+ \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}}\, e^{-\frac{|he_n - (\tilde y, 0)|^2}{4\tau}} \Big| \varphi_1(y, t-\tau)\, \sqrt{1 + |D\phi_1(\tilde y)|^2} - \varphi_1(0, t-\tau) \Big|\, d\tilde y\, d\tau \\ \triangleq{}& I + II. \end{aligned} \tag{2.2.27}$$

For $II$: since $\varphi_1 \in C(\overline{\Gamma_1} \times [0,T])$, there exists $\delta_6 = \delta_6(\epsilon)$ such that when $\eta < \delta_6$,

$$\Big| \varphi_1(y, t-\tau)\, \sqrt{1 + |D\phi_1(\tilde y)|^2} - \varphi_1(0, t-\tau) \Big| < \epsilon, \qquad \forall\, \tilde y \in \widetilde B_{\eta,1},\ \tau \in [0, t].$$

As a result,

$$II \le \epsilon \int_0^t \int_{\widetilde B_{\eta,1}} \frac{h}{\tau^{n/2+1}}\, e^{-\frac{|\tilde y|^2 + h^2}{4\tau}}\, d\tilde y\, d\tau \le \epsilon\, 4^{n/2} \int_{\widetilde B_{\eta,1}} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}} \int_0^\infty \sigma^{n/2-1} e^{-\sigma}\, d\sigma\, d\tilde y \le C\epsilon \int_{\widetilde B_{\eta,1}} \frac{h}{(|\tilde y|^2 + h^2)^{n/2}}\, d\tilde y,$$

where the second inequality is due to the change of variable $\tau \to \sigma \triangleq \frac{|\tilde y|^2 + h^2}{4\tau}$. Now, by another change of variable $\tilde z \triangleq \tilde y / h$, we get

$$II \le C\epsilon \int_{\mathbb{R}^{n-1}} \frac{d\tilde z}{(|\tilde z|^2 + 1)^{n/2}} = C\epsilon. \tag{2.2.28}$$

To estimate $I$, first it is easy to see that for any $h > 0$ and $\tilde y \in \widetilde B_{\eta,1}$,

$$h \le |he_n - (\tilde y, 0)|. \tag{2.2.29}$$

Then, by (2.2.13),

$$\Big| |he_n - y|^2 - |he_n - (\tilde y, 0)|^2 \Big| = \big| (h - y_n)^2 - h^2 \big| = |y_n|\, |2h - y_n| \le C\, |\tilde y|^2 \big( 2h + |\tilde y|^2 \big) \le C\, |he_n - (\tilde y, 0)|^3.$$
Now it follows from the mean value theorem and (2.2.26) that

$$\Big| e^{-\frac{|he_n - y|^2}{4\tau}} - e^{-\frac{|he_n - (\tilde y, 0)|^2}{4\tau}} \Big| \le \frac{1}{4\tau}\, \Big| |he_n - y|^2 - |he_n - (\tilde y, 0)|^2 \Big|\, e^{-\frac{m_1 |he_n - (\tilde y, 0)|^2}{4\tau}} \le C\, \frac{|he_n - (\tilde y, 0)|^3}{\tau}\, e^{-\frac{m_1 |he_n - (\tilde y, 0)|^2}{4\tau}}. \tag{2.2.30}$$

Thus, based on (2.2.29) and (2.2.30), we attain

$$I \le C \int_0^t \int_{\widetilde B_{\eta,1}} \tau^{-n/2-2}\, e^{-\frac{m_1 |he_n - (\tilde y, 0)|^2}{4\tau}}\, |he_n - (\tilde y, 0)|^4\, d\tilde y\, d\tau.$$

Again, by reversing the order of integration and the change of variable $\tau \to \sigma \triangleq \frac{|he_n - (\tilde y, 0)|^2}{4\tau}$, we get

$$I \le C \int_{\widetilde B_{\eta,1}} |he_n - (\tilde y, 0)|^4 \int_0^t \tau^{-n/2-2}\, e^{-\frac{m_1 |he_n - (\tilde y, 0)|^2}{4\tau}}\, d\tau\, d\tilde y \le C \int_{\widetilde B_{\eta,1}} \frac{1}{|he_n - (\tilde y, 0)|^{n-2}} \int_0^\infty \sigma^{n/2} e^{-m_1 \sigma}\, d\sigma\, d\tilde y \le C \int_{\widetilde B_{\eta,1}} \frac{d\tilde y}{|\tilde y|^{n-2}}.$$

Hence, there exists $\delta_7 = \delta_7(\epsilon)$ such that when $\eta < \delta_7$,

$$I < \epsilon. \tag{2.2.31}$$

Combining (2.2.31) and (2.2.28), we get $|I_\eta(h,t) - \widetilde I_\eta(h,t)| < C\epsilon$. Therefore, we finish the proof. In summary, for any $\epsilon > 0$, we first determine $\delta_3(\epsilon)$, $\delta_4(\epsilon)$, $\delta_6(\epsilon)$, $\delta_7(\epsilon)$ and choose $\eta < \min\{\eta_0, \delta_3, \delta_4, \delta_6, \delta_7\}$. Then we determine $\delta_1(\eta,\epsilon)$, $\delta_2(\eta,\epsilon)$, $\delta_5(\eta,\epsilon)$ and choose $\delta < \min\{\delta_1, \delta_2, \delta_3, \delta_5\}$.

Theorem 2.2.6. Let $\varphi \in B_T$ and define $U$ to be the single-layer heat potential with density $\varphi$ as in (2.2.6).

(1) If $x \in \Gamma_1 \cup \Gamma_2$ and $t \in (0,T]$, then

$$\lim_{h \to 0^+} DU(x_h, t) \cdot \vec{n}(x) = \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y,\tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi(x,t). \tag{2.2.32}$$

(2) If $x \in \Gamma$ and $t \in (0,T]$, then

$$\lim_{h \to 0^+} DU(x_h, t) \cdot \vec{n}(x) = \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y,\tau)\, dS(y)\, d\tau + \frac{1}{4}\big(\varphi_1(x,t) + \varphi_2(x,t)\big), \tag{2.2.33}$$

where $\varphi_i$ ($i = 1$ or $2$) denotes the continuous extension of $\varphi|_{\Gamma_i \times (0,T]}$ to $\overline{\Gamma_i} \times [0,T]$.

Proof. (1) Without loss of generality, suppose $x \in \Gamma_1$; then by Lemma 2.2.4,

$$\lim_{h \to 0^+} DU_1(x_h, t) \cdot \vec{n}(x) = \int_0^t \int_{\Gamma_1} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y,\tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi(x,t).$$

In addition, since the distance between $x$ and $\Gamma_2$ is positive, it is easy to see that

$$\lim_{h \to 0^+} DU_2(x_h, t) \cdot \vec{n}(x) = \int_0^t \int_{\Gamma_2} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y,\tau)\, dS(y)\, d\tau.$$

Adding these two equations together, (2.2.32) follows.

(2) (2.2.33) is directly implied by Lemma 2.2.5.
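Two of the explicit constants in Lemma 2.2.5 and Theorem 2.2.6 can be sanity-checked numerically. The sketch below (my own illustration, not part of the thesis) verifies the identity $\int_{\mathbb{R}^{n-1}} (1 + |\tilde z|^2)^{-n/2}\, d\tilde z = \pi^{n/2} / \Gamma(n/2)$ for $n = 3$, and illustrates the $1/2$ versus $1/4$ jump in a flat-boundary model; the flat half-space geometry and the `erfc` closed form stated in the comments are assumptions of this illustration, since the thesis works on curved $C^2$ domains.

```python
import math

# (a) Identity used after (2.2.21):
#     ∫_{R^{n-1}} (1 + |z|^2)^{-n/2} dz = pi^{n/2} / Gamma(n/2).
# For n = 3 the left side reduces, in polar coordinates, to
#     2*pi * ∫_0^∞ r (1 + r^2)^{-3/2} dr,
# computed here by midpoint quadrature on [0, R] plus the analytic tail (1 + R^2)^{-1/2}.
def lhs_n3(R=200.0, N=200_000):
    h = R / N
    s = 0.0
    for k in range(N):
        r = (k + 0.5) * h
        s += r * (1.0 + r * r) ** -1.5
    s *= h
    return 2.0 * math.pi * (s + (1.0 + R * R) ** -0.5)

RHS_N3 = math.pi ** 1.5 / math.gamma(1.5)   # should equal 2*pi

# (b) Flat-boundary model of the jump (an assumed simplification).  For the half-space
# {y_n > 0} with unit density on the hyperplane {y_n = 0}, the Gaussian kernel factorizes
# across coordinates, and the normal derivative of the single-layer potential at depth h
# has the closed form
#     (1/2) * erfc(h / (2*sqrt(t)))  ->  1/2   as h -> 0+,
# matching the jump 1/2 in Theorem 2.2.6(1).  If the density is supported only on a
# half-hyperplane and x sits on its edge, the tangential Gaussian integral contributes
# exactly half by symmetry, so the limit is 1/4, matching Theorem 2.2.6(2).
def flat_jump(h, t=1.0, edge=False):
    v = 0.5 * math.erfc(h / (2.0 * math.sqrt(t)))
    return 0.5 * v if edge else v
```

The edge case halving is precisely the discrete analogue of the bisection limit $|\widetilde B_{\eta,1}| / |\widetilde B_\eta| \to 1/2$ used in the proof.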
2.3 Linear case

2.3.1 Definition of the local solution

In this section, we will show the existence and uniqueness of the solution to the following linear initial-boundary value problem:

$$\begin{cases} u_t(x,t) - \Delta u(x,t) = f(x,t) & \text{in } \Omega \times (0,T], \\ \frac{\partial u(x,t)}{\partial n(x)} + \beta(x,t)\, u(x,t) = g(x,t) & \text{on } (\Gamma_1 \cup \Gamma_2) \times (0,T], \\ u(x,0) = \psi(x) & \text{on } \Omega, \end{cases} \tag{2.3.1}$$

where $f \in C^{\alpha,\alpha/2}(\overline\Omega \times [0,T])$, $\beta, g \in B_T$ and $\psi \in C^1(\overline\Omega)$. We will first show the existence and then apply the existence result to derive the uniqueness.

Definition 2.3.1. For any $T > 0$, a solution to (2.3.1) on $\overline\Omega \times [0,T]$ means a function $u$ in $A_T$ that satisfies (2.3.1) pointwise and such that for any $(x,t) \in \Gamma \times (0,T]$, $\frac{\partial u(x,t)}{\partial n(x)}$ exists and

$$\frac{\partial u(x,t)}{\partial n(x)} + \frac{1}{2}\big(\beta_1(x,t) + \beta_2(x,t)\big)\, u(x,t) = \frac{1}{2}\big(g_1(x,t) + g_2(x,t)\big), \tag{2.3.2}$$

where $\beta_i$ and $g_i$ denote the continuous extensions of $\beta|_{\Gamma_i \times (0,T]}$ and $g|_{\Gamma_i \times (0,T]}$ to $\overline{\Gamma_i} \times [0,T]$ ($i = 1$ or $2$).

2.3.2 Auxiliary lemmas

Before showing the existence, we state some preliminary results.

Lemma 2.3.2. Suppose $\Omega$ is an open bounded subset of $\mathbb{R}^n$ with $\partial\Omega \in C^2$. Then there exists a constant $C > 0$ such that for any $x, y \in \partial\Omega$,

$$|(x - y) \cdot \vec{n}(x)| \le C\, |x - y|^2.$$

Proof. This is a standard fact; we omit the proof.

Lemma 2.3.3. Suppose $\Omega$ is an open, bounded set in $\mathbb{R}^n$ with $\partial\Omega \in C^2$, $0 \le a < n - 1$, $0 \le b < n - 1$. Then there exists a constant $C = C(a, b, n, \Omega)$ such that for any $x, z \in \partial\Omega$,

$$\int_{\partial\Omega} \frac{dS(y)}{|x - y|^a\, |y - z|^b} \le \begin{cases} C\, |x - z|^{n-1-a-b} & \text{if } a + b > n - 1, \\ C & \text{if } a + b < n - 1. \end{cases}$$

Proof. We refer the reader to ( [7], Lemma 1, Sec. 2, Chap. 5).

The following lemma is mentioned in ( [7], Theorem 2, Sec. 2, Chap. 5), but that reference does not explicitly give the estimate (2.3.6), which will be used elsewhere in this thesis. So for the convenience of the reader, we include a complete proof. For any $T > 0$, define

$$D_{\Omega,T} = \{(x,t;y,\tau) : x, y \in \partial\Omega,\ x \ne y,\ 0 \le \tau < t \le T\} \tag{2.3.3}$$

to be the domain of the functions $\{K_j\}_{j \ge 0}$ appearing in Lemma 2.3.4.
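Lemma 2.3.2 can be illustrated concretely. On the unit circle (a $C^2$ boundary in $\mathbb{R}^2$), a short computation gives $(x - y) \cdot \vec{n}(x) = |x - y|^2 / 2$ exactly, so the constant $C$ may be taken to be $1/2$ there. The snippet below (an illustration of mine, not part of the thesis) checks this:

```python
import math

# Lemma 2.3.2 on the unit circle in R^2: for x = (1, 0) with exterior normal
# n(x) = (1, 0) and y = (cos t, sin t),
#   (x - y) . n(x) = 1 - cos t = 2 sin^2(t/2),
#   |x - y|^2      = 2 - 2 cos t = 4 sin^2(t/2),
# so the ratio (x - y).n(x) / |x - y|^2 equals 1/2 for every boundary point y != x.
def ratio(t):
    x, n = (1.0, 0.0), (1.0, 0.0)
    y = (math.cos(t), math.sin(t))
    d = (x[0] - y[0], x[1] - y[1])
    dot = d[0] * n[0] + d[1] * n[1]
    dist2 = d[0] ** 2 + d[1] ** 2
    return dot / dist2
```

For a general $C^2$ boundary the constant depends on the curvature bound, but the quadratic vanishing of $(x - y) \cdot \vec{n}(x)$ near $y = x$ is the same phenomenon.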
The sequence of functions $\{K_j\}_{j \ge 0}$ will be utilized to find the explicit formula (2.3.26) for $\varphi$, in which $K^*$ is defined by (2.3.24); $K^*$ is evidently bounded by the function $K$ in Lemma 2.3.4.

Lemma 2.3.4. Let $K_0 : D_{\Omega,T} \to \mathbb{R}$ and let $C$ be a positive constant such that for any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$|K_0(x,t;y,\tau)| \le C\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}. \tag{2.3.4}$$

For any $j \ge 1$, define $K_j : D_{\Omega,T} \to \mathbb{R}$ by

$$K_j(x,t;y,\tau) \triangleq \int_\tau^t \int_{\partial\Omega} K_0(x,t;z,\sigma)\, K_{j-1}(z,\sigma;y,\tau)\, dS(z)\, d\sigma. \tag{2.3.5}$$

Then all the $K_j$ ($j \ge 1$) are well defined and the series $\sum_{j=0}^\infty |K_j|$ converges uniformly to some function $K$ on $D_{\Omega,T}$. Moreover, there exists a constant $C^* = C^*(n,\Omega,T)$ such that for any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$K(x,t;y,\tau) \le C^*\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}. \tag{2.3.6}$$

Proof. We first justify that all the $K_j$ ($j \ge 1$) are well defined. In fact, by (2.3.4) and (2.3.5), one has

$$|K_1(x,t;y,\tau)| \le C^2 \int_\tau^t (t-\sigma)^{-3/4} (\sigma-\tau)^{-3/4}\, d\sigma \int_{\partial\Omega} |x-z|^{-(n-3/2)}\, |z-y|^{-(n-3/2)}\, dS(z).$$

Making the change of variable $\rho \triangleq \frac{\sigma - \tau}{t - \tau}$ in $\sigma$, we obtain

$$|K_1(x,t;y,\tau)| \le C\, (t-\tau)^{-1/2} \int_{\partial\Omega} |x-z|^{-(n-3/2)}\, |z-y|^{-(n-3/2)}\, dS(z).$$

Now there are two cases.

• If $n > 2$, then by Lemma 2.3.3,

$$\int_{\partial\Omega} |x-z|^{-(n-3/2)}\, |z-y|^{-(n-3/2)}\, dS(z) \le C\, |x-y|^{-(n-2)}.$$

Therefore

$$|K_1(x,t;y,\tau)| \le C\, (t-\tau)^{-1/2}\, |x-y|^{-(n-2)}. \tag{2.3.7}$$

• If $n = 2$, then we cannot use Lemma 2.3.3 directly, since $(n - 3/2) + (n - 3/2) = n - 1$ is exactly the borderline. However, since $\Omega$ is bounded, $|z-y|^{-(n-3/2)} = |z-y|^{-(n-5/4)}\, |z-y|^{-1/4} \cdot |z-y|^{1/4} \cdot |z-y|^{-1/4}$ can be bounded as $|z-y|^{-(n-3/2)} \le C\, |z-y|^{-(n-5/4)} \cdot |z-y|^{-1/4}$; that is, we may trade a bounded factor to write $|z-y|^{-(n-3/2)} \le C\, |z-y|^{-(n-5/4)}$ only after strengthening one exponent. Concretely, Lemma 2.3.3 can be applied with exponents $a = n - 3/2$ and $b = n - 5/4$ (so that $a + b = 5/4 > n - 1$) to get

$$\int_{\partial\Omega} |x-z|^{-(n-3/2)}\, |z-y|^{-(n-3/2)}\, dS(z) \le C \int_{\partial\Omega} |x-z|^{-(n-3/2)}\, |z-y|^{-(n-5/4)}\, dS(z) \le C\, |x-y|^{-1/4},$$

using $|z-y|^{-(n-3/2)} = |z-y|^{-(n-5/4)}\, |z-y|^{1/4} \le C\, |z-y|^{-(n-5/4)}$ wait — more simply, since $|z-y|$ is bounded, $|z-y|^{-(n-3/2)} \le C\, |z-y|^{-(n-5/4)} \cdot |z-y|^{-1/4} \le C\, |z-y|^{-(n-5/4)-1/4}$; the usable bound is $|z-y|^{-(n-3/2)} \le C\, |z-y|^{-(n-5/4)}$ with $a+b = (n-3/2)+(n-5/4) = 5/4 > n-1$. Consequently,

$$|K_1(x,t;y,\tau)| \le C\, (t-\tau)^{-1/2}\, |x-y|^{-1/4} = C\, (t-\tau)^{-1/2}\, |x-y|^{-(n-7/4)}. \tag{2.3.8}$$

Comparing (2.3.4) with (2.3.7) and (2.3.8), the exponent of the $(t-\tau)$ factor increases by $1/4$ and the exponent of the $|x-y|$ factor increases by at least $1/4$.
In other words, the singularity of $K_1$ in both the time and space variables is weaker than that of $K_0$ by the fixed amount $1/4$. Thus, after finitely many steps, we can find $j_0$ (depending only on $n$) such that

$$|K_{j_0}(x,t;y,\tau)| \le C, \qquad \forall\, (x,t;y,\tau) \in D_{\Omega,T}, \tag{2.3.9}$$

for some constant $C$. Moreover, for any $j > j_0$, $K_j$ is also well defined and bounded.

From (2.3.4) and Lemma 2.3.3, there exists a constant $C_1$ such that

$$|K_0(x,t;y,\tau)| \le C_1\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}, \qquad \forall\, (x,t;y,\tau) \in D_{\Omega,T}, \tag{2.3.10}$$

and

$$\int_{\partial\Omega} \frac{dS(y)}{|x-y|^{n-3/2}} \le C_1, \qquad \forall\, x \in \partial\Omega.$$

For the rest of the proof, $C_1$ and $C$ will be fixed. Starting from (2.3.9), we prove by induction that for any $m \ge 0$ and any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$|K_{j_0+m}(x,t;y,\tau)| \le \frac{C\, Q^m\, (t-\tau)^{m/4}}{\Gamma(1 + m/4)}, \tag{2.3.11}$$

where $Q \triangleq C_1^2\, \Gamma(1/4)$. When $m = 0$, (2.3.11) is just (2.3.9). Now suppose (2.3.11) is true for $m = i$; we verify it for $m = i + 1$. Applying (2.3.11) with $m = i$ and (2.3.4), we obtain

$$\begin{aligned} |K_{j_0+i+1}(x,t;y,\tau)| &\le \int_\tau^t \int_{\partial\Omega} |K_0(x,t;z,\sigma)\, K_{j_0+i}(z,\sigma;y,\tau)|\, dS(z)\, d\sigma \\ &\le \frac{C_1\, C\, Q^i}{\Gamma(1 + i/4)} \int_\tau^t (t-\sigma)^{-3/4} (\sigma-\tau)^{i/4} \int_{\partial\Omega} |x-z|^{-(n-3/2)}\, dS(z)\, d\sigma \\ &\le \frac{C_1^2\, C\, Q^i}{\Gamma(1 + i/4)} \int_\tau^t (t-\sigma)^{-3/4} (\sigma-\tau)^{i/4}\, d\sigma = \frac{C\, Q^{i+1}}{\Gamma(1/4)\, \Gamma(1 + i/4)} \int_\tau^t (t-\sigma)^{-3/4} (\sigma-\tau)^{i/4}\, d\sigma, \end{aligned}$$

where the third inequality and the fourth equality are due to the definitions of $C_1$ and $Q$ respectively. By the change of variable $\rho \triangleq \frac{\sigma - \tau}{t - \tau}$ in $\sigma$, we attain

$$|K_{j_0+i+1}(x,t;y,\tau)| \le \frac{C\, Q^{i+1}\, (t-\tau)^{(i+1)/4}}{\Gamma(1/4)\, \Gamma(1 + i/4)} \int_0^1 (1-\rho)^{-3/4}\, \rho^{i/4}\, d\rho = \frac{C\, Q^{i+1}\, (t-\tau)^{(i+1)/4}}{\Gamma\big(1 + (i+1)/4\big)},$$

where the last equality uses the Beta integral $\int_0^1 (1-\rho)^{-3/4} \rho^{i/4}\, d\rho = \Gamma(1/4)\, \Gamma(1 + i/4) \big/ \Gamma\big(1 + (i+1)/4\big)$. Consequently, (2.3.11) is true for $m = i + 1$ and therefore for any $m \ge 0$. By (2.3.11), we get that for any $m \ge 0$ and any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$|K_{j_0+m}(x,t;y,\tau)| \le \frac{C\, Q^m\, T^{m/4}}{\Gamma(1 + m/4)}. \tag{2.3.12}$$

This estimate is significant because the right-hand side of (2.3.12) is independent of $x$, $y$, $t$, $\tau$. By applying the ratio test, we can prove

$$\sum_{m=0}^\infty \frac{Q^m\, T^{m/4}}{\Gamma(1 + m/4)} < \infty. \tag{2.3.13}$$

Next, due to (2.3.4) and the fact that the singularity of $K_0$ is stronger than that of any other $K_j$ ($j \ge 1$), we can find a constant $C_2 = C_2(n,\Omega,T)$ such that for any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$\sum_{j=0}^{j_0} |K_j(x,t;y,\tau)| \le C_2\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}. \tag{2.3.14}$$

Combining (2.3.14), (2.3.12) and (2.3.13), it is readily seen that $\sum_{j=0}^\infty |K_j(x,t;y,\tau)|$ converges to $K(x,t;y,\tau)$ uniformly on $D_{\Omega,T}$, and moreover there exists a constant $C^*$, depending only on $n$, $\Omega$ and $T$, such that for any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$K(x,t;y,\tau) \le C^*\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}.$$

Lemma 2.3.5. If $f \in C^{\alpha,\alpha/2}(\overline\Omega \times [0,T])$ and

$$W(x,t) \triangleq \int_0^t \int_\Omega \Phi(x - y, t - \tau)\, f(y,\tau)\, dy\, d\tau, \qquad \forall\, (x,t) \in \overline\Omega \times [0,T],$$

then $W \in C^{2,1}(\overline\Omega \times (0,T])$ and

$$(W_t - \Delta W)(x,t) = f(x,t), \qquad \forall\, (x,t) \in \Omega \times (0,T].$$

Proof. This is a standard argument; we refer the reader to ( [7], Theorem 9, Sec. 5, Chap. 1).

2.3.3 Existence

The idea of the proof of the following Theorem 2.3.6 is analogous to ( [7], Theorem 2, Sec. 3, Chap. 5).

Theorem 2.3.6. For any $T > 0$, there exists a solution $u \in A_T$ to (2.3.1) on $\overline\Omega \times [0,T]$.

Proof. We will construct a solution $u$ to (2.3.1). First, since $\psi \in C^1(\overline\Omega)$ and $\partial\Omega \in C^2$, one can extend $\psi$ to a larger domain. More precisely, there exists an open set $\Omega_1 \supset \overline\Omega$ and $\psi_1 \in C^1(\overline{\Omega_1})$ such that $\psi_1$ agrees with $\psi$ on $\overline\Omega$. In the rest of the proof, for convenience, we simply write $\psi$ for $\psi_1$. We look for a solution $u$ of the following form:

$$u(x,t) = \int_{\Omega_1} \Phi(x - y, t)\, \psi(y)\, dy + \int_0^t \int_\Omega \Phi(x - y, t - \tau)\, f(y,\tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \Phi(x - y, t - \tau)\, \varphi(y,\tau)\, dS(y)\, d\tau, \qquad \forall\, (x,t) \in \overline\Omega \times [0,T], \tag{2.3.15}$$

where $\varphi \in B_T$ will be determined later.
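The density $\varphi$ will be produced by a Picard-type iteration whose convergence is governed precisely by factorial-type bounds such as (2.3.13). As a toy illustration (my own sketch, not from the thesis), the same mechanism can be watched on the scalar Volterra equation $\varphi(t) = 1 + \int_0^t \varphi(\tau)\, d\tau$ on $[0,1]$, whose successive approximations gain one power of $t$ per step and converge factorially fast to the exact solution $e^t$; the function name and grid parameters below are of course hypothetical choices.

```python
import math

# Successive approximations for the Volterra equation
#     phi(t) = 1 + ∫_0^t phi(tau) d tau   on [0, 1],   exact solution phi(t) = e^t.
# Each iterate gains one power of t, so the error after j steps is of order t^{j+1}/(j+1)!,
# the same factorial decay that makes the series of iterated kernels converge in (2.3.13).
def picard(num_iter=30, n=1000):
    dt = 1.0 / n
    grid = [k * dt for k in range(n + 1)]
    phi = [1.0] * (n + 1)                      # phi_0 = 1 (the inhomogeneous term)
    for _ in range(num_iter):
        new = [1.0]
        acc = 0.0
        for k in range(1, n + 1):
            acc += 0.5 * (phi[k - 1] + phi[k]) * dt   # trapezoid rule for ∫_0^{t_k} phi
            new.append(1.0 + acc)
        phi = new
    return grid, phi
```

With a 1000-point grid and 30 iterations, the iterate agrees with $e^t$ up to the $O(dt^2)$ quadrature error; the iteration itself has long since converged.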
Due to Lemma 2.3.5, it is easy to see that the function $u$ defined in (2.3.15) belongs to $A_T$ and satisfies the first and the third equations in (2.3.1). So in order to verify that $u$ is the solution, the only things left to check are

$$\frac{\partial u(x,t)}{\partial n(x)} + \beta(x,t)\, u(x,t) = g(x,t), \qquad \forall\, (x,t) \in (\Gamma_1 \cup \Gamma_2) \times (0,T] \tag{2.3.16}$$

and

$$\frac{\partial u(x,t)}{\partial n(x)} + \frac{1}{2}\big(\beta_1(x,t) + \beta_2(x,t)\big)\, u(x,t) = \frac{1}{2}\big(g_1(x,t) + g_2(x,t)\big), \qquad \forall\, (x,t) \in \Gamma \times (0,T]. \tag{2.3.17}$$

The plan is to first find a function $\varphi \in B_T$ such that $u$ defined in (2.3.15) satisfies (2.3.16); we will then prove that this $u$ satisfies (2.3.17) as well. By (1.4.1) and (2.3.15), for any $(x,t) \in (\Gamma_1 \cup \Gamma_2) \times (0,T]$, it follows from Lebesgue's dominated convergence theorem and Theorem 2.2.6 that

$$\frac{\partial u(x,t)}{\partial n(x)} = \int_{\Omega_1} \frac{\partial \Phi(x - y, t)}{\partial n(x)}\, \psi(y)\, dy + \int_0^t \int_\Omega \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, f(y,\tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y,\tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi(x,t). \tag{2.3.18}$$

Therefore, (2.3.16) reduces to: for any $(x,t) \in (\Gamma_1 \cup \Gamma_2) \times (0,T]$,

$$\varphi(x,t) = \int_0^t \int_{\partial\Omega} K(x,t;y,\tau)\, \varphi(y,\tau)\, dS(y)\, d\tau + H(x,t), \tag{2.3.19}$$

where

$$K(x,t;y,\tau) = -2 \Big( \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)} + \beta(x,t)\, \Phi(x - y, t - \tau) \Big)$$

and

$$H(x,t) = -2 \int_{\Omega_1} \Big( \frac{\partial \Phi(x - y, t)}{\partial n(x)} + \beta(x,t)\, \Phi(x - y, t) \Big)\, \psi(y)\, dy - 2 \int_0^t \int_\Omega \Big( \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)} + \beta(x,t)\, \Phi(x - y, t - \tau) \Big)\, f(y,\tau)\, dy\, d\tau + 2\, g(x,t).$$

In other words, the proof of (2.3.16) becomes the search for a fixed point $\varphi \in B_T$ of (2.3.19). In the following, we construct such a fixed point in $B_T$. Noticing

$$\frac{1}{(t-\tau)^{n/2}}\, e^{-\frac{|x-y|^2}{4(t-\tau)}} = \Big( \frac{|x-y|^2}{t-\tau} \Big)^{\frac{2n-3}{4}} e^{-\frac{|x-y|^2}{4(t-\tau)}}\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)} \le C\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}$$

and, by a similar argument,

$$\frac{|x-y|^2}{(t-\tau)^{n/2+1}}\, e^{-\frac{|x-y|^2}{4(t-\tau)}} \le C\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)},$$

it then follows from Lemma 2.3.2 that for any $x, y \in \partial\Omega$ and $0 \le \tau < t \le T$,

$$|K(x,t;y,\tau)| \le C \Big( \frac{|x-y|^2}{(t-\tau)^{n/2+1}} + \frac{1}{(t-\tau)^{n/2}} \Big) e^{-\frac{|x-y|^2}{4(t-\tau)}} \le C\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}. \tag{2.3.20}$$

Then, using the fact $\psi \in C^1(\overline{\Omega_1})$ and integration by parts, we obtain

$$\int_{\Omega_1} \frac{\partial \Phi(x - y, t)}{\partial n(x)}\, \psi(y)\, dy = -\int_{\Omega_1} D_y \Phi(x - y, t) \cdot \vec{n}(x)\, \psi(y)\, dy = -\int_{\partial\Omega_1} \Phi(x - y, t)\, \psi(y)\, \vec{n}(y) \cdot \vec{n}(x)\, dS(y) + \int_{\Omega_1} \Phi(x - y, t)\, D\psi(y) \cdot \vec{n}(x)\, dy. \tag{2.3.21}$$

Consequently, as a function of $(x, t)$,

$$\int_{\Omega_1} \frac{\partial \Phi(x - y, t)}{\partial n(x)}\, \psi(y)\, dy \in C\big(\partial\Omega \times [0,T]\big) \subset B_T.$$

It is then straightforward to check that $H \in B_T$. Next, two sequences of functions $\{K_j\}_{j \ge 0}$ and $\{\varphi_j\}_{j \ge 0}$ are constructed as follows. First, define $K_0 = K$ on $D_{\Omega,T}$ and $\varphi_0 = H$ on $(\Gamma_1 \cup \Gamma_2) \times (0,T]$. Then for any $j \ge 1$, define

$$K_j(x,t;y,\tau) = \int_\tau^t \int_{\partial\Omega} K_0(x,t;z,\sigma)\, K_{j-1}(z,\sigma;y,\tau)\, dS(z)\, d\sigma \tag{2.3.22}$$

on $D_{\Omega,T}$ and

$$\varphi_j(x,t) = \sum_{i=0}^{j-1} \int_0^t \int_{\partial\Omega} K_i(x,t;y,\tau)\, H(y,\tau)\, dS(y)\, d\tau + H(x,t) \tag{2.3.23}$$

on $(\Gamma_1 \cup \Gamma_2) \times (0,T]$. Because of (2.3.4), we can prove by induction that for any $j \ge 0$, $\varphi_j$ is well defined and belongs to $B_T$. Our goal is to show that $\varphi_j(x,t)$ converges uniformly to some function $\varphi(x,t)$ on $(\Gamma_1 \cup \Gamma_2) \times (0,T]$ as $j \to \infty$, which makes $\varphi$ a fixed point of (2.3.19) in $B_T$. Writing

$$K^*(x,t;y,\tau) \triangleq \sum_{j=0}^\infty K_j(x,t;y,\tau), \tag{2.3.24}$$

by Lemma 2.3.4 we know $K^*$ is well defined and $\sum_{j=0}^\infty K_j$ converges uniformly to $K^*$ on $D_{\Omega,T}$. Moreover, there exists a constant $C^* = C^*(n,\Omega,T)$ such that for any $(x,t;y,\tau) \in D_{\Omega,T}$,

$$|K^*(x,t;y,\tau)| \le C^*\, (t-\tau)^{-3/4}\, |x-y|^{-(n-3/2)}. \tag{2.3.25}$$

Consequently, it follows from (2.3.23) and (2.3.24) that $\varphi_j$ converges uniformly to the function $\varphi$ on $(\Gamma_1 \cup \Gamma_2) \times (0,T]$, where

$$\varphi(x,t) \triangleq \int_0^t \int_{\partial\Omega} K^*(x,t;y,\tau)\, H(y,\tau)\, dS(y)\, d\tau + H(x,t). \tag{2.3.26}$$

Thus, $\varphi$ is a fixed point of (2.3.19) in $B_T$, and therefore the function $u$ defined in (2.3.15) satisfies (2.3.16). Following our plan, it only remains to show that this function $u$ satisfies (2.3.17) as well.
Making use of (2.3.15), (1.4.1) and Theorem 2.2.6, we get that for any $x \in \Gamma$ and $0 < t \le T$, $\frac{\partial u(x,t)}{\partial n(x)}$ exists and

$$\frac{\partial u(x,t)}{\partial n(x)} = \int_{\Omega_1} \frac{\partial \Phi(x - y, t)}{\partial n(x)}\, \psi(y)\, dy + \int_0^t \int_\Omega \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, f(y, \tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{4}\, \varphi_1(x, t) + \frac{1}{4}\, \varphi_2(x, t). \tag{2.3.27}$$

Then we choose two sequences of points $\{\xi_k\}_{k \ge 1} \subset \Gamma_1$ and $\{z_j\}_{j \ge 1} \subset \Gamma_2$ which converge to $x$. It follows from (2.3.18) that

$$\frac{\partial u(\xi_k, t)}{\partial n(\xi_k)} = \int_{\Omega_1} \frac{\partial \Phi(\xi_k - y, t)}{\partial n(\xi_k)}\, \psi(y)\, dy + \int_0^t \int_\Omega \frac{\partial \Phi(\xi_k - y, t - \tau)}{\partial n(\xi_k)}\, f(y, \tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(\xi_k - y, t - \tau)}{\partial n(\xi_k)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi(\xi_k, t)$$

and

$$\frac{\partial u(z_j, t)}{\partial n(z_j)} = \int_{\Omega_1} \frac{\partial \Phi(z_j - y, t)}{\partial n(z_j)}\, \psi(y)\, dy + \int_0^t \int_\Omega \frac{\partial \Phi(z_j - y, t - \tau)}{\partial n(z_j)}\, f(y, \tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(z_j - y, t - \tau)}{\partial n(z_j)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi(z_j, t).$$

Taking $k \to \infty$ and $j \to \infty$, we obtain

$$\lim_{k \to \infty} \frac{\partial u(\xi_k, t)}{\partial n(\xi_k)} = \int_{\Omega_1} \frac{\partial \Phi(x - y, t)}{\partial n(x)}\, \psi(y)\, dy + \int_0^t \int_\Omega \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, f(y, \tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi_1(x, t) \tag{2.3.28}$$

and

$$\lim_{j \to \infty} \frac{\partial u(z_j, t)}{\partial n(z_j)} = \int_{\Omega_1} \frac{\partial \Phi(x - y, t)}{\partial n(x)}\, \psi(y)\, dy + \int_0^t \int_\Omega \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, f(y, \tau)\, dy\, d\tau + \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x - y, t - \tau)}{\partial n(x)}\, \varphi(y, \tau)\, dS(y)\, d\tau + \frac{1}{2}\, \varphi_2(x, t). \tag{2.3.29}$$

Adding (2.3.28) and (2.3.29) together and noticing (2.3.27), we attain

$$\frac{\partial u(x, t)}{\partial n(x)} = \frac{1}{2} \Big( \lim_{k \to \infty} \frac{\partial u(\xi_k, t)}{\partial n(\xi_k)} + \lim_{j \to \infty} \frac{\partial u(z_j, t)}{\partial n(z_j)} \Big). \tag{2.3.30}$$

Moreover, since $u$ satisfies (2.3.16), we have

$$\frac{\partial u(\xi_k, t)}{\partial n(\xi_k)} + \beta(\xi_k, t)\, u(\xi_k, t) = g(\xi_k, t), \qquad \frac{\partial u(z_j, t)}{\partial n(z_j)} + \beta(z_j, t)\, u(z_j, t) = g(z_j, t).$$

Sending $k \to \infty$ and $j \to \infty$, we obtain

$$\lim_{k \to \infty} \frac{\partial u(\xi_k, t)}{\partial n(\xi_k)} = g_1(x, t) - \beta_1(x, t)\, u(x, t) \tag{2.3.31}$$

and

$$\lim_{j \to \infty} \frac{\partial u(z_j, t)}{\partial n(z_j)} = g_2(x, t) - \beta_2(x, t)\, u(x, t). \tag{2.3.32}$$

Combining (2.3.30), (2.3.31) and (2.3.32) together, (2.3.17) follows.

2.3.4 Comparison principles, uniqueness and global solution

Next, we will prove the comparison principle and the uniqueness of the solution by applying Theorem 2.3.6. But before that, let us prove the following easier comparison result.

Lemma 2.3.7.
Suppose in (2.3.1) that $f \ge 0$ on $\overline\Omega \times [0, T]$, $\psi > 0$ on $\overline\Omega$ and $\inf_{(\Gamma_1 \cup \Gamma_2) \times [0,T]} g > 0$. Then the solution $u > 0$ on $\overline\Omega \times [0, T]$.

Proof. Since $\psi > 0$ on $\overline\Omega$, we have $m \triangleq \min_{\overline\Omega} \psi > 0$. Now we claim $u > 0$ on $\overline\Omega \times [0, T]$. If not, then there exist $x_0 \in \overline\Omega$ and $t_0 \in (0, T]$ such that

$$u(x_0, t_0) = 0 = \min_{\overline\Omega \times [0, t_0]} u.$$

By the strong maximum principle, $x_0 \in \partial\Omega$. If $x_0 \in \Gamma_1 \cup \Gamma_2$, then

$$0 < g(x_0, t_0) = \frac{\partial u(x_0, t_0)}{\partial n(x_0)} + \beta(x_0, t_0)\, u(x_0, t_0) = \frac{\partial u(x_0, t_0)}{\partial n(x_0)} \le 0,$$

which is impossible. If $x_0 \in \Gamma$, then

$$0 < \frac{1}{2}\big(g_1(x_0, t_0) + g_2(x_0, t_0)\big) = \frac{\partial u(x_0, t_0)}{\partial n(x_0)} + \frac{1}{2}\big(\beta_1(x_0, t_0) + \beta_2(x_0, t_0)\big)\, u(x_0, t_0) = \frac{\partial u(x_0, t_0)}{\partial n(x_0)} \le 0,$$

which is also a contradiction. Thus, the lemma follows.

Corollary 2.3.8. Suppose in (2.3.1) that $f \ge 0$ on $\overline\Omega \times [0, T]$, $\psi \ge 0$ on $\overline\Omega$ and $g \ge 0$ on $(\Gamma_1 \cup \Gamma_2) \times (0, T]$. Then the solution $u \ge 0$ on $\overline\Omega \times [0, T]$. In particular, the solution to (2.3.1) on $\overline\Omega \times [0, T]$ is unique. If, in addition, $\psi \not\equiv 0$ on $\overline\Omega$, then $u(x, t) > 0$ for any $(x, t) \in \overline\Omega \times (0, T]$.

Proof. Due to Theorem 2.3.6, there exists a solution $v \in A_T$ to the following problem:

$$\begin{cases} v_t(x,t) - \Delta v(x,t) = 1 & \text{in } \Omega \times (0,T], \\ \frac{\partial v(x,t)}{\partial n(x)} + \beta(x,t)\, v(x,t) = 1 & \text{on } (\Gamma_1 \cup \Gamma_2) \times (0,T], \\ v(x, 0) = 1 & \text{on } \Omega. \end{cases}$$

Now for any $\epsilon > 0$, define $w_\epsilon = u + \epsilon v$; then $w_\epsilon$ satisfies

$$\begin{cases} (w_\epsilon)_t(x,t) - \Delta w_\epsilon(x,t) = f + \epsilon \ge \epsilon & \text{in } \Omega \times (0,T], \\ \frac{\partial w_\epsilon(x,t)}{\partial n(x)} + \beta(x,t)\, w_\epsilon(x,t) = g + \epsilon \ge \epsilon & \text{on } (\Gamma_1 \cup \Gamma_2) \times (0,T], \\ w_\epsilon(x, 0) = \psi + \epsilon \ge \epsilon & \text{on } \Omega. \end{cases}$$

By invoking Lemma 2.3.7, $w_\epsilon \ge 0$ on $\overline\Omega \times [0, T]$. Taking $\epsilon \to 0$, we get $u \ge 0$ on $\overline\Omega \times [0, T]$.

Now we further assume that $\psi$ is not identically $0$ on $\overline\Omega$. If there exists some $(x_0, t_0) \in \overline\Omega \times (0, T]$ such that $u(x_0, t_0) = 0$, then

$$u(x_0, t_0) = \min_{\overline\Omega \times [0, t_0]} u.$$

By the strong maximum principle and $\psi \not\equiv 0$, $x_0$ has to be on the boundary $\partial\Omega$. In addition, it follows from the boundary condition

$$\frac{\partial u(x_0, t_0)}{\partial n(x_0)} + \beta(x_0, t_0)\, u(x_0, t_0) = g(x_0, t_0)$$

that $\frac{\partial u(x_0, t_0)}{\partial n(x_0)} \ge 0$.
But on the other hand, the Hopf lemma implies that $\frac{\partial u(x_0, t_0)}{\partial n(x_0)} < 0$, which is a contradiction. So $u(x, t) > 0$ for any $(x, t) \in \overline\Omega \times (0, T]$.

Corollary 2.3.9. Let $u$ be a solution to (2.3.1). Then for any $T > 0$, there exists a constant $C = C(\Omega, \beta, T)$ such that for any $x \in \overline\Omega$ and $0 \le t \le T$,

$$|u(x, t)| \le C \Big( \sup_{\Omega \times (0,T]} |f| + \sup_{(\Gamma_1 \cup \Gamma_2) \times (0,T]} |g| + \sup_{\overline\Omega} |\psi| \Big).$$

Proof. Similar to the proof of Corollary 2.3.8, there exists a solution $v \in A_T$ to the following problem:

$$\begin{cases} v_t(x,t) - \Delta v(x,t) = 1 & \text{in } \Omega \times (0,T], \\ \frac{\partial v(x,t)}{\partial n(x)} + \beta(x,t)\, v(x,t) = 1 & \text{on } (\Gamma_1 \cup \Gamma_2) \times (0,T], \\ v(x, 0) = 1 & \text{on } \Omega. \end{cases}$$

Let $K = \sup_{\Omega \times (0,T]} |f| + \sup_{(\Gamma_1 \cup \Gamma_2) \times (0,T]} |g| + \sup_{\overline\Omega} |\psi|$. Then it follows from Corollary 2.3.8 that $|u| \le K v$. Since $v$ is bounded on $\overline\Omega \times [0, T]$, the conclusion follows.

By combining Theorem 2.3.6 and Corollary 2.3.8, the existence and uniqueness of the global solution is established.

Theorem 2.3.10. The linear problem (2.3.1) has a unique global solution $u \in C^{2,1}(\Omega \times (0, \infty)) \cap C(\overline\Omega \times [0, \infty))$, where a global solution means a function that solves (2.3.1) for any $T > 0$.

2.4 Nonlinear case

2.4.1 Definition of the local solution

This section is devoted to the existence and uniqueness of the solution to the following nonlinear initial-boundary value problem:

$$\begin{cases} u_t(x,t) - \Delta u(x,t) = f(x,t) & \text{in } \Omega \times (0,T], \\ \frac{\partial u(x,t)}{\partial n(x)} = F\big(x, u(x,t)\big) & \text{on } \Gamma_1 \times (0,T], \\ \frac{\partial u(x,t)}{\partial n(x)} = 0 & \text{on } \Gamma_2 \times (0,T], \\ u(x, 0) = \psi(x) & \text{on } \Omega, \end{cases} \tag{2.4.1}$$

where $f \in C^{\alpha,\alpha/2}(\overline\Omega \times [0, T])$, $F \in C^1(\Gamma_1 \times \mathbb{R})$ and $\psi \in C^1(\overline\Omega)$. For any $(x, \sigma) \in \Gamma_1 \times \mathbb{R}$, we will use $F_{x_i}(x, \sigma)$ ($1 \le i \le n$) to denote the $i$th spatial derivative of $F$ and $F_\sigma(x, \sigma)$ to denote the derivative in its $(n+1)$th variable.

Definition 2.4.1. For any $T > 0$, a solution to (2.4.1) on $\overline\Omega \times [0, T]$ means a function $u$ in $A_T$ that satisfies (2.4.1) pointwise and such that for any $(x, t) \in \Gamma \times (0, T]$, $\frac{\partial u(x,t)}{\partial n(x)}$ exists and

$$\frac{\partial u(x, t)}{\partial n(x)} = \frac{1}{2}\, F\big(x, u(x, t)\big). \tag{2.4.2}$$

2.4.2 Comparison principles and uniqueness

This time, we will first show some comparison principles and then discuss the existence of the solution.

Theorem 2.4.2. Suppose $u_i \in A_T$ ($i = 1, 2$) is the solution to (2.4.1) on $\overline\Omega \times [0, T]$ with right-hand side $f_i$, $F_i$ and $\psi_i$. If $f_1 \ge f_2$ on $\overline\Omega \times [0, T]$, $F_1 \ge F_2$ on $\Gamma_1 \times \mathbb{R}$ and $\psi_1 \ge \psi_2$ on $\overline\Omega$, then $u_1 \ge u_2$ on $\overline\Omega \times [0, T]$. As a consequence, the solution to (2.4.1) is unique.

Proof. Let $w = u_1 - u_2$, $f = f_1 - f_2$ and $\psi = \psi_1 - \psi_2$. Then $f \ge 0$ on $\overline\Omega \times [0, T]$ and $\psi \ge 0$ on $\overline\Omega$. In addition, since $F_1 \ge F_2$ on $\Gamma_1 \times \mathbb{R}$, we have

$$F_1\big(x, u_1(x,t)\big) - F_2\big(x, u_2(x,t)\big) \ge F_1\big(x, u_1(x,t)\big) - F_1\big(x, u_2(x,t)\big) = \beta(x, t)\, w(x, t),$$

where

$$\beta(x, t) = \int_0^1 (F_1)_\sigma\big(x,\, s u_1(x,t) + (1 - s) u_2(x,t)\big)\, ds.$$

Thus, $w$ satisfies

$$\begin{cases} w_t(x,t) - \Delta w(x,t) = f(x,t) \ge 0 & \text{in } \Omega \times (0,T], \\ \frac{\partial w(x,t)}{\partial n(x)} - \beta(x,t)\, w(x,t) \ge 0 & \text{on } \Gamma_1 \times (0,T], \\ \frac{\partial w(x,t)}{\partial n(x)} = 0 & \text{on } \Gamma_2 \times (0,T], \\ w(x, 0) = \psi(x) \ge 0 & \text{on } \Omega. \end{cases}$$

Now it follows from Corollary 2.3.8 that $w \ge 0$.

Theorem 2.4.3. Suppose $u \in A_T$ is the solution to (2.4.1) with $f \ge 0$ on $\overline\Omega \times [0, T]$, $F(\cdot, 0) \ge 0$ on $\Gamma_1$ and $\psi \ge 0$ on $\overline\Omega$; then $u \ge 0$ on $\overline\Omega \times [0, T]$. If in addition $\psi \not\equiv 0$, then $u(x, t) > 0$ for all $x \in \overline\Omega$, $0 < t \le T$.

Proof. To prove the first statement, we write

$$F\big(x, u(x,t)\big) = F\big(x, u(x,t)\big) - F(x, 0) + F(x, 0) = \beta(x, t)\, u(x, t) + F(x, 0),$$

where

$$\beta(x, t) = \int_0^1 F_\sigma\big(x,\, s u(x,t)\big)\, ds.$$

Hence, $u$ satisfies

$$\begin{cases} u_t(x,t) - \Delta u(x,t) = f(x,t) \ge 0 & \text{in } \Omega \times (0,T], \\ \frac{\partial u(x,t)}{\partial n(x)} - \beta(x,t)\, u(x,t) = F(x, 0) \ge 0 & \text{on } \Gamma_1 \times (0,T], \\ \frac{\partial u(x,t)}{\partial n(x)} = 0 & \text{on } \Gamma_2 \times (0,T], \\ u(x, 0) = \psi(x) \ge 0 & \text{on } \Omega. \end{cases}$$

It then follows from Corollary 2.3.8 that $u \ge 0$. Now suppose additionally that $\psi \not\equiv 0$; then, similarly to the proof of Corollary 2.3.8, by applying the strong maximum principle and the Hopf lemma, we get $u(x, t) > 0$ for all $x \in \overline\Omega$, $0 < t \le T$.

2.4.3 Existence

Now we turn to the existence of the solution.
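Before the functional-analytic construction, it may help to see the nonlinear flux condition of (2.4.1) act in the simplest possible setting. The sketch below is a one-dimensional toy analogue with illustrative parameters (the function name, grid sizes and exponent $q = 2$ are my own choices; the thesis itself works in dimension $n \ge 2$ and proves blow-up rigorously, which this experiment does not): $u_t = u_{xx}$ on $(0,1)$ with insulated left end $u_x(0,t) = 0$ and nonlinear flux $u_x(1,t) = u(1,t)^q$.

```python
# Toy 1D analogue of (2.4.1), discretized with an explicit scheme and ghost points
# for the two Neumann conditions.  It only illustrates the monotone growth driven
# by the nonlinear boundary flux, consistent with the comparison principles above;
# it is not a proof of blow-up.
def heat_nonlinear_flux(q=2.0, N=50, t_end=0.1):
    dx = 1.0 / N
    dt = 0.4 * dx * dx                      # explicit stability: dt/dx^2 <= 1/2
    r = dt / (dx * dx)
    u = [1.0] * (N + 1)                     # initial datum u0 = 1
    for _ in range(int(t_end / dt)):
        new = [0.0] * (N + 1)
        new[0] = u[0] + r * (2.0 * u[1] - 2.0 * u[0])             # ghost: u_x(0) = 0
        for i in range(1, N):
            new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        flux = u[N] ** q                                          # ghost: u_x(1) = u^q
        new[N] = u[N] + r * (2.0 * u[N - 1] - 2.0 * u[N] + 2.0 * dx * flux)
        u = new
    return u
```

At the early time $t = 0.1$ the boundary value has visibly risen above the initial datum while the minimum stays at its initial level, reflecting that heat only enters through $\Gamma_1$.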
As is common for nonlinear problems, we take advantage of the theory for the linear problems together with a fixed point theorem. For any T > 0 and R > 0, define XT = C(Ω̄ × [0, T]) and XT,R = {v ∈ XT : ||v|| ≤ R}, where the norm on XT is the supremum norm

    ||u|| = max_{(x,t)∈Ω̄×[0,T]} |u(x, t)|,   ∀ u ∈ XT.

Then XT is a Banach space and XT,R is a closed, and therefore complete, subset of XT. For any v ∈ XT,R, it follows from Theorem 2.3.6 and Corollary 2.3.8 that there exists a unique solution u ∈ AT to the following problem:

    u_t(x, t) − ∆u(x, t) = f(x, t)       in Ω × (0, T],
    ∂u(x,t)/∂n(x) = F(x, v(x, t))        on Γ1 × (0, T],          (2.4.3)
    ∂u(x,t)/∂n(x) = 0                    on Γ2 × (0, T],
    u(x, 0) = ψ(x)                       on Ω̄.

Thus, this determines a mapping ΨT : XT,R → AT. Our strategy is to choose suitable T and R such that ΨT has a fixed point in XT,R, which turns out to be the unique solution to (2.4.1). In the proof of Theorem 2.4.5, we will utilize the Schauder fixed point theorem, which requires verifying the following three requirements: (i) ΨT maps XT,R to AT ∩ XT,R for some suitable T and R; (ii) ΨT : XT,R → XT,R is continuous; (iii) ΨT(XT,R) is precompact in XT,R.

Requirement (iii) is the most technical part, so in order to make the argument more readable, we state the following Lemma 2.4.4 separately; it will be used in the proof of (iii). Lemma 2.4.4 is a fact mentioned without justification in the proof of ([7], Theorem 13, Sec. 5, Chap. 7), but here, for the convenience of the reader, we give a complete proof.

Lemma 2.4.4. Given T > 0 and {ϕj}j≥1 ⊂ L∞((Γ1 ∪ Γ2) × (0, T]), define

    wj(x, t) = ∫_0^t ∫_∂Ω Φ(x − y, t − τ) ϕj(y, τ) dS(y) dτ,   ∀ (x, t) ∈ Ω̄ × [0, T].          (2.4.4)

If {ϕj}j≥1 are uniformly bounded on (Γ1 ∪ Γ2) × (0, T], then {wj}j≥1 are uniformly bounded and equicontinuous on Ω̄ × [0, T].

Proof.
From the assumption, there is a constant C ∗ such that for any j ≥ 1 and (x, t) ∈ (Γ1 ∪ Γ2 ) × (0, T ], |ϕj (x, t)| ≤ C ∗ . (2.4.5) In the rest of the proof, we will use C to denote a constant which only depends on n, Ω, T 51 and C ∗ . Noticing that |Φ(x − y, t − τ )| ≤ |x−y|2 − C (t − τ )−n/2 e 4(t−τ ) ≤ C (t − τ )−3/4 |x − y|−(n−3/2) , (2.4.6) it then follows from (2.4.5) and Lemma 2.3.3 that t |wj (x, t)| ≤ C 0 ∂Ω t ≤C (t − τ )−3/4 |x − y|−(n−3/2) dS(y) dτ (t − τ )−3/4 dτ (2.4.7) 0 ≤ C t1/4 ≤ C T 1/4 = C. This showed the uniform boundedness. Next, we will prove the equicontinuity, which means for any > 0, there exists δ > 0 such that the following (1) and (2) are satisfied. (1) For any j ≥ 1, t ∈ [0, T ] and x1 , x2 in Ω with |x1 − x2 | < δ, we have |wj (x1 , t) − wj (x2 , t)| ≤ Cε. (2) For any j ≥ 1, x ∈ Ω, 0 ≤ t1 < t2 ≤ T with t2 − t1 < δ, we have |wj (x, t1 ) − wj (x, t2 )| ≤ Cε. By a change of variable in τ , we attain t wj (x, t) = 0 ∂Ω Φ(x − y, τ ) ϕj (y, t − τ ) dS(y) dτ, 52 ∀ (x, t) ∈ Ω × [0, T ]. (2.4.8) Sometimes it is easier to use (2.4.8) to do the estimate. To prove (1), we split the interval [0, t] into [0, ] and [ , t] and take advantage of the uniform boundedness of {ϕj }j≥1 , it follows from (2.4.8) that |x −y|2 − 14τ −n/2 τ e t |wj (x1 , t) − wj (x2 , t)| ≤C |x −y|2 − 24τ −e dS(y) dτ ∂Ω +C 0 dS(y) dτ |x −y|2 − 24τ −n/2 τ e dS(y) dτ ∂Ω +C 0 |x −y|2 − 14τ −n/2 τ e ∂Ω I + II + III. Analogous to the derivation of (2.4.7), we get II ≤ C 1/4 and III ≤ C 1/4 . To estimate I, we exploit the mean value theorem and find for any τ ∈ (0, T ], y ∈ ∂Ω, |x −y|2 − 14τ e |x −y|2 − 24τ −e ≤ C τ −1/2 |x1 − x2 |. As a result, t I≤C ∂Ω τ −(n+1)/2 |x1 − x2 | dS(y) dτ ≤ C −(n+1)/2 δ. As long as we take δ ≤ (n+3)/2 , then I ≤ Cε. Thus, we finish the proof of (1). In order to prove (2), we consider two cases. 53 • t1 ≤ 2ε. 
In this case, we choose δ ≤ ε, then again by (2.4.7), |wj (x, t2 ) − wj (x, t1 )| ≤ |wj (x, t2 )| + |wj (x, t1 )| 1/4 ≤ C t2 1/4 + C t1 ≤ C (2ε + δ)1/4 + C (2ε)1/4 ≤ C 1/4 . • t1 > 2ε. In this case, using (2.4.4), we get |wj (x, t2 ) − wj (x, t1 )| t1 − − 1 ≤ 0 ∂Ω t2 + t1 − t1 + t1 − e (t2 − τ )n/2 1 (t2 − τ )n/2 1 (t1 − τ )n/2 |x−y|2 4(t2 −τ ) − 1 (t1 − τ )n/2 − e − |x−y|2 4(t2 −τ ) |ϕ (y, τ )| dS(y) dτ j − |x−y|2 4(t1 −τ ) |ϕ (y, τ )| dS(y) dτ j e e |x−y|2 4(t1 −τ ) |ϕj (y, τ )| dS(y) dτ I + II + III. If we choose δ ≤ ε, then similar to (2.4.7) and using the mean value theorem, 1/4 II ≤ C t2 ≤ − (t1 − )1/4 C (t1 − )3/4 (t2 − t1 + ) ≤ C −3/4 (δ + ) ≤ C 1/4 . 54 (2.4.9) Similarly, 1/4 III ≤ C t1 − (t1 − )1/4 ≤ C 1/4 . (2.4.10) Employing the mean value theorem again, there exists a constant C such that for any τ ∈ [0, t1 ), x ∈ Ω, y ∈ ∂Ω, we have 1 (t2 − τ )n/2 − e |x−y|2 4(t2 −τ ) 1 − (t1 − τ )n/2 e − |x−y|2 4(t1 −τ ) ≤ C (t1 − τ )−n/2−1 |t2 − t1 | ≤ C (t1 − τ )−n/2−1 δ. If we choose δ ≤ εn/2+3 , then t1 − I≤C 0 ≤C ∂Ω −n/2−1 δ (t1 − τ )−n/2−1 dS(y) dτ (2.4.11) ≤C . Combining (2.4.11), (2.4.9), (2.4.10), we proved (2) and therefore the Lemma follows. Now similar to the arguments in ( [7], Theorem 13, Sec. 5, Chap. 7) and ( [27], Theorem 1.3), we conclude the following theorem on the existence of the solution. Theorem 2.4.5. For the nonlinear equation (2.4.1) with f, F, ψ described there. (1) There exists T0 > 0 such that for any 0 < T ≤ T0 , there exists a unique solution u ∈ AT to (2.4.1) on Ω × [0, T ]. (2) If F is bounded on Γ1 × R, then for any T > 0, there exists a unique solution u ∈ AT to (2.4.1) on Ω × [0, T ]. 55 Proof. Just as the heuristic idea before Lemma 2.4.4, in order to prove the existence of a solution, we will use Schauder fixed point theorem to prove ΨT has a fixed point in XT,R for suitable T and R. 
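The map ΨT can be previewed on a one-dimensional discretized analog: freeze the trace v, solve the linear heat problem whose boundary flux is F(v), and iterate the resulting map. The domain (0, 1), F(s) = s^2, ψ ≡ 0.1, and the grid below are illustrative choices, not data from the thesis.

```python
# 1-D toy version of Psi_T: given a "frozen" iterate v, solve the linear heat
# problem with boundary flux F(v) and return the new solution u.
q, dx, dt, steps = 2.0, 0.05, 0.001, 50      # dt/dx^2 = 0.4 <= 1/2: stable
N = int(round(1.0 / dx))

def Psi(v):
    """v, u: lists of time slices; u solves u_t = u_xx with flux F(v) at x = 1."""
    u = [[0.1] * (N + 1)]
    for m in range(steps):
        old, flux = u[-1], v[m][N] ** q      # boundary flux frozen from v
        new = [0.0] * (N + 1)
        for i in range(1, N):
            new[i] = old[i] + dt / dx**2 * (old[i+1] - 2*old[i] + old[i-1])
        new[0] = new[1]                      # u_x(0, t) = 0
        new[N] = new[N-1] + dx * flux        # u_x(1, t) = F(v(1, t))
        u.append(new)
    return u

def dist(a, b):                              # sup norm, as in X_T
    return max(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

v0 = [[0.1] * (N + 1) for _ in range(steps + 1)]
v1 = Psi(v0); v2 = Psi(v1); v3 = Psi(v2)     # Picard iterates of the map
```

For small T the Picard iterates already contract, which is the discrete shadow of the smallness condition on T0 in requirement (i); Schauder's theorem replaces this explicit contraction in the general setting, at the price of verifying precompactness instead.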
Thus, we need to verify the following three requirements: (i) ΨT maps XT,R to AT ∩ XT,R; (ii) ΨT : XT,R → XT,R is continuous; (iii) ΨT(XT,R) is precompact in XT,R.

In the following, we prove parts (1) and (2) of Theorem 2.4.5 together. The proofs of requirements (ii) and (iii) are identical for (1) and (2); only the proof of requirement (i) differs slightly.

First, given T > 0, let us recall how u := ΨT(v) is constructed for any v ∈ XT,R. We use the same notation as in the proof of Theorem 2.3.6 with β = 0 and g(x, t) = F(x, v(x, t)) 1Γ1(x), where

    1Γ1(x) = 1 if x ∈ Γ1,   1Γ1(x) = 0 if x ∉ Γ1,          (2.4.12)

is the indicator function. Thus u has the following expression: for any (x, t) ∈ Ω̄ × [0, T],

    u(x, t) = ∫_{Ω1} Φ(x − y, t) ψ(y) dy + ∫_0^t ∫_Ω Φ(x − y, t − τ) f(y, τ) dy dτ
              + ∫_0^t ∫_∂Ω Φ(x − y, t − τ) ϕ(y, τ) dS(y) dτ.          (2.4.13)

Here ϕ ∈ BT satisfies, for any (x, t) ∈ (Γ1 ∪ Γ2) × (0, T],

    ϕ(x, t) = ∫_0^t ∫_∂Ω K(x, t; y, τ) ϕ(y, τ) dS(y) dτ + H(x, t),

where

    K(x, t; y, τ) = −2 ∂Φ(x − y, t − τ)/∂n(x)          (2.4.14)

and

    H(x, t) = −2 ∫_{Ω1} (∂Φ(x − y, t)/∂n(x)) ψ(y) dy
              − 2 ∫_0^t ∫_Ω (∂Φ(x − y, t − τ)/∂n(x)) f(y, τ) dy dτ
              + 2 F(x, v(x, t)) 1Γ1(x).          (2.4.15)

Since this K also satisfies (2.3.4), we can follow the same derivations as for (2.3.26) and (2.3.25) to get

    ϕ(x, t) = ∫_0^t ∫_∂Ω K*(x, t; y, τ) H(y, τ) dS(y) dτ + H(x, t)          (2.4.16)

for some function K*. Moreover, there exists a constant C* = C*(n, Ω, T) such that

    |K*(x, t; y, τ)| ≤ C* (t − τ)^{−3/4} |x − y|^{−(n−3/2)}.          (2.4.17)

Next, we will first assume requirement (i) and prove requirements (ii) and (iii); then we will confirm requirement (i) for cases (1) and (2) of Theorem 2.4.5 respectively. Given T > 0, assume there exists R > 0 such that ΨT : XT,R → AT ∩ XT,R. Define MF, MFσ : [0, ∞) → R by

    MF(r) = sup_{Γ1×[−r,r]} |F(x, σ)|   and   MFσ(r) = sup_{Γ1×[−r,r]} |Fσ(x, σ)|.

Then for any fixed r ≥ 0, both MF(r) and MFσ(r) are finite since F ∈ C1(Γ1 × R).
In the following, for any v ∈ XT,R, we define u, ϕ and H as in (2.4.13), (2.4.14) and (2.4.15) respectively. For any vj ∈ XT,R (j ≥ 1), we define uj, ϕj and Hj analogously.

• Proof of Requirement (ii). Given {vj}j≥1 ⊂ XT,R with vj → v in XT,R, we want to show ΨT(vj) → ΨT(v) in XT,R. Since v and all the vj (j ≥ 1) belong to XT,R, for any (x, t) ∈ Ω̄ × [0, T] we have |v(x, t)| ≤ R and |vj(x, t)| ≤ R. Thus, by the mean value theorem and the fact that MFσ(R) < ∞, it follows from (2.4.15) that Hj ⇉ H on (Γ1 ∪ Γ2) × (0, T] (here "⇉" means uniform convergence). Then by (2.4.16) and (2.4.17), ϕj ⇉ ϕ on (Γ1 ∪ Γ2) × (0, T]. Finally, by the expression (2.4.13), uj ⇉ u on Ω̄ × [0, T], which is equivalent to saying ΨT(vj) → ΨT(v) in XT,R.

• Proof of Requirement (iii). In this proof, we use C to denote a constant which is independent of j, x and t, but may depend on n, Ω, Ω1, T, R, MF(R), sup |f|, sup |ψ| and sup |Dψ|; C may differ from line to line. Given any sequence {vj}j≥1 ⊂ XT,R, we want to show that {ΨT(vj)}j≥1 has a subsequence which converges to some function u in XT,R. Since vj ∈ XT,R for any j ≥ 1, we have |vj(x, t)| ≤ R for any j ≥ 1 and (x, t) ∈ Ω̄ × [0, T]. Now recalling (2.3.21), we know

    ∫_{Ω1} (∂Φ(x − y, t)/∂n(x)) ψ(y) dy

is bounded by some constant C. Consequently, by (2.4.15), there exists a constant C such that for any j ≥ 1 and (x, t) ∈ Ω̄ × [0, T], |Hj(x, t)| ≤ C. Then by (2.4.16) and (2.4.17), there exists a constant C such that for any j ≥ 1 and (x, t) ∈ Ω̄ × [0, T], |ϕj(x, t)| ≤ C. Now using (2.4.13) and Lemma 2.4.4, we find that {uj}j≥1 is uniformly bounded and equicontinuous on Ω̄ × [0, T]. Hence it follows from the Arzelà–Ascoli theorem that {uj}j≥1 has a subsequence {ujk}k≥1 which converges uniformly to some function u on Ω̄ × [0, T]. Since each ujk ∈ XT,R, it is readily seen that u is also in XT,R. Thus, ΨT(XT,R) is precompact in XT,R.

Now we turn to verify Requirement (i).
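Before doing so, note the compactness mechanism behind requirement (iii): uniformly bounded densities become a uniformly Lipschitz (hence equicontinuous) family after convolution with the heat kernel, as in Lemma 2.4.4. The toy one-dimensional computation below, with kernel time tk = 0.1 and random ±1 densities as illustrative choices, exhibits one shared Lipschitz constant for the whole family.

```python
import math, random

random.seed(0)
tk = 0.1                                          # kernel time; illustrative
n = 200
ys = [k / n for k in range(n + 1)]                # quadrature nodes on [0, 1]
wt = [1.0 / n] * (n + 1)
wt[0] = wt[-1] = 0.5 / n                          # trapezoid weights (sum = 1)

def Phi(z):                                       # 1-D heat kernel at time tk
    return math.exp(-z * z / (4 * tk)) / math.sqrt(4 * math.pi * tk)

def smooth(phi):                                  # w(x) = sum of weights * Phi(x-y) * phi(y)
    return lambda x: sum(w * Phi(x - y) * p for w, y, p in zip(wt, ys, phi))

# sup_z |Phi'(z)| = (4 pi tk)^{-1/2} tk^{-1/2} (2e)^{-1/2}; since the weights
# sum to 1 and |phi_j| <= 1, every smoothed w_j shares this Lipschitz constant.
L = (4 * math.pi * tk) ** -0.5 * tk ** -0.5 * (2 * math.e) ** -0.5

quot = 0.0
for _ in range(20):                               # 20 arbitrary densities, |phi| <= 1
    w = smooth([random.choice((-1.0, 1.0)) for _ in ys])
    quot = max(quot, max(abs(w(k / 100 + 0.01) - w(k / 100)) / 0.01
                         for k in range(100)))
```

Arzelà–Ascoli then extracts a uniformly convergent subsequence from such a family, which is exactly how the subsequence of {uj} was obtained above.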
• Proof of Requirement (i) for (1). We will find 0 < T0 ≤ 1 such that for any 0 < T ≤ T0 , there exists R > 0 such that ΨT maps XT,R to AT ∩XT,R . In this proof, C will denote a constant which is independent of x, t, R and T , but may depend on n, Ω, Ω1 , sup |f |, sup |ψ| and sup |Dψ|. C may be different from line to line. For the first term of (2.4.15), we recall (2.3.21) again to get for any (x, t) ∈ (Γ1 ∪ Γ2 ) × (0, T ], ∂Φ(x − y, t) ψ(y) dy ∂n(x) Ω1 − − Φ(x − y, t) ψ(y) → n (y) · → n (x) dy + ≤ ∂Ω1 Ω1 |x − y|−n dy + C ≤ C. ≤ C − Φ(x − y, t) Dψ(y) · → n (x) dy (2.4.18) ∂Ω1 59 For the second term of (2.4.15), we have for any (x, t) ∈ (Γ1 ∪ Γ2 ) × (0, T ], t 0 ∂Φ(x − y, t − τ ) f (y, τ ) dy dτ ∂n(x) Ω t (t − τ )−3/4 |x − y|−(n−1/2) dS(y) dτ ≤ C sup |f | 0 Ω 1/4 ≤ C t1/4 ≤ C T 1/4 ≤ C T0 ≤ C, (2.4.19) where the last inequality is due to the assumption T0 ≤ 1. Then it follows from (2.4.18), (2.4.19) and (2.4.15) that for any (x, t) ∈ (Γ1 ∪ Γ2 ) × (0, T ], |H(x, t)| ≤ C + C MF (R). (2.4.20) Although the constant C ∗ in (2.4.17) depends on T , it is readily to see that C ∗ is an increasing function in T . As a result, when T is bounded by 1, C ∗ will also be bounded by some constant C, which only depends on n and Ω. Based on this observation and (2.4.16), we get for any (x, t) ∈ (Γ1 ∪ Γ2 ) × (0, T ], |ϕ(x, t)| ≤ C ∗ [C + C MF (R)] t 0 (t − τ )−3/4 |x − y|−(n−3/2) dS(y) dτ ∂Ω + C + C MF (R) ≤ [C + C MF (R)] T 1/4 + C + C MF (R) ≤ C + C MF (R). (2.4.21) 60 Now by (2.4.13) and (2.4.21), we obtain for any (x, t) ∈ Ω × [0, T ], t (t − τ )−3/4 |x − y|−(n−3/2) dS(y) dτ |u(x, t)| ≤ sup |ψ| + t sup |f | + sup |ϕ| 0 ∂Ω ≤ C + C + C MF (R) C T 1/4 1/4 ≤ C + C MF (R) T0 1/4 C1 + C1 MF (R) T0 . (2.4.22) Hence, if we choose R and T0 ≤ 1 such that R = 2C1 1/4 and T0 then we have ||u|| ≤ R and therefore u MF (2C1 ) < 1, (2.4.23) ΨT (v) ∈ XT,R . • Proof of Requirement (i) for (2). We will prove for any T > 0, there exists R > 0 such that ΨT maps XT,R to AT ∩ XT,R . 
In this proof, F is a bounded function on Γ1 × R, so sup_{Γ1×R} |F| < ∞. In the following, we use C to denote a constant as in the proof for (1), but now allowing C to depend on T and sup_{Γ1×R} |F|. By the same derivations as (2.4.18), (2.4.19) and (2.4.20), we attain for any (x, t) ∈ (Γ1 ∪ Γ2) × (0, T],

    |H(x, t)| ≤ C + C MF(R) ≤ C + C sup_{Γ1×R} |F| = C.          (2.4.24)

Then based on (2.4.16) and (2.4.24), we get for any (x, t) ∈ (Γ1 ∪ Γ2) × (0, T],

    |ϕ(x, t)| ≤ C* C ∫_0^t ∫_∂Ω (t − τ)^{−3/4} |x − y|^{−(n−3/2)} dS(y) dτ + C ≤ C T^{1/4} + C = C.

Therefore, by (2.4.13) and (2.4.21) again, we obtain for any (x, t) ∈ Ω̄ × [0, T],

    |u(x, t)| ≤ sup |ψ| + t sup |f| + sup |ϕ| ∫_0^t ∫_∂Ω (t − τ)^{−3/4} |x − y|^{−(n−3/2)} dS(y) dτ
              ≤ C + C T^{1/4} := C2.

Thus, as long as we take R > C2, we have ||u|| ≤ R and consequently u := ΨT(v) ∈ XT,R.

2.4.4 Maximal solution and application to the target problem

As we can see from Theorem 2.4.5, the solution to (2.4.1) is only proved to be global when the function F is bounded on Γ1 × R. Thus, we need to study the maximal solution when F is unbounded.

Definition 2.4.6. We call

    T* := sup{T ≥ 0 : there exists a solution to (2.4.1) on Ω̄ × [0, T]}

the maximal existence time for (2.4.1). Moreover, a function u ∈ C^{2,1}(Ω × (0, T*)) ∩ C(Ω̄ × [0, T*)) is called a maximal solution if it solves (2.4.1) on Ω̄ × [0, T] for any T ∈ (0, T*).

Remark 2.4.7. It follows from Theorem 2.4.5 and Theorem 2.4.2 that the maximal solution exists and is unique. Just as in ([27], Corollary 1.1), when T* is finite, it coincides with the blow-up time of the L∞ norm of u. Thus, in order to estimate T*, one only needs to focus on the L∞ norm of the solution. We state this more precisely in the following theorem.

Theorem 2.4.8. Let T* be the maximal existence time for (2.4.1) and u be the maximal solution. If T* < ∞, then

    lim_{t↗T*} sup_{(x,τ)∈Ω̄×[0,t]} |u(x, τ)| = ∞.

Proof. We prove by contradiction. Assume

    lim_{t↗T*} sup_{(x,τ)∈Ω̄×[0,t]} |u(x, τ)| < ∞.
Then we denote K = sup_{Ω̄×[0,T*)} |u| < ∞. The strategy below is to find a solution that exists beyond T*, which contradicts the definition of T*. To do this, we first construct a bounded C1 function F̄ : Γ1 × R → R such that F̄(x, σ) = F(x, σ) for any x ∈ Γ1 and |σ| ≤ K + 1. Then by Theorem 2.4.5, for any T > 0 the problem

    ũ_t(x, t) − ∆ũ(x, t) = f(x, t)       in Ω × (0, T],
    ∂ũ(x,t)/∂n(x) = F̄(x, ũ(x, t))        on Γ1 × (0, T],          (2.4.25)
    ∂ũ(x,t)/∂n(x) = 0                    on Γ2 × (0, T],
    ũ(x, 0) = ψ(x)                       on Ω̄,

has a unique solution ũ. In particular, when T = T* + 1, there exists a solution ũ ∈ A_{T*+1} to (2.4.25). Since u also solves (2.4.25) for T < T*, by uniqueness, ũ = u on Ω̄ × [0, T*). Therefore

    sup_{Ω̄×[0,T*)} |ũ| = K.

Now by continuity, there exists ε > 0 such that

    sup_{Ω̄×[0,T*+ε]} |ũ| ≤ K + 1.

Recalling that F(x, σ) and F̄(x, σ) coincide when |σ| ≤ K + 1, ũ is a solution of (2.4.1) on Ω̄ × [0, T* + ε]. This contradicts the definition of T*.

As a particular application of the theory established in this section, one can apply Theorem 2.4.5, Theorem 2.4.2 and Theorem 2.4.3 with f = 0, F(x, σ) = σ^q and ψ = u0 to our target problem (1.1.1) to obtain Theorem 2.1.1.

Chapter 3

Upper Bound Estimate of the Blow-up Time

3.1 Main theorem and outline of the approach

The goal of this chapter is to show the finite-time blowup of the solution to (1.1.1) and to find an upper bound for the blow-up time. In this chapter, M(t) is defined as in (1.4.3). The next theorem is the main result of this chapter.

Theorem 3.1.1. Let T* be the maximal existence time for (1.1.1). Then T* < ∞ and

    lim_{t↗T*} M(t) = ∞,

where M(t) is defined as in (1.4.3). In addition, if min_{x∈Ω̄} u0(x) > 0, then

    T* ≤ (1/((q − 1)|Γ1|)) ∫_Ω u0^{1−q}(x) dx.          (3.1.1)

Inspired by [36], the idea of this proof is simple: define

    h(t) = ∫_Ω u^{1−q}(x, t) dx.

Then by formal computations (assuming u is smooth up to the boundary),

    h′(t) = (1 − q) ∫_Ω u^{−q} u_t dx
          = (1 − q) ∫_Ω u^{−q} ∆u dx
          ≤ (1 − q) ∫_Ω ∇ · (u^{−q} Du) dx
          = (1 − q) ∫_∂Ω u^{−q} (∂u/∂n) dS
          = (1 − q)|Γ1|.          (3.1.2)

(3.1.2) implies that h(t) decreases at a rate of at least (q − 1)|Γ1| per unit time. As a result, (3.1.1) is justified.

Although the solution u is smooth inside the domain, it is not C1 up to the boundary. So we need to consider the integral of u^{1−q} over interior subdomains of Ω and then take the limit as these subdomains approach Ω. In addition, the limiting process requires the uniform convergence of the normal derivative, so we also have to approximate u by a sequence of smoother functions whose normal derivatives are uniformly continuous up to the boundary after any positive time strictly less than the blow-up time.

The organization of this chapter is as follows. In Section 3.2, we explore some geometric properties of C2 domains which make the approximation from inside possible. The conclusions in this section should be standard, but for the sake of completeness, we include detailed proofs. Section 3.3 discusses how to approximate the solution by functions whose normal derivatives have the desired regularity. Finally, Section 3.4 carries out the rigorous proof of the main theorem.

3.2 Approximation of a C2 domain from inside

For any bounded domain Ω ⊂ Rn and any h > 0, we define

    Ωh = {x ∈ Ω : dist(x, ∂Ω) > h}          (3.2.1)

and

    Ωch = {x ∉ Ω̄ : dist(x, ∂Ω) > h}.          (3.2.2)

Lemma 3.2.1. Let Ω be a bounded, open subset of Rn with ∂Ω ∈ C2. Then for any x ∈ ∂Ω, there exist positive constants r and σ, which only depend on x and Ω, such that for any y ∈ B(x, r) ∩ ∂Ω, there is an interior ball touching y with radius σ. Namely,

    B̄σ(y − σ n(y)) ∩ Ω^c = {y}.          (3.2.3)

Analogously, there is also an exterior ball at y with radius σ. Namely,

    B̄σ(y + σ n(y)) ∩ Ω̄ = {y}.          (3.2.4)

Proof.
We will only prove (3.2.3), since the proof of (3.2.4) is almost identical. Fix any − ˜ −1), x ∈ ∂Ω, by translation and rotation, we can assume x to be the origin and → n (x) = (0, ˜ denotes the origin in Rn−1 . where 0 Since ∂Ω ∈ C 2 , there exist a C 2 function φ : Rn−1 → R and r ∈ (0, 1] (depending on φ and Ω) such that 67 (1) For any y ∈ B4r ∩ ∂Ω, → − n (y) · en < 0; yn = φ(˜ y ); |Dφ(˜ y )| ≤ 1/4. (3.2.5) (2) B4r ∩ Ω = B4r ∩ {y : yn > φ(˜ y )}. c (3) B4r ∩ Ω = B4r ∩ {y : yn < φ(˜ y )}. Now for any y ∈ Br ∩ ∂Ω, writing y = (˜ y , φ(˜ y )), then (Dφ(˜ y ), −1) → − , n (y) = < Dφ(˜ y) > 1 + |x|2 . We will show that there exists a positive where < · > is defined as < x >= constant σ, which only depends on φ and Ω such that (3.2.3) is satisfied. Let σ ∈ (0, r] be determined later. Define − y0 = y − σ→ n (y). Then y 0 = y˜ − σDφ(˜ y) < Dφ(˜ y) > and yn0 = φ(˜ y) + σ , < Dφ(˜ y) > where y 0 denotes the first n − 1 components of y 0 and yn0 denotes the last component of y 0 . For any z = (˜ z , zn ) ∈ B σ (y 0 ), we have |z − y 0 | ≤ σ and |z| ≤ |z − y 0 | + |y 0 | ≤ σ + (|y| + σ) ≤ 3r. So in order to show (3.2.3), it suffices to prove zn > φ(˜ z ) for any z ∈ B σ (y 0 ) \ {y}. Since 68 |z − y 0 | ≤ σ, then zn ≥ yn0 − σ 2 − |˜ z − y 0 |2 . (3.2.6) Next, the goal is to verify yn0 − σ 2 − |˜ z − y 0 |2 ≥ φ(˜ z ). (3.2.7) Namely, φ(˜ z ) − φ(˜ y) ≤ σ − < Dφ(˜ y) > σ 2 − |˜ z − y 0 |. By Taylor expansion, there exists some η between y˜ and z˜ such that 1 φ(˜ z ) − φ(˜ y ) = Dφ(˜ y ) · (˜ z − y˜) + (˜ z − y˜)T D2 φ(η)(˜ z − y˜). 2 Let λ = max|ξ|≤3r ||D2 φ(ξ)||2 , where || · ||2 denotes the matrix norm. Then 1 φ(˜ z ) − φ(˜ y ) ≤ Dφ(˜ y ) · (˜ z − y˜) + λ|˜ z − y˜|2 . 2 So it suffices to show 1 σ z − y˜|2 ≤ − Dφ(˜ y ) · (˜ z − y˜) + λ|˜ 2 < Dφ(˜ y) > σ 2 − |˜ z − y 0 |. Noticing σ 2 − |˜ z − y 0 | = σ 2 − |˜ z − y˜ + = σDφ(˜ y) 2 | < Dφ(˜ y) > σ2 σ 2−2 − |˜ z − y ˜ | Dφ(˜ y ) · (˜ z − y˜). 
< Dφ(˜ y) > < Dφ(˜ y ) >2 69 (3.2.8) Let A = Dφ(˜ y ) · (˜ z − y˜); B= σ ; < Dφ(˜ y) > E = |˜ z − y˜|2 . Then (3.2.8) boils down to 1 A + λE ≤ B − 2 B 2 − E − 2AB, that is 1 B 2 − E − 2AB ≤ B − A − λE. 2 (3.2.9) Since |˜ z − y˜| ≤ |˜ z − y 0 | + |y 0 − y˜| ≤ σ + σ = 2σ, then |A| ≤ |˜ z − y˜|/4 ≤ σ/2 and E ≤ 4σ 2 . On the other hand, |Dφ(˜ y )| ≤ 1/4 implies that B ≥ 2σ/3. As a result, as long as we take σ min{r, 1 }, 12λ B − A − 12 λE ≥ 0. Hence (3.2.9) reduces to 1 2 B 2 − E − 2AB ≤ B − A − λE . 2 70 Simplifying the above inequality, we obtain an equivalent form 1 λE(B − A) ≤ E + A2 + λ2 E 2 , 4 which is always true because of the fact that |λE(B − A)| ≤ λE(σ/2 + σ) ≤ E/8 ≤ E. (3.2.10) Thus, we proved (3.2.7). Then it follows from (3.2.6) that zn ≥ φ(˜ z ), which means z ∈ Ω. In addition, if zn = φ(˜ z ), then (3.2.10) should take “equal sign”. This implies E = 0, which means z˜ = y˜. Moreover, since both (3.2.6) and (3.2.7) should also take “equal sign”, we have zn = φ(˜ z ) = φ(˜ y ) = yn , which implies z = y. As a result, (3.2.3) is justified. Corollary 3.2.2. Let Ω be a bounded, open subset of Rn with ∂Ω ∈ C 2 . Then there exists a positive constant σ = σ(Ω) such that for any x ∈ ∂Ω, there exists an interior ball and also an exterior ball at x with radius σ. Proof. It follows from (3.2.1) and the compactness of ∂Ω. Corollary 3.2.3. Let Ω be a bounded, open subset of Rn with ∂Ω ∈ C 2 . Then there exists a positive constant σ = σ(Ω) such that − (1) for any h ∈ (0, σ), the map Ψh : ∂Ω → ∂Ωh defined by Ψh (x) = x − h→ n (x) is a C 1 diffeomorphism. − (2) for any h ∈ (0, σ), the map Ψh : ∂Ω → ∂Ωch defined by Ψh (x) = x + h→ n (x) is a C 1 diffeomorphism. Proof. Again, we will only prove for (1), since the proof for (2) is analogous. 71 Define σ1 as that in Corollary 3.2.2. Then for any h ∈ (0, σ1 ) and for any x ∈ ∂Ω, there − n (x)) ∩ Ωc = {x}. As a exists an interior ball touching x with radius h. Namely, B h (x − h→ result, dist(Ψh (x), ∂Ω) = h. 
So Ψh is well-defined. Moreover, this also implies that Ψh is an injection. To show Ψh is surjective, take any y ∈ Ω such that dist(y, ∂Ω) = h. Then there exists x ∈ ∂Ω such that |y − x| = h. As a result, y = x − h n(x) = Ψh(x).

Now Ψh has been proved to be a bijection. Next, since ∂Ω is C2, the unit normal n is C1 on ∂Ω. Hence, by the inverse function theorem, there exists σ2 > 0 such that for any h ∈ (0, σ2), Ψh is a C1 diffeomorphism. Finally, by taking σ = min{σ1, σ2}, (1) is justified.

The following corollary is a simple version of the tubular neighborhood theorem.

Corollary 3.2.4. Let Ω be a bounded, open subset of Rn with ∂Ω ∈ C2. Then there exists a positive constant σ = σ(Ω) such that the map Ψ : ∂Ω × (−σ, σ) → {x ∈ Rn : dist(x, ∂Ω) < σ} defined by Ψ(x, h) = x − h n(x) is a C1 diffeomorphism.

Proof. First, by Corollary 3.2.3, there exists σ > 0, depending only on Ω, such that Ψ : ∂Ω × (−σ, σ) → {x ∈ Rn : dist(x, ∂Ω) < σ} defined by Ψ(x, h) = x − h n(x) is a bijection. Then, again by the inverse function theorem, after choosing σ smaller if necessary, Ψ is a C1 diffeomorphism.

Lemma 3.2.5. Let Ω be a bounded, open subset of Rn with ∂Ω ∈ C2. Then there exists a positive constant σ = σ(Ω) such that for any h ∈ (0, σ) and any x ∈ ∂Ω,

    n_h(Ψh(x)) = n(x),          (3.2.11)

where Ψh is defined as in Corollary 3.2.3 and n_h denotes the exterior unit normal vector with respect to ∂Ωh.

Proof. By Corollary 3.2.3, there exists σ = σ(Ω) such that for any h ∈ (0, σ), Ψh is a C1 diffeomorphism from ∂Ω to ∂Ωh. Next, fix h ∈ (0, σ) and x ∈ ∂Ω; we will show n_h(Ψh(x)) = n(x). Just as in the proof of Theorem 2.2.6, by translating and rotating the coordinates, we may assume that x is the origin 0 and n(0) = −e_n = (0, 0, ..., 0, −1). As a result, Ψh(x) = h e_n, and it is then equivalent to prove that n_h(h e_n) = −e_n. Suppose the points near 0 are parametrized by y = (ỹ, y_n) = (ỹ, φ(ỹ)).
So when y is near 0, the exterior unit normal vector at y is

    n(y) = (Dφ(ỹ), −1) / <Dφ(ỹ)>.

Then the points near h e_n can be parametrized by

    z = Ψh(y) = (ỹ, φ(ỹ)) − h (Dφ(ỹ), −1) / <Dφ(ỹ)> := F(ỹ).

As a result, the tangent plane at h e_n with respect to ∂Ωh is spanned by {D_i F(0̃)}_{i=1}^{n−1}. Thus, in order to prove n_h(h e_n) = −e_n, it suffices to show that e_n is perpendicular to each D_i F(0̃), or equivalently that the nth component of D_i F(0̃) is 0. For each 1 ≤ i ≤ n − 1, by direct calculation and noticing D_i φ(0̃) = 0, we find the nth component of D_i F(0̃) is indeed 0.

3.3 Approximation of the solution by smoother functions

By Theorem 2.1.1, there exists a unique nonnegative maximal solution u ∈ C^{2,1}(Ω × (0, T*)) ∩ C(Ω̄ × [0, T*)) to (1.1.1) such that

    u_t(x, t) = ∆u(x, t)                 in Ω × (0, T*),
    ∂u(x,t)/∂n(x) = u^q(x, t)            on Γ1 × (0, T*),          (3.3.1)
    ∂u(x,t)/∂n(x) = 0                    on Γ2 × (0, T*),
    u(x, 0) = u0(x)                      on Ω̄,

where T* is the maximal existence time for (1.1.1). It is readily seen that the normal derivative of this solution is not continuous along the boundary, but many of the arguments below genuinely require that continuity. Thus, the purpose of this section is to construct a sequence of functions whose normal derivatives behave better near the boundary.

Lemma 3.3.1. Assume g is continuous on ∂Ω × [0, T] and ψ ∈ C1(Ω̄). Let v be the solution to

    v_t(x, t) = ∆v(x, t)                 in Ω × (0, T],
    ∂v(x,t)/∂n(x) = g(x, t)              on ∂Ω × (0, T],
    v(x, 0) = ψ(x)                       in Ω̄.

Then for any τ0 > 0, the following limit converges uniformly on ∂Ω × [τ0, T] (here xh = x − h n(x)):

    lim_{h→0+} ∂v(xh, t)/∂n(x) = ∂v(x, t)/∂n(x).

Proof. We mimic the proof of Theorem 2.3.6 with f = 0, β = 0, Γ1 = ∂Ω and Γ2 = ∅. First, we extend ψ to a C1 function on a larger domain Ω1 ⊃⊃ Ω.
Then from that proof, the solution v can be written as t Φ(x − y, t)ψ(y) dy + v(x, t) = Ω1 Φ(x − y, t − τ )ϕ(y, τ ) dS(y) dτ 0 (3.3.2) ∂Ω v1 (x, t) + v2 (x, t), (3.3.3) where ϕ is a continuous function on ∂Ω × [0, T ] that is related to g and ψ. Now fix any τ0 ∈ (0, T ), it is readily seen that the following limit converges uniformly on ∂Ω × [τ0 , T ]. ∂v (x, t) ∂v1 (xh , t) = 1 . ∂n(x) h→0+ ∂n(x) lim On the other hand, Theorem 2.2.1 implies the uniform convergence of ∂v2 (xh , t) ∂v (x, t) = 2 ∂n(x) h→0+ ∂n(x) lim on ∂Ω × [τ0 , T ]. Next, we take a sequence of boundary pieces {Γ1,j }j≥1 such that Γ1,j ⊂ Γ1 and Γ1,j Γ1 (see Figure 3.1). Then a sequence of C ∞ cut-off functions {ηj }∞ j=1 are chosen so that for Γ1,j Γ1 Figure 3.1: Γ1,j 75 each j ≥ 1, ηj (x)    = 1,      x ∈ Γ1,j , ∈ [0, 1],        = 0, x ∈ Γ1 \ Γ1,j , (3.3.4) x ∈ ∂Ω \ Γ1 . In addition, we require that ηj+1 (x) ≥ ηj (x), ∀ j ≥ 1, x ∈ ∂Ω. For any T ∈ (0, T ∗ ) and for each j ≥ 1, we consider    (vj )t (x, t) = ∆vj (x, t)      in Ω × (0, T ], ∂vj (x,t) ∂n(x) = ηj (x) uq (x, t) on ∂Ω × (0, T ],        v (x, 0) = u (x) in Ω, 0 j (3.3.5) ∂v (x,t) j = ηj (x) uq (x, t) is just the solution where the function u in the boundary condition ∂n(x) to (3.3.1). Since u is continuous and (3.3.5) is linear in vj , it follows from Theorem 2.3.6 that there exists a solution vj ∈ C 2,1 (Ω × 0, T ] ∩ C Ω × [0, T ] to (3.3.5). In addition, by (3.3.4) and Corollary 2.3.8, vj ≤ u on Ω × [0, T ]. Lemma 3.3.2. For any (x, t) ∈ Ω × (0, T ], lim vj (x, t) = u(x, t). j→∞ Proof. Define the indicator function ✶Γ1 as (2.4.12) and let wj = u − vj , then     (wj )t (x, t) = ∆wj (x, t)      ∂wj (x,t) = gj (x, t) ∂n(x)         wj (x, 0) = 0 where gj (x, t) = [✶Γ1 (x) − ηj (x)]uq (x, t). 
76 in Ω × (0, T ], on (Γ1 ∪ Γ2 ) × (0, T ], on Ω, (3.3.6) Then similar to the proof of Theorem 2.3.6, for any (x, t) ∈ Ω × (0, T ], t wj (x, t) = 0 ∂Ω Φ(x − y, t − τ ) ϕj (y, τ ) dS(y) dτ, (3.3.7) where ϕj ∈ BT satisfies for any x ∈ Γ1 ∪ Γ2 and t ∈ (0, T ], t ϕj (x, t) = 0 ∂Ω K(x, t; y, τ ) ϕj (y, τ ) dS(y) dτ + 2gj (x, t) (3.3.8) with K(x, t; y, τ ) = −2 ∂Φ(x − y, t − τ ) . ∂n(x) Since K satisfies (2.3.4), we can follow the same way as the derivations of (2.3.25) and (2.3.26) to obtain |K ∗ (x, t; y, τ )| ≤ C ∗ (t − τ )−3/4 |x − y|−(n−3/2) . (3.3.9) for some constant C ∗ = C ∗ (n, Ω, T ). Moreover, t ϕj (x, t) = 0 ∂Ω K ∗ (x, t; y, τ ) Hj (y, τ ) dS(y) dτ + 2gj (x, t). (3.3.10) Due to the fact that u is bounded on Ω × [0, T ] and the choice of {ηj }j≥1 , we know Hj is also bounded on Ω × [0, T ] and lim Hj (x, t) = 0, j→∞ ∀ x ∈ Γ1 ∪ Γ2 , t ∈ (0, T ]. Then it follows from (3.3.9), (3.3.10) and the Lebesgue’s dominated convergence theorem 77 that ∀ x ∈ Γ1 ∪ Γ2 , t ∈ (0, T ]. lim ϕj (x, t) = 0, j→∞ In addition, the boundedness of Hj implies the boundedness of ϕj , hence by (3.3.7) and the Lebesgue’s dominated convergence theorem, we obtain lim wj (x, t) = 0, j→∞ ∀ (x, t) ∈ Ω × (0, T ]. Lemma 3.3.3. For any j ≥ 1 and T ∈ (0, T ∗ ), let vj be the solution to (3.3.5) on Ω×[0, T ] . Then for any φ ∈ C Ω × [0, T ] and for any 0 < t1 < t2 ≤ T , lim t2 k→∞ t1 φ(x, t) ∂Ω1/k t2 ∂vj (x, t) ∂vj (x, t) φ(x, t) dS(x) dt = dS(x) dt, ∂nk (x) ∂n(x) ∂Ω t1 where − n→ k denotes the normal derivative with respect to ∂Ω1/k . Proof. By Corollary 3.2.3 and Lemma 3.2.5, for sufficiently large k, the function Ψk : ∂Ω → ∂Ω1/k defined by Ψk (ξ) = ξ − 1→ − n (ξ), k ∀ ξ ∈ ∂Ω, is a C 1 bijection such that − → − n→ k (Ψk (ξ)) = n (ξ), 78 ∀ ξ ∈ ∂Ω. As a result, by the change of variable x = Ψk (ξ) for x and denoting dS(x) = Fk (ξ)dS(ξ), ∂Ω1/k = ∂Ω = ∂Ω φ(x, t)Dvj (x, t) · − n→ k (x) dS(x) φ Ψk (ξ), t Dvj (Ψk (ξ), t) · − n→ k Ψk (ξ) Fk (ξ) dS(ξ) − φ Ψk (ξ), t Dvj (Ψk (ξ), t) · → n (ξ)Fk (ξ) dS(ξ). 
Integrating t from t1 to t2 , t2 t1 = ∂Ω1/k t2 t1 ∂Ω φ(x, t)Dvj (x, t) · − n→ k (x) dS(x) dt − φ Ψk (ξ), t Dvj (Ψk (ξ), t) · → n (ξ)Fk (ξ) dS(ξ). (3.3.11) It is readily seen that φ Ψk (ξ), t converges uniformly to φ(ξ, t) and Fk (ξ) converges to 1 uniformly on ∂Ω × [t1 , t2 ] as k → ∞. In addition, it follows from Lemma 3.3.3 that − − Dvj (Ψk (ξ), t) · → n (ξ) converges uniformly to Dvj (ξ, t) · → n (ξ). Thus the limit can be taken inside the integral to justify the conclusion. 3.4 Upper bound on life span: case of C 2 domain The goal of this section is to prove the unique solution u of (1.1.1) always blows up (i.e. L∞ norm of u goes to ∞) in finite time. In addition, we will derive an upper bound for the blow-up time in terms of |Γ1 |, the nonlinearity q and the initial data u0 . The usual way to prove the blowup of a solution is to introduce a suitable energy function and then derive a differential inequality to show the energy function blows up. This process usually involves integration by parts and therefore requires some continuity of the spatial 79 derivative Du near the boundary. However, u is not smooth, since the normal derivative of u is not continuous along Γ. Thus, some approximations are needed to get through this process. For any k, let Ω1/k be the same as in (3.2.1). For any x ∈ ∂Ω1/k , we use − n→ k (x) to denote − the exterior unit normal vector at x with respect to ∂Ω1/k . For any x ∈ ∂Ω, → n (x) represents the exterior unit normal vector at x with respect to ∂Ω. Proof of Theorem 3.1.1. Let u be the maximal solution to (1.1.1) as in Theorem 2.1.1. Fix any 0 < τ0 < T < T ∗ . For any j ≥ 1, let vj be the solution to (3.3.5). Then define 0 = min (x,t)∈Ω×[τ0 ,T ] v1 (x, t). It follows from Corollary 2.3.8 that 0 > 0. Noticing that {vj }j≥1 is an increasing sequence of functions that converges to u, so 0 ∀ j ≥ 1, ∀ (x, t) ∈ Ω × [τ0 , T ]. 
≤ vj (x, t) ≤ u(x, t) ≤ M (T ), (3.4.1) Mimicking the idea in [36], for any j ≥ 1 and k ≥ 1, define hj,k : (0, T ] → R and hj : (0, T ] → R by hj,k (t) = 1−q Ω1/k vj (x, t) dx and hj (t) = 1−q Ω vj 80 (x, t) dx. Then hj,k (t) = (1 − q) −q Ω1/k vj (vj )t dx −q = (1 − q) Ω1/k vj ∆vj dx −q = (1 − q) Ω1/k −q−1 ∇ · (vj Dvj ) + q vj |Dvj |2 dx −q ≤ (1 − q) Ω1/k ∇ · (vj Dvj ) dx −q = (1 − q) ∂Ω1/k vj ∂vj dS. ∂nk (3.4.2) Integrating (3.4.2) for t from τ0 to T , T hj,k (T ) − hj,k (τ0 ) ≤ (1 − q) τ0 −q ∂Ω1/k vj (x, t) ∂vj (x, t) dS(x) dt. ∂nk (x) Taking k → ∞, by Lebesgue’s dominated convergence theorem and Lemma 3.3.3, T hj (T ) − hj (τ0 ) ≤ (1 − q) −q τ0 T ∂Ω τ0 ∂Ω vj (x, t) −q = (1 − q) ∂vj (x, t) dS(x) dt ∂n(x) vj (x, t)ηj (x)uq (x, t) dS(x) dt. (3.4.3) When j → ∞, it follows from Lemma 3.3.2 that vj goes to u pointwise. Moreover, ηj converges almost everywhere to ✶Γ1 on ∂Ω. Thus by taking j → ∞ in (3.4.3) and noticing the bound (3.4.1), it follows from the Lebesgue’s dominated convergence theorem that u1−q (x, T ) dx − Ω Ω u1−q (x, τ0 ) dx ≤ (1 − q) 81 T τ0 |Γ1 | dt = (1 − q)(T − τ0 )|Γ1 |. So (q − 1)(T − τ0 )|Γ1 | ≤ Ω ≤ Ω u1−q (x, τ0 ) dx − u1−q (x, T ) dx Ω u1−q (x, τ0 ) dx. Namely 1 u1−q (x, τ0 ) dx. T ≤ τ0 + (q − 1)|Γ1 | Ω Noticing the right hand side of the above inequality is independent of T , so we can send T to T ∗ , then T ∗ ≤ τ0 + 1 u1−q (x, τ0 ) dx. (q − 1)|Γ1 | Ω (3.4.4) Hence, T ∗ is finite. Then it follows from Theorem 2.4.8 and the positivity of u that lim M (t) = ∞. t T∗ Now if min u0 (x) > 0, then u has a strictly positive lower bound on Ω × [0, T ∗ ). As a x∈Ω result, by taking τ0 → 0 in (3.4.4), (3.1.1) follows. 82 Chapter 4 Lower Bound Estimate of the Blow-up Time 4.1 Main theorems and outline of the approach The goal of this section is to obtain lower bounds for the blow-up time. Again in this chapter, M (t) is defined as in (1.4.3). The main results consist of three parts. Part I: C 2 domain Ω. Theorem 4.1.1. Assume (1.1.2). 
Let $T^*$ be the maximal existence time for (1.1.1). Then there exists a constant $C = C(n, \Omega)$ such that
$$T^* \ge \frac{C}{q-1}\,\ln\Big(1 + (3M_0)^{-4(q-1)}\,|\Gamma_1|^{-\frac{2}{n-1}}\Big), \tag{4.1.1}$$
where $M_0$ is given by (1.4.2).

From this theorem, we can study the asymptotic behaviour of $T^*$. As $q \searrow 1$, $T^*$ is at least of order $(q-1)^{-1}$; combining this with the upper bound in Theorem 3.1.1, the order of $T^*$ is exactly $(q-1)^{-1}$. As $M_0 \searrow 0$, $T^*$ is at least of order $\ln(M_0^{-1})$. As $M_0 \to \infty$, $T^*$ is at least of order $M_0^{-4(q-1)}$. As $|\Gamma_1| \searrow 0$, $T^*$ is at least of order $\ln|\Gamma_1|^{-1}$.

Part II: $C^2$ domain $\Omega$ with local convexity near $\Gamma_1$.

Definition 4.1.2. Let $\Omega$ be a bounded, open subset in $\mathbb{R}^n$. Then for any $\Gamma \subseteq \partial\Omega$ and $d > 0$, the boundary part of $\Omega$ near $\Gamma$ within distance $d$ is defined to be
$$[\Gamma]_d \triangleq \{x \in \partial\Omega : \operatorname{dist}(x, \Gamma) < d\}. \tag{4.1.2}$$
In the following, as a standard notation, for any set $A \subseteq \mathbb{R}^n$, $\operatorname{Conv}(A)$ denotes the convex hull of the set $A$.

Theorem 4.1.3. Assume (1.1.2). Let $T^*$ be the maximal existence time for (1.1.1) and $M_0$ be defined as in (1.4.2). Assume $\operatorname{Conv}([\Gamma_1]_d) \subseteq \overline{\Omega}$ for some $d > 0$. Then there exist constants $Y_0 = Y_0(n, \Omega, d)$ and $C = C(n, \Omega, d)$ such that the following statements hold.

• Case 1: $n \ge 3$. Denote $Y = M_0^{q-1}\,|\Gamma_1|^{1/(n-1)}$. If $Y \le Y_0$, then
$$T^* \ge \frac{C}{(q-1)\,Y\,|\ln Y|}. \tag{4.1.3}$$

• Case 2: $n = 2$. Denote $Y = M_0^{q-1}\,|\Gamma_1|\,\ln\Big(\frac{1}{|\Gamma_1|} + 1\Big)$. If $Y \le Y_0$, then
$$T^* \ge \frac{C}{(q-1)\,Y\,|\ln Y|}. \tag{4.1.4}$$

It is readily seen that the asymptotic behaviour of $T^*$ has been improved considerably. More precisely: as $|\Gamma_1| \searrow 0$, the order of $T^*$ is at least $|\Gamma_1|^{-\frac{1}{n-1}}\big/\ln|\Gamma_1|^{-1}$ for $n \ge 3$ and $|\Gamma_1|^{-1}\big/\big(\ln|\Gamma_1|^{-1}\big)^2$ for $n = 2$; as $M_0 \searrow 0$, the order of $T^*$ is at least $M_0^{-(q-1)}\big/\ln M_0^{-1}$.

Recalling the upper bound in Theorem 3.1.1, if $u_0$ does not oscillate too much, which means $\int_\Omega u_0^{1-q}\,dx$ is comparable to $M_0^{1-q}$, then as $M_0 \searrow 0$, the order of $T^*$ is at most $M_0^{-(q-1)}$. So the lower bound is almost optimal in $M_0$. In addition, the order of $T^*$ is at most $|\Gamma_1|^{-1}$, so when $n = 2$, the lower bound is also almost sharp.

Part III: Convex $C^2$ domain $\Omega$.
Theorem 4.1.4. Assume (1.1.2). Let $\Omega$ be convex. Then for any $\alpha \in \big[0, \frac{1}{n-1}\big)$, there exists $C = C(n, \Omega, \alpha)$ such that
$$T^* \ge \frac{C}{(q-1)\,M_0^{q-1}\,|\Gamma_1|^{\alpha}}\,\min\bigg\{1,\ \Big(\frac{1}{q\,M_0^{q-1}\,|\Gamma_1|^{\alpha}}\Big)^{\frac{1+(n-1)\alpha}{1-(n-1)\alpha}}\bigg\}, \tag{4.1.5}$$
where $T^*$ is the maximal existence time for (1.1.1) and $M_0$ is given by (1.4.2). In particular, if $\alpha$ is chosen to be $0$ in (4.1.5), then
$$T^* \ge \frac{C_1}{(q-1)\,M_0^{q-1}}\,\min\Big\{1,\ \frac{1}{q\,M_0^{q-1}}\Big\} \tag{4.1.6}$$
for some $C_1 = C_1(n, \Omega)$.

Remark 4.1.5. As $|\Gamma_1| \searrow 0$, the lower bound (4.1.5) is not the best that we can get. By Remark 4.5.11, the order of $T^*$ is at least $|\Gamma_1|^{-\frac{1}{n-1}}$ for $n \ge 3$ and $|\Gamma_1|^{-1}\big/\ln|\Gamma_1|^{-1}$ for $n = 2$. Notice, however, that when $|\Gamma_1|$ is not sufficiently small, Theorem 4.1.3 and Remark 4.5.11 are not applicable. So the advantage of (4.1.5) is its validity for any $\Gamma_1$. This advantage is important when comparing with the previous results for $\Gamma_1 = \partial\Omega$.

In addition, Theorem 4.1.4 yields better asymptotic behaviour of $T^*$ as $M_0 \searrow 0$ than Theorem 4.1.3 and Remark 4.5.11 do. As $M_0 \searrow 0$, (4.1.6) implies that $T^*$ is at least of order $M_0^{-(q-1)}$. So by the same discussion after Theorem 4.1.3, if the initial data $u_0$ does not oscillate too much, then the order of $T^*$ is exactly $M_0^{-(q-1)}$. As $M_0 \to \infty$, (4.1.6) implies that $T^*$ is at least of order $M_0^{-2(q-1)}$.

To obtain a lower bound for $T^*$, we first exploit the common Gronwall technique in Subsection 4.3.1. This estimate turns out to be too rough, so in order to get better results, a more careful analysis is needed. The idea is to chop the range of $M(t)$ into suitable pieces and find a lower bound for the time spent in each piece by analysing the representation formula. Adding all these lower bounds together then yields a lower bound for $T^*$. In the proof of Theorem 4.1.1, the pieces are simply chosen to be $[3^{k-1}M_0,\, 3^k M_0]$ for $k \ge 1$. In the proofs of Theorem 4.1.3 and Theorem 4.1.4, thanks to the convexity assumptions and more delicate divisions of the range, the results can be improved significantly.
More precisely, we will chop the range to be [Mk−1 , Mk ] for k ≥ 1, where the sequence {Mk }k≥0 satisfies a nonlinear iterative relation. The organization of this chapter is as follows. In Section 4.2, we first prove that the classical solution u is also the weak solution which implies the representation formula of u. This representation formula is the fundamental tool in the later sections. In Section 4.3, two methods are presented to obtain the lower bound of T ∗ for any C 2 domains Ω. The first method is simpler and uses the Gronwall’s inequality. The second method performs more 86 delicate estimate and yield Theorem 4.1.1. In Section 4.4, the lower bound is derived for any convex domains Ω and Theorem 4.1.4 is verified. We also explain the main idea in the proof which is also used in the next section. In Section 4.5, we deal with C 2 domains Ω with local convexity near Γ1 and justify Theorem 4.1.3. We also mention a similar result for the convex domains in Remark 4.5.11. Finally, Section 4.6 compares the results in this chapter with the historical works. 4.2 Weak solution and representation formula By Theorem 2.1.1, there exists a unique nonnegative maximal solution u ∈ C 2,1 Ω × (0, T ∗ ) ∩ C Ω × [0, T ∗ ) to (1.1.1), where T ∗ is the maximal existence time to (1.1.1). In this section, we will first verify that the solution u to (1.1.1) is also a weak solution (See Definition 4.2.1) and then derive representation formulas (4.2.2) and (4.2.3) for u. 4.2.1 Weak solution Definition 4.2.1. Let T ∗ be the maximal existence time for (1.1.1). Then a function u ∈ C Ω × [0, T ∗ ) is called a weak solution of (1.1.1) if for any t ∈ (0, T ∗ ) and for any φ ∈ C 2 Ω × [0, t] , t u(y, τ )(φτ + ∆φ)(y, τ ) dy dτ = 0 Ω t uq (y, τ )φ(y, τ ) dS(y) dτ − 0 Γ1 Ω t + 0 u(y, t)φ(y, t) − u0 (y)φ(y, 0) dy ∂φ(y, τ ) dS(y) dτ. u(y, τ ) ∂n(y) ∂Ω (4.2.1) In order to prove u satisfies (4.2.1), we will again take advantage of vj which is the solution to (3.3.5). 87 Theorem 4.2.2. 
The maximal solution u to (1.1.1) is also a weak solution as in Definition 4.2.1. Proof. Fix any t ∈ (0, T ∗ ). Let {vj }j≥1 be the solution to (3.3.5). For any k ≥ 1, define Ω1/k as in (3.2.1). Then for any 0 < τ0 < t, k ≥ 1, j ≥ 1 and φ ∈ C 2 Ω × [0, t] , we have t t τ0 Ω1/k (vj )t (y, τ ) φ(y, τ ) dy dτ = τ0 Ω1/k ∆vj (y, τ ) φ(y, τ ) dy dτ. Integrating by parts and arranging the terms, t τ0 Ω1/k vj (y, τ ) (φτ + ∆φ)(y, τ ) dy dτ t = Ω1/k vj (y, t)φ(y, t) − vj (y, τ0 )φ(y, τ0 ) dy − t + τ0 ∂Ω1/k vj (y, τ ) τ0 ∂Ω1/k ∂vj (y, τ ) φ(y, τ ) dS(y) dτ ∂nk (y) ∂φ(y, τ ) dS(y) dτ, ∂nk (y) where − n→ k denotes the exterior unit normal vector with respect to ∂Ω1/k . Sending k → ∞, by Lebesgue’s dominated convergence theorem and Lemma 3.3.3, we 88 obtain t τ0 Ω vj (y, τ ) (φτ + ∆φ)(y, τ ) dy dτ t = Ω vj (y, t)φ(y, t) − vj (y, τ0 )φ(y, τ0 ) dy − t + τ0 ∂Ω vj (y, τ ) ∂vj (y, τ ) φ(y, τ ) dS(y) dτ τ0 ∂Ω ∂n(y) ∂φ(y, τ ) dS(y) dτ ∂n(y) t = Ω vj (y, t)φ(y, t) − vj (y, τ0 )φ(y, τ0 ) dy − t + τ0 ∂Ω vj (y, τ ) τ0 ∂Ω ηj (y)uq (y, t) φ(y, τ ) dS(y) dτ ∂φ(y, τ ) dS(y) dτ. ∂n(y) Taking j → ∞, then it follows from Lemma 3.3.2 and the Lebesgue’s dominated convergence theorem that t u(y, τ ) (φτ + ∆φ)(y, τ ) dy dτ τ0 Ω t = Ω u(y, t)φ(y, t) − u(y, τ0 )φ(y, τ0 ) dy − t + u(y, τ ) τ0 ∂Ω uq (y, τ )φ(y, τ ) dS(y) dτ τ0 Γ1 ∂φ(y, τ ) dS(y) dτ. ∂n(y) Finally by sending τ0 → 0, (4.2.1) holds. 4.2.2 Representation formula Next by (4.2.1) and some standard steps, we are able to attain the representation formula of u for inside points. Note that this representation formula is different from (2.3.15) which is used in the proof of the existence of the solution. Lemma 4.2.3. For the maximal solution u to (1.1.1), it has the representation formula for 89 the inside points (x, t) ∈ Ω × [0, T ∗ ), t u(x, t) = Ω Φ(x − y, t) u0 (y) dy + t − 0 Φ(x − y, t − τ ) uq (y, τ ) dS(y) dτ 0 Γ1 (4.2.2) ∂Φ(x − y, t − τ ) u(y, τ ) dS(y) dτ. ∂n(y) ∂Ω Proof. Fix x ∈ Ω and t ∈ (0, T ∗ ). 
Define φ : Ω × [0, t) → R by φ(y, τ ) = Φ(x − y, t − τ ) = 1 1 (4π)n/2 (t − τ )n/2 e − |x−y|2 4(t−τ ) . > 0, define φ : Ω × [0, t] → R by For any φ (y, τ ) = Φ(x − y, t + − τ ) = 1 1 (4π)n/2 (t + − τ )n/2 e − |x−y|2 4(t+ −τ ) . From these, one can see that φ is smooth in its domain and (φ )τ (y, τ ) + ∆y (φ )(y, τ ) = 0, ∀ (y, τ ) ∈ Ω × [0, t]. Applying (4.2.1) for φ = φ , t φ (y, t)u(y, t) dy = Ω Ω φ (y, 0)u0 (y) dy + t − 0 0 φ (y, τ ) uq (y, τ ) dS(y) dτ Γ1 ∂φ (y, τ ) u(y, τ ) dS(y) dτ. ∂Ω ∂n(y) 90 Sending → 0, it follows from the Lebesgue’s dominated convergence theorem that t u(x, t) = Ω φ(y, 0) u0 (y) dy + t − 0 φ(y, τ ) uq (y, τ ) dS(y) dτ 0 Γ1 ∂φ(y, τ ) u(y, τ ) dS(y) dτ ∂Ω ∂n(y) t = Ω Φ(x − y, t) u0 (y) dy + t − 0 0 Φ(x − y, t − τ ) uq (y, τ ) dS(y) dτ Γ1 ∂Φ(x − y, t − τ ) u(y, τ ) dS(y) dτ. ∂n(y) ∂Ω The last equality is because φ(y, τ ) = Φ(x − y, t − τ ). Now we have proved (4.2.2) for (x, t) ∈ Ω × (0, T ∗ ), next it is obvious to see that (4.2.2) holds for any point (x, t) ∈ Ω × {0}, thus the Theorem follows. Lemma 4.2.3 only gives the formula for the inside points, but we still need the formula for the boundary points (x, t) ∈ ∂Ω × [0, T ∗ ) . In order to get that, we combine Lemma 4.2.3 and Theorem 2.2.2. Corollary 4.2.4. For the maximal solution u to (1.1.1), it has the representation formula for the boundary points (x, t) ∈ ∂Ω × [0, T ∗ ), t u(x, t) =2 Ω Φ(x − y, t) u0 (y) dy + 2 t −2 0 0 Φ(x − y, t − τ ) uq (y, τ ) dS(y) dτ Γ1 (4.2.3) ∂Φ(x − y, t − τ ) u(y, τ ) dS(y) dτ. ∂n(y) ∂Ω − Proof. For any h > 0, we write xh = x − h→ n (x) for x ∈ ∂Ω. As shown in the proof of Theorem 2.2.2, when h is sufficiently small, xh ∈ Ω for any x ∈ ∂Ω. Consequently we can 91 apply Lemma 4.2.3 to conclude that t u(xh , t) = Ω Φ(xh − y, t) u0 (y) dy + t − 0 0 Γ1 Φ(xh − y, t − τ ) uq (y, τ ) dS(y) dτ ∂Φ(xh − y, t − τ ) u(y, τ ) dS(y) dτ. 
∂n(y) ∂Ω Taking h → 0+ , then it follows from Theorem 2.2.2 that t u(x, t) = Ω Φ(x − y, t) u0 (y) dy + t − 0 0 Φ(x − y, t − τ ) uq (y, τ ) dS(y) dτ Γ1 ∂Φ(x − y, t − τ ) 1 u(y, τ ) dS(y) dτ + u(x, t), ∂n(y) 2 ∂Ω which implies (4.2.3). Now we have proved (4.2.3) for (x, t) ∈ ∂Ω × (0, T ∗ ), next it is obvious to see that (4.2.3) holds for any point (x, t) ∈ ∂Ω × {0}, hence the Theorem follows. 4.2.3 Time-shifted representation formula In Corollary 4.2.4, it derived the representation formula (4.2.3), where the initial time is 0 and the initial data is u(·, 0) = u0 (·) ∈ C 1 (Ω). Now for any T ∈ (0, T ∗ ), we are asking that if regarding T to be the initial time and u(·, T ) to be the initial data, then is there a representation formula similar to (4.2.3)? It seems trivial, but we should be careful, since u0 is in C 1 (Ω) but u(·, T ) does not. The next lemma claims that as long as u is the solution to (1.1.1) with the assumption (1.1.2), then for any T ∈ [0, T ∗ ), there also holds a representation formula which is similar to (4.2.3) but with “initial data u(·, T )”. Lemma 4.2.5. Assume (1.1.2). Let T ∗ be the maximal existence time and u be the maximal 92 solution to (1.1.1). Then for any x ∈ ∂Ω, T ∈ [0, T ∗ ) and t ∈ [0, T ∗ − T ), t Φ(x − y, t − τ ) uq (y, T + τ ) dS(y) dτ Φ(x − y, t) u(y, T ) dy + 2 u(x, T + t) = 2 Ω 0 t −2 0 Γ1 ∂Φ(x − y, t − τ ) u(y, T + τ ) dS(y) dτ ∂n(y) ∂Ω (4.2.4) Proof. When T = 0, (4.2.4) is just the representation formula (4.2.3) which has been proven. Next let T > 0. We intend to verify (4.2.4) which can be regarded as a representation formula with initial time T and initial data u(·, T ). Define v : Ω × [0, T ∗ − T ) → R by v(x, t) = u(x, T + t). Then v ∈ C 2,1 Ω × (0, T ∗ − T) C Ω × [0, T ∗ − T ) and satisfies              vt (x, t) = ∆v(x, t) ∂v(x,t) ∂n(x) in Ω × (0, T ∗ − T ), = uq (x, T + t) on Γ1 × (0, T ∗ − T ), (4.2.5)   ∂v(x,t)  =0   ∂n(x)        v(x, 0) = u(x, T ) on Γ2 × (0, T ∗ − T ), in Ω. 
Note that (4.2.5) is a linear problem in v, since u is a fixed function. We continuously extend u(·, T ) to Rn and still denote it to be u(·, T ). Then for any j ≥ 1, we choose the standard mollifier {η } >0 and define gj (x) = η j (·) ∗ u(·, T ) (x), 93 where j is chosen to be so small that max |gj (x) − u(x, T )| ≤ 1/j. (4.2.6) x∈Ω Since gj ∈ C 1 (Ω), it follows from Theorem 2.3.10 that there exists vj ∈ C 2,1 Ω × (0, T ∗ − T) C Ω × [0, T ∗ − T ) such that     (vj )t (x, t) = ∆vj (x, t) in Ω × (0, T ∗ − T ),       ∂vj (x,t)    ∂n(x) = uq (x, T + t) on Γ1 × (0, T ∗ − T ),  ∂vj (x,t)   =0   ∂n(x)        vj (x, 0) = gj (x) (4.2.7) on Γ2 × (0, T ∗ − T ), in Ω. In addition, by an analogous argument as that for Subsections 4.2.1 and 4.2.2, there exists a representation formula for (4.2.7): for any (x, t) ∈ ∂Ω × [0, T ∗ − T ), t vj (x, t) = 2 Ω Φ(x − y, t) gj (y) dy + 2 t −2 0 0 Φ(x − y, t − τ ) uq (y, T + τ ) dS(y) dτ Γ1 ∂Φ(x − y, t − τ ) vj (y, τ ) dS(y) dτ. ∂n(y) ∂Ω Let wj = vj − v, then wj ∈ C 2,1 Ω × (0, T ∗ − T )     (wj )t (x, t) = ∆wj (x, t)       ∂w    ∂nj (x, t) = 0 C Ω × [0, T ∗ − T ) and satisfies in Ω × (0, T ∗ − T ), on Γ1 × (0, T ∗ − T ),  ∂wj   on Γ2 × (0, T ∗ − T ),   ∂n (x, t) = 0        wj (x, 0) = gj (x) − u(x, T ) in Ω. 94 (4.2.8) So it follows from the maximum principle and the Hopf lemma that for any (x, t) ∈ Ω × [0, T ∗ − T ), |wj (x, t)| ≤ max |gj (x) − u(x, T )| ≤ 1/j. x∈Ω Thus ∀ (x, t) ∈ Ω × [0, T ∗ − T ). |vj (x, t) − v(x, t)| ≤ 1/j, Now fixing any point (x, t) ∈ ∂Ω × [0, T ∗ − T ) and let j → ∞ in (4.2.8), then it follows from Lebesgue’s dominated convergence theorem that t Φ(x − y, t − τ ) uq (y, T + τ ) dS(y) dτ Φ(x − y, t) u(y, T ) dy + 2 v(x, t) = 2 0 Ω t −2 0 Γ1 ∂Φ(x − y, t − τ ) v(y, τ ) dS(y) dτ. ∂n(y) ∂Ω Finally noticing that v(x, t) = u(x, T + t) and v(y, τ ) = u(y, T + τ ), we obtain (4.2.4). 
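Before turning to the blow-up estimates, it may help to see where the factor $2$ in the boundary formulas (4.2.3) and (4.2.4) comes from: by the jump relation of Theorem 2.2.2, at a boundary point only half of the Gaussian mass lies inside the domain. The following sketch is a purely illustrative one-dimensional model (half-line $\Omega = (0, \infty)$, boundary point $x = 0$), not part of the proof: it checks numerically that the heat kernel carries exactly mass $1/2$ over the half-line.

```python
import math

def heat_kernel_1d(x, t):
    """One-dimensional Gauss-Weierstrass kernel Phi(x, t)."""
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def half_line_mass(t, n=100_000, upper=50.0):
    """Midpoint-rule approximation of int_0^infinity Phi(y, t) dy."""
    h = upper / n
    return sum(heat_kernel_1d((i + 0.5) * h, t) * h for i in range(n))
```

For any fixed $t$, `half_line_mass(t)` is numerically indistinguishable from $1/2$, consistent with the constant $1/2$ appearing in the identity (4.4.7) later in this chapter.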
4.3 Lower bound on life span: case of C 2 domain 4.3.1 A traditional way by Gronwall-type technique Lemma 4.3.1. Let Ω be a bounded open subset in Rn with ∂Ω ∈ C 2 . Then there exists a constant C = C(Ω, n) such that for any x ∈ ∂Ω and t > 0, 1 t(n−1)/2 ∂Ω − e |x−y|2 4t dS(y) ≤ C. Proof. It is easy to prove this conclusion by simply using the definition of a C 2 boundary and the dimension of ∂Ω is n − 1, we omit the proof here. 95 Theorem 4.3.2. Assume (1.1.2). Let T ∗ be the maximal existence time for (1.1.1). Then there exists a constant C = C(n, q, Ω) such that T∗ ≥C 2 − n+2 ln |Γ1 |−1 − (n + 2)(q − 1) ln M0 − ln(q − 1) − ln C 2 n+2 , (4.3.1) where M0 is defined as in (1.4.2) and C remains bounded as q → 1. As a result, no matter |Γ1 | 0, M0 1, we have T ∗ → ∞. 0 or q Proof. In the following, C will be used to denote a positive constant which only depends on n, Ω, q and is bounded when q 1. Moreover, C may be different from line to line. We prove by analyzing the representation formula (4.2.3) for u on the boundary points (x, t) ∈ ∂Ω × [0, T ∗ ): t u(x, t) = 2 Ω Φ(x − y, t) u0 (y) dy + 2 t −2 0 0 Φ(x − y, t − τ ) uq (y, τ ) dS(y) dτ Γ1 ∂Φ(x − y, t − τ ) u(y, τ ) dS(y) dτ ∂n(y) ∂Ω = I + II + III. (4.3.2) Define M , M : [0, T ∗ ) → R by M (t) = max u(y, t) y∈∂Ω and M (t) = max M (τ ). τ ∈[0,t] 96 (4.3.3) It is clear that M is increasing and also blows up at T ∗ . It is also easy to see that I ≤ 2M0 , t II ≤ C |x−y|2 − (t − τ )−n/2 e 4(t−τ ) M q (τ ) 0 (4.3.4) dS(y) dτ Γ1 t =C |x−y|2 − 4τ −n/2 τ e M q (t − τ ) 0 (4.3.5) dS(y) dτ Γ1 and |x − y|2 t III ≤ C M (τ ) 0 ∂Ω (t − τ )n/2+1 e − |x−y|2 4(t−τ ) dS(y) dτ 2 |x − y|2 − |x−y| 4τ dS(y) dτ e ∂Ω τ n/2+1 0 2 t 1 − |x−y| 8τ dS(y) dτ M (t − τ ) ≤C e ∂Ω τ n/2 0 t M (t − τ ) =C t ≤C (4.3.6) M (t − τ ) τ −1/2 dτ. 0 In (4.3.6), the first inequality is due to Lemma 2.3.2, the last inequality is due to Lemma 4.3.1 and the second inequality is because 2 |x − y|2 − |x−y| 8τ e ≤ sup r e−r/8 ≤ C. 
τ r≥0 1 , it follows Now we are trying to estimate (4.3.5) and (4.3.6) further. Taking m = 1 + n+1 97 from Holder’s inequality that 2 τ −n/2 − |x−y| 4τ e dS(y) ≤ τ 1/m 2 −n/2 e Γ1 − m|x−y| 4τ |Γ1 |(m−1)/m dS(y) Γ1 =τ 1/m 2 − n2 + n−1 2m τ − m|x−y| − n−1 2 4τ e |Γ1 |(m−1)/m dS(y) Γ1 n n−1 ≤ C τ − 2 + 2m |Γ1 | 2n+1 m−1 m 1 = C τ − 2n+4 |Γ1 | n+2 , where the second inequality is because of Lemma 4.3.1. Thus (4.3.5) leads to t 1 II ≤ C |Γ1 | n+2 2n+1 − M q (t − τ ) τ 2n+4 dτ. (4.3.7) 0 By Holder’s inequality again, t − 2n+1 M q (t − τ ) τ 2n+4 t dτ ≤ qm M m−1 (t − τ ) dτ m−1 m t 0 0 M q(n+2) (τ ) dτ 1 n+2 0 = dτ 1 m 0 t = (2n+1)m − 2n+4 τ t τ 2n+1 − 2n+2 dτ n+1 n+2 0 t 1 C t 2n+4 M q(n+2) (τ ) dτ 1 n+2 . 0 Based on this, (4.3.7) becomes II ≤ 1 1 C |Γ1 | n+2 t 2n+4 t 0 98 M q(n+2) (τ ) dτ 1 n+2 . (4.3.8) To estimate III, it follows from Holder’s inequality that t III ≤ C 1 n+2 M n+2 (τ ) dτ t τ 0 t n = C t 2n+4 0 1 n+2 M n+2 (τ ) dτ − 12 n+2 n+1 n+1 n+2 dτ . (4.3.9) 0 Combining (4.3.2), (4.3.4), (4.3.8), (4.3.9), we obtain u(x, t) ≤C t 1 1 M0 + |Γ1 | n+2 t 2n+4 t n + t 2n+4 1 n+2 M q(n+2) (τ ) dτ 0 1 n+2 M n+2 (τ ) dτ . 0 Since x is arbitrary on ∂Ω, by raising both sides to the power n + 2, t M n+2 (t) ≤ C M0n+2 + |Γ1 | t1/2 t M q(n+2) (τ ) dτ + tn/2 0 M n+2 (τ ) dτ . 0 Thus, due to the definition of M , M n+2 (t) ≤ C M0n+2 + |Γ1 | t1/2 ≤ C 1 + tn/2 t M q(n+2) t (τ ) dτ + tn/2 M 0 n+2 (τ ) dτ 0 t M0n+2 + |Γ1 | M q(n+2) t (τ ) dτ + 0 M n+2 (τ ) dτ . 0 As a consequence, M n+2 (t) ≤ C 1 + tn/2 M0n+2 t + |Γ1 | M 0 q(n+2) t (τ ) dτ + M 0 99 n+2 (τ ) dτ . (4.3.10) We define t E(t) = M0n+2 + |Γ1 | M q(n+2) t (τ ) dτ + M n+2 (τ ) dτ, (4.3.11) 0 0 then E(0) = M0n+2 and E(t) is increasing. Now (4.3.10) becomes M n+2 (t) ≤ C 1 + tn/2 E(t) and consequently E (t) = |Γ1 | M q(n+2) (t) + M (n+2) (t) q ≤ C |Γ1 | 1 + tn/2 E q (t) + C 1 + tn/2 E(t). (4.3.12) Moreover, E(t) also blows up at T ∗ , since M is increasing. 
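As a hedged numerical aside, the structure of the inequality just obtained can be tested on the autonomous model ODE $E' = aE + bE^q$, where the constants $a$ and $b$ are illustrative stand-ins for the time-dependent coefficients $C(1+t^{n/2})$ and $C|\Gamma_1|(1+t^{n/2})^q$ in (4.3.12). The substitution $\Psi = E^{1-q}$, the same one used next, linearizes the model equation and gives the explicit blow-up time $T = \frac{1}{(q-1)a}\ln\big(1 + \frac{a}{b\,E(0)^{q-1}}\big)$, which the sketch below confirms against direct numerical integration.

```python
import math

def blowup_time_closed_form(a, b, E0, q):
    """Blow-up time of the model ODE E' = a*E + b*E**q via Psi = E**(1-q)."""
    return math.log(1.0 + a / (b * E0 ** (q - 1))) / ((q - 1) * a)

def blowup_time_numeric(a, b, E0, q, dt=1e-6, cap=1e12):
    """Forward Euler integration until E exceeds cap; returns the elapsed time."""
    E, t = E0, 0.0
    while E < cap:
        E += dt * (a * E + b * E ** q)
        t += dt
    return t
```

Note the logarithmic dependence of the blow-up time on $b$ (which plays the role of $|\Gamma_1|$): this is exactly why the Gronwall route can only yield a bound of order $\ln|\Gamma_1|^{-1}$, a point revisited in Remark 4.4.1.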
(4.3.12) looks like the Bernoulli equation, so we multiply both sides by E −q (t) and define Ψ(t) E 1−q (t). Then Ψ(t) → 0 as t approaches to T ∗ and q Ψ (t) + C(q − 1) 1 + tn/2 Ψ(t) ≥ −C(q − 1)|Γ1 | 1 + tn/2 . (4.3.13) We introduce the integration factor µ(t) which is defined as t µ(t) exp C (q − 1) 1 + τ n/2 dτ , ∀ t ≥ 0. 0 It is readily seen that n µ(t) ≤ C exp C t1+ 2 . 100 (4.3.14) Multiplying (4.3.13) by µ(t), one gets µ(t)Ψ(t) q ≥ −C(q − 1)|Γ1 | 1 + tn/2 µ(t). −(n+2)(q−1) Integrating this inequality and noticing that µ(0)Ψ(0) = M0 −(n+2)(q−1) µ(t)Ψ(t) ≥ M0 t − C(q − 1)|Γ1 | 1 + τ n/2 q , one obtains µ(τ ) dτ. (4.3.15) 0 It follows from (4.3.14) that t 1 + τ n/2 q t µ(τ ) dτ ≤ C 0 1 + τ n/2 q n exp C τ 1+ 2 dτ 0 ≤ C 1 + tn/2 q n t exp C t1+ 2 n ≤ C exp (C + 1) t1+ 2 . Plugging in (4.3.15), we obtain −(n+2)(q−1) µ(t)Ψ(t) ≥ M0 n − C(q − 1)|Γ1 | exp C t1+ 2 . Taking t → T ∗ , one obtains n −(n+2)(q−1) C(q − 1)|Γ1 | exp C (T ∗ )1+ 2 ≥ M0 n −(n+2)(q−1) exp C (T ∗ )1+ 2 ≥ C −1 (q − 1)−1 |Γ1 |−1 M0 n+2 C (T ∗ ) 2 ≥ ln |Γ1 |−1 − (n + 2)(q − 1) ln M0 − ln(q − 1) − ln C. 101 Hence, T∗ 4.3.2 ≥C 2 − n+2 ln |Γ1 |−1 − (n + 2)(q − 1) ln M0 − ln(q − 1) − ln C 2 n+2 . Better estimate by a new method The aim of this subsection is the same as the last subsection, which is to find a lower bound of T ∗ . But this subsection will provide a different method which leads to a better lower bound as in (4.1.1). Comparing (4.1.1) with (4.3.1), for convenience of statement, we use T1 and T2 to represent the lower bounds in (4.1.1) and (4.3.1) respectively. The advantage of (4.1.1) is in the following aspects. • T1 is always positive, but T2 will be negative unless |Γ1 | is sufficiently small, M0 is sufficiently small or q is sufficiently close to 1. • As q 1 , while the order of T is only ln 1 1, the order of T1 is q−1 2 q−1 2 n+2 . 
• As $|\Gamma_1| \searrow 0$, the order of $T_1$ is $\ln|\Gamma_1|^{-1}$, while the order of $T_2$ is only $\big(\ln|\Gamma_1|^{-1}\big)^{\frac{2}{n+2}}$.

• As $M_0 \searrow 0$, the order of $T_1$ is $\ln M_0^{-1}$, while the order of $T_2$ is only $\big(\ln M_0^{-1}\big)^{\frac{2}{n+2}}$.

The problem with the method in Subsection 4.3.1 is that the estimates of the integral terms lose too much when $t$ is large. For example, in (4.3.6), the term
$$\tau^{-n/2}\int_{\partial\Omega} e^{-|x-y|^2/(8\tau)}\,dS(y)$$
is bounded by $C\,\tau^{-1/2}$. When $\tau$ is small, this estimate is acceptable. But when $\tau$ is large, this term obviously decays like $\tau^{-n/2}$, so the bound $\tau^{-1/2}$ is too rough. As a consequence, the main change in this subsection is that we divide the range of $M(t)$ into small pieces and analyse each piece separately. Once a lower bound for the time spent in each piece is found, adding them together yields a lower bound for $T^*$.

In the rest of this thesis, we define
$$B_1 \triangleq \sup_{\tau > 0}\ \sup_{x \in \partial\Omega}\ \tau^{-\frac{n-1}{2}}\int_{\partial\Omega} e^{-|x-y|^2/(4\tau)}\,dS(y). \tag{4.3.16}$$
It is shown in Lemma 4.3.1 that $B_1$ is a finite positive constant depending only on $\Omega$ and $n$. In addition, for convenience of notation, for any $\alpha \in \big[0, \frac{1}{n-1}\big)$, let
$$n_\alpha \triangleq \frac{1-(n-1)\alpha}{2}. \tag{4.3.17}$$
It is readily seen that $0 < n_\alpha \le \frac12$.

Lemma 4.3.3. Let $\Omega$ and $\Gamma_1$ be the same as in (1.1.1). Then there exists $C = C(n, \Omega)$ such that for any $\alpha \in \big[0, \frac{1}{n-1}\big)$, $x \in \partial\Omega$ and $t > 0$,
$$\int_0^t \int_{\Gamma_1} \Phi(x-y, t-\tau)\,dS(y)\,d\tau \le \frac{C\,|\Gamma_1|^{\alpha}\,t^{n_\alpha}}{1-(n-1)\alpha}. \tag{4.3.18}$$
In particular, if $\alpha = 0$, then
$$\int_0^t \int_{\Gamma_1} \Phi(x-y, t-\tau)\,dS(y)\,d\tau \le C\,\sqrt{t}.$$

Proof. Let $x \in \partial\Omega$, $t > 0$ and $\alpha \in \big[0, \frac{1}{n-1}\big)$. Denote
$$\mathrm{LHS} = \int_0^t \int_{\Gamma_1} \Phi(x-y, t-\tau)\,dS(y)\,d\tau.$$
By a change of variable in $\tau$,
$$\mathrm{LHS} = \int_0^t \int_{\Gamma_1} \Phi(x-y, \tau)\,dS(y)\,d\tau = \frac{1}{(4\pi)^{n/2}} \int_0^t \tau^{-n/2} \int_{\Gamma_1} e^{-|x-y|^2/(4\tau)}\,dS(y)\,d\tau. \tag{4.3.19}$$
For any $m \ge 1$, applying Hölder's inequality,
$$\int_{\Gamma_1} e^{-|x-y|^2/(4\tau)}\,dS(y) \le \Big(\int_{\Gamma_1} e^{-m|x-y|^2/(4\tau)}\,dS(y)\Big)^{1/m}\,|\Gamma_1|^{(m-1)/m}. \tag{4.3.20}$$
Recalling the definition of $B_1$ in (4.3.16),
$$\int_{\Gamma_1} e^{-m|x-y|^2/(4\tau)}\,dS(y) = \int_{\Gamma_1} e^{-|x-y|^2/[4(\tau/m)]}\,dS(y) \le B_1\,\Big(\frac{\tau}{m}\Big)^{\frac{n-1}{2}} \le B_1\,\tau^{\frac{n-1}{2}}.$$
Combining this inequality with (4.3.20), Γ1 2 1/m e−|x−y| /(4τ ) dS(y) ≤ B1 τ (n−1)/(2m) |Γ1 |(m−1)/m ≤ (B1 + 1) τ (n−1)/(2m) |Γ1 |(m−1)/m . 104 (4.3.21) Plugging (4.3.21) into (4.3.19), LHS ≤ B1 + 1 (4π)n/2 t |Γ1 |(m−1)/m n n−1 τ − 2 + 2m dτ. (4.3.22) 0 Let m= 1 . 1−α Then m ≥ 1 and (m − 1)/m = α. Therefore (4.3.22) becomes LHS ≤ = B1 + 1 (4π)n/2 |Γ1 |α B1 + 1 (4π)n/2 nα t τ nα −1 dτ 0 |Γ1 |α tnα , 1 . where the last equality is due to the assumption that α < n−1 Proof of Theorem 4.1.1. In this proof, C denote the constants which only depend on n and Ω, the values of C may be different in different places. But C ∗ will represent a fixed value which also depends only on n and Ω. Recalling that M (t) is defined as in (1.4.3). For any k ≥ 0, define Mk = 3k M0 (4.3.23) and Tk to be the first time that the function M (t) reaches Mk . Obviously T0 = 0. For any k ≥ 1, denote tk Tk − Tk−1 (4.3.24) to be the time spent in the kth step. For any k ≥ 1, we are trying to find a lower bound tk∗ for tk , then summing up all tk∗ gives a lower bound for T ∗ . 105 By the maximum principle and the Hopf lemma, there exists xk ∈ Γ1 such that u(xk , Tk ) = Mk . (4.3.25) Applying the time-shifted representation formula (4.2.4) with T = Tk−1 and (x, t) = (xk , tk ), then u(xk , Tk ) = 2 Ω Φ(xk − y, tk ) u(y, Tk−1 ) dy ∂Φ(xk − y, tk − τ ) u(y, Tk−1 + τ ) dS(y) dτ ∂n(y) ∂Ω tk −2 0 tk +2 0 Γ1 Φ(xk − y, tk − τ ) uq (y, Tk−1 + τ ) dS(y) dτ. (4.3.26) Combining (4.3.25) and (4.3.26), Mk ≤ 2Mk−1 q + 2Mk Φ(xk − y, tk ) dy + 2Mk Ω tk 0 Γ1 tk 0 ∂Φ(xk − y, tk − τ ) dS(y) dτ ∂n(y) ∂Ω Φ(xk − y, tk − τ ) dS(y) dτ Ik + IIk + IIIk . (4.3.27) Since Φ is the fundamental solution of the heat equation, it is evident that Ik ≤ 2Mk−1 . (4.3.28) Secondly it is easy to check that IIk ≤ C 106 tk Mk . (4.3.29) Now define r= 1 , 2(n − 1) then it follows from (4.3.18) that 1/4 IIIk ≤ C |Γ1 |r tk q Mk . 
(4.3.30)

Combining (4.3.27), (4.3.28), (4.3.29) and (4.3.30), there exists a constant $C = C(n, \Omega)$ such that
$$M_k \le 2M_{k-1} + C\,\sqrt{t_k}\,M_k + C\,|\Gamma_1|^r\,t_k^{1/4}\,M_k^q.$$
Recalling that $M_k = 3^k M_0$ and $M_{k-1} = 3^{k-1} M_0$, we get
$$3^k M_0 \le 2 \cdot 3^{k-1} M_0 + C\,\sqrt{t_k}\,3^k M_0 + C\,|\Gamma_1|^r\,t_k^{1/4}\,3^{qk} M_0^q.$$
Subtracting $2 \cdot 3^{k-1} M_0$ and then dividing by $3^{k-1} M_0$, we obtain the existence of a constant $C = C(n, \Omega)$ such that
$$1 \le C\,\sqrt{t_k} + C\,|\Gamma_1|^r\,t_k^{1/4}\,3^{(q-1)k}\,M_0^{q-1}. \tag{4.3.31}$$
Rearranging (4.3.31),
$$\big(t_k^{1/4}\big)^2 + |\Gamma_1|^r\,M_0^{q-1}\,3^{(q-1)k}\,t_k^{1/4} - \frac{1}{C} \ge 0.$$
Regarding the above inequality as a quadratic inequality in $t_k^{1/4}$, it is readily seen that $t_k^{1/4}$ has to be no less than the positive root of the corresponding quadratic equation, that is,
$$t_k^{1/4} \ge \frac12\bigg(-|\Gamma_1|^r\,M_0^{q-1}\,3^{(q-1)k} + \sqrt{|\Gamma_1|^{2r}\,M_0^{2(q-1)}\,3^{2(q-1)k} + \frac{4}{C}}\,\bigg).$$
After some algebraic simplification, we obtain
$$t_k^{1/4} \ge \frac{1}{C}\,\Big(|\Gamma_1|^{2r}\,M_0^{2(q-1)}\,3^{2(q-1)k} + \frac{4}{C}\Big)^{-1/2}.$$
Hence, there exists $C^* = C^*(n, \Omega)$ such that
$$t_k \ge \frac{C^*}{|\Gamma_1|^{4r}\,M_0^{4(q-1)}\,3^{4(q-1)k} + 1}.$$
After obtaining the lower bound for each $t_k$, we can add all of them together to get a lower bound for $T^*$. Namely,
$$T^* \ge C^* \sum_{k=1}^{\infty} \frac{1}{|\Gamma_1|^{4r}\,M_0^{4(q-1)}\,3^{4(q-1)k} + 1}.$$
Therefore,
$$T^* \ge C^* \int_1^{\infty} \frac{dx}{|\Gamma_1|^{4r}\,M_0^{4(q-1)}\,3^{4(q-1)x} + 1} = \frac{C^*}{4(q-1)\ln 3}\,\ln\Big(1 + \frac{1}{|\Gamma_1|^{4r}\,M_0^{4(q-1)}\,3^{4(q-1)}}\Big).$$
Recalling that $r = \frac{1}{2(n-1)}$, (4.1.1) follows.

4.4 Lower bound on life span: case of convex $C^2$ domain

4.4.1 Main idea

First of all, let us recall the method in Subsection 4.3.1. Define $M$ as in (4.3.3). Then for any $t > 0$, there exists $x_0 \in \partial\Omega$ such that $u(x_0, t) = M(t)$, so it follows from the representation formula (4.2.3) that
$$M(t) \le 2M_0 \int_\Omega \Phi(x_0-y, t)\,dy + 2\int_0^t M(\tau) \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_0-y, t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau + 2\int_0^t M^q(\tau) \int_{\Gamma_1} \Phi(x_0-y, t-\tau)\,dS(y)\,d\tau \triangleq \mathrm{I} + \mathrm{II} + \mathrm{III}. \tag{4.4.1}$$
I, II and III are called the constant functional, the linear functional and the nonlinear functional of $M(t)$, respectively.
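The gap between the linear functional II and the nonlinear functional III just named can be previewed numerically with the two model inequalities of Remark 4.4.1 below: keeping the linear term caps the Gronwall bound at logarithmic order in $|\Gamma_1|^{-1}$, while dropping it yields polynomial order. The sketch below is only an illustration of that comparison; the initial value $A$, the exponent $q$, and the sample values standing in for $|\Gamma_1|$ are arbitrary.

```python
import math

def T1_star(A, gamma1, q):
    """Gronwall bound for phi' <= phi + gamma1 * phi**q (linear term kept):
    T1* >= (1/(q-1)) * ln(1 + 1/(A**(q-1) * gamma1))."""
    return math.log(1.0 + 1.0 / (A ** (q - 1) * gamma1)) / (q - 1)

def T2_star(A, gamma1, q):
    """Gronwall bound for phi' <= gamma1 * phi**q (no linear term):
    T2* >= 1/((q-1) * A**(q-1) * gamma1)."""
    return 1.0 / ((q - 1) * A ** (q - 1) * gamma1)
```

As the stand-in for $|\Gamma_1|$ shrinks, `T2_star` outgrows `T1_star` without bound, which is precisely the obstruction discussed next.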
After estimating
$$\int_0^t \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_0-y, t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau \quad \text{and} \quad \int_0^t \int_{\Gamma_1} \Phi(x_0-y, t-\tau)\,dS(y)\,d\tau,$$
the lower bound in Theorem 4.3.2 is achieved by applying a Gronwall-type technique to (4.4.1). However, this lower bound is only logarithmic in $|\Gamma_1|^{-1}$ as $|\Gamma_1| \searrow 0$. The obstruction that prevents this method from obtaining a polynomial order of $|\Gamma_1|^{-1}$ for the lower bound is explained in the following remark.

Remark 4.4.1. Consider the following two simple integral inequalities. First,
$$\begin{cases} \phi_1(t) \le A + \int_0^t \phi_1(\tau)\,d\tau + |\Gamma_1| \int_0^t \phi_1^q(\tau)\,d\tau, & t > 0, \\ \phi_1(0) = A > 0. \end{cases} \tag{4.4.2}$$
It is easy to see by Gronwall's inequality that the blow-up time $T_1^*$ satisfies
$$T_1^* \ge \frac{1}{q-1}\,\ln\Big(1 + \frac{1}{A^{q-1}\,|\Gamma_1|}\Big),$$
which is of order $\ln(|\Gamma_1|^{-1})$ as $|\Gamma_1| \to 0$. Second,
$$\begin{cases} \phi_2(t) \le A + |\Gamma_1| \int_0^t \phi_2^q(\tau)\,d\tau, & t > 0, \\ \phi_2(0) = A > 0. \end{cases} \tag{4.4.3}$$
It is easy to see by Gronwall's inequality that the blow-up time $T_2^*$ satisfies
$$T_2^* \ge \frac{1}{(q-1)\,A^{q-1}\,|\Gamma_1|},$$
which is of order $|\Gamma_1|^{-1}$.

From these two integral inequalities, the obstruction that prevents the lower bound from being of polynomial order in $|\Gamma_1|^{-1}$ is the linear term $\int_0^t \phi_1(\tau)\,d\tau$ in (4.4.2). Corresponding to (4.4.1), it is the linear term II:
$$\int_0^t M(\tau) \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_0-y, t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau.$$
If the linear term II can be eliminated from (4.4.1), then the lower bound is expected to be of polynomial order $|\Gamma_1|^{-\beta}$ for some $\beta > 0$. Taking advantage of the convexity of $\Omega$, the identity (4.4.8) in Lemma 4.4.2 can be used to absorb the linear term II into the constant term I in (4.4.1). Let us see how it works.

First, if $t \in (0, T^*)$ satisfies
$$M(t) > M_0 \quad \text{and} \quad \max_{x \in \partial\Omega} u(x, t) = M(t), \tag{4.4.4}$$
then there exists a point $x_0 \in \partial\Omega$ such that $u(x_0, t) = \max_{x \in \partial\Omega} u(x, t) = M(t)$. Thus, it follows from (4.3.3) and (4.4.1) that
$$M(t) \le 2M_0 \int_\Omega \Phi(x_0-y, t)\,dy + 2M(t) \int_0^t \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_0-y, t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau + 2M^q(t) \int_0^t \int_{\Gamma_1} \Phi(x_0-y, t-\tau)\,dS(y)\,d\tau. \tag{4.4.5}$$
Invoking (4.4.8),
$$\int_0^t \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_0-y, t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau = \frac12 - \int_\Omega \Phi(x_0-y, t)\,dy.$$
Plugging this identity into (4.4.5) and simplifying, one has
$$M(t) \int_\Omega \Phi(x_0-y, t)\,dy \le M_0 \int_\Omega \Phi(x_0-y, t)\,dy + M^q(t) \int_0^t \int_{\Gamma_1} \Phi(x_0-y, t-\tau)\,dS(y)\,d\tau. \tag{4.4.6}$$
Now this estimate does not contain the linear term, which should enable us to get a lower bound of polynomial order $|\Gamma_1|^{-\beta}$ for some $\beta > 0$.

To continue from (4.4.6), the Gronwall-type technique will not work, since (4.4.6) has been proved only for those $t$ satisfying (4.4.4). Then what kind of time $t$ satisfies (4.4.4)? By the maximum principle, if at some $t > 0$, $M(t) > M(t_1)$ for any $0 \le t_1 < t$, then such $t$ satisfies (4.4.4). As an instance, for any $\lambda_1 > 1$, if we write $M_1 = \lambda_1 M_0$ and denote by $T_1$ the first time that $M(t)$ reaches $M_1$, then $T_1$ satisfies (4.4.4).

Another disadvantage of (4.4.6) is that, although it gets rid of the linear functional of $M(t)$, there is an extra term $\int_\Omega \Phi(x_0-y, t)\,dy$ on the left-hand side, which decays like $t^{-n/2}$ when $t$ becomes large. Hence, to avoid the effect of this decay, $\lambda_1$ should be kept small. Taking these restrictions into consideration, we need to come up with some delicate strategies. The rough idea is as follows. We first choose a suitably small $\lambda_1$ such that there is still a lower bound $t_*$ for $T_1$, where $T_1$ is the first time for $M(t)$ to reach $\lambda_1 M_0$. Then we regard $u(\cdot, T_1)$ as the "initial data" and repeat the first step. Finally, if this process can proceed for at least $L_0$ steps, then $L_0 t_*$ is a lower bound for $T^*$, since the time spent in each step has the same lower bound $t_*$ (the choice of $t_*$ will be the same in each step).

4.4.2 Auxiliary lemmas

The second conclusion of the next lemma is the only place where the convexity of $\Omega$ is used in this section.

Lemma 4.4.2. Let $\Phi$ be the heat kernel as in (1.4.4). Then
$$\int_\Omega \Phi(x-y, t)\,dy - \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x-y, t-\tau)}{\partial n(y)}\,dS(y)\,d\tau = \frac12, \quad \forall\, x \in \partial\Omega,\ t > 0. \tag{4.4.7}$$
In addition, if $\Omega$ is convex, then
$$\int_\Omega \Phi(x-y, t)\,dy + \int_0^t \int_{\partial\Omega} \Big|\frac{\partial \Phi(x-y, t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau = \frac12, \quad \forall\, x \in \partial\Omega,\ t > 0. \tag{4.4.8}$$

Proof.
The problem
$$\begin{cases} u_t(x,t) = \Delta u(x,t) & \text{in } \Omega \times (0, \infty), \\ \frac{\partial u(x,t)}{\partial n(x)} = 0 & \text{on } \partial\Omega \times (0, \infty), \\ u(x, 0) = 1 & \text{in } \Omega, \end{cases} \tag{4.4.9}$$
obviously has the unique solution $u \equiv 1$ on $\overline{\Omega} \times [0, \infty)$. As a result, plugging $u \equiv 1$ into the representation formula (4.2.3) (taking $\Gamma_1 = \emptyset$), (4.4.7) follows.

Now if $\Omega$ is convex, then $\frac{\partial \Phi(x-y,\, t-\tau)}{\partial n(y)} \le 0$ for any $x, y \in \partial\Omega$. Thus, (4.4.7) implies (4.4.8).

Lemma 4.4.3. Define $F : \partial\Omega \times [0,1] \to \mathbb{R}$ by
$$F(x, t) = \begin{cases} \int_\Omega \Phi(x-y, t)\,dy & \text{for } x \in \partial\Omega,\ t \in (0, 1], \\ 1/2 & \text{for } x \in \partial\Omega,\ t = 0. \end{cases}$$
Then $F$ is continuous on $\partial\Omega \times [0,1]$. As a result,
$$b_1 \triangleq \min_{\partial\Omega \times [0,1]} F \tag{4.4.10}$$
is a positive constant depending only on $\Omega$ and the dimension $n$.

Proof. Since $\partial\Omega$ has been assumed to be $C^2$, the proof can be carried out by standard analysis. One can also prove it by applying (4.4.7) and noticing the decay, uniform in $x \in \partial\Omega$, of the following integral:
$$\lim_{t \to 0} \int_0^t \int_{\partial\Omega} \frac{\partial \Phi(x-y, t-\tau)}{\partial n(y)}\,dS(y)\,d\tau = 0.$$
The details are omitted here.

Next, $M_0$ and $M(t)$ are still defined as in (1.4.2) and (1.4.3). In addition, we define
$$E_q = (q-1)^{q-1}/q^q, \quad \forall\, q > 1. \tag{4.4.11}$$
By elementary calculus,
$$\frac{1}{3q} < E_q < \min\Big\{\frac{1}{q},\ \frac{1}{(q-1)\,e}\Big\} < 1. \tag{4.4.12}$$

Lemma 4.4.4. For any $q > 1$ and $m > 0$, write $E_q$ as in (4.4.11) and define $g : (m, \infty) \to \mathbb{R}$ by
$$g(\lambda) = \frac{\lambda - m}{\lambda^q}, \quad \forall\, \lambda > m. \tag{4.4.13}$$
Then the following two claims hold.

(1) For any $y \in \big(0,\, m^{1-q} E_q\big]$, there exists a unique $\lambda \in \big(m,\, \frac{q}{q-1}m\big]$ such that $g(\lambda) = y$.

(2) For any $y > m^{1-q} E_q$, there does not exist $\lambda > m$ such that $g(\lambda) = y$.

Proof. Since $g$ is strictly increasing on the interval $\big(m, \frac{q}{q-1}m\big]$ and strictly decreasing on the interval $\big[\frac{q}{q-1}m, \infty\big)$, it attains its maximum at $\lambda = \frac{q}{q-1}m$. Noticing that
$$g\Big(\frac{q}{q-1}m\Big) = m^{1-q}\,E_q,$$
the claims (1) and (2) follow directly.

4.4.3 Proof of Theorem 4.1.4

Proof. Let $M(t)$ be defined as in (1.4.3).
In the following, the first step is: for any $t_* \in (0, 1]$, we will find a finite, strictly increasing sequence $\{M_k\}_{0 \le k \le L}$ such that if $T_k$ denotes the first time that $M(t) = M_k$, then $T_k - T_{k-1} \ge t_*$ for $1 \le k \le L$. The second step is to derive a lower bound for $L t_*$ as an explicit formula in $t_*$ and then maximize that lower bound over $t_* \in (0, 1]$.

Step 1. Let $t_* \in (0, 1]$, to be determined later in Step 2. Define $M_0$ as in (1.4.2) and $T_0 = 0$. For $k \ge 1$, suppose $M_{k-1}$ has been constructed; we look for $M_k$ such that $T_k - T_{k-1} \ge t_*$. Denote $t_k = T_k - T_{k-1}$. We first check what happens if $t_k \le 1$.

By the maximum principle, there exists $x_k \in \partial\Omega$ such that $u(x_k, T_k) = M_k$, so $T_k$ satisfies (4.4.4). Applying the time-shifted representation formula (4.2.4) with $T = T_{k-1}$ and $(x, t) = (x_k, t_k)$,
$$u(x_k, T_k) = 2\int_\Omega \Phi(x_k-y, t_k)\,u(y, T_{k-1})\,dy - 2\int_0^{t_k} \int_{\partial\Omega} \frac{\partial \Phi(x_k-y, t_k-\tau)}{\partial n(y)}\,u(y, T_{k-1}+\tau)\,dS(y)\,d\tau + 2\int_0^{t_k} \int_{\Gamma_1} \Phi(x_k-y, t_k-\tau)\,u^q(y, T_{k-1}+\tau)\,dS(y)\,d\tau. \tag{4.4.14}$$
As a result,
$$M_k \le 2M_{k-1} \int_\Omega \Phi(x_k-y, t_k)\,dy + 2M_k \int_0^{t_k} \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_k-y, t_k-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau + 2M_k^q \int_0^{t_k} \int_{\Gamma_1} \Phi(x_k-y, t_k-\tau)\,dS(y)\,d\tau. \tag{4.4.15}$$
Since $\Omega$ is convex, it follows from (4.4.8) that
$$\int_0^{t_k} \int_{\partial\Omega} \Big|\frac{\partial \Phi(x_k-y, t_k-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau = \frac12 - \int_\Omega \Phi(x_k-y, t_k)\,dy.$$
Plugging this identity into (4.4.15) and simplifying,
$$(M_k - M_{k-1}) \int_\Omega \Phi(x_k-y, t_k)\,dy \le M_k^q \int_0^{t_k} \int_{\Gamma_1} \Phi(x_k-y, t_k-\tau)\,dS(y)\,d\tau. \tag{4.4.16}$$
Due to the assumption that $t_k \le 1$, it follows from Lemma 4.4.3 that
$$\int_\Omega \Phi(x_k-y, t_k)\,dy \ge b_1. \tag{4.4.17}$$
In addition, Lemma 4.3.3 implies the existence of a constant $C = C(n, \Omega)$ such that
$$\int_0^{t_k} \int_{\Gamma_1} \Phi(x_k-y, t_k-\tau)\,dS(y)\,d\tau \le \frac{C\,|\Gamma_1|^{\alpha}\,t_k^{n_\alpha}}{n_\alpha}, \tag{4.4.18}$$
where $n_\alpha$ is defined in (4.3.17). Plugging (4.4.17) and (4.4.18) into (4.4.16),
$$\frac{M_k - M_{k-1}}{M_k^q} \le \frac{C\,|\Gamma_1|^{\alpha}\,t_k^{n_\alpha}}{b_1\,n_\alpha}. \tag{4.4.19}$$
In summary, this paragraph shows that if $t_k \le 1$, then $M_k$ satisfies (4.4.19).
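Relation (4.4.19) links the increment $M_k - M_{k-1}$ to $M_k^q$; the construction in the next paragraphs chooses $M_k$ so that equality holds with a prescribed right-hand side $\delta_1$. The sketch below is a numerical prototype of that iteration, not part of the proof: it solves for each next level by bisection on the increasing branch $\big(m, \frac{q}{q-1}m\big]$ guaranteed by Lemma 4.4.4, stops when $M_{k-1}^{q-1}\delta_1 > E_q$, and counts the number of steps. Here $\delta_1$ is supplied directly rather than computed from domain constants, so the numbers are purely illustrative.

```python
def next_level(m_prev, q, delta1):
    """Unique root in (m_prev, q*m_prev/(q-1)] of (M - m_prev)/M**q = delta1,
    found by bisection; returns None when m_prev**(q-1)*delta1 > E_q."""
    Eq = (q - 1) ** (q - 1) / q ** q
    if m_prev ** (q - 1) * delta1 > Eq:
        return None
    lo, hi = m_prev, q * m_prev / (q - 1)
    g = lambda M: (M - m_prev) / M ** q  # increasing on this branch
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < delta1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def count_steps(M0, q, delta1):
    """Number of levels L the construction produces before stopping."""
    L, m = 0, M0
    while True:
        nxt = next_level(m, q, delta1)
        if nxt is None:
            return L
        m, L = nxt, L + 1
```

For small $\delta_1$ the step count comfortably exceeds the lower bound of Lemma 4.4.5 below, as one would hope.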
Based on the above observation, denote
\[
\delta_1=\frac{C\,|\Gamma_1|^{\alpha}\,t_*^{n_\alpha}}{b_1\,n_\alpha},
\tag{4.4.20}
\]
where the constant $C$ is the same as that in (4.4.19). Then we define $M_k$ to be the solution (if it exists) to
\[
\frac{M_k-M_{k-1}}{M_k^{q}}=\delta_1.
\tag{4.4.21}
\]
With such a choice for $M_k$, it is evident that $t_k\ge\min\{1,t_*\}=t_*$.

On the other hand, by applying Lemma 4.4.4, (4.4.21) has a solution $M_k>M_{k-1}$ if and only if $\delta_1\le M_{k-1}^{1-q}E_q$. In addition, as long as such a solution exists, $M_k$ can be chosen to satisfy
\[
M_{k-1}<M_k\le\frac{q}{q-1}M_{k-1}.
\]
Thus, the strategy of constructing $\{M_k\}$ is summarized as follows. First, $M_0$ is defined to be the same as in (1.4.2). Next, for $k\ge 1$, suppose $M_{k-1}$ has been constructed; then, based on Lemma 4.4.4, whether $M_k$ is defined depends on how large $M_{k-1}$ is.

If $M_{k-1}^{q-1}\delta_1\le E_q$, then we define $M_k\in\big(M_{k-1},\frac{q}{q-1}M_{k-1}\big]$ to be the solution to (4.4.21).

If $M_{k-1}^{q-1}\delta_1>E_q$, then there does not exist $M_k>M_{k-1}$ which solves (4.4.21). So we do not define $M_k$ and stop the construction.

Based on this construction, if $\{M_k\}_{1\le k\le L_0}$ have been defined, then for any $1\le k\le L_0$, $T_k-T_{k-1}\ge t_*$. Therefore, $T_k\ge kt_*$ for any $1\le k\le L_0$. Applying Theorem 3.1.1, $L_0\le T^*/t_*<\infty$, which means the cardinality of $\{M_k\}$ has to be finite. So we can assume the constructed sequence is $\{M_k\}_{0\le k\le L}$ for some finite $L$.

Step 2. By Lemma 4.4.5,
\[
L>\frac{1}{10(q-1)}\Big(\frac{1}{M_0^{q-1}\delta_1}-3q\Big),
\]
so
\[
T^*\ge Lt_*>\frac{1}{10(q-1)}\Big(\frac{1}{M_0^{q-1}\delta_1}-3q\Big)t_*.
\tag{4.4.22}
\]
Plugging (4.4.20) into (4.4.22),
\[
T^*\ge\frac{1}{10(q-1)}\Big(\frac{b_1 n_\alpha}{M_0^{q-1}C|\Gamma_1|^{\alpha}}\,t_*^{1-n_\alpha}-3q\,t_*\Big)
=\frac{3q}{10(q-1)}\Big(\frac{C_1 n_\alpha}{qM_0^{q-1}|\Gamma_1|^{\alpha}}\,t_*^{1-n_\alpha}-t_*\Big),
\tag{4.4.23}
\]
where $C_1=b_1/(3C)$ is a constant only depending on $n$ and $\Omega$. In order to maximize the right hand side of (4.4.23), let
\[
A=\frac{C_1 n_\alpha}{qM_0^{q-1}|\Gamma_1|^{\alpha}},\qquad \beta=1-n_\alpha\in\Big[\frac12,1\Big)
\]
and define
\[
t_*\triangleq(\min\{1,\beta A\})^{1/(1-\beta)}.
\tag{4.4.24}
\]
Then it follows from Lemma 4.4.6 that $t_*$ maximizes the right hand side of (4.4.23) on $(0,1]$ and
\[
T^*\ge\frac{3q}{10(q-1)}(1-\beta)A\,(\min\{1,\beta A\})^{\beta/(1-\beta)}.
\]
Noticing that $\beta\ge 1/2$, so
\[
T^*\ \ge\ \frac{3q}{10(q-1)}(1-\beta)A\Big(\min\Big\{1,\frac{A}{2}\Big\}\Big)^{\beta/(1-\beta)}
\ \ge\ \frac{3C_1 n_\alpha^2}{10(q-1)M_0^{q-1}|\Gamma_1|^{\alpha}}\Big(\min\Big\{1,\frac{C_1 n_\alpha}{2qM_0^{q-1}|\Gamma_1|^{\alpha}}\Big\}\Big)^{\frac{1}{n_\alpha}-1}
\ \ge\ \frac{C_2}{(q-1)M_0^{q-1}|\Gamma_1|^{\alpha}}\Big(\min\Big\{1,\frac{1}{qM_0^{q-1}|\Gamma_1|^{\alpha}}\Big\}\Big)^{\frac{1}{n_\alpha}-1},
\tag{4.4.25}
\]
where
\[
C_2=\frac{3C_1 n_\alpha^2}{10}\Big(\min\Big\{1,\frac{C_1 n_\alpha}{2}\Big\}\Big)^{\frac{1}{n_\alpha}-1}
\tag{4.4.26}
\]
is a constant depending on $n$, $\Omega$ and $\alpha$. In particular, if we choose $\alpha=0$ in (4.4.25) and (4.4.26), then it follows from $n_\alpha=\frac{1-(n-1)\alpha}{2}=\frac12$ that
\[
T^*\ge\frac{C_3}{(q-1)M_0^{q-1}}\min\Big\{1,\frac{1}{qM_0^{q-1}}\Big\},
\]
where $C_3$ is a positive constant only depending on $n$ and $\Omega$.

Lemma 4.4.5. Given $q>1$, $M_0>0$ and $\delta_1>0$, denote $E_q$ as in (4.4.11) and construct a finite sequence $\{M_k\}_{0\le k\le L}$ inductively as follows. For $k\ge 1$, suppose $M_{k-1}$ has been constructed; then, based on Lemma 4.4.4, whether $M_k$ is defined depends on how large $M_{k-1}$ is.

If $M_{k-1}^{q-1}\delta_1\le E_q$, then we define $M_k\in\big(M_{k-1},\frac{q}{q-1}M_{k-1}\big]$ to be the solution to
\[
\frac{M_k-M_{k-1}}{M_k^{q}}=\delta_1.
\tag{4.4.27}
\]
If $M_{k-1}^{q-1}\delta_1>E_q$, then there does not exist $M_k>M_{k-1}$ which solves (4.4.27). So we do not define $M_k$ and stop the construction. Denote the last term of this construction by $M_L$. Then
\[
L>\frac{1}{10(q-1)}\Big(\frac{1}{M_0^{q-1}\delta_1}-3q\Big).
\tag{4.4.28}
\]

Proof. First, we mention that the construction indeed stops in finitely many steps. In fact, it follows from (4.4.27) that the sequence $\{M_k\}$ is strictly increasing and
\[
M_k=M_{k-1}+M_k^{q}\delta_1\ge\big(1+M_0^{q-1}\delta_1\big)M_{k-1}.
\]
As a result,
\[
M_k\ge\big(1+\delta_1 M_0^{q-1}\big)^{k}M_0.
\]
Thus, when $k$ is sufficiently large, $M_k^{q-1}\delta_1$ will exceed $E_q$, which forces the construction to stop.

Next, suppose the construction stops at $M_L$, that is to say, the constructed sequence is $\{M_k\}_{0\le k\le L}$. Then the lower bound (4.4.28) for $L$ will be justified in two situations.

Case 1. $M_0^{q-1}\delta_1>E_q$. In this case, it follows from (4.4.12) that
\[
\frac{1}{M_0^{q-1}\delta_1}<\frac{1}{E_q}<3q,
\]
so the right hand side of (4.4.28) is negative. Thus, (4.4.28) holds since $L\ge 0$.

Case 2. $M_0^{q-1}\delta_1\le E_q$. In this case, it is evident that $L\ge 1$.
Therefore, since the last term of the sequence is $M_L$,
\[
M_{L-1}^{q-1}\delta_1\le E_q\quad\text{and}\quad M_L^{q-1}\delta_1>E_q.
\]
According to the recursive relation (4.4.27),
\[
M_{k-1}=M_k\big(1-M_k^{q-1}\delta_1\big).
\]
Raising both sides of the above equality to the power $q-1$ and multiplying by $\delta_1$,
\[
M_{k-1}^{q-1}\delta_1=M_k^{q-1}\delta_1\big(1-M_k^{q-1}\delta_1\big)^{q-1}.
\]
Let $x_k=M_k^{q-1}\delta_1$. Then
\[
x_{k-1}=x_k(1-x_k)^{q-1},\quad\forall\,1\le k\le L.
\tag{4.4.29}
\]
Moreover,
\[
x_0=M_0^{q-1}\delta_1,\quad x_{L-1}\le E_q\quad\text{and}\quad x_L>E_q.
\]
Noticing that $M_L\le\frac{q}{q-1}M_{L-1}$, so
\[
x_L=M_L^{q-1}\delta_1\le\Big(\frac{q}{q-1}\Big)^{q-1}M_{L-1}^{q-1}\delta_1\le\Big(\frac{q}{q-1}\Big)^{q-1}E_q=\frac{1}{q}.
\]
Since the right hand side of (4.4.29) is a nonlinear function in $x_k$, it is better to consider the "reversed" relation of (4.4.29). Thus, we define a new sequence $\{y_k\}_{0\le k\le L}$ in the following way:
\[
y_0\triangleq\min\{1/2,E_q\}\quad\text{and}\quad y_k\triangleq y_{k-1}(1-y_{k-1})^{q-1},\quad\forall\,1\le k\le L.
\tag{4.4.30}
\]
In addition, define $h:(0,1)\to\mathbb{R}$ by $h(t)=t(1-t)^{q-1}$. It is easy to see that $h$ is strictly increasing on $(0,1/q]$ and strictly decreasing on $[1/q,1)$. As a result, it follows from $0<y_0\le E_q<x_L\le 1/q$ that
\[
y_1=h(y_0)<h(x_L)=x_{L-1}.
\]
Iterating this argument, we get $y_k<x_{L-k}$ for any $0\le k\le L$. In particular, when $k=L$,
\[
y_L<x_0=M_0^{q-1}\delta_1.
\]
Since $\{y_k\}$ is a decreasing positive sequence and $y_0\le 1/2$, we have $y_k\le 1/2$ for any $0\le k\le L$. As a result, it follows from (4.4.30) and the mean value theorem that for any $1\le k\le L$,
\[
y_k\ge y_{k-1}\big(1-2(q-1)y_{k-1}\big).
\tag{4.4.31}
\]
Recalling (4.4.12) again,
\[
y_{k-1}\le y_0\le E_q<\frac{1}{(q-1)e},
\]
so
\[
1-2(q-1)y_{k-1}>1-\frac{2}{e}>\frac{1}{5}.
\]
Hence, taking the reciprocal in (4.4.31) yields
\[
\frac{1}{y_k}\le\frac{1}{y_{k-1}\big(1-2(q-1)y_{k-1}\big)}=\frac{1}{y_{k-1}}+\frac{2(q-1)}{1-2(q-1)y_{k-1}}<\frac{1}{y_{k-1}}+10(q-1).
\tag{4.4.32}
\]
Summing up (4.4.32) for $k$ from $1$ to $L$,
\[
\frac{1}{y_L}<\frac{1}{y_0}+10(q-1)L.
\tag{4.4.33}
\]
Since $y_L<M_0^{q-1}\delta_1$ and $y_0=\min\{1/2,E_q\}>\frac{1}{3q}$, it follows from (4.4.33) that
\[
\frac{1}{M_0^{q-1}\delta_1}<3q+10(q-1)L.
\]
Thus,
\[
L>\frac{1}{10(q-1)}\Big(\frac{1}{M_0^{q-1}\delta_1}-3q\Big).
\]

Lemma 4.4.6. Given two constants $A>0$ and $\beta\in(0,1)$, define $f:(0,1]\to\mathbb{R}$ by $f(t)=At^{\beta}-t$. Let $t_0=(\min\{1,\beta A\})^{1/(1-\beta)}$.
Then
\[
f(t_0)=\sup_{0<t\le 1}f(t)\ge(1-\beta)A\,(\min\{1,\beta A\})^{\beta/(1-\beta)}.
\tag{4.4.34}
\]

Lemma 4.5.1. Let $\Omega$ and $\Gamma_1$ be the same as in (1.1.1). Then for any $d>0$, there exists $C=C(n,\Omega,d)$ such that for any $x\in\Gamma_1$ and $t\in(0,1]$,
\[
\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau\le C\exp\Big(-\frac{d^2}{8t}\Big).
\tag{4.5.1}
\]

Proof. In this proof, $C$ denotes a constant which depends only on $n$, $\Omega$ and $d$. By a change of variable in $\tau$ and the definition of $\Phi$,
\[
\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
=\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
\le C\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\frac{|(x-y)\cdot\vec{n}(y)|}{\tau^{\frac{n}{2}+1}}\exp\Big(-\frac{|x-y|^2}{4\tau}\Big)\,dS(y)\,d\tau.
\tag{4.5.2}
\]
Since $\partial\Omega$ is assumed to be $C^2$, then $|(x-y)\cdot\vec{n}(y)|\le C|x-y|^2$. As a result,
\[
\frac{|(x-y)\cdot\vec{n}(y)|}{\tau^{\frac{n}{2}+1}}\exp\Big(-\frac{|x-y|^2}{4\tau}\Big)
\le C|x-y|^{-n}\Big(1+\frac{|x-y|^2}{\tau}\Big)^{\frac{n}{2}+1}\exp\Big(-\frac{|x-y|^2}{4\tau}\Big)
\le C|x-y|^{-n}\exp\Big(-\frac{|x-y|^2}{8\tau}\Big)
\le Cd^{-n}\exp\Big(-\frac{d^2}{8\tau}\Big),
\tag{4.5.3}
\]
where the last inequality is due to the fact that $x\in\Gamma_1$ and $y\in\partial\Omega\setminus[\Gamma_1]_d$. Plugging (4.5.3) into (4.5.2),
\[
\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
\le Cd^{-n}\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\exp\Big(-\frac{d^2}{8\tau}\Big)\,dS(y)\,d\tau
\le Cd^{-n}|\partial\Omega|\int_0^t\exp\Big(-\frac{d^2}{8\tau}\Big)\,d\tau
\le Cd^{-n}|\partial\Omega|\,t\exp\Big(-\frac{d^2}{8t}\Big)
\le C\exp\Big(-\frac{d^2}{8t}\Big),
\]
where the last inequality is due to the assumption that $t\le 1$.

By exploiting Lemma 4.5.1, the following is a variant of the identity (4.4.8), so (4.5.4) will play the same role in the proof of Theorem 4.1.3 as (4.4.8) did in the proof of Theorem 4.1.4.

Corollary 4.5.2. Let $\Omega$ and $\Gamma_1$ be the same as in (1.1.1). Assume there exists $d>0$ such that $\mathrm{Conv}\big([\Gamma_1]_d\big)\subseteq\Omega$. Then there exists $C=C(n,\Omega,d)$ such that for any $x\in\Gamma_1$ and $t\in(0,1]$,
\[
\int_{\Omega}\Phi(x-y,t)\,dy+\int_0^t\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau\le\frac12+C\exp\Big(-\frac{d^2}{8t}\Big).
\tag{4.5.4}
\]

Proof. Split the second term on the left hand side of (4.5.4) into two parts:
\[
\int_0^t\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
=\int_0^t\!\!\int_{[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
+\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau.
\]
It follows from $x\in\Gamma_1$ and $\mathrm{Conv}\big([\Gamma_1]_d\big)\subseteq\Omega$ that
\[
\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\le 0,\quad\forall\,y\in[\Gamma_1]_d.
\]
As a result,
\[
\int_0^t\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
=-\int_0^t\!\!\int_{[\Gamma_1]_d}\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\,dS(y)\,d\tau
+\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
\le-\int_0^t\!\!\int_{\partial\Omega}\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\,dS(y)\,d\tau
+2\int_0^t\!\!\int_{\partial\Omega\setminus[\Gamma_1]_d}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau.
\tag{4.5.5}
\]
Combining (4.5.5) with Lemma 4.5.1, there exists a constant $C=C(n,\Omega,d)$ such that
\[
\int_0^t\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
\le-\int_0^t\!\!\int_{\partial\Omega}\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\,dS(y)\,d\tau+C\exp\Big(-\frac{d^2}{8t}\Big).
\tag{4.5.6}
\]
Hence it follows from (4.4.7) and (4.5.6) that
\[
\int_{\Omega}\Phi(x-y,t)\,dy+\int_0^t\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
\le\int_{\Omega}\Phi(x-y,t)\,dy-\int_0^t\!\!\int_{\partial\Omega}\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\,dS(y)\,d\tau+C\exp\Big(-\frac{d^2}{8t}\Big)
=\frac12+C\exp\Big(-\frac{d^2}{8t}\Big).
\]

Next, we introduce a simple fact which can be regarded as a rearrangement result.

Lemma 4.5.3. Let $n\ge 1$ and $f:(0,\infty)\to[0,\infty)$ be a decreasing function. Then for any bounded, open subset $U$ of $\mathbb{R}^n$ and for any $x\in\mathbb{R}^n$,
\[
\int_{U}f(|x-y|)\,dy\le\int_{B_R(0)}f(|z|)\,dz,
\tag{4.5.7}
\]
where $R$ satisfies $|B_R(0)|=|U|$ (namely the volume of $B_R(0)$ equals the volume of $U$).

Proof. Define $U_1=U-\{x\}$. Then by the change of variable $z=y-x$,
\[
\int_{U}f(|x-y|)\,dy=\int_{U_1}f(|z|)\,dz=\int_{U_1\cap B_R(0)}f(|z|)\,dz+\int_{U_1\setminus B_R(0)}f(|z|)\,dz\triangleq I_1+I_2.
\tag{4.5.8}
\]
Since $f$ is decreasing, $I_2\le f(R)\,|U_1\setminus B_R(0)|$. Since $R$ is chosen such that $|B_R(0)|=|U|=|U_1|$, we have $|B_R(0)\setminus U_1|=|U_1\setminus B_R(0)|$. As a result,
\[
I_2\le f(R)\,|B_R(0)\setminus U_1|\le\int_{B_R(0)\setminus U_1}f(|z|)\,dz,
\tag{4.5.9}
\]
where the last inequality is again due to the decay of $f$. Combining (4.5.8) and (4.5.9), we finish the proof.

Definition 4.5.4. Let $\Omega$ be a bounded, open subset of $\mathbb{R}^n$ with $C^1$ boundary. Let $\Gamma$ be a relatively open subset of $\partial\Omega$. We say $\Gamma$ is given by a graph if (upon relabelling and reorienting the coordinate axes) there exist a bounded, open subset $U\subseteq\mathbb{R}^{n-1}$ and a $C^1$ function $\phi:\mathbb{R}^{n-1}\to\mathbb{R}$ such that
\[
\Gamma=\{(\tilde{y},\phi(\tilde{y})):\tilde{y}\in U\}.
\]
In the following, for any $x\in\mathbb{R}^n$, we will decompose it as $x=(\tilde{x},x_n)$, where $\tilde{x}$ denotes the first $n-1$ components of $x$.

Lemma 4.5.5.
Let $\Omega$ be a bounded, open subset of $\mathbb{R}^n$ ($n\ge 3$) with $C^1$ boundary. Let $\Gamma$ be a relatively open subset of $\partial\Omega$ that is given by a graph as defined in Definition 4.5.4. Then there exists a constant $C=C(n,\|\nabla\phi\|_{L^\infty(U)})$, where $\phi$ and $U$ are the same as in Definition 4.5.4, such that for any $x\in\mathbb{R}^n$,
\[
\int_{\Gamma}\frac{1}{|x-y|^{n-2}}\,dS(y)\le C|\Gamma|^{1/(n-1)},
\]
where $|\Gamma|\triangleq\int_{\Gamma}dS$.

Proof. By Definition 4.5.4, without loss of generality, we can assume there exist a $C^1$ function $\phi:\mathbb{R}^{n-1}\to\mathbb{R}$ and a bounded, open subset $U$ of $\mathbb{R}^{n-1}$ such that
\[
\Gamma=\{(\tilde{y},\phi(\tilde{y})):\tilde{y}\in U\}.
\tag{4.5.10}
\]
Thus,
\[
\int_{\Gamma}\frac{1}{|x-y|^{n-2}}\,dS(y)=\int_{U}\frac{\sqrt{1+|\nabla\phi(\tilde{y})|^2}}{|(\tilde{x},x_n)-(\tilde{y},\phi(\tilde{y}))|^{n-2}}\,d\tilde{y}
\le\int_{U}\frac{\sqrt{1+|\nabla\phi(\tilde{y})|^2}}{|\tilde{x}-\tilde{y}|^{n-2}}\,d\tilde{y}
\le C\int_{U}\frac{1}{|\tilde{x}-\tilde{y}|^{n-2}}\,d\tilde{y}.
\]
Define
\[
f(r)=\frac{1}{r^{n-2}},\quad\forall\,r>0.
\]
Then it follows from Lemma 4.5.3 that
\[
\int_{U}\frac{1}{|\tilde{x}-\tilde{y}|^{n-2}}\,d\tilde{y}=\int_{U}f(|\tilde{x}-\tilde{y}|)\,d\tilde{y}\le\int_{B_R(0)}f(|\tilde{z}|)\,d\tilde{z}=CR=C|U|^{1/(n-1)}.
\]
Again by the parametrization (4.5.10), it is readily seen that $|U|\le|\Gamma|$. Hence,
\[
\int_{\Gamma}\frac{1}{|x-y|^{n-2}}\,dS(y)\le C|U|^{1/(n-1)}\le C|\Gamma|^{1/(n-1)}.
\]

Corollary 4.5.6. Let $\Omega$ be a bounded, open subset of $\mathbb{R}^n$ ($n\ge 3$) with $C^1$ boundary. Then there exists a constant $C=C(n,\Omega)$ such that for any relatively open subset $\Gamma$ of $\partial\Omega$ and for any $x\in\mathbb{R}^n$,
\[
\int_{\Gamma}\frac{1}{|x-y|^{n-2}}\,dS(y)\le C|\Gamma|^{1/(n-1)}.
\]

Proof. Since $\partial\Omega$ is $C^1$, for any point $x_0\in\partial\Omega$, the part of the boundary of $\Omega$ near $x_0$ can be straightened out (thus can be given by a graph as in Definition 4.5.4). As a result, we can split $\partial\Omega$ into finitely many pieces:
\[
\partial\Omega=\bigcup_{i=1}^{K}A_i,
\tag{4.5.11}
\]
where each $A_i$ ($1\le i\le K$) is given by the graph of some $C^1$ function $\phi_i$ on some open set $U_i\subseteq\mathbb{R}^{n-1}$. The number of pieces $K$ and $\|\nabla\phi_i\|_{L^\infty(U_i)}$ only depend on $\Omega$. So for any $1\le i\le K$, $\Gamma\cap A_i$ is also a boundary portion that is given by a graph. Therefore, by Lemma 4.5.5, there exists a constant $C=C(n,\Omega)$ such that for any $1\le i\le K$,
\[
\int_{\Gamma\cap A_i}\frac{1}{|x-y|^{n-2}}\,dS(y)\le C|\Gamma\cap A_i|^{1/(n-1)}.
\]
Hence,
\[
\int_{\Gamma}\frac{1}{|x-y|^{n-2}}\,dS(y)\le\sum_{i=1}^{K}\int_{\Gamma\cap A_i}\frac{1}{|x-y|^{n-2}}\,dS(y)\le C\sum_{i=1}^{K}|\Gamma\cap A_i|^{1/(n-1)}\le CK|\Gamma|^{1/(n-1)}=C|\Gamma|^{1/(n-1)}.
\]
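As a numerical illustration (added here; not part of the thesis argument), the rearrangement bound (4.5.7) can be tested directly in dimension one, with a sample decreasing $f$ and a two-interval set $U$:

```python
def f(r):
    # a decreasing function on (0, infinity), as assumed in Lemma 4.5.3
    return 1.0 / (1.0 + r)

def integrate(intervals, x, steps=20000):
    # midpoint-rule approximation of the integral of f(|x - y|) over a union of intervals
    total = 0.0
    for a, b in intervals:
        h = (b - a) / steps
        total += sum(f(abs(x - (a + (i + 0.5) * h))) for i in range(steps)) * h
    return total

U = [(-2.0, -0.5), (1.0, 3.5)]            # a bounded open U contained in R (two intervals)
measure = sum(b - a for a, b in U)        # |U| = 4
R = measure / 2.0                         # |B_R(0)| = 2R = |U|
ball = integrate([(-R, R)], 0.0)          # integral of f(|z|) over B_R(0)

# (4.5.7): the integral over U never exceeds the centered-ball integral, for any x
for x in [-3.0, -1.0, 0.0, 2.0, 5.0]:
    assert integrate(U, x) <= ball + 1e-6
```

The specific $f$, $U$ and sample points $x$ are arbitrary choices for illustration; any decreasing $f\ge 0$ and bounded open $U$ satisfy the same inequality.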
Lemma 4.5.5 and Corollary 4.5.6 will be applied to show our desired Lemma 4.5.7, which is an improvement of Lemma 4.3.3 when $n\ge 3$.

Lemma 4.5.7. Let $n\ge 3$. Let $\Omega$ and $\Gamma_1$ be the same as in (1.1.1). Then there exists $C=C(n,\Omega)$ such that for any $x\in\mathbb{R}^n$ and $t\ge 0$,
\[
\int_0^t\!\!\int_{\Gamma_1}\Phi(x-y,t-\tau)\,dS(y)\,d\tau\le C|\Gamma_1|^{1/(n-1)}.
\tag{4.5.12}
\]

Proof. In this proof, unless otherwise stated, $C$ represents constants which only depend on $n$ and $\Omega$. First, by the explicit formula (1.4.4) of $\Phi$ and a change of variable in $\tau$, we have
\[
\int_0^t\!\!\int_{\Gamma_1}\Phi(x-y,t-\tau)\,dS(y)\,d\tau=C\int_{\Gamma_1}\int_0^t\tau^{-n/2}e^{-|x-y|^2/(4\tau)}\,d\tau\,dS(y).
\]
Then by the change of variable $s=|x-y|^2/(4\tau)$ for $\tau$,
\[
\int_{\Gamma_1}\int_0^t\tau^{-n/2}e^{-|x-y|^2/(4\tau)}\,d\tau\,dS(y)\le C\int_{\Gamma_1}\frac{1}{|x-y|^{n-2}}\int_{|x-y|^2/(4t)}^{\infty}s^{\frac{n}{2}-2}e^{-s}\,ds\,dS(y).
\tag{4.5.13}
\]
Since $n\ge 3$, $s^{\frac{n}{2}-2}e^{-s}$ is integrable on $(0,\infty)$. As a result,
\[
\int_{\Gamma_1}\frac{1}{|x-y|^{n-2}}\int_{|x-y|^2/(4t)}^{\infty}s^{\frac{n}{2}-2}e^{-s}\,ds\,dS(y)
\le\int_{\Gamma_1}\frac{1}{|x-y|^{n-2}}\int_0^{\infty}s^{\frac{n}{2}-2}e^{-s}\,ds\,dS(y)
=C\int_{\Gamma_1}\frac{1}{|x-y|^{n-2}}\,dS(y).
\]
Now applying Corollary 4.5.6,
\[
\int_{\Gamma_1}\frac{1}{|x-y|^{n-2}}\,dS(y)\le C|\Gamma_1|^{1/(n-1)}.
\]

The following Lemma 4.5.8, Corollary 4.5.9 and Lemma 4.5.10 are parallel results to Lemma 4.5.5, Corollary 4.5.6 and Lemma 4.5.7, but they deal with dimension $n=2$ rather than $n\ge 3$.

Lemma 4.5.8. Let $\Omega$ be a bounded, open subset of $\mathbb{R}^2$ with $C^1$ boundary. Let $\Gamma$ be a relatively open subset of $\partial\Omega$ that is given by a graph as defined in Definition 4.5.4. Then there exists a constant $C=C(\Omega,\|\nabla\phi\|_{L^\infty(U)})$, where $\phi$ and $U$ are the same as in Definition 4.5.4, such that for any $x\in\Omega$,
\[
\int_{\Gamma}\ln\frac{d_\Omega}{|x-y|}\,dS(y)\le C|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big),
\]
where $d_\Omega$ denotes the diameter of $\Omega$, namely $d_\Omega=\sup\{|u-v|:u,v\in\Omega\}$.

Proof. By Definition 4.5.4, without loss of generality, we can assume there exist a $C^1$ function $\phi:\mathbb{R}\to\mathbb{R}$ and a bounded, open set $U\subseteq\mathbb{R}$ such that
\[
\Gamma=\{(\tilde{y},\phi(\tilde{y})):\tilde{y}\in U\}.
\tag{4.5.14}
\]
In addition, we define
\[
f(r)=\begin{cases}\ln\frac{d_\Omega}{r}, & 0<r\le d_\Omega,\\ 0, & r>d_\Omega.\end{cases}
\]
Since $x=(\tilde{x},x_n)\in\Omega$, then for any $(\tilde{y},\phi(\tilde{y}))\in\Gamma$,
\[
|\tilde{x}-\tilde{y}|\le|(\tilde{x},x_n)-(\tilde{y},\phi(\tilde{y}))|\le d_\Omega.
\tag{4.5.15}
\]
As a result,
\[
\int_{\Gamma}\ln\frac{d_\Omega}{|x-y|}\,dS(y)=\int_{U}\ln\frac{d_\Omega}{|(\tilde{x},x_n)-(\tilde{y},\phi(\tilde{y}))|}\,\sqrt{1+|\nabla\phi(\tilde{y})|^2}\,d\tilde{y}
\le C\int_{U}\ln\frac{d_\Omega}{|\tilde{x}-\tilde{y}|}\,d\tilde{y}=C\int_{U}f(|\tilde{x}-\tilde{y}|)\,d\tilde{y}.
\tag{4.5.16}
\]
Now it follows from Lemma 4.5.3 that
\[
\int_{U}f(|\tilde{x}-\tilde{y}|)\,d\tilde{y}\le\int_{B_R(0)}f(|\tilde{z}|)\,d\tilde{z}=2\int_0^R f(r)\,dr,
\tag{4.5.17}
\]
where $|B_R(0)|=|U|$, namely $2R=|U|$. For any $\tilde{y}_1,\tilde{y}_2\in U$,
\[
|\tilde{y}_1-\tilde{y}_2|\le|(\tilde{y}_1,\phi(\tilde{y}_1))-(\tilde{y}_2,\phi(\tilde{y}_2))|\le d_\Omega,
\]
which implies $\mathrm{diam}(U)\le d_\Omega$. Moreover, since $U\subseteq\mathbb{R}$, then $|U|\le\mathrm{diam}(U)\le d_\Omega$. Thus, $R=|U|/2\le d_\Omega/2$. So it follows from the definition of $f$ that
\[
\int_0^R f(r)\,dr=\int_0^R\ln\frac{d_\Omega}{r}\,dr=R\Big(\ln\frac{d_\Omega}{R}+1\Big).
\tag{4.5.18}
\]
Again by the parametrization (4.5.14), it is readily seen that $|U|\le|\Gamma|$. Therefore,
\[
R\le\min\Big\{\frac{|\Gamma|}{2},\frac{d_\Omega}{2}\Big\}.
\]
Define
\[
g(r)=r\Big(\ln\frac{d_\Omega}{r}+1\Big),\quad\forall\,r>0.
\]
Then $g$ is increasing when $r\in(0,d_\Omega]$. In addition, (4.5.18) implies that
\[
\int_0^R f(r)\,dr=g(R).
\]
Next, we will estimate $g(R)$ in the following two situations.

• $|\Gamma|\le d_\Omega$. In this case,
\[
g(R)\le g(|\Gamma|)=|\Gamma|\Big(\ln\frac{d_\Omega}{|\Gamma|}+1\Big)=|\Gamma|\Big(\ln\frac{1}{|\Gamma|}+\ln(d_\Omega)+1\Big)\le C|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big)
\tag{4.5.19}
\]
for some constant $C$ only depending on $\Omega$.

• $|\Gamma|>d_\Omega$. In this case, $g(R)\le g(d_\Omega)=d_\Omega$. Define
\[
h(r)=r\ln\Big(\frac{1}{r}+1\Big),\quad\forall\,r>0.
\tag{4.5.20}
\]
Then
\[
h''(r)=-\frac{1}{r(1+r)^2}<0,\quad\forall\,r>0.
\]
This implies $h'(r)>0$ for any $r>0$, since $\lim_{r\to\infty}h'(r)=0$. Hence, $h$ is an increasing function and
\[
|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big)=h(|\Gamma|)\ge h(d_\Omega)=d_\Omega\ln\Big(\frac{1}{d_\Omega}+1\Big).
\]
Thus, there exists a constant $C$ only depending on $\Omega$ such that
\[
g(R)\le C|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big).
\tag{4.5.21}
\]
Combining (4.5.16) through (4.5.21), the conclusion follows.

Corollary 4.5.9. Let $\Omega$ be a bounded, open subset of $\mathbb{R}^2$ with $C^1$ boundary. Then there exists a constant $C=C(\Omega)$ such that for any relatively open subset $\Gamma$ of $\partial\Omega$ and for any $x\in\Omega$,
\[
\int_{\Gamma}\ln\frac{d_\Omega}{|x-y|}\,dS(y)\le C|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big),
\]
where $d_\Omega$ denotes the diameter of $\Omega$, namely $d_\Omega=\sup\{|u-v|:u,v\in\Omega\}$.

Proof. Similar to the proof of Corollary 4.5.6, we first decompose $\partial\Omega$ as in (4.5.11). Then
\[
\int_{\Gamma}\ln\frac{d_\Omega}{|x-y|}\,dS(y)\le\sum_{i=1}^{K}\int_{\Gamma\cap A_i}\ln\frac{d_\Omega}{|x-y|}\,dS(y).
\]
Now since each $\Gamma\cap A_i$ is given by a graph, we can apply Lemma 4.5.8 to conclude that there exists a constant $C=C(\Omega)$ such that for each $1\le i\le K$,
\[
\int_{\Gamma\cap A_i}\ln\frac{d_\Omega}{|x-y|}\,dS(y)\le C|\Gamma\cap A_i|\ln\Big(\frac{1}{|\Gamma\cap A_i|}+1\Big).
\]
Recalling that the function $h$ defined in (4.5.20) is an increasing function,
\[
|\Gamma\cap A_i|\ln\Big(\frac{1}{|\Gamma\cap A_i|}+1\Big)\le|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big).
\]
As a result,
\[
\int_{\Gamma}\ln\frac{d_\Omega}{|x-y|}\,dS(y)\le C|\Gamma|\ln\Big(\frac{1}{|\Gamma|}+1\Big).
\]

Next, Lemma 4.5.8 and Corollary 4.5.9 will be applied to show our desired Lemma 4.5.10, which is an improvement of Lemma 4.3.3 when $n=2$.

Lemma 4.5.10. Let $n=2$. Let $\Omega$ and $\Gamma_1$ be the same as in (1.1.1). Then there exists $C=C(\Omega)$ such that for any $x\in\Omega$ and $0<t\le 1$,
\[
\int_0^t\!\!\int_{\Gamma_1}\Phi(x-y,t-\tau)\,dS(y)\,d\tau\le C|\Gamma_1|\ln\Big(\frac{1}{|\Gamma_1|}+1\Big).
\tag{4.5.22}
\]

Proof. We proceed similarly as in the proof of Lemma 4.5.7 until (4.5.13). Next, the situation is different, since $s^{n/2-2}e^{-s}$ is not integrable near $s=0$ when $n=2$. For convenience, we rewrite (4.5.13) when $n=2$ as follows:
\[
\int_{\Gamma_1}\int_0^t\tau^{-1}e^{-|x-y|^2/(4\tau)}\,d\tau\,dS(y)\le C\int_{\Gamma_1}\int_{|x-y|^2/(4t)}^{\infty}s^{-1}e^{-s}\,ds\,dS(y).
\tag{4.5.23}
\]
Since $t\le 1$ and $x\in\Omega$, $|x-y|^2/(4t)\ge|x-y|^2/4$. Thus,
\[
\int_{|x-y|^2/(4t)}^{\infty}s^{-1}e^{-s}\,ds\le\int_{|x-y|^2/4}^{\infty}s^{-1}e^{-s}\,ds
=\int_{|x-y|^2/4}^{d_\Omega^2}s^{-1}e^{-s}\,ds+\int_{d_\Omega^2}^{\infty}s^{-1}e^{-s}\,ds
\le\int_{|x-y|^2/4}^{d_\Omega^2}s^{-1}\,ds+\frac{1}{d_\Omega^2}\int_{d_\Omega^2}^{\infty}e^{-s}\,ds
\le 2\ln\frac{d_\Omega}{|x-y|}+C.
\]
As a result,
\[
\int_{\Gamma_1}\int_{|x-y|^2/(4t)}^{\infty}s^{-1}e^{-s}\,ds\,dS(y)\le C|\Gamma_1|+2\int_{\Gamma_1}\ln\frac{d_\Omega}{|x-y|}\,dS(y).
\]
Now applying Corollary 4.5.9,
\[
\int_{\Gamma_1}\ln\frac{d_\Omega}{|x-y|}\,dS(y)\le C|\Gamma_1|\ln\Big(\frac{1}{|\Gamma_1|}+1\Big).
\]
Finally, noticing that
\[
|\Gamma_1|\le\frac{1}{\ln\big(\frac{1}{|\partial\Omega|}+1\big)}\,|\Gamma_1|\ln\Big(\frac{1}{|\Gamma_1|}+1\Big)=C|\Gamma_1|\ln\Big(\frac{1}{|\Gamma_1|}+1\Big),
\tag{4.5.24}
\]
the lemma is proved.

4.5.2 Proof of Theorem 4.1.3

The idea of the proof in this subsection is similar to that in Subsection 4.4.1. The main difference is that in the proof of Theorem 4.1.4, we treat $t_*$ as a variable in $(0,1]$ and the choice of $\{M_k\}$ depends on $t_*$; however, in the proof below, the lower bound $t_*$ will be a fixed number in $(0,1]$ and the choice of $\{M_k\}$ does not depend on $t_*$.
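Before carrying out the proof, here is a quick numerical check (an addition for illustration, not part of the thesis) of the monotonicity facts used in Lemma 4.5.8 and Corollary 4.5.9: that $h(r)=r\ln(1/r+1)$ satisfies $h''(r)=-1/(r(1+r)^2)<0$ and is nevertheless increasing on $(0,\infty)$:

```python
import math

def h(r):
    # h(r) = r * ln(1/r + 1), as in (4.5.20)
    return r * math.log(1 / r + 1)

def hpp(r):
    # claimed second derivative: h''(r) = -1 / (r (1+r)^2)
    return -1 / (r * (1 + r) ** 2)

# central finite differences reproduce the claimed (negative) second derivative
for r in [0.1, 0.5, 1.0, 3.0, 10.0]:
    eps = 1e-4
    num = (h(r + eps) - 2 * h(r) + h(r - eps)) / eps ** 2
    assert abs(num - hpp(r)) < 1e-3
    assert hpp(r) < 0

# h is increasing on (0, 20] (checked on a grid)
rs = [0.01 * i for i in range(1, 2001)]
assert all(h(b) > h(a) for a, b in zip(rs, rs[1:]))
```

The grid and sample points are arbitrary; $h$ increases to the limit $1$ as $r\to\infty$.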
On the technical side:

• First, Corollary 4.5.2 will be used in place of the identity (4.4.8) to overcome the lack of global convexity.

• Secondly, Lemma 4.5.7 and Lemma 4.5.10 will be exploited in place of Lemma 4.3.3 to obtain a higher order of $|\Gamma_1|^{-1}$ for the lower bound of $T^*$ as $|\Gamma_1|\to 0$. The price for obtaining this higher order is that the results in this section only work for small $|\Gamma_1|$.

Proof of Theorem 4.1.3. We will only give the proof for the case $n\ge 3$, since the proof for $n=2$ follows the same argument, except that Lemma 4.5.10 is applied instead of Lemma 4.5.7 to estimate the last term $III_k$ in (4.5.27). First, without loss of generality, we can assume $d\le 1$ for convenience. In the proof below, $C_i$ ($1\le i\le 3$) and $C_j^*$ ($1\le j\le 2$) denote constants which only depend on $n$, $\Omega$ and $d$.

Let $M(t)$ be defined as in (1.4.3). The first step of the proof is to find a constant $t_*\in(0,1]$ and a finite strictly increasing sequence $\{M_k\}_{0\le k\le L}$ such that, if $T_k$ denotes the first time that $M(t)=M_k$, then $T_k-T_{k-1}\ge t_*$ for $1\le k\le L$. The second step is to derive a lower bound for $Lt_*$.

Step 1. Let $t_*\in(0,1]$, which will be determined later in this step. Define $M_0$ as in (1.4.2) and $T_0=0$. Then for $k\ge 1$, supposing $M_{k-1}$ has been constructed, we try to define $M_k$ such that $T_k-T_{k-1}\ge t_*$. Denote $t_k=T_k-T_{k-1}$. We will first check what happens if $t_k\le 1$. By the maximum principle and the Hopf lemma, there exists $x_k\in\Gamma_1$ such that
\[
u(x_k,T_k)=M_k.
\tag{4.5.25}
\]
Applying the representation formula (4.2.4) with $T=T_{k-1}$ and $(x,t)=(x_k,t_k)$,
\[
u(x_k,T_k)=2\int_{\Omega}\Phi(x_k-y,t_k)\,u(y,T_{k-1})\,dy
-2\int_0^{t_k}\!\!\int_{\partial\Omega}\frac{\partial\Phi(x_k-y,t_k-\tau)}{\partial n(y)}\,u(y,T_{k-1}+\tau)\,dS(y)\,d\tau
+2\int_0^{t_k}\!\!\int_{\Gamma_1}\Phi(x_k-y,t_k-\tau)\,u^{q}(y,T_{k-1}+\tau)\,dS(y)\,d\tau.
\tag{4.5.26}
\]
Noticing (4.5.25), the above equality implies that
\[
M_k\le 2M_{k-1}\int_{\Omega}\Phi(x_k-y,t_k)\,dy
+2M_k\int_0^{t_k}\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x_k-y,t_k-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau
+2M_k^{q}\int_0^{t_k}\!\!\int_{\Gamma_1}\Phi(x_k-y,t_k-\tau)\,dS(y)\,d\tau
\]
\[
=2(M_{k-1}-M_k)\int_{\Omega}\Phi(x_k-y,t_k)\,dy
+2M_k\Big(\int_{\Omega}\Phi(x_k-y,t_k)\,dy+\int_0^{t_k}\!\!\int_{\partial\Omega}\Big|\frac{\partial\Phi(x_k-y,t_k-\tau)}{\partial n(y)}\Big|\,dS(y)\,d\tau\Big)
+2M_k^{q}\int_0^{t_k}\!\!\int_{\Gamma_1}\Phi(x_k-y,t_k-\tau)\,dS(y)\,d\tau
\triangleq I_k+II_k+III_k.
\tag{4.5.27}
\]
First, since $t_k\le 1$, Lemma 4.4.3 yields that
\[
I_k\le 2(M_{k-1}-M_k)\,b_1,
\tag{4.5.28}
\]
where $b_1$ is the same as in (4.4.10). Secondly, it follows from Corollary 4.5.2 that there exists a constant $C_1$ such that
\[
II_k\le M_k+C_1 M_k\exp\Big(-\frac{d^2}{8t_k}\Big).
\tag{4.5.29}
\]
Thirdly, Lemma 4.5.7 implies the existence of a constant $C_2$ such that
\[
III_k\le C_2 M_k^{q}|\Gamma_1|^{1/(n-1)}.
\tag{4.5.30}
\]
Combining (4.5.27), (4.5.28), (4.5.29) and (4.5.30),
\[
M_k\le 2b_1(M_{k-1}-M_k)+M_k+C_1 M_k\exp\Big(-\frac{d^2}{8t_k}\Big)+C_2 M_k^{q}|\Gamma_1|^{1/(n-1)}.
\]
Subtracting $M_k$ from both sides,
\[
2b_1(M_k-M_{k-1})\le C_1 M_k\exp\Big(-\frac{d^2}{8t_k}\Big)+C_2 M_k^{q}|\Gamma_1|^{1/(n-1)}.
\]
Dividing by $2b_1$ and rearranging the inequality,
\[
M_k-M_{k-1}-\frac{C_1}{2b_1}M_k\exp\Big(-\frac{d^2}{8t_k}\Big)\le\frac{C_2}{2b_1}M_k^{q}|\Gamma_1|^{1/(n-1)}.
\]
Define
\[
C_1^*=\max\Big\{\frac{C_1}{2b_1},1\Big\},\qquad C_2^*=\frac{C_2}{2b_1},
\tag{4.5.31}
\]
then
\[
M_k-M_{k-1}-C_1^* M_k\exp\Big(-\frac{d^2}{8t_k}\Big)\le C_2^* M_k^{q}|\Gamma_1|^{1/(n-1)}.
\tag{4.5.32}
\]
Let us further temporarily assume $t_k$ is so small that
\[
\exp\Big(-\frac{d^2}{8t_k}\Big)\le\frac{1}{2C_1^*}\cdot\frac{M_k-M_{k-1}}{M_k},
\tag{4.5.33}
\]
which is equivalent to
\[
M_k-M_{k-1}-C_1^* M_k\exp\Big(-\frac{d^2}{8t_k}\Big)\ge\frac12(M_k-M_{k-1}).
\]
Then it follows from (4.5.32) that
\[
\frac{M_k-M_{k-1}}{M_k^{q}}\le 2C_2^*|\Gamma_1|^{1/(n-1)}.
\tag{4.5.34}
\]
As a summary, this paragraph claims that if $t_k\le 1$ and (4.5.33) holds, then $M_k$ will satisfy (4.5.34).

Based on this observation, denote
\[
\delta_1=4C_2^*|\Gamma_1|^{1/(n-1)}.
\tag{4.5.35}
\]
Then we define $M_k$ to be the solution (if it exists) to
\[
\frac{M_k-M_{k-1}}{M_k^{q}}=\delta_1.
\tag{4.5.36}
\]
With this choice of $M_k$, it is evident from (4.5.34) that either $t_k>1$ or $t_k$ violates (4.5.33). Due to (4.5.36), $t_k$ violating (4.5.33) implies
\[
\exp\Big(-\frac{d^2}{8t_k}\Big)>\frac{1}{2C_1^*}\cdot\frac{M_k-M_{k-1}}{M_k}=\frac{M_k^{q-1}\delta_1}{2C_1^*}\ge\frac{M_0^{q-1}\delta_1}{2C_1^*}.
\tag{4.5.37}
\]
Now if (the purpose of the following requirement will be clear later)
\[
M_0^{q-1}\delta_1\le\frac{1}{6q},
\tag{4.5.38}
\]
then the right hand side of (4.5.37) is smaller than $1$. Therefore, (4.5.37) is equivalent to
\[
t_k>\frac{d^2}{8}\Big(\ln\frac{2C_1^*}{M_0^{q-1}\delta_1}\Big)^{-1}.
\]
Define
\[
t_*=\frac{d^2}{8}\Big(\ln\frac{2C_1^*}{M_0^{q-1}\delta_1}\Big)^{-1}.
\tag{4.5.39}
\]
Since $d\le 1$ and $C_1^*\ge 1$, it is obvious that $t_*\in(0,1]$. Moreover, we can conclude that $t_k\ge\min\{1,t_*\}=t_*$.

On the other hand, by applying Lemma 4.4.4, (4.5.36) has a solution $M_k>M_{k-1}$ if and only if $\delta_1\le M_{k-1}^{1-q}E_q$. In addition, as long as such a solution exists, $M_k$ can be chosen to satisfy
\[
M_{k-1}<M_k\le\frac{q}{q-1}M_{k-1}.
\]
Thus, the strategy of constructing $\{M_k\}$ is summarized as follows. First, $M_0$ is defined to be the same as in (1.4.2). Next, for $k\ge 1$, suppose $M_{k-1}$ has been constructed; then, based on Lemma 4.4.4, whether $M_k$ is defined depends on how large $M_{k-1}$ is.

If $M_{k-1}^{q-1}\delta_1\le E_q$, then we define $M_k\in\big(M_{k-1},\frac{q}{q-1}M_{k-1}\big]$ to be the solution to (4.5.36).

If $M_{k-1}^{q-1}\delta_1>E_q$, then there does not exist $M_k>M_{k-1}$ which solves (4.5.36). So we do not define $M_k$ and stop the construction.

Based on this construction, if $\{M_k\}_{1\le k\le L_0}$ have been defined, then for any $1\le k\le L_0$, $T_k-T_{k-1}\ge t_*$. Therefore, $T_k\ge kt_*$ for any $1\le k\le L_0$. Applying Theorem 3.1.1, $L_0\le T^*/t_*<\infty$, which means the cardinality of $\{M_k\}$ has to be finite. So we can assume the constructed sequence is $\{M_k\}_{0\le k\le L}$ for some finite $L$.

Step 2. By Lemma 4.4.5,
\[
L>\frac{1}{10(q-1)}\Big(\frac{1}{M_0^{q-1}\delta_1}-3q\Big).
\]
Taking advantage of the requirement (4.5.38),
\[
T^*\ge Lt_*>\frac{t_*}{20(q-1)M_0^{q-1}\delta_1}.
\]
Writing
\[
Y=M_0^{q-1}|\Gamma_1|^{1/(n-1)},
\]
then (4.5.38) reduces to $Y\le\frac{1}{24C_2^* q}$. In addition, recalling the definition (4.5.39) of $t_*$, then
\[
T^*\ge\frac{C_3}{(q-1)Y}\Big(\ln\frac{C_1^*}{2C_2^* Y}\Big)^{-1}
\]
for some constant $C_3$. Define
\[
Y_0=\min\Big\{\frac{1}{24C_2^* q},\,\frac{2C_2^*}{C_1^*},\,1\Big\}.
\]
Then for any $Y\le Y_0$,
\[
\ln\frac{C_1^*}{2C_2^* Y}\le 2\ln\frac{1}{Y}=2|\ln Y|.
\]
Hence,
\[
T^*\ge\frac{C_3}{2(q-1)Y|\ln Y|}.
\]

Remark 4.5.11. If the whole domain $\Omega$ is convex, then due to Lemma 4.4.2, (4.5.29) becomes $II_k\le M_k$.
Based on this change, all the exponential terms in the proof of Theorem 4.1.3 disappear. As a result, $t_*$ can simply be chosen to be $1$ instead of the expression (4.5.39), which contains a logarithmic term in the denominator. Consequently, the logarithmic term in the denominators of (4.1.3) and (4.1.4) will also disappear. Namely, the lower bound in (4.1.3) and (4.1.4) can be improved to
\[
T^*\ge\frac{C}{(q-1)Y}.
\]

4.6 Comparison with previous works

As mentioned in the introduction, there is a vast literature on blow-up problems for parabolic equations, but few of these works deal with the lower bound estimate of the blow-up time. A popular method for the lower bound is established in [32–34]. After that, a similar idea has also been applied to some more general problems; see [6, 30, 40, 41, 43]. In this section, we will compare Theorem 4.1.4 with the result in [34].

In [34], the following problem is studied:
\[
\begin{cases}
u_t(x,t)=\Delta u(x,t) & \text{in }\Omega\times(0,T],\\
\frac{\partial u(x,t)}{\partial n(x)}=F\big(u(x,t)\big) & \text{on }\partial\Omega\times(0,T],\\
u(x,0)=u_0(x) & \text{in }\Omega,
\end{cases}
\tag{4.6.1}
\]
where $\Omega$ is a convex, bounded open subset of $\mathbb{R}^3$ with smooth boundary and
\[
0\le F(s)\le ks^{m}
\tag{4.6.2}
\]
for some $k>0$ and $m\ge 3/2$. By introducing the energy function
\[
\varphi(t)=\int_{\Omega}u^{4(m-1)}(x,t)\,dx
\]
and adopting a Sobolev-type inequality developed in [32], they derive a first order differential inequality for $\varphi(t)$ and then obtain a lower bound for $T^*$:
\[
T^*\ge\int_{\varphi(0)}^{\infty}\frac{d\eta}{K_1\eta+K_2\eta^{3/2}+K_3\eta^{3}},
\tag{4.6.3}
\]
where $K_1$, $K_2$ and $K_3$ are some positive constants depending on $\Omega$ and $m$.

Let us compare Theorem 4.1.4 with (4.6.3) for the problem (4.6.1) where $F(s)=s^{q}$. For convenience, the lower bounds in Theorem 4.1.4 and (4.6.3) are denoted by $T_1$ and $T_2$ respectively.

• First, $T_1$ works for any $q>1$ and also gives the exact asymptotic rate of $T^*$ as $q\to 1$. However, $T_2$ is valid only for $q\ge 3/2$, due to the restriction (4.6.2) and $m\ge 3/2$ (let $k=1$ and $m=q$).
• Secondly, as $M_0\to 0$, $T_1$ is of order $M_0^{-(q-1)}$; however, if the initial data $u_0$ does not oscillate too much, that is,
\[
\varphi(0)=\int_{\Omega}u_0^{4(q-1)}(x)\,dx\sim M_0^{4(q-1)}|\Omega|,
\]
then
\[
T_2\sim\ln\frac{1}{\varphi(0)}\sim 4(q-1)\ln(M_0^{-1}),
\]
which is only of logarithmic order in $M_0^{-1}$.

• Thirdly, as $M_0\to\infty$, $T_1$ is of order $M_0^{-2(q-1)}$; however, if the initial data $u_0$ again does not oscillate too much, then $T_2$ behaves like
\[
T_2\sim[\varphi(0)]^{-2}\sim M_0^{-8(q-1)}.
\]
Since $M_0$ is large, $M_0^{-2(q-1)}\gg M_0^{-8(q-1)}$.

Chapter 5

Prevention of Blowup

5.1 Main theorems and outline of the approach

In this chapter, we provide two methods to prevent the finite-time blowup. The first strategy is to repair the damaged part. The second is to add a suitable pump near the damaged part. In this chapter, for any $\alpha\in\big(0,\frac{1}{n-1}\big)$, $n_\alpha$ is defined as in (4.3.17).

Part I: Repairing the broken part

The first strategy is to repair the broken part $\Gamma_1$ in the original problem (1.1.1). We are trying to find the repairing rate at which the temperature can be prevented from blowing up in finite time. The setup below will be studied:
\[
\begin{cases}
u_t(x,t)=\Delta u(x,t) & \text{in }\Omega\times(0,T],\\
\frac{\partial u(x,t)}{\partial n(x)}=u^{q}(x,t) & \text{on }\Gamma_{1,t}\times(0,T],\\
\frac{\partial u(x,t)}{\partial n(x)}=0 & \text{on }\Gamma_{2,t}\times(0,T],\\
u(x,0)=u_0(x) & \text{in }\Omega,
\end{cases}
\tag{5.1.1}
\]
where $\Gamma_{1,t}$ represents the broken part at time $t$ and $\Gamma_{1,0}=\Gamma_1$. However, since the broken boundary in problem (5.1.1) is changing, the existence of the solution is harder to justify. So in this part, we will assume the existence of the weak solution to (5.1.1) as in Definition 5.1.1 and then show that the weak solution does not blow up in finite time.

Definition 5.1.1. Given $T>0$, a continuous function $u$ on $\Omega\times[0,T]$ is called a weak solution to (5.1.1) if for any $T_0\in[0,T)$, $t\in(0,T-T_0]$ and for any $\phi\in C^2(\Omega\times[0,t])$,
\[
\int_0^t\!\!\int_{\Omega}u(y,T_0+\tau)\,(\phi_\tau+\Delta\phi)(y,\tau)\,dy\,d\tau
=\int_{\Omega}\big[u(y,T_0+t)\phi(y,t)-u(y,T_0)\phi(y,0)\big]\,dy
-\int_0^t\!\!\int_{\Gamma_{1,\tau}}u^{q}(y,T_0+\tau)\,\phi(y,\tau)\,dS(y)\,d\tau
+\int_0^t\!\!\int_{\partial\Omega}u(y,T_0+\tau)\,\frac{\partial\phi(y,\tau)}{\partial n(y)}\,dS(y)\,d\tau.
\tag{5.1.2}
\]
It is readily seen that if $u$ is a smooth solution, then it is a weak solution. In the rest of this section, we will only deal with the weak solution.

The first result below works for any $C^2$ domain $\Omega$, and it says that as long as the area $|\Gamma_{1,t}|$ decreases at some exponential rate, the temperature will not blow up in finite time.

Theorem 5.1.2. Let $M_0$ be defined as in (1.4.2). Then there exists a constant $C=C(n,\Omega,q,M_0)$ such that if
\[
|\Gamma_{1,t}|\le e^{-Ct}|\Gamma_1|,
\]
then for any weak solution $u$ to (5.1.1) on $\Omega\times[0,T]$,
\[
u(x,t)\le 3M_0e^{Ct},\quad\forall\,(x,t)\in\Omega\times[0,T].
\]

The second result deals with convex domains, and it says that as long as $|\Gamma_{1,t}|$ decreases at a suitable exponential rate, the temperature can be bounded by any value that is larger than the initial maximum.

Theorem 5.1.3. Assume $\Omega$ is convex. Let $M_0$ be defined as in (1.4.2). Then for any $B>M_0$, there exists a constant $C=C(n,\Omega,q,M_0,B)$ such that if
\[
|\Gamma_{1,t}|\le e^{-Ct}|\Gamma_1|,
\]
then for any weak solution $u$ to (5.1.1) on $\Omega\times[0,T]$,
\[
u(x,t)\le B,\quad\forall\,x\in\Omega,\ 0\le t\le T.
\]

Again, the global convexity may not be practical in real applications, so we want to obtain a similar result by only assuming local convexity near $\Gamma_1$. But this time, we can only prove the boundedness of the temperature under a double-exponential decay rate of $|\Gamma_{1,t}|$.

Theorem 5.1.4. Assume $\mathrm{Conv}\big([\Gamma_1]_d\big)\subseteq\Omega$ for some $d>0$. Let $M_0$ be defined as in (1.4.2). Then for any $B>M_0$, there exists a constant $C=C(n,\Omega,d,q,M_0,B)$ such that if
\[
|\Gamma_{1,t}|\le\exp\big[2(n-1)\big(1-e^{Ct}\big)\big]\,|\Gamma_1|,
\]
then for any weak solution $u$ to (5.1.1) on $\Omega\times[0,T]$,
\[
u(x,t)\le B,\quad\forall\,x\in\Omega,\ 0\le t\le T.
\]

Part II: Adding a pump

In this part, we consider adding a negative source locally to prevent the finite-time blowup.
Now the problem becomes
\[
\begin{cases}
u_t(x,t)=\Delta u(x,t)-\psi_1(x)\,u^{p}(x,t) & \text{in }\Omega\times(0,T],\\
\frac{\partial u(x,t)}{\partial n(x)}=u^{q}(x,t) & \text{on }\Gamma_1\times(0,T],\\
\frac{\partial u(x,t)}{\partial n(x)}=0 & \text{on }\Gamma_2\times(0,T],\\
u(x,0)=u_0(x) & \text{in }\Omega,
\end{cases}
\tag{5.1.3}
\]
where (see Figure 5.1) $\Omega_1\subseteq\Omega_2\subseteq\Omega$, $\Gamma_1\subseteq\Gamma\subseteq\partial\Omega$, $p>1$, $q>1$ and
\[
\psi_1(x)\begin{cases}=1, & x\in\Omega_1,\\ =0, & x\in\Omega\setminus\Omega_2,\\ \in(0,1), & x\in\Omega_2\setminus\Omega_1\end{cases}
\]
is a smooth function.

Figure 5.1: Model with a pump

For problem (5.1.3), by a similar outline as that in Section 2.4, we can show it has a local nonnegative solution and that the solution can be extended as long as the $L^\infty$ norm of $u$ is finite. We want to demonstrate that by choosing suitable $p$, the solution will not blow up, which implies the solution is global. Now we consider the following problem:
\[
\begin{cases}
v_t(x,t)=\Delta v(x,t)-\psi_1(x)\,v^{p}(x,t) & \text{in }\Omega\times(0,T],\\
\frac{\partial v(x,t)}{\partial n(x)}=\eta(x)\,v^{q}(x,t) & \text{on }\partial\Omega\times(0,T],\\
v(x,0)=u_0(x) & \text{in }\Omega,
\end{cases}
\tag{5.1.4}
\]
where
\[
\eta(x)\begin{cases}=1, & x\in\Gamma_1,\\ =0, & x\in\partial\Omega\setminus\Gamma,\\ \in(0,1), & x\in\Gamma\setminus\Gamma_1.\end{cases}
\]
By the comparison principle, $v\ge u$. So if we can prove that $v$ is always finite, then $u$ will not blow up. The following conclusion is valid for any $C^2$ domain $\Omega$.

Theorem 5.1.5. If $p>1$, $q>1$ and $p>2q-1$, then the solution $v$ to (5.1.4) does not blow up in finite time. As a result, the solution $u$ to (5.1.3) exists globally.

The organization of this chapter is as follows: in Section 5.2, we discuss how to prevent the finite-time blowup by repairing the broken part; in Section 5.3, we provide another way to control the solution by adding a suitable pump.

5.2 Repairing the broken part

In this section, we study the problem (5.1.1). The results are divided into three subsections, according to different geometric properties of $\Omega$. Subsection 5.2.1 deals with any $C^2$ domain, but the conclusion only prevents blowup in finite time without providing any specific bound.
Next, both Subsection 5.2.2 and Subsection 5.2.3 try to control the temperature under any value that is larger than the initial maximum. Subsection 5.2.2 assumes the global convexity of $\Omega$, while Subsection 5.2.3 only requires the local convexity near $\Gamma_1$.

5.2.1 Prevention of finite-time blowup

The idea in this subsection is similar to that in Subsection 4.3.1. The difference is that we will use the decay of $|\Gamma_{1,t}|$ to eliminate the nonlinear effect on the boundary. First, we need the analogous representation formulas for the weak solution. The first lemma is the representation formula of the solution for the interior points; its corollary is the representation formula for the boundary points.

Lemma 5.2.1. Assume $u$ is a weak solution to (5.1.1) on $\Omega\times[0,T]$ for some $T>0$. Then for any $x\in\Omega$, $T_0\in[0,T)$ and $t\in(0,T-T_0]$,
\[
u(x,T_0+t)=\int_{\Omega}\Phi(x-y,t)\,u(y,T_0)\,dy
+\int_0^t\!\!\int_{\Gamma_{1,\tau}}\Phi(x-y,t-\tau)\,u^{q}(y,T_0+\tau)\,dS(y)\,d\tau
-\int_0^t\!\!\int_{\partial\Omega}\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\,u(y,T_0+\tau)\,dS(y)\,d\tau.
\tag{5.2.1}
\]

Proof. This proof is similar to that of Lemma 4.2.3. Fix any $x\in\Omega$, $T_0\in[0,T)$ and $t\in(0,T-T_0]$. For $\epsilon>0$, define
\[
\phi_\epsilon(y,\tau)=\Phi(x-y,t-\tau+\epsilon).
\]
Plugging $\phi_\epsilon$ into (5.1.2) and sending $\epsilon\to 0$, (5.2.1) will be justified.

Corollary 5.2.2. Assume $u$ is a weak solution to (5.1.1) on $\Omega\times[0,T]$ for some $T>0$. Then for any $x\in\partial\Omega$, $T_0\in[0,T)$ and $t\in(0,T-T_0]$,
\[
u(x,T_0+t)=2\int_{\Omega}\Phi(x-y,t)\,u(y,T_0)\,dy
+2\int_0^t\!\!\int_{\Gamma_{1,\tau}}\Phi(x-y,t-\tau)\,u^{q}(y,T_0+\tau)\,dS(y)\,d\tau
-2\int_0^t\!\!\int_{\partial\Omega}\frac{\partial\Phi(x-y,t-\tau)}{\partial n(y)}\,u(y,T_0+\tau)\,dS(y)\,d\tau.
\tag{5.2.2}
\]

Proof. Similar to the proof of Corollary 4.2.4, by Lemma 5.2.1 and the jump relation of the double-layer heat potential, (5.2.2) can be proved.

In the proof below, as well as in the proofs of Theorem 5.1.3 and Theorem 5.1.4, Corollary 5.2.2 plays an essential role.

Proof of Theorem 5.1.2.
Applying the representation formula (5.2.2) to (5.1.1) with T_0 = 0, we get, for any x ∈ ∂Ω and t ∈ (0,T),

    u(x,t) = 2 ∫_Ω Φ(x−y,t) u_0(y) dy − 2 ∫_0^t ∫_{∂Ω} [∂Φ(x−y,t−τ)/∂n(y)] u(y,τ) dS(y) dτ
             + 2 ∫_0^t ∫_{Γ_{1,τ}} Φ(x−y,t−τ) u^q(y,τ) dS(y) dτ.

Define M as in (4.3.3). Then for any m > 1,

    M(t) ≤ 2M_0 + 2 ∫_0^t M(τ) ∫_{∂Ω} |∂Φ(x−y,t−τ)/∂n(y)| dS(y) dτ + 2 ∫_0^t M^q(τ) ∫_{Γ_{1,τ}} Φ(x−y,t−τ) dS(y) dτ
         ≤ 2M_0 + C ∫_0^t (t−τ)^{−1/2} M(τ) dτ + C ∫_0^t M^q(τ) |Γ_{1,τ}|^{(m−1)/m} (t−τ)^{−n/2 + (n−1)/(2m)} dτ.

For the last term to be integrable, the power −n/2 + (n−1)/(2m) should be greater than −1, which means m < (n−1)/(n−2). Define A : [0,T] → R by

    A(t) = |Γ_{1,t}|.      (5.2.3)

In addition, let α = (m−1)/m and β = [1 + (n−1)α]/2, so that the last exponent above equals −β. Then α ∈ (0, 1/(n−1)), β ∈ (1/2, 1) and

    M(t) ≤ 2M_0 + C ∫_0^t (t−τ)^{−1/2} M(τ) dτ + C ∫_0^t (t−τ)^{−β} A^α(τ) M^q(τ) dτ.      (5.2.4)

If A^α(τ) M^{q−1}(τ) ≤ 3^{q−1} |Γ_1|^α M_0^{q−1}, then

    M(t) ≤ 2M_0 + C ∫_0^t (t−τ)^{−1/2} M(τ) dτ + C ∫_0^t (t−τ)^{−β} 3^{q−1} |Γ_1|^α M_0^{q−1} M(τ) dτ.

Now we look for a function

    v(t) = 3M_0 e^{kt}      (5.2.5)

with a constant k, determined later, such that

    v(t) ≥ 2M_0 + C ∫_0^t (t−τ)^{−1/2} v(τ) dτ + C ∫_0^t (t−τ)^{−β} 3^{q−1} |Γ_1|^α M_0^{q−1} v(τ) dτ.

Plugging (5.2.5) into the above inequality, we obtain the equivalent form

    3M_0 e^{kt} ≥ 2M_0 + 3C M_0 ∫_0^t (t−τ)^{−1/2} e^{kτ} dτ + 3^q C M_0^q |Γ_1|^α ∫_0^t (t−τ)^{−β} e^{kτ} dτ.      (5.2.6)

Notice

    e^{−kt} ∫_0^t (t−τ)^{−β} e^{kτ} dτ = ∫_0^t (t−τ)^{−β} e^{−k(t−τ)} dτ = ∫_0^t τ^{−β} e^{−kτ} dτ
        = k^{β−1} ∫_0^{kt} s^{−β} e^{−s} ds ≤ k^{−(1−β)} Γ(1−β),

where Γ denotes the standard Gamma function. So in order to satisfy (5.2.6), it suffices to have

    1 ≥ (2/3) e^{−kt} + C k^{−1/2} + C 3^{q−1} M_0^{q−1} |Γ_1|^α k^{−(1−β)}

for some constant C = C(n, Ω, α). Noticing (2/3) e^{−kt} ≤ 2/3, by taking k ≥ 36C^2 it then suffices to have

    1/6 ≥ C 3^{q−1} M_0^{q−1} |Γ_1|^α k^{−(1−β)}.

The above inequality is equivalent to

    k ≥ (C 3^{q−1} M_0^{q−1} |Γ_1|^α)^{1/(1−β)},

after absorbing the factor 6 into C.
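The Gamma-function bound above can be verified numerically. The sketch below (illustrative, with arbitrary sample values k = 10, t = 0.5 and β = 3/4) evaluates ∫_0^t τ^{−β} e^{−kτ} dτ after the substitution τ = σ^4, which removes the integrable singularity at τ = 0, and compares it with k^{−(1−β)} Γ(1−β):

```python
import math

def kernel_integral(k, t, n=20000):
    """Midpoint quadrature of int_0^t tau^{-3/4} e^{-k tau} d tau, computed
    after substituting tau = sigma^4 (valid for beta = 3/4), which turns the
    singular integrand into the smooth 4 e^{-k sigma^4}."""
    upper = t ** 0.25
    h = upper / n
    total = 0.0
    for i in range(n):
        sigma = (i + 0.5) * h
        total += 4.0 * math.exp(-k * sigma ** 4) * h
    return total

# arbitrary sample values
k, t, beta = 10.0, 0.5, 0.75
lhs = kernel_integral(k, t)
bound = k ** (-(1.0 - beta)) * math.gamma(1.0 - beta)
```

The truncated integral sits just below the bound k^{−(1−β)} Γ(1−β), the gap being the Gamma-function tail beyond s = kt; this is exactly the slack used when k is chosen large.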
Therefore, it suffices to take

    k = C max{1, (3^{q−1} M_0^{q−1} |∂Ω|^α)^{1/(1−β)}}      (5.2.7)

for some constant C = C(n, Ω, α). With this choice of k, suppose A satisfies

    A^α(τ) v^{q−1}(τ) ≤ 3^{q−1} M_0^{q−1} |Γ_1|^α.      (5.2.8)

Then it follows from (5.2.6) and (5.2.8) that

    v(t) ≥ 2M_0 + C ∫_0^t (t−τ)^{−1/2} v(τ) dτ + C ∫_0^t (t−τ)^{−β} A^α(τ) v^q(τ) dτ.      (5.2.9)

Since v(0) = 3M_0 > M(0), it follows from (5.2.4) and (5.2.9) that

    M(t) ≤ v(t)   for all 0 ≤ t ≤ T.

According to (5.2.5), condition (5.2.8) amounts to A^α(t) ≤ e^{−(q−1)kt} |Γ_1|^α, namely

    A(t) ≤ exp(−(q−1)kt/α) |Γ_1|.

Finally, choosing α = 1/(2(n−1)), or equivalently m = (2n−2)/(2n−3), the proof is finished.

5.2.2 Control of the solution under a given value for convex domains

In Subsection 5.2.1, we discussed a strategy that prevents finite-time blowup. For practical problems, however, it is more useful to control the temperature by a specific value. This subsection provides a way to keep the temperature below any value larger than the maximum of the initial data, under the assumption that Ω is convex. The idea of this subsection is similar to that in Section 4.4.

Proof of Theorem 5.1.3. Define A(t) as in (5.2.3). The goal is to bound u above by B for any B > M_0. Let ε > 0 be such that B = e^ε M_0 and let t_* ∈ (0,1] be a constant which will be determined later. Define T_0 = 0 and p_0 = 1. For any k ≥ 1, let

    M_k = exp((1 − 2^{−k})ε) M_0.      (5.2.10)

Let T_k be the first time that M(t) reaches M_k; then the maximum principle implies the existence of x_k ∈ ∂Ω such that u(x_k, T_k) = M_k. In the following, we use induction to show that T_k ≥ k t_*. When k = 0, this is obviously true.

Step k (k ≥ 1): Suppose T_{k−1} ≥ (k−1) t_*. Let t_k := T_k − T_{k−1} be the time spent in the kth step. We intend to show t_k ≥ t_*, which implies T_k ≥ k t_*. First, if t_k ≥ 1, then t_k ≥ t_* already. So in the following we assume t_k < 1.
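The induction step below, like the analogous one in Subsection 5.2.3, relies on the elementary gap estimate M_k − M_{k−1} > 2^{−k} ε M_0 for the levels (5.2.10), which follows from e^x − 1 > x. A quick numerical confirmation (M_0 normalized to 1; the values of ε in the test are arbitrary):

```python
import math

def gap_ok(eps, kmax=40):
    """Check M_k - M_{k-1} > 2^{-k} eps M_0 for M_k = exp((1 - 2^{-k}) eps) M_0,
    k = 1, ..., kmax, with M_0 = 1."""
    M0 = 1.0
    M = [math.exp((1.0 - 2.0 ** (-k)) * eps) * M0 for k in range(kmax + 1)]
    return all(M[k] - M[k - 1] > 2.0 ** (-k) * eps * M0
               for k in range(1, kmax + 1))
```

The estimate holds with room to spare: M_k − M_{k−1} = e^{(1−2^{1−k})ε}(e^{2^{−k}ε} − 1), and both factors exceed the corresponding factors of 2^{−k} ε.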
By the representation formula (5.2.2), we have

    u(x_k, T_k) = 2 ∫_Ω Φ(x_k−y, t_k) u(y, T_{k−1}) dy
                  − 2 ∫_0^{t_k} ∫_{∂Ω} [∂Φ(x_k−y, t_k−τ)/∂n(y)] u(y, T_{k−1}+τ) dS(y) dτ
                  + 2 ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}+τ}} Φ(x_k−y, t_k−τ) u^q(y, T_{k−1}+τ) dS(y) dτ.

According to the definitions of T_{k−1} and T_k,

    M_k ≤ 2M_{k−1} ∫_Ω Φ(x_k−y, t_k) dy + 2M_k ∫_0^{t_k} ∫_{∂Ω} |∂Φ(x_k−y, t_k−τ)/∂n(y)| dS(y) dτ
          + 2M_k^q ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}+τ}} Φ(x_k−y, t_k−τ) dS(y) dτ.      (5.2.11)

Since Ω is assumed to be convex, by applying Lemma 4.4.2 and Lemma 4.4.3 to the above inequality, we obtain

    b_1 (M_k − M_{k−1}) ≤ 2M_k^q ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}+τ}} Φ(x_k−y, t_k−τ) dS(y) dτ
                        ≤ 2M_k^q ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}}} Φ(x_k−y, t_k−τ) dS(y) dτ.

Now Lemma 4.3.3 implies the existence of a constant C = C(n, Ω) such that

    b_1 (M_k − M_{k−1}) ≤ C M_k^q A^α(T_{k−1}) t_k^{1−β}      (5.2.12)

for any α ∈ (0, 1/(n−1)), where, as before, β = [1+(n−1)α]/2. Recalling the expression (5.2.10),

    M_k − M_{k−1} > 2^{−k} ε M_0   and   M_k < e^ε M_0.

In addition, by the assumption T_{k−1} ≥ (k−1)t_* and the fact that A is a decreasing function, we know A(T_{k−1}) ≤ A((k−1)t_*). Therefore, after absorbing b_1 and ε into C, it follows from (5.2.12) that

    2^{−k} < C e^{εq} M_0^{q−1} A^α((k−1)t_*) t_k^{1−β}.      (5.2.13)

Now if

    A(t) ≤ 2^{−t/(α t_*)} |Γ_1|,      (5.2.14)

which implies A^α((k−1)t_*) ≤ |Γ_1|^α 2^{−(k−1)}, then it follows from (5.2.13) that

    t_k ≥ [1/(2C e^{εq} M_0^{q−1} |Γ_1|^α)]^{1/(1−β)}.

Define

    t_* = min{1, [1/(2C e^{εq} M_0^{q−1} |Γ_1|^α)]^{1/(1−β)}}.      (5.2.15)

Then t_k ≥ t_* and this finishes Step k.

In summary, if t_* is defined as in (5.2.15) and A(t) satisfies (5.2.14), then the above induction is valid and can proceed forever. Thus, u is bounded by B for all time. Notice that

    t_* ≥ min{1, [1/(2C e^{εq} M_0^{q−1} |∂Ω|^α)]^{1/(1−β)}} =: K,

where K is a constant which only depends on n, Ω, q, M_0, α and B. In order to satisfy (5.2.14), it suffices to have

    A(t) ≤ 2^{−t/(αK)} |Γ_1|.

Finally, taking α = 1/(2(n−1)), the proof is finished.

5.2.3 Control of the solution under a given value for domains with local convexity near Γ_1

Again, global convexity is not practical in real applications.
In this subsection, we therefore extend the result to the locally convex case; the decay rate of |Γ_{1,t}| will, however, need to be much faster than in the convex case. The idea of this subsection is similar to that of the previous one, but different lower bounds for the time steps will be chosen, since only local convexity is available instead of global convexity.

Proof of Theorem 5.1.4. In the following, unless stated otherwise, C and C_1 represent constants which may depend on n, Ω, d, α and B; the value of C may change from line to line. The goal is to bound u above by B. Let ε > 0 be such that B = e^ε M_0 and let t_* ∈ (0,1] be a constant which will be determined later. Define T_0 = 0 and p_0 = 1. For any k ≥ 1, define

    M_k = exp((1 − 2^{−k})ε) M_0.      (5.2.16)

Let T_k be the first time that M(t) reaches M_k. Then the maximum principle implies the existence of x_k ∈ ∂Ω such that u(x_k, T_k) = M_k. We will use induction, choosing a suitable t_*, to show that for any k ≥ 0,

    T_k ≥ Σ_{i=1}^{k} t_*/i,      (5.2.17)

with the convention that Σ_{i=1}^{0} t_*/i = 0. Since lim_{k→∞} T_k = ∞ and M_k ≤ B, the solution u is then bounded by B for all time.

When k = 0, (5.2.17) is obviously true.

Step k (k ≥ 1): Suppose (5.2.17) is true for k−1; we show it also holds for k. Let t_k := T_k − T_{k−1} be the time spent in the kth step. It suffices to show that t_k ≥ t_*/k. If t_k ≥ 1, then t_k ≥ t_*/k already, so we assume t_k < 1 below.

By the representation formula (5.2.2), we have

    u(x_k, T_k) = 2 ∫_Ω Φ(x_k−y, t_k) u(y, T_{k−1}) dy
                  − 2 ∫_0^{t_k} ∫_{∂Ω} [∂Φ(x_k−y, t_k−τ)/∂n(y)] u(y, T_{k−1}+τ) dS(y) dτ
                  + 2 ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}+τ}} Φ(x_k−y, t_k−τ) u^q(y, T_{k−1}+τ) dS(y) dτ.

According to the definitions of T_{k−1} and T_k,

    M_k ≤ 2M_{k−1} ∫_Ω Φ(x_k−y, t_k) dy + 2M_k ∫_0^{t_k} ∫_{∂Ω} |∂Φ(x_k−y, t_k−τ)/∂n(y)| dS(y) dτ
          + 2M_k^q ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}+τ}} Φ(x_k−y, t_k−τ) dS(y) dτ.      (5.2.18)
Since Conv([Γ_1]_d) ⊆ Ω, by applying Lemma 4.4.3 and Corollary 4.5.2 to the above inequality, we obtain

    2b_1 (M_k − M_{k−1}) ≤ C M_k exp(−d^2/(8t_k)) + 2M_k^q ∫_0^{t_k} ∫_{Γ_{1,T_{k−1}+τ}} Φ(x_k−y, t_k−τ) dS(y) dτ.

Define A(t) as in (5.2.3). Noticing |Γ_{1,T_{k−1}+τ}| ≤ |Γ_{1,T_{k−1}}|, Lemma 4.3.3 implies

    M_k − M_{k−1} ≤ C M_k exp(−d^2/(8t_k)) + C M_k^q A^α(T_{k−1}) t_k^{1−β}

for some α ∈ (0, 1/(n−1)), where β = [1+(n−1)α]/2 as in Subsection 5.2.1. Again, noticing that

    M_k − M_{k−1} > 2^{−k} ε M_0   and   M_k < e^ε M_0,

we get

    2^{−k} ε M_0 ≤ C e^ε M_0 exp(−d^2/(8t_k)) + C e^{εq} M_0^q A^α(T_{k−1}) t_k^{1−β}.

Dividing by ε M_0, there exists a constant C_1 = C_1(n, Ω, d, q, M_0, B) such that

    2^{−k} ≤ C_1 exp(−d^2/(8t_k)) + C_1 A^α(T_{k−1}) t_k^{1−β}.

By induction,

    T_{k−1} ≥ Σ_{i=1}^{k−1} t_*/i ≥ t_* ln k.

Hence,

    2^{−k} ≤ C_1 exp(−d^2/(8t_k)) + C_1 A^α(t_* ln k) t_k^{1−β}.      (5.2.19)

Since the right-hand side of (5.2.19) is an increasing function of t_k, if t_* and A(t) are chosen to satisfy both

    C_1 exp(−d^2 k/(8 t_*)) ≤ 2^{−k−1}      (5.2.20)

and

    C_1 A^α(t_* ln k) (t_*/k)^{1−β} ≤ 2^{−k−1},      (5.2.21)

then t_k ≥ t_*/k and this finishes Step k.

In summary, if t_* and A(t) are chosen to satisfy both (5.2.20) and (5.2.21), then the above induction is valid and can proceed forever; thus u is bounded by B for all time. By elementary calculation, if

    t_* ≤ d^2 / (8[ln C_1 + 2 ln 2]),      (5.2.22)

then (5.2.20) is satisfied. Next, in order to realize (5.2.21), it suffices to have

    A^α(t_* ln k) ≤ 2^{1−k} |Γ_1|^α      (5.2.23)

and

    C_1 (t_*/k)^{1−β} ≤ 1/(4|Γ_1|^α).      (5.2.24)

It is readily checked that if

    t_* ≤ [1/(4C_1 |∂Ω|^α)]^{1/(1−β)},      (5.2.25)

then (5.2.24) is satisfied. According to (5.2.22) and (5.2.25), we define

    t_* = min{1, d^2/(8[ln C_1 + 2 ln 2]), [1/(4C_1 |∂Ω|^α)]^{1/(1−β)}}.      (5.2.26)

It is readily seen that t_* is a constant which only depends on n, Ω, d, q, M_0, B and α. With this choice of t_*, if A(t) satisfies

    A(t) ≤ exp((ln 2)(1 − e^{t/t_*})/α) |Γ_1|,      (5.2.27)

then (5.2.23) holds, since A^α(t_* ln k) ≤ 2^{1−k} |Γ_1|^α. Finally, taking α = 1/(2(n−1)) and requiring

    A(t) ≤ exp(2(n−1)(1 − e^{t/t_*})) |Γ_1|,

then A(t) satisfies (5.2.27) automatically (as 1 − e^{t/t_*} ≤ 0 and ln 2 < 1), and the proof is finished.
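Two elementary facts drive the induction just completed: the harmonic-sum bound T_{k−1} ≥ t_* Σ_{i=1}^{k−1} 1/i ≥ t_* ln k, and the passage from the decay condition (5.2.27) to (5.2.23). Both can be confirmed numerically; the values t_* = 0.3, α = 0.25 and |Γ_1| = 1 below are hypothetical sample choices:

```python
import math

# hypothetical sample values (not from the thesis)
t_star, alpha, G1 = 0.3, 0.25, 1.0

def harmonic(k):
    # sum_{i=1}^{k} 1/i  (an empty sum for k = 0)
    return sum(1.0 / i for i in range(1, k + 1))

def A(t):
    # the decay (5.2.27) imposed on |Gamma_{1,t}|
    return math.exp(math.log(2.0) * (1.0 - math.exp(t / t_star)) / alpha) * G1
```

At t = t_* ln k one gets A(t)^α = 2^{1−k} |Γ_1|^α exactly, which is condition (5.2.23); the exponential factor e^{t/t_*} is what forces the much faster decay compared to the geometric decay (5.2.14) of the convex case.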
5.3 Adding a pump

In this section, we provide another way to prevent finite-time blowup. Let v be the solution to (5.1.4). If

    m ≥ (p−1)N/2   and   m ≥ (q−1)N,

then [3] shows that if the L^m norm of v does not blow up in finite time, then the L^∞ norm of v does not blow up in finite time either. Thus, it is equivalent to bound the L^m norm of v. The idea of the proof of Theorem 5.1.5 is similar to that in [31], where the domain Ω is assumed to be star-shaped. By using Corollary 5.3.2, we are able to generalize the proof to any C^2 domain Ω.

Lemma 5.3.1. Let [a,b] be a bounded interval and let f be a nonnegative C^1 function on [a,b]. Then for any r > 1 and ε > 0,

    ||f||_{L^∞[a,b]}^r ≤ ε ∫_a^b |f′(t)|^2 dt + [1/(b−a)] ∫_a^b f^r(t) dt + (r^2/ε) ∫_a^b f^{2r−2}(t) dt.

Proof. For all t_1, t_2 ∈ [a,b],

    f^r(t_1) − f^r(t_2) = ∫_{t_2}^{t_1} r f^{r−1}(t) f′(t) dt ≤ r [ε_1 ∫_a^b |f′(t)|^2 dt + (1/ε_1) ∫_a^b f^{2r−2}(t) dt],

where ε_1 is some positive constant to be determined. Integrating t_2 from a to b,

    (b−a) f^r(t_1) ≤ ∫_a^b f^r(t) dt + (b−a) r ε_1 ∫_a^b |f′(t)|^2 dt + [(b−a) r/ε_1] ∫_a^b f^{2r−2}(t) dt.

Let ε_1 = ε/r; then

    (b−a) f^r(t_1) ≤ ∫_a^b f^r(t) dt + (b−a) ε ∫_a^b |f′(t)|^2 dt + [(b−a) r^2/ε] ∫_a^b f^{2r−2}(t) dt.

Since t_1 is arbitrary,

    ||f||_{L^∞[a,b]}^r ≤ ε ∫_a^b |f′(t)|^2 dt + [1/(b−a)] ∫_a^b f^r(t) dt + (r^2/ε) ∫_a^b f^{2r−2}(t) dt.      (5.3.1)

Corollary 5.3.2. Let U = U′ × [a,b], where U′ is a bounded set in R^{n−1}, and let w be a nonnegative C^1 function on U. Then for any s ∈ [a,b] and ε > 0,

    ∫_{U′} |w(x̃, s)|^r dx̃ ≤ ε ∫_U |D_{x_n} w(x)|^2 dx + [1/(b−a)] ∫_U w^r(x) dx + (r^2/ε) ∫_U w^{2r−2}(x) dx.

Proof. For any x̃ ∈ U′, define f : [a,b] → R by f(t) = w(x̃, t) for all t ∈ [a,b]. Then by Lemma 5.3.1, for any s ∈ [a,b],

    |f(s)|^r ≤ ε ∫_a^b |f′(t)|^2 dt + [1/(b−a)] ∫_a^b f^r(t) dt + (r^2/ε) ∫_a^b f^{2r−2}(t) dt.

That is,

    |w(x̃, s)|^r ≤ ε ∫_a^b |D_{x_n} w(x̃, t)|^2 dt + [1/(b−a)] ∫_a^b w^r(x̃, t) dt + (r^2/ε) ∫_a^b w^{2r−2}(x̃, t) dt.

Integrating x̃ over U′, the conclusion follows.

Proof of Theorem 5.1.5. Let m ≥ max{(p−1)N/2, 2}; note that m ≥ (q−1)N then holds automatically, since p > 2q−1 gives (p−1)/2 > q−1. Define

    E(t) = ∫_Ω v^m(x,t) dx.
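Inequality (5.3.1) is easy to test numerically before it is put to use. In the sketch below, the sample function f(t) = 1 + sin^2(3t) on [0,2], with r = 3 and ε = 0.5, is an arbitrary illustrative choice:

```python
import math

def check_interpolation(f, df, a, b, r, eps, n=20000):
    """Evaluate both sides of (5.3.1) by midpoint quadrature; the supremum is
    approximated over the quadrature nodes together with the endpoints."""
    h = (b - a) / n
    I_grad = I_r = I_2r2 = 0.0
    sup = max(f(a), f(b))
    for i in range(n):
        t = a + (i + 0.5) * h
        sup = max(sup, f(t))
        I_grad += df(t) ** 2 * h
        I_r += f(t) ** r * h
        I_2r2 += f(t) ** (2 * r - 2) * h
    lhs = sup ** r
    rhs = eps * I_grad + I_r / (b - a) + (r ** 2 / eps) * I_2r2
    return lhs, rhs

# sample data: f(t) = 1 + sin^2(3t), so f'(t) = 3 sin(6t)
lhs, rhs = check_interpolation(lambda t: 1.0 + math.sin(3.0 * t) ** 2,
                               lambda t: 3.0 * math.sin(6.0 * t),
                               0.0, 2.0, 3.0, 0.5)
```

Here sup f = 2, so the left side is 8, comfortably below the right side; the same routine can be rerun with other f, r and ε.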
Then

    E′(t) = m ∫_Ω v^{m−1} v_t dx = m ∫_Ω v^{m−1} (Δv − ψ_1 v^p) dx
          = m ∫_Ω [∇·(v^{m−1} ∇v) − (m−1) v^{m−2} |∇v|^2] dx − m ∫_Ω ψ_1 v^{m+p−1} dx
          ≤ m ∫_Γ v^{m+q−1} dS − m(m−1) ∫_Ω v^{m−2} |∇v|^2 dx − m ∫_{Ω_1} v^{m+p−1} dx.      (5.3.2)

Next, it will be shown that for any ε > 0, there exists a constant C = C(Γ, Ω_1, q, m) such that

    ∫_Γ v^{m+q−1} dS ≤ ε ∫_{Ω_1} v^{m−2} |∇v|^2 dx + C ∫_{Ω_1} v^{m+q−1} dx + (C/ε) ∫_{Ω_1} v^{m+2q−2} dx.      (5.3.3)

Noticing that v^{m−2} |∇v|^2 = (4/m^2) |∇(v^{m/2})|^2, by writing w = v^{m/2} and r = 2 + 2(q−1)/m, we have r > 2 and (5.3.3) is equivalent, up to renaming the constants, to

    ∫_Γ w^r dS ≤ ε ∫_{Ω_1} |∇w|^2 dx + C ∫_{Ω_1} w^r dx + (C/ε) ∫_{Ω_1} w^{2r−2} dx.      (5.3.4)

Fix any point x_0 ∈ Γ. Since Γ is C^2, there exists a neighborhood V of x_0 in Ω_1 that can be straightened by a C^2 bijection Ψ : B_1 × [0,1) → V, where B_ρ denotes the ball in R^{n−1} with radius ρ. Define U = B_{1/2} × [0, 1/2]; then V_0 := Ψ(U) ⊂ V. Define U_0 = B_{1/2} × {0} and Γ_0 = Ψ(U_0).

By the change of variable x = Ψ(y), we obtain

    ∫_{Γ_0} w^r(x) dS(x) ≤ C ∫_{U_0} (w∘Ψ)^r(y) dS(y) ≤ C ∫_{B_{1/2}} (w∘Ψ)^r(ỹ, 0) dỹ.

Now, applying Corollary 5.3.2 to the function w∘Ψ (with a = 0, b = 1/2),

    ∫_{B_{1/2}} (w∘Ψ)^r(ỹ, 0) dỹ ≤ ε ∫_U |D_{y_n}(w∘Ψ)(y)|^2 dy + 2 ∫_U (w∘Ψ)^r(y) dy + (r^2/ε) ∫_U (w∘Ψ)^{2r−2}(y) dy.

Then, using the change of variable y = Ψ^{−1}(x) together with the chain-rule bound |D_{y_n}(w∘Ψ)(y)| ≤ C |(∇w)(Ψ(y))|,

    ε ∫_U |D_{y_n}(w∘Ψ)(y)|^2 dy + 2 ∫_U (w∘Ψ)^r(y) dy + (r^2/ε) ∫_U (w∘Ψ)^{2r−2}(y) dy
        ≤ C ε ∫_{V_0} |∇w(x)|^2 dx + C ∫_{V_0} w^r(x) dx + (C r^2/ε) ∫_{V_0} w^{2r−2}(x) dx.

In summary, we obtain

    ∫_{Γ_0} w^r(x) dS(x) ≤ C ε ∫_{V_0} |∇w(x)|^2 dx + C ∫_{V_0} w^r(x) dx + (C/ε) ∫_{V_0} w^{2r−2}(x) dx.      (5.3.5)

Finally, by a finite covering argument and a rescaling of ε, (5.3.4) is justified, which also means (5.3.3) is verified.

Now, combining (5.3.2) and (5.3.3) with ε = m−1, we obtain

    E′(t) ≤ C ∫_{Ω_1} v^{m+q−1}(x,t) dx + C ∫_{Ω_1} v^{m+2q−2}(x,t) dx − m ∫_{Ω_1} v^{m+p−1}(x,t) dx

for some constant C = C(Γ, Ω_1, q, m). Since p > 2q−1 by assumption, the highest power m+p−1 in the integrand carries a negative coefficient, so the integrand C a^{m+q−1} + C a^{m+2q−2} − m a^{m+p−1} is bounded above uniformly over a ≥ 0; hence E′(t) ≤ C. This implies that E(t) does not blow up in finite time. By applying Theorem 1.1 in [3] and the assumption p > 2q−1, we conclude that u exists globally.
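The mechanism behind Theorem 5.1.5, interior absorption ψ_1 v^p overpowering the boundary flux v^q once p > 2q − 1, can also be illustrated with a simple one-dimensional explicit finite-difference experiment. Everything in the sketch is an illustrative assumption rather than the thesis's setting: the interval [0,1], the pump region [0.7, 1] standing in for Ω_1, the damaged endpoint x = 1 standing in for Γ_1, and the exponents q = 2, p = 4.

```python
def simulate(q=2.0, p=4.0, pump_on=True, T=2.0, N=20, cap=50.0):
    """Explicit scheme for u_t = u_xx - psi(x) u^p on [0,1] with
    u_x(0,t) = 0 (intact insulation) and u_x(1,t) = u^q (damaged spot).
    Returns the largest value reached, stopping early past `cap`."""
    dx = 1.0 / N
    dt = 0.4 * dx * dx                 # satisfies the stability bound dt/dx^2 <= 1/2
    psi = [1.0 if i * dx >= 0.7 else 0.0 for i in range(N + 1)]  # pump region
    u = [1.0] * (N + 1)                # initial data u0 = 1
    peak = 1.0
    for _ in range(int(T / dt)):
        new = u[:]
        for i in range(1, N):
            lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
            sink = psi[i] * u[i] ** p if pump_on else 0.0
            new[i] = u[i] + dt * (lap - sink)
        # x = 0: homogeneous Neumann condition via the ghost node u[-1] = u[1]
        new[0] = u[0] + dt * 2.0 * (u[1] - u[0]) / dx ** 2
        # x = 1: nonlinear flux via the ghost node u[N+1] = u[N-1] + 2 dx u[N]^q
        lap = 2.0 * (u[N - 1] - u[N]) / dx ** 2 + 2.0 * u[N] ** q / dx
        sink = psi[N] * u[N] ** p if pump_on else 0.0
        new[N] = u[N] + dt * (lap - sink)
        u = new
        peak = max(peak, max(u))
        if peak > cap:                 # numerical proxy for blow-up
            break
    return peak
```

With the pump on, the computed temperature stays uniformly bounded over the whole run; with the pump off, it exceeds any fixed threshold in a time of order one, consistent with the finite-time blowup established earlier in the thesis.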
BIBLIOGRAPHY

[1] H. Amann. Quasilinear evolution equations and parabolic systems. Trans. Amer. Math. Soc., 293(1):191–227, 1986.
[2] H. Amann. Quasilinear parabolic systems under nonlinear boundary conditions. Arch. Rational Mech. Anal., 92(2):153–192, 1986.
[3] J. M. Arrieta, A. N. Carvalho, and A. Rodríguez-Bernal. Parabolic problems with nonlinear boundary conditions and critical nonlinearities. J. Differential Equations, 156(2):376–406, 1999.
[4] T. Cazenave, F. Dickstein, and F. B. Weissler. Global existence and blowup for sign-changing solutions of the nonlinear heat equation. J. Differential Equations, 246(7):2669–2680, 2009.
[5] K. Deng and H. A. Levine. The role of critical exponents in blow-up theorems: the sequel. J. Math. Anal. Appl., 243(1):85–126, 2000.
[6] C. Enache. Lower bounds for blow-up time in some non-linear parabolic problems under Neumann boundary conditions. Glasg. Math. J., 53(3):569–575, 2011.
[7] A. Friedman. Partial differential equations of parabolic type. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964.
[8] A. Friedman and B. McLeod. Blow-up of positive solutions of semilinear heat equations. Indiana Univ. Math. J., 34(2):425–447, 1985.
[9] H. Fujita. On the blowing up of solutions of the Cauchy problem for u_t = ∆u + u^{1+α}. J. Fac. Sci. Univ. Tokyo Sect. I, 13:109–124, 1966.
[10] V. Georgiev, H. Lindblad, and C. D. Sogge. Weighted Strichartz estimates and global existence for semilinear wave equations. Amer. J. Math., 119(6):1291–1319, 1997.
[11] Y. Giga and R. V. Kohn. Asymptotically self-similar blow-up of semilinear heat equations. Comm. Pure Appl. Math., 38(3):297–319, 1985.
[12] R. T. Glassey. Existence in the large for □u = F(u) in two space dimensions. Math. Z., 178(2):233–261, 1981.
[13] R. T. Glassey. Finite-time blow-up for solutions of nonlinear wave equations. Math. Z., 177(3):323–340, 1981.
[14] K. Hayakawa.
On nonexistence of global solutions of some semilinear parabolic differential equations. Proc. Japan Acad., 49:503–505, 1973.
[15] B. Hu. Blow-up theories for semilinear parabolic equations, volume 2018 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011.
[16] B. Hu and H.-M. Yin. The profile near blowup time for solution of the heat equation with a nonlinear boundary condition. Trans. Amer. Math. Soc., 346(1):117–135, 1994.
[17] F. John. Blow-up of solutions of nonlinear wave equations in three space dimensions. Manuscripta Math., 28(1-3):235–268, 1979.
[18] T. Kato. Blow-up of solutions of some nonlinear hyperbolic equations. Comm. Pure Appl. Math., 33(4):501–505, 1980.
[19] K. Kobayashi, T. Sirao, and H. Tanaka. On the growing up problem for semilinear heat equations. J. Math. Soc. Japan, 29(3):407–424, 1977.
[20] R. Kress. Linear integral equations, volume 82 of Applied Mathematical Sciences. Springer, New York, third edition, 2014.
[21] T.-Y. Lee and W.-M. Ni. Global existence, large time behavior and life span of solutions of a semilinear parabolic Cauchy problem. Trans. Amer. Math. Soc., 333(1):365–378, 1992.
[22] H. A. Levine. Some nonexistence and instability theorems for solutions of formally parabolic equations of the form Pu_t = −Au + F(u). Arch. Rational Mech. Anal., 51:371–386, 1973.
[23] H. A. Levine. Nonexistence of global weak solutions to some properly and improperly posed problems of mathematical physics: the method of unbounded Fourier coefficients. Math. Ann., 214:205–220, 1975.
[24] H. A. Levine. The role of critical exponents in blowup theorems. SIAM Rev., 32(2):262–288, 1990.
[25] H. A. Levine and L. E. Payne. Nonexistence theorems for the heat equation with nonlinear boundary conditions and for the porous medium equation backward in time. J. Differential Equations, 16:319–334, 1974.
[26] H. Lindblad and C. D. Sogge. Long-time existence for small amplitude semilinear wave equations. Amer. J. Math., 118(5):1047–1135, 1996.
[27] J.
López-Gómez, V. Márquez, and N. Wolanski. Blow up results and localization of blow up points for the heat equation with a nonlinear boundary condition. J. Differential Equations, 92(2):384–401, 1991.
[28] P. Meier. On the critical exponent for reaction-diffusion equations. Arch. Rational Mech. Anal., 109(1):63–71, 1990.
[29] C. E. Mueller and F. B. Weissler. Single point blow-up for a general semilinear heat equation. Indiana Univ. Math. J., 34(4):881–913, 1985.
[30] L. E. Payne and G. A. Philippin. Blow-up phenomena in parabolic problems with time dependent coefficients under Dirichlet boundary conditions. Proc. Amer. Math. Soc., 141(7):2309–2318, 2013.
[31] L. E. Payne, G. A. Philippin, and S. Vernier Piro. Blow-up phenomena for a semilinear heat equation with nonlinear boundary condition, I. Z. Angew. Math. Phys., 61(6):999–1007, 2010.
[32] L. E. Payne and P. W. Schaefer. Lower bounds for blow-up time in parabolic problems under Neumann conditions. Appl. Anal., 85(10):1301–1311, 2006.
[33] L. E. Payne and P. W. Schaefer. Lower bounds for blow-up time in parabolic problems under Dirichlet conditions. J. Math. Anal. Appl., 328(2):1196–1205, 2007.
[34] L. E. Payne and P. W. Schaefer. Bounds for blow-up time for the heat equation under nonlinear boundary conditions. Proc. Roy. Soc. Edinburgh Sect. A, 139(6):1289–1296, 2009.
[35] P. Quittner and P. Souplet. Superlinear parabolic problems. Blow-up, global existence and steady states. Birkhäuser Advanced Texts: Basler Lehrbücher. Birkhäuser Verlag, Basel, 2007.
[36] D. F. Rial and J. D. Rossi. Blow-up results and localization of blow-up points in an N-dimensional smooth domain. Duke Math. J., 88(2):391–405, 1997.
[37] J. Schaeffer. The equation u_tt − ∆u = |u|^p for the critical value of p. Proc. Roy. Soc. Edinburgh Sect. A, 101(1-2):31–44, 1985.
[38] J. H. Shi. On the problems of preventing the blowing up of the solutions for semilinear parabolic equations.
Acta Math. Sci. (English Ed.), 4(3):291–296, 1984.
[39] T. C. Sideris. Nonexistence of global solutions to semilinear wave equations in high dimensions. J. Differential Equations, 52(3):378–406, 1984.
[40] G. Tang, Y. Li, and X. Yang. Lower bounds for the blow-up time of the nonlinear nonlocal reaction diffusion problems in R^N (N ≥ 3). Bound. Value Probl., 2014:265, 2014.
[41] Y. Tao and S. Vernier Piro. Explicit lower bound of blow-up time in a fully parabolic chemotaxis system with nonlinear cross-diffusion. J. Math. Anal. Appl., 436(1):16–28, 2016.
[42] G. Todorova and B. Yordanov. Critical exponent for a nonlinear wave equation with damping. J. Differential Equations, 174(2):464–489, 2001.
[43] G. Viglialoro. Blow-up time of a Keller-Segel-type system with Neumann and Robin boundary conditions. Differential Integral Equations, 29(3-4):359–376, 2016.
[44] W. Walter. On existence and nonexistence in the large of solutions of parabolic differential equations with a nonlinear boundary condition. SIAM J. Math. Anal., 6:85–90, 1975.
[45] F. B. Weissler. An L^∞ blow-up estimate for a nonlinear heat equation. Comm. Pure Appl. Math., 38(3):291–295, 1985.
[46] B. T. Yordanov and Q. S. Zhang. Finite time blow up for critical wave equations in high dimensions. J. Funct. Anal., 231(2):361–374, 2006.
[47] Q. S. Zhang. A blow-up result for a nonlinear wave equation with damping: the critical case. C. R. Acad. Sci. Paris Sér. I Math., 333(2):109–114, 2001.