SOME PROPERTIES OF BACKWARD FORWARD PARABOLIC EQUATIONS FROM POPULATION DYNAMICS

By

Lianzhang Bao

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Mathematics - Doctor of Philosophy

2013

ABSTRACT

SOME PROPERTIES OF BACKWARD FORWARD PARABOLIC EQUATIONS FROM POPULATION DYNAMICS

By

Lianzhang Bao

In this dissertation we develop a new population dynamics model by combining probability theory with the logistic model:

u_t = (D(u)u_x)_x,    (1)

where

D(u) < 0 for u ∈ (0, α),    D(u) > 0 for u ∈ (α, 1).    (2)

The equation describes aggregation of the population when the density is small and diffusion when the density is large. When net birth is also taken into account, the equation becomes

u_t = (D(u)u_x)_x + g(u).    (3)

We assume that the net birth rate g(u) satisfies

g(0) = g(1) = 0,    g(u) > 0 for all u ∈ (0, 1).    (4)

Because of the singularity at the point u = α, the traveling wave solutions cannot be obtained easily by the methods used for purely diffusive models. In order to overcome this difficulty, we introduce a suitable definition of weak traveling wave solution and obtain the existence of traveling wave solutions depending on the traveling wave speed c.

Secondly, we consider some properties of the weak solution of Equation (1) with non-flux boundary conditions, such as existence and nonexistence of the weak solution when x ∈ Ω ⊆ R^1, and the asymptotic behavior of the solution:

lim_{t→+∞} u(x, t) = (1/|Ω|) ∫_Ω u(x, 0) dx,    provided u(x, 0) ≥ α.    (5)

In the third part we consider the original discrete model. We first prove that the discrete population density satisfies 0 ≤ u(j, t) ≤ 1, which is consistent with the assumptions made before deriving our new population model. Secondly, when u(j, 0) ≥ α = 1/2 for j = 0, 1, 2, . . . , N and u(j, 0) is monotone,

lim_{t→+∞} u(j, t) = (1/(N − 1)) Σ_{j=1}^{N−1} u(j, 0).    (6)

When N ≤ 4 and u(0, t) = u(N, t) = 0, we obtain convergence of the solutions as t → +∞ under very general initial conditions. Denoting the forward region Q_d^+(t) = {(i, t) | u(i, t) ≥ α, i ∈ {1, 2, 3, . . . , N}} and the backward region Q_d^−(t) = {(i, t) | u(i, t) ≤ α, i ∈ {1, 2, 3, . . . , N}}, we prove under general initial conditions that Q_d^+(t) ⊆ Q_d^+(t_1) for all t ≤ t_1.

In the last part, we consider the traveling wave solution for a cell-to-cell adhesion model, which differs from our first model:

ρ_t = [D(ρ)ρ_x]_x + g(ρ),    t ≥ 0, x ∈ R,    (7)

with quadratic coefficient

D(ρ) = 3γρ^2 − 4γρ + 1,    (8)

where ρ(x, t) represents the cell density and γ ∈ [0, 1] is the adhesion coefficient between cells.

To my family

ACKNOWLEDGMENTS

It is my great pleasure to thank those who provided me with so much support toward the completion of my dissertation. In particular, I would like to express my deepest and sincerest gratitude to my advisor, Dr. Zhengfang Zhou, for his constant encouragement and support during my six-year PhD studies at Michigan State University. His knowledge, insight and enthusiasm are invaluable. He introduced me to the field of backward-forward parabolic equations and generously shared his wonderful new ideas and expertise with me. I am also much obliged to him for helping me improve my English writing skills and my character; my thanks to him cannot be fully expressed in words. I would also like to express my appreciation to my committee members, Dr. Gabor Francsics, Dr. Sheldon E. Newhouse, Dr. Moxun Tang and Dr. Baisheng Yan, for their time, effort and valuable suggestions. I am grateful to Dr. Tien-Yien Li for sharing research and teaching experience with me. I am also indebted to Dr.
Peiru Wu for sharing industrial experience and career training with me. I also take this opportunity to thank Dr. Chengrong Yang, who has kindly provided suggestions and support all along. I owe a great debt of gratitude to Mr. Bizhe Bao and Mr. Lianxu Bao, who helped with my studies when I was young in my hometown. Many thanks go to my friends here in the US: Dr. Guangqun Cao, Dr. Langhua Hu, Dr. Tao Huang, Dr. Xiaoyu Li, Dr. Junshan Lin, Dr. Maureen Morton, Dr. Xiaoyi Mu, Mr. Jiarong Shi, Dr. Yuliang Wang, Mr. Xun Wang, Mr. Tianshuang Wu, Mr. Shanfu Yan, Mr. Fangjin Ye, Dr. Qiong Zheng, and many others. Thanks in particular to Dr. Wenmei Huang and Dr. Chunlei Zhang for taking care of me when I first came to the US in 2007, and to Ms. Laura Havenga and Mr. Ken Chester for helping me improve my English.

I would like to thank my family for their unconditional and unlimited love and support. Thanks to my cousin Xinxin Zhu for her company while we were at Michigan State University together. Last but not least, I would like to express my appreciation to the entire Department of Mathematics for its hospitality and services, and I am grateful for the financial support from Michigan State University. Thanks, Spartans!

TABLE OF CONTENTS

LIST OF FIGURES

Chapter 1  Introduction
  1.1  The review of the literature
  1.2  The derivation of the new population models
  1.3  Our main results

Chapter 2  Traveling wave solution for the new population model
  2.1  Preliminary results and properties of traveling wave solution
  2.2  Existence of the traveling wave solutions
  2.3  Regularity of weak traveling wave solutions

Chapter 3  Regularity and numerical analysis of the new population model
  3.1  Regularity of the continuous model
  3.2  Numerical analysis of the lattice model
  3.3  Asymptotic behaviors when N ≤ 4

Chapter 4  Traveling wave solution for a cell-to-cell adhesion model
  4.1  Properties of the weak traveling wave solution
  4.2  Main results of traveling wave solutions when 3/4 ≤ γ ≤ 1

BIBLIOGRAPHY

LIST OF FIGURES

Figure 1.1  Random walk in one-dimensional space.
Figure 1.2  Front type (I), (−∞, +∞); sharp type (II), (a, +∞).
Figure 1.3  Sharp type (III), (−∞, b); sharp type (IV), (a, b).
Figure 1.4  Interface behaviors of the backward forward parabolic equation.
Figure 3.1  Asymptotic behavior of monotone initial solution (t × 10^4). For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation.
Figure 3.2  The behaviors of backward forward regions as time changes from t = 0 to t = 0.95095.
Chapter 1

Introduction

Mathematics and probability have been used for centuries to understand population distributions, an approach that can be traced back to Fisher [11]. Most previous work uses reaction-diffusion models to describe the population distribution,

u_t = (D(u)u_x)_x + g(u),

where u is the population density, the coefficient D(u) is positive, and g(u) represents the net birth-death term. However, motivated by the need for survival, for mating, or to overcome a hostile environment, a population may tend to aggregate when its density lies in some interval I; in that case the coefficient D(u) can be negative for u ∈ I, while the population diffuses otherwise. We first review some of the literature on population modeling, and then establish new population models by combining probability theory with the logistic model.

1.1 The review of the literature

The review of the population model. Movement is a fundamental process for almost all biological organisms, from the single-cell level to the population level. Researchers have developed three categories of mathematical models to describe this biological phenomenon [38]. The first is the discrete model, where space is divided into a lattice of points, the variables are defined only at those points, and time changes in "jumps". The second is the continuous model, where all variables are defined at every point in space and time changes continuously. The last is the hybrid model, which is a mixture of the previous two. Each of these models has advantages and disadvantages depending on the phenomenon under consideration and on the length scale over which we wish to investigate it.

Discrete models of biophysical processes are useful when we are interested in the behavior of individual cells, as well as their interactions with other cells and with the surrounding medium [1], [3], [18]. Usually the cells are considered to be points which move on a lattice according to certain rules. These rules can be modified according to the states of neighboring points, such as whether or not they are occupied. Individual-based models have found useful applications in many physical systems, and even simple rules of interaction can give rise to remarkably complex behavior. In particular, individual-based models have found applications in ecology, pattern formation, tumor growth and angiogenesis associated with malignancy, amongst many others.

Continuous models frequently involve a reaction-diffusion equation [5]. These are useful when the length scale over which we wish to investigate the phenomenon is much greater than the diameter of the individual elements composing it. Such models have proved particularly useful in the study of pattern formation in nature, especially the phenomenon of "diffusion-driven instability". The application of hybrid models, where cells are modeled as discrete entities whose movements are influenced by continuous spatial fields, has also been found to be a useful approach [7]. The discrete and continuous approaches to the modeling of cell migration developed by Bao and Zhou [5] via a biased random walk will be investigated here.

If we consider both a continuous and a discrete model of the same phenomenon, we would expect the models to give rise to similar solutions at length scales where their ranges of applicability overlap. If the solutions are totally different, we should track down their differences and find the reasons.
Recently some models of diffusion-aggregation processes have been proposed in population dynamics. By using a random walk approach and assuming that the individuals of a population have the same probability of moving from one point to another, Skellam [35] derived the reaction-diffusion equation (Fisher-KPP equation) ut = D 2 u + g(u), (1.1) for u(r, t), the population density at the point r at time t, where D > 0 and g(u) is a monostable nonlinear reaction term (i.e, the net rate of growth like u(u − 1)). However, evidences from ecology and biology show the migration and movement of some species are depends on the local density information. Taylor and Taylor 1977 [36] introduced the ’∆ Model’ via the idea of random walk and Turchin 1989 [37] extended their model to the following: Consider a population of animals that move randomly until they perceive a conspecific, at which time they bias their movement towards that individual. The population is distributed along one-dimensional discrete space with the distance between spatial nodes equal to h. Let p(x, t) be the probability of finding an organism of any spatial position x at time t. During the time interval τ any individual can make a step of length h right with probability R(x, t), left with probability L(x, t), or make no move with probability N (x, t) (Figure 1.1). Movement of organisms is influenced by each other in the following way: (i) when there are 3 Figure 1.1: Random walk in one-dimensional space. no other animals at adjacent positions, each animal moves randomly, i.e. the probabilities of moving left or right are the same; (ii) if there is a conspecific on an adjacent position, the animal moves there with conditional probability k (conditioned on the presence of the other animal), or ignores the neighbor with conditional probability 1 − k. When the local population density is low, we can ignore the probability of having more than one conspecific in the immediate vicinity of each moving individual. These assumptions imply that at low density (p 1), R(x, t) = 1/2r(x, t) + kp(x + h, t), L(x, t) = 1/2r(x, t) + kp(x − h, t), where r(x, t) is the random component of movement r(x, t) = 1 − N (x, t) − kp(x + h, t) − kp(x − h, t). When the local density increases, for examples as a result of aggregation, the probability R(x, t) and L(x, t) will not strictly hold, since the probability that there are conspecific both 4 on the left and on the right can no longer be neglected. In addition, at high population densities the attraction between individuals can be greatly reduced, or even reversed, becoming repulsion. Instead of postulating any particular mechanism of behavioral interactions at high population densities, Turchin simply assume that k is a decreasing function of p(x, t). If there are no births and deaths, p(x, t) satisfies the recurrence equation (Okubo 1980): p(x, t) = N (x, t − τ )p(x, t − τ ) + R(x − h, t − τ )p(x − h, t − τ ) + L(x + h, t − τ )p(x + h, t − τ ). (1.2) The above equation can be used to predict how the probability distribution p(x, t) changes with time. However, it has several disadvantages. First, the equation is difficult to deal with analytically. Second, the spatial domain within which movement occurs is usually continuous, which means that there is no unique (or natural) way of breaking it into a lattice of discrete points. Taking a diffusion approximation of recursive equation p(x, t) avoids these problems. 
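To make the recurrence concrete, the following minimal sketch (my own code, not Turchin's) performs the update (1.2) directly on a lattice in the low-density regime, with a constant probability N0 of not moving and a constant attraction parameter k; the periodic boundary (via np.roll) and all numerical values are illustrative assumptions rather than choices made in the sources cited above.

```python
import numpy as np

# Minimal sketch of Turchin's biased random-walk recurrence (1.2), low-density regime.
# Assumptions (mine): constant staying probability N0, constant k, periodic boundary.
k, N0 = 0.3, 0.2
x = np.linspace(0.0, 10.0, 101)
p0 = 0.05 * np.exp(-(x - 5.0) ** 2)      # small initial density bump
p = p0.copy()

def step(p):
    """One application of p(x,t) = N p + R(x-h,t-tau) p(x-h) + L(x+h,t-tau) p(x+h)."""
    pr, pl = np.roll(p, -1), np.roll(p, 1)      # p(x+h), p(x-h)
    r = 1.0 - N0 - k * (pr + pl)                # random component of movement
    R = 0.5 * r + k * pr                        # step to the right, biased toward the right neighbour
    L = 0.5 * r + k * pl                        # step to the left, biased toward the left neighbour
    return N0 * p + np.roll(R * p, 1) + np.roll(L * p, -1)

for _ in range(500):
    p = step(p)

print("mass conserved:", np.isclose(p.sum(), p0.sum()))
print("peak density:", round(float(p0.max()), 4), "->", round(float(p.max()), 4))
```

Because N0 + R + L = 1 at every site, the update conserves total mass exactly; at this low density the random component dominates the attraction term, so the bump simply spreads, which is consistent with the discussion above.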
The first step is to expand all terms in Taylor series and the result is τ ∂ h2 ∂ 2 ∂ [ν(x, t)p(x, t)] + O(h3 ), p(x, t) = −h [δ(x, t)p(x, t)] + 2 ∂t ∂x 2 ∂x (1.3) where bias δ(x, t) = R(x, t) − L(x, t) and mobility ν(x, t) = R(x, t) + L(x, t) = 1 − N (x, t). O(h3 ) denotes the terms of the order h3 and higher. 5 To calculate δ(x, t), substituting R(x, t), L(x, t) and again expanding in Taylor series: δ(x, t) = R(x, t) − L(x, t) = k[p(x + h, t) − p(x − h, t) = 2kh ∂ p(x, t) + O(h3 ). ∂x Now substituting δ(x, t) in Equation (1.3) results in 2h2 ∂ ∂ h2 ∂ 2 ∂ [ν(x, t)p(x, t)] p(x, t) = − [kp(x, t) p(x, t)] + ∂t τ ∂x ∂x 2τ ∂x2 ∂ O(h3 ) h ∂x p(x, t)O(h3 ) + + . τ τ The next step in obtaining the diffusion approximation is to take limits in such a way that h2 and τ → 0 in the same rate, i.e. limh,τ →0 h2 /τ = d. Since O(h3 )/τ → 0 the equation becomes ∂u ∂ ∂u ∂2 = −2 (ku ) + 1/2 2 (νu), ∂t ∂x ∂x ∂x (1.4) where he substituted continuous density u instead of p(x, t), and set d equal to 1 by measuring movement on the appropriate time scale. In Turchin’s paper, He said the first time on the right hand side of Equation (1.4) represents aggregative movement at the rate k in the direction of increasing population gradient ∂u , ∂x while the second term represents random movement at a rate ν. The drawbacks of Turchin’s model are: (i) we should also use Taylor expression to ν = R(x, t) + L(x, t). (ii) After the derivation his model, he assumed that k is not constant and is a function of u which may result the different expression in Taylor series. (iii) By substituting 6 u instead of probability p(x, t), there are no very obvious physical relations between these two factors. Padr´n Victor 1998 [32] derived another diffusion aggregation model: In his model, he o supposed the transition probability p(x, t) is depend on the population density and p(x, t) = p(u) = e−u . He then assumed R(x, t) = L(x, t) = p(u(x, t)) and got: uk+1 = uk p(uk ) + uk (1 − p(uk )) 0 0 1 1 0 uk+1 = uk p(uk ) + uk (1 − 2p(uk )) + uk p(uk ) j+1 j+1 j j j−1 j−1 j uk+1 = uk −1 p(uk −1 ) + uk (1 − p(uk )) N N N N N where j = 1, 2, . . . , N − 1. uk = u(jh, kτ ) and by using so called diffusion approximation j again (Assuming limh,τ →0 h2 /τ = constant), he derived ∂uk j ∂t = ∂2 f (uk ) + O(τ, h2 ), j 2 ∂x (1.5) where f (u) = up(u). This formal manipulation yields the O(τ, h2 ) continuous approximation ut = ∂2 f (u), ∂x2 (1.6) f (u) = ue−u may involve negative diffusivity so that the standard initial-boundary value problems are not well-posed. The corresponding problems are, however, well-posed for the underlying discrete process; suggesting that the diffusion approximation is not always a good approximation. In his paper, he chose an alternative approaches close to Rosenau’s [34] quasi-continuous 7 approximation and the truncated order is O(τ, h4 ), where he get ∂uk j = ∂t ∂2 h2 ∂ 4 f (uk ) + f (uk ) + O(τ, h4 ). j j 12 ∂x4 ∂x2 Hence ∂uk h2 ∂ 4 ∂2 j − f (uk ) = f (uk ) + O(τ, h4 ). j j 2 ∂t 12 ∂x4 ∂x Substituting back and we can get ∂uk j 2 2 ∂uk ∂2 j k) + h ∂ = f (uj + O(τ, h4 ), 2 2 ∂t ∂t 12 ∂x ∂x which leds to the following Sobolev regularized equation: ut = ∆f (u) + h2 ∆ut . 12 When we use the diffusion approximation, the very basic assumption is lim h2 /τ = constant. h,τ →0 So the reasonable high order approximation should have the wave term: ∂uk j ∂t + τ ∂2 k ∂2 h2 ∂ 4 uj = f (uk ) + f (uk ) + O(τ, h4 ) j j 2 ∂t2 12 ∂x4 ∂x2 Which will led to the wave approximation: ∂uk j ∂t + τ ∂2 k ∂2 h2 uj = f (uk ) + ∆ut . 
j 2 ∂t2 12 ∂x2 8 (1.7) Which is totally different to the Sobolev regularization. Another drawback in this modeling is the assumption to the transition probability p(u), when u is small enough we can find the natural probability N (x, t) = 1 − 2p(u) is negative which violate the assumption of probability N (x, t) ≥ 0. Hans G. Othmer et al [18] derived the above diffusion aggregation in another way which combined cell’s response to some chemical. Their governing equation is as follows: ∂pi = T+ (W )pi−1 + T− (W )pi+1 − (T+ (W ) + T− (W ))pi . i−1 i+1 i i ∂t Here pi (t) is the conditional probability that a walker is at i ∈ Z and at time t, conditioned on the fact that it begins at i = 0 at t = 0. T± () are the transition probabilities per unit i time for a one-step jump to i ± 1, and W is given by W = (. . . , w−1 , w0 , w1 , . . . ). They also assumed that the transition probability is depend on the density of the control species at that site which will led to ∂pi = T+ (wi−1 )pi−1 + T− (wi+1 )pi+1 − (T+ (wi ) + T− (wi ))pi . i−1 i+1 i i ∂t By using the diffusion approximation assumption and Taylor expression, they can get: ∂2 ∂p = D 2 (T(w)p). ∂t ∂x Then they supposed for simplicity that the production of the signal or control substance w 9 depends quadratically on the local density of individuals, and decays via first-order kinetics. The evolution of w is given by dw = p2 − µw. dt They further assumed that the production of the signal occurs on a much faster time scale than movement, then the first term in the solution of the resulting singularly-perturbed version of the above equation is w = p2 /µ, and obtain the governing equation ∂p ∂2 = D 2 (T(p)p). ∂t ∂x If we take T(p) = 1 K+p2 or T(p) = e−p , then the equation can lead to a backward parabolic equation for suitable initial data. Other backward forward parabolic equations. Backward forward parabolic equations become more and more important not only in ecology and biology but also in the image processing. One famous example is Perona-Malik equation ut = div( u(x, t) ), 1 + | u(x, t)|2 ∀(x, t) ∈ Ω × [0, T ). (1.8) Let V = ux in one dimension, then V satisfies: Vt = [ In Equation (1.9), the coefficient (1 − V 2 ) Vx ]x . (1 + V 2 )2 (1−V 2 ) (1+V 2 )2 (1.9) is positive when |V | < 1 and negative when |V | > 1. The idea of the Perona-Malik equation is that when the image is near the edge, the gradient always big and we want to enhance the edge by using reverse diffusion and smooth 10 the noise by forward diffusion. There are plenty of literatures relative to this backward forward equation. Perona-Malik equation has very good effect in edge detection [33] and has some stability properties of the discrete schemes [9]. However, Kichenassamy [20] 1997 proved the result of nonexistence of global weak solution in one dimension without infinitely smooth initial condition which was called as ”The Perona-Malik Paradox”. But in higher dimension case when n ≥ 2, Ghisi and Gobbino [16] showed that the Perona-Malik equation admits radial solutions of class C 2,1 with a transcritical initial condition. The review of the traveling wave solution for Fisher-KPP equations. Traveling wave solutions (t.w.s) is a very important mathematical solution in reaction-diffusion equation, which in ecology correspond to invasions; in cell biology correspond to the advancing edge of an expanding cell population, such as a growing tumor. 
We recall that a traveling wave solution is a solution u(x, t) having a constant profile, that is u(x, t) = u(x − ct) = u(t) for some function u(t), and the constant c is the wave speed. In particular that a traveling wave solution connecting the steady states 1 and 0 always satisfies the boundary value problem (D(u)u ) + cu + g(u) = 0, (1.10) u(−∞) = 1, u(+∞) = 0.. (1.11) g(u) > 0 ∀u ∈ (0, 1). (1.12) Where g(0) = g(1) = 0, The first systematic analysis on the existence of traveling wave solution of the standard Fisher-KPP equation appeared in two separate works due to Fisher [11] and Kolmogorov et 11 al. [21]. The main ideas of the methodology introduced by Kolmogorov et al. are still used today. For the non-linear diffusion ut = [D(u)ux ]x + g(u), where D(u) is a strictly positive function on [0,1] and the kinetic part g(u) is as in the classic Fisher-KPP equation, Hadeler [17] gave the lower bound on c for the existence of traveling wave solution of front type. In the degenerate case where D(0) = 0 with D(u) > 0 ∀u ∈ (0, 1], S´nchez-Gardu˜ o and Maini [12], [13] used a dynamical systems approach to a n prove the existence and nonexistence of the traveling wave and the monotone decreasing property of the traveling front. Observe that the associated system in the phase-plane(u, u ),    u = v, (1.13)   D(u)v = −cv − D(u)v 2 − g(u), ˙ (where ˙ stands for differentiation with respect to u) presents a singularity when u = 0. In 1 order to remove it, one can introduce a parameter τ = τ (ξ) satisfying dτ = D(u(ξ)) . Except dξ at u = 0 where dτ is not defined, we have dτ > 0. So τ is invertible and we get the new dξ dξ system    u = D(u)v, (1.14)   v = −cv − D(u)v 2 − g(u) ˙ Where now denote differentiation with respect to τ , which is equivalent to (1.13) in the positive half-plane {(u, v) : u > 0, v ∈ R}. 12 , Figure 1.2: Front type (I), (−∞, +∞), sharp type (II), (a, +∞). , Figure 1.3: Sharp type (III), (−∞, b), sharp type (IV), (a, b). ˙ We note that when D(0) > 0, system (1.14) has three equilibria: P0 = (0, 0), P1 = (1, 0) ˙ and Pc = (0, −c/D(0)). It can admit both connections between P0 and P1 , and connections between Pc and P1 . The former one corresponds to the typical front type traveling wave solution, whereas the latter one gives rise to a new type of traveling wave solution, having profile which reaches the equilibrium 0 in a finite time ξ ∗ , with negative slope u (ξ ∗ ) = − ˙ c . D(0) In other words, this new connection called sharp-type t.w.s is defined on (−∞, ξ ∗ ] instead of on the whole real line. Theorem 1.1.1 [S´nchez-Gardu˜ o, and Maini ([12], Theorem 2)] Assume g(u) and D(u) a n satisfies the following conditions: g(u) ∈ C 2 ([0, 1]), g(u) > 0 in (0, 1), g(0) = g(1) = 0, g(0) > 0 and g(1) < 0; ˙ ˙ 13 ˙ ¨ D(u) ∈ C 2 ([0, 1]), D(0) = 0, D(u) > 0 ∀u ∈ (0, 1], D > 0, D(u) = 0 ∀u ∈ [0, 1]. Then a positive c∗ exists such that equation (1.10) has 1. no t.w.s for 0 < c < c∗ ; 2. a monotone t.w.s u(x, t) = u(x − c ∗ t) of sharp-type for c = c∗ ; 3. a monotone t.w.s of front-type satisfying (1.10) for each c > c∗ . In the doubly degenerate Fisher-KPP equations when D(0) = D(1) = 0 with D(u) > 0 elsewhere, under less regularity conditions on g(u) and D(u), Malaguti and Marcelli [30] obtained a continuum of traveling wave solutions having wave speed c greater than a threshold value c∗ and they showed the appearance of a sharp-type profile when c = c∗ . 
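The trichotomy described above (no wave, sharp wave, front) can also be observed numerically. The sketch below is my own illustration, not taken from the dissertation: it estimates the threshold speed c∗ for the classical porous-medium Fisher example D(u) = u, g(u) = u(1 − u) by shooting on the slope variable z(u) = D(u)u′, the same reduction that reappears later as Equations (2.7)-(2.8); the starting perturbation, tolerances and bisection bracket are all illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting on z(u) = D(u)u', which solves dz/du = -c - D(u)g(u)/z with z(1) = 0.
# For c >= c* the solution stays negative and returns to 0 as u -> 0+;
# for c < c* it arrives at u = 0 with a strictly negative value (no admissible profile).
D = lambda u: u                 # degenerate diffusivity, D(0) = 0, D'(0) = 1
g = lambda u: u * (1.0 - u)     # Fisher-KPP reaction term

def z_at_zero(c, eps=1e-6, u_min=1e-4):
    lam = (-c + np.sqrt(c * c + 4.0)) / 2.0     # positive root of lam^2 + c*lam + D(1)g'(1) = 0
    rhs = lambda u, z: [-c - D(u) * g(u) / z[0]]
    sol = solve_ivp(rhs, [1.0 - eps, u_min], [-lam * eps],
                    max_step=1e-3, rtol=1e-8, atol=1e-12)
    return sol.y[0, -1]

lo, hi = 0.1, 2.0                               # bracket assumed to contain c*
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if z_at_zero(mid) > -1e-3 else (mid, hi)

print("estimated threshold speed c* ~", round(0.5 * (lo + hi), 3))
```

For this particular example the estimate should come out near 1/√2 ≈ 0.707, the well-known sharp-front speed of the porous-medium Fisher equation, with a sharp-type profile exactly at c = c∗ and front-type profiles for larger speeds.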
Theorem 1.1.2 [Malaguti and Marcelli 2003] Consider equation(1.10) with g(u) satisfy (1.12) and D(u) ∈ C 1 ([0, 1]), D(0) = 0, and D(u) > 0 for all u ∈ (0, 1], furthermore ˙ D(0) > 0. Then there exists a constant c∗ > 0 such that equation(1.10) has 1. no t.w.s for 0 < c < c∗ ; 2. a monotone t.w.s. u of sharp-type with wave speed c∗ ; 3. a monotone t.w.s. u of front-type satisfying (1.10) for every wave speed c > c∗ . Moreover it holds 0 < c∗ ≤ 2 D(s)g(s) . s s∈(0,1] sup Finally, for all c ≥ c∗ the wavefront, respectively of front or sharp-type, is unique up to translation of the origin. When we consider the diffusion-aggregation equation with mono-stable reaction terms: ut = [D(ρ)ux ]x + g(u) t ≥ 0, x ∈ R, 14 where g(u) is a monostable nonlinear term and D(u) changes its sign once, from positive to negative values, in the interval u ∈ [0, 1], Maini et al. [28] proved the existence of infinitely many C 1 traveling wave solutions and these fronts are parameterized by their wave speed and monotonically connect to the stationary states u = 0 and u = 1. In degenerate case, i.e. when D(0) = 0 and/or D(1) = 0, sharp profiles appear, corresponding to the minimum wave speed. Given h(u) − h(u0 ) , u − u0 u→u+ D+ h(u0 ) = lim sup 0 and D− , D+ , D− denote the lower right Dini-derivtive, upper left Dini-derivative, lower left Dini-derivative respectively. Theorem 1.1.3 (Maini et al. 2006) Let g(u) ∈ C[0, 1] and D(u) ∈ C 1 [0, 1] be given functions respectively satisfying Equation (1.12) and D(u) > 0 in (0, β), D(ρ) < 0 in (β, 1). D+ (D(0)g(0)), D− (D(1)g(1)) < +∞. (1.15) Then, there exists a value c∗ > 0, satisfying 2 D(s)g(s) D(s)g(s) , sup } s 0 c∗ , 0β β1 15 – sharp of type (II) if and only if D(1) = 0 and c∗ < c∗ , 0β β1 – sharp of type (III) if and only if D(0) = D(1) = 0 and c∗ = c∗ , 0β β1 – of front-type and satisfying (1.10)in the remaining cases; • a unique (up to space shifts) t.w.s satisfying (1.10) for c > c∗ which always has a front-type profile. In 2010 S´nchez-Gardu˜ o et al. [14] investigated a family of degenerate negative diffusion a n equation (D(ρ) ≤ 0) with logistic-like growth rate g(u) and studied the one dimensional traveling wave dynamics for these equations. In 2011 Kuzmin and Ruggerini [22] considered a modified aggregation diffusion model, assumed the following sign condition on the diffusivity D(u): D(u) > 0 in (0, β) ∪ (γ, 1), D(u) < 0 in (β, γ), D (γ) = 0. with g(u) satisfying bi-stable condition. They gave necessary and sufficient conditions under which traveling wave solutions of the equation exist and provided an estimate for the minimal speed c∗ . In their situation, D (γ) = 0 is the very important condition in proving the existence of C 1 traveling waves. The review of the interface behaviors. There are very few literatures considering the interface behaviors of the backward forward parabolic equation. For the famous PeronaMalik equation (1.8) ut = div( u(x, t) ) ∀(x, t) ∈ Ω × [0, T ). 1 + | u(x, t)|2 The first paper deal with the interface behavior between the backward and forward regions 16 Figure 1.4: Interface behaviors of the backward forward parabolic. traces back to Grebenev [15] 1996. In his paper, he considered the problem ut = 1/2(u2 )xx , (x, t) ∈ QT = R × (0, T ), (1.17) without an sign restrictions for the function u. 
In the following weak solution framework Definition 1.1.4 A continuous weak solution of Equation (1.17) as a function u on QT with the following: u ∈ C(QT ) ∩ L∞ (QT ), x2 x1 u(x, t2 )ψ(x, t2 )dxdt − x2 x1 ux ∈ L∞ (QT ), loc u(x, t1 )ψ(x, t1 )dxdt − t2 t1 x2 x1 {uψt − uux ψx }dxdt = 0 for arbitrary t1 < t2 and x1 < x2 such that the rectangle [x1 , x2 ] × [t1 , t2 ] is in QT and any ψ ∈ C 1,1 (QT ) having compact support for all t ∈ [t1 , t2 ]. 17 In his paper, he used the ”level-set approach” method and mainly studied the emergence of an interfacial layer region for the following case: Case (a). u(x1 , t) > 0, u(y1 , t) < 0 for t ∈ [t1 , t2 ], y1 = x1 + 2δ, u(x, t1 ) > 0 if x ∈ [x1 , x + δ), u(x, t1 ) < 0 if x ∈ (x1 + δ, y1 ]. From conditions on u(x1 , t), u(y1 , t) follows that there exist functions h+ (t), h− (t), such that u(x, t) > 0 if x1 ≤ x < h+ (t), and u(x, t) < 0 if h− (t) < x ≤ y1 . The continuity of u implies that h+ (t) is a lower semicontinuous function and h− (t) is an upper semicontinuous function. He proved the continuous function in the following: Let D = {(x, t) ∈ P : −1 < x < h+ (t)}, P = (x1 , x + 2δ) × (t1 , t2 ) for some δ > 0. Since u continuous in QT , D is an open subset of R2 . Taking into account the idea the (n) (n) level-set approach for an approximation of h+ , h− by the level curves ξ+ , ξ− of u, he (j) considered u on the open set D. The existence of the smooth level curves ξ+ lie in D and connect points (x j , t) ∈ D to some point (y j , t1 ) of interval (x1 , x + δ) along which (j) u(ξ+ , t) = j , j = 1, 2, 3, . . . , . . . . Here { j } are noncritical values of u with the following properties: lim j = 0, j→∞ j+1 < j, 1 < min u(x1 , t), t ∈ [t1 , t2 ]. (j) The sequence {ξ+ } is a monotonously increasing. Considering a piecewise smooth loop L j (j) (j+s) bounded above by t = τ , below by t = t1 and laterally by ξ+ , ξ+ udx + 1/2∂/∂x(u)2 dt = 0. Lj 18 , it is easy to show that (1.18) From Equation (1.18), it follows (j+s) (τ ) ξ+ (j)(τ ) ξ+ (j+s) (t1 ) ξ+ u(x, τ )dx − (j)(t ) ξ+ 1 u(x, t1 )dx + j ( (j) dx + ux dt) − j+s ( (j+s) dx + ux dt) ξ+ ξ+ or (j) (j) ξ+ (τ ) − ξ+ (t1 ) + + − τ (j) ux (ξ+ (t), t)dt t1 (j+s) (j+s) (t1 ) ξ+ ξ+ (τ ) −1 { u(x, t1 )dx} − u(x, τ )dx − (j)(t ) j (j)(τ ) ξ+ 1 ξ+ τ (j+s) (j+s) (j) −1 ux (ξ+ (t), t)dt + ξ+ (τ ) − ξ+ (t1 )} j+s × j { t1 = 0. (1.19) Now we pass to the limit assuming s → ∞ in (1.19) and obtain (j)(τ ) ξ+ j − ξ+ (t1 ) + τ t1 (j) ux (ξ+ (t), t)dt + −1 j h+ (τ ) (j) ξ+ (τ ) u(x, τ )dx = −1 j t1 u(x, t1 )dx. (j) ξ+ (t1 ) (1.20) (j) In order to prove the derivatives of (ξ+ ) are uniformly bounded relatively to j. Grebenev used Equation (1.20) and got (j) (ξ+ ) (j) (τ ) + ux (ξ+ (τ ), τ ) + −1 { j h+ (τ ) (j) ξ+ (τ ) u(x, τ )dx} = 0 In order to test the last term is bounded. Grebenev took a set (j) (j+s) T = {(x, t) ∈ D : t1 < τ ≤ τ + ∆τ < t2 , ξ+ (t) ≤ x ≤ ξ+ 19 (t)}. Then he got (j+s) (τ ) ξ+ u(x, τ )dx u(x, τ + ∆τ )dx − (j) (j) ξ+ (τ ) ξ+ (τ +∆τ ) τ +∆τ τ +∆τ (j+s) (j) uux (ξ+ (t), t)dt − uux (ξ+ (t), t)dt. τ τ (j+s) (τ +∆τ ) ξ+ = = T ut dxdt = (1.21) Passing to the limit when s → ∞, and using the relation (j+s) lim uux (ξ+ s→∞ (t), t) = 0, (1.22) he got h+ (τ ) h+ (τ +∆τ ) | (j) ξ+ (τ +∆τ ) u(x, τ + ∆τ )dx − (j) ξ+ (τ ) u(x, τ )dx| ≤ ∆τ Kp , where Kp is some positive number which in general depends on P . From the above, he got the continuity properties of the interface. 
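For the reader's convenience, here is the one step the argument above leaves implicit: the loop identity (1.18) is simply Green's theorem applied to the divergence form of (1.17). (In the weak setting it is justified through the test functions of Definition 1.1.4 rather than through pointwise derivatives; the display below is only a formal sketch.)

```latex
% Why the loop integral (1.18) vanishes (sign depending on the orientation of L_j).
% L_j is a piecewise smooth closed loop bounding a region Omega_j inside Q_T where u is smooth.
u_t = \tfrac12\,(u^2)_{xx}
\quad\Longrightarrow\quad
\partial_t u = \partial_x\!\Big(\tfrac12\,\partial_x(u^2)\Big),
\qquad
\oint_{L_j}\!\Big(u\,dx + \tfrac12\,\partial_x(u^2)\,dt\Big)
  = \pm\iint_{\Omega_j}\!\Big(\partial_x\big(\tfrac12\,\partial_x(u^2)\big) - \partial_t u\Big)\,dx\,dt = 0 .
```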
There is a mistake in Equation (1.19), which will lead to the continuity property easily and also the assumption on ux ∈ L∞ (QT ) is not very appropriate. Because for the porous loc media equation, the special Berrenbeltt weak solution’s derivative is not bounded. These questions will be partly solved in our framework in later chapter. For the Perona-Malik equation in one-dimension: ut − (a(u2 )ux )x x = 0 ∀(x, t) ∈ QT := (−1, 1) × R+ , ux (±1, t) = 0 f or t ∈ R+ , u(x, 0) = u0 (x) 0n (−1, 1) and u0 (±1) = 0. (1.23) (1.24) (1.25) Kawohl and Kutev [19] 1998 proved that the backward regions shrink but always exist under 20 the initial solution was monotone. Theorem 1.1.5 Suppose that u(x, t) is a weak C 1 solution of (1.23), (1.24), (1.25) and the initial solution is concave. Then the union of supersonic and sonic regime of u(x, t) shrinks in time,i.e. Q− ∪ Q0 ; = (Q− ∪ Q0 ) ∩ {t = τ } satisfies the inclusion (Q− ∪ Q0 ) ⊂ (Q− ∪ Q0 ) τ τ τ τ s s for every 0 ≤ s ≤ τ. Also they had the following result under certain assumptions, Theorem 1.1.6 i) the number of the connected components of Q+ (u) ∩ {t = s} and of (Q− (u) ∪ Q0 (u)) ∩ {t = s} is invariant in s, and ii) in particular, no component of Q0 (u) can originate in Q+ (u). 1.2 The derivation of the new population models Aggregation diffusion population model. By the need for survival, mating or to overcome the hostile environment, the population have the tendency of aggregation when the population density is small. Here we consider one species living in a one-dimensional habitat. To derive the model we follow a biased random walk approach plus a diffusion approximation. First we discretize space in a regular manner. Let h be the distance between two successive points of the mesh, and let u(x, t) be the population density that any individual of the population is at the point x at time t. During a time period τ an individual which at time t is at the position x, can either: 1. move to the right of x to the point x + h, with probability R(x, t), or 2. move to the left of x to the point x − h, with probability L(x, t) or 3. stay at the position x, with probability N (x, t). 21 Assume that there are no other possibilities of movement we have N (x, t) + R(x, t) + L(x, t) = 1. We assume that R(x, t) = K(u(x + h, t)), L(x, t) = K(u(x − h, t)). Here we let K(u(x, t)) measures the probability of movement which depend on the population. Using the notations above, the density u(x, t) can be written as follows: u(x, t+τ ) = N (x, t)u(x, t)+R(x−h, t)u(x−h, t)+L(x+h, t)u(x+h, t)+τ g(u(x, t)). (1.26) Where g(u(x,t)) is the net birth rate at (x, t). By using Taylor series, we obtain the following approximation u(x, t) + τ d(Ru) h2 d2 (Ru) du = N (x, t)u(x, t) + {R(x, t)u(x, t) − h + } dt dx 2 dx2 d(Lu) h2 d2 (Lu) + {L(x, t)u(x, t) + h + } + τ g(u(x, t)), dx 2 dx2 then we get τ d(Lu) h2 d2 (Lu) du d(Ru) h2 d2 (Ru) = {−h + } + {h + } + τ g(u(x, t)). dt dx 2 dx2 dx 2 dx2 22 Set β(x, t) = R(x, t) − L(x, t) = K(u(x + h, t)) − K(u(x − h, t)) = 2h d [K(u(x, t)] + O(h3 ) dx and ν(x, t) = R(x, t) + L(x, t) = K(u(x + h, t)) + K(u(x − h, t)) = 2K(u(x, t)) + O(h2 ). We can get d[(R − L)u] h2 d2 [(R + L)u] du = −h + + τ g(u(x, t)). τ dt dx 2 d2 x Now we substitute β and ν in the above equation, we can get d 2 2 du 2 d{ dx [K(u(x, t)]u} + h d [2K(u(x, t))u] + τ g(u(x, t)) + O(h3 ). 
τ = −2h dt dx 2 d2 x We assume that h2 /τ → C > 0 (finite) as τ, h → 0, we get the following d{ d [K(u(x, t)]u} d2 [K(u(x, t))u] du = −2C dx +C + g(u(x, t)), dt dx d2 x du d d[K(u(x, t))u] = C{−2 [K(u(x, t)]u + }x + g(u(x, t)), dt dx dx du = C(−2K ux u + K ux u + Kux )x = C[(K − uK )ux ]x + g(u(x, t)). dt If we let K(u(x, t)) = 1/2(u2 − u3 ), we can test that 0 ≤ K(u(x, t)) ≤ 1 which satisfies the assumption of probability and that K − uK = u2 (u − 1/2), which means aggregation when 0 ≤ u < 1/2. By plugging this probability in Equation (1.26), and we denote the discrete 23 density u(x, t) = ut , u(x − h, t) = ut , u(x + h, t) = ut , . . . , we have: j j−1 j+1 ut+τ j = ut ut ut ut t ) + j j+1 (ut + ut t t t t + j j−1 (ut + ut uj j−1 − 1)(uj−1 − uj j+1 − 1)(uj+1 − uj ), j j 2 2 (1.27) which is a special finite difference scheme of the following backward forward parabolic equation: ut = [D(u)ux ]x (x, t) ∈ QT , (1.28) where QT := [0, 1] × [0, T ], D(u) < 0 in (0, α), D(u) > 0 in (α, 1). (1.29) Cell-to-cell adhesion population model. Here we consider one species living in a one-dimensional habitat. To derive the model we follow a biased random walk approach plus a diffusion approximation. First we discretize space in a regular manner. Let h be the distance between two successive points of the mesh, and let ρ(x, t) be the population density that any individual of the population is at the point x at time t and 0 ≤ ρ(x, t) ≤ 1. During a time period τ an individual which at time t is at the position x, can either: 1. move to the right of x to the point x + h, with probability R(x, t), or 2. move to the left of x to the point x − h, with probability L(x, t) or 3. stay at the position x, with probability N (x, t). Assume that there are no other possibilities of movement we have N (x, t) + R(x, t) + L(x, t) = 1. 24 We assume that R(x, t) = 1/2(1 − ρ(x + h, t))(1 − αρ(x − h, t)), L(x, t) = 1/2(1 − ρ(x − h, t))(1 − αρ(x + h, t)). Here the first factor 1 − ρ(x + h, t) of R(x, t) models volume filling with a scaled maximal density 1, and the second factor 1 − αρ(x − h, t) is a simple model for adhesion, which assumes that the probability of a given species jumping to the right is reduced by the presence of neighbors to the left, the meaning of L(x, t) is analogous. We can see that 0 ≤ R(x, t), L(x, t), R(x, t) + L(x, t) ≤ 1, which is different from [3]. In their case R(x, t) = (1 − ρ(x + h))(1 − αρ(x − h))/h2 , L(x, t) = (1 − ρ(x − h))(1 − αρ(x + h))/h2 may go to infinity as h → 0, which can not be explained from natural situation of probability. Using the notations above, the density ρ(x, t) can be written as follows: ρ(x, t + τ ) = N (x, t)ρ(x, t) + R(x − h, t)ρ(x − h, t) + L(x + h, t)ρ(x + h, t) + τ g(ρ(x, t)). Where g(ρ(x, t)) is the net birth rate at (x, t). By using Taylor series, we obtain the following approximation ρ(x, t) + τ d(Rρ) h2 d2 (Rρ) dρ = N (x, t)ρ(x, t) + {R(x, t)ρ(x, t) − h + } dt dx 2 dx2 d(Lρ) h2 d2 (Lρ) + {L(x, t)ρ(x, t) + h + } + τ g(ρ(x, t)), dx 2 dx2 25 then we get τ d(Rρ) h2 d2 (Rρ) dρ d(Lρ) h2 d2 (Lρ) = {−h + + } + {h } + τ g(ρ(x, t)). dt dx 2 dx2 dx 2 dx2 Set β(x, t) = R(x, t) − L(x, t) = 1/2(α − 1)(ρ(x + h) − ρ(x − h)) = (α − 1)h d [ρ(x, t)] + O(h3 ) dx and ν(x, t) = R(x, t) + L(x, t) = 1/2(2 − (1 + α)(ρ(x + h) + ρ(x − h)) + αρ(x + h)ρ(x − h) = 1 − (1 + α)ρ(x) + αρ2 (x) + O(h). We can get τ d[(R − L)ρ] h2 d2 [(R + L)ρ] dρ + τ g(ρ(x, t)). 
= −h + dt dx 2 d2 x Now we substitute β and ν in the above equation, we can get τ dρ h2 = (1 − α)h2 (ρx ρ)x + [(1 − (1 + α)ρ(x) + αρ2 (x))ρ(x)]xx + τ g(ρ(x, t)) + O(h3 ). dt 2 Using the same diffusion approximation as in [5], [18], [35], [32] and assuming that h2 /τ → C > 0 (finite) as τ, h → 0, we get the following: dρ = C[(1 − α)ρx ρ]x + C/2[(1 − 2(1 + α)ρ + 3αρ2 )ρx ]x + g(ρ(x, t)), dt dρ = C/2[(1 − 4αρ + 3αρ2 )ρx ]x + g(ρ(x, t)). dt 26 For simplicity, we just let C = 2, then we get Equation 7. We recall from [3] that equations ρt = [D(ρ)ρx ]x t ≥ 0, x ∈ R, (1.30) 3 with quadratic coefficient D(ρ) = 3αρ2 − 4αρ + 1, is globally well posed if α < 4 . When 3 α = 3/4, Equation (1.30) degenerate at ρ = 2/3. When α > 4 , Equation (1.30) is ill posed iff the initial density profile protrudes into the ’unstable’ interval Iα = (ρ (α), ρ (α)) := ( 2α − α(4α − 3) 2α + , 3α α(4α − 3) ) ⊂ [1/3, 1]. 3α (1.31) Equation (7)-(8) can be seen as an extension of standard Fisher-KPP equation ρt = D 2 ρ + g(ρ) (1.32) where D > 0 is constant. 1.3 Our main results For the first aggregation diffusion model, because it has the singularity at point u = α, we can not use the usual method to prove the existence of the traveling wave solution. Here we introduce a new type of traveling wave solution which only require continuity at u = α and do not need C 1 regularity. For the precise definitions of C 1 traveling wave and weak traveling wave solutions, see Definitions 2.1.1 and 2.1.2. Theorem 1.3.1 Let D(u) ∈ C[0, 1]∩C 1 [0, α]∩C 1 [α, 1] and g(u) ∈ C[0, 1] be given functions 27 respectively satisfying (2) and (4). Then, there exists a value c∗ > 0, satisfying 2 max{D (α+ )g(α), D (α− )g(α)} ≤ c∗ ≤ 2 max{ sup − s∈(0,α] D(s)g(s) D(s)g(s) , sup }, s s∈(α,1] s − α such that Equation (3) has i) no weak traveling wave solution satisfying (1.10) for c < c∗ ; ii) a unique (up to space shifts) weak traveling wave solution for c ≥ c∗ . Furthermore if D(u) ∈ C 1 [0, 1], we can obtain the C 1 smooth traveling wave solution when c > c∗ , but when c = c∗ , we obtain the weak traveling wave solution which may not be C 1 . Theorem 1.3.2 Let D(u) ∈ C 1 [0, 1] and g(u) ∈ C[0, 1] be given functions respectively satisfying (2) and (4). Then, there exists a value c∗ > 0, satisfying 2 D (α)g(α) ≤ c∗ ≤ 2 max{ sup − s∈(0,α] D(s)g(s) D(s)g(s) , sup }, s s∈(α,1] s − α such that Equation (3) has i) no C 1 traveling wave solution satisfying (1.10) for c < c∗ ; ii) a unique (up to space shifts) C 1 traveling wave solution for c > c∗ . When we consider the general weak solution of Equation (1) under the following framework Definition 1.3.3 A locally integrable function u(x, t) will be said to be a weak solution of the backward forward Equation (1) with non-flux condition if 1 0 u2 (x, t) + |D(u)|u2 dx x 28 (1.33) 1 is uniformly bounded for bounded t and if for any test function φ(x, t) in C0 , [uφt − D(u)ux φx ]dxdt = 0. (1.34) Then we have the following two results: Theorem 1.3.4 Suppose that D(u) satisfies (2), then there exists a nonnegative classical solution in C 2,1 (QT ) of Problem (1) for all T > 0, provided α < u(x, 0) ∈ C 1,β ([0, 1]), β ∈ (0, 1), and satisfies the non-flux conditions D(u)ux (0, t) = D(u)ux (1, t) = 0. Theorem 1.3.5 Suppose u0 ≥ α a.e. on [0, 1], and u(x, t) is the solution of Equation (1) 1 with non-flux boundary condition, then solution u(x, t) will go to constant C = 0 u(x, 0)dx. Theorem 1.3.6 Suppose 0 < u(x, 0) < α a.e. and u(x, 0) is not C ∞ on [0, 1], then Equation (1) has no weak solution. 
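Two of the computations quoted above are easy to confirm symbolically. The short SymPy sketch below (my own check, not part of the dissertation) recovers the aggregation-diffusion coefficient K − uK′ produced by the choice K(u) = (u² − u³)/2 of Section 1.2, and the endpoints of the unstable interval (1.31) for the adhesion coefficient (8).

```python
import sympy as sp

u, rho, gamma = sp.symbols('u rho gamma', positive=True)

# Coefficient of the first model: K(u) = (u^2 - u^3)/2 gives D(u) = K - u*K'.
K = (u**2 - u**3) / 2
print(sp.factor(K - u * sp.diff(K, u)))        # u**2*(2*u - 1)/2, i.e. u^2 (u - 1/2)

# Unstable interval of the adhesion model: roots of D(rho) = 3*gamma*rho^2 - 4*gamma*rho + 1.
D2 = 3 * gamma * rho**2 - 4 * gamma * rho + 1
roots = sp.solve(sp.Eq(D2, 0), rho)
print([sp.simplify(r) for r in roots])         # (2*gamma -+ sqrt(gamma*(4*gamma - 3)))/(3*gamma)

# gamma = 3/4: double root at 2/3 (degenerate case); gamma = 1: interval (1/3, 1).
print([sp.simplify(r.subs(gamma, sp.Rational(3, 4))) for r in roots],
      [sp.simplify(r.subs(gamma, 1)) for r in roots])
```

The last line also confirms the statement after (1.31) that, for 3/4 ≤ γ ≤ 1, the interval on which D(ρ) is negative is contained in [1/3, 1].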
When we consider the original discrete model and let α = 1/2, we obtain the following results:

Theorem 1.3.7 For all initial data with 1/2 = α ≤ u(j, 0) ≤ 1, j = 0, . . . , N, and non-flux boundary conditions, we have 1/2 ≤ u(j, t) ≤ 1 for all t ≥ 0. Furthermore, if u(j, 0) is monotone in j, then u(j, t) is monotone in j and

lim_{t→∞} u(j, t) = (1/(N − 1)) Σ_{j=1}^{N−1} u(j, 0).

Theorem 1.3.8 For all initial data 0 ≤ u(j, 0) ≤ 1, j = 0, . . . , N, with non-flux boundary conditions, the solution of Equation (1.27) remains in [0, 1] and the total density is conserved.

We denote the forward region Q_d^+(t) = {(i, t) | u(i, t) ≥ 1/2, i ∈ {1, 2, 3, . . . , N}} and the backward region Q_d^−(t) = {(i, t) | u(i, t) < 1/2, i ∈ {1, 2, 3, . . . , N}}; then we have the following result:

Theorem 1.3.9 Under any general initial condition u(j, 0), for any t_1 > t ≥ 0,

Q_d^+(t) ⊆ Q_d^+(t_1).    (1.35)

In the special case when N ≤ 4 and u(0, t) = u(N, t) = 0 for t ≥ 0, which corresponds to a hostile environment in biology, we also obtain the asymptotic behavior of this discrete model under general initial conditions.

For the cell-to-cell adhesion model the coefficient may change from positive to negative and then back to positive. Using the idea of weak traveling wave solution introduced for our first population model, we also prove the monotonicity of the weak traveling wave solution and the existence of traveling wave solutions.

Theorem 1.3.10 Let D(ρ) and g(ρ) be given functions satisfying (8) and (4) respectively, and let 3/4 ≤ γ ≤ 1. There exists a value c∗ > 0, satisfying

max{D′(0)g(0), D′(ρ_+(γ))g(ρ_+(γ))} ≤ (c∗)²/4 ≤ max{ sup_{s∈(0, ρ_−(γ)]} D(s)g(s)/s,  sup_{s∈(ρ_−(γ), 1], s ≠ ρ_+(γ)} D(s)g(s)/(s − ρ_−(γ)) },

such that Equation (3) has
i) no weak traveling wave solution satisfying (1.10) for c < c∗;
ii) a unique (up to space shifts) weak traveling wave solution for c = c∗;
iii) a unique (up to space shifts) C^1 traveling wave solution for c > c∗.
Here ρ_−(γ) < ρ_+(γ) are the two roots of D(ρ) given in (1.31).

Chapter 2

Traveling wave solution for the new population model

When considering the existence of traveling wave solutions u(x − ct) for the new population model, we find that there is a singularity at the point u = α, so the traveling wave solution cannot be obtained easily by the methods used for purely diffusive models (those with D(u) ≥ 0 for all u ∈ [0, 1]). In order to overcome this difficulty, we introduce a suitable definition of weak traveling wave solution and obtain the existence of traveling wave solutions from 1 to α and from α to 0. By gluing these two traveling wave solutions together, we obtain the existence of the weak traveling wave solutions.

2.1 Preliminary results and properties of traveling wave solution

In the following we will consider the equation

u_t = [D(u)u_x]_x + g(u),    t ≥ 0, x ∈ R.
Definition 2.1.1 When D(u) ∈ C 1 [0, 1], a classical traveling wave solution of (2.1) is a function u ∈ C 1 (a, b), with (a, b) ⊆ R, such that D(u)u ∈ C 1 (a, b), satisfying Equation (2.1) in (a, b) and the boundary conditions u(a+ ) = 1 lim D(u(ξ))u (ξ) = ξ→a+ u(b− ) = 0, , lim D(u(ξ))u (ξ) = 0. ξ→b− (2.2) (2.3) Condition (2.3) added to the classical boundary condition (2.2) is motivated by the possible occurrence of sharp type profiles, that is, solutions reaching the equilibria at a finite value a or b. However, when the existence interval is the whole real line, then condition (2.3) is automatically satisfied and then Definition 2.1.1 is reduced to the classical one which is the 32 front type traveling wave solution. However u may not be C 1 because of the singularity of the equation at u = α. We have to give a definition of weak solution. Definition 2.1.2 A weak traveling wave solution of (2.1), is a function u ∈ C(a, b) satisfyb ing boundary condition (2.2) and for any t with u(t) = 0, α and 1, u (t) exists, a g(u(s))ds = c and t D(u)u (t) + cu(t) + g(u(s)ds = c, (2.4) a Remark: Equation (2.4) comes from t [D(u)u (t) + cu(t) + g(u(s)ds] a = (D(u)u (t)) + cu (t) + g(u(t)) = 0, b and limt→a+ u(t) = 1, limt→a+ D(u(t))u (t) = 0. a g(u(s))ds = c comes from u ds = −c[u(b) − u(a)] = c. (D(u)u ) ds − c g(u(s))ds = a b b b a a Also from the definition of the weak traveling wave solution and the property of g(u), c > 0. When the set {t|u(t) = 0, α, 1} is isolated, then D(u(t))u (t) can be extended to a continuous function on (a, b). With this definition (2.4), if u(t) is a weak solution on (a, b) with −∞ < a < b < ∞, we can extend u(t) to (−∞, ∞), defined by    u(t), a < t < b,     u(t) = t ≤ a,  1,      0, t ≥ b. 33 It is easy to check that u(t) is a weak traveling wave solution on (−∞, ∞). Proposition 2.1.3 Let u be a weak traveling wave solution to (2.1) satisfying (2.2). If a = −∞, u(ξ) < 1 for ξ ∈ (−∞, b) if b = +∞, u(ξ) > 0 on (a, +∞) Proof. then then limξ→−∞ D(u(ξ))u (ξ) = 0, limξ→+∞ D(u(ξ))u (ξ) = 0. We only prove when b = +∞, the idea for a = −∞ is the same. Assume b = +∞, let T := inf{ξ : u(t) < α for all t ∈ (ξ, +∞)} and define α H(u) := D(s)g(s)ds, u 1 Φ(ξ) := [D(u(ξ))u (ξ)]2 − H(u(ξ)) 2 for u ∈ (0, α) and ξ ∈ (T, +∞). Then Φ (ξ) = (D(u(ξ))u (ξ)) D(u(ξ))u (ξ) + D(u(ξ))g(u(ξ))u (ξ) = −cD(u(ξ))(u (ξ))2 . Since D(u(ξ)) ≤ 0, we have Φ(ξ) is monotone increasing. The limit limξ→+∞ Φ(ξ) exists. On the other hand, we know α lim H(u(ξ)) = ξ→+∞ 0 D(s)g(s)ds ∈ R. Hence, from the definition of Φ(ξ), we deduce existence of the limit lim D(u(ξ))|u (ξ)| =: l ∈ [−∞, 0]. ξ→+∞ If l < 0, note u(+∞) = 0 and D(u(ξ)) ≤ 0 in (T, ∞), then limξ→+∞ |u (ξ)| := k ∈ (0, ∞], 34 in contradiction with the boundedness of u. Therefore, l = 0. 2 As a consequence, when (a, b) = (−∞, ∞) and 0 < u(t) < 1 on (−∞, ∞), Condition (2.3) is satisfied and Definition 2.1.1 reduces to the classical front type traveling wave solution. We should also remark that if there exists a < a1 such that u(a1 ) = 1, then u ≡ 1 on [a, a1 ]. In fact, if u(ξ) = 1, then we can find a subinterval (a2 , a3 ) ∈ (a, a1 ) such that 0 < u(ξ) < 1 on (a2 , a3 ), limξ→a2 u(ξ) = limξ→a3 u(ξ) = 1. Furthermore we have ξn ∈ (a2 , a3 ), ξn → a3 such that u (ξn ) ≥ 0, which implies that ξn ξn g(u(s))ds ≤ D(u(ξn ))u (ξn ) + cu(ξn ) + cu(ξn ) + a g(u(s))ds = c. a Hence c < cu(a3 ) + a3 ξn g(u(s))ds = a lim (cu(ξn ) + ξn →a3 g(u(s))ds) a ξn ≤ lim [D(u(ξn ))u (ξn ) + cu(ξn ) + ξn →a3 g(u(s))ds] = c. a A contradiction. 
Similarly, if there exists b1 < b such that u(b1 ) = 0, we have u ≡ 0 on [b1 , b]. Let (a∗ , b∗ ) (possibly the whole real line) denote the interval such that 0 < u(ξ) < 1, limξ→a∗ = 1, limξ→b∗ = 0. The following result is needed in order to prove the existence Theorem. Proposition 2.1.4 Let u(t) be the weak traveling wave solution of the equation (2.1), then u is strictly decreasing on (a∗ , b∗ ). Proof. We divide this proof into three steps. 35 Step 1. We claim that 0 < u(ξ) < α and u (ξ) < 0 for every t ∈ (T2 , b∗ ), (2.5) where T2 := inf{ξ : u(t) < α for every t ∈ (ξ, b∗ )}. It is obvious that 0 < u(ξ) < α on (T2 , b∗ ) from the definition of T2 and b∗ . It remains to show that u (ξ) < 0 on (T2 , b∗ ). Let us argue by contradiction. If not, there exists a value ξ ∗ ∈ (T2 , b∗ ) such that u (ξ ∗ ) ≥ 0 and α > u(ξ ∗ ) > 0. Then, d (D(u)u )|ξ=ξ ∗ = −cu (ξ ∗ ) − g(u(ξ ∗ )) < −cu (ξ ∗ ) ≤ 0, dξ hence D(u(ξ))u (ξ) is decreasing in a neighborhood of ξ ∗ , and since D(u(ξ)) < 0, we get u (ξ) > 0 in (ξ ∗ , ξ ∗ + δ) for some δ > 0. Note that u(b∗ ) = 0, u(ξ) has a local maximum at t∗ ∈ (ξ ∗ , b∗ ), from the equation (2.1) we have D(u)u + D (u)(u )2 + cu + g(u) = 0. u (t∗ ) = 0 leads to u (t∗ ) = − g(u) > 0, D(u) which contradicts to the fact that u has local maximum at t∗ . Step 2. We claim that α < u(ξ) < 1 and u (ξ) < 0 for every ξ ∈ (a∗ , T1 ), 36 (2.6) where T1 := sup{ξ : u(t) > α for every t ∈ (a∗ , ξ)}. We use a similarly argument as in step 1. If not, there exists a ξ ∗ ∈ (a∗ , T1 ) such that u (ξ ∗ ) ≥ 0 and u(ξ ∗ ) < 1. Then, as above, D(u(ξ))u (ξ) is decreasing in a neighborhood of ξ ∗ and being D(u(ξ)) > 0, we deduce u (ξ) > 0 in (ξ ∗ − δ, ξ ∗ ) for some δ > 0. From u(a∗ ) = 1, there exists η ∈ (a∗ , ξ ∗ ) such that u(η) is the local minimum of u on [a∗ , ξ ∗ ] . Hence u (η) = 0, α < u(η) < 1. By Equation (2.1), we have u (η) = − g(u(η)) < 0, D(u(η)) a contradiction to the fact η is the local minimum. So (2.6) is proved. Step 3. Let us show that T1 = T2 . First we prove lim D(u(t))u (t) = lim D(u(t))u (t) = 0. − t→T1 + t→T2 In the following we only prove limt→T − D(u(t))u (t) = L = 0, the proof is the same for 1 limt→T + D(u(t))u (t) = 0. 2 First, for any small δ > 0 such that (T1 − δ, T1 ) ⊂ (a∗ , T1 ), D(u(t)) > 0, u (t) < 0 which leads to D(u(t))u (t) < 0 on (T1 − δ, T1 ). Hence limt→T − D(u(t))u (t) = L ≤ 0. 1 If L < 0, from Equation (2.4), we have lim D(u(t))u (t) ≤ lim D(u(t))u (t) < 0. + t→T2 − t→T1 A contradiction to D(u(t))u (t) > 0, for t > T2 . 37 Now, we prove T1 = T2 . If not, we have T2 > T1 . By using Equation (2.4), lim D(u(t))u (t) = lim D(u(t))u (t) = 0, lim u(t) = lim u(t) = α, − t→T1 − t→T1 + t→T2 + t→T2 we have T2 0< g(u(s))ds = 0. T1 A contradiction. 2 Let u(ξ) be the solution of (2.4) and let ξ(u) : (0, 1) → (a∗ , b∗ ) be the inverse function, whose existence is ensured by u (ξ) < 0 except at ξ = T1 . Set z(u) := D(u)u (ξ(u)) for u ∈ (0, α) ∪ (α, 1). We get the following: z(u) = ˙ g(u)D(u) 1 D(u) dz = (D(u)u ) = (D(u)u ) = −c − . du u (ξ) z(u) z(u) (2.7) Before we prove our main theorem regarding aggregation and diffusion process, let us recall some results for reaction diffusion equation. When D(u) is strictly positive in (0, 1), including ˙ the case D(0) = 0 or D(1) = 0. In [30], it was proved when D(0) = 0 but D(0) = 0, the problem of existence of traveling wave solution of (2.1) is equivalent to the solvability condition of the following boundary value problem    z = −c − D(u)g(u) u ∈ (0, 1), ˙  z     z(u) < 0,       + z(0 ) = 0 z(1− ) = 0. 
 38 (2.8) To simplify notation, we put if u(t) is a front-type traveling wave solution,  ∗ t  t :=    +∞  if u(t) is a sharp traveling wave solution. (2.9) We start with Theorem 2.1.5 concerning the solvability of problem (2.8). The result appeared ˙ in [29] for D(u) > 0 in [0,1], and then in [30] for D(0) = 0 but D(0) = 0. Theorem 2.1.5 Let g ∈ C[0, 1], satisfying (2) and D(u) ∈ C 1 [0, 1] with D(u) > 0 for all u ∈ (0, 1) and assume that (Dg) (0+ ) < +∞ holds, then there exists c∗ > 0 satisfying 2 (Dg) (0+ ) ≤ c∗ ≤ 2 D(s)g(s) s s∈(0,1] sup (2.10) such that (2.8) is solvable if and only if c ≥ c∗ . Moreover, for every c ≥ c∗ , the solution is unique. We state the following equivalent relation between solution of (2.1) and (2.8) for D(u) > 0 when u ∈ (0, 1). The theorem is proven in [28]. Theorem 2.1.6 Let g ∈ C[0, 1] satisfying (2) and D ∈ C 1 [0, 1] with D(u) > 0 in (0, 1). The existence of a traveling wave solution u(t) of Equation (2.1), with wave speed c, satisfying (2.2),(2.3) is equivalent to the solvability of Problem (2.8), with the same c. 2.2 Existence of the traveling wave solutions Because D(u) changes sign at u = α in our case, we have to divide the interval (0, 1) into two subintervals (0, α) and (α, 1). In each subinterval, we try to find a traveling wave. To 39 construct our weak solution, we have to make sure that each solution on subinterval is a sharp type at u = α. We can piece together the global traveling wave solution. Theorem 2.2.1 Let D(u) ∈ C[0, 1]∩C 1 [0, α]∩C 1 [α, 1] and g(u) ∈ C[0, 1] be given functions respectively satisfying (2) and (4). Then, there exists a value c∗ > 0, satisfying 2 max{D (α+ )g(α), D (α− )g(α)} ≤ c∗ ≤ 2 max{ sup − s∈(0,α] D(s)g(s) D(s)g(s) , sup }, s s∈(α,1] s − α such that Equation (3) has i) no weak traveling wave solution satisfying (2.1) for c < c∗ ; ii) a unique (up to space shifts) weak traveling wave solution for c ≥ c∗ . Proof. The strategy is to construct traveling wave solution for a∗ < t < T1 and T1 < t < b∗ such that • u(t) ∈ [0, α) for t ∈ (T1 , b∗ ), • u(t) ∈ (α, 1] for t ∈ (a∗ , T1 ), • limt→T + u(t) = limt→T − u(t) = α, 1 1 • u(a∗+ ) = 1, u(b∗− ) = 0, D(u(a∗+ ))u (a∗+ ) = D(u(b∗− ))u (b∗− ) = 0 t • D(u)u (t) + cu(t) + a∗ g(u(s)ds = c. If we can find such solutions, then the combined solution will give us the existence of traveling solution. Let us consider the first order equation (2.7) for 0 < u < α. We make the following change of variable: D(u) := −D(1 − u), 40 g(u) := g(1 − u). Since D(u)g(u) > 0 for every 1 − α < u < 1, with D(1)g(1) = D(1 − α)g(1 − α) = 0. According to Theorem 2.1.5, if we consider Equation (2.8) on the interval [γ, 1] where γ < 1 with z(γ) = z(1) = 0 we change variable z(s) = z[γ + s(1 − γ)] and get [z] (s) = (1 − γ)z [γ + s(1 − γ)] = −c(1 − γ) − (1 − γ)[D(γ + s(1 − γ))g(γ + s(1 − γ))] , z(s) with z(0) = z(1) = 0. We can get the same inequality as (2.10) on [γ, 1]. Because D(1 − α) = 0, we can apply Theorem 2.1.5 in [1 − α, 1] and derive the existence of a threshold c∗ > 0 1 satisfying 2 D [(1 − α)+ ]g(1 − α)) ≤ c∗ ≤ 2 1 D(s)g(s) , s∈[1−α,1) 1 − s sup (2.11) that is 2 D (α− )g(α) ≤ c∗ ≤ 2 1 sup − s∈(0,α] D(s)g(s) , s (2.12) such that the equation ω := −c − ˙ D(u)g(u) , ω 1 − α < u < 1, (2.13) admits negative solutions ω(u), satisfying ω((1 − α)+ ) = ω(1− ) = 0, if and only if c ≥ c∗ . 1 Putting z(u) := −ω(1 − u), we have z(u) = ω(1 − u) = −c − ˙ ˙ D(1 − u)g(1 − u) D(u)g(u) = −c − , ω(1 − u) z(u) 41 0 < u < α. Moreover, z(0+ ) = z(α− ) = 0, and z(u) > 0 for every u ∈ (0, α). 
So, if we consider the Cauchy problem    u = z(u) ,  D(u)   u(0) = α .  2 0 < u < α, (2.14) Let u(t) be the unique solution of (2.14) defined in its maximal existence interval (t1 , t2 ), with −∞ ≤ t1 < t2 ≤ +∞. So we can see u(t) is a solution of (2.1) in (t1 , t2 ). Observe that u (t) < 0 for every t ∈ (t1 , t2 ), so there exists the limit u(t− ) ∈ [0, α). 2 Since z(u) = 0 in (0, α), we deduce that u(t− ) = 0. Moreover, since limξ→t− u (ξ) = 2 2 z(u) limu→0− D(u) , If D(0) = 0, then limξ→t− u (ξ) = 0, t2 = +∞. Otherwise if t2 < +∞, then 2 the solution could be continued in the whole half-line (t2 , +∞), in contradiction with the maximality of the interval (t1 , t2 ). Taking into account that u(t+ ) = α and according to Equation (2.7) there exists the 1 inequality α t+ = ξ(α− ) − ξ( ) = 1 2 = α α 2 α D(u) du z(u) −c z (u) − ]du α g(u) g(u) [ 2 α = −c z(u) z(u)g (u) −( ) − ]du < M. α g(u) g(u) g 2 (u) [ 2 We obtain a solution u(t) of Equation (2.7) in (t1 , t2 ) such that t1 ∈ R, t2 ≤ +∞, u(t1 ) = α, u(t2 ) = 0. Using space shift, we can transfer t1 to T1 . Moreover D(0) = 0 implies t2 = +∞. When α < u < 1, let us consider the first order equation (2.7) for α < u < 1. We have D(u)g(u) is a mono-stable and Theorem 2.1.5 holds in [α, 1], we can deduce the existence of 42 a threshold value c∗ > 0, satisfying the estimate 2 D (α+ )g(α) ≤ c∗ ≤ 2 2 D(s)g(s) , s∈(α,1] s − α sup such that Equation (2.7) has a unique negative solution z(u) in (α, 1) with z(α+ ) = z(1− ) = 0, if and only if c ≥ c∗ . We just consider the Cauchy problem 2    u = z(u) , α < u < 1,  D(u)   u(0) = α+1 .  2 (2.15) We can repeat the same arguments developed for the case when 0 < u < α, and obtain that the unique solution u(t) of Problem (2.15) is a solution of (2.1) in (τ1 , τ2 ), with τ1 ≥ − + −∞, τ2 ∈ R, u(τ1 ) = 1, u(τ2 ) = α. Using space shift again, we transfer τ2 to T1 . Moreover, when D(1) = 0, we can have the inverse function ξ(u) in [1, α+1 ] and ξ(1− ) = −∞. 2 Putting c∗ := max{c∗ , c∗ }, and glue the solutions of (2.14) and (2.15) by a time-shift, we 1 2 obtaining a continuous function u(t) on some interval (a, b), with −∞ ≤ a < b ≤ +∞ which is a decreasing solution of Equation (2.4) in (a, b) and satisfies u(a+ ) = 1, u(b− ) = 0, lim u(t) = lim u(t) = α. + t→T1 − t→T1 From the construction of u, we also have lim D(u(t))u (t) = lim D(u(t))u (t) = 0. + t→T1 − t→T1 43 Note that u(t) is smooth on (a∗ , T1 ), we have, t D(u)u (t) + cu(t) + a∗ g(u(s)ds = constant, t ∈ (a∗ , T1 ]. (2.16) Taking the limit t → a∗ , we see that the constant must be c. Similarly t D(u)u (t) + cu(t) + b∗ g(u(s)ds = 0, t ∈ [T1 , b∗ ). (2.17) Take t = T1 in both (2.16) and(2.17), we see that b∗ a∗ g(u(s))ds = c. Furthermore for T1 < t < b∗ , Equation (2.16) also holds. In fact, in this case, using Equation (2.17) and b∗ a∗ g(u(s)ds = T1 a∗ b∗ t g(u(s)ds + g(u(s)ds + T1 g(u(s)ds. t We have t D(u)u (t) + cu(t) + = g(u(s)ds a∗ b∗ D(u)u (t) + cu(t) − b∗ g(u(s)ds + t a∗ b∗ g(u(s)ds = 0 + a∗ g(u(s)ds = c. II) Non-existence for c < c∗ . We proved in the Proposition 2.1.4 that u (t) < 0 for u ∈ (0, 1) and this implies the existence of the inverse function ξ(u) in (0, 1) especially in (0, α) where D(u) < 0, then we define ω(u) := −D(1 − u)u (ξ(1 − u)), it can be checked the 44 negative solution of (2.14), satisfying ω(1− ) := ω((1 − α)+ ) = 0. Therefore, by applying Theorem 2.1.5 , we can deduce that c ≥ c∗ . 1 When α < u < 1, we have D(u) > 0, we can also use Theorem 2.1.5 again to get c ≥ c∗ . 
2 Summarizing, c ≥ c∗ is a necessary condition for the existence of weak traveling wave solution of (2.1). 2 2.3 Regularity of weak traveling wave solutions If D(u) ∈ C 1 [0, 1], we can prove the weak traveling wave solution for c > c∗ in Theorem + − 2.2.1 actually is C 1 , which is our Theorem 2.3.2. When c = c∗ , u (T1 ) and u (T1 ) may take possible two different values, which leads to only weak traveling wave solution. Before we prove this theorem, we need the following lemma. Lemma 2.3.1 Under the assumption of Theorem 2.1.5, for every c > c∗ ,the following limit, z(u) = λ1 , u→α+ u − α lim 1 exist, where λ1 = 2 (−c + z(u) = λ2 u→α− u − α lim c2 − 4D (α+ )g(α)) and λ2 = 1 (−c + 2 z(u) =− c+ u→α+ D(u) z(u) + u (T1 ) = lim =− c+ u→α− D(u) − u (T1 ) = lim c2 − 4D (α− )g(α)). 2g(α) c2 − 4D (α+ )g(α) 2g(α) , c2 − 4D (α− )g(α) However, when c = c∗ , λ1 may take two values 1 (−c∗ ± 2 45 (2.18) . (c∗ )2 − 4D (α+ )g(α)). Proof of Lemma 2.3.1. We follow the idea from [30] to prove the existence of the limits. For c ≥ c∗ , let z(u) be the solution of (2.7) satisfying z(α+ ) = z(1− ) = 0. First we prove 2 the existence of the limit z(u) . u→α+ u − α λ1 := lim (2.19) z(u) The idea for λ2 = limu→α+ u−α is the same and we only prove λ1 . Assume by contradiction that z(u) z(u) > lim inf := l. u→α+ u − α u→α+ u − α 0 ≥ L := lim sup Let χ ∈ (l, L) and let {un } be an decreasing sequence converging to α such that z(un ) = χ, un − α d z(u) ( ) ≥ 0. du u − α |u=un and z(u) z(un ) 1 d z(u) ˙ ˙ ˙ Since du ( u−α ) = u−α (z(u) − u−α ), we have z(un ) − un −α = z(un ) − χ ≥ 0, hence z(un ) = −c − ˙ D(un )g(un ) ≥ χ. χ(un − α) Passing to the limit as n → +∞, since χ < 0, we have χ2 + cχ + [Dg] (α+ ) ≥ 0, that is 1 χ ≤ 2 (− c2 − 4D (α+ )g(α) − c) or 1 ( 2 c2 − 4D (α+ )g(α) − c) ≤ χ. Similarly, we can choose an decreasing sequence {νn } converging to α, such that z(νn ) = χ, νn − α d z(u) ( ) ≤ 0. du u − α |u=νn and we can deduce χ2 + cχ + [Dg] (α) ≤ 0, hence 1 (− 2 1 c2 − 4D (α+ )g(α) − c) ≤ χ ≤ ( 2 46 c2 − 4D (α+ )g(α) − c). By the arbitrariness of χ ∈ (l, L), we conclude that 1 λ1 = l = L = (± 2 c2 − 4D (α+ )g(α) − c). (2.20) Given c > c∗ , let z(u) and z ∗ (u) be the solutions of Equation (2.7) on [α, 1] with z(α) = z(1) = 0, respectively, for c and c∗ . Assuming the existence of u ∈ [α, 1] satisfying z ∗ (u) ≥ z(u), from (2.7) we then have z ∗ (u) = −c∗ − ˙ D(u)g(u) D(u)g(u) = z(u). ˙ ∗ (u) > −c − z z(u) This implies the contradictory conclusion 0 = z ∗ (1− ) > z(1− ) = 0. Hence z ∗ (u) < z(u) for all u ∈ [α, 1]. Which yield z ∗ (α+ ) ≤ z(α+ ), from ˙ ˙ 1 z ∗ (α+ ) = (± ˙ 2 1 which leads to λ1 = 2 ( (c∗ )2 − 4D (α+ )g(α) − c∗ ), c2 − 4D (α+ )g(α) − c). When D (α+ ) = 0, we obtain z(u) = lim u→α+ D(u) u→α+ 1 = (+ 2 lim z(u) u − α · u − α D(u) 1 D (α+ ) −4D (α+ )g(α) 1 = · 2( c2 − 4D (α+ )g(α) + c) D (α+ ) −2g(α) = < 0. 2 − 4D (α+ )g(α) + c) ( c c2 − 4D (α+ )g(α) − c) · 47 When D (α+ ) = 0, It is natural from continuity that we guess z(u) g(α) =− . c u→α+ D(u) lim Next, we establish this rigorously. If D (α+ ) = 0, we can get z(α)(z(α) + c) = 0. Hence ˙ ˙ z(α) = 0 or z(α) = −c. Because z(α+ ) ≥ z ∗ (α+ ), we get z(α) = 0 ˙ ˙ ˙ ˙ ˙ By the mean value theorem, there exists a sequence {νn }n such that νn → α+ and z(νn ) → 0 as n → ∞. From Equation (2.7), we can get ˙ z(νn ) g(α) →− , D(νn ) c z(u) g(α) Next, we show that limu→α+ D(u) = − c . Fix as n → ∞. (2.21) g(α) ∈ (0, c ). Since D(u) is positive in the right of α, from (2.21) there exists n0 such that z(νn ) ≤ (− g(α) + )D(νn ), c n ≥ n0 . 
(2.22) g(α) ˙ Let ξ (u) := (− c + )D(u). Since ξ(u) → 0 as u → α+ and −c + c g(u) g(α) → −c + c > 0, g(α) − c g(α) − c as u → α+ , g(u) ˙ so it is possible to find a δ > 0 such that −c + c g(α)−c > ξ (u) for each u ∈ (α, α + δ). Let n1 ≥ n0 such that νn1 ∈ (α, α + δ). We claim that z(u) ≤ ξ (u) for all u ∈ (α, νn1 ), that is z(u) ≤ (− g(α) + )D(u), c 48 for u ∈ (α, νn1 ). (2.23) Suppose by contradiction that there exists u0 ∈ (α, νn1 ) such that z(u0 ) > ξ (u0 ). Since n1 ≥ n0 , from (2.22) it follows that z(νn1 ) ≤ ξ (νn1 ). The mean value theorem concludes ˙ that there exists u1 ∈ (u0 , νn1 ), such that z(u1 ) > ξ (u1 ) and z(u1 ) − ξ (u1 ) < 0. On the ˙ other hand, we have z(u1 ) = −c + ˙ D(u1 )g(u1 ) D(u1 )g(u1 ) g(u1 ) > −c + = −c + c . g(α) −z(u1 ) g(α) − c ( c − )D(u1 ) g(u1 ) ˙ Since u1 ∈ (u0 , νn1 ) ⊂ (α, α + δ), we get z(u1 ) > −c + c g(α)−c > ξ (u1 ). A contradiction. ˙ Similarly, from (2.21) we can choose n0 such that z(νn ) ≥ η(νn ) := (− g(α) − )D(νn ), c for n ≥ n0 . (2.24) Again, η (u) → 0 as u → α+ and ˙ −c + c g(α) g(u) → −c + c < 0, g(α) + c g(α) + c as u → α+ , g(u) there exists δ > 0 such that −c + c g(u)+c < η (u) for each u ∈ (α, α + δ). Let n1 ≥ n0 such ˙ that νn ∈ (α, α + δ). We prove by contradiction that z(u) ≥ η (u) for each u ∈ (α, νn ), 1 1 that is z(u) ≥ (− g(α) − )D(u), c for u ∈ (α, νn1 ). (2.25) Suppose there exists u0 ∈ (α, νn1 ) such that z(u0 ) < η (u0 ). Since n1 ≥ n0 , from (2.24) it follows that z(νn1 ) ≥ η (νn1 ). Therefore, there exists u1 ∈ (u0 , νn ), such that z(u1 ) < 1 49 η (u1 ) and z(u1 ) − η (u1 ) > 0. On the other hand we have ˙ ˙ z(u1 ) = −c + ˙ D(u1 )g(u1 ) D(u1 )g(u1 ) g(u1 ) < −c + . = −c + c g(α) −z(u1 ) g(α) + c ( c + )D(u1 ) g(u ) 1 Since u1 ∈ (u0 , νn ) ⊂ (α, α + δ), we get z(u1 ) < −c + c g(α)+c < η (u1 ), hence a contra˙ ˙ 1 diction holds. By the Inequality (2.23) and (2.25), we proved that z(u) g(α) =− . c u→α+ D(u) lim The same argument can be used to prove that z(u) 1 = ( c2 − 4D (α− )g(α) − c), − u−α 2 u→α 2g(α) z(u) + u (T1 ) = lim =− . c + c2 − 4D (α− )g(α) u→α− D(u) lim 2 Theorem 2.3.2 Let D(u) ∈ C 1 [0, 1] and g(u) ∈ C[0, 1] be given functions respectively satisfying (2) and (4). Then, there exists a value c∗ > 0, satisfying 2 D (α)g(α) ≤ c∗ ≤ 2 max{ sup − s∈(0,α] D(s)g(s) D(s)g(s) , sup }, s s∈(α,1] s − α such that Equation (3) has i) no C 1 traveling wave solution satisfying (1.10) for c < c∗ ; ii) a unique (up to space shifts) C 1 traveling wave solution for c > c∗ . 50 Proof. First by using the results from Theorem 2.2.1, we obtain the existence of weak − traveling wave solutions for c > c∗ . Secondly, from Lemma 2.3.1, we can obtain u (T1 ) = + u (T1 ) when D(u) ∈ C 1 [0, 1] and c > c∗ . Which lead to the existence of C 1 traveling wave solution in Theorem 2.2.1. 2 51 Chapter 3 Regularity and numerical analysis of the new population model If we consider both a continuous and discrete models of the same phenomenon, we would expect the models to give rise to similar solutions at length scales where their ranges of applicability overlap. In the following, we would treat the continuous and discrete model separately and obtain some similar and different properties of these two models. 3.1 Regularity of the continuous model In this section, we first give the definition of a weak solution of Equation (1) with D(u) always satisfying Equation (2) and non-flux condition on u. 
Definition 3.1.1 A locally continous function u(x, t) is a weak solution of the backward forward Equation (1) on QT = {(x, t), x ∈ [0, 1], t ∈ [0, T ]} with non-flux condition if 1 0 u2 (x, t) + |D(u)|u2 dx x (3.1) 1 is uniformly bounded for t ∈ [0, T ] and if for any test function φ(x, t) in C0 [QT ], T 0 1 0 [uφt − D(u)ux φx ]dxdt = 0. 52 (3.2) We first establish the first derivative estimates on u in the domain where D(u) > 0. For simplicity, we limit ourselves to showing how to obtain an a priori estimate. For any point (x0 , t0 ) such that u(x0 , t0 ) > α, there is a δ > 0 such that D[u(x, t)] ≥ c > 0 for (x, t) ∈ QT = [x1 , x1 + δ] × [t0 , t0 + δ] and (x0 , t0 ) ∈ QT . We estimate η(x)2 u2 dxdt, x where the cut-off function η(x) is smooth and vanishes outside [x1 , x1 + δ]. Differentiating the equation with respect to t and multiplying by η 2 u (if not differentiable, we use the smoothing operator to u(x, t)), we find 1d 2 dt η 2 u2 dx = = η 2 ut udx = − D(u)ux (η 2 u)x dx −[D(u)η 2 u2 + 2D(u)ηηx ux u]dx, x So, integrating with respect to time and using Young inequality, c η 2 u2 dxdt ≤ C + x 2 2D[ η 2 u2 + (C/ )ηx u2 ]dxdt, x for some constant C and for every . By taking small enough, we can get the derivative estimate in QT . Because u is locally H 1 in x, D(u) is strictly positive, by Nash-Moser estimate we can get u ∈ C α1 ,α1 /2 , then according to Ladyzenskaya et al. [Chapter V, Theorem 7.4], we can get u ∈ C 1+α1 ,α1 /2 . C 2+α2 estimate. Once we get the C 1+α1 ,α1 /2 estimate, coming back to our Equation (1), we established that D(u) has H¨lder continuity. If we view it as a linear equation, o 53 differentiating the equation, we now have uxt = (D(u)ux )xx = [D(u)uxx + D (u)ux ux ]x . If we let v = ux , we get vt = [D(u)vx ]x + [D (u)ux ]x v + [D (u)ux ]vx , which is a uniformly forward parabolic equation for ux . By using Schauder estimate, we can conclude that u is locally C 2+α2 ,1+α2 /2 for some α2 ∈ (0, 1). Higher derivative estimates. Once C 2+α2 ,1+α2 /2 estimates have been found, it is easy to derive interior estimates for higher-order derivatives. For instance, we may now differentiate the equation once more to find an equation for uxx : uxxt − [D(u)uxx + D (u)u2 ]xx = 0, x which has H older continuous coefficients. Therefore, we conclude that uxx ∈ C 2+α2 ,1+α2 /2 , ¨ which gives a fourth derivative estimate. The procedure is easily iterated and continues to infinity given D(s) is infinitely differentiable. From above estimate, we can get the following theorem for these big enough initial data. Theorem 3.1.2 Suppose that D(u) satisfies (2), then there exists a nonnegative classical solution in C 2,1 (QT ) of Problem (1) for all T > 0, provided α < u(x, 0) ∈ C 1,β ([0, 1]), β ∈ (0, 1), and satisfies the non-flux conditions D(u)ux (0, t) = D(u)ux (1, t) = 0. 54 Furthermore, when u0 ≥ α everywhere on [0, 1], then we have the following asymptotic theorem. Theorem 3.1.3 Suppose u0 ≥ α everywhere on [0, 1], and u(x, t) is the solution of Equation (1) with non-flux boundary condition, then solution u(x, t) will goes to constant 1 C = 0 u(x, 0)dx. Proof. First from the estimate above, we can see u(x, t) is infinitely differentiable given u(x, 0) > α every where on [0, 1]. So we have 1 1 d 1 (D(u)ux )x dx = D(u)ux |1 = 0, ut dx = udx = 0 dt 0 0 0 (3.3) which yields the conservation of the total density. When u(x, 0) > α everywhere in [0, 1], we can have D(u) > 0 by using the maximum principle of parabolic equation. 
Even more we can have u(x, t) ≥ α + δ for some small number δ and t > 0. 1 1 1 d 1 2 u dx = 2 uut dx = 2 u(D(u)ux )x dx = −2 D(u)u2 dx ≤ 0, x dt 0 0 0 0 (3.4) which means the ”energy” of the solution is decreasing. The last step we prove the solution will tend to the constant C as time t goes to infinity. 1 1 d 1 (u − C)2 dx = 2 (u − C)ut dx = 2 (u − C)x [D(u)ux ]x dx dt 0 0 0 1 = −2 1 (u − C)2 D(u)ux dx ≤ −2δ 0 0 1 ≤ −2δc (u − C)2 dx x (u − C)2 dx (By using Poincar´ inequality). e 0 55 Then by using Gronwall’s inequality, we can get 1 1 (u − C)2 dx ≤ ( 0 0 (u0 − C)2 dx)e−2δct . (3.5) 1 Let t → +∞, we can get 0 (u − C)2 dx = 0 a.e. By combining u is locally continuous, we obtain limt→+∞ u(x, t) = C. The proof of u − α ≥ δ > 0 for all t > 0. we claim min u(x, t) ≥ min u0 (x), x∈[0,1] x∈[0,1] otherwise min x∈[0,1],0≤t≤T u(x, t) = u(x0 , t0 ), T ≥ t0 > 0. (3.6) Let ν = u(x, t) + t, > 0, then νt = u t + = [D(u)ux ]x + , = [D(u)νx ]x + , 2 = D (u)νx + D(u)νxx + . νx (0, t) = νx (1, t) = 0, ν(x, t) can not choose a min in t > 0. If not, ν(x1 , t0 ) = min . • Case 1, x1 ∈ (0, 1), νx = 0, νxx ≥ 0, contradiction. • Case 2, x1 = 0 or 1. 56 (3.7) In case 2, we have u(x0 , t0 ) ≥ α or u(x0 , t0 ) < α. For the first case u(x0 , t0 ) ≥ α, we just let x0 = 1 and use the non-flux condtion on u on the boundary, we obtain D(u)ux − 0 . x−1 x→1− [D(u)ux ]x (1, t0 ) = lim Because u(1, t0 ) is the minimum point, we can have ux (1, t0 ) ≤ 0 and D(u(x, t)) > 0 near the boundary x = 1. So we can get [D(u)ux ]x (1, t0 ) ≥ 0 which will contradict to vt ≤ 0. If u(1, t0 ) < α, using the local continuity of u, we can find t1 < t0 such that u(1, t1 ) = α is the minimum point for u(x, t1 ), x ∈ [0, 1]. By using the same argument as in the first case, we can obtain a contradition. So minx∈[0,1],0≤t≤T ν(x, t) = minx∈[0,1] ν(x, 0) = minx∈[0,1] u(x, 0), which means u(x, t) + t ≥ min u(x, 0), ∀ > 0. (3.8) x∈[0,1] When → 0, u(x, t) ≥ minx∈[0,1] u(x, 0). For the canse u(x, 0) ≥ α, we use the approximation to Equation (1)    u = [(D(u) + )u ]x , (x, t) ∈ [0, 1] × (0, T ]  t x     ux (0, t) = ux (1, t) = 0,      u (x, 0) = u (x, 0). 0 Using the above result, we can get limt→+∞ u (x, t) = C. Then let (3.9) → 0, we can get limt→+∞ u(x, t) = C, for u(x, 0) ≥ α on [0,1]. 2 By using the C ∞ estimate, we also can get the following non-existence result. Theorem 3.1.4 Suppose 0 < u(x, 0) < α a.e. and u(x, 0) is not C ∞ on [0, 1], then Equation 57 (1) has no weak solution. Proof. If not, supposing there exists an weak solution u(x, t) < α in QT := [0, 1] × [0, T ], by the linear transform τ = T − t, we can reverse the time interval and get uτ = −(D(u)ux )x , ∀(x, t) ∈ QT . (3.10) Because −D(u) > 0, then from the C ∞ estimate, we can have u(x, 0) ∈ C ∞ (QT ). If the initial value u(x, 0) is not infinitely differentiable, then there is no weak solution al all. 2 3.2 Numerical analysis of the lattice model In our derivation of the new population model, we find every assumption is reasonable. But we find in Theorem 3.1.4 that there is no weak solution when the initial date is not C ∞ . Which leads us to consider the properties of the original lattice model (1.27) to see whether this model is reasonable. Theorem 3.2.1 When considering equation (1.27) with initial data 1/2 = α ≤ u(j, 0) ≤ 1, j = 0, . . . , N and non-flux boundary condition, we obtain 1/2 = α ≤ u(j, t) ≤ 1 for all t ≥ 0. Furthermore, if u(j, 0) is monotone in j, then u(j, t) is monotone in j, 1 lim u(j, t) = t→∞ N −1 58 N −1 u(j, 0). 
j=1 Figure 3.1: Asymptotic behavior of monotone initial solution (t × 104 ). For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation. Proof. Suppose u(k1 , t) = max0≤j≤N u(j, t), u(k2 , t) = min0≤j≤N u(j, t), ut = u(j, t) then j from Equation (1.27), we can get ut ut −1 k1 k1 u(k1 , t + τ ) − u(k1 , t) = (ut + ut −1 − 1)(ut −1 − ut ) k1 k1 k1 k1 2 ut ut +1 k1 k1 + (ut + ut +1 − 1)(ut +1 − ut ), k1 k1 k1 k1 2 (3.11) ut + ut −1 − 1 ≥ 0, ut + ut +1 − 1 ≥ 0 and ut −1 − ut ≤ 0, ut +1 − ut ≤ 0. Which k1 k1 k1 k1 k1 k1 k1 k1 lead to u(k1 , t + τ ) − u(k1 , t) ≤ 0. 59 The same idea can be used for the minimum value point and get u(k2 , t + τ ) − u(k2 , t) ≥ 0. For arbitrary point u(i, t) u(i, t + τ ) − u(i, t) = ut ut ut ut i i−1 t (ui + ut − 1)(ut − ut ) + i i+1 (ut + ut − 1)(ut − ut ) i i i−1 i+1 i−1 i+1 i 2 2 ≤ 1/2[(ut − ut )] + 1/2[(ut − ut )] i i k1 k1 ≤ ut − ut . i k1 Again u(i, t + τ ) − u(i, t) = ut ut ut ut i i−1 t (ui + ut − 1)(ut − ut ) + i i+1 (ut + ut − 1)(ut − ut ) i i i−1 i−1 i i+1 i+1 2 2 ≥ 1/2[(ut − ut )] + 1/2[(ut − ut )] i i k2 k2 ≥ ut − ut . i k2 Which leads the result min0≤j≤N u(j, 0) ≤ u(j, t) ≤ max0≤j≤N u(j, 0). If initial solution is monotone and suppose 1/2 ≤ u(1, 0) ≤ u(2, 0) ≤ · · · ≤ u(N − 1, 0) ≤ u(N, 0) ≤ 1, and let t Cj = ut ut j j−1 2 (ut + ut − 1), j j−1 60 (3.12) we have ∆U t+τ = [C]t ∆U t , (3.13) where ∆U t+τ = [(ut+τ − ut+τ ), . . . , (ut+τ − ut+τ )]T , 2 1 N N −1  t t 0 ... 0 0 0 C2  1 − 2C1   t t t  0 0 0 1 − 2C2 C3 . . . C1    . . . .. . . . . . . . . . [C]t =  . . . . . . .    t t t  0 0 0 . . . CN −2 1 − 2CN −1 CN   t t 0 0 0 ... 0 CN −1 1 − 2CN                t t u uj−1 t When 1/2 ≤ ut ≤ 1, 0 ≤ Cj = j 2 (ut + ut − 1) ≤ 1/2, so [C]t is positive and if j j−1 j U t is positive, we can obtain U t+τ ≥ 0. Because U 0 ≥ 0 which lead to the conservation of monotonicity of ut for all t > 0. j From above two results, we can see min0≤j≤N u(j, t) = u(0, t) = u(1, t) is bounded and increasing, so the limit ut ut t 2 1 (u + ut − 1)(ut − ut ) 1 2 2 1 t→∞ 2 lim u(1, t + τ ) = lim u(1, t) + lim t→∞ t→∞ ut ut ut ut exists. We can get limt→∞ 22 1 (ut + ut − 1)(ut − ut ) = 0 and 22 1 (ut + ut − 1) > 0, which 1 2 2 1 1 2 lead to limt→∞ (ut − ut ) = 0. Combined with the existence of limt→∞ ut , we can get the 2 1 1 existence of limt→∞ ut . By using the same idea, which will lead to 2 1 lim ut = lim ut = · · · = lim ut = 1 2 N t→∞ t→∞ t→∞ N −1 61 N −1 u(j, 0). j=1 2 Theorem 3.2.2 For all the initial data 0 ≤ u(j, 0) ≤ 1, j = 0, . . . , N , with non-flux boundary condition, the solution of Equation (1.27) is bounded in [0, 1] and the total density is conservative. Proof. For the non-flux boundary condition, we just assume u(0, t) = u(1, t), u(N, t) = u(N − 1, t). From Equation (1.27), we have t t ut+τ = ut + Cj (ut − ut ) + Cj+1 (ut − ut ). j j−1 j j+1 j j (3.14) We get N −1 N −1 ut+τ j j=1 t t ut + C1 (ut − ut ) + C2 (ut − ut ) 0 1 2 1 j = j=1 t t + C2 (ut − ut ) + C3 (ut − ut ) + · · · + 1 2 3 2 t t + CN −1 (ut −2 − ut −1 ) + CN (ut − ut −1 ), N N N N t the summation of the Cj terms will result to zero. From which we obtain N −1 N −1 ut+τ = j ut , j j=1 j=1 the lattice model is conservative. 62 (3.15) Also we can rewrite Equation (1.27) to the form ut+τ j = ut j 2 + ut ut j j−1 2 (ut j + ut − 1)(ut − ut ) + j j−1 j−1 ut j 2 + ut ut j j+1 2 (ut + ut − 1)(ut − ut ). 
j j+1 j+1 j Considering the following equation f (x, y) = x + xy(x + y − 1)(y − x), x, y ∈ [0, 1], (3.16) if we can find the upper and lower bound of f (x, y) such that 0 ≤ f (x, y) ≤ 1, then we obtain the result u(j, t) ∈ [0, 1] for all t > 0. f (0, y) = 0, f (1, y) = 1 + y 3 − y 2 , fy (1, y) = 3y 2 − 2y. When we let fy (1, y) = 0 which lead to f (1, 2/3) = 23/27 > 1/2 and f (1, 0) = 1. We have 1/2 < f (1, y) ≤ 1. Again, we test other boundary value f (x, 0) = x and f (x, 1) = x + x2 (1 − x), fx (x, 1) = 1 + 2x − 3x2 . Let fx (x, 1) = 1 + 2x − 3x2 = 0, we get x = −1/3 or x = 1 and 0 ≤ f (x, 1) ≤ 1. Now we test if the function can get the extreme value when x, y ∈ (0, 1). Taking the derivatives, we get fx = 1 + y(x − y + y 2 − x2 ) + xy(1 − 2x) = 0, fy = x(x − y + y 2 − x2 − y + 2y 2 ) = 0. 63 After simplifications and let x = 0, we get fx = 1 + y(1 − 2y) + xy(1 − 2x) = 0, fy = 3y 2 − x2 + x − 2y = 0. We can get fx > 0 on x ∈ (0, 1/2) or y ∈ (0, 1/2), when x, y ∈ [1/2, 1], by using Theorem 3.2.1, we have 1/2 ≤ u(j, t) ≤ 1 for any 0 ≤ j ≤ N, t ≥ 0. But from equation fx = 0, we have 0 ≤ u(j, t) ≤ 1, ∀0 ≤ j ≤ N, t ≥ 0. 2 In the following we consider the interface behaviors of the forward region Q+ (t) = d {(i, t)|u(i, t) ≥ 1/2, i ∈ [0, 1, 2, . . . , N ]} and backward region Q− (t) = {(i, t)|u(i, t) < 1/2, i ∈ d [0, 1, 2, . . . , N ]}. Theorem 3.2.3 Under any general initial condition u(j, 0), for any t1 > t ≥ 0 Q+ (t) ⊆ Q+ (t1 ). d d (3.17) We first begin at the simplest situation when Q+ (t) = {(i, t)|u(i, t) ≥ 1/2, i ∈ d Proof. [1, 2, 3, . . . , N ]} = (k, t). If u(k, t) = 1/2, from Equation (1.27) ut+τ k = ut k + ut ut k k−1 2 (ut k + ut k−1 − 1)(ut k−1 − ut ) + k 64 ut ut k k+1 2 (ut + ut − 1)(ut − ut ), k k k+1 k+1 , Figure 3.2: The behaviors of backward forward regions as time changes from t=0 to t=0.95095. t t t t t t and we have (ut + ut k k−1 − 1) < 0, (uk−1 − uk ) < 0, (uk + uk+1 − 1) < 0, (uk+1 − uk ) < 0, which lead to ut+τ ≥ ut = 1/2. k k Suppose u(k, t) > 1/2, then we have u(k, t) = 1/2 + δ for some positive δ and Equation (1.27) is of the form ut ut ut ut ut+τ = 1/2 + δ + k k−1 (ut + ut − 1)(ut − ut ) + k k+1 (ut + ut − 1)(ut − ut ). k k k−1 k+1 k−1 k+1 k k k 2 2 t t If (ut + ut k k−1 − 1) ≤ 0 and (uk + uk+1 − 1) ≤ 0, it is easy to obtain u(k, t + τ ) ≥ u(k, t). Suppose (ut +ut −1) > 0 and (ut +ut −1) < 0, we have u(k−1, t) > 1−u(k, t) = 1/2−δ k k−1 k k+1 and u(k, t) + u(k − 1, t) − 1 < 1/2 + δ + 1/2 − 1 < δ, 65 u(k, t) − u(k − 1, t) < 2δ. Then ut ut ut+τ > 1/2 + δ − δ{ut ut }δ + k k+1 (ut + ut − 1)(ut − ut ). k k−1 k k+1 k+1 k k 2 Because ut ut k k−1 < 1/4 and 0 < δ < 1/2, we can obtain δ − 2δ 2 ut ut k k−1 ≥ 0, (3.18) which lead to u(k, t + τ ) ≥ u(k, t) even for both (ut + ut − 1) > 0, (ut + ut − 1) > 0. k k−1 k k+1 When Q+ (t) = {(i, t)|u(i, t) ≥ 1/2, i ∈ [1, 2, 3, . . . , N ]} has more than two successive d points, we rewrite Equation (1.27) as ut+τ j = ut j 2 + ut ut j j−1 2 ut ut ut t +ut −1)(ut −ut )+ j + j j+1 (ut +ut −1)(ut −ut ). (uj j−1 j−1 j j j+1 j+1 j 2 2 (3.19) Let’s consider the boundary points, suppose 0 ≤ u(j − 1, t) < 1/2 and 1/2 ≤ u(j, t), u(j + 1, t) ≤ 1. 
Using the result in simplest situation, we obtain ut j 2 + ut ut j j−1 2 (ut + ut − 1)(ut − ut ) ≥ 1/2 × 1/2 = 1/4, j j−1 j−1 j (3.20) and if u(j, t) ≥ u(j + 1, t) ≥ 1/2, ut j 2 + ut ut j j+1 2 (ut + ut − 1)(ut − ut ) j j+1 j+1 j ≥ 1/2(1/2 + δ + 1/2(1 + 1 − 1)(1/2 − 1/2 − δ) = 1/2(1/2 + δ − (1/2)δ) > 1/4, 66 otherwise if u(j, t) ≤ u(j + 1, t), ut j 2 + ut ut j j+1 2 (ut + ut − 1)(ut − ut ) j j+1 j+1 j ≥ ut ≥ 1/2. j When considering the points inside the forward region, we can have the result that 1/2 ≤ u(j, t) ≤ 1 by using the maximum and minimum principle from Theorem 3.2.1. Combining all these together, we complete the proof of theorem. 2 3.3 Asymptotic behaviors when N ≤ 4 In this section, we consider a special case when N = 4 and u(0, t) = u(4, t) = 0 for t ≥ 0 which corresponding to the hostile environment in biology, when N = 1, 2, 3, the asymptotic behaviors of the solutions are easy to be obtained. Case 1: u(1, 0) + u(2, 0) > 1, u(2, 0) + u(3, 0) < 1 and 0 ≤ u(1, 0) < u(2, 0). From Equation (1.27), we have ut ut ut+τ = ut + 1 2 (ut + ut − 1)(ut − ut ), 1 2 2 1 1 1 2 t t ut+τ = ut + C1 (ut − ut ) + C2 (ut − ut ), 2 1 2 3 2 2 ut+τ 1 (3.21) (3.22) + ut+τ 2 = ut+τ − ut+τ = 2 1 ut+τ = 3 ut+τ + ut+τ = 2 3 ut ut t + 2 3 (u2 + ut − 1)(ut − ut ), 3 3 2 2 t ut u t (ut − ut )(1 − 2C1 ) + 2 3 (ut + ut − 1)(ut − ut ), 2 1 2 3 3 2 2 ut ut ut + 2 3 (ut + ut − 1)(ut − ut ), 2 3 2 3 3 2 t ut u ut + ut + 2 1 (ut + ut − 1)(ut − ut ). 3 1 1 2 2 2 2 ut 1 + ut 2 67 (3.23) (3.24) (3.25) (3.26) If the initial solution satisfy case 1, we can see ut , ut + ut are increasing, ut , ut + ut are 1 1 2 3 2 3 decreasing. Also from Theorem 3.2.2, u(j, t) are bounded in [0, 1] which lead to the existence of the limit of each term and the following asymptotic behaviors of the solution lim ut t→∞ 1 → u1 = lim ut → u2 > 1/2, 2 lim ut t→∞ 3 → u3 = 0. t→∞ Case 2: u(1, 0) + u(2, 0) < 1, u(2, 0) + u(3, 0) < 1 and u(2, 0) is the min. From Equations (3.21)-(3.26), we have ut , ut are increasing, ut , ut + ut , ut + ut are decreasing and u(j, t) 1 3 2 1 2 2 3 are bounded in [0, 1] which lead to the existence of the limit of each term and the following asymptotic behaviors of the solution lim ut t→∞ 1 → u1 , lim ut → u2 = 0, 2 t→∞ lim ut t→∞ 3 → u3 . Case 3: u(1, 0) + u(2, 0) < 1, u(2, 0) + u(3, 0) < 1, u(1, 0) + u(2, 0) + u(3, 0) < 1 and u(2, 0) is the maximum. From Equations (3.21)-(3.26), we have ut , ut are decreasing, ut is 1 3 2 increasing and u(j, t) are bounded in [0, 1] which lead to the existence of the limit of each term and the following asymptotic behaviors of the solution lim ut t→∞ 1 → u1 = 0, lim ut → u2 = u(1, 0) + u(2, 0) + u(3, 0), 2 t→∞ lim ut t→∞ 3 → u3 = 0. 68 Case 4: u(1, 0) + u(2, 0) < 1, u(2, 0) + u(3, 0) < 1, u(1, 0) + u(2, 0) + u(3, 0) < 1 and u(3, 0) is the maximum. (When u(1, 0) is the maximum, it the same as u(3, 0) is the maximum.) From Equations (3.21)-(3.26), we have ut , ut are decreasing, ut is increasing and u(j, t) are 1 2 3 bounded in [0, 1] which lead to the existence of the limit of each term and the following asymptotic behaviors of the solution lim ut → u1 = 0, t→∞ 1 lim ut t→∞ 2 → u2 = 0, lim ut → u3 = u(1, 0) + u(2, 0) + u(3, 0). 3 t→∞ Case 5: u(1, 0) + u(2, 0) < 1, u(2, 0) + u(3, 0) < 1, 1 ≤ u(1, 0) + u(2, 0) + u(3, 0) and u(3, 0) is the maximum. (When u(1, 0) is the maximum, it the same as u(3, 0) is the maximum.) If u(2, 0) is the minimum, it goes to case 2. So we consider the case when u(3, 0) > u(2, 0) > u(1, 0). 
From Equations (3.21)-(3.26), we have ut , ut +ut are decreasing, ut , ut +ut 1 1 2 3 2 3 are increasing first. Because 1 ≤ u(1, 0) + u(2, 0) + u(3, 0), when ut + ut first pass 1 and 2 3 u(2, t) become maximum, then this become similar to Case 1 and the solution u(j, t) have the asymptotic limits. When u(1, t) + u(2, t) < 1, u(2, t) + u(3, t) > 1 and u(3, t) is the maximum. In this case u(1, t) + u(2, t) < 1 for all t, u(1, t) is decreasing and u(2, t) > u(1, t) for all t. In this case we can get lim ut = 0, lim ut = lim ut = 1/2{u(1, 0) + u(2, 0) + u(3, 0)}. 1 2 3 t→∞ t→∞ t→ Case 6: u(1, 0) = u(3, 0) and u(1, 0) + u(2, 0) > 1. If u(2, 0) is the minimum, then u(1, t) + u(2, t), u(2, t) + u(3, t) are increasing and u(1, t), u(3, t) are decreasing and we can 69 get the asymptotic behaviors of solutions lim u(1, t) = lim u(1, t) = lim u(1, t) = 1/3(u(1, 0) + u(2, 0) + u(3, 0)). t→∞ t→∞ t→∞ When u(2, 0) is the maximum point. In this case u(1, t), u(3, t) are increasing and u(2, t), u(1, t)+ u(2, t) are decreasing. We suppose that ut + ut = 1 + δ, then 2 1 ut ut ut+τ + ut+τ = 1 + δ − δ 1 2 (ut − ut ) > 1. 1 2 1 2 2 From the monotonicity and boundness of each term, we can get that lim ut , lim ut and lim ut 1 2 3 are all exist. Case 7: u(1, 0) + u(2, 0) > 1 and u(2, 0) + u(3, 0) > 1. If u(2, 0) is the minimum point, in this case, we can get ut + ut > 1, ut + ut > 1 for all t > 0. So we can get the following 1 2 2 t t |ut+τ − ut+τ | ≤ max{|1 − 2C1 ||ut − ut |, C2 |ut − ut |} 2 1 3 2 2 1 t t |ut+τ − ut+τ | ≤ max{|1 − 2C2 ||ut − ut |, C1 |ut − ut |}, 3 2 2 1 3 2 which leads to the convergence of the asymptotic behaviors of u(1, t), u(2, t) and u(3, t). In the case u(3, 0) > u(2, 0) > 1/2 > u(1, 0), we can get u(1, t) + u(2, t) is increasing first and u(2, t) + u(3, t) is decreasing but always larger than 1. If u(3, t) keep as maximum in this situation, then we can see the existence of the limits of the solution. But when u(2, t), u(3, t) change their orders, we can find u(1, t) + u(2, t) is decreasing and u(1, t) + u(2, t) may be less than 1, in this case we still have some difficulties to prove if the system have asymptotic limits or periodic solutions. This is one of the open questions we still need to answer. Case 8: u(1, 0) + u(2, 0) < 1 and u(2, 0) + u(3, 0) > 1, u(2, 0) > 1/2 and u(3, 0) is the 70 maximum. We can see u(1, t) + u(2, t) is increasing and u(3, t) is decreasing. If u(1, t) + u(2, t) < 1 for all t > 0, then we can get the limits of the solution. But if u(1, t0 )+u(2, t0 ) > 1 and u(2, t0 ) is the maximum for some t0 > 0, this goes back to the open question we mentioned above. Case 9: u(1, 0) + u(2, 0) = 1, u(2, 0) = u(3, 0), by using Equations (3.21)-(3.26), we can see that this is an trivial steady state solution. Case 10: 1/2 ≤ u(1, 0), u(3, 0) ≤ u(2, 0) ≤ 1 or 1/2 ≤ u(2, 0) ≤ u(1, 0), u(3, 0) ≤ 1. For the first situation u(2, 0) is the local maximum. Using Equations (3.21)-(3.26), we have ut+τ ≥ ut , ut+τ ≥ ut , ut+τ ≤ ut and 1 3 3 2 2 1 ut ut t ut+τ − ut+τ = (ut − ut )(1 − 2C1 ) + 2 3 (ut + ut − 1)(ut − ut ) 2 3 3 2 2 1 2 1 2 t ≤ (ut − ut )(1 − 2C1 ) < (ut − ut ). 2 1 2 1 If in the case ut+τ − ut+τ < 0, we can check |ut+τ − ut+τ | < 1/2|ut − ut |. But when u(2, t) 2 3 2 1 2 1 is the local minimum ut ut t ut+τ − ut+τ = (ut − ut )(1 − 2C1 ) − 2 3 (ut + ut − 1)(ut − ut ) 2 3 3 2 1 2 1 2 2 t ≤ (ut − ut )(1 − 2C1 ). 1 2 ut+τ ≤ ut + 1/2(ut − ut ) + 1/2(ut − ut ) ≤ max{ut , ut }. 
2 1 2 3 2 1 3 2 Using the above three situations, we can obtain t max(|ut+τ − ut+τ |, |ut+τ − ut+τ |) ≤ max[(1 − 2C1 ), 1/2]{max(|ut − ut |, |ut − ut |)}, (3.27) 3 2 1 2 3 2 1 2 71 t where C1 = ut ut t 1 2 (u 1 2 t + ut − 1) ≤ max{u(1, 0), u(2, 0), u(3, 0)}. So 0 < (1 − 2C1 ) < 1 2 for u(1, t) = u(2, t) (if u(1, 0) = u(2, 0) and it is easy to test the sequence will change to monotone sequence and which will converge to the average), which lead to lim |(ut − ut )| = 0. 2 3 lim |(ut − ut )| = 0, 2 1 t→∞ t→∞ We get lim u(1, t) = lim u(2, t) = lim u(3, t). t→∞ t→∞ t→∞ 2 72 Chapter 4 Traveling wave solution for a cell-to-cell adhesion model In this chapter, we consider the existence and regularity of the traveling wave solution for a cell-to-cell model with adhesion which is different to our first model. ρt = [D(ρ)ρx ]x + g(ρ) t ≥ 0, x ∈ R, with quadratic coefficient D(ρ) = 3γρ2 − 4γρ + 1, where ρ(x, t) represent the cell density and γ is the adhesive coefficient between cells. 4.1 Properties of the weak traveling wave solution We recall that a traveling wave solution for Equation (7) is a solution of the form ρ(x, t) = ρ(x − ct) for some constant traveling speed c. The equation we have to deal with is changed to (D(ρ)ρ ) + cρ + g(ρ) = 0, 73 (4.1) Where stands for derivation with respect to the wave variable ξ = x − ct. Following [28], we have the definition of the classical traveling wave solution for D(ρ) ∈ C 1 [0, 1]. Definition 4.1.1 When D(ρ) ∈ C 1 [0, 1], a classical traveling wave solution of (7) is a function ρ ∈ C 1 (a, b), with (a, b) ⊆ R, such that D(ρ)ρ ∈ C 1 (a, b), satisfying Equation (4.1) in (a, b) and the boundary conditions ρ(a+ ) = 1 lim D(ρ(ξ))ρ (ξ) = ξ→a+ ρ(b− ) = 0, , lim D(ρ(ξ))ρ (ξ) = 0. ξ→b− (4.2) (4.3) Condition (4.3) added to the classical boundary condition (4.2) is motivated by the possible occurrence of sharp type profiles, that is, solutions reach the equilibria at a finite value a or b. However, when the existence interval is the whole real line, then condition (4.3) is automatically satisfied and then Definition 4.1.2 is reduced to the classical one which is the front type traveling wave solution. However ρ may not be C 1 because of the singularity of the equation at ρ = ρ (γ), ρ (γ). We have to give a definition of weak traveling wave solution. Definition 4.1.2 A weak traveling wave solution of (7), is a function ρ ∈ C(a, b) satisfying boundary condition (4.2),(4.3) and for any t with ρ(t) = 0, ρ (γ), ρ (γ) and 1, ρ (t) exists, b a g(ρ(s))ds = c and t D(ρ)ρ (t) + cρ(t) + g(ρ(s))ds = c. a 74 (4.4) Remark: Equation (4.4) comes from t [D(ρ)ρ (t) + cρ(t) + g(ρ(s))ds] a = (D(ρ)ρ (t)) + cρ (t) + g(ρ(t)) = 0, b and limt→a+ ρ(t) = 1, limt→a+ D(ρ(t))ρ (t) = 0. a g(ρ(s))ds = c comes from b b a b −(D(ρ)ρ ) ds − c g(ρ(s))ds = a ρ ds = −c[ρ(b) − ρ(a)] = c. a Also from the definition of the weak traveling wave solution and the property of g(ρ), c > 0. When the set {t|ρ(t) = 0, ρ (γ), ρ (γ), 1} is isolated, then D(ρ(t))ρ (t) can be extended to a continuous function on (a, b). With this definition (4.4), if ρ(t) is a weak solution on (a, b) with −∞ < a < b < ∞, we can extend ρ(t) to (−∞, ∞), defined by     ρ(t), a < t < b,    ρ(t) = t ≤ a,  1,      0, t ≥ b. It is easy to check that ρ(t) is a weak traveling wave solution on (−∞, ∞). Proposition 4.1.3 Let ρ(t) be a weak traveling wave solution to (4.4) satisfying (4.2). If a = −∞, ρ(ξ) < 1 similarly if for ξ ∈ (−∞, b), b = +∞, ρ(ξ) > 0 on (a, +∞), 75 then then limξ→−∞ D(ρ(ξ))ρ (ξ) = 0, limξ→+∞ D(ρ(ξ))ρ (ξ) = 0. 
Proof. We only prove when a = −∞, the idea for b = +∞ is the same. Assume a = −∞, let T := sup{ξ : ρ(t) > ρ (γ) for all t ∈ (−∞, ξ)} and define ρ H(ρ) := D(s)g(s)ds, ρ (γ) 1 Φ(ξ) := [D(ρ(ξ))ρ (ξ)]2 − H(ρ(ξ)) 2 for ρ ∈ (ρ (γ), 1) and ξ ∈ (−∞, T ). Then Φ (ξ) = (D(ρ(ξ))ρ (ξ)) D(ρ(ξ))ρ (ξ) + D(ρ(ξ))g(ρ(ξ))ρ (ξ) = −cD(ρ(ξ))(ρ (ξ))2 . Since D(ρ(ξ)) ≥ 0, we have Φ(ξ) is monotone decreasing. The limit limξ→−∞ Φ(ξ) exists. On the other hand, we know 1 lim H(ρ(ξ)) = ξ→−∞ ρ (γ) D(s)g(s)ds ∈ R. Hence, from the definition of Φ(ξ), we deduce existence of the limit lim D(ρ(ξ))|ρ (ξ)| =: l ∈ [0, +∞]. ξ→−∞ If l > 0, note ρ(−∞) = 1 and D(ρ(ξ)) ≥ 0 in (−∞, T ), then limξ→−∞ |ρ (ξ)| := k ∈ (0, ∞], in contradiction with the boundedness of ρ. Therefore, l = 0. 2 As a consequence, when (a, b) = (−∞, ∞) and 0 < ρ(t) < 1 on (−∞, ∞), Condition (4.3) is satisfied automatically and Definition 4.1.1 reduces to the classical front type traveling wave solution. 76 In our situation D(ρ) = 3γρ2 − 4γρ + 1, D(0) = 1, D(1) = 0 when γ = 1, otherwise D(0), D(1) > 0. Since limξ→a+ D(ρ(ξ))ρ (ξ) = 0 and if D(ρ(a+ )) = D(1) = 0, we can get ρ (a+ ) = 0 and a = −∞, otherwise the solution could be continued in the whole half-line (−∞, a) which leads to the front type traveling wave solution. So we only need to consider the sharp type traveling wave solution when γ = 1 and the equation will degenerate at ρ = 1. If the adhesive coefficient γ < 1, we can get the front type traveling wave solutions. We should also remark that if there exists a < a1 such that ρ(a1 ) = 1, then ρ ≡ 1 on [a, a1 ]. In fact, if ρ(ξ) = 1, then we can find a subinterval (a2 , a3 ) ∈ (a, a1 ) such that 0 < ρ(ξ) < 1 on (a2 , a3 ), limξ→a2 ρ(ξ) = limξ→a3 ρ(ξ) = 1. Furthermore we have ξn ∈ (a2 , a3 ), ξn → a3 such that ρ (ξn ) ≥ 0, which implies that ξn ξn g(ρ(s))ds ≤ D(ρ(ξn ))ρ (ξn ) + cρ(ξn ) + cρ(ξn ) + a g(ρ(s))ds = c. a Hence c < cρ(a3 ) + a3 ξn g(ρ(s))ds = a lim (cρ(ξn ) + ξn →a3 g(ρ(s))ds) a ξn ≤ lim [D(ρ(ξn ))ρ (ξn ) + cρ(ξn ) + ξn →a3 g(ρ(s))ds] = c. a A contradiction. From now on, we consider the minimal interval (a∗ , +∞) (possibly the whole real lime) such that ρ(ξ) = 1 for every ξ ≤ a∗ . Proposition 4.1.4 Let ρ(t) be the weak traveling wave solution of the equation (7), then ρ(t) is strictly decreasing on (a∗ , +∞). 77 Proof. In the following, we only prove the monotonicity of the solution when 0 < ρ(t) < 1 and 3/4 ≤ γ ≤ 1. For 0 ≤ γ < 3 , we have D(ρ) > 0 on ρ ∈ [0, 1] and it is not difficult to 4 use previous results to get the monotonicity of the traveling wave solutions. When 3 ≤ γ ≤ 1, the diffusivity D(ρ) is negative whenever 4 ρ ∈ Iγ := (ρ (γ), ρ (γ)) = ( 2γ − γ(4γ − 3) 2γ + , 3γ γ(4γ − 3) ), 3γ and D(ρ) ≥ 0 otherwise. We split the proof in four steps. Step 1. Let us prove that 2γ + γ(4γ − 3) < ρ(t) < 1 and ρ (t) < 0 for every t ∈ (a∗ , T1 ), 3γ (4.5) where T1 := sup{t : ρ(ξ) > 2γ + γ(4γ − 3) 3γ for every ξ ∈ (a∗ , t)}. By contradiction, If there exists t1 ∈ (a∗ , T1 ), such that ρ (t1 ) ≥ 0 and ρ(t1 ) < 1. Then from d (D(ρ)ρ )|ξ=t = −cρ (t1 ) − g(ρ(t1 )) < −cρ (t1 ) ≤ 0, 1 dξ we can get D(ρ)ρ (t) is decreasing in a neighborhood of t1 , and using D(ρ) > 0, we deduce ρ (ξ) > 0 in (t1 − δ, t1 ) for some δ > 0. From ρ(a∗ ) = 1, there must be another point a∗ < t0 < t1 satisfying ρ (t0 ) = 0 and ρ(t0 ) is the local minimum of ρ on [a∗ , t1 ]. By using equation (4.4), we can get ρ (t0 ) = − g(ρ(t0 )) < 0, D(ρ(t0 )) a contradiction to the fact t0 is the local minimum. Hence (4.5) is proven. 78 Step 2. 
Let us now prove 0<ρ< 2γ − γ(4γ − 3) 3γ and ρ (t) < 0 for every t ∈ (T2 , +∞), (4.6) where T2 := inf{t : ρ(ξ) < 2γ − γ(4γ − 3) 3γ for every ξ ∈ (t, +∞)}. Let us argue by contradiction. If not, there exists a value ξ ∗ ∈ (T2 , +∞) such that ρ (ξ ∗ ) ≥ 0 √ 2γ− γ(4γ−3) and > ρ(ξ ∗ ) > 0. Then, 3γ d (D(ρ)ρ )|ξ=ξ ∗ = −cρ (ξ ∗ ) − g(ρ(ξ ∗ )) < −cρ (ξ ∗ ) ≤ 0, dξ hence D(ρ(ξ))ρ (ξ) is decreasing in a neighborhood of ξ ∗ , and since D(ρ(ξ)) > 0, we get ρ (ξ) > 0 in (ξ ∗ − δ, ξ ∗ ) for some δ > 0. Let ξ := inf{ξ : ρ (t) ≥ 0 for every t ∈ (ξ, ξ ∗ )}, ξ < +∞. Since ρ is increasing on (ξ, ξ ∗ ) and limt→+∞ ρ(t) = 0; Furthermore ρ (ξ) = 0. Since otherwise ρ (ξ) > 0, a contraction to the definition of ξ. Now we can have the local minimum at ξ ∈ (T2 , +∞), from Equation (4.4) we can have D(ρ)ρ + D (ρ)(ρ )2 + cρ + g(ρ) = 0, ρ (ξ) = 0 lead to ρ (ξ) = − g(ρ(ξ)) D(ρ(ξ)) 79 < 0, contradict to ρ has local minimum at ξ. Step 3. we prove lim D(ρ(t))ρ (t) = lim D(ρ(t))ρ (t) = 0. t→T1 t→T2 In the following we only prove limt→T1 D(ρ(t))ρ (t) = L = 0, the proof is the same for limt→T2 D(ρ(t))ρ (t) = 0. First, we prove lim D(ρ(t))ρ (t) = 0. − t→T1 The idea for limt→T + D(ρ(t))ρ (t) = 0 is the similar. For any small δ > 0 such that 1 (T1 − δ, T1 ) ⊂ (a∗ , T1 ), D(ρ(t)) > 0, ρ (t) < 0 which leads to D(ρ(t))ρ (t) < 0 on (T1 − δ, T1 ). Hence limt→T − D(ρ(t))ρ (t) = L ≤ 0. 1 If L < 0, from Equation (4.4), we have lim D(ρ(t))ρ (t) ≤ lim D(ρ(t))ρ (t) < 0. + t→T1 − t→T1 A contradiction to D(ρ(t))ρ (t) > 0, for T2 > t > T1 . + + Step 4. First we prove ρ (T1 ) < 0. Otherwise ρ (T1 ) > 0, then there exist ξ1 > T1 , ρ (ξ1 ) ≥ 0, ρ(ξ1 ) > ρ(T1 ) and ξ2 < T1 , ρ(ξ2 ) = ρ(ξ1 ) such that 0 < D(ρ(ξ1 )ρ (ξ1 ) − D(ρ(ξ2 )ρ (ξ2 ) + c(ρ(ξ1 ) − ρ(ξ2 )) + A contradiction. 80 ξ1 ξ2 g(ρ(t))dt = 0. Next, we prove the result ρ (ξ) < 0 for every ξ ∈ (T1 , T2 ). If not, there must be one point T1 < ξ0 < T2 such that ρ (ξ0 ) ≥ 0. We suppose ρ(ξ0 ) ∈ √ √ 2γ− γ(4γ−3) 2γ+ γ(4γ−3) , ). If ρ (ξ0 ) = 0, we have ( 3γ 3γ d (D(ρ)ρ )|ξ=ξ = −cρ (ξ0 ) − g(ρ(ξ0 )) < −cρ (ξ0 ) ≤ 0, 0 dξ hence D(ρ(ξ))ρ (ξ) is decreasing in a neighborhood of ξ0 , and since D(ρ(ξ0 )) < 0, we get ρ (ξ) > 0 in (ξ0 , ξ0 + δ) for some δ > 0. Let η = sup{t : ρ (ξ) > 0 for every ξ ∈ (ξ0 , t)} √ 2γ− γ(4γ−3) We can see that η < +∞ and ρ(η) ≥ , and η is a local maximum point. If 3γ √ √ 2γ− γ(4γ−3) 2γ+ γ(4γ−3) ρ(η) ∈ ( , ) ,by using equation (4.4) 3γ 3γ ρ (η) = − g(ρ(η)) > 0, D(ρ(η)) a contradiction. In the case ρ(η) ≥ 2γ+ √ γ(4γ−3) , there exists T1 3γ √ < η0 < η such that ρ(η0 ) = and 0< η0 T1 (D(ρ)ρ ) ds + η0 g(ρ)ds = − T1 η0 T1 A contradiction. 81 ρ (s)ds = 0. 2γ+ γ(4γ−3) 3γ 2γ− √ √ γ(4γ−3) 2γ+ γ(4γ−3) , ) 3γ 3γ and ρ (ξ0 ) = 0. We only prove for ρ(ξ0 ) ∈ If ρ(ξ0 ) ∈ [0, 1]\( √ √ 2γ+ γ(4γ−3) 2γ− γ(4γ−3) [ , 1] and the case for ρ(ξ0 ) ∈ [0, ] is the same. By using Equation 3γ 3γ (4.4), we have ξ 0 < D(ρ(ξ0 )ρ (ξ0 ) − D(ρ(T1 )ρ (T1 ) + c(ρ(ξ0 ) − ρ(T1 )) + g(ρ(t))dt = 0. T1 A contradiction. + If ρ (ξ0 ) > 0. Because ρ (T1 ) < 0, there must be the minimum point T0 such that √ 2γ− γ(4γ−3) . We already proved above ρ (T0 ) = 0 is not ρ (T0 ) = 0 except for ρ(T0 ) = 3γ √ 2γ− γ(4γ−3) possible for T1 < T0 < T2 . But if ρ(T0 ) = ,by using Equation (4.4), we get 3γ 0< T2 T0 (D(ρ)ρ ) ds + T2 g(ρ)ds = − T2 ρ (s)ds = 0. T0 T0 A contradiction. 2 From above four steps we can conclude that ρ (ξ) < 0 for every t ∈ (a∗ , ∞), with the exception, at most, of ξ = T1 , T2 . This is the main reason we introduce weak traveling wave solution. 
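(Before passing to the equation for z(ρ), it may help to make the sign structure of the quadratic diffusivity explicit. The short sketch below is illustrative only: it computes the endpoints of the backward interval I_γ used in Steps 1 and 2, namely the two roots (2γ ∓ (γ(4γ − 3))^{1/2})/(3γ) of D(ρ) = 3γρ² − 4γρ + 1, and evaluates the minimum of D, attained at ρ = 2/3, which equals 1 − 4γ/3. Hence D stays positive for 0 ≤ γ < 3/4, the interval I_γ degenerates to the single point 2/3 at γ = 3/4, and its right endpoint reaches ρ = 1 at γ = 1, where D(1) = 0.)

import numpy as np

# Illustrative sketch (Python), not from the dissertation.
# D(rho) = 3*gamma*rho**2 - 4*gamma*rho + 1 for an adhesion parameter gamma > 0.
def backward_interval(gamma):
    # Return the roots of D, i.e. the endpoints of the interval on which
    # D < 0; return None when 0 < gamma < 3/4 and D has no real root.
    disc = gamma * (4.0 * gamma - 3.0)
    if disc < 0.0:
        return None
    r = np.sqrt(disc)
    return ((2.0 * gamma - r) / (3.0 * gamma), (2.0 * gamma + r) / (3.0 * gamma))

for gamma in (0.5, 0.75, 0.9, 1.0):
    min_D = 1.0 - 4.0 * gamma / 3.0        # value of D at its vertex rho = 2/3
    print(gamma, backward_interval(gamma), "min D =", min_D)

For γ = 1 the output reproduces I_1 = (1/3, 1), consistent with the earlier remark that D(1) = 0 when γ = 1, so the equation degenerates at ρ = 1 in that case.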
Now we let ρ(ξ) be the solution of equation (4.4) and let ξ(ρ) : (0, 1) → (a∗ , ∞) be the inverse function, whose existence is ensured by the monotonicity of ρ(ξ). Set z(ρ) := D(ρ)ρ (ξ(ρ)) for ρ ∈ (0, ρ (γ)) ∪ (ρ (γ), 1). We get the following: z(ρ) = ˙ dz g(ρ)D(ρ) 1 D(ρ) = (D(ρ)ρ ) = (D(ρ)ρ ) = −c − . dρ ρ (ξ) z(ρ) z(ρ) In the following section we will discuss the relations between equation (4.4) and (4.7). 82 (4.7) 4.2 Main results of traveling wave solutions when 3/4 ≤ γ≤1 3 The first part of this section we will deal with the case 0 ≤ γ < 4 , then we will use these results in the second part to prove the existence and nonexistence of the traveling wave 3 solution when 4 ≤ γ ≤ 1. In this case, we can get the diffusivity coefficient D(ρ) > 1 − 4 γ > 0. Engler and Hadeler 3 [17] obtained the following result. Theorem 4.2.1 Consider the following boundary value problem    (D(ρ)ρ ) + cρ + g(ρ) = 0 (4.8)   ρ(−∞) = 1, ρ(+∞), with D, g ∈ C 1 ([0, 1]), 0 < D0 < D(s) for all s ∈ [0, 1], g(s) satisfies (4), g (0) > 0 and c is a real constant. Then (4.7) is solvable if and only if c > c∗ where 2 D(0)g (0) ≤ c∗ ≤ 2 D(s)g(s) s s∈(0,1] sup In [28] the result extended to the possible degenerate case Theorem 4.2.2 (Theorem 2 [28]) Let g(s) ∈ C[0, 1], D(s) ∈ C 1 [0, 1], respectively satisfying (4) and D(s) > 0 for all s ∈ (0, 1) and assume D(0)g (0) < ∞. Then there exists c∗ > 0 satisfying 2 D(0)g (0) ≤ c∗ ≤ 2 83 D(s)g(s) s s∈(0,1] sup such that the boundary value problem    z(ρ) = −c − g(ρ)D(ρ) , ρ ∈ (0, 1)  ˙  z(ρ)    z(ρ) < 0      z(0+ ) = 0, z(1− ) = 0 (4.9) is solvable if and only if c ≥ c∗ . Moreover, for every c ≥ c∗ , the solution is unique. Theorem 4.2.3 (Theorem 3 [28]) Let g(s) ∈ C[0, 1], D(s) ∈ C 1 [0, 1], respectively satisfying (4) and D(s) > 0 for all s ∈ (0, 1). The existence of a traveling wave solution ρ(t) of Equation (7), with wave speed c, satisfying boundary condition (4.2) and (4.3), is equivalent to the solvability of problem (4.7), with the same c. 3 In the model, if 4 ≤ γ ≤ 1, the diffusivity coefficient will be negative which will make the √ 2γ+ γ(4γ−3) proof of the existence of the traveling wave complicate, especially at the point , 3γ the derivative of z(ρ) may have possible two different values which will lead to two possible z (ρ) derivative values of ρ at T1 (ρ (T1 ) = | 2γ+√γ(4γ−3) by L´pital rule). To overcome o D (ρ) ρ= 3γ this difficulty, We will use the idea of weak traveling wave solution from [5] and consider the traveling waves in different intervals [0, 2γ − γ(4γ − 3) 2γ − ], ( 3γ γ(4γ − 3) 2γ + , 3γ γ(4γ − 3) 2γ + ), [ 3γ γ(4γ − 3) , 1]. 3γ Under certain conditions, we can glue these weak traveling waves together which will give the existence of the weak traveling wave solution in the whole interval. Theorem 4.2.4 Let D(ρ) and g(ρ) be given functions respectively satisfying (8) and (7) 84 and 3/4 ≤ γ ≤ 1. There exists a value c∗ > 0, satisfying max{D (0)g(0), D (ρ (γ))g(ρ (γ))} ≤ ≤ max{ sup s∈(0,ρ (γ)] D(s)g(s) , s (c∗ )2 4 D(s)g(s) }, s − ρ (γ) s∈(ρ (γ),1],ρ=ρ (γ) sup such that Equation (7) has i) no weak traveling wave solution satisfying (4.4) for c < c∗ ; ii) a unique (up to space shifts) weak traveling wave solution for c = c∗ . ii) a unique (up to space shifts) C 1 traveling wave solution for c > c∗ . Where ρ (γ) and ρ (γ) are given in (1.31). Before we prove Theorem 4.2.4, we introduce the following Lemma, which will be used in the proof of Theorem 4.2.4. Lemma 4.2.5 Let z ∈ C(0, 1) be the solution of problem (4.7) for every c > c∗ . 
Under the assumption of Theorem 4.2.4 the following limits, z(ρ) = λ1 , ρ→ρ (γ) ρ − ρ (γ) z(ρ) = λ2 , ρ→ρ (γ) ρ − ρ (γ) lim lim (4.10) exists, where 1 λ1 = (−c + 2 1 If c = c∗ , λ2 may take two values 2 (−c ± Proof. 1 λ2 = (−c + 2 c2 − 4D (ρ (γ))g(ρ (γ))), c2 − 4D (ρ (γ))g(ρ (γ))). c2 − 4D (ρ (γ))g(ρ (γ))). First we need prove the existence of z (ρ) at ρ (γ) and ρ (γ). We only prove the results at ρ (γ), the results at point ρ (γ) is similar. 85 Let z(ρ) be the solution of (4.7) satisfying z(ρ (γ)+ ) = z(1− ) = 0. First we prove the existence of the limit λ2 := z(ρ) . ρ→ρ (γ)+ ρ − ρ (γ) lim (4.11) Assume by contradiction that z(ρ) z(ρ) > lim inf := l. ρ − ρ (γ) ρ→ρ (γ)+ ρ − ρ (γ) ρ→ρ (γ)+ 0 > L := lim sup Let χ ∈ (l, L) and let (ρn )n be an decreasing sequence converging to ρ (γ) such that z(ρn ) = χ, ρn − ρ (γ) and d z(ρ) ) ≤ 0. ( dρ ρ − ρ (γ) |ρ=ρn z(ρ) z(ρ) d 1 Since dρ ( )|ρ=ρn = (z(ρ) − ˙ ) ≤ 0, we have ρ−ρ (γ) ρ−ρ (γ) ρ−ρ (γ) |ρ=ρn z(ρn ) = −c − ˙ D(ρn )g(ρn ) ≤ χ. χ(ρn − ρ (γ)) Passing to the limit as n → +∞, since χ < 0, we have χ2 + cχ + [Dg] (ρ (γ)) ≤ 0. Similarly, we can choose an decreasing sequence (νn )n converging to ρ (γ), such that z(νn ) = χ, νn − ρ (γ) and d z(ρ) ( ) ≥ 0. dρ ρ − ρ (γ) |ρ=νn we can deduce χ2 + cχ + [Dg] (ρ (γ)) ≥ 0. By the arbitrariness of χ ∈ (l, L), we conclude that 1 l = L = λ2 = ± (−c + 2 c2 − 4D (ρ (γ))g(ρ (γ))). 86 The same argument can be used to prove that z(ρ) ρ→ρ (γ)− ρ − ρ (γ) lim exists and its value is either 1 (−c + 2 c2 − 4D (ρ (γ))g(ρ (γ))) or 1 − (−c + 2 c2 − 4D (ρ (γ))g(ρ (γ))). Given c > c∗ , let z(ρ) and z ∗ (ρ) be the solutions of Equation (4.7) on [ρ (γ), 1] with z(ρ (γ)) = z(1) = 0, respectively, for c and c∗ . Assuming the existence of ρ ∈ [ρ (γ), 1] satisfying z ∗ (ρ) ≥ z(ρ), from (4.7) we then have z ∗ (ρ) = −c∗ − ˙ D(ρ)g(ρ) D(ρ)g(ρ) = z(ρ). ˙ ∗ (ρ) > −c − z z(ρ) This implies the contradictory conclusion 0 = z ∗ (1− ) > z(1− ) = 0. Hence z ∗ (ρ) < z(ρ) for all ρ ∈ [ρ (γ), 1]. Which yield z ∗ (ρ (γ)+ ) ≤ z(ρ (γ)+ ), from ˙ ˙ 1 z ∗ (ρ (γ)+ ) = (± (c∗ )2 − 4D (ρ (γ)+ )g(ρ (γ)) − c∗ ), ˙ 2 1 which leads to λ2 = 2 ( c2 − 4D (ρ (γ)+ )g(ρ (γ)) − c). But when c = c∗ , z ∗ (ρ (γ)+ ) have ˙ two possible values. The idea in proving λ1 is the same but z(ρ) := l ≥ 0, ρ→ρ (γ)− ρ − ρ (γ) lim inf 87 which will lead to the positivity of λ1 = 1 (−c + 2 c2 − 4D (ρ (γ))g(ρ (γ))). When 3/4 < γ ≤ 1, D (ρ (γ)), D (ρ (γ) = 0, we obtain ρ − ρ (γ) z(ρ) z(ρ) · = lim D(ρ) ρ→ρ (γ)+ D(ρ) ρ→ρ (γ)+ ρ − ρ (γ) 1 1 (+ c2 − 4D (ρ (γ)+ )g(ρ (γ)) − c) · = 2 D (ρ (γ)+ ) lim = −4D (ρ (γ)+ )g(ρ (γ)) 1 + 2( c2 − 4D (ρ (γ)+ )g(ρ (γ)) + c) D (ρ (γ) ) · −2g(ρ (γ)) = < 0. ( c2 − 4D (ρ (γ)+ )g(ρ (γ)) + c) If γ = 3/4, ρ (γ) = ρ (γ) = 2/3; D(ρ (γ)) = 0, and D (ρ (γ)) = 0. It is natural from continuity that we guess g(ρ (γ)) z(ρ) =− . c ρ→ρ (γ)+ D(ρ) lim Next, we establish this rigorously. If D (ρ (γ)+ ) = 0, we can get z(ρ (γ))(z(ρ (γ)) + c) = 0. ˙ ˙ Hence z(ρ (γ)) = 0 or z(ρ (γ)) = −c. Because z(ρ (γ)+ ) ≥ z ∗ (ρ (γ)+ ), we get z(ρ (γ)) = 0 ˙ ˙ ˙ ˙ ˙ By the mean value theorem, there exists a sequence {νn }n such that νn → ρ (γ)+ and z(νn ) → 0 as n → ∞. From Equation (4.7), we can get ˙ z(νn ) g(ρ (γ)) →− , D(νn ) c z(ρ) Next, we show that limρ→ρ (γ)+ D(ρ) = − as n → ∞. g(ρ (γ)) . c Fix ∈ (0, (4.12) g(ρ (γ)) ). c Since D(ρ) is positive in the right of ρ (γ), from (4.7) there exists n0 such that z(νn ) ≤ (− g(ρ (γ)) + )D(νn ), c 88 n ≥ n0 . (4.13) Let ξ (ρ) := (− g(ρ (γ)) c −c + c ˙ + )D(ρ). 
Since ξ(ρ) → 0 as ρ → ρ (γ)+ and g(ρ) g(ρ (γ)) → −c + c > 0, g(ρ (γ)) − c g(ρ (γ)) − c so it is possible to find a δ > 0 such that −c+c g(ρ) g(ρ (γ))−c as ρ → ρ (γ)+ , ˙ > ξ (ρ) for each ρ ∈ (ρ (γ), ρ (γ)+ δ). Let n1 ≥ n0 such that νn1 ∈ (ρ (γ), ρ (γ) + δ). We claim that z(ρ) ≤ ξ (ρ) for all ρ ∈ (ρ (γ), νn1 ), that is z(ρ) ≤ (− g(ρ (γ)) + )D(ρ), c for ρ ∈ (ρ (γ), νn1 ). (4.14) Suppose by contradiction that there exists ρ0 ∈ (ρ (γ), νn1 ) such that z(ρ0 ) > ξ (u0 ). Since n1 ≥ n0 , from (4.14) it follows that z(νn1 ) ≤ ξ (νn1 ). The mean value theorem concludes ˙ that there exists ρ1 ∈ (ρ0 , νn1 ), such that z(ρ1 ) > ξ (ρ1 ) and z(ρ1 ) − ξ (ρ1 ) < 0. On the ˙ other hand, we have z(ρ1 ) = −c + ˙ D(ρ1 )g(ρ1 ) g(ρ1 ) D(ρ1 )g(ρ1 ) > −c + = −c + c . g(γ) −z(ρ1 ) g(γ) − c ( c − )D(ρ1 ) g(ρ1 ) ˙ Since ρ1 ∈ (ρ0 , νn1 ) ⊂ (ρ (γ), γ + δ), we get z(ρ1 ) > −c + c g(γ)−c > ξ (ρ1 ). A contradiction. ˙ Similarly, from (4.7) we can choose n0 such that z(νn ) ≥ η(νn ) := (− g(ρ (γ)) − )D(νn ), c for n ≥ n0 . Again, η (ρ) → 0 as ρ → ρ (γ)+ and ˙ −c + c g(ρ) g(ρ (γ)) → −c + c < 0, g(ρ (γ)) + c g(ρ (γ)) + c 89 as ρ → ρ (γ)+ , (4.15) g(ρ) ˙ there exists δ > 0 such that −c + c g(ρ)+c < η (ρ) for each ρ ∈ (ρ (γ), ρ (γ) + δ). Let n1 ≥ n0 such that νn ∈ (ρ (γ), ρ (γ) + δ). We prove by contradiction that z(ρ) ≥ η (ρ) for 1 each ρ ∈ (ρ (γ), νn ), that is 1 z(ρ) ≥ (− g(ρ (γ)) − )D(ρ), c for ρ ∈ (ρ (γ), νn1 ). (4.16) Suppose there exists ρ0 ∈ (ρ (γ), νn1 ) such that z(ρ0 ) < η (ρ0 ). Since n1 ≥ n0 , from (4.16) it follows that z(νn1 ) ≥ η (νn1 ). Therefore, there exists ρ1 ∈ (ρ0 , νn ), such that z(ρ1 ) < η (ρ1 ) 1 and z(ρ1 ) − η (ρ1 ) > 0. On the other hand we have ˙ ˙ z(ρ1 ) = −c + ˙ D(ρ1 )g(ρ1 ) D(ρ1 )g(ρ1 ) g(ρ1 ) < −c + . = −c + c g(γ) −z(ρ1 ) g(γ) + c ( c + )D(ρ1 ) Since ρ1 ∈ (ρ0 , νn ) ⊂ (ρ (γ), ρ (γ) + δ), we get z(ρ1 ) < −c + c ˙ 1 g(ρ1 ) g(ρ (γ))+c < η (ρ1 ), hence a ˙ contradiction holds. By the Inequality (4.14) and (4.16), we proved that z(ρ) g(ρ (γ)) =− . c ρ→ρ (γ)+ D(ρ) lim z(ρ) exists, and its value is The same argument can be used to prove that limρ→ρ (γ)− ρ−ρ (γ) 1 2( c2 − 4D (ρ (γ)− )g(ρ (γ)) − c), also + ρ (T1 ) = z(ρ) =− ρ→ρ (γ)− D(ρ) c+ 2g(ρ (γ)) lim . c2 − 4D (ρ (γ)− )g(ρ (γ)) 2 Proof of the Theorem 4.2.4. The strategy is to construct traveling wave solution for a∗ < ξ < T1 , T1 < ξ < T2 and T2 < ξ < +∞ such that 90 • ρ(ξ) ∈ [0, ρ (γ)) for t ∈ (T2 , +∞), • ρ(t) ∈ [ρ (γ), ρ (γ)] for t ∈ (T1 , T2 ), • ρ(t) ∈ [ρ (γ), 1] for t ∈ (a∗ , T1 , ) • limt→T + ρ(t) = limt→T − ρ(t) = ρ (γ), limt→T + ρ(t) = limt→T − ρ(t) = ρ (γ), 1 1 2 2 • limt→T + D(ρ(t))ρ (t) = limt→T − D(ρ(t))ρ (t) = 0, 1 1 • limt→T + D(ρ(t))ρ (t) = limt→T − D(ρ(t))ρ (t) = 0. 2 2 t • D(ρ)ρ (t) + cρ(t) + a g(ρ(s)ds = c. If we can find such solutions, then the combined solutions will give us a weak solution. When 0 < ρ < ρ (γ), let us consider the first order equation (4.7) for 0 < ρ < ρ (γ). We have D(0)g(0) = D(ρ (γ))g(ρ (γ)) = 0, z(0) = z(ρ (γ)) = 0. We change variable z(s) = z(s/ρ (γ)),we can deduce the existence of a threshold value c∗ > 0, satisfying the 1 estimate D (0)g(0) ≤ c∗ ≤ 2 1 sup s∈(0,ρ (γ)] D(s)g(s) , s such that the equation (4.7) has a unique negative solution z(ρ) in (0, ρ (γ)) with z(0+ ) = z(ρ (γ)− ) = 0, if and only if c ≥ c∗ . We just consider the Cauchy problem 1    ρ = z(ρ) , 0 < ρ < ρ (γ),  D(ρ)   ρ(0) = ρ (γ) .  2 (4.17) We obtain that the unique solution ρ(t) of problem (4.17) is a solution of (??) in (τ1 , τ2 ), − + with τ1 ≥ −∞, τ2 ∈ R, ρ(τ1 ) = ρ (γ), ρ(τ2 ) = 0. 
Moreover, when D(0) = 0, we can have the inverse function ξ(ρ) in [0, ρ (γ) 2 ] and ξ(ρ (γ)− ) < +∞. 91 Let us consider the first order equation (4.7) for ρ (γ) < ρ < ρ (γ). We make the following change of variable: D(ρ) := −D(ρ (γ) + ρ (γ) − ρ), g(ρ) := g(ρ (γ) + ρ (γ) − ρ). Since D(ρ)g(ρ) > 0 for every ρ (γ) < ρ < ρ (γ), D(ρ (γ))g(ρ (γ)) = D(ρ (γ))g(ρ (γ)) = 0, z(ρ) = −z[ρ (γ) + ρ (γ) − ρ] ≤ 0. According to Theorem 4.2.2, If we consider the equation z(ρ) on the interval [ρ (γ), ρ (γ)] with z(ρ (γ)) = z(ρ (γ)) = 0. By using linear transformation similar to the case 0 < ρ < ρ (γ), we can derive the existence of a threshold c∗ > 0 2 satisfying 2 D (ρ (γ))g(ρ (γ)) < c∗ ≤ 2 2 D(s)g(s) , s − ρ (γ) s∈(ρ (γ),ρ (γ)] (4.18) 2 D (ρ (γ))g(ρ (γ)) < c∗ ≤ 2 2 sup D(s)g(s) , s − ρ (γ) s∈[ρ (γ),ρ (γ)) (4.19) ρ (γ) < ρ < ρ (γ), (4.20) sup that is such that the equation ω := −c − ˙ D(ρ)g(ρ) , ω admits negative solutions ω(ρ), satisfying ω(ρ (γ)+ ) = ω(ρ (γ)− ) = 0, if and only if c > c∗ . 2 92 Putting z(ρ) := −ω(ρ (γ) + ρ (γ) − ρ), we have D(ρ (γ) + ρ (γ) − ρ)g(ρ (γ) + ρ (γ) − ρ) ω(ρ (γ) + ρ (γ) − ρ) D(ρ)g(ρ) = −c − , ρ (γ) < ρ < ρ (γ). z(ρ) z(ρ) = ω(ρ (γ) + ρ (γ) − ρ) = −c − ˙ ˙ Moreover, z(ρ (γ)+ ) = z(ρ (γ)− ) = 0, and z(ρ) > 0 for every ρ ∈ (ρ (γ), ρ (γ)). So, if we consider the Cauchy problem    ρ = z(ρ) , ρ (γ) < ρ < ρ (γ),  D(ρ)   ρ(0) = ρ (γ)+ρ (γ) .  2 (4.21) Let ρ(t) be the unique solution of (4.7) defined in its maximal existence interval (t1 , t2 ), with −∞ < t1 < t2 < +∞. So we can see ρ(t) is a solution of (7) in (t1 , t2 ). When ρ (γ) < ρ < 1, we can use the same method as we prove for the situation 0 < ρ < ρ (γ), and we can deduce the existence of a threshold value c∗ > 0, satisfying the estimate 3 D (ρ (γ))g(ρ (γ)) < c∗ ≤ 2 3 sup s∈(ρ (γ),1] D(s)g(s) , s such that the equation (4.7) has a unique negative solution z(ρ) in (ρ (γ), 1) with z(ρ (γ)+ ) = z(1− ) = 0, if and only if c ≥ c∗ . We just consider the Cauchy problem 3    ρ = z(ρ) , ρ (γ) < ρ < 1,  D(ρ)   ρ(0) = ρ (γ)+1 .  2 (4.22) We can repeat the same arguments developed for the case when 0 < ρ < ρ (γ), and obtain 93 that the unique solution ρ(t) of problem (4.7) is a solution of (7) in (τ3 , τ4 ), with τ3 ≥ − + −∞, τ4 ∈ R, ρ(τ4 ) = ρ (γ), ρ(τ3 ) = 1. Moreover, when D(1) = 0, we can have the inverse function ξ(ρ) in [ ρ (γ)+1 , 1] 2 and ξ(1− ) < +∞. Putting c∗ := max{c∗ , c∗ , c∗ }, and glue the solutions of (4.17),(4.21) and (4.22) by 1 2 3 a time-shift, we obtaining a continuous function ρ(t) on some interval (a∗ , ∞), which is a decreasing function in (a∗ , ∞) and satisfies ρ((a∗ )+ ) = 1, ρ(+∞) = 0, limt→T1 ρ(t) = ρ (γ), limt→T2 ρ(t) = ρ (γ), and from construction of ρ, we also have lim D(ρ(t))ρ (t) = lim D(ρ(t))ρ (t) = 0. t→T1 t→T2 Note that ρ(t) is smooth on (a∗ , T1 ), we have, t D(ρ)ρ (t) + cρ(t) + a∗ g(ρ(s)ds = constant, t ∈ (a∗ , T1 ]. (4.23) Taking the limit t → a∗ , we see that the constant must be c. Similarly t g(ρ(s)ds = 0, D(ρ)ρ (t) + cρ(t) + +∞ t ∈ [T1 , +∞). (4.24) Take t = T1 in both (4.23) and(4.24), we see that +∞ a∗ g(ρ(s))ds = c. Furthermore for T1 < t < +∞, Equation (4.23) also holds. In fact, in this case, using 94 Equation (4.24) and +∞ a∗ T1 g(ρ(s)ds = a∗ t g(ρ(s)ds + +∞ g(ρ(s)ds + T1 g(ρ(s)ds. t We have t D(ρ)ρ (t) + cρ(t) + = g(ρ(s)ds a∗ +∞ D(ρ)ρ (t) + cρ(t) − +∞ g(ρ(s)ds + t a∗ +∞ g(ρ(s)ds = 0 + a∗ g(ρ(s)ds = c. II) Non-existence for c < c∗ . 
We proved in the Proposition that ρ (t) < 0 for ρ ∈ (0, 1) and this implies the existence of the inverse function ξ(ρ) in (0, 1) especially in (0, ρ (γ)) where D(ρ) > 0, then we define ω(ρ) := D(ρ)ρ (ξ(ρ)), it can be checked the negative solution of (4.7), satisfying ω(ρ (γ)− ) := ω(0+ ) = 0. Therefore, by applying Theorem 4.2.2 , we can deduce that c ≥ c∗ . 1 When ρ (γ) < ρ < ρ (γ) and ρ (γ) < ρ < 1, we can also use Theorem 4.2.2 again to get c ≥ c∗ and c ≥ c∗ . 2 3 Summarizing, c ≥ c∗ is a necessary condition for the existence of weak traveling wave solution of (7). 2 95 BIBLIOGRAPHY 96 BIBLIOGRAPHY [1] K. Anguige, Multi-phase Stefan problems for a non-linear one-dimensional model of cell-to-cell adhesion and diffusion, European J. Appl. Math. 21 no. 2,(2010), pp. 109136. [2] K. Anguige, A one-dimensional model for the interaction between cell-to-cell adhesion and chemotactic signalling, European J. Appl. Math. 22 no. 4,(2011), pp 291-316. [3] K. Anguige, C. Schmeiser, A one-dimensional model of cell diffusion and aggregation, incorporating volume filling and cell-to-cell adhesion, J. Math. Biol, 58 No. 3 (2009), pp. 395-427. [4] D. G. Aronson, The role of diffusion in mathematical population biology: Skellam revisited. In mathematics in biology and medicine, Lecture Notes in Biomathematics 57 S. Levin, Springer-Verlag Berlin (1985) pp. 2-6. [5] Lianzhang Bao, Zhengfang Zhou, Travelling wave of aggregation and diffusion equation submitted [6] H. Berestycki, G. Nadin, B. Perthame and L. Ryzhik, The non-local Fisher-KPP equation: traveling waves and steady states, Nonlinearity, Vol. 22 No. 12 (2009), pp. 2813-2845. [7] J. C. Dallon and J. A. Sherratt, A mathematical model for fibroblast and collagen orientation, Bulletin of Mathematical Biology 60, (1998) pp. 101-129. [8] C. Deroulers, M. Aubert, M. Badoual and B. Grammaticos, Modeling tumor cell migration: From microscopic to macroscopic models, Physical Review E, Vol 79 No. 3 (2009). [9] S. Esedoglu, Stability Properties of Perona-Malik Scheme, Siam J. Numer. Anal. Vol. 44, No. 3 (2006), pp. 1297-1313. [10] L. Ferracuti, C. Marcelli and F. Papalini, Travelling waves in some reaction-diffusionaggregation models, Advances in Dynamical Systems and Applications, Vol. 4 No. 1 (2009), pp. 19-33. 97 [11] R. A. Fisher, The wave of advance of advantageous genes, Ann. Eugen 7 (1937), pp. 353-369. [12] F. S. Gardu˜ o and P. K. Maini, Existence and uniqueness of a sharp travelling wave n in degenerate non-linear diffusion Fisher-KPP equations, J. Math. Biol. 33 (1994), pp. 163-192. [13] F. S´nchez-Gardu˜ o and P. K. Maini, Travelling wave phenomena in some degenerate a n reaction-diffusion equations, J. Differential Equation 117 (1995), pp. 281-319. [14] F. S´nchez-Gardu˜ o, P. K. Maini, and Judith P´ rez-Vel´zquez, A non-linear dea n e a generate equation for direct aggregation and taravelling wave dynamics, Discrete and Continuous Dynamical Systerms Series B, Vol. 13 No. 2 (2010), pp. 455-487. [15] V. N. Grebenev Interfacial phenomenon for one-dimensional equation of forward backward parabolic type Annili di Matematica pura ed applicata (IV), Vol. CLXXI (1996), pp. 379-394. [16] M. Ghisi and M. Gobbino, An example of global classical solution for the Perona-Malik equation, Comm. Partial Diff. Eqns. 36 (2011) pp. 1318-1352. [17] K. P. Hadeler, Travelling fronts and free boundary value problems. In: Albretch, J.,Collatz, L., Hoffman, K. H. (eds.) Numerical Treatment of Free Boundary Value Problems. Basel: Birkhauser 1981. [18] D. Horstmann, K. J. 
Painter and H. G. Othmer, Aggregation under local reinforcement: from lattice to continuum, Euro. Jnl of Applied Mathematics, 15 (2004), pp. 545-576.
[19] B. Kawohl and N. Kutev, Maximum and comparison principle for one-dimensional anisotropic diffusion, Math. Ann. 311 (1998), pp. 107-133.
[20] S. Kichenassamy, The Perona-Malik paradox, SIAM Journal on Applied Mathematics, Vol. 57, No. 5 (1997), pp. 1328-1342.
[21] A. Kolmogorov, I. Petrovsky and I. N. Piskounov, Study of the diffusion equation with growth of the quantity of matter and its applications to a biological problem. (English translation containing the relevant results.) In: Oliveira-Pinto, F., Conolly, B. W. (eds.) Applicable Mathematics of Non-physical Phenomena. New York: Wiley, 1982.
[22] M. Kuzmin and S. Ruggerini, Front propagation in diffusion-aggregation models with bi-stable reaction, Discrete and Continuous Dynamical Systems Series B, Vol. 16, No. 3 (2011), pp. 819-833.
[23] T. Laurent, Local and global existence for an aggregation equation, Comm. Partial Diff. Eqns. 32 (2007), pp. 1941-1964.
[24] D. Li and X. Zhang, On a nonlocal aggregation model with nonlinear diffusion, Discrete and Continuous Dynamical Systems, 27 (2010), pp. 201-323.
[25] G. M. Lieberman, Second Order Parabolic Differential Equations, 1996.
[26] M. Lizana and V. Padrón, A spatially discrete model for aggregating populations, J. Math. Biol. 38 (1999), pp. 79-102.
[27] Z. Lu and Y. Takeuchi, Global asymptotic behavior in single-species discrete diffusion systems, J. Math. Biol. Vol. 32, No. 1 (1993), pp. 67-77.
[28] P. K. Maini, L. Malaguti, C. Marcelli and S. Matucci, Diffusion-aggregation processes with mono-stable reaction terms, Discrete and Continuous Dynamical Systems Series B, Vol. 6, No. 5 (2006), pp. 1175-1189.
[29] L. Malaguti and C. Marcelli, Travelling wavefronts in reaction-diffusion equations with convection effects and non-regular terms, Math. Nachr. 242 (2002), pp. 1-17.
[30] L. Malaguti and C. Marcelli, Sharp profiles in degenerate and doubly degenerate Fisher-KPP equations, J. Differential Equations 195 (2003), pp. 471-496.
[31] J. D. Murray, Mathematical Biology, Springer-Verlag, Biomathematics Vol. 19 (1993).
[32] V. Padrón, Sobolev regularization of a nonlinear ill-posed parabolic problem as a model for aggregating populations, Comm. Partial Diff. Eqns. 23 (1998), pp. 457-486.
[33] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion, Pattern Analysis and Machine Intelligence, Vol. 12, Issue 7 (1990), pp. 629-639.
[34] P. Rosenau, Dynamics of dense lattices, Physical Review B, Vol. 36, No. 11 (1987).
[35] J. G. Skellam, Random dispersal in theoretical populations, Biometrika, 38 (1951), pp. 196-218.
[36] L. R. Taylor and R. A. J. Taylor, Aggregation, migration and population mechanics, Nature, 265 (1977), pp. 415-421.
[37] P. Turchin, Population consequences of aggregative movement, Journal of Animal Ecology, Vol. 58, Issue 1 (1989), pp. 75-100.
[38] S. Turner, J. A. Sherratt, K. J. Painter and N. J. Savill, From a discrete to a continuous model of biological cell movement, Physical Review E, Vol. 69, No. 22 (2004).