DIRECT ANALYSIS OF IMPLIED VOLATILITY FOR EUROPEAN OPTIONS

By

Yan Wei

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Applied Mathematics

2012

ABSTRACT

DIRECT ANALYSIS OF IMPLIED VOLATILITY FOR EUROPEAN OPTIONS

By

Yan Wei

We show existence and uniqueness of a strong solution to a linear, non-uniformly parabolic equation, which gives the fair price of a normalized European call option. We then provide a direct link between local and implied volatilities in the form of a quasilinear degenerate parabolic partial differential equation. We also establish closed-form asymptotic formulae for the implied volatility near expiry, as well as for deep in- and out-of-the-money options, using a generalized comparison principle on bounded domains.

To my parents, for their faith and love

ACKNOWLEDGMENTS

It would not have been possible to write this doctoral dissertation without the help and support of the kind people around me, only some of whom it is possible to give particular mention here. Above all, I am heartily thankful to my advisor, Peter W. Bates, who gave me encouragement, guidance, and support from the preliminary to the concluding stages of my dissertation. I owe a deep debt of gratitude to Professor Henri Berestycki for suggesting this interesting and challenging project. It is a pleasure to thank Professor Don R. Aronson for the references that he shared with me. Special thanks go to my committee for their precious time. Last, but by no means least, I would like to thank my friends, Samantha L. Dahlberg, Jacqueline M. Dresch, Tim Miller, and my former supervisor Pavel Sikorskii, for their constant support and friendship. For any errors or inadequacies that may remain in this work, of course, the responsibility is entirely my own.

TABLE OF CONTENTS

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Existence of the Solution to Equation (1.8) . . . . . . . . . . . . . . . . . 14
  2.1 Auxiliary Notions and Function Spaces . . . . . . . . . . . . . . . . . . 15
    2.1.1 Auxiliary Notation . . . . . . . . . . . . . . . . . . . . . . . . . 15
    2.1.2 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 15
    2.1.3 Parabolic Terminology . . . . . . . . . . . . . . . . . . . . . . . 20
  2.2 Primary Results on the Fundamental Solution, Green's Function, and Properties of Solutions to Parabolic Equations . . . 21
  2.3 The Existence of the Solution to the Non-uniform Parabolic Equation (1.8) . 27

3 Directly Computing the Implied Volatility . . . . . . . . . . . . . . . . . . 40
  3.1 The Implied Volatility ϕ . . . . . . . . . . . . . . . . . . . . . . . . . 40
  3.2 Associated Local Volatility σ[ϕ] . . . . . . . . . . . . . . . . . . . . . 46

4 The Asymptotic of ϕ as τ → 0 . . . . . . . . . . . . . . . . . . . . . . . . 48
  4.1 The Fundamental Solutions, Green's Functions, etc. . . . . . . . . . . . . 48
    4.1.1 Uniform Parabolic Equations . . . . . . . . . . . . . . . . . . . . . 48
    4.1.2 Parabolic Equations with unbounded Coefficients . . . . . . . . . . . 56
  4.2 The Asymptotics of ϕ as τ → 0 . . . . . . . . . . . . . . . . . . . . . . 69

5 The Asymptotic of ϕ as x → ±∞ . . . . . . . . . . . . . . . . . . . . . . . . 106

6 Numerical Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.1 The Finite Difference Method . . . . . . . 6.2 Approximations to the Implied Volatility ϕ 6.3 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 117 120 123 7 The Calibration Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 8 Comparing Relative Pricing of Options with Stochastic Volatility . . . . 131 v A Derivation of Equation (1.7) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 B Proof for Lemma 5.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 LIST OF FIGURES Figure 6.1 Implied Volatility (For interpretation of the references to color in this and other figures, the reader is referred to the electronic version of this dissertation.) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Figure 6.2 Implied Volatility CEV . . . . . . . . . . . . . . . . . . . . . . . . . 125 vii Chapter 1 Introduction A European call option on an underlying security (or underlying) X, with strike price K and exercise date (or expiry) T is a financial contract between two parties, the buyer (or “ holder”) and the seller (or “writer”), written at time t with the following properties: i) The holder of the contract has, exactly at the time t = T , has the right to buy X at the price K, ii) The holder of the option has no obligation to buy the security. However, The seller is obligated to sell the underlying should the buyer so decide. The buyer pays a fee (called a premium) for this right. The underlying security (or underlying) is the commodity or financial instrument that can be sold or bought when an option holder decides to exercise his contract. The Black-Scholes model [10], [35] of a call option on a stock has gained wide recognition in both academia and industry. It makes the following explicit assumptions: • There is no arbitrage opportunity (i.e., there is no way to make a riskless profit). • It is possible to borrow and lend cash at a known constant risk-free interest rate. • It is possible to buy and sell any amount, even fractional, of stock (this includes short 1 selling). • The above transactions do not incur any fees or costs (i.e., frictionless market). • The stock price St follows a geometric Brownian motion with constant drift and volatility: dSt = St (µdt + ΣdWt ), where t is time, µ and Σ are constants and Wt is a standard Brownian motion. The parameter σ is called the volatility of the stock St . It is the relative rate at which the price of a security moves up and down. • The underlying security does not pay a dividend. 1 The price C(St , t; K, T ) of a European call option written on St with strike K and maturity T satisfies the linear backward parabolic partial differential equation Ct + Σ2 2 S CSS + rSCS − rC = 0 in (0, +∞) × (0, T ) 2 C(S, T ) = (S − K)+ , (1.1a) (1.1b) where r is the risk-free short-term interest rate. It is well known that, the solution to equation (1.1) is: C(S, t) = SN (d1 ) − Ke−r(T −t) N (d2 ), 1 Although (1.2) the original model assumed no dividends, trivial extensions to the model can accommodate a continuous dividend yield factor. 2 where 2 S ln( K ) + (r + Σ )(T − t) √ 2 d1 = , Σ T −t (1.3) 2 S √ ln( K ) + (r − Σ )(T − t) √ 2 d2 = = d1 − Σ T − t, Σ T −t ∞ 2 1 N (x) = √ e−y /2 dy. 
2π −∞ (1.4) (1.5) The expression (1.2) is known as the Black-Scholes formula Once we have a measure of the (statistical) volatility for any underlying, we can plug the value into a standard options pricing model2 and calculate the fair market value of an option. A model’s fair market value3 , however, is often out of line with the actual market value for that same option. This is known as option mispricing. To understand the reason, we need to look closer at the role implied volatility plays in the equation. The implied volatility of the option is the volatility that, when used in a particular pricing model, yields a theoretical value for the option equal to the current market price of that option. It is the expected volatility the market is pricing into the option. Often, the implied volatility of an option is a more useful measure of the option’s relative value to other options than is its price. The reason is that the price of an option depends most directly on the price of its underlying asset. If an option is held as part of a delta neutral 4 portfolio (that is, a portfolio that is hedged against small moves in the underlying’s 2 for example, the Black-Scholes Formula example, (1.2) 4 In finance, delta neutral describes a portfolio of related financial securities, in which the portfolio value remains unchanged due to small changes in the value of the underlying security, i.e. ∆ = ∂V . Such a portfolio typically contains options and their corresponding ∂S underlying securities such that positive and negative delta components offset, resulting in the portfolio’s value being relatively insensitive to changes in the value of the underlying 3 for 3 price), then the next most important factor in determining the value of the option will be its implied volatility. Another way to look at implied volatility is to think of it as a price, not as a measure of future stock moves. In this view it simply is a more convenient way to communicate option prices than currency. Prices are different in nature from statistical quantities: one can estimate volatility of future underlying returns using any of a large number of estimation methods, however the number one gets is not a price. A price requires two counterparties, a buyer and a seller. Prices are determined by supply and demand. Statistical estimates depend on the time-series and the mathematical structure of the model used. It is a mistake to confuse a price, which implies a transaction, with the result of a statistical estimation, which is merely what comes out of a calculation. Implied volatilities are prices: they have been derived from actual transactions. Seen in this light, it should not be surprising that implied volatilities might not conform to what a particular statistical model would predict. [Wikipedia] In general, the value of an option depends on an estimate of the future realized price volatility, Σ, of the underlying. Or, mathematically: C = f (·, Σ)5 . The function f is monotonically increasing in Σ, 6 meaning that a higher value for volatility results in a higher theoretical value of the option. Conversely, by the inverse function theorem [1], there can be at most one value for ϕ that, when applied as an input to f (·, ϕ), will result in a particular value for C. I.) In principle, by regarding σ as constant, the implied volatility can be inferred by security. 5 where · represents S, K, T , t, and r. 6 We give a proof for such property based on the Black-Scholes model, using the generalized Maximum Principle we derive. 
4 inverting the closed form of the solution to (1.1), the option price equation. However, this is known to be computationally difficult, especially near expiry or far from the money. II.) Another shortcoming is that the Black-Scholes model assumes the volatility of the underlying asset S price as constant. However, options based on the same underlying asset but with different strike value and expiration times will yield different implied volatilities. The volatility smile 7 or smirk 8 is a well-known manifestation of this phenomenon. This is generally viewed as evidence that the security’s volatility is not constant. There have been various attempts to extend the Black-Scholes theory to account for the volatility smile and the term structure. One class of models introduces a none-traded source of risk such as jumps [36] or stochastic volatility, including those given by Hull and White [29], and Heston [26]. Rubinstein [39], Derman and Kani [20] have independently constructed a discrete approximation to the risk-neutral process for the underlying asset in the form of a bi/trinomial tree, which are extensions of the original Cox et al. [18] binomial trees. Bouchouev and Isakov [12, 31] reduce the identification of volatility to an inverse parabolic problem with the final observation and establish uniqueness and stability results under certain assumptions. Then, they obtain a non-linear Fredholm integral equation for unknown volatility after dropping terms of higher orders in time to maturity and solve the equation iteratively. Deng, Yu, and Yang [19] use an optimal control framework to discuss an inverse problem of determining the implied volatility when the average option premium, 7 In markets such as FX options or equity index options as for longer term equity options 8 such 5 namely the average value of option premium corresponding with a fixed strike price and all possible maturities from the current time to a chosen future time, is known. Furthermore, the difficulty regarding inverting the extended BS formula is asserted by the impossibility of obtaining volatility as a closed form function of the option value and the remaining variables (see [28, 43, 9]; a similar assertion can also be found in the introduction of the forthcoming paper by Teichmann and Schachermayer [41]). Most works on this problem assume such impossibility and proceed in two broad directions: one theoretical, that attempts to obtain abstract mathematical properties of the implied volatility, such as partial differential equations governing it or similar approaches (see [22, 11, 5]), and the other research direction somewhat more practical, centering on obtaining approximate formulas and testing them against market data [14, 17, 27]. We list a few approximations of the inversion of the Black-Scholes formula. In the following, τ = T − t is the time to maturity. • Li [37] developed a closed-form method for the implied volatility based on rational approximation. Rational approximation has been used extensively in both physical and social sciences but his paper is among the first to apply this approximation to the problem of inverting the Black-Scholes formula. His approximation scheme is much faster than typical solver methods and very accurate for both at-the-money and awayfrom-the-money options. 
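As a point of reference for the approximations collected in this list, the following sketch (not taken from any of the cited papers; the market data in it are hypothetical) recovers the implied volatility by simple bisection on the Black-Scholes formula (1.2). This is the brute-force inversion that closed-form approximations are designed to avoid, or to seed with a starting value.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF N(x) via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, tau, sigma):
    # Black-Scholes price (1.2) of a European call, with time to maturity tau = T - t.
    if tau <= 0 or sigma <= 0:
        return max(S - K * exp(-r * tau), 0.0)
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

def implied_vol_bisection(C, S, K, r, tau, lo=1e-6, hi=5.0, tol=1e-10):
    # Invert the Black-Scholes formula in sigma by bisection; this works because
    # the call price is strictly increasing in sigma.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, tau, mid) < C:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical market data: spot 100, strike 110, three months to expiry, 2% rate.
S, K, r, tau = 100.0, 110.0, 0.02, 0.25
C_market = 1.50
print(implied_vol_bisection(C_market, S, K, r, tau))  # roughly 0.23
```

Bisection is robust but slow relative to the closed-form approximations below, and it degrades in practice exactly where the text notes: near expiry and far from the money, where the price is nearly flat in volatility.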
• Chargoy-Coronaa and Ibarra-Valdez [32] use elementary arithmetic operations and functions, together with the normal distribution function and its inverse, to obtain the asymptotic and approximate formulas for the option value, and an approximate formula for the implied volatility. Define the log-moneyness as α = log(X/Ke−rτ ) = 6 √ log(X/K) + rτ , θ = Σ τ . The authors have the approximating option value: ua (θ, α, X) = Xe−α [2N (θ/2) − 1], and an approximation formula for the volatility: 2 Σa = √ ϕ τ ua erτ + K 2K , where ϕ is the inverse of N , the standard normal cumulative distribution function. They also showed an error estimate in Theorem 3: There are a, b > 0, with b small enough, such that for all θ > 0, |α| < a, and s > (1 + b)u0 it holds    √ e|α| + |α|e|α| )2  1 (1 + 1+a 2 2π  . |Σ − Σa | ≤ √ |α|e|α| exp ϕ 2  2 τ • Brenner and Subrahmanyam [14] provided an elegant formula to compute an implied volatility that is accurate when a stock price is exactly equal to a discounted strike price: Σ≈ 2π C . τ S Feinstein [23] independently derived an essentially identical formula. • Corrado and Miller [17] provided an improved quadratic formula which is valid when stock prices deviate from discounted strike prices: Σ≈ 2π 1 S−K C− + τ S+K 2 7 (C − S − K 2 (S − K)2 ) − . 2 π • Bharadia et al. [7] derive a highly simplified but less accurate volatility approximation: Σ≈ 2π C − (S − K)/2 . τ S − (S − K)/2 • Chance [16] provides a direct method of obtaining an accurate estimate of the implied volatility of a call option. His estimate is based on the formula for at-the-money options developed by Brenner and Subrahmanyam [14]. The adjusted formula by Chance [16] is quite accurate for options no more than 20% in- or out-of-the-money and is simple to program and compute. Later, Chambers and Nawalkha [15] developed a simplified extension of the Chance [16] model. The approach taken in these two papers uses the first and second derivatives of the call price with respect to volatility. In addition, they need a reasonable estimate of volatility to serve as a starting point to the approximation. More recently, S. Li [40] used the Taylor series expansion to the third order for the standard normal cumulative distribution function N (x) and obtained new approximations that are valid for a wide band of option moneyness and time to expiration: – At-the-money calls: (S = K) √ 2 2 1 Σ≈ √ Z−√ τ T 6α 8Z 2 − √ , 2Z √ 2πC 1 where α = , and Z = cos arccos S 3 3α √ . In this case, Li’s formula is 32 significantly more accurate than Brenner-Subrahmanyam’s [14]. – In- or out-of-the-money calls: (Define η = 8 K that measures the moneyness of an S option: η = 1 represents at-the-money, η > 1 represents out-of-the-money, and η < 1 represents in-the-money.)   √  2 2Z − √ 6˜ 1 ˜ √ ˜ 8Z 2 − √ α˜  τ τ 2Z Σ≈ 4(η−1)2  α+ α2 −  ˜ ˜ 1+η  √  if ρ > 1.4, if ρ ≤ 1.4 2 τ √ 2π 2C 1 3˜ α ˜ + η − 1 , Z = cos arccos( √ ) , 1+η S 3 32 |X − S|S |η − 1| = and ρ = (C/S)2 C2 where α = ˜ Our goal is to overcome the challenges I.) and II.), and hence, to directly analyze the implied volatility. Indeed, this approach allows us to shed light on qualitative properties that would otherwise be more difficult to establish[6]. To accomplish these goals, we take the following steps: 1. we assume that the volatility Σ depends on the variables t (time) and St (stock price at time t), giving rise to the so-called local volatility model. 
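To make this modelling assumption concrete before the governing stochastic differential equation (1.6) is written out below, here is a minimal simulation sketch. It is not part of the dissertation; the CEV-type specification Σ(S, t) = σ₀ (S/S₀)^(β−1) used in it is only a hypothetical example of a state-dependent volatility.

```python
import numpy as np

def simulate_local_vol_paths(S0=100.0, mu=0.05, T=1.0, n_steps=250, n_paths=10_000,
                             sigma0=0.2, beta=0.7, seed=0):
    # Euler-Maruyama discretization of dS_t / S_t = mu dt + Sigma(S_t, t) dW_t,
    # with the hypothetical CEV-type local volatility Sigma(S, t) = sigma0 * (S/S0)**(beta - 1).
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    for _ in range(n_steps):
        Sigma = sigma0 * (S / S0) ** (beta - 1.0)
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        S = S * (1.0 + mu * dt + Sigma * dW)
        S = np.maximum(S, 1e-12)  # keep paths positive under the crude Euler step
    return S

# Terminal prices under a volatility that rises as the stock falls (beta < 1):
print(simulate_local_vol_paths().mean())
```

With beta < 1 the simulated volatility increases when the stock falls, which is one mechanism by which a local volatility model can produce non-constant implied volatilities across strikes.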
The dynamics of the underlying asset are then governed by the stochastic differential equation

$$\frac{dS_t}{S_t} = \mu\,dt + \Sigma(S_t, t)\,dW_t, \tag{1.6}$$

where µ, the expected rate of return of the stock, is constant; the local volatility Σ(x, t) satisfies certain smoothness and growth/decay conditions, which we state in the following section; and W_t is a standard Brownian motion. Since in this model Σ is a function of time and stock price, the stock price is no longer a geometric Brownian motion and the Black-Scholes-Merton formula no longer applies. However, one can still use a Feynman-Kac-like representation formula to derive a parabolic partial differential equation for the price C(S_t = S, t; K, T) of a call option:

$$C_t(S,t) + \frac{\Sigma^2(S,t)}{2}\, S^2 C_{SS}(S,t) + r S C_S - r C(S,t) = 0 \quad \text{in } (0,+\infty)\times(0,T), \tag{1.7a}$$
$$C(S,T) = (S-K)^+. \tag{1.7b}$$

We derive equation (1.7) in the appendix. To simplify the analysis, we transform the equation through the change of variables, used throughout the paper,

$$\tau = T - t, \qquad x = \ln(S/K) + r\tau.$$

From now on, T and K are fixed. Even though in equation (1.7) we consider only one underlying, many of our results can be extended to higher dimensions, say d ≥ 1. Unless otherwise specified, we denote Ω_T := R^d × (0, T), Ω := R^d × [0, T], and Ω_0 := R^d × [0, T), where T can be any positive number and d can be one or larger, depending on the context. Then the normalized price v(x, τ) = e^{rτ} C(S, T − τ; K, T)/K satisfies

$$v_\tau = \frac{1}{2}\,\sigma^2(x,\tau)\,(v_{xx} - v_x) \quad \text{in } \Omega_T, \tag{1.8a}$$
$$v(x,0) = (e^x - 1)^+. \tag{1.8b}$$

Some fundamental questions, such as the existence, uniqueness, and positivity of the solution to equation (1.8) under various assumptions, are discussed in detail in Chapter 2. The main result of that chapter removes the assumption of uniform bounds on the local volatility previously made in [5]. We do so by first freezing σ whenever it is greater than m or less than 1/n; we denote such functions by σ_n^m. Then we take a sequence of mollified versions of each σ_n^m, denoted σ_n^{m,ε}. Replacing σ by σ_n^{m,ε} in equation (1.8) yields a sequence of uniformly parabolic differential equations with Hölder continuous coefficients. Hence, the existence of the solutions to the corresponding uniformly parabolic equations, and their properties, are known from classical theorems pertaining to parabolic differential equations. We then show convergence of the solutions to these equations of the form (1.8), using a generalized maximum principle, uniform interior Schauder estimates, the Sobolev embedding theorem, and the Arzelà-Ascoli theorem. Next, we prove that the limit of the solutions lies in C(Ω) ∩ W^{2,1,p}_{loc}(Ω_T) and is a solution of the limiting equation (1.8) almost everywhere. We conclude by showing the positivity, monotonicity, and uniqueness of the solution to (1.8).

2. In Chapter 3, we view the implied volatility ϕ as a suitable function such that v := u(x, τϕ²) satisfies (1.8). The function u has an explicit definition and is monotonically increasing in the time variable. Consequently, we obtain the implied volatility ϕ as the unique solution to a well-posed, degenerate (at τ = 0) initial value quasilinear parabolic problem:

$$(\tau\varphi^2)_\tau - \sigma^2(x,\tau)\left[\left(1 - \frac{x\varphi_x}{\varphi}\right)^2 + \tau\varphi\varphi_{xx} - \frac{1}{4}\tau^2\varphi^2\varphi_x^2\right] = 0 \quad \text{in } \Omega.$$

We also introduce the "associated local volatility" σ[ψ] of any suitable given function ψ.
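As an illustration of how these objects fit together (a sketch only, not the scheme developed in Chapter 6; the local volatility σ below is a hypothetical choice), equation (1.8) can be integrated by an explicit finite-difference method, and the implied volatility can then be read off pointwise by inverting u(x, ·) in its time argument, since v = u(x, τϕ²).

```python
import numpy as np
from math import erf, sqrt

def u_heat(x, t):
    # Explicit solution of u_t = (u_xx - u_x)/2 with u(x, 0) = (e^x - 1)^+.
    if t <= 0:
        return max(np.exp(x) - 1.0, 0.0)
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return np.exp(x) * N(x / sqrt(t) + sqrt(t) / 2.0) - N(x / sqrt(t) - sqrt(t) / 2.0)

def solve_normalized_price(sigma, x_min=-3.0, x_max=3.0, nx=301, tau_max=0.5, dtau=1e-4):
    # Explicit finite differences for v_tau = 0.5 * sigma(x,tau)^2 * (v_xx - v_x), eq. (1.8),
    # on a truncated x-interval, with the payoff held fixed on the boundary.
    x = np.linspace(x_min, x_max, nx)
    dx = x[1] - x[0]
    v = np.maximum(np.exp(x) - 1.0, 0.0)       # initial condition (1.8b)
    n_steps = int(round(tau_max / dtau))       # dtau must satisfy dtau <= dx^2 / max(sigma^2)
    for k in range(n_steps):
        tau = k * dtau
        s2 = sigma(x[1:-1], tau) ** 2
        vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        vx = (v[2:] - v[:-2]) / (2.0 * dx)
        v[1:-1] += dtau * 0.5 * s2 * (vxx - vx)
    return x, v

def implied_vol_at(xi, tau, x, v):
    # Recover phi(xi, tau) from v = u(x, tau * phi^2) by bisecting u in its time argument.
    vi = np.interp(xi, x, v)
    lo, hi = 1e-12, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if u_heat(xi, mid) < vi else (lo, mid)
    return sqrt(0.5 * (lo + hi) / tau)

# Hypothetical local volatility sigma(x, tau) = 0.2 + 0.1 * tanh(x):
sigma = lambda x, tau: 0.2 + 0.1 * np.tanh(x)
x, v = solve_normalized_price(sigma)
print(implied_vol_at(0.0, 0.5, x, v))   # at-the-money implied volatility at tau = 0.5
```

The dissertation avoids exactly this pointwise inversion near τ = 0, where u is nearly flat in its time argument, which is one motivation for working with the quasilinear equation for ϕ directly.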
This concept, together with the generalized comparison and maximum principles, as well as the monotonicity of u in the time variable, allows us to compare the implied volatility with any suitable function ψ, a key property for establishing the asymptotics in the following chapters.

3. The next step is to find the asymptotics of the solution for options near expiry (τ = 0) and for deep in-the-money (S_T /K ≫ 1) or far out-of-the-money (S_T /K ≪ 1) options. Deriving and proving those asymptotics are the main results of Chapters 4 and 5, respectively. In Chapter 4, we solve the formal limiting equation of (3.7) at expiry and use its solution as a benchmark for sequences of sub- and supersolutions to (3.7) in Ω_T. Then we prove that actual convergence takes place, through a generalized comparison principle. This theorem allows us to compare solutions to parabolic equations on a bounded domain of Ω_T. The proof of this comparison principle is a main mathematical contribution of this section. It is carried out through extensive application of fundamental solutions of parabolic equations with bounded and unbounded coefficients.

4. The methodology for proving the asymptotics in Chapter 5 is similar to that of Chapter 4. However, we are more focused there on finding the right auxiliary functions through which the comparisons are carried out.

5. Chapter 6 is dedicated to the numerical simulation of the implied volatility. The obstacle remains that the governing equation is degenerate at the initial time. To circumvent this difficulty, we numerically solve for another variable that is in one-to-one correspondence with ϕ, using a finite difference method and the asymptotics derived in the previous two chapters. In addition, we give one-term and two-term approximation formulae for the implied volatility. Our numerical results illustrate the "volatility smile", a long-observed pattern in which at-the-money options tend to have lower implied volatilities than in- or out-of-the-money options. Additionally, the comparison between the asymptotic formula and the numerically computed smile shows satisfactory agreement.

Chapter 2

Existence of the Solution to Equation (1.8)

We want to understand the behavior of the implied volatility near expiry in a more general setting than has previously been considered. Instead of assuming that the local volatility σ(x, τ) satisfies $0 < \underline{\sigma} \le \sigma \le \overline{\sigma} < \infty$ as in [5], we allow the more realistic situation:

Condition H0:

1. σ² is positive and uniformly continuous in Ω.

2. It follows from 1. that for any compact set C ⊂ Ω there exists n(C) ∈ N such that 1/n(C) ≤ σ ≤ n(C). However, as |x| → ∞, σ may grow to infinity, or decay to zero, at a rate that is less than linear, uniformly in τ. Furthermore, there exist a step function p(x), with −1 < p(x) < 1, which may have a discontinuity only at zero, and a constant κ₁ > 1 such that

$$\frac{1}{\kappa_1}\,(1+x^2)^{p(x)/2} \;\le\; \sigma^2(x,\tau) \;\le\; \kappa_1\,(1+x^2)^{p(x)/2} \quad \text{in } \Omega.$$

From now on, we denote $p_+ = \max_{x\in\mathbb{R}}\{p(x), 0\}$ and $p_- = \max_{x\in\mathbb{R}}\{-p(x), 0\}$.

2.1 Auxiliary Notions and Function Spaces

In order to make precise statements, let us specify the technical conditions that we impose, and state some notation, terminology, and function spaces that we shall need.

2.1.1 Auxiliary Notation

The parabolic distance between P = (x_P, t_P) and Q = (x_Q, t_Q) in Ω is defined by

Definition 2.1.
$$d(P, Q) = \big(\|x_P - x_Q\|^2 + |t_P - t_Q|\big)^{1/2}, \tag{2.1}$$
where ‖x‖ is the Euclidean norm $\big(\sum_{i=1}^d x_i^2\big)^{1/2}$.

The concept of Hölder continuity in this paper will always be defined with respect to the metric (2.1).
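A small numerical aside, not part of the dissertation, may help fix this convention: because time differences enter (2.1) without being squared, a function of t alone that is only 1/2-Hölder in t with respect to the Euclidean metric is Lipschitz with respect to d. The sketch below evaluates (2.1) and an empirical Hölder quotient for an arbitrary sample function.

```python
import numpy as np

def parabolic_distance(P, Q):
    # d(P, Q) = (||x_P - x_Q||^2 + |t_P - t_Q|)^(1/2), definition (2.1);
    # the space coordinates may be d-dimensional, the last entry is time.
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return np.sqrt(np.sum((P[:-1] - Q[:-1]) ** 2) + abs(P[-1] - Q[-1]))

def holder_quotient(f, points, alpha):
    # Largest sampled ratio |f(P) - f(Q)| / d(P, Q)^alpha: a crude lower bound
    # on the C^alpha seminorm taken with respect to the parabolic metric.
    best = 0.0
    for i, P in enumerate(points):
        for Q in points[i + 1:]:
            dist = parabolic_distance(P, Q)
            if dist > 0:
                best = max(best, abs(f(P) - f(Q)) / dist ** alpha)
    return best

# Hypothetical sample function: f(x, t) = sqrt(|t|) gains a full order in this metric,
# since |sqrt|t1| - sqrt|t2|| <= sqrt(|t1 - t2|) <= d(P, Q).
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
f = lambda p: np.sqrt(abs(p[1]))
print(holder_quotient(f, pts, alpha=1.0))   # bounded above by 1
```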
We define the round cubes in Rd+1 : Qr = (−r, r) × (−r2 , 0], Qr (x, t) = Qr + (x, t). 2.1.2 (2.2a) (2.2b) Function Spaces As is classical when studying parabolic problems, we make use of the following anisotropic Sobolev spaces: 15 Definition 2.2. W 2,1,p (ΩT ) = |wxx |p + |wτ |p + |w|p < ∞ , w (2.3a) ΩT W 2,0,∞ (ΩT ) = {w| |wxx |∞ + |w|∞ < ∞}, (2.3b) 2,1,p 2,0,∞ endowed with their natural norms. Similarly, we define Wloc (ΩT ) and Wloc ◦ 2,1,p Wloc (ΩT ) = 2,0,∞ Wloc (ΩT ) as w D ¯ |wxx |p + |wτ |p + |w|p < ∞, ∀D ⊂ ΩT , (2.4a) ◦ ¯ (ΩT ) = {w| |wxx |∞(D) + |w|∞(D) < ∞, ∀D ⊂ ΩT }. (2.4b) Definition 2.3. Given D ⊆ Ω. 1. C(D) is the set of all real-valued, continuous functions v(x, t) defined on D such that the norm v C(D) = sup |v(x, t)| (2.5) (x,t)∈D is finite. 2. Given α ∈ (0, 1). (a) Define C α (D) as the subspace of C(D) consisting of all functions v such that the norms |v(P ) − v(Q)| d(P, Q)α P,Q∈D v C α (D) = v C(D) + sup P =Q are finite. 16 (2.6) (b) Moreover, we say v is in C 2+α (D) whenever the norm v C 2+α (D) = v C α (D) + vx C α (D) + vxx C α (D) + vt C α (D) (2.7) is finite. 3. As convention, C 2,1 (D) is the set of all functions in D having continuous second spacederivative and first time-derivative. Some results concern H¨lder continuity in x but not in t. We therefore define o C ∗,α (D), C ∗,2+α (D) C ∗,α (D) , , and C ∗,2+α (D) as above. Now, we give notations and spaces relating interior Schaulder estimates. These estimates were first introduced by Brandt [13]. Knerr [33] then extended Brandt’s result to give an estimate on the time derivative of the solutions. Their results show that if the coefficients of L is locally H¨lder, and bounded below, then the H¨lder norms of derivatives of the o o solution to Lu = f can be bounded above by the norms of f and the initial condition of u. Consequently, as we will show in the proof of our existence theorem, if σ were locally H¨lder, o then the solution to (1.8) would be in the H¨lder space, which we define next. o ˜ Definition 2.4. Let D be a bounded domain in Ω := R × R+ . For any point P = (x, t) in ˜ ˜ D we denote by T (P ) the set of points Q on the boundary of D which can be connected to P by a simple continuous arc along which the t coordinate is nondecreasing from Q to P . For ˜ all P , Q in D, we introduce the metric √ |P − Q| = max{|xP − xQ |, 4 2 |tP − tQ |1/2 }. 17 Consequently, let dP = inf{|P − Q| : Q ∈ T (P )}, and dP Q = min{dP , dQ }. ˜ 1. Given D ⊆ D, C(D) is the set of all real-valued, continuous functions v(x, t) defined on D such that the norm v C(D) = sup |v(x, t)| (x,t)∈D is finite. 2. Given α ∈ (0, 1). (a) Let C α (D) be the space of functions in C(D) consisting of all functions v such that the norms |v(P ) − v(Q)| α P,Q∈D |P − Q| v C α (D) = v C(D) + sup P =Q are finite. (b) For m = 0, 1, 2 and 0 < α < 1 and for any sufficiently smooth function v : D → R we define 0,m v D = sup |dm v(P )|, P α,m HD (v) = sup dm+α P,Q P,Q∈D α,m v D (2.8) P ∈D 0,m |v(P ) − v(Q)| , |P − Q|α α,m = v D + HD (v). 18 (2.9) (2.10) 2+α 3. The the space HD is the collection of functions such that α,1 α,2 α,2 v 2+α = v α + vx D + vxx D + vt D D D (2.11) are finite. ∗,α,m Some results concern H¨lder continuity in x but not in t. We therefore define HD o ∗,α,m · D , and ∗,2+α · D , as above except in (2.9) we further restrict P and Q so that P = Q + ηe for some scalar η, where e = (1, 0). 2+α Furthermore, we say v is in H 2+α (Ω) if v ∈ HD for any bounded domain D ∈ Ω. 
◦ ˜ ¯ ˜ Note: D = D is allowed here but later we will require D ⊂ D so that dP is bounded below for P ∈ D. Remark 2.5. The estimates using norms of type C 2+α (D) are called boundary estimates, 2+α and the ones that are using norms of HD type are called interior estimates [24]. Definition 2.6. Let 1 [u]p,α (x, t) = sup α r r |u − u(x, t)|p 1 p . (2.12) Qr (x,t) As in [42], we say a function is Cp,α at (x, t) if [u]p,α (x, t) < ∞. We shall show, in both cases, our solution to (1.8) has the following bounds in ΩT , and it is the only solution in this class. Definition 2.7. We define the set of functions defined in ΩT that has no more than expo19 nential growth in the pace variable as ET := {w|, ∃ C, β > 0, ∀(x, τ ) ∈ ΩT , |w(x, τ )| ≤ Ceβ|x| }. 2.1.3 (2.13) Parabolic Terminology Consider the operator Lu ≡ :=   d ∂u − ∂t  ∂u −F ∂t ai,j (x, t) i,j=1 ∂ 2u ∂ 2u ∂xi ∂xj d + i=1   ∂u bi (x, t) + c(x, t)u  ∂xi ∂u , u, x, τ ∂xi ∂xj ∂xi , (2.14) (2.15) in a (d+1)-dimensional domain Ω := Rd × [0, T ], where T can be any positive number. The coefficients aij , bi , c are defined in Ω. We always take (aij (x, t)) to be a symmetric matrix, i.e. aij = aji . If the matrix (aij (x, t)) is positive definite, i.e, if for every real vector ξ = (ξ1 , · · · , ξn ) = 0, aij (x, t)ξi ξj > 0, then we say that the operator L is of parabolic type (or that L is parabolic) at point (x, t). If L is parabolic at all the points of Ω then we ¯ ¯ say that L is parabolic in Ω. If there exist positive constants λ0 , λ1 such that, for any real vector ξ, d ¯ λ0 |ξ|2 ≤ ¯ aij (x, t)ξi ξj ≤ λ1 |ξ|2 (2.16) i,j=1 ¯ ¯ for all (x, t) ∈ Ω then we say that L is uniformly parabolic in Ω. We refer λ0 and λ1 as parabolic constants. Besides the parabolic constants, we also make use of the following bounds on (ai,j (x, t)), 20 the inverse matrix to ai,j . From the above inequalities, it follows that d λ0 |ξ|2 ≤ aij (x, t)ξi ξj ≤ λ1 |ξ|2 , (2.17) i,j=1 for some 0 < λ0 , λ1 < ∞. 2.2 Primary Results on the Fundamental Solution, Green’s Function, and Properties of Solutions to Parabolic Equations Let us review some concepts concerning parabolic operators and associated Cauchy problems. Given a function f (x, t) in Ω and a function ϕ(x) in Rd , the problem of finding a function u(x, t) satisfying the following parabolic equation, and the initial condition Lu(x, t) = f (x, t) in Ω0 ≡ Rd × (0, T ] u(x, 0) = ϕ(x) on Rd (2.18a) (2.18b) is called a Cauchy problem (in the strip 0 ≤ t ≤ T ). The solution is always required to be continuous in Ω. The functions f (x, t) and ϕ(x) will be assumed to satisfy the boundedness conditions |f (x, t)| ≤ const. exp[h|x|2 ], (2.19) |ϕ(x)| ≤ const. exp[h|x|2 ] (2.20) 21 where h is any positive constant satisfying h< λ0 . 4T (2.21) Let us now mention the following assumptions, as they are critical conditions in classical existence theorems for equations of parabolic type. (A 1) L is uniformly parabolic in Ω; (A 2) aij ∈ C α (Ω), bi , c ∈ C ∗,α (Ω) for some 0 < α < 1. In the following, we recall i.) the definition of the fundamental solution to parabolic operator of form (2.14); ii.) sufficient conditions for the existence of a fundamental solution; iii.) the expression of solution to the Cauchy problem (2.18), which is closely related to the fundamental solution of the corresponding parabolic equation. Definition 2.8. 
[24] A fundamental solution of Lu = 0 is a function Γ(x, τ ; ξ, s) defined for all (x, τ ) ∈ Ω, (ξ, s) ∈ Ω, τ > s, which satisfies the following conditions: i) for each fixed (ξ, s) ∈ Ω, it satisfies, as a function of (x, τ ) (x ∈ Rd , s < τ < T ), the equation LΓ = 0. ii) Given any continuous function f = f (x) in Ω, such that |f | ≤ const · exp[h|x|2 ] 22 for some positive constant h < λ0 , one has, for all x0 ∈ Rd , 4T lim τ →s+ Rd Γ(x0 , τ ; ξ, s)f (ξ)dξ = f (x0 ). (2.22) Theorem 2.9. (Existence of the Fundamental Solution to operator (2.14))[24], [30] Consider operator L defined in (2.14), and assume (A 1), and (A 2) hold. Then the fundamental solution Γ(x, t; ξ, τ ) of Lu = 0 exists. Theorem 2.10. (Existence of the solution to the Cauchy problem (2.18))[24] Suppose that L satisfies (A 1), and (A 2) and let f (x, t), ϕ(x) be continuous functions in Ω and Rd respectively, satisfying (2.19). Assume also that f (x, t) is locally H¨lder continuous (exponent o α) in x ∈ Rd , uniformly with respect to t. Then the function t u(x, t) = Rd Γ(x, t; ξ, 0)ϕ(ξ)dξ − 0 Rd Γ(x, t; ξ, τ )f (ξ, τ )dξdτ (2.23) is a solution of the Cauchy problem (2.18) and |u(x, t)| ≤ const. exp[κ|x|2 ] for (x, t) ∈ Ω, (2.24) where κ is a constant depending only on h, λ0 , and T . Next, we give an extension of Friedman’s [24] Maximum Principle of solutions to parabolic inequalities on unbounded domains: Theorem 2.11. Let L be a parabolic operator with continuous coefficients in ΩT , and let ai,j (x, y), |bi (x, y)| ≤ M (|x|ε + 1), 23 |c(x, y)| ≤ M |x|2−ε , (2.25) be satisfied for some M > 0, and 0 ≤ ε ≤ 1. Assume Lu ≥ 0 in ΩT and that u(x, t) ≥ −B exp[β|x|2−ε ] in Ω (2.26) for some positive constants B and β. If u(x, 0) ≥ 0 in Rd , then u(x, t) ≥ 0 in Ω. Similarly, if u(x, 0) ≤ 0 in Rd then u(x, t) ≤ 0 in Ω. 1 Proof. For the moment, fix µ, ν > 0, and κ > β. Consider the auxiliary function H(x, t) = exp[ κ γ(|x|) + νt] 1 − µt 0≤t≤ 1 2µ (2.27) where γ(|z|) is a C 2 function defined on Rd such that 2 for r > 1   2 r  γ(r) =    2−ε r  for r ≤ 1/2. We shall see, fix κ, by properly choosing µ and ν, LH/H is non-negative on R × [0, 1/(2µ)]. 1 Recall ΩT := Rd × (0, T ), Ω := Rd × [0, T ], and Ω0 := Rd × (0, T ]. 2 We make C 2 connection at for r = 1 and r = 1/2. 24 • First, let us consider |x| > 1, LH H κγ(|x|) κ = µ +ν − (2 − ε)|x|−ε ( 2 1 − µt (1 − µt) − κ (2 − ε) 1 − µt 2 κ − (2 − ε)|x|−ε ( 1 − µt ≥ µ − − |x|−2ε d ai,i ) i=1 κ − (2 − ε) ε|x|−2−ε ( 1 − µt d ai,j xi xj ) i,j=1 d bi x i ) − c i=1 κ κ |x|2−ε + ν − (2 − ε)|x|−ε [dM (|x|ε + 1)] 2 1 − µt (1 − µt) 2 κ κ (2 − ε) |x|−2ε + (2 − ε) ε|x|−2−ε [d2 M (|x|ε + 1)|x|2 ] 1 − µt 1 − µt κ (2 − ε)|x|−ε [dM (|x|ε + 1)|x|] − c 1 − µt • Next, consider |x| ≤ 1, LH H κγ(|x|) 2κ = µ +ν − ( 2 1 − µt (1 − µt) − 2κ ( 1 − µt ai,i ) − i=1 d 2 2κ ( ai,j xi xj ) 1 − µt i,j=1 d bi x i ) − c i=1 2κ κ|x|2 +ν − ( = µ 1 − µt (1 − µt)2 2κ − ( 1 − µt d d ai,i ) − i=1 d bi xi ) − c. i=1 25 d 2 2κ ( ai,j xi xj ) 1 − µt i,j=1 Since ai,j , bi , and c are continuous, and since the set 0 ≤ |x| ≤ 1 is compact and 1 ≤ 1 ≤2 1 − µt LH − ∂H 1 ∂t on |x| ≤ 1, i.e., , one can find both upper and lower bounds for for 0 ≤ t ≤ 2µ H the terms after the first one. Thus, given any κ > 0 we can choose sufficiently large positive numbers µ, ν such that LH ≥0 H for all x ∈ Rd , t ∈ 0, 1 . 2µ (2.28) Let v = u/H, where H is defined by (2.27) with a fixed κ > β and with µ, ν such that (2.28) holds. From (2.26) it follows that lim inf v(x, t) ≥ 0, |x|→∞ uniformly with respect to t ∈ [0, 1/(2µ)]. 
Since Lu = L(Hv), v satisfies the equation   d ∂v ¯ − Lv ≡ ∂t  ai,j i,j=1 ∂ 2v ∂xi ∂xj d + i=1   ¯i ∂v + cv = f , ¯ b ¯  ∂xi ¯ where f = (Lu)/H ≥ 0 and d ¯i = bi + 2 b ai,j j=1 ∂H/∂xj , H c=− ¯ LH . H Now, for any δ > 0, v(x, t) + δ > 0 on t = 0 and on |x| = R, 0 ≤ t ≤ 1/2µ provided R ¯ ¯ ¯ is sufficiently large. Since L(v + δ) = f − cδ ≥ 0, by the classical maximum principle [24], v(x, t) + δ > 0 if |x| < R, 0 ≤ t ≤ 1/2µ. Taking (x, t) to be fixed and δ → 0, it follows 26 that v(x, t) ≥ 0 on Rd × [0, 1/(2µ)]. The same is true for u(x, t) = H(x, t)v(x, t). We can now proceed step by step in time to prove the positivity of u in Ω0 , since the choice of µ is uniform in Ω0 , i.e., only depends on M , ε, κ. 2,1,p Remark 2.12. Theorem 2.11 can be extended to the case where u ∈ C(Ω) ∩ Wloc (ΩT ) satisfies Lu ≥ 0 a.e. in ΩT . [5] 2.3 The Existence of the Solution to the Non-uniform Parabolic Equation (1.8) The difficulty in proving the existence and uniqueness of the solution to (1.8) are (i) this is a non-uniformly parabolic equation on an unbounded domain; (ii) the coefficient σ is only assumed to be uniformly continuous. Our strategy is to first replace σ by its mollified cut-off m versions σn ε , as shown in (2.29)–(2.31). Then show the convergence of the corresponding solutions as m, n → ∞ and ε → 0. Last, but not least, we show the limit satisfies the limiting equation. We would also like to point out, throughout our proof, we work with w = v − (ex − 1), if v exists. Notice that w satisfies the same differential equation, and it has the same smoothness as v, but it has bounded initial value (ex − 1)− . This additional boundedness allows us to show convergence in several places that might otherwise be difficult. Once we established the properties of w, we will add ex − 1 to w and get back to v. Now, we introduce the following “cut-off” volatility functions, which will be used below: 27 For all n, m ∈ N, define    m       m (x, τ ) = σn σ(x, τ )       1/n  σ(x, τ ) ≥ m, 1/n < σ(x, τ ) < m, (2.29) σ(x, τ ) ≤ 1/n. Similarly, for each m, n ∈ N, we define σ m (x, τ ) and σn (x, τ ) as the cut-off version of σ from above and below, respectively. Next, we take a standard mollifier ρε (x, τ ), which satisfies ∞ ρε ≥ 0, ρε ∈ C0 (R2 ), supp(ρε ) ⊂ Bε (0), ρε = 1. Consequently, (2.30a) (2.30b) ρε (x, τ ) → δ(x, τ ) as ε → 0 in distribution sense. (2.30c) m Fix m,n, and define σn (x, −τ ) = σn (x, τ ) for all τ ≥ 0, and set for all ε > 0 ˜m m σn ε (x, t) = ρε ∗ σn (x, t), ˜m (2.31) where ∗ denotes the convolution in R2 . Remark 2.13. Given continuous functions hi , i = 1, 2, in Ω, and ρε . If both ρε ∗ hi are locally integrable, and h1 ≥ h2 in Ω, then ρε ∗ h1 ≥ ρε ∗ h2 in ΩT . To see why, we take 28 (x, t) ∈ ΩT , and the difference ρε ∗ h1 (x, t) − ρε (x, t) ∗ h2 T ρε (x − ξ, t − τ ) · (h1 − h2 )(ξ, τ )dξdτ = 0 R ≥0. m m In particular, given (m, n), 1/n ≤ σn ε ≤ m in Ω, uniformly in ε. Moreover, given ε, σn ε is non-increasing in n and non-decreasing in m in Ω. m Note that σn ε ∈ C ∞ (Ω), but this, a priori, does not imply that it is uniform H¨lder o continuous. To that end, we need to show m Lemma 2.14. Fix m, n, and ε, (σn ε )2 ∈ C α (Ω) for all α ∈ (0, 1). Proof. 
We only need to show, for fixed m, n, and ε, sup m m |(σn ε )2 (x, τ ) − (σn ε )2 (x0 , τ0 )| (x0 ,τ0 ),(x,τ )∈Ω (|x − x0 |2 + |τ − τ0 |)α/2 <∞ Take (x0 , τ0 ), (x, τ ) ∈ Ω i) |x − x0 | ≥ 1 or |τ − τ0 | ≥ 1: m m |(σn ε )2 (x, τ ) − (σn ε )2 (x0 , τ0 )| (|x − x0 |2 + |τ − τ0 |)α/2 m m ≤|(σn ε )2 (x, τ ) − (σn ε )2 (x0 , τ0 )| ≤ m− 1 n m+ for all α ∈ (0, 1). 29 1 n for all α ∈ (0, 1). ii) |x − x0 | < 1 and |τ − τ0 | < 1: m m |(σn ε )2 (x, τ ) − (σn ε )2 (x0 , τ0 )| (|x − x0 |2 + |τ − τ0 |)α/2 m m m m |(σn ε )2 (x, τ ) − (σn ε )2 (x0 , τ0 )| |(σn ε )2 (x, τ ) − (σn ε )2 (x0 , τ0 )| ≤ max , |x − x0 | |τ − τ0 | for all α ∈ (0, 1). m m By the mean value theorem, it is sufficient to show sup(σn ε )2 , and sup(σn ε )2 are τ x Ω Ω finite. In fact, since m sup|(σn ε )τ | Ω m σn ε m is bounded above in Ω, it boils down to show sup|(σn ε )x |, and Ω are finite. It is straight forward that m |(σn ε )x |(x, τ ) = | m ρε (ξ − x, η − τ )σn (ξ, η)dξdη| x R R τ +ε x+ε =| m ρε (ξ − x, η − τ )σn (ξ, η)dξdη| x τ −ε x−ε τ +ε x+ε ≤m τ −ε x−ε |ρε (ξ − x, η − τ )| dξdη, x m m Therefore, sup|(σn ε )x | < ∞. Similarly, sup|(σn ε )τ | < ∞. Ω Ω Given m, n, and ε, define Lm ε := n ∂ 1 m − (σn ε )2 ∂t 2 30 ∂2 ∂ − ∂x2 ∂x . for all (x, τ ) ∈ ΩT . We now show the existence, uniqueness, and monotonicity of solution to m Lm ε wn ε = 0 n in ΩT m wn ε (x, 0) = (ex − 1)− , where (ex − 1)− =    1 − ex    0  (2.32a) (2.32b) if x ≤ 0 if x > 0. m Lemma 2.15. The solution, wn ε (x, τ ) to (2.32) exists and has the following properties m i) Smoothness: wn ε ∈ C(Ω) ∩ C 2,1 (ΩT ). m m ii) Positivity: wn ε (x, τ ) ≥ (ex − 1)− in Ω, and 0 < wn ε (x, τ ) < 1 in ΩT . m m iii) Monotonicity: (wn ε )τ (x, τ ) > 0 in ΩT . Hence, wn ε (x, τ ) > (ex − 1)− in ΩT . m iv) Uniqueness: wn is the only solution to (1.8) in the class ET . Proof. i) Smoothness. By [24], the solution to equation (2.32) exists, and can be written as: m wn ε (x, τ ) = Γ(x, τ ; ξ, 0)(eξ − 1)− dξ, (2.33) R where Γ is the fundamental solution to equation (2.32a), defined in Definition 2.8. Conm sequently, we have wn ε ∈ C 2,1 (ΩT ) from the smoothness of Γ and the boundedness of m the initial value. By Definition 2.8, we know that wn ε is continuous at τ = 0. m ii) 0 < wn ε < 1 in ΩT . 31 m Equation (2.33) implies |wn ε | ≤ 1. This estimate combined with Condition H0 allows m m us to use Theorem 2.11. Notice wn ε has nonnegative initial value, and wn ε − 1 has m non-positive initial value, we conclude 0 ≤ wn ε ≤ 1 in Ω. One then applies the strong m maximum principle [24] and gets 0 < wn ε < 1 in ΩT . m iii) We now show (wn ε )τ > 0, for τ > 0. m m Denote (wn ε )τ by zn ε , equation (2.32a) can be rewritten as 1 mε 2 m m m (σ ) [ (wn ε )xx − (wn ε )x ] = zn ε 2 n in ΩT . (2.34) Furthermore, differentiating (2.32a) w.r.t. τ , one gets 1 m m m m m (wn ε )τ τ − 2σn ε (σn ε )τ ((wn ε )xx − (wn ε )x ) 2 1 m m m − (σn ε )2 [(wn ε )τ xx − (wn ε )τ x ] = 0 in ΩT , 2 i.e., 1 m (σ m ε )τ m m m m (zn ε )τ − (σn ε )2 [(zn ε )xx − (zn ε )x ] − 2 n ε zn ε = 0 in ΩT . m 2 σn (2.35) From (2.34), the facts Dx (ex − 1)− = Dxx (ex − 1)− in R\{0}, Dx (ex − 1)− |0+ = 1, Dx (ex − 1)− |0− = 0, one sees Dxx (ex − 1)− |x=0 = δ0 (x), the Dirac mass at x = 0, in the distribution sense. One then defines 1 m m zn ε (x, 0) = (σn ε (0, 0))2 δ0 (x), 2 32 (2.36) m in the distributional sense. Plugging (2.36) into (2.23), one realizes zn ε (x, τ ) is the fun- 1 m damental solution to the uniform parabolic equation (2.35) multiplied by (σn ε (0, 0))2 . 
2 m m Therefore, by [25], 0 ≤ zn ε in Ω, and zn ε > 0 in ΩT . iv) Uniqueness. See [24] and (2.33). For the rest of this section, we send m, n → ∞, and then ε → 0. We aim to show m convergence of wn ε to a solution to the limiting equation. The first stage is to send m, n → m ∞, and show convergence of wn ε to a classical solution to the limiting equation. In this m stage, we use monotonicity of (wn ε ) in m and n, and a interior Schauder estimates by [33]. Let us start with the following lemma. m m Lemma 2.16. wn ε , the solution to (2.32) increases as σn ε increases. In particular, given m ε m ε m ε (ni , mi , εi ), i = 1, 2. If σn11 1 ≥ σn22 2 in Ω, then for the corresponding solutions, wn11 1 ≥ m ε wn22 2 in Ω0 . m ε Proof. Suppose wi (x, t) := wnii i , i = 1, 2 are solutions to the equation (2.32) with correm ε sponding parameters σi := σnii i in Ω. Let ∆ := w1 (x, τ ) − w2 (x, τ ), then ∆ satisfies σ1 2 − 1 (w2 )τ σ2 1 ∆τ − (σ1 )2 (∆xx − ∆x ) = 2 in ΩT ∆(x, 0) = 0 on R. By Lemma 2.15, (w2 )τ > 0 in ΩT . From our assumption, σ1 2 σ2 − 1 ≥ 0 in Ω, the right hand side of the above equation is non-negative in Ω. Now, one can employ Theorem 2.11, to the above equation and conclude ∆ ≥ 0 in Ω0 . That is, w1 ≥ w2 in Ω0 . 33 m Remark 2.17. By Lemma 2.16, given m, ε, the sequence wn ε is non-increasing with respect m m to n. Similarly, for fixed n, ε, wn ε is non-decreasing with respect to m. Moreover, wn ε is bounded above by 1, and bounded below by 0, uniformly for all m, n, ε. Therefore, the sequence m m wn ε converges to wmε , say, as n → ∞. Similarly, if one lets m → ∞, then wn ε converges ε on Ω0 to wn , say. m Next, we show convergence of various derivatives of wn ε (x, τ ) as m → ∞, or n → ∞. Let us first mention the following interior estimates ([13] and [33]) on solutions. Proposition 2.18. Suppose v : D → R is a classical solution to (1.8) on a bounded domain D ⊂ Rd × R+ and there exist constants κ, ν > 0, and α, 0 < α < 1, such that σ 2 C ∗,α (D) ≤ κ (2.37) σ 2 (x, t) ≥ ν (2.38) and hold. Then there exists a constant C > 0, depending only on κ, ν, α, and d, such that: v 2+α ≤ C v 0 . D D (2.39) m We shall provide local estimates on the derivatives of wn ε , which are needed in several places. ◦ m ¯ Lemma 2.19. Let D ⊂ D ⊂ ΩT . For each fixed ε and n the sequence {wn ε } is uniε formly bounded in C α (D) and converges to a function wn . Furthermore, the derivatives m m m {(wn ε )x }, {(wn ε )xx }, and {(wn ε )τ } are also uniformly bounded in C α (D) and converges 34 ε to the respective derivatives of wn . ◦ ◦ ¯ ˆ ¯ ˆ Proof. Fix any domain D ⊂ D ⊂ ΩT , let D ⊂ ΩT be a bounded domain such that D ⊂ D. ˆ Define the norms and distance functions given at the start of Section 2.1 in terms of D. For ˆ instance, for P ∈ D, T (P ) is the set of points Q in ∂ D which can be connected to P by a simple continuous arc along which the time coordinate is nondecreasing from Q to P . Now, define dP = inf{|P − Q| : Q ∈ T (P )}. Since inf {dP } > 0, · 2+α is equivalent to the usual D P ∈D ¯ ¯ norm on C 2+α (D) which is compactly embedded in C 2 (D). This, with Proposition 2.18 ε m m and the monotonicity of wn ε as m → ∞ implies wn ε converge to wn ∈ C 2+α uniformly as m → ∞. This establishes the lemma. The second stage is to let ε → 0. Note that by doing so, we will lose local uniform H¨lder o constant on σ ε , a key assumption in Proposition 2.18. As in [5], here we make use of the 2,1,p following Wloc estimates for the solutions to uniform parabolic equations [42]. Proposition 2.20. 
([42] Theorem 5.9) Let u be a continuous solution of ut − F ∂2 u, x, t ∂xi ∂xj = g(x, t) for some continuous function g, with u L∞ (Q ) < ∞, and let 1 |F (M, x, t) − F (M, y, τ )| , |M | + 1 M θ(x, t; y, τ ) = sup where the supremum is taken over the set of symmetric matrices. If vt − F ( ∂2 v, x, t) = 0 ∂xi ∂xj 35 has interior C1,1 estimates and ¯ ¯ lim θ(·, · ; y, τ ) L∞ (Qr (y,τ )) ≤ δ0 (p, λ0 , λ1 ) r→0 for all (y, τ ) ∈ Q1 , then for p > d + 1, |uτ |p + |uxx |p ≤ C Q1/2 Q1 |g|p + u L∞ (Q ) 1 , (2.40) where C only defends on the dimension d and the parabolic constants. We now give the main theorem of this section. Theorem 2.21. Given σ satisfies Condition H0, a strong solution, v(x, τ ) to (1.8) exists almost everywhere and 2,1,p 1. Smoothness: v ∈ C(Ω) ∩ Wloc (ΩT ) for any 1 < p < ∞. 2. Positivity: v(x, τ ) ≥ (ex − 1)+ in Ω, 3. Monotonicity: vτ (x, τ ) ≥ 0 in ΩT , and v(x, τ ) is strictly greater than v(x, 0) for τ > 0, i.e., v(x, τ ) > (ex − 1)+ in ΩT . 4. Growth rate and uniqueness: v(x, τ ) < ex in Ω, and it is the only solution to (1.8) in the class ET . Proof. 1) Our strategy to show the existence and smoothness of the solution to (1.8) is m to use a two-stage approximation then show the convergence as σn ε → σ. Stage I. Step 1. Fix n > 0, ε > 0, and let m → ∞. 36 ε m ¯ For given domain D in Ω, by Lemma 2.19, wn ε converges to wn uniformly on D ε ε ¯ as m → ∞. Moreover, Lε wn = 0 on D. Because D is arbitrary, wn ∈ C 2,1 (ΩT ) n ε ε and Lε wn = 0 in ΩT everywhere. Furthermore, wn has all the properties in Lemma n ε 2.15, except (wn )τ is only non-negative. Step 2. Fix ε, let n → ∞. Because of monotonicity in n, the same analysis as in Lemma 2.16 and Remark 2.17 ε ε shows that σn → σ ε as n → ∞. Note that wn is a classical solution to Lε v = 0 in n ΩT , we apply the same procedures as in Step 1. and conclude that wε is a classical m solution to Lε v = 0 in ΩT everywhere. In addition, it has all properties as wn ε listed in Lemma 2.15, expect it is only non-decreasing in τ . Remark 2.22. If σ were locally H¨lder continuous, then we would have no need o to use mollifiers and we would have the existence of the unique classical solution to (1.8) in H 2+α (ΩT ). Stage II. Now, letting ε → 0, we shall derive the existence of the solution v to (1.8). On any bounded domain D = (−R, R) × (τ1 , τ2 ), the σ ε are uniformly continuous, uniformly bounded, and bounded away from 0, uniformly in ε. Hence we have a family of uniformly parabolic operators with common upper and positive lower bounds on the ellipticity constants. Instead of solving a Cauchy problem, we treat (1.8) as a boundary problem, using wε restricted to the parabolic boundary to obtain interior estimates. Hence, we are in the same situation as [5]. By Proposition 2.20 and the uniform bounds on wε in Ω, we have uniform W 2,1,p (D) estimates on wε for all p > 1. By the Sobolev embedding theorem and the Arzel`-Ascoli theorem, a ε there is a subsequence w j which converges locally uniformly to w in D, a viscosity 37 2,1,p solution to the limiting equation. By [42] (Theorem 4.21) again, w ∈ Wloc and has second derivatives in x almost everywhere. Therefore, w satisfies the limiting equation point wise a.e. in D. Additionally, the whole family wε converges to the same limit, since uniqueness follows from the Maximum Principle. Adding ex − 1 to w, we get the existence and smoothness of v in the theorem. m 2) The positivity of the solution follows Lemma 2.15 that the sequence {vn ε } is non negative in Ω. 
3) monotonicity m m Since each vn ε = wn ε > 0 in ΩT , their limit, vτ must be non-negative in ΩT . τ τ We are left to show v(x, τ ) > v(x, 0) in ΩT . Suppose there is (x0 , τ0 ) ∈ Ω such that v(x0 , τ0 ) = v(x0 , 0). Since vτ ≥ 0, we conclude that v(x0 , τ ) ≡ v(x0 , 0) for 0 ≤ τ ≤ τ0 , and vτ (x0 , τ ) ≡ 0 on [0, τ0 ]. Recall the initial value of v: v(x, 0) = (ex − 1)+ ≥ 0 on Ω. ◦ If x0 ≤ 0 then v(x0 , τ0 ) ≡ v(x0 , 0) = 0, so v has a minimum in Ω, a contradiction to the maximum principle, which holds for all parabolic equations. So, x0 > 0. As before, with w(x, τ ) = v(x, τ ) − (ex − 1) then w(x0 , τ ) ≡ w(x0 , 0) = 0 for τ ∈ [0, τ0 ]. So w has a ◦ minimum in Ω, which contradicts to the maximum principle. So, x0 < 0. In conclusion, there is no (x0 , τ0 ) ∈ Ω such that v(x0 , τ0 ) = v(xo , 0), i.e., v(x, τ ) > v(x0 , 0) in ΩT . 38 m 4) The growth rate of v follows from the fact that each wn ε values in [0, 1) in Ω. 5) The uniqueness of v(x, τ ) as discussed in the first part of this proof, follows the Strong Maximum Principle (Theorem 2.11). 39 Chapter 3 Directly Computing the Implied Volatility 3.1 The Implied Volatility ϕ Let u be the solution to 1 ut = (uxx − ux ) in ΩT , 2 and u(x, 0) = (ex − 1)+ . (3.1) The explicit solution to (3.1) is readily seen to be u(x, t) = ex N x 1√ √ + t −N t 2 x 1√ √ − t , t 2 where 2 x y 1 N (x) = √ e− 2 dy. 2π −∞ 40 (3.2) Then the derivatives of u are, ux (x, t) = ex N uxx (x, t) = ex N √ x t √ + 2 t √ x t √ + 2 t > 0, (3.3a) 1 1 −√ √ 2π t √ t x 1 1 − √t − 2 √ e ut (x, t) = √ 2π 2 t √ x −( √ − 2t )2 t e , (3.3b) 2 >0 in ΩT . (3.3c) Additionally, we will make use of the following identities: 1 x uxt (x, t) = − , ut 2 t utt 1 x2 1 (x, t) = − + 2 − . ut 2t 2t 8 (3.4a) (3.4b) Given constant ϕ > 0, one sees that v(x, τ ) ≡ u(x, τ ϕ2 ) satisfies vτ = ϕ2 (vxx − vx ). 2 So, v is a solution to equation (1.8) if and only if σ ≡ ϕ. Thus, for constant volatility, ϕ may be determined by inverting u with respect to the time variable. More generally, if one can find ϕ(x, τ ) ≥ 0, so that v(x, τ ) = u(x, τ ϕ2 (x, τ )) (3.5) satisfies equation (1.8) for all (x, τ ) ∈ Ω, then ϕ(x, τ ) is called the implied volatility. One main result in this chapter is to find an equation which ϕ(x, τ ) satisfies. In the next chapter, we will study its asymptotics as τ → 0 (near expiry) and x → ±∞ (far out-of-themoney/deep in-the-money option). 41 We now state an existence, uniqueness lemma for the implied volatility. Lemma 3.1. Given v(x, τ ), the solution to (1.8), the choice of ϕ as described in (3.5) exists, is unique, and positive in ΩT . Proof. To find ϕ(x, τ ), one just inverts u(x, t) with respect to the time variable. Since v exists, (e+ − 1)+ ≤ v(x, τ ) < ex and is unique (Theorem 2.21). On the other hand, u(x, 0) = (ex − 1)+ , lim u(x, τ ) = ex , and ut > 0 (equation (3.3) ) in Ω, the inverse τ →∞ function theorem [1] assures us the existence and uniqueness of such inverse. The positivity of ϕ in ΩT = R×(0, T ) follows from the monotonicity of v in time. If there is (x0 , τ0 ) ∈ ΩT such that ϕ(x0 , τ0 ) = 0, then, v(x0 , τ0 ) = v(x0 , τ0 ϕ2 (x0 , τ0 )) = v(x0 , 0), a contradiction to Theorem 2.21. However, as stated in the introduction section, our strategy is not to invert u(x, τ ϕ2 ). Instead, we see the implied volatility as the unique solution to a well-posed degenerate quasilinear parabolic problem. 2,1,p For any ψ ∈ Wloc (ΩT ), denote by H the quasilinear operator [5] H[ψ] ≡ H(x, τ, ψ, Dψ, D2 ψ) = (1 − x ψx 2 1 2 ) + τ ψψxx − τ 2 ψ 2 ψx . ψ 4 (3.6) Theorem 3.2. 
Under Condition H0, suppose that the implied volatility ϕ is defined by (3.5) 2,1,p where v and u are solutions to (1.8) and (3.1), respectively. Then, ϕ ∈ Wloc (ΩT ) for all p > 1, and it satisfies (τ ϕ2 )τ − σ 2 (x, τ )H[ϕ] = 0 a.e. in ΩT . 42 (3.7) 2,1,p Proof. We first show that ϕ ∈ Wloc (ΩT ). Therefore, we need estimates on ϕ and its various (weak) derivatives. We do so by the following two steps. 1. We express ϕ, and its derivatives in terms of u, v and their derivatives. Denote u−1 (y; x) : R+ → R+ the inverse of u(x, τ ) with respect to the time variable, i.e., u(x, u−1 (y; x)) = y for any y ∈ R+ . Then given (x, τ ) ∈ ΩT , ϕ2 (x, τ ) = u−1 (v(x, τ ); x)/τ. (3.8) By formally taking the derivative of v(x, τ ) = u(x, τ ϕ2 (x, τ )) with respect to x, and using (3.4), one obtains vx = ux + 2τ ϕϕx uτ . (3.9) Hence, ϕx (x, τ ) = vx − ux (x, τ ). 2τ ϕuτ (3.10) Taking a second derivative of v with respect to x yields vxx = uxx + 4τ ϕϕx uxτ + 2τ (ϕ2 + ϕxx ϕ)uτ + (2τ ϕϕx )2 uτ τ . x The above equality and identities (3.4) imply ϕxx (x, τ ) = = vxx − uxx 2ϕϕx uxτ 2τ (ϕϕx )2 uτ τ ϕ2 − + + x , 2τ uτ ϕ uxτ uτ ϕ ϕ vxx − uxx xϕx (xϕx − 2ϕ) tϕϕ2 x (x, τ ) − ϕx + − (x, τ ). 2tϕuτ 4 tϕ3 43 (3.11) Similarly, differentiating v with respect to the time variable gives vτ = uτ (ϕ2 + 2τ ϕϕτ ). Solving for ϕτ : ϕτ (x, τ ) = vτ ϕ (x, τ ) − (x, τ ). 2τ uτ ϕ 2τ (3.12) 2,1,p 2. Since v ∈ Wloc (ΩT ) (Theorem 2.21), u ∈ C ∞ (ΩT ), uτ > 0 and ϕ > 0 in ΩT , it follows that ϕ and all its (weak) derivatives stated above are locally p−integrable for 2,1,p any p > 1. This implies a finite Wloc bound on ϕ. Next, we show how to derive equation (3.7). Define w(x, τ ) = u(x, τ ϕ2 (x, τ )). 2,1,p Since ϕ ∈ Wloc , differentiating w with respect to x and τ yields: wx (x, τ ) = ux + uτ 2τ ϕϕx , (3.13a) wxx (x, τ ) = uxx + uxτ 4τ ϕϕx + uτ τ (2τ ϕϕx )2 + uτ 2τ (ϕ2 + ϕϕxx ), x wτ (x, τ ) = uτ (ϕ2 + 2τ ϕϕτ ). (3.13b) (3.13c) 44 From (3.13), (3.3), and (3.4), 1 wτ − σ 2 (wxx − wx ) 2 (3.13) 1 = uτ (ϕ2 + 2τ ϕϕτ ) − σ 2 2 [uxx + uxτ 4τ ϕϕx + uτ τ (2τ ϕx ϕxx )2 + uτ 2ϕ(ϕ2 + ϕϕxx − ϕϕx )] x (3.3) σ2 [2 − 2τ ϕϕx 2 uτ τ uτ τ (x, τ ϕ2 ) + 4τ 2 ϕ2 ϕ2 (x, τ ϕ2 )]} + 2τ (ϕ2 + ϕϕxx ) + 4τ ϕϕx x x uτ uτ = uτ (x, τ ϕ2 ){2τ ϕϕτ + ϕ2 − (3.4) = uτ (x, τ ϕ2 ){2τ ϕϕτ + ϕ2 − σ 2 1− xϕx 2 +τ ϕ ϕϕxx − ϕ2 ϕ2 x 4 }. Let F (x, τ, ψ, Dψ, D2 ψ) = 2τ ψψτ + ψ 2 − σ 2 1− xψx 2 +τ ψ ψψxx − = (τ ψ 2 )τ − σ 2 H(x, τ, ψ, Dψ, D2 ψ). Then, w(x, τ ) satisfies (1.8) if 1 wτ − σ 2 (wxx − wx ) = uτ (x, τ ϕ2 )F (x, τ, ϕ, Dϕ, D2 ϕ) = 0. 2 Because uτ (x, τ ϕ2 ) > 0 on R × (0, ∞), this is equivalent to F (x, τ, ϕ, Dϕ, D2 ϕ) = (τ ϕ2 )τ − σ 2 (x, τ )H[ϕ] = 0. 45 2 ψ 2 ψx 4 A very closely related concept is the so called 3.2 Associated Local Volatility σ[ϕ] 2,1,p Definition 3.3. Let I (0, ξ) be the class of nonnegative functions ψ ∈ Wloc (ΩT ) for which the “associated local volatility” σ[ψ](x, τ ) = (τ ψ 2 )τ H(x, τ, ψ, Dψ, D2 ψ) 1/2 (3.14) is well defined, continuous in Ωξ , and satisfies Condition H0. Furthermore, we require the growth condition at zero: τ ψ 2 (x, τ ) → 0 as τ → 0, (3.15) since u(x, τ ψ 2 ) has to replicate the behavior of u(x, τ ) as τ → 0. Remark 3.4. We call σ[ψ] the “associated local volatility” of ψ ∈ I (0, ξ) because, for ω(x, τ ) = u(x, τ ψ 2 ), 1 ωτ − σ[ψ]2 (ωxx − ωx ) = 0 2 on Ωξ , (3.16a) ω(x, 0) = (ex − 1)+ . (3.16b) Note that v(x, τ ) = u(x, τ ϕ2 ) satisfies both 1 vτ − σ[ϕ]2 (vxx − vx ) = 0 2 on ΩT , v(x, 0) = (ex − 1)+ , 46 and equation (1.8). Therefore, σ 2 [ϕ] = 2vτ = σ2 vxx − vx on ΩT . 
As we will see later, the “associated local volatility” provides a way to back out the local volatility from the solution to equation (3.7). In the next chapter, where we are looking for the asymptotics of the implied volatility, it becomes an essential tool to form super and sub solutions, on which variance comparison theorems are performed. 47 Chapter 4 The Asymptotic of ϕ as τ → 0 Condition H1 In this chapter, we assume, in addition to Condition H0, σ is uniformly Lipschitz in Ω. 4.1 4.1.1 The Fundamental Solutions, Green’s Functions, etc. Uniform Parabolic Equations We dedicate this subsection to giving an estimate on the fundamental solution (Proposition 4.5), starting with the following notations and conditions. Definition 4.1. Let Σ denote an open domain in Rd . It is not necessary that Σ be bounded and Σ = Rd is not excluded. Let I denote an interval included in [0, T ]. A function w = w(x, t) defined and measurable on D = Σ × I is said to belong to the class Lp,q (D) with 1 ≤ p, q < ∞ if  q/p 1/q  w p,q =  |w|p dx I 48  < ∞. In case either p or q is infinite, w p,q is defined in a similar fashion using L∞ norms rather than integrals. Recall the operator L (2.14): L≡   d ∂ − ∂t  ai,j (x, t) i,j=1 ∂2 ∂xi ∂xj d + i=1   ∂ bi (x, t) + c(x, t)  ∂xi in a (d+1)-dimensional domain Ω. The following conditions are collectively referred to as Condition H2: (To show the bounds of the fundamental solutions and the Green’s Functions to uniform parabolic equations.) There exist ν, M , M0 and R0 such that ν > 0, M < ∞, 0 ≤ M0 < ∞ and 0 ≤ R0 ≤ ∞, and such that the coefficients of L satisfy the following conditions: 1. For all ξ ∈ Rd and for almost all (x, t) ai,j (x, t)ξi ξj ≥ ν|ξ|2 and |ai,j (x, t)| ≤ M. 2. (a) Let Q0 = {|x| < R0 } × (0, T ]. Each of the coefficients d (ai,j )xj (x, t) is contained in some space Lp,q (Q0 ), where p, q are bi (x, t) − j=1 such that (∗) 2 < p, q ≤ ∞ and 1 1 d + < ; 2p q 2 and d (b) bi (x, t) − (ai,j )xj (x, t) ≤ M0 for all |x| ≥ R0 and t ∈ (0, T ]. j=1 49 3. (a) c(x, t) p,q < ∞, where the norms are taken over for all cylinders of the form R( ) × (0, T ], contained in Rd × I. R( ) denotes an open cube in Rd of edge √ length and = min{1, T }. p and q are such that (∗∗) 1 < p, q ≤ ∞ and 1 d + < 1. 2p q and (b) c(x, t) ≤ M0 for almost all |x| ≥ R0 and t ∈ (0, T ]. All (strong) derivatives are defined and measurable in ΩT . Remark 4.2. Consider the operator L defined as L≡ ∂ 1 − σ 2 (x, τ ) ∂τ 2 ∂ ∂2 − 2 ∂x ∂x . (4.1) Since d = 1, its coefficients satisfies Condition H2 if there exists ν, M , M0 and R0 such that 0 < ν, M < ∞, 0 ≤ M0 < ∞ and 0 ≤ R0 ≤ ∞, and such that 1. For almost all (x, t) 1 2 σ (x, t) ≥ ν 2 and 1 2 σ (x, t) ≤ M. 2 2. Let Q0 = (−R0 , R0 ) × (0, T ], then ( 1 σ 2 (x, t))x is contained in some space Lp,q (Q0 ), 2 where p, q are such that (∗) 2 < p, q ≤ ∞ 50 and 1 1 1 + < ; 2p q 2 1 and |( 2 σ 2 )x | ≤ M0 for all |x| ≥ R0 and t ∈ (0, T ]. Remark 4.3. The statement “C depends on the structure of L” means that C is determined 1 by the quantities ν, M , d, ( 2 σ 2 )x p,q(Q ) and p, q, d occurring in Condition H2. 0 The following lemma says if σ satisfies Condition H1, and it has upper and lower bounds, then if one defines σ ε as in (2.31), then (σ ε )2 satisfies Condition H2, with p, q any integers satisfying (∗). Lemma 4.4. Suppose f ∈ R is Lipschitz continuous with Lipschitz constant K. Let ρε (x) be a standard mollifier, that is, a function satisfying (2.30). Denote f ε (x) = (f ∗ ρε )(x) = f (x − ξ)ρε (ξ)dξ = R f (ξ)ρε (x − ξ)dξ. 
R Then |(f ε ) (x)| ≤ K on R, uniformly in ε. Proof. Note that f ε (x + h) − f ε (x) h f (x + h − ξ) − f (x − ξ) ε ρ (ξ) dξ h R |f (x + h − ξ) − f (x − ξ)| ε ρ (ξ) dξ ≤ h R = K · ρε (ξ)dξ ≤ R =K uniformly in x ∈ R, ε > 0, and h > 0. Therefore, f ε (x + h) − f ε (x) ≤ L, h h→0 |(f ε ) (x)| = lim 51 for all x ∈ R, ε > 0. Proposition 4.5. ( Bounds on Fundamental Solutions )([3] Theorem 71 ) Consider operator L defined in (4.1). If σ 2 satisfies Condition H2, then the fundamental solution to L exists. Moreover, there exists positive α1 , C1 , C2 depending only on T and the structure of L, and positive α2 depending only on the structure of L such that α |x−ξ|2 −1/2 e− 1 t−τ C1 (t − τ ) ≤ Γ(x, t; ξ, τ ) ≤ α |x−ξ|2 −1/2 e− 2 t−τ C2 (t − τ ) , (4.2) for all x, ξ ∈ R, 0 ≤ ξ < t ≤ T . Remark 4.6. The purpose of this remark is to provide exact formulas for C1 , C2 , α1 and α2 . For the reader’s convince, we keep the notation consistent with original papers [3], [4] whenever we can. To this end, let us first define a few constants that will appear in different parts of this remark. As a reminder, p, q, M , M0 , and ν are defined in Condition H2. For d ∈ N, define π d/2 (d/2)!          ωd =           d+1 d−1 2 2 π 2 d!! d even , d odd d ˜ ˜ ˜ ˜ ι = 16/T , θ = 1 − 1 − 2p 2 , σ = 1 + 2θ/d > 1, and r0 = 1/(1 + σ ). Let µ, λ, and K be q constants depend only on the dimension d. 1 It 2 In is a generation of Harnack Inequality for Parabolic Equations.[38] ˜ our case, p, q can be any numbers satisfying (∗), d = 1, and we have θ ∈ [1/2, 1]. 52 Define the function g as g(x) = 1 α= · 8 (x2 x . Furthermore, denote − 1)2 −1 4d2 M 2 1+ ν β = M0 1 + 2dM0 + + min(1, ν) 4dM0 ν , and . 1. We introduce the following constants: 2 c11 = ν −1 M0 + 16ν −1 M 2 + 12, c12 = [144K(2 + 4c11 B = 2−d ( + ν −1 c 11 σ ˜ 2(˜ −1) σ )] ν 1+ d −1 d 2 ) + 2−d + ωd · σ g(˜ ) , ˜ σ 2−d [c11 + 64(1 + 2ν −1 M 2 )] , 2 1/r γ = µB /λ · c12 0 , 9 T 9d T ˜ C1 = ln γ · max 32, + + 8 8 32 δ 2 , and C1 = ln γ · max{32, d}. 3 (4.3a) (4.3b) The value of C1 is C1 = 1 ωd ιd/2 ˜ exp{−2C1 (ι + 1)} · exp{−C1 (T + 1)}. (4.4) 2. (a) Let θ denote the minimum value of 1 − d/(2p) − 1/q for all pairs of (p, q) involved. 3 Note ˜ that for each fixed δ, when d < 32 and T is small enough, C1 = C1 = 32 ln γ. 53 4 Now, let   1/θ    min{1, ν}   σ = min 1, ˆ ,  8K 2 1 + 2 (M 2 + M 2 ) + 1  ν 2 0 σ c211 = 21+T /ˆ , c212 = c211 Rd and c213 = c212 · exp exp{− z 9α + βT 4 2 }dz α−d/2 1/2 , . Define, 5 C21 = 27 + (c213 )2 . (b) Define 2 c221 = ν −1 M0 + 1, µ= ˜ ˜ 1/θ min(ν, 1) Kc221 , 8 c222 = 9 + 16ν −1 M 2 , c223 = c221 + 2c222 , c224 = 2K 5 + (ν −1 + 4)c223 , ˜ σ σ c225 = (36c224 )σ/(˜ −1) (2˜ )2g(˜ ) , σ and 4 In 5 α, µ c226 = 210+1/˜ K(1 + ν −1 )(1 + c222 ). our case, p, q can be any number satisfying (∗), and d = 1, we have θ > 1/2. β are defined earlier in the remark. 54 (4.5) We then let 6 C22 = √ c225 c226 . (4.6) (c) Define c232 = 4K(2 + 4c11 + ν −1 c11 ), c233 = 4c232 (1 + σ 2 )(˜ − 1)−2 , ˜ σ and c234 = (9c233 ˜ 1+2θ ˜ 4θ ) · σ g(˜ ) . ˜ σ Denote 7 2 C23 = µB /λ (c234 )2/r0 . ˜ (4.7) We finally arrive C2 = C21 C22 C23 eα/2 . (4.8) ˜ α1 = 2C1 (4.9) α2 = α/8, (4.10) 3. 4. ˜ where α is defined at the beginning of this remark. C1 is defined in (4.3). It is worth noting ˜ ˜ θ, σ , and g(x) are defined at the beginning of this remark. 7 c , and B were defined before in case 1. 11 6 K, 55 • Keeping the other parameters fixed, C1 ∼ O(T d/2 ), as T → 0. 
¯ • Keeping the other parameters fixed and fixing T < ∞, C21 has uniform upper and lower ¯ bounds for 0 ≤ T ≤ T ; C22 , C23 , and α depend only on the structure of L. Therefore, ¯ C2 has uniform upper and lower bounds for all 0 ≤ T ≤ T . 9 • Since γ depends only on the structure of L, α1 = 2 ln γ · max{32, 8 + T + 9dT } depends 2 8 32δ only on the structure of L for T small enough. More precisely, α1 ∼ O max 2 1 M0 + M 2 , ν ν2 , and α2 ∼ O ν , M2 when T is small enough. 4.1.2 Parabolic Equations with unbounded Coefficients ε The goal of this part is to show wτ > 0 in ΩT , where wε satisfies 1 ε ε ε wτ = (σ ε )2 (wxx − wx ) 2 wε (x, 0) = (ex − 1)− . Let us recall some properties relating to the Green’s Functions. Proposition 4.7. ( Ch3, Sec 7 [24], Theorem 16) Given M , Tα > 0, denote (−M, M ) × (0, Tα ]. Let LM in the form of (2.14) be a parabolic operator in α satisfies 56 M α . M α = If LM α (A) the coefficients of LM are uniformly H¨lder continuous (exponent α) 8 in o α a11 (B) for any (x, t) ∈ C α( M ), α b1 C α( M ), α c C α( M) α M α and ≤ K1 , M α , a1,1 (x, t) ≥ K2 > 0, M then the Green’s function γα (x, t; ξ, τ ) for LM with zero Dirichlet initial data exists, and α has the following properties: a) For any 0 ≤ τ < Tα , and for any continuous, compactly supported function f on Bτ := {|x| < M } × {t = τ }, the function u(x, t) = Bτ M γα (x, t; ξ, τ )f (ξ)dξ is a solution to LM v = 0 in {|x| < M } × {τ < t ≤ Tα }, and it satisfies the initial and α boundary conditions lim u(x, t) = f (x) t→τ + u(x, t) = 0 ¯ for x ∈ Bτ , on {|x| = M } × {τ < t ≤ Tα }. b) (Ch3, Sec 7 [24], Corollary 1) As a function of (x, t) ∈ for (ξ, τ ) ∈ M α M α , m LM γα = 0. Furthermore, α M ∪[−M, M ] × {0}, γα (x, t; ξ, τ ) = 0 on {|x| = M } × {τ < t ≤ Tα } and 8 Recall a function f (x, τ ) defined on a bonded closed set S of R2 is said to be H¨lder o continuous of exponent α in S [24] if there exists a constant A such that for all (x, τ ), (x0 , τ 0 ) ∈ S, |f (x, τ ) − f (x0 , τ 0 )| ≤ A(|x − x0 |2 + |τ − τ 0 |)α/2 . 57 M γα (x, t; ξ, τ ) > 0 on {|x| < M } × {τ < t ≤ Tα }. c) (Bounds on Green’s Functions)([3] Theorem 8) Suppose furthermore that, if we restrict σ 2 on a bounded domain, then it satisfies Condition H2. Let (ΩM ) be a subinterval of (−M, M ), and let δ > 0 be the distance from (ΩM ) to (−M, M ). Then, (a) There exist positive constants C2 depending on Tα and the structure of L, and α2 M α depending on the structure of L, such that for all (x, t) and (ξ, τ ) in M γα (x, t; ξ, τ ) ≤ C2 (t − τ )−1/2 exp |x − ξ|2 −α2 8(t − τ ) . with t > τ , (4.11) (b) There exist positive constants C1 and α1 , depending on δ, Tα and the structure of L, such that |x − ξ|2 M γα (x, t; ξ, τ ) ≥ C1 (t − τ )−1/2 exp −α1 t−τ holds for all x, ξ ∈ (ΩM ) and either τ < t ≤ min Tα , τ + Tα 2 d (ξ, ∂(ΩM ) ) 8 for arbitrary τ ∈ [0, Tα ) or max 0, t − Tα 2 d (x, ∂(ΩM ) ) 8 for arbitrary t ∈ (0, Tα ]. 58 ≤τ 0. Consider the differential operator L given in the form (2.14). We collectively refer the ε following conditions as Condition H3: (To show the bounds of vτ .) Assume that ai,j , (ai,j )xi , bi − d j=1 ai,j , [bi − d j=1 ai,j ]xi are H¨lder continuous, and o ai,j is twice differentiable in x on every compact subset of S. We further assume there exist constants ν, κ2 > 0 such that ˜ 1. ν|ξ|2 ≤ ai,j (x, t)ξi ξj ≤ κ2 ˜ |x|2 + 1|ξ|2 ; 2. d |(ai,j )x |, bi (x, t) − |x|2 + 1; and ≤ κ2 ˜ (ai,j )xj (x, t) ≤ κ2 ˜ |x|2 + 1 j=1 3.   d c(x, t), c(x, t) − bi (x, t) − (ai,j )xj (x, t) j=1 xi for all (x, t) ∈ S, and ξ ∈ Rd . Remark 4.9. 
Consider operators of the form L≡ ∂ 1 ∂2 ∂ στ − σ2 − −2 . ∂τ 2 σ ∂x2 ∂x 59 (4.13) They satisfy Condition H3 if σ 2 , (σ 2 )x , and (σ 2 )xx are H¨lder continuous on every compact o subset of Ω, and if there exist constants ν, κ2 , κ3 > 0 such that 1. 1 ν ≤ σ 2 (x, t) ≤ κ2 2 x2 + 1, 2. 1 σt ( σ 2 )x , 2 ≤ κ2 2 σ x2 + 1, and 3. σσxx ≤ κ3 x2 + 1 for all (x, t) ∈ Ω. Similar as discussed before, suppose σ satisfies Condition H1, and it is bounded below, then σ ε satisfies Condition H3. By Lemma 4.4, The constants ν and κ2 are uniform in ε. ∞ But κ3 (ε) may approach to infinity as ε → 0. This is because ρε ∈ C0 , when differentiating σ ε , the derivatives are passed onto ρε , the upper bounds of which depend on ε. See the proof for Lemma 2.14 for a detailed calculation. By Condition H0, σ 2 has no more than linear 2 growth. Therefore, κ3 (ε) really is the upper bound for ∂ 2 ρε . ∂x For operators of form 4.13, we start our analysis with a sufficiently smooth, bounded below σ 2 . To this end, let us take σ (x, −τ ) = σ(x, τ ) for all τ ≤ 0 as an extension of σ, and ˜ define for any ε > 0, ρε a standard mollifier. Define σ ε by σ ε = ρε ∗ σ . ˜ 60 ε Given n ∈ N, σn (x, τ ; η) is a smooth function in ΩT such that = 2 σ ε (x, τ ) ≥ n  1  n ε σn (x, τ )    ε σ (x, τ )  1 σ ε (x, τ ) ≤ n . ε Clearly, if σ 2 ≥ ν 2 > 0 in ΩT , then σn (x, τ ) ≡ σ ε (x, τ ) in ΩT for all n > 1/(2ν). Lemma 4.10. ([2]) Given T , ε, n, and α > 0, there exists 0 < Tα (n, ε) < T such that the Cauchy problem 1 ε (σ ε )t ε ε ε ε (zn,α )t = (σn )2 [(zn,α )xx − (zn,α )x ] + 2 n zn,α ε 2 σn in ΩTα := R × (0, Tα ], 1 ε ε zn,α (x, 0) = (σn (0, 0))2 δ0 (x) 2 (4.14a) (4.14b) has a solution for 0 < t ≤ Tα(n,ε) . It is strictly positive in ΩTα . Proof. As argued in Lemma 2.15, the solution to (4.14) is the fundamental solution to the ε corresponding operator multiplied by 1 (σn (0, 0))2 > 0. Hence, it is equivalent to show the 2 existence and strict positivity of the fundamental solution to the corresponding operator. We further provide the expression of the corresponding fundamental solution through our proof. Further estimates on the fundamental solution will be discussed in the following remark. • Notations and definitions Throughout the proof for this lemma, we define the operator Lε associated with equan tion (4.14) on T as: Lε ≡ n ∂ 1 ε − (σn )2 ∂t 2 ∂2 ∂ − ∂x2 ∂x 61 −2 ε (σn )τ ε , σn (x, t) ∈ ΩT . (4.15) Given parameter α > 0, we introduce the following definitions and notations ([2]): i) β(ε, α) = max{κ2 , κ3 (ε)}(α + 2)2 > 0, where κ2 , κ3 are defined in Remark 4.9. ii) Tα (ε) = min T, 1 2β > 0 is a non-increasing function of α such that lim Tα (ε) = 0 α→∞ and lim Tα (ε) = T0 (ε) ≤ T. α→0 Thus, in particular, 0 < Tα (ε) < T0 (ε) for α ∈ (0, ∞). To simplify notation, for the rest of the proof, we use β, κ2 , and Tα . iii) Define gα (x, t) = (α + βt) x2 + 1. (4.16) iv) For any u(x, t) defined on ΩTα , we define uα (x, t) = e−gα (x,t) u(x, t); Lε = n,α ∂ 1 ε ∂2 ∂ − (σn )2 2 − aε − cε , n,α n,α ∂t 2 ∂x ∂x where 1 ε ε aε = − (σn )2 + (σn )2 (gα )x , n,α 2 62 (4.17) (4.18) and cε = 2 n,α ε (σn )t 1 ε 2 1 ε 2 1 ε 2 2 ε + 2 (σn ) (gα )x − 2 (σn ) (gα )x + 2 (σn ) (gα )xx − (gα )t σn ε (σn )t 1 ε 2 x2 x 1 = 2 ε + (σn ) (α + βt) (α + βt) 2 −√ + 2 + 1)3/2 2+1 σn 2 x +1 (x x −β x2 + 1. ≤ (|x|2 + 1)1/2 {κ3 + κ2 (α + βt)2 + 2κ2 (α + βt) − β}. ε The last inequality holds because (σn )2 satisfies Condition H1. Hence, cα < 0 on ΩTα 1 for β = max{κ2 , κ3 }(α + 2)2 , and Tα = min T, . 
2β Clearly, Lε u = egα Lε uα , n,α n,α ε and Lε u = 0 in ΩTα if and only if Lε uα = 0 in ΩTα . Moreover, if γn,α (x, t; ξ, τ ) is n,α n,α a fundamental solution of Lε v = 0 then n,α ε Γε (x, t; ξ, τ ) = egα (x,t)−gα (ξ,τ ) γn,α (x, t; ξ, τ ) n,α (4.19) is a fundamental solution of Lε u = 0. n • The increasing sequence of Green’s Functions Let M Tα = (−M, M ) × (0, Tα ] for M = 1, 2, · · · . We use the superscript “M ” to indi- cate the operator, and the corresponding Green’s Function are restricted on (−M, M ) for the space variable. In each M M Tα , the leading coefficient of Lα 63 is bounded away from ε,M zero, and uniformly H¨lder continuous. Hence, the Green’s function γn,α (x, t; ξ, τ ) for o ε,M M Tα Ln,α with zero Dirichlet data in each exists, and has the properties in Proposi- tion 4.7. Let 0 < η << 1, and let ϕMη be a smooth function such that ϕMη ≡ 1 for |x| ≤ M − 2η, ϕMη ≡ 0 for |x| ≥ M − η and 0 ≤ ϕMη ≤ 1. Consider the boundary value problem ε,M in (−M, M ) × (τ, Tα ], Ln,α v = 0 v(x, τ ) = ϕMη v=0 for |x| ≤ M, on {|x| = M } × [τ, Tα ]. The solution is given by ε,M v Mη (x, t) = |ξ| 0, {γn,α }M is monotone nondecreasing in M and ε has a uniform upper bound, therefore, it has a limit γn,α as M → ∞. It is shown in ε [2] that γn,α is the fundamental solution to Lε in ΩTα . Furthermore, n,α Proposition 4.11. i) [2] Theorem II: Given α > 0, a fundamental solution of Lu = 0 is given by ε Γε (x, t; ξ, τ ) = exp{gα (x, t) − gα (ξ, τ )}γn,α (x, t; ξ, τ ) n,α for x, ξ ∈ R and 0 ≤ τ < t ≤ Tα . ε ii) [2] 0 ≤ γn,α (x, t; ξ, τ ) ≤ κ(n)(t − τ )−1/2 for x ∈ R and τ ≤ t ≤ Tα . ε iii) [24] γn,α (x, t; ξ, τ ) > 0 for x ∈ R and τ < t ≤ Tα . 66 (4.23) ε • The existence and positivity of the solution, zn,α ∈ ΩTα , to (4.14). 1 ε (σ (0, 0))2 Γε (x, t; 0, 0+ ) n,α 2 n 1 ε ε = (σn (0, 0))2 exp{(α + β) x2 + 1 − α} · γn,α (x, t; 0, 0+ ), 2 ε zn,α (x, t) = (4.24) ε Remark 4.12. We further derive upper and lower bounds of zn,α (x, t) ε Recall that σ satisfies Condition H1. Lemma 4.4 and remark 4.9 imply σn satisfies Condition H3. Therefore, given M ∈ N, n and α, we have the following estimates for ε γn,α (x, t; ξ, τ ). 1. By Proposition 4.11, there exists positive constant κ such that ε γn,α (x, t; ξ, τ ) ≤ κ(n)(t − τ )−1/2 in ΩTα . (4.25) ε,M +1 2. There exist positive constants C1 (M ), and α1 (M ), depending on the structure of Ln,α 67 , and Tα , such that 9 ε γn,α (x, t; ξ, τ ) ε,K = lim γn,α (x, t; ξ, τ ) K→∞ ε,M +1 ≥ γn,α (x, t; ξ, τ ) (4.26) ≥ C1 (t − τ )−1/2 exp −α1 (x − ξ)2 t−τ (4.27) M in Tα where the first inequality is from equation (4.22), and the second is an application of Proposition 4.7. By equations (4.19) and (4.24), given α > 0, ε zn,α (x, t) = 1 ε (σ (0, 0))2 Γε (x, t; 0, 0+ ), n,α 2 n where ε Γε (x, t; ξ, τ ) = egα (x,t)−gα (ξ,τ ) γn,α (x, t; ξ, τ ), n,α and gα (x, t) = (α + βt) 9 Note ε,M x2 + 1. that for fixed M , the structure of Ln,α is uniform for all ε, n, and Tα (ε). Moreover, ε,M as discussed in Remark 4.6, α1 depends only on the structure of Ln,α when Tα is small enough, which happens when ε is small enough. Therefore, C1 is a function of M , and α1 is a function of M and α when ε is small enough. 68 Therefore, given M ∈ N, when ε is sufficiently small, ε zn,α (x, t) ≤ ε zn,α (x, t) 1 ε (σ (0, 0))2 · egα (x,t)−α · κ(n)t−1/2 2 n in ΩTα , x2 1 ε ≥ (σn (0, 0))2 · egα (x,t)−α · C1 (M )t−1/2 exp −α1 (M, α) 2 t Remark 4.13. M in . (4.29) Tα ε ε 1. If further we have σ 2 ≥ ν > 0 in ΩT , then zα = zn,α for all n > 1/ν. 1 2. 
One can push the final time Tα up to T0 := min T, 4k1 4.2 (4.28) by sending α → 0. The Asymptotics of ϕ as τ → 0 As in [5], the essential idea for finding the asymptotics of ϕ as τ → 0 is to define suitable sub and super solutions of (1.8) from the formal limiting solution of (3.7) and prove that actual convergence takes place through the comparison principle. Hence, let us first find the formal limiting solution to (3.7). Claim 4.14. The unique positive solution of F (x, 0, ϕ0 , Dϕ0 , D2 ϕ0 ) = 0, i.e., ϕ0 (ϕ0 )2 − σ 2 (x, 0)(1 − x x )2 = 0 ϕ0 (4.30) is ϕ0 (x) = x x ds . 0 σ(s,0) 69 (4.31) Solution 4.15. Our first observation is x ϕ0 = = This implies ϕ0 x ϕ0 ϕ0 − xϕ0 x ϕ0 2 ϕ0 1−x x ϕ0 1 ϕ0 . ϕ0 = 1 − x x . Therefore, (4.30) is equivalent to ϕ0 = ±σ(x, 0)ϕ0 ϕ0 x ϕ0 Since ϕ0 (x) is non-zero, one can further rewrite equation (4.30) as x 1 = ±( 0 ) . σ(x, 0) ϕ Integrate both sides from on [0, x], we get x s ds =± ds 0 0 ϕ (s) 0 σ(s, 0) 0 x − = ±( 0 ) ( 0) ϕ (x) ϕ x = x ϕ0 (x) . In the last equation, we choose the “+” sign. It is because both σ(x, 0) and ϕ0 (x) are strictly x ds x and 0 should have the same sign. Finally, we rearrange terms ϕ (x) 0 σ(s, 0) and get (4.31), the solution to (4.30). positive, so The upper and lower solutions that we are comparing ϕ with have the form ϕ(x, τ ) = ϕ0 (x)(1 + κτ ), ¯ ϕ(x, τ ) = ϕ0 (x)(1 − κτ ) 70 and (4.32) (4.33) . in ΩT κ , for some given κ > 0, and T (κ) << 1/κ. Clearly, ϕ ≥ ϕ0 ≥ ϕ in ΩT . We shall further show, under some additional regularity con¯ ditions, σ[ϕ] ≥ σ[ϕ] ≥ σ[ϕ] in some domain of ΩT . To motivate our generalized comparison ¯ principle, let us start with the following estimates. Claim 4.16. Assume additional regularity conditions on σ: σ ∈ C 2,1 (Ω) and σxx ∈ L∞ (ΩT ); (4.34) and the Additional Decay Rate of σ(x, 0) as x → 0: σx (x, 0) → 0, (4.35a) σxx (x, 0) → 0. (4.35b) ¯ ¯ Given R > 0, there are σ(R), σ (R), such that 0 < σ(R) ≤ σ ≤ σ (R) < ∞ on [−R, R]×[0, T ], and i) 0 2 ϕxx σ(x, τ ) × [1 + τ (2κ − σ 2ϕ0 στ (x, 0)) ] + O(τ 2 ), on ΩR ; T σ (4.36) ϕ0 στ σ[ϕ](x, τ ) = σ(x, τ ) × [1 − τ (2κ − σ 2 xx − (x, 0)) ] + O(τ 2 ), on ΩR . T 0 σ 2ϕ (4.37) σ[ϕ](x, τ ) = ¯ − ii) Where O(τ 2 ) = O(σ(R), σ (R), σ W 2,0,∞ ([−R,R]×[0,T ]) ) · τ 2 . ¯ Proof. We only provide the proof for equation (4.36). The proof for equation (4.37) is similar. Besides equations (3.14), (4.31), (4.32), and (4.41), we will also use the following equa71 tions: ϕx (x, τ ) = ϕ0 (x)(1 + κτ ); ¯ x ϕxx (x, τ ) = ϕ0 (x)(1 + κτ ); ¯ xx (4.38a) and ϕτ (x, τ ) = κϕ0 (x). ¯ (4.38b) (4.38c) Denote x d(x) = dy . 0 σ(y, 0) (4.39) Note that ϕ0 (x) ϕ0 (x) 1− x σ(x, 0) x d(x) − σ(x,0) , and = d2 (x) ϕ0 (x) = x ϕ0 (x) = xx = (4.40a) ϕ0 x − ϕ0 ϕ0 ϕ0 −ϕ0 σ(x, 0) + σx (x, 0)ϕ0 x x 1− + x σ(x, 0) x σ 2 (x, 0) xσx (x, 0) + 2[ϕ0 (x) − σ(x, 0)] . σ 2 (x, 0)d2 (x) (4.40b) We also need the following two higher order derivatives: {σx (x, 0) + xσxx (x, 0) + 2[ϕ0 (x) − σx (x, 0)]}σ 2 (x, 0)d2 (x) d3 0 ϕ (x) = dx3 σ 4 (x, 0)d4 (x) {xσx (x, 0) + 2[ϕ0 (x) − σ(x, 0)]}2σ(x, 0)d(x)[σx (x, 0)d(x) + 1] − . σ 4 (x, 0)d4 (x) 72 (4.40c) It is then straight forward from (3.6): 2 H[ϕ] = ¯ ϕ0 1−x x ϕ0 2 = ϕ0 1−x x ϕ0 1 2 2 + τ ϕ0 ϕ0 (1 + κτ )2 − τ 2 ϕ0 ϕ0 (1 + κτ )4 x xx 4 + τ ϕ0 ϕ0 + O(τ 2 ), xx (4.41) 1 where O(τ 2 ) = τ 2 (2κ + κ2 τ )ϕ0 ϕ0 − τ 2 (1 + κτ )4 (ϕ0 ϕ0 )2 . x xx 4 (4.42) In order to derive equation (4.36), we fix x then find a polynomial approximation of σ[ϕ](τ ) for τ > 0. Suppose ¯ σ[ϕ](τ ) = a0 + a1 τ + a2 τ 2 + O(τ 3 ). ¯ (4.43) So, σ 2 [ϕ](τ ) = a2 + 2a0 a1 τ + (a2 + 2a0 a2 )τ 2 + O(τ 3 ). 
¯ 0 1 (4.44) Meanwhile, we Taylor expand σ(x, τ ) and στ (x, τ ) at (x, 0). Then rearrange terms so we have expressions for σ(x, 0) and στ (x, 0): τ2 στ τ (x, 0) + O(τ 3 ), 2 τ 2 ∂ 3σ στ (x, 0) = στ (x, τ ) − τ στ τ (x, 0) − (x, 0) + O(τ 3 ), 3 2 ∂τ σ(x, 0) = σ(x, τ ) − τ στ (x, 0) − σ 2 (x, 0) = σ 2 (x, τ ) − 2τ σ(x, τ )στ (x, 0) + O(τ 2 ). 73 (4.45) and (4.46) (4.47) Collecting the above results: σ 2 [ϕ]H[ϕ] = (τ ϕ2 )τ , ¯ ¯ ¯ ⇒σ 2 [ϕ]H[ϕ] = ϕ0 (x)2 (1 + 4κτ + 3κ2 τ 2 ), ¯ ¯ ⇒[a2 0 + 2a0 a1 τ + O(τ 2 )] ϕ0 (x) σ(x, 0) 2 + ϕ0 ϕ0 τ + O(τ 2 ) xx =ϕ0 (x)2 (1 + 4κτ + 3κ2 τ 2 ), ⇒[a2 + 2a0 a1 τ +O(τ 2 )][ϕ0 (x)2 + σ 2 (x, 0)ϕ0 ϕ0 τ +O(τ 2 )] xx 0 2 1 = ϕ0 (x)2 σ 2 (x, 0)(1 + 4κτ + 3κ2 τ 2 ) . 3 We further expand O(τ 2 ) terms and get 1 2 [(1) + (a2 + 2a0 a2 )τ 2 + O(τ 3 )][(2) + σ 2 (x, 0)(2κϕ0 ϕ0 − ϕ0 (ϕ0 )2 )τ 2 + O(τ 3 )] xx xx 1 4 =(3). Plug (4.45), (4.47) to the last equality, then compare coefficients, we have τ 0 : a2 ϕ0 (x)2 = σ 2 (x, 0)ϕ0 (x)2 , 0 τ 1 : a2 ϕ0 ϕ0 σ 2 (x, 0) + 2a0 a1 ϕ0 (x)2 = 4κσ 2 (x, 0)ϕ0 (x)2 , xx 0 τ2 : a2 0 2κϕ0 ϕ0 xx and 1 − (ϕ0 ϕ0 )2 + 2a0 a1 ϕ0 ϕ0 + (a2 + 2a0 a2 ) x xx 1 4 74 ϕ0 σ(x, 0) 2 = 3κ2 (ϕ0 )2 . This means a0 = σ(x, 0), (4.48) 4κϕ0 (x)2 σ 2 (x, 0) − σ 4 (x, 0)ϕ0 ϕ0 xx a1 = 0 (x)2 2σ(x, 0)ϕ = 2κσ(x, 0) − σ 3 (x, 0)ϕ0 xx , 2ϕ0 (x) and (4.49) 3 σ 3 (x, 0) ϕ0 1 2 a2 = κ2 σ(x, 0) − 2κ xx − ϕ0 . 0 2 2 4 x ϕ (4.50) Hence, σ[ϕ](x, τ ) = σ(x, 0) + 2κσ(x, 0) − ¯ σ 3 (x, 0)ϕ0 xx τ + a2 τ 2 + O(τ 3 ). 2ϕ0 (x) Replace σ(x, 0) by (4.45), σ[ϕ](x, τ ) = σ(x, τ ) + 2κσ(x, τ ) − στ (x, 0) − ¯ = σ(x, τ ) 1 + τ 2κ − σ 2 (x, 0) Replace σ(x, τ )σ 2 (x, 0)ϕ0 xx τ + O(τ 2 ), 2ϕ0 (x) στ (x, 0) ϕ0 xx − 0 (x) σ(x, τ ) 2ϕ + O(τ 2 ) (4.51) (4.52) στ τ σ2 + 2 τ (x, 0)τ 2 + O(τ 2 ), σ σ3 ϕ0 στ xx (4.53) = σ(x, τ ) 1 + τ 2κ − σ 2 (x, 0) 0 − (x, 0) +O(τ 2 ) σ 2ϕ (x) 1 1 σt 1 by 1 − (x, 0)τ + σ(x, τ ) σ(x, 0) σ 2 − 4 1 στ (x, 0)σ 2 (x, 0)ϕ0 xx = (4) + τ 2 [− στ τ (x, 0) − 2κστ (x, 0) + 2 2ϕ0 5 + 2 στ (x, 0) σ(x, 0) 6 10 + a2 (σ(x, 0), ϕ0 , ϕ0 )] + O(τ 3 ), xx (4.54) 7 where O(τ 2 ) = 5 + 6 + 7 + O(τ 3 ) = O(σ(R), σ (R), σ W 1,2,∞ ([−R,R]×[0,T ]) )τ 2 . Under ¯ 10 Here (5) comes from the Taylor expansion for σ(x, t), (6) is the Taylor expansion for 75 additional regularity condition (4.34), we will show all terms in (5), (6), and (7) are bounded when x ∈ [−R, R]. So, we can write σ[ϕ](x, t) = a0 + a1 τ + O(τ 2 ), where O = O(σ, σ , σ W 2,0,∞ )τ 2 , ¯ ¯ where a0 and a1 are as in (4.48) and (4.49). • Some estimates and bounds on ϕ0 , ϕ0 , ϕ0 , D3 ϕ0 (x) as x → 0. xx x 1. From (4.31) lim ϕ0 (x) = σ(0, 0). x→0 This implies d(x) ∼ x as x → 0. σ(x, 0) 2. From (4.40a) ϕ0 (x) x → x − σ(x,0) 0 = x )2 0 ( σ(x,0) x σ(x,0) as x → 0. So, L.P. lim ϕ0 (x) = lim x x→0 x→0 = x d (x) − d (x) + xσ2 (x, 0) σ −1 (x, 0) 2d(x)σ σx x (0, 0) · lim 2σ x→0 d(x) σx (0, 0) · σ(0, 0) 2σ σx (0, 0) = 2 L.P. = Since σx (x, τ ) is continuous in ΩT , σx (x, 0) is bounded for all x ∈ [−R, R]. σ −1 (x, t), and (7) is a function of (σ(x, 0), ϕ0 , ϕ0 ) . xx 76 3. Similarly, by (4.40b) ϕ0 (x) → xx xσx (x, 0) + (2σ(x, 0) − 2σ(x, 0)) 0 = 2 0 x as x → 0, ´ o we need to apply LHˆpital s rule. 
First, let us evaluate lim ϕ0 (x): xx x→0 lim ϕ0 (x) xx x→0 xσx (x, 0) + 2[ϕ0 (x) − σ(x, 0)] x→0 σ 2 (x, 0) · d2 (x) = lim xσx (x, 0) x − σ(x, 0)d(x) 1 lim +2 = 2 2 (x) σ (0, 0) x→0 d d3 (x) (1) 1 = 2 σ (0, 0) (2) · lim x→0 [1 − d (x)σ(x, 0) + d(x)σx (x, 0)]σ(x, 0) [σx (x, 0) + xσxx (x, 0)]σ(x, 0) −2 2d(x) 3d2 (x) 1 = 2 σ (0, 0) (3) · 1 σx (x, 0) 2 σx σ(x, 0) · lim σ(x, 0) + ϕ0 (x)σxx (x, 0) − lim 2 x→0 d(x) 3 x→0 d(x) 1 1 σ(x, 0)σx (x, 0) σ(x, 0)σxx (x, 0)ϕ0 (x) + = 2 · lim − 6 d(x) 2 σ (0, 0) x→0 (4) 1 σ(0, 0) σx (x, 0) σ 2 (0, 0) = 2 · − lim + σxx (0, 0) 6 x→0 d(x) 2 σ (0, 0) (5) Equality (1) holds because ϕ0 (x) = x/d(x). Equation (2) is valid because σx (x) → ´ o 0 as x → 0. So, we can apply LHˆpital s rule and continue our estimate. (3) 1 comes from d (x)σ(x, 0) = σ(x,0) σ(x, 0) = 1, and x/d(x) = ϕ0 (x). Simplifying and rearranging terms of (3), we get (4). Equation (5) follows lim ϕ0 (x) = σ(0, 0). x→0 ´ o Note that σx (x, 0) and d(x) both approaches to zero as x → 0, apply LHˆpital s 77 rule one more time and we get lim ϕ0 (x) = − xx x→0 1 σxx (x, 0) σxx (0, 0) · lim −1 + 6σ(0, 0) x→0 σ (x, 0) 2 1 = σxx (0, 0). 3 4. By (4.40c) lim D3 ϕ0 (x) x→0 (3σx + xσxx )σ 2 d2 + 6σ 2 d − 6xσ − 2xσσ 2 d2 − 6xσσx d x→0 σ 4 (x, 0)d4 (x) = lim 6σ = lim x→0 σ 4 σd − x d4 2 3σx + xσxx 2xσx xσx + − 3 2 −6 3 3 . 2 d2 σ σ d σ d Next, we calculate each of the three limits. Under additional assumption σ is ´ o smooth, σx , σxx approach to zero as x → 0, we repeatedly applying LHˆpital s rule, and arrive at the following conclusions. (a) σ(x, 0)d(x) − x x→0 d4 (x) lim σ(x, 0)(σx (x, 0)d(x) + 1 − 1) x→0 4d3 (x) = lim σ 2 (x, 0) + σσxx (x, 0) + σx (x, 0)/d(x) = lim σ(x, 0) x 12d(x) x→0 σ(0, 0) 12 σ(0, 0) + 12 = lim σ(x, 0)[2σx σxx (x, 0) + σx σxx (x, 0) + σσxxx (x, 0)] x→0 σ(x, 0)σxx (x, 0) 2d(x) x→0 σ 2 (0, 0) 7 3 = σx σxx (0, 0) + σσxxx (0, 0) . 12 2 2 lim 78 (b) 3σx + xσxx x→0 d2 (x) lim = lim 3 σ(x, 0)(3σxx + σxx + x ∂ 3 σ) ∂x 2d(x) x→0 3 4 3 σ(4 ∂ 3 σ + ∂ 3 σ + x ∂ 4 σ) σ(0, 0) ∂x ∂x ∂x = lim 2 x→0 2 = 3 σ 2 (0, 0) · 5 ∂ 3 σ(0, 0) ∂x 4 . (c) xσx x→0 d3 σ(σx + xσxx ) = lim x→0 3d2 lim 3 σ(σxx + σxx + x ∂ 3 σ) σ(0, 0) ∂x = lim 3 x→0 2d σ 2 (0, 0) ∂3 ∂3 ∂4 = lim 2 3 σ + 3 σ + x 4 σ 3 · 2 x→0 ∂x ∂x ∂x = 3 σ 2 (0, 0) ∂ 3 σ(0, 0) ∂x 2 . Some estimates and bounds on ϕ0 , ϕ0 , ϕ0 , and D3 ϕ0 (x) as x → ±R. x xx Recall equations (4.31), (4.40a), and (4.40b), lim ϕ0 (x), lim ϕ0 (x), and lim ϕ0 (x), x xx x→±R x→±R x→±R they are obviously bounded. Some estimates and bounds relating ϕ0 , ϕ0 , and ϕ0 as x → ±∞, and x → 0. x xx 1 1. σ(x, τ )d(x) = O(|x|) as |x| → ∞, uniformly in τ . Therefore, ϕ0 = O( |x| ) as xx x → ±∞. 79 2. (ϕ0 )x (ϕ0 )x 1 ≤ O( ) as |x| → 0. Therefore, · x ≤ O(1) as |x| → 0. |x| ϕ0 ϕ0 In summary, ϕ0 , ϕ0 and ϕ0 are all of O(σ , σ , σ W 2,0,∞ ([−R,R]×[0,T ]) ) for |x| ≤ R. ϕ0 ¯ x xx x ¯ and ϕ0 are bounded in R. xx Next, we remove the additional regularity assumption in Claim 4.16 by an approximation procedure. Definition 4.17. Take σ (x, −τ ) = σ(x, τ ) for all τ ≤ 0 as an extension of σ and define for ˜ any ε > 0, ρε a standard mollifier. Define σ ε by σ ε = ρε ∗ σ . ˜ ε Given R >> 1, and 0 < η << 1, σR (x, τ ; η) is a smooth function in ΩT such that    ε σ (R + η, τ ) [R + η, ∞) × [0, T ]       ε (x, τ ) = ε σR σ (x, τ ) [−R, R] × [0, T ]       ε σ (−R − η, τ ) (−∞, −R − η] × [0, T ].  (4.55) ε In the following, ϕε 0 is defined by equation (4.31), with σ replaced by σR , and ϕε is ¯R R x ε given by equation (4.32), with ϕ0 replaced by ϕε 0 . We also denote dε := 0 σR (s, 0)ds. 
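As a small numerical illustration of Definition 4.17, the sketch below mollifies a volatility slice in the x-variable with a standard bump mollifier on a grid and then freezes it outside [−R, R]. The grid, the example σ, and the simplified cut-off (constant continuation at ±R rather than at ±(R + η) as in (4.55)) are illustrative choices; by Lemma 4.4 the mollified function keeps the Lipschitz bound of σ.

import numpy as np

def bump(z):
    # Standard mollifier profile supported in (-1, 1), prior to normalization.
    out = np.zeros_like(z)
    inside = np.abs(z) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - z[inside] ** 2))
    return out

def mollify(sigma_vals, x, eps):
    # Discrete convolution sigma^eps = rho_eps * sigma on a uniform grid in x.
    dx = x[1] - x[0]
    xi = np.arange(-eps, eps + dx, dx)
    rho = bump(xi / eps)
    rho /= np.trapz(rho, xi)                      # unit mass, as a standard mollifier requires
    return np.convolve(sigma_vals, rho * dx, mode="same")

def freeze_outside(vals, x, R):
    # Keep the values on [-R, R] and extend them as constants outside,
    # a simplified stand-in for the cut-off (4.55).
    out = vals.copy()
    out[x > R] = np.interp(R, x, vals)
    out[x < -R] = np.interp(-R, x, vals)
    return out

x = np.linspace(-10.0, 10.0, 2001)
sigma = 0.2 + 0.1 * np.abs(np.sin(x))             # illustrative Lipschitz slice sigma(., 0)
sigma_eps = mollify(sigma, x, eps=0.25)
sigma_eps_R = freeze_outside(sigma_eps, x, R=5.0)
# Lemma 4.4 in action: the difference quotients of sigma_eps stay within the
# Lipschitz constant of sigma (up to discretization error near the grid ends).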
R R The associated volatility, σ[ϕε ] of ϕε is given by (3.14), with ψ been replaced by ϕε . As ¯R ¯R ¯R ε mentioned before, if σ satisfies Condition H1, then (σR )2 satisfies both Condition H2, and Condition H3. Moreover, we have estimates of the bounds appear in these conditions. 11 11 M , 0 κ3 (R, ε) and κ2 are constants for all (R, ε). ν(R), M (R) depend on R, uniformly in ε, and depends on both R and ε. 80 The following claim, gives bounds relating σ[ϕε ]. ¯R Claim 4.18. Given (R, ε), Lε := R ∂ 1 ∂2 ∂ − σ 2 [ϕε ]( 2 − ¯R ) ∂τ 2 ∂x ∂x satisfies Condition H2. Proof. Let us first estimate the components of σ[ϕε ]. Recall ¯R σ[ϕε ]2 (x, τ ) ¯R = ∂ ∂τ (τ ϕε )2 ¯R . H[ϕε ] ¯R where, ∂ τ (ϕε )2 = (ϕε )2 + 2ϕε (ϕε )τ τ ¯R ¯R ¯R ¯R ∂τ = (ϕε 0 )2 (1 + κτ )(1 + 3κτ ), R H[ϕε ] ¯R = 1−x (ϕε 0 )x R ϕε 0 R 2 1 + τ (1 + 2κτ + κ2 τ 2 )ϕε 0 (ϕε 0 )xx − τ 2 (1 + κτ )4 [ ϕε 0 (ϕε 0 )x ]2 , R R R R 4 and 1−x (ϕε 0 )x R ϕε 0 R ϕε 0 = R . ε σR We hence have σ[ϕε ]2 (x, τ ) ¯R = (1 + κτ )3 (1 + 3κτ ) ε0 (ϕ )xx ε [σR (x, 0)]−2 + τ (1 + 2κτ + κ2 τ 2 ) Rε 0 − 1 τ 2 (1 + κτ )4 [(ϕε 0 )x ]2 4 R ϕ R 81 ε Note that fix R, σR and (ϕε 0 ) are bounded above and below by positive constants in Ω. R ε The bounds are of order O(|R|p(±R) ). In addition, by Condition H0, ϕε 0 /σR is bounded R below and above in R by positive constants, uniformly in all (R, ε). Therefore, by choosing κ(R) big enough, and then choose Tδ (R) small enough, we have H[ϕε ] bounded below and ¯R above on ΩT (R) . In conclusion, we have σ[ϕε ] be smooth, strictly positive(away from zero), ¯R δ and bounded12 above in Ω, uniformly in ε. Next, we estimate the derivatives of σ[ϕε ]2 . ¯R ∂ σ[ϕε ]2 ¯R ∂x (1 + κτ )(1 + 3κτ ) = H[ϕε ]2 ¯R 2ϕε 0 ϕε 0 H[ϕε ] − ¯R R Rx ∂ H[ϕε ](ϕε 0 )2 , ¯R R ∂x and ∂ σ[ϕε ]2 ¯R ∂τ 1 (ϕε 0 )2 R = 2 H[ϕε ]2 ¯R · ∂ ∂ (1 + κτ )(1 + 3κτ ) H[ϕε ] − ¯R H[ϕε ](1 + κτ )(1 + 3κτ ) . ¯R ∂τ ∂τ where ∂ H[ϕε ] ¯R ∂x ϕε 0 Rx =2 1 − x ε 0 ϕR  − ϕε 0 Rx ϕε 0 R −x ϕε 0 ϕε 0 − ϕε 0 R xx R Rx 2 ϕε 0 R 2   + (τ + 2τ 2 κ + κ2 τ 3 ) 1 2 ·(ϕε 0 ϕε 0 + ϕε 0 (D3 ϕε 0 ) ) − τ 2 (1 + κτ )4 ϕε 0 ϕε 0 (ϕε 0 + ϕε 0 ϕε 0 ), R x R xx Rx R R Rx Rx R R xx 2 12 The bounds are of order Rp(±R) . 82 and ∂ H[ϕε ] ¯R ∂τ =(ϕε 0 ϕε 0 )(1 + 4κτ + 3κ2 τ 2 ) − (ϕε 0 ϕε 0 )2 R R xx R R xx 1 τ (1 + κτ )4 + τ 2 κ(1 + κτ )3 . 2 ∂ From Claim 4.16, the first factor of the first term of ∂x H[ϕε ] is bounded below and above ¯R ∂ by constants for all (R, ε). The second factor of the first term of ∂x H[ϕε ] is O(1/|x|) as ¯R |x| → ∞, uniformly for all (R, ε). Moreover, for given R > 0, on can choose Tδ (R, ε) small enough so that τ D3 ϕε 0 is also bounded on ΩT (R,ε) , and the bounds are uniform for all ε R δ if one chooses the appropriate Tδ (R, ε). In addition, we notice that ϕε 0 ϕε 0 is uniformly13 R Rx ∂ ¯R bounded by constants on R. This means ∂x H[ϕε ] has uniform14 bounds on ΩT (R,ε) . For δ ∂ ∂ the same reason, we have the same conclusion for ∂τ H[ϕε ]. Therefore, both σ[ϕε ]2 and ¯R ¯R ∂x ∂ σ[ϕε ]2 are bounded on ΩT (R,ε) , and the bounds are uniform for all (R, ε). ¯R δ ∂τ To summarize, fixing (R, ε), σ[ϕε ]2 has the following properties: ¯R • σ[ϕε ]2 is bounded below and above by positive numbers in Ω. Moreover, the bounds ¯R in ΩT (R) are of order Rp(R) , and are uniform in ε. δ • σ[ϕε ]2 is uniformly Lipschitz on ΩT (R,ε) , with uniform Lipschitz constants for all ¯R δ (R, ε). Now, let us further motivate our generalized Comparison Principle by estimating 13 for 14 for all (R, ε) all (R, ε) 83 σ[ϕε ] 2 ¯R − 1. σε Corollary 4.19. 
Given R, by choosing κ(R) > 1 big enough and Tδ (R) small enough, one can construct σ[ϕε ] such that ¯R σ[ϕε ] 2 ¯R − 1 = 2τ σε ε0 ε 2 (ϕR )xx 2κ − σR 2ϕε 0 R for (x, τ ) ∈ [−R, R] × (0, Tδ ]; σ[ϕε ] 2 ¯R −1= σε ε σR 2 · 2τ σε ε (σR )τ − σε + O(τ 2 ) > 0 and (4.56a) ε0 ε 2 (ϕR )xx 2κ − σR 2ϕε 0 R ε (σR )τ − σε + ε σR 2 − 1 + O(τ 2 ) ε σ for (x, τ ) ∈ [−R − η, R + η]C × (0, Tδ ]. (4.56b) 1 , then 6κ ε0 (σ ε )τ ε (ϕ )xx 2κ − σR 2 R ε 0 − R ≤ 1 in |x| ≤ R, σε 2ϕR Moreover, if one choses Tδ ≤ 2τ κ ≤ 2τ for all ε > 0. We have similar estimates for σ[ϕε ] 2 R σε (4.56c) − 1. ε ε Proof. Since σ satisfies Condition H1, for given R, the bounds of σR and (σR )x in ΩT are uniform for all ε. Consequently, fix R, x, we have uniform bounds for ϕε 0 , (ϕε 0 )x and R R (ϕε 0 )xx (for all ε). However, for any pair of R, and x, D3 (ϕε 0 ) may approach to ±∞ as R R ε → 0. Claim 4.16 says σ[ϕε ] ¯R (x, τ ) σε σ ε σ[ϕε ] ¯R = R· ε (x, τ ) σε σR σε = R· σε 1+τ ε0 ε 2 (ϕR )xx 2κ − σR 2ϕε 0 R 84 − ε (σR )τ ε σR (x, 0) + O(τ 2 ) . ε0 ε ε σR ε 2 (ϕR )xx − (σR )τ . We now give estimates on ε and 2κ − σR σ σε 2ϕε 0 R 1. ε σR = 1 for |x| ≤ R, and σε 1 κ2 1 1 + R2 1 + x2 −p(x)/4 σε ≤ R ≤ κ2 1 σε 1 + R2 1 + x2 −p(x)/4 for |x| ≥ R + η 15 . We have    2 κ (1 + x2 )−p(x)/4  1 ε σR ≤  σε  2 κ  1 if p(x) < 0 (4.57) if p(x) > 0. ε ε ε ε 2. (a) The definition of σR shows that (σR )x , (σR )xx , and (σR )τ are bounded on and they all vanish on ( R+η C ) . T Hence, for |x| ≥ R + 1, x , ϕε 0 (x) = ε R dR (x) x dε (x) − σε (x,0) R ε 0 ) (x) = R (ϕR x , (dε (x))2 R (ϕε 0 )xx (x) = R 15 The ε 2[ϕε 0 (x) − σR (x, 0)] R . ε (σR (x, 0)dε (x))2 R constant κ1 and the function −1 < p(x) < 1 are defined in Condition H0. 85 R T, Similar method as in Claim 4.16 shows lim ϕε 0 (x) x→±∞ R ε ε = lim σR (x, 0) = σR (±R, 0); x→±∞ 1 lim (σ ε )x (x, 0) 2 x→±∞ R lim (ϕε 0 )x (x) = R x→±∞ = 0; lim (ϕε 0 )xx (x) x→±∞ R ε ε x(σR (x, 0))x + 2[ϕε 0 (x) − σR (x, 0)] R = lim ε x→±∞ (σR (x, 0)dε (x))2 R ε 2[ϕε 0 (x) − σR (±R, 0)] R = lim x→±∞ (σ ε (±R, 0)dε (x))2 R R = 0; ε lim x(ϕε 0 )x (x) = lim x(σR )x (x, 0) R x→±∞ x→±∞ = lim x · 0 x→±∞ = 0; ε ε x(σR (x, 0))x + 2[ϕε 0 (x) − σR (x, 0)] R ε x→±∞ (σR (x, 0)dε (x))2 R lim x(ϕε 0 )xx (x) = lim x R x→±∞ ε 2x[ϕε 0 (x) − σR (x, 0)] x2 · 0 R = lim + lim x→±∞ (σ ε (x, 0)dε (x))2 x→±∞ (σ ε (x, 0)dε (x))2 R R R R ε [ϕε 0 (x) − σR (x, 0)] 2x R · lim ε x→±∞ (dε (x))2 x→±∞ (σR (x, 0))2 R = 0 + lim = 0. ε The definitions of σR , and dε imply that |xdε | is bounded away from zero when R R |x| ≥ R + η. Therefore, i. the magnitude of ε0 ε )2 (σR )xx (ϕR 2ϕε 0 R R + η. 86 ε ϕε 0 − σR R = is bounded above for |x| ≥ xdε R ii. The magnitude of ε (σR )τ is bounded for |x| ≥ R + η. ε σR iii. The magnitude of x(ϕε 0 )x and x(ϕε 0 )xx are also bounded for |x| ≥ R + η. R R (b) The estimates in Claim 4.16 show that |(ϕε 0 )xx | has uniform bounds on R The definitions of ε σR , ϕε 0 R show that the magnitudes of both ε (σR )τ are bounded above for |x| ≤ R. ε σR ε0 ε )2 (ϕR )xx (σR 2ϕε 0 R R T; and Hence, given (R, ε), there exists κ(R) > 1 such that ε0 ε )2 (ϕR )xx (σR 2ϕε 0 R + ε (σR )τ ≤ κ on R. ε σR Consequently, ε κ ≤ 2κ − σR 2 (ϕε 0 )xx R − 2ϕε 0 R ε (σR )τ ≤ 3κ σε for given (R, ε) and proper κ(R) and Tδ (R). Summing up, by choosing κ(R) > 1 big enough and Tδ (R) small enough16 , one can construct σ[ϕε ] such that (4.56) are satisfied. ¯R When finding the asymptotics of a function, we find two sequences of functions with known asymptotics that converge to the target function from below and above. 
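As a concrete illustration of such bracketing functions, the sketch below evaluates ϕ⁰ of (4.31) by quadrature and forms the candidates ϕ⁰(1 + κτ) and ϕ⁰(1 − κτ) of (4.32)–(4.33). The example σ(·, 0) and the values of κ and τ are placeholders; as in the text, the construction is only meaningful for τ well below 1/κ.

import numpy as np
from scipy.integrate import quad

def phi0(x, sigma0):
    # phi^0(x) = x / int_0^x ds / sigma(s, 0): the harmonic average of the
    # local volatility along [0, x], cf. (4.31).
    if abs(x) < 1e-12:
        return sigma0(0.0)
    d, _ = quad(lambda s: 1.0 / sigma0(s), 0.0, x)
    return x / d

def brackets(x, tau, kappa, sigma0):
    # Candidates (4.32)-(4.33): phi0*(1 + kappa*tau) and phi0*(1 - kappa*tau);
    # meaningful only for tau much smaller than 1/kappa.
    p0 = phi0(x, sigma0)
    return p0 * (1.0 + kappa * tau), p0 * (1.0 - kappa * tau)

sigma0 = lambda s: 0.2 + 0.05 * np.tanh(s)        # illustrative sigma(., 0)
upper, lower = brackets(x=0.5, tau=0.01, kappa=5.0, sigma0=sigma0)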
The following generalized comparison principle [5] allows us to transfer the comparison between implied volatilities, which are solutions to two different PDEs, into the comparison between “associated local volatilities”, which are straightforward functions in class I (0, ξ) of our choice. Let us now state and prove the following critical result. 16 for example, Tδ ≤ 1/(6κ) 87 Lemma 4.20. (Comparison Principle) Assume σ satisfies Condition H1 and is bounded below in Ω. Given R >> 1, ε > 0, define σ ε , ϕε , and σ[ϕε ] as in Corollary 4.19. We also ¯R ¯R define σε , ϕε , and σ[ϕε ] in a similar way. R R From Theorem 2.21, there are vR , v ε , and v ε ∈ C(Ω) ∩ W 2,1,p (ΩT ) such that ¯ε R 1 ¯R vε vε vε (¯R )τ = (σ[ϕε ])2 ((¯R )xx − (¯R )x ), (¯R )(x, 0) = (ex − 1)+ ; vε 2 1 (v ε )τ = (σ[ϕε ])2 ((v ε )xx − (v ε )x ), (v ε )(x, 0) = (ex − 1)+ ; R R R R R 2 1 ε ε (v ε )τ = (σ ε )2 (vxx − vx ), v ε (x, 0) = (ex − 1)+ . 2 Moreover, we have the following comparison between the corresponding implied volatilities 17 , ϕε , ϕε , and ϕε on a bounded domain. ¯R R R i) There exist N (R) > 1, κ(R, ε) sufficiently large, and Tδ (R, ε) sufficiently small such that R/N ϕε ¯R ≥ ϕε in . (4.58) Tδ ii) Similarly, for the same N (R) > 1, κ(R, ε) and Tδ (R, ε), R/N ϕε R ≤ ϕε , in (4.59) Tδ Proof. We only give the proof for inequality (4.58). Inequality (4.59) can be proved similarly. Given (R, ε). Since vR = u(x, (ϕε )2 τ ), v ε = u(x, τ (ϕε )2 ), and uτ (·, τ ) > 1 in ΩT , instead ¯ε ¯R 17 Recall from previous sections: u(x, τ ϕ2 ) = v(x, τ ) is the implied Given v, the solution to (1.8), the function ϕ such that volatility associated with σ 2 . Here u is the solution to (3.1), and is explicitly given in (3.2). 88 of (4.58), it is equivalent to show the difference function R/N (R) ∆(x, τ ) := vR (x, τ ) − v ε (x, τ ) ≥ 0 ¯ε in ΣT (R,ε) , δ (4.60) for some N (R) > 0. In fact, ∆ satisfies the following equation: σ[ϕε ] 2 ¯R ε − 1 vτ σε 1 ∆τ − (σ[ϕε ])2 (∆xx − ∆x ) = ¯R 2 ∆(x, 0) = 0 in in ΩT δ (4.61) (4.62) R. We formally write the solution to equation (4.61) as the following double integral, and show positivity in R/N Tδ , where the value of N > 1 will be specified in the proof. The integrability of this double integral will become clear as we proceed. ∆(x, τ ) τ Γ0 (x, τ ; ξ, s) ds = 0 R σ[ϕε ] 2 ¯R −1 σε τ 1 Γ0 = (σ ε (0, 0))2 ds 2 0 [−R,R] ε vτ (ξ, s)dξ, σ[ϕε ] 2 ¯R −1 σε τ 1 + (σ ε (0, 0))2 ds Γ0 2 0 [−R−η,−R]∪[R,R+η] τ 1 + (σ ε (0, 0))2 ds Γ0 2 0 [−R−η,R+η]C (α+βs) ξ 2 +1−α ε γ (ξ, s; 0, 0)dξ e σ[ϕε ] 2 ¯R −1 σε σ[ϕε ] 2 ¯R −1 σε := I1 + I2 + I3 . (4.63) e (α+βs) ξ 2 +1−α ε γ (ξ, s; 0, 0)dξ e (α+βs) ξ 2 +1−α ε γ (ξ, s; 0, 0)dξ (4.64) The goal is to show I1 > |I3 | on a bounded domain of ΩT . But first, let us review a few estimates on terms appear in the last equation. 89 Define the operator for this proof : L0 := ∂ 1 ∂2 ∂ − (σ[ϕε ])2 ( 2 − ¯R ). ∂τ 2 ∂x ∂x And, L1 := ∂ 1 ∂2 σε ∂ − (σ ε )2 ( 2 − )−2 τ. ∂τ 2 ∂x σε ∂x ¯R 1. Recall Claim 4.18, 1 (σ[ϕε ])2 is smooth and satisfies Condition H2. Based on our 2 assumption on σ 2 , the constants ν > 0, R0 = 0, and · p,q = 0 are uniform in (R, ε). The constants µ, M , and M0 depend only on R. By Proposition 4.5 and Remark 4.6, the fundamental solution Γ0 to L0 exists and there exist positive constants C1 , α1 , α2 , depending only on the structure of L, i.e., R, such that 2 2 α |x−ξ| α |x−ξ| 1 − 2τ −s − 1τ −s ≤ Γ0 (x, τ ; ξ, s) ≤ C1 (τ − s)−1/2 e . (τ − s)−1/2 e C1 (4.65) 2. 
Recall Lemma 4.10, there exist positive constants C2 , and α3 , depending on the structure of L1 and Tδ , such that 1 ε vτ (ξ, s; 0, 0) = (σ ε (0, 0))2 · exp{(α + βs) 2 ξ 2 + 1 − α} · γ ε (ξ, s; 0, 0), where constants α > 0, β(α) = max{κ2 , κ3 }(α + 2)2 are fixed for all (R, ε). 18 By Remark 4.12, (a) γ ε (ξ, s; 0, 0) ≤ C2 s−1/2 18 κ 2 and κ3 are defined in Condition H3. 90 in ΩT , δ (4.66) and (b) γ ε (ξ, s; 0, 0) ≥ 1 −1/2 ξ2 s exp −α3 C2 s R in . Tδ 3. Recall equations (4.56), and (4.57) in Remark 4.19 σ[ϕε ] 2 ¯R −1 σε =2τ ε0 ε 2 (ϕR )xx 2κ − σR 2ϕε 0 R ε (σR )τ − σε + O(τ 2 ) ≥2τ κ + O(τ 2 ) > 0 for (x, τ ) ∈ [−R, R] × (0, Tδ ]; and σ[ϕε ] 2 ¯R −1 σε = ε σR 2 · 2τ σε ≤ ε σR 2 · 2τ · 3κ + σε ε 2κ − σR 2 (ϕε 0 )xx R − 2ϕε 0 R ε (σR )τ σε + ε σR 2 − 1 + O(τ 2 ) σε ε σR 2 − 1 + O(τ 2 ) σε for (x, τ ) ∈ [−R − η, R + η]C × (0, Tδ ], where    2 κ (1 + x2 )−p(x)/4  ε σR 1 ε ≤ σ  2 κ  1 if p(x) < 0 if p(x) > 0. Now, we are readily estimating I1 and I3 . 91 (4.67) 1. Lower bound for I1 : I1 τ 1 ds Γ0 := (σ ε (0, 0))2 2 0 [−R,R] σ[ϕε ] 2 ¯R −1 σε e (α+βs) ξ 2 +1−α ε γ (ξ, s; 0, 0)dξ 1 −α 1 e (τ − s)−1/2 ≥ (σ ε (0, 0))2 2 C1 C2 [−R,R] (x − ξ)2 ξ2 · exp −α1 · s · exp (α + βs) ξ 2 + 1 · s−1/2 exp −α3 τ −s s τ 1 ε 2 1 e−α = (σ (0, 0)) ds (τ − s)−1/2 2 C1 C2 0 [−R,R] (x − ξ)2 ξ2 · exp −α1 · s · exp (α + βs) ξ 2 + 1 · s−1/2 exp −α3 τ −s s 1/2 τ 1 ε s 2 1 e−α = (σ (0, 0)) ds 2 C1 C2 τ −s 0 (x − ξ)2 ξ2 + (α + βs) ξ 2 + 1 − α3 exp −α1 · τ −s s [−R,R] dξ. The first inequality holds because when |ξ| ≤ R, σ[ϕε ] 2 ¯R (ξ, s) − 1 σε = 2s ε0 ε 2 (ϕR )xx 2κ − σR 2ϕε 0 R R ≥ 2s on ε (σR )τ − σε (ξ) + O(s2 ) > 0 , Tδ holds for appropriate κ and Tδ , uniformly in (R, ε). Now, fix τ ≤ Tδ , we show a lower bound of [−R,R] exp −α1 (x − ξ)2 + (α + βs) τ −s 92 ξ 2 + 1 − α3 ξ2 s dξ dξ dξ for all 0 ≤ s < τ . (a) The first step is to complete the following square: (x − ξ)2 ξ2 − α1 + (α + βs) ξ 2 + 1 − α3 τ −s s  2 α1 s x α1 s + α3 (τ − s)ξ + √ α1 α3 α1 s+α3 (τ −s)  − =− x2 . α1 s + α3 (τ − s) (τ − s)s (b) Now we have [−R,R] exp −α1 √ = 2π exp − (x − ξ)2 + (α + βs) τ −s α1 α3 α1 s + α3 (τ    1 1 √ · exp −  2 2π [−R,R]  − s)  ξ 2 + 1 − α3 ξ2 s dξ x2 α1 s + α3 (τ − s)ξ + √ · 2 α1 s x   α1 s+α3 (τ −s) (τ − s)s/2 Again, denote 2 x ξ 1 N (x) = √ e− 2 dξ. 2π −∞ Changing variables, one have the last integral equals to (τ − s)s 2[α1 s + α3 (τ − s)]   α1 s α1 s + α3 (τ − s)R + √ x α1 s+α3 (τ −s)  ·[N  (τ − s)s/2   α1 s − α1 s + α3 (τ − s)R + √ x α1 s+α3 (τ −s) ] −N  (τ − s)s/2 for |x|, |ξ| ≤ R. 93     dξ. (c) Next, we show the interval  α1 s + α3 (τ − s)(±R) + √   α1 s x α1 s+α3 (τ −s)  (τ − s)s/2 α s + α3 (τ − s) 2 1 R+ (τ − s)s = ± √ 2αx α1 s + α3 (τ − s) · s τ −s • Covers the origin. We simply take the ratio of between half of the length of the interval and the magnitude of the center of that interval. α s + α3 (τ − s) 2 1 R/ (τ − s)s = R α , α|x| 1 α1 s + α3 (τ − s) · s τ −s R τ α1 + α3 −1 α|x| s > 2α|x| R α1 s + α3 (τ − s) α|x| s = √ since 0 ≤ s < τ >1. The last inequality holds uniformly for all (R, ε), if we choose α ≥ 1 19 . This is because α1 > 1 for anyR sufficiently large and any ε > 0. • Furthermore, the length of the above interval has a uniform lower bound for 19 Recall the summery of Remark 4.6, α1 ∼ O max 2 1 M0 + M 2 , ν ν2 , ¯ where ν, M are lower and upper bounds of σ[ϕε ], respectively, M0 is the upper bound for R ∂ σ[ϕε ]|. ¯ | ∂x R 94 all 0 ≤ s < τ . The minimum length occurs when ∂ 2· ∂s α s + α3 (τ − s) √ α s2 − α3 (τ − s)2 2 1 = 2[(τ − s)s]−3/2 1 = 0. 
(τ − s)s α1 s + α3 (τ − s) √ Therefore, the minimal length of the interval is at s = √ that length is 2R α3 √ t, and α1 + α3 √ √ 2( α1 + α3 )2 /τ , which increases as R increases, or as τ decreases. That means, the length of the above interval is bounded below. • In conclusion, there exists constant ϑ, such that N (+) − N (−) ≥ ϑ for all R, whenever κ(R, ε) big enough, and ε, Tδ (R, ε) small enough. (d) Plugging the result to part (b), we have [−R,R] exp −α1 √ ≥ 2π exp − (x − ξ)2 ξ2 + (α + βs) ξ 2 + 1 − α3 τ −s s α1 α3 x2 α1 s + α3 (τ − s) 95 · dξ (τ − s)s · ϑ. 2[α1 s + α3 (τ − s)] Hence I1 1/2 s 1 1 −α τ ≥ (σ ε (0, 0))2 e 2 ds 2 C1 C2 τ −s 0 (x − ξ)2 ξ2 + (α + βs) ξ 2 + 1 − α3 · exp −α1 τ −s s [−R,R] √ 1 −α ≥ 2π(σ ε (0, 0))2 e ·ϑ C1 C2 τ · 0 1/2 s α1 α3 exp − x2 τ −s α1 s + α3 (τ − s) · dξ (τ − s)s ds 2[α1 s + α3 (τ − s)] √ = 2π(σ ε (0, 0))2 1 −α e ·ϑ C1 C2 τ α1 α3 s x2 · ds · exp − α1 s + α3 (τ − s) 2[α1 s + α3 (τ − s)] 0 √ 1 −α ≥ 2π(σ ε (0, 0))2 e ·ϑ C1 C2 τ α1 α3 s exp − x2 · ds · min{α1 , α3 }τ 2 max{α1 , α3 }τ 0 π ε 1 −α (σ (0, 0))2 e ·ϑ 2 C1 C2 α1 α3 x2 (max{α1 , α3 })−1/2 . ·τ 3/2 exp − min{α1 , α3 }τ = 96 2. Upper bound for |I3 |: |I3 | τ 1 ds ≤ (σ ε (0, 0))2 e−α 2 0 [−R−η,R+η]C τ 1 ds ≤ (σ ε (0, 0))2 e−α C1 C2 2 0 · Γ0 e σ[ϕε ] 2 ¯R − 1 dξ σε (α+βs) ξ 2 +1 ε γ (ξ, s; 0, 0) [−R−η,R+η]C (τ (x−ξ)2 −1/2 e−α2 τ −s − s) (α+βs) ξ 2 +1 −1/2 s e σ[ϕε ] 2 ¯R − 1 dξ σε τ 1 = (σ ε (0, 0))2 e−α C1 C2 ds 2 0 [−R−η,R+η]C (x − ξ)2 · exp −α2 + (α + βs) τ −s [(τ − s)s]−1/2 ξ2 + 1 · σ[ϕε ] 2 ¯R − 1 dξ. σε Next, we estimate each term of the integrand. • (x − ξ)2 + (α + βs) ξ 2 + 1 τ −s −α2 ξ 2 + 2α2 xξ + (α + βs)(τ − s) ξ 2 + 1 − α2 x2 = τ −s (1) −α ξ 2 + 2α xξ + ατ ξ 2 + 1 − α x2 2 2 2 ≤ τ −s (2) −α2 ξ 2 + 2 α2 ξ 2 + ατ ξ 2 + 1 − α2 x2 N ≤ τ −s − α2 3 2 − 4 − N α 2 ξ 2 − α 2 x2 ≤ . τ −s (3) Here, inequalities (1), (2), (3) hold for the following reasons: (1) (α + βs)(τ − s), 0 ≤ s < τ ≤ Tδ takes its maximum value when s = βτ −α = 2β 97 τ 2 − α , 2 max{κ2 ,κ3 }(α+1)2 which is negative when Tδ is very small. So, if we make Tδ small enough, then (α+βs)(τ −s) takes its maximum value at s = 0. Therefore, (α + βs)(τ − s) ≤ ατ for 0 ≤ s < τ ≤ Tδ << 1. R 1 (2) For |x| ≤ R/N , and |ξ| ≥ R + η > R, |xξ| < N |ξ| < N ξ 2 . α 2 (3) When Tδ < 4α and R is sufficiently large, ξ 2 + 1 ≤ αTδ ατ So, −α2 ξ 2 + ατ 1 α ξ 2 + 1 ≤ α 2 ξ 2 = α2 ξ 2 . 4α 4 ξ 2 + 1 ≤ − 3 α2 ξ 2 . 4 • σ[ϕε ] 2 ¯R (ξ, s) − 1 σε ≤ ε σR 2 · 2τ σε ≤κ4 (1 + ξ 2 )2 · 2 1 ε 2κ − σR 2 (ϕε 0 )xx R − 2ϕε 0 R ε (σR )τ σε + ε σR 2 − 1 + O(τ 2 ) σε 1 3κ + κ4 (1 + ξ 2 )2 + 1 + 1 1 3κ =5κ4 (1 + ξ 2 )2 + 2, 1 1 . if one chooses τ ≤ Tδ ≤ 6κ The validation of the first inequality is discussed in Remark 4.19. The second inequality holds uniformly for all (R, ε) when we choose κ(R, ε) > 1 large enough and Tδ (R, ε) ≤ 1/κ(R, ε). Therefore, for R big enough, one can choose Tδ so small that 5κ4 (1 + ξ 2 )2 + 2 ≤ exp 1 1 α ξ2 4 2 τ −s 98 for all 0 ≤ s < τ ≤ Tδ , |ξ| ≤ R + η, uniformly for all ε > 0. Therefore (x − ξ)2 exp −α2 + (α + βs) ξ 2 + 1 · τ −s   1 2  − 2 − N α2 ξ 2 − α2 x2  ≤ exp   τ −s ξ2 := exp −˜ 2 α τ −s x2 exp −α2 τ −s σ[ϕε ] 2 ¯R −1 σε for all 0 ≤ s < τ ≤ Tδ , |ξ| ≤ R + η, uniformly for all ε > 0, where α2 (N, α2 ) = ˜ 1 2 − 2 N α2 (R). 
So |I3 | τ 1 ds ≤ (σ ε (0, 0))2 e−α C1 C2 2 0 [−R−η,R+η]C (x − ξ)2 · exp −α2 + (α + βs) τ −s ξ2 (τ − s)−1/2 s−1/2 +1 · σ[ϕε ] 2 ¯R − 1 dξ σε τ 2 1 ε 2 e−α C C −1/2 s−1/2 exp −α x ≤ (σ (0, 0)) (τ − s) 1 2 2 2 τ −s 0 2 ξ · exp −˜ 2 α dξ τ −s [−R−η,R+η]C τ 1 ≤ (σ ε (0, 0))2 e−α C1 C2 (τ − s)−1/2 s−1/2 2 0 2 x τ −s (R + η)2 · exp −α2 · · 2 exp −˜ 2 α τ −s 2˜ 2 α τ −s 99 ds. ds The last inequality holds because d ξ2 exp −˜ 2 α dξ τ −s ξ2 = exp −˜ 2 α τ −s · − 2˜ 2 α ξ dξ, τ −s and consequently, ξ2 exp −˜ 2 α τ −s dξ = τ −s d ξ2 · exp −˜ 2 α 2˜ 2 ξ dξ α τ −s ≤ τ −s d ξ2 exp −˜ 2 α · 2˜ 2 α dξ τ −s since |ξ| > R + η >> 1. Now, we have |I3 | 1 ˜ ≤ (σ ε (0, 0))2 e−α C1 C2 α2 −1 2 τ τ − s 1/2 x2 · exp −α2 · s τ −s 0 1 ≤ (σ ε (0, 0))2 e−α C1 C2 α2 −1 ˜ 2 τ τ 1/2 x2 · exp −α2 · s τ 0 (R + η)2 · exp −˜ 2 α τ −s (R + η)2 · exp −˜ 2 α τ x2 1 ˜ = (σ ε (0, 0))2 e−α C1 C2 α2 −1 τ · 2 · exp −α2 2 τ =(σ ε (0, 0))2 e−α C1 C2 α2 −1 τ · exp −α2 ˜ where α2 (N, α2 ) = ˜ 2 1 − 2 N α2 (R). 100 x2 τ ds ds · exp −˜ 2 α · exp −˜ 2 α (R + η)2 τ (R + η)2 τ , R/N (R) , Tδ 3. We shall show, I1 > |I3 | in and the difference is independent of η. Recall I1 π ε 1 −α (σ (0, 0))2 e ·ϑ 2 C1 C2 α1 α3 ·τ 3/2 exp − x2 (max{α1 , α3 })−1/2 . min{α1 , α3 }τ ≥ And |I3 | ≤(σ ε (0, 0))2 e−α C1 C2 α2 −1 τ · exp −α2 ˜ where α2 (N, α2 ) = ˜ 1 2 − 2 N x2 τ · exp −˜ 2 α (R + η)2 τ , α2 (R). For fixed R, ε, and Tδ (R, ε), α1 , α2 , C1 and C2 are constants. We are interested in the comparison between I1 and |I3 | as τ → 0, in which case both bounds will be dominated by the exponential factor. Therefore, it is sufficient to show α2 x2 + α2 (R + η)2 > ˜ 1 2 − 2 N α1 α3 x2 , min{α1 , α3 } i.e., α2 (R + η)2 > (max{α1 , α3 } − α2 ) x2 . 101 Note x2 ≤ R2 /N 2 , and (R + η)2 > R2 , so the last inequality holds if 1 2 − 2 N α2 > (max{α1 , α3 } − α2 ) /N 2 , i.e. 1 2 max{α1 , α2 } − α2 N − 2N > , 2 α2 therefore, we need N > 2 + √ 2 1+ max{α1 , α3 } . α2 Recall Remark 4.6, and the summary in Claim 4.18, 2 M 2 (R) + M0 1 ν(R) α1 , α3 ∼ max{ 2 , }, and α2 ∼ 2 , uniformly in ε. Here M (R), M (R) ν(R) ν (R) and ν(R) are the upper and lower bound for σ 2 [ϕε ] on ΩT , respectively. M0 is the ¯R δ ∂ 2 ε upper bound 20 for σ [ϕR ] on ΩT (R,ε) . Thus, ¯ δ ∂x N∼ max 2 M 2 (R) + M0 1 , ν(R) ν 2 (R) M 2 (R) · ν(R) 1/2 only depends on R. Our next main result in this section is: Theorem 4.21. Suppose p+ < 1/2, and p− = 0. i) In the limit τ → 0, the implied volatility ϕ is the harmonic-mean of the local volatility, namely, given x ∈ R, 1 1 ds = . τ →0 ϕ(x, τ ) 0 σ(sx, 0) lim 2,1,p (4.68) ii) Conversely, if ϕ ∈ Wloc (Ω) (for any p > 1), satisfies (3.7) and (4.68), then ϕ ≡ ϕ. ˜ ˜ 20 uniformly for all (R, ε) 102 Proof. i) Given (R, ε) > 0. Applying the comparison Lemma 4.20, we get R/N (R) ϕε ¯R ≥ ϕε (4.69) in Tδ for some N (R) > 1, and 0 < Tδ (R, ε) << min{T, 1}. Similarly, from Lemma 4.20 R/N ϕε,R ≤ ϕε , in (4.70) Tδ without lost of generality, for the same N (R) > 1, and 0 < Tδ (R, ε) << min{T, 1}. Therefore, for all (R, ε), there exists (κ, Tδ )(R, ε), and N (R) such that −1 ds (1 − κτ ) ε 0 σR (s, 0) x x ≤ϕε (x, τ ) −1 ds (1 + κτ ) ε 0 σR (s, 0) x ≤x for all (x, τ ) ∈ R/N (R) . Tδ (R,ε) (4.71) This yields −1 ds ε 0 σR (s, 0) x x ≤lim inf ϕε (x, τ ) τ →0 ≤lim sup ϕε (x, τ ) τ →0 −1 ds ε 0 σR (s, 0) x ≤x for all ε > 0. 103 (4.72) R/N (R) . Tδ (R,ε) ε Now, let ε → 0, then σR (x, 0) → σR (x, 0), which equals to σ in On the other hand, by theorem 2.21, ϕε → ϕ as ε → 0. Equation (4.72) hence implies −1 ds =x 0 σR (s, 0) x lim ϕ(x, τ ) = x τ →0 −1 ds 0 σ(s, 0) R/N (R) x . 
in (4.73) Tδ (R,ε) To show the above equality holds on ΩT (R,ε) , it is sufficient to show N (R) ∼ δ M 3/2 (R) = o(R) ν(R) as R → ∞, uniformly in ε. Recall p+ = max{p(x), 0}, and p− = max{−p(x), 0}, then α1 ∼ O(R2p+ +p− ), and α2 ∼ x∈R −p− −2p+ ), O(R x∈R uniformly in ε21 . Therefore, max{α1 , α3 } = O(R2p+ +p− ) = o(R) if 2p+ + p− < 1. α2 Since R is an arbitrary large number, N (R) = o(R) as R → ∞ for p+ < 1/2, and p− = 0. Equation (4.73) gives the point-wise convergence of ϕ as τ → 0. ii) We now show the uniqueness part of Theorem 4.21. 2,1,p Suppose ϕ ∈ WLoc (ΩT ), also satisfies (3.7) and (4.68). Let ˜ ∆(x, τ ) = u(x, τ ϕ2 (x, τ )) − u(x, τ ϕ2 (x, τ )). ˜ From part i), 1 ∆τ = σ 2 (∆xx − ∆x ) in ΩT . 2 21 for εsuf f icientlysmall 104 Furthermore, one can extend ∆(x, τ ) to a continuous function in Ω, which gives ∆(x, 0) = 0 ∀x ∈ R. To see how, we take any x ∈ R, then compute the limit of ∆ as τ goes to zero: lim ∆(x, τ ) = lim u(x, τ ϕ2 (x, τ )) − lim u(x, τ ϕ2 (x, τ )) ˜ τ →0 τ →0 u∈C(Ω) = τ →0 u(x, lim τ ϕ2 (x, τ )) − u(x, lim τ ϕ2 (x, τ )) ˜ τ →0 τ →0 (1) = u(x, 0) − u(x, 0) = 0. 1 ds 1 1 Equality (1) holds since 0 < lim ϕ(x,τ ) = lim ϕ(x,τ ) = 0 σ(sx,0) < ∞. Notice that ˜ τ →0 τ →0 |∆(x, τ )| ≤ |u(x, τ ϕ2 (x, τ ))| + |u(x, τ ϕ2 (x, τ ))| < 2ex , one may apply the generalized ˜ Maximum Principle, Theorem 2.11 and conclude ∆ ≥ 0 and ∆ ≤ 0 in Ω, i.e., ϕ ≡ ϕ. ˜ 105 Chapter 5 The Asymptotic of ϕ as x → ±∞ Throughout this section, assume σ satisfies Condition H0. Furthermore, we assume lim σ(x, τ ) = x→+∞ σ+ (τ ) (respectively x → −∞, σ− (τ )), locally uniformly in τ , with σ± continuous. The main theorem in this section says: Theorem 5.1. I. If σ+/− = ∞ and p− ∈ [0, 1/2), then lim ϕ(x, τ ) = ∞, uniformly in x→±∞ t ∈ (0, T ] . II. If σ+/− = 0 and p+ = 0, then lim ϕ(x, τ ) = 0, uniformly in t ∈ (0, T ] . x→±∞ III. If p+ = p− = 0, then lim ϕ(x, τ ) = x→±∞ 2 1 τ 2 σ (s)ds . τ 0 ± (5.1) Remark 5.2. The cases where σ± is positive and finite are proved in [5]. We provide the prove for the case involving σ+/− = ∞, or σ+/− = 0. Recall the following “cut-off volatilities” functions, which will be used in the proof of 106 Theorem 5.1. For each m, n ∈ N, define    m       m (x, τ ) = σn σ(x, τ )       1/n  σ(x, τ ) ≥ m 1/n < σ(x, τ ) < m σ(x, τ ) ≤ 1/n. Similarly, for each m, n ∈ N, we define σ m (x, τ ) and σn (x, τ ) as the cut-off version of σ m from the above and below, respectively. By Theorem 2.21, σ, σ m , σn , and σn each has a corresponding implied volatility: ϕ, ϕm , ϕn , and ϕm respectively. n The proof of Theorem 5.1 takes the following steps: I. The asymptotic of ϕ if σ+/− = ∞ The case where σ+ = ∞ is equivalent to: lim inf ϕ(x, τ ) ≥ m uniformly in τ , for all x→+∞ m ∈ N. To this end, we need the following auxiliary function. 2 13p + 3 1/p Lemma 5.3. Given p ∈ (0, 1/2), define: M = , and , z0 = − 3 + 3p (1 − p)2 4 2 z1 = 1 − z0 . Take any η ∈ (0, 0.1), and κ ≥ 1.1 For 0 < εp < min √ ,1 , 1√ p + 2πp p 2πεp let Y = √ < 1. A > z1 > 2,2 then there exists ψ ∈ C 2 (R) satisfying the 2 M κ − εp following properties: (i) ψ ∈ W 2,0,∞ (R). In addition, ψ W 2,0,∞ (R) = |ψ|∞ + |ψ xx |∞ is independent of A, and ε 1 (ii) ψ(z) ≤ 1+η = lim ψ(z) ∀z ∈ R z→+∞ 1+η 1 + 0.1 1+η we will use the inequalities √ < < 1 and κ > √ . 
2 M M 2 for p ∈ (0, 1/2), z (p) has the smallest value when p = 9/31, and this value is bigger 1 than 2.848125 1 Later, 107 (iii) ψ(z) ≤ min εp ψ(z) ≤ √ Mκ (iv) (v) zψ (z) ≤p ψ(z) εp εp √ , √ Mκ M κ|z|p ∀z ∈ (−∞, 0), and ∀z ∈ [0, z1 + A) ∀z ∈ R zψ (z) → 0, ψ (z) → 0, ψ (z) → 0 as z → +∞. ψ(z) Proof. We prove this lemma in the Appendix. We also make use of the following Lemma when prove item II, the case σ− = ∞ in Theorem 5.1. 13p + 3 1/p 2 , and , z0 = − Lemma 5.4. Given p ∈ (0, 1/2), define: M = 3 + 3p (1 − p)2 4 2 z1 = 1 − z0 . Now take any η ∈ (0, 0.1), and κ ≥ 1. For 0 < εp < min √ ,1 , 1√ p + 2πp 2πεp p < 1. let Y = √ 2 M κ − εp ˜ Take any A > z1 > 1, then there exists ψ ∈ C 2 (R) satisfying the following properties: ˜ ˜ (i) ψ ∈ W 2,0,∞ (R), with ψ W 2,0,∞ , independent of A, and ε 1 ˜ ˜ (ii) ψ(z) ≤ 1+η = lim ψ(z) ∀z ∈ R ˜ (iii) ψ(z) ≤ min z→−∞ εp √ Mκ , √ εp M κ|z|p ∀z ∈ (0, ∞), and εp ˜ ψ(z) ≤ √ ∀z ∈ (−z1 − A, 0] Mκ ˜ z ψ (z) (iv) ≤ p ∀z ∈ R ˜ ψ(z) (v) ˜ z ψ (z) ˜ ˜ → 0, ψ (z) → 0, ψ (z) → 0 as z → −∞. ˜ ψ(z) ˜ Proof. If ψ(x) satisfies Lemma 5.3, then ψ(x) = ψ(−x) satisfies Lemma 5.4. 108 Proof for item I, σ+ = ∞. step 1: Comparison between local volatilities. We fix an m ∈ N, and show that lim ϕ(x, τ ) ≥ m, uniformly in τ . x→+∞ Note σ m (x, τ ) → m as x → +∞, uniformly in τ ∈ [0, T ]. Hence, given η ∈ (0, 0.1), ˜ there exists A such that m σ m (x, τ ) ≤ 1+η ˜ ∀(x, τ ) ∈ [A, +∞) × [0, T ]. (5.2) We denote the decay rate of σ as p = − min{0, lim p(x)/2}, where p(x) is defined in x→−∞ Condition H0. We now denote M := 2/(1 − p)2 . Also, by Condition H0, σm ≤ κ1 m for (x, τ ) ∈ (−∞, 0) × [0, T ], m, n ≥ 1, (5.3a) 1 σm ≤ ≤ κ1 κ1 m min for (x, τ ) ∈ [0, +∞) × [0, T ], m, n ≥ 1. (5.3b) 1 1 p, κ κ1 |x| 1 ≤ Now, we set ϕm (x, τ ) = mψ(εx) (5.4) where ψ is given in Lemma 5.3, the values of η, κ = κ1 being defined as above, and A, ε to be found. A simple computation yields H[ϕm ](x, τ ) = ψ 1 − εx (εx) ψ 2 1 2 + τ m2 ε2 ψ ψ (εx) − ε2 τ 2 m4 ψ 2 ψ (εx). 4 Clearly, by (i) and (iv) in Lemma 5.3, we can choose 109 (5.5) 0 < ε = ε( ψ W 2,∞ , T, m, p) < 1, independent of A such that 0< (1 − p)2 1 4 := ≤ H[ϕm ](x, τ ) ≤ <∞ 2 M M on Ω. (5.6) By (v) in Lemma 5.3, there is B > 0, for which εx ≥ B implies H[ϕm ] ≥ 1 1+η for τ ∈ [0, T ]. (5.7) ˜ Setting A = max{B, εA, 2}, we see that (5.6) holds for all z = εx ∈ R, τ ∈ (0, T ); (5.7) holds for z = εx ≥ A, τ ∈ [0, T ]. Next, we compute the local volatility (3.14) associated with ϕ, that is σ[ϕm ]2 (x, τ ) = m2 ψ 2 (εx) . H[ϕm ](x, τ ) (5.8) We now estimate σ[ϕm ]2 (x, τ ) in the following cases: • z = εx ∈ (−∞, 0). So x ∈ (−∞, 0): m2 ψ 2 (εx) H[ϕm ](x, τ ) (1) εp εp ,√ M κ1 |εx|p M κ1 ≤ m2 · min √ (2) 1 1 p, κ κ1 |x| 1 ≤ σ m / min 2 · min 1 =(σ m )2 (x, τ ) · 1 κ1 2 min 1 ,1 |x|p (3) ≤ (σ m )2 (x, τ ). 110 2 ·M εp εp √ ,√ M κ1 |εx|p M κ1 1 · 2 M κ2 1 min 1 , εp |x|p 2 ·M 2 ·M Inequalities (1) holds by (iii) in Lemma 5.3, and (5.6); (2) is from (5.3a); (3) holds due to 0 < εp < 1. • z = εx ∈ [0, A + z1 ). So x ≥ 0: m2 ψ 2 (εx) H[ϕm ](x, τ ) (1) ≤ m2 ψ 2 (εx) · M 2 εp √ ·M M κ1 (2) ≤ m2 (3) ≤ (σ m κ1 )2 ε2p ·M M κ2 1 =(σ m )2 ε2p (4) ≤ (σ m )2 . Inequalities (1) golds by (5.6); (2) holds by (iii) in Lemma 5.3, and (5.6) ; (3) holds by (5.3b); (4) holds since 0 < εp < 1. ˜ • z = εx ∈ [A + z1 , ∞). So x ≥ max{B/ε, A}: ψ 2 (εx) H[ϕm ](x, τ ) m2 (τ ) 2 1 (1 + η) 1+η (1) ≤ m2 (τ ) 1 1+η =m2 (τ ) (2) (σ m )2 ≤ κ1 1 1+η (3) ≤ (σ m )2 (x, τ ). 
111 Inequalities (1) holds by (ii) in Lemma 5.3, and (5.7); (2) holds by (5.2); (3) holds because both κ and 1 + η are greater than one. In summary, σ[ϕm ](x, τ ) ≤ σ m (x, τ ) ≤ σ(x, τ ) in ΩT for all m ∈ N. (5.9) Step 2: Comparison between implied volatilities. Let v m (x, τ ) = u(x, (ϕm )2 τ ), where u is the solution to (3.1). One easily verifies, v m satisfied equation (1.8) with σ 2 replaced by σ 2 [ϕm ]. Moreover, 0 < v m (x, τ ) < ex on ΩT . On the other hand, Theorem 2.21 says v = u(x, τ ϕ2 ) is the unique solution to equation (1.8), which has no more than exponential growth. Let ∆ = v m − v, then ∆ satisfies 1 ∆τ − σ[ϕm ]2 (x, τ )(∆xx − ∆x ) = 2 ∆(x, 0) = 0 σ[ϕm ] 2 − 1 vτ in ΩT σ in R. Since 1. σ[ϕm ]2 (x, τ ) is continuous and non-negative on Ω; 1 2. σ 2 [ϕm ](x, τ ) ≤ m2 · M · 1+η in Ω.3 4 σ[ϕm ] 2 3. − 1 vτ < 0 in Ω . σ 3 Because (a) equation (5.8), σ[ϕm ]2 (x, τ ) = 112 m2 ψ 2 (εx) , H[ϕm ](x, τ ) (5.10a) (5.10b) We can apply the Maximum Principle ([24] or Theorem 2.11) to ∆, and conclude ∆ = vm − v ≤ 0 in ΩT . Furthermore, note that v m = u(x, τ (ϕm )2 ), v = u(x, τ ϕ2 ), and uτ (x, ·) > 0 in ΩT , we have ϕm ≤ ϕ in ΩT . This implies 1 m = lim ϕm (x, τ ) ≤ lim inf ϕ(x, τ ). x→+∞ x→+∞ 1+η Sending η → 0, we get lim inf ϕ(x, τ ) ≥ m for all m ∈ N. (5.11) x→+∞ The proof for the case σ− = ∞ follows the same argument, using Lemma 5.4 and ˜ auxiliary function ψ(z) = ψ(−z). II. The asymptotic of ϕ if σ+ = 0, σ− < ∞, or vise-versa. (b) equation (5.6) 0< (1 − p)2 1 4 := ≤ H[ϕm ](x, τ ) ≤ <∞ 2 M M and 1 (c) item (ii) in Lemma 5.3, ψ(z) ≤ 1+η = lim ψ(z) ∀z ∈ R. z→+∞ 113 in Ω, Without loss of generality, we give the proof for the case where σ+ = 0. It is sufficient to show lim inf ϕ(x, τ ) ≤ 1/n uniformly in τ , for all n ∈ N. To this end, we need the x→+∞ following auxiliary function[5]. The other case can be proved using the same symmetric argument as we did for item I. Lemma 5.5. Given A > 1, η ∈ (0, 0.1), and κ > 1/(1 − η), there exists ψ ∈ C 2 (R) satisfying the following properties: (i) ψ ∈ W 2,0,∞ (R) with ψ W 2,0,∞ independent of A. 1 (ii) ψ(z) ≥ 1−η = lim ψ(z) ∀z ∈ R, z→+∞ (iii) ψ(z) ≥ 2κ ∀z ∈ (−∞, A). (iv) (v) zψ (z) ≤ 1/2 ∀z ∈ R, ψ(z) zψ (z) → 0, ψ (z) → 0, ψ (z) → 0 as z → +∞. ψ(z) Proof for item II: We fix an n ∈ N, and show that lim ϕ(x, τ ) ≤ 1/n, uniformly in τ . By assumption x→+∞ in Condition H0, σn (x, τ ) → 1/n as x → +∞, uniformly in τ ∈ [0, T ]. Therefore, given ˜ η ∈ (0, 0.1), there exists A such that 1 σn (x, τ ) ≥ n 1−η ˜ ∀(x, τ ) ∈ [A, +∞) × [0, T ]. (5.12) In addition, from Condition H0, 1 1/n ≤ ≤ κ1 κ1 σn 114 for (x, τ ) ∈ ΩT . (5.13) Now, we set ϕn (x, τ ) = 1 ψ(εx) n (5.14) where ψ is given in Lemma 5.5, the values of η, κ = κ1 being defined as above, and A, ε to be found. A simple computation yields H[ϕn ](x, τ ) = ψ 1 − εx (εx) ψ 2 1 2 2 1 1 + τ 2 ε2 ψ ψ (εx) − ε2 τ 2 4 ψ ψ (εx) 4 n n (5.15) Clearly, by (i) and (iv) in Lemma 5.5, we can choose 0 < ε = ε( ψ W 2,∞ , T, 1/n) < 1, independent of A such that 1 ≤ H[ϕn ](x, τ ) ≤ 2 < ∞ 4 in Ω. (5.16) By (v) in Lemma 5.5, there is B > 0, for which εx ≥ B implies H[ϕn ] ≤ 1 1−η for τ ∈ [0, T ]. (5.17) ˜ Setting A = max{B, εA}, we see that (5.16) holds for all z = εx ∈ R, τ ∈ (0, T ), and (5.17) holds for z = εx ≥ A, τ ∈ [0, T ]. Next, we compute the local volatility (3.14) associated with ϕ, i.e., 2 σ[ϕn ]2 (x, τ ) 1 ψ (εx) = 2 . n H[ψ n ](x, τ ) 115 (5.18) Similar, but much simpler than the estimates for item I, we have σ[ϕn ](x, τ ) ≤ σn (x, τ ) ≤ σ(x, τ ) in ΩT for all n > 0. 
(5.19) Step 2: Comparison between implied volatilities. Similar as item I, we have ϕn ≥ ϕ in ΩT , f oralln ∈ N. This implies 1 1 = lim ϕ (x, τ ) ≥ lim supϕ(x, τ ). 1 − η n x→+∞ n x→+∞ Sending η → 0, lim sup ϕ(x, τ ) ≤ x→+∞ Proof for item III: See [5]. 116 1 n for all n. (5.20) Chapter 6 Numerical Implementation 6.1 The Finite Difference Method 2,1,p Recall, for any ψ ∈ WLoc (ΩT ), denote by H the quasilinear operator H[ψ] ≡ H(x, τ, ψ, Dψ, D2 ψ) = (1 − x 1 ψx 2 2 ) + τ ψψxx − τ 2 ψ 2 ψx . ψ 4 The equation which the implied volatility ϕ satisfies in Ω is (τ ϕ2 )τ − σ 2 (x, τ )H[ϕ] = 0, with its asymptotic 1 1 ds = . τ →0 ϕ(x, τ ) 0 σ(sx, 0) lim Note that equation (3.7) is singular at τ = 0. To overcome this obstacle, we numerically solve for another variable that is one-to-one w.r.t. ϕ > 0, and has no singularity in Ω. This 117 intermediate variable we are considering is R = τ ϕ2 , which satisfies the Cauchy problem 2 σ2 x Rx 2 σ 2 Rx σ 2 2 Rxx = σ 2 (1 − ) − − Rx 2 2 R 4 R 16 (6.1a) R(x, 0) = 0. Rτ − (6.1b) We claim it is not singular in Ω, for the following reasons. i) R > 0 on ΩT . ii) When τ = 0, R = 0. However, (6.1) is non-sigular if we need to find the limit of Rx has a finite limit. To this end, R Rx as τ → 0. R Rx ϕx = 2 lim τ →0 R τ →0 ϕ lim =2 =2 d(x)−x/σ(x,0) d2 (x) x/d(x) x ds x 0 σ(s,0) − σ(x,0) , x ds x 0 σ(s,0) x where d(x) = ds . Since σ(x, 0) is strictly positive and finite on R, we have 0 σ(s, 0) x ds (a) 0 < |x 0 σ(s,0) | < ∞, and x ds x (b) | 0 σ(s,0) − σ(x,0) | < ∞ 118 for x ∈ R\{0}. Our task now boils down to show x ds x 0 σ(s,0) − σ(x,0) lim x ds x→0 x 0 σ(s,0) < ∞. In fact, x ds x 0 σ(s,0) − σ(x,0) lim x ds x→0 x 0 σ(s,0) x d(x) − σ(x,0) = lim xd(x) x→0 1 σ(x,0) = lim x→0 − 1 σ(x,0) xσ (x,0) − 2 σ (x,0) d(x) + xd (x) σ (x,0) σ 2 (x,0) = lim 1 x→0 d(x) + x σ(x,0) σ (0,0) σ 2 (0,0) = d(x) lim x→0 x = 1 + σ(x,0) σ (0,0) σ 2 (0,0) 1 + σ(x,0) 1 σ (0, 0) = < ∞. 2 σ(0, 0) 1 σ(x,0) Hence, equation (6.1) is well-defined and has no singularity in Ω. Next, we solve equation (6.1) on [xL , xU ] × [0, T ] using finite-difference method. Let Ri,j be the approximation of R(xi , τj ), where xL = x0 < x1 < · · · < xn = xU , and 0 = τ0 << τ1 · · · < τm = T . We approximate the derivatives R(xi , τj+1 )τ , and R(xi , τj+1 )xx at (x0 , τj+1 ) by R0,j+1 − R0,j , ∆τ R0,j+1 − 2R1,j+1 + R2,j+1 (Rxx )0,j+1 = . ∆x2 (Rτ )0,j+1 = 119 The derivatives at (xn , τj+1 ) are handled in the same way. Now, we are readily writing down our scheme 2 σ2 σ 2 2 σ 4 Rx 2 1 − x Rx Rτ − Rxx = σ , − Rx − 2 2 R 16 4 R i.e., LRi,j+1 = G(Ri,j ). (6.2a) (6.2b) In matrix form, it is  (∆x)2  ∆τ σ 2  0,j                − 1 2  ··· 0 (∆x)2 2 ∆τ σ1,j 1 −2 1 1 −2 − 1 −1 2 2 ··· 0 . . . ..  2 R0,j ∆τ .. . .. . −1 2 (∆x)2 2 ∆τ σi,j 1 −2 . . .. .. 1 ∆x + G(R0,j )  σ  0,j   . .  .   2 R  ∆x i,j = σ ∆τ + G(Ri,j )  i,j   . .  .    ∆x 2 Rn,j σ ∆τ + G(Rn,j ) . −1 2 −1 2 (∆x)2 2 ∆τ σn,j      R0,j+1    .   .   .        R   i,j+1     .   .   .       Rn,j+1 1 −2          .        (6.3) n,j 6.2 Approximations to the Implied Volatility ϕ Once we found the asymptotic of ϕ as τ → 0, we can get further terms of ϕ by Taylor expanding ϕ in powers of τ . This gives simple approximations to ϕ. The goal for this 120 section is to find functions ϕ0 (x) and ϕ1 (x), so that ϕ(x, τ ) = ϕ0 (x, τ )[1 + ϕ1 (x, τ )τ + O(τ 2 )] (6.4) satisfies (3.7), i.e., (τ ϕ2 )τ = 1−x ϕx 2 + σ 2 τ ϕϕxx + O(τ 2 ) ϕ in ΩT , and ϕ(x, 0) = ϕ0 (x). 
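Before carrying out this expansion, here is a minimal sketch of one way to march (6.1) forward in time in the spirit of the scheme (6.2)–(6.3): the diffusion term is handled implicitly through a tridiagonal solve, and the nonlinear right-hand side is taken explicitly from the previous time level. The grid, the example σ (frozen in τ), the small-time initialization R ≈ τ₀(ϕ⁰)², and the crude treatment of the two boundary rows are illustrative simplifications, not the exact matrix (6.3).

import numpy as np
from scipy.linalg import solve_banded
from scipy.integrate import cumulative_trapezoid

def rhs(R, x, dx, sig2):
    # Explicit part of (6.1):
    # sigma^2 (1 - x R_x/(2R))^2 - sigma^2 R_x^2/(4R) - (sigma^2/16) R_x^2.
    Rx = np.gradient(R, dx)
    return sig2 * (1.0 - x * Rx / (2.0 * R)) ** 2 - sig2 * Rx ** 2 / (4.0 * R) - sig2 * Rx ** 2 / 16.0

def step(R, x, dx, dtau, sig2):
    # Semi-implicit update: (R_new - R)/dtau - (sigma^2/2) (R_new)_xx = rhs(R).
    n = len(x)
    G = R + dtau * rhs(R, x, dx, sig2)
    lam = 0.5 * sig2 * dtau / dx ** 2
    ab = np.zeros((3, n))
    ab[0, 1:] = -lam[:-1]                 # super-diagonal
    ab[1, :] = 1.0 + 2.0 * lam            # main diagonal
    ab[2, :-1] = -lam[1:]                 # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1.0            # crude closure: keep the explicit update at the ends
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, G)

x = np.linspace(-2.0, 2.0, 401); dx = x[1] - x[0]
sig2 = (0.2 + 0.05 * np.tanh(x)) ** 2     # illustrative sigma^2(x), frozen in tau
d = cumulative_trapezoid(1.0 / np.sqrt(sig2), x, initial=0.0)
d -= np.interp(0.0, x, d)                 # d(x) = int_0^x ds / sigma(s, 0)
phi0 = np.where(np.abs(x) > 1e-12, x / np.where(np.abs(d) > 1e-14, d, np.inf), np.sqrt(sig2))
R = 1e-4 * phi0 ** 2                      # start from R ~ tau0 * (phi^0)^2 at tau0 = 1e-4
for _ in range(500):                      # march to roughly tau = 0.5 with dtau = 1e-3
    R = step(R, x, dx, dtau=1e-3, sig2=sig2)
phi = np.sqrt(R / (1e-4 + 500 * 1e-3))    # recover the implied volatility phi = sqrt(R / tau)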
6.2 Approximations to the Implied Volatility ϕ

Once the asymptotic of ϕ as τ → 0 is known, we can obtain further terms of ϕ by expanding ϕ in powers of τ. This gives simple approximations to ϕ. The goal of this section is to find functions ϕ₀(x) and ϕ₁(x) such that
\[
\varphi(x,\tau) = \varphi_0(x)\bigl[1 + \varphi_1(x)\,\tau + O(\tau^2)\bigr]
\tag{6.4}
\]
satisfies (3.7), i.e.,
\[
(\tau\varphi^2)_\tau = \sigma^2\Bigl(1 - x\,\frac{\varphi_x}{\varphi}\Bigr)^2 + \sigma^2\tau\,\varphi\,\varphi_{xx} + O(\tau^2)
\quad\text{in }\Omega_T,
\]
and ϕ(x,0) = ϕ₀(x). Recall that
\[
\varphi_0(x) = \frac{x}{d(x)},
\qquad\text{where } d(x) = \int_0^x \frac{ds}{\sigma(s,0)} .
\tag{6.6}
\]
So only ϕ₁ remains to be determined. Matching the terms involving τ, we obtain
\[
O(\tau^1):\quad
6\varphi_0^4\,\varphi_1
= 2\sigma^2\bigl[\varphi_0 - x\varphi_0'\bigr]\bigl[\varphi_0\varphi_1 - x\varphi_0'\varphi_1 - x\varphi_0\varphi_1'\bigr]
+ \sigma^2\varphi_0^3\,\varphi_0'' .
\tag{6.7}
\]
Plugging
\[
d'(x) = \frac{1}{\sigma(x,0)},\qquad
\varphi_0(x) = \frac{x}{d(x)},\qquad
\varphi_0'(x) = \frac{1}{d(x)} - \frac{x\,d'(x)}{d^2(x)},
\]
and
\[
\varphi_0''(x) = -\frac{2d'(x)}{d^2(x)} + \frac{x\,\sigma'(x,0)\,d'^2(x)}{d^2(x)} + \frac{2x\,d'^2(x)}{d^3(x)}
\]
into (6.7), we obtain an equivalent equation satisfied by ϕ₁:
\[
2\varphi_1(x) + \sigma(x,0)\,d(x)\,\varphi_1'(x)
= -\frac{\sigma(x,0)}{x\,d(x)} + \frac{\sigma'(x,0)}{2d(x)} + \frac{1}{d^2(x)} .
\tag{6.8}
\]

Claim 6.1. The solution to (6.8) is
\[
\varphi_1(x) = -\frac{1}{d^2(x)}\left[\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}} - \ln\sqrt{\sigma(0,0)}\right].
\]

Solution: Multiplying both sides of (6.8) by \(\frac{1}{\sigma(x,0)\,d(x)} = \bigl(\ln d(x)\bigr)'\), we get
\[
\frac{2d'(x)}{d(x)}\varphi_1(x) + \varphi_1'(x)
= -\frac{1}{d^2(x)}\left[\frac{1}{x} - \frac{\sigma'(x,0)}{2\sigma(x,0)} - \frac{d'(x)}{d(x)}\right],
\]
i.e.,
\[
\frac{2d'(x)}{d(x)}\varphi_1(x) + \varphi_1'(x)
= -\frac{1}{d^2(x)}\left[(\ln x)' - \frac{1}{2}\bigl(\ln\sigma(x,0)\bigr)' - \bigl(\ln d(x)\bigr)'\right]
= -\frac{1}{d^2(x)}\left[\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}}\right]' .
\]
Now multiply both sides of the above equation by \(d^2(x) = \exp\bigl(2\int(\ln d)'\,d\xi\bigr)\); then (6.8) simplifies further to
\[
2d(x)d'(x)\varphi_1(x) + d^2(x)\varphi_1'(x)
= -\left[\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}}\right]',
\qquad\text{i.e.}\qquad
\bigl(d^2(x)\varphi_1(x)\bigr)' = -\left[\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}}\right]' .
\]
Therefore
\[
\varphi_1(x) = -\frac{1}{d^2(x)}\left[\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}} + C\right]
= -\frac{1}{d^2(x)}\left[\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}} - \ln\sqrt{\sigma(0,0)}\right].
\tag{6.9}
\]
We choose the constant C to be \(-\ln\sqrt{\sigma(0,0)}\), so that ϕ₁(x) does not blow up as x approaches zero.

To summarize this subsection, we have the following approximation to the implied volatility ϕ(x,τ) when the time to expiry is small:
\[
\varphi(x,\tau)
= \frac{x}{d(x)}\left[1 - \frac{\tau}{d^2(x)}\left(\ln\frac{x}{d(x)\sqrt{\sigma(x,0)}} - \ln\sqrt{\sigma(0,0)}\right) + O(\tau^2)\right],
\tag{6.10}
\]
where d(x) = ∫₀ˣ ds/σ(s,0).
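As a quick illustration, the two-term expansion (6.10) is cheap to evaluate once d(x) is computed by quadrature. The sketch below does this for a hypothetical σ(x, 0); the function sigma0 and all parameter values are illustrative assumptions rather than anything taken from the dissertation, and near x = 0 the sketch simply falls back to the leading-order value ϕ₀(0) = σ(0,0).

```python
import numpy as np

def sigma0(x):
    """Hypothetical local volatility sigma(x, 0); any smooth positive function will do."""
    return 0.25 + 0.10 * np.tanh(x)

def d(x, n_quad=400):
    """d(x) = int_0^x ds / sigma(s, 0), by the trapezoidal rule (signed for x < 0)."""
    s = np.linspace(0.0, x, n_quad)
    f = 1.0 / sigma0(s)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))

def implied_vol_two_term(x, tau):
    """Two-term expansion (6.10): phi ~ phi0 * (1 + tau*phi1), with phi0 = x/d(x)."""
    if abs(x) < 1e-8:
        return float(sigma0(0.0))        # phi0(0) = sigma(0,0); drop the O(tau) term here
    dx = d(x)
    phi0 = x / dx
    # phi1(x) = -(1/d^2) * [ ln( x / (d*sqrt(sigma(x,0))) ) - ln sqrt(sigma(0,0)) ]
    phi1 = -(np.log(x / (dx * np.sqrt(sigma0(x)))) - np.log(np.sqrt(sigma0(0.0)))) / dx**2
    return phi0 * (1.0 + tau * phi1)

# short-maturity smile on a few log-moneyness points
for xi in np.linspace(-1.5, 1.5, 7):
    print(f"x = {xi:+.2f}   phi ~ {implied_vol_two_term(xi, 0.1):.4f}")
```

This is the kind of approximation compared in Section 6.3 below, with the benchmark supplied by the finite-difference solution of (6.1).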
6.3 Numerical Example

We examine the accuracy of the asymptotic formulae (6.6) and (6.10) by comparing them with benchmark values computed by solving (6.1) on a refined finite-difference grid. Moreover, we illustrate the gain in accuracy provided by the two-term expansion (6.10) in Figures 6.1 and 6.2. We observe satisfactory agreement between the asymptotic formula and the numerically computed smile.

Figure 6.1: Implied Volatility (For interpretation of the references to color in this and other figures, the reader is referred to the electronic version of this dissertation.)

Figure 6.2: Implied Volatility CEV

Chapter 7

The Calibration Problem

In this chapter we follow the ideas in [5]. One problem relevant in practice is the calibration problem: one wants to recover the values of the parameters of the model from market data. The asymptotics in Theorem 4.21 ii) exhibit a linear relation between the inverses of the local and implied volatilities. This leads us to propose the following penalized functional for the calibration problem:
\[
J^\varepsilon(\sigma)
= \varepsilon\int \Bigl|\Bigl(\frac{1}{\sigma}\Bigr)_x\Bigr|^2 dx
+ \sum_{i,j}\Bigl(\frac{1}{\varphi} - \frac{1}{\varphi^*}\Bigr)^2(x_i,\tau_j),
\tag{7.1}
\]
where the ϕ* are market implied volatilities (obtained by inverting the Black–Scholes formula), and the functional is to be minimized over a suitable function space. We suspect that this minimization problem is well posed, at least for short times to maturity τ_j; indeed, in this case J^ε is close to a convex functional. As a matter of fact, we shall prove this property in the limiting case, that is, τ_j ≡ τ → 0. Specifically, we denote by ξ(x) = σ(x,0)⁻¹ the inverse of the local volatility and by
\[
\zeta(x) = \frac{1}{x}\int_0^x \xi(y)\,dy
\]
the inverse of the implied volatility in the limit τ → 0 (Theorem 4.21). We can then express the functional in terms of ξ and ζ and write (abusing notation)
\[
J^\varepsilon(\xi) = \varepsilon\int \xi'^2\,dx + \sum_i\bigl(\zeta(x_i) - \zeta^*(x_i)\bigr)^2,
\tag{7.2}
\]
where ζ* = 1/ϕ*.

We assume that these volatilities are consistent, i.e., that there exists σ₀(x,0) for which the solution to (3.6) and (3.7) with σ(x) ≡ σ₀(x,0) asymptotically replicates the market prices, i.e., such that
\[
\lim_{\tau\to 0}\varphi(x_i,\tau) = \varphi^*(x_i) \equiv \frac{1}{\zeta^*(x_i)} .
\]
It follows that \(J^\varepsilon(\xi_0)\big|_{\varepsilon=0} = 0\), with \(\xi_0 = \sigma_0^{-1}\). This means that, by assumption, we have a solution to the exact asymptotic calibration problem. As a consequence, there are in fact infinitely many of them, as can easily be seen from the argument in the proof of Theorem 7.1 below. The whole point is to choose one of these solutions in a stable way. This is the question that the following result addresses.

Theorem 7.1.
i) For any ε > 0, there exists a unique solution of the minimization problem
\[
\inf_{\xi\in H^1(\mathbb{R})} J^\varepsilon(\xi),
\tag{7.3}
\]
denoted by ξ_ε.
ii) As ε → 0, ξ_ε converges uniformly in R to a solution ξ̂ of the exact asymptotic calibration problem, i.e.,
\[
\hat\zeta(x_i,0) \equiv \int_0^1 \hat\xi(s x_i)\,ds = \zeta^*(x_i) .
\tag{7.4}
\]

Proof. i) The existence of the infimum of J^ε follows from the best approximation property [21] of a Hilbert space: "If C is a non-empty closed convex subset of a Hilbert space H and x a point in H, there exists a unique point y ∈ C which minimizes the distance between x and points in C."

For the uniqueness of the minimizer, let us first compute the Euler equation. Note that ξ_ε being a solution of the minimization problem (7.3) implies
\[
\frac{\partial J^\varepsilon(\xi_\varepsilon + hv)}{\partial h}\bigg|_{h=0} = 0
\qquad\forall\, v\in H^1(\mathbb{R}).
\]
That is,
\[
\frac{\partial J^\varepsilon(\xi_\varepsilon + hv)}{\partial h}\bigg|_{h=0}
= 2\varepsilon\int_{\mathbb{R}} \xi_\varepsilon' v'\,dy
+ 2\sum_i\Bigl(\frac{1}{x_i}\int_0^{x_i}(\xi_\varepsilon + hv)\,dy - \zeta^*(x_i)\Bigr)\frac{1}{x_i}\int_0^{x_i} v\,dy\;\bigg|_{h=0}
\]
\[
\overset{(1)}{=} 2\varepsilon\Bigl(\xi_\varepsilon' v\big|_{-\infty}^{+\infty} - \int_{\mathbb{R}}\xi_\varepsilon'' v\,dy\Bigr)
+ 2\sum_i\bigl(\zeta_\varepsilon(x_i) - \zeta^*(x_i)\bigr)\frac{1}{x_i}\int_0^{x_i} v\,dy
\]
\[
\overset{(2)}{=} 2\int_{\mathbb{R}}\Bigl[-\varepsilon\,\xi_\varepsilon''
+ \sum_i\bigl(\zeta_\varepsilon(x_i) - \zeta^*(x_i)\bigr)\frac{1}{x_i}\,\mathbf{1}_{(0,x_i)}\Bigr]v\,dy
= 0
\qquad\forall\, v\in H^1(\mathbb{R}).
\]
Equality (1) holds because \(\zeta(x) = \frac{1}{x}\int_0^x\xi(y)\,dy\), and (2) holds since ξ_ε ∈ H¹(R), so the boundary term vanishes. Therefore the Euler equation turns out to be
\[
-\varepsilon\,\xi_\varepsilon''
+ \sum_i \frac{1}{x_i}\bigl(\zeta_\varepsilon(x_i) - \zeta_i^*\bigr)\mathbf{1}_{(0,x_i)} = 0 .
\tag{7.5}
\]
Next, we show uniqueness of the solution to (7.5). Take ξ_ε and ξ̃_ε to be two solutions of (7.5), with corresponding quantities ζ_ε and ζ̃_ε, respectively. We take the difference of the two corresponding Euler equations, multiply both sides by ξ_ε − ξ̃_ε, and integrate over R:
\[
\int_{\mathbb{R}}\Bigl[-\varepsilon(\xi_\varepsilon - \tilde\xi_\varepsilon)''
+ \sum_i\frac{1}{x_i}\bigl(\zeta_\varepsilon(x_i) - \tilde\zeta_\varepsilon(x_i)\bigr)\mathbf{1}_{(0,x_i)}\Bigr](\xi_\varepsilon - \tilde\xi_\varepsilon)\,dy = 0
\;\Longrightarrow\;
\varepsilon\int_{\mathbb{R}}\bigl((\xi_\varepsilon - \tilde\xi_\varepsilon)'\bigr)^2 dy
+ \sum_i\bigl(\zeta_\varepsilon(x_i) - \tilde\zeta_\varepsilon(x_i)\bigr)^2 = 0 .
\]
This implies (ξ_ε − ξ̃_ε)' = 0 a.e. and ζ_ε(x_i) = ζ̃_ε(x_i) for every i; hence ξ_ε − ξ̃_ε is a constant, and since ζ_ε(x_i) − ζ̃_ε(x_i) is the average of that constant over (0, x_i), the constant is zero. Therefore ξ_ε(x) = ξ̃_ε(x) for all x ∈ R.

ii) We now show the convergence of ξ_ε as ε → 0. Multiplying both sides of (7.5) by ξ_ε and integrating over R gives
\[
\varepsilon\int_{\mathbb{R}}\xi_\varepsilon'^2
+ \sum_i\bigl(\zeta_\varepsilon(x_i) - \zeta_i^*\bigr)\zeta_\varepsilon(x_i) = 0,
\quad\text{i.e.,}\quad
\varepsilon\int_{\mathbb{R}}\xi_\varepsilon'^2 + \sum_i\zeta_\varepsilon(x_i)^2
= \sum_i\zeta_\varepsilon(x_i)\zeta_i^* .
\tag{7.6}
\]
Therefore
\[
\sum_i\zeta_\varepsilon(x_i)^2 \le \sum_i\zeta_\varepsilon(x_i)\zeta_i^* .
\tag{$*$}
\]
On the other hand,
\[
\sum_i\bigl(\zeta_\varepsilon(x_i) - \zeta_i^*\bigr)^2
= \sum_i\zeta_i^{*2} - 2\sum_i\zeta_\varepsilon(x_i)\zeta_i^* + \sum_i\zeta_\varepsilon(x_i)^2
\overset{(*)}{\le} \sum_i\zeta_i^{*2} - \sum_i\zeta_\varepsilon(x_i)\zeta_i^* .
\tag{7.7}
\]
Hence
\[
\inf_{\xi\in H^1(\mathbb{R})} J^\varepsilon(\xi)
= \varepsilon\int_{\mathbb{R}}\xi_\varepsilon'^2 + \sum_i\bigl(\zeta_\varepsilon(x_i) - \zeta^*(x_i)\bigr)^2
\overset{(7.6)}{=} \sum_i\zeta_\varepsilon(x_i)\zeta_i^* - \sum_i\zeta_\varepsilon(x_i)^2
+ \sum_i\bigl(\zeta_\varepsilon(x_i) - \zeta^*(x_i)\bigr)^2
\overset{(7.7)}{\le} \sum_i\zeta_i^{*2} - \sum_i\zeta_\varepsilon(x_i)^2
\le \sum_i\zeta_i^{*2} .
\]
Passing to the limit as ε → 0 in (7.2), one sees that the limit of ξ_ε solves the exact asymptotic calibration problem. □

Chapter 8

Comparing Relative Pricing of Options with Stochastic Volatility

Ledoit, Santa-Clara and Yan [34] modeled the implied volatilities of call options of all maturities and strike prices as a joint diffusion with the stock price S(t). They assumed that the stock price follows \(dS(t) = \bigl(\mu_S(t)\,dt + \sigma_{S1}\,dW_1(t)\bigr)\)
S(t) The implied volatilities V of any fixed time to maturity and moneyness X ≡ S(t)/K, where K is the strike price have dynamics given by dV (t, T, X) = µV (t, T, X)dt + σV1 (t, T, X)dW1 (t) + σV2 (t, T, X)dW2 (t), where W2 is a Brownian motion orthogonal to W1 . In order for no arbitrage opportunities to exist in trading the stock and its options, the drift of the processes followed by the implied volatilities is constrained in such a way that 131 it is fully characterized by the volatilities of the implied volatilities. The authors equated the drift of the options that we obtain in this manner with the short term interest rate and obtain a constraint on the drift of the implied volatilities. In conclusion, the authors derived the risk-adjusted dynamics of the implied volatilities. They also showed that the Black-Scholes implied volatilities of at-the-money options converge to the underlying asset’s instantaneous (stochastic) volatility as the time to maturity goes to zero. This asymptotic agrees with ours if x dy x = σ(x, 0) 0 σ(y, 0) One possibility is that σ(y, 0) ≡ σ for y ∈ (0, x). 132 ∀x ∈ R. APPENDICIES 133 Appendix A Derivation of Equation (1.7) ˜ Proof. We notice under the risk-neutral measure P, the underlying security satisfies dSt ˜ = rdt + Σ(t, St )dWt , St ˜ ˜ where W is a Brownian motion under P. Furthermore, at time t, the value of a call option is the discounted conditional expectation of the pay-off function (ST − K)+ , i.e., C(St , t) = ˜ e−r(T −t) ESt ,t [(ST − K)+ ]. Differentiating er(T −t) C(St , t), and let τ = T − t, one gets: d(er(T −t) C(St , t)) 1 =erτ [−rC(St , t) + Ct (St , t)]dt + erτ CS (St , t)dSt + erτ CSS (St , t)dSt dSt 2 1 2 =erτ −rC(St , t) + Ct (St , t) + rSt CS (St , t) + Σ2 (St , t)St CSS (St , t) dt 2 ˜ + erτ CS (St , t)St Σ(St , t)dWt . 134 Since er(T −t) C(St , t) is a martingale, the drift is zero. That is, 1 2 erτ −rC(St , t) + Ct (St , t) + rSt CS (St , t) + Σ2 (St , t)St CSS (St , t) = 0. 2 Notice that erτ > 0, and the above equation holds for every possible path of St . Therefore, one can replace (St , t) by (S, t) ∈ R+ × (0, T ), and get equation (1.7). 135 Appendix B Proof for Lemma 5.3. 13p + 3 1/p 2 , z0 = − , and 3 + 3p (1 − p)2 2 4 1+η z1 = 1 − z0 . For any κ ≥ 1 > √ , and 0 < εp < min √ ,1 , 1+p 2πp M √ 2πεp εp p εp √ ,c= define Y = √ , and Z0 < 0 such that N (Z0 ) ≤ √ η . p Mκ 2 Mκ − ε (1 + η) M κ Given p ∈ (0, 1/2), η ∈ (0, 0.1), let M = The function ψ(z) is defined as:    c · 1   |z|p     ψ(z) = c · g(z)/|z0 |p+3       ˜ ψ(z − z1 )  z < z0 z0 ≤ z ≤ z1 (B.1) z1 < z, where g(z) =    g (z) =  1 p(p+1) 2 (z   g (z) =  2 p(p+1)2 (z−z1 )3 12 − z0 )2 |z0 | − p(p+1)2 (z 6 + |z0 |p+3 , z +z − z0 )3 + p(z − z0 )|z0 |2 + |z0 |3 z0 ≤ z < 0 2 1 z0 +z1 2 ≤ z ≤ z1 (B.2) 136 and ˜ ψ(z) = 1 1+η εp 1− √ Mκ εp z N Y ln( ) + Z0 + √ A Mκ , d 2 1 N (d) = √ e−y /2 dy. 2π −∞ Note that M ≥ 2, z0 < 0, z1 > 0 and 0 < Y < 1 for all ε defined above. Before we check all five conditions in lemma 5.3, we need to show: Proposition B.1. 1.) ψ(z) ∈ C 2 (R). 2.) It is non-decreasing on R, and 3.) ψ(z) ≤ 1 = lim ψ(z). 1 + η z→∞ Proof. 1.) Notice ψ(z) is piece wise defined, and smooth in each piece. To show ψ(z) ∈ C 2 (R), we only need to check the smoothness at points where different pieces connect. 
Simple calculations give: ψ (z) = c · p|z|−p−1 ψ (z) = c · p(p + 1)|z|−p−2 z < z0 z < z0 ;    g (z) = p(p + 1)(z − z )|z | − p(p+1)2 (z − z )2 + p|z |2 , z ≤ z < z0 +z1  1 0 0 0 0 0 2 2 g (z) =   g (z) = p(p+1)2 (z−z1 )2 , z0 +z1 ≤ z ≤ z ;  2 1 4 2 137    g (z) = p(p + 1)|z | − p(p + 1)2 (z − z ), z ≤ z < z0 +z1  1 0 0 0 2 g (z) =   z0 +z1 g (z) = p(p+1)2 (z − z ),  2 ≤ z ≤ z1 ; 1 2 2 and p 1 1 ψ (z) = 1+η √1 1 − √ε exp{−Λ2 /2}Y z−z 2π Mκ 1 ψ (z) = −C1 exp{−Λ2 /2}(1 + Y Λ)Y z > z1 1 (z−z1 )2 z > z1 , where C1 = 1 1 √ 1 + η 2π εp 1− √ Mκ and Λ(z) = Y ln z − z1 + Z0 . A Note that the constant C1 is between 0 and 1, for all A and ε. To show ψ is C 2 at z0 and (z0 + z1 )/2, one simply computes left and right limits of ψ and its derivatives up to order two, and show they agree at z0 and (z0 + z1 )/2. To see ψ is C 2 at z1 , one needs to show ˜ lim ψ (z) = 0 and z→0+ 138 ˜ lim ψ (z) = 0. z→0+ (B.3) The key is to show exp − Λ2 2 = o(z N ) for any N ∈ N as z → 0+ . Indeed, z + Z0 )2 /2} A Z2 z z 2 = exp −Y 2 ln /2 − 0 − Y ln Z0 A 2 A exp{−(Y ln z 2 z < exp −Y 2 ln /2 − Y ln Z0 A A   2 Y   z A 2  = exp ln · ln · exp ln  A  z   z = A ln A z z A ln A z = and lim ln z→0+ A z Y2 2 Y2 2 · A Y Z0 z A Y Z0 z Y2 2 −Y Z 0 , = +∞. Therefore, lim exp{−(Y ln z→0+ z 1 + Z0 )2 /2} · N = 0 for A z any given large N . These complete the proof for the smoothness of ψ. 2.) The non-decreasing property of ψ is a simple evaluation of its first derivative. For z < z0 , or z ≥ (z0 + z1 )/2, it is clear, ψ (z) ≥ 0. It is left to show g1 (z) ≥ 0 for 139 z0 ≤ z < (z0 + z1 )/2. Recall g1 (z) p(p + 1)2 =p(p + 1)(z − z0 )|z0 | − (z − z0 )2 + p |z0 |2 2 2 (p + 1)(z − z0 ) 3 =p + |z0 | − p(p + 1)2 (z − z0 )2 2 4 √ √ p = (1 + 3)(p + 1)(z − z0 ) + 2 |z0 | (1 − 3)(p + 1)(z − z0 ) + 2 |z0 | 4 z +z 2 for z0 ≤ z < 0 2 1 . Moreover, |z − z0 | ≤ 1+p |z0 | implies (1 − √ √ √ 2 3)(p + 1)(z − z0 ) ≥ −|1 − 3|(p + 1) |z0 | = −2( 3 − 1) |z0 |. p+1 Therefore, both factors of the last equality are positive and consequently, g1 (z) ≥ 0 for z0 ≤ z < (z0 + z1 )/2. 1 3.) Consequently, ψ ≤ lim ψ = 1+η . z→∞ Proposition B.2. ψ(z) ∈ 0, 1 1+η in R (B.4) Proof. It is simply because of the non-decreasing property of ψ and the facts lim ψ(z) = 0, z→−∞ lim z→+∞ 1 and ψ(z) = 1+η . Proposition B.3. |ψ (z)| has uniform upper bound on R for all A, ε1 . 1 defined in Lemma 5.3 140 Proof. 1. or z < z0 < 0 ψ (z) =c · p · |z|−p−1 ≤c · p · |z0 |−p−1 , where c = εp √ (1+η) M κ < 1 √ , (1+η) M κ and z0 = − 13p + 3 1/p both have bounds in3 + 3p dependent of A and ε. 2. For z0 ≤ z ≤ z1 ψ (z) = c · P2 (z), where c = εp √ (1+η) M κ < 1 √ (1+η) M κ and P2 (z) is a quadratic polynomial of which the coefficients depend only on p and z0 . Hence, the bounds for ψ (z) are finite and independent of A and ε. 3. For z > z1 ψ (z) = C1 exp{−Λ2 /2} where C1 = 1 1+η εp 1− √ Mκ Y , z − z1 1 √ , and Λ = Y ln 2π (a) 0 < z − z1 ≤ A 141 z − z1 A + Z0 . We first find an upper bound for exp[−Λ2 /2]. exp[−Λ2 /2] = exp = A z − z1 Y ln A z − z1 = A z − z1 ≤ A z − z1 Y · e−Z0 e Z + 0 2 Y ln z−z0 + Z0 2 2 A Y 2 ln z−z1 + Y Z0 2 2 A Y 2 ln z−z1 + Y Z0 2 2 A The last inequality holds because of z−z Z −Z0 Y ln A 1 + 20 2 z − z1 A Y ln 2 − Z0 · ·e z−z Z −Z0 Y ln A 1 + 20 2 . z − z1 A Y ln 2 ≤ 1 when 0 < Y < 1, 0 < Z + 0 < 0 and consequently, 2 z − z1 ≤ 1, and Z0 < 0. A Therefore, for 0 < z − z1 ≤ A, Y ψ (z) ≤ C1 · · A A z − z1 1 1 √ where C1 = 1 + η 2π Y 2 ln z−z1 + Y Z0 −1 2 2 A εp 1− √ Mκ , . 
Note that A z − z1 = exp − Y 2 ln z−z1 + Y Z0 −1 2 2 A Y2 ln 2 z − z1 A + Y Z − 1 · ln 2 0 z − z1 A ↓0 as z − z1 → 0, or z − z1 → ∞. So it would have an interior maximum on R when 142 its derivative with respect to z equals to zero. That is, Y2 ln 2 z − z1 ˜ Y Z0 + − 1 = 0. However, this point, A 2 z − z1 ˜ 2 Y 2 = exp 1 − Z0 > exp > 1 is outside of the interval 2 A 2 Y Y2 z − z1 ∈ (0, 1], so A when A z − z1 Y 2 ln z−z1 + Y Z0 −1 2 2 A ∈ (0, 1] for z − z1 ∈ (0, 1].2 A Consequently, |ψ (z)| (1) Y ·1 A √ (2) 1 1 p 2πεp 1 √ · √ · ≤ 1 + η 2π 2 M κ/2 A ≤ C1 · εp 1 p ·√ · 1+η Mκ 2 1 p · √ ≤ . 1 + η 2 Mκ (3) ≤ 2 z−z YZ Y ln 1 0 2 A + 2 −1 A Note inequality (1) is from the last estimate on . z − z1 √ p 1 (2) holds because C1 < 1+η √1 , and Y < p √ 2πε when εp < 1. (3) holds 2 2π M κ/2 because A ≥ 2. 2 The range of z − z1 → 0+ . A z−z1 Y 2 ln z−z1 + Y Z0 −1 2 2 A is between the value at z − z1 = A and when 143 (b) z − z1 > A: A Y · exp{−Λ2 /2} A z − z1 Y ≤ C1 · · 1 · 1 A √ √ 1 1 M κ − εp p 2πεp 1 √ √ = · √ · 1 + η 2π 2 M κ − εp A Mκ p 1 ε p 1 ≤ ·√ · · 1+η Mκ 2 2 |ψ (z)| = C1 · < p 1 · √ 1 + η 4 Mκ Proposition B.4. |ψ (z)| is finite and the maximum value is independent of A. Proof. We estimate |ψ | in the following intervals separately. 1. z < z0 < 0: εp √ ψ = · p(p + 1)|z|−p−2 . (1 + η) M κ Since (a) εp √ (1+η) M κ < 1 √ (1+η)2 2 for all ε and A3 , and (b) z0 depends only on on p, the decay rate of σ 2 . |ψ| is finite and the maximum value independent of A and ε. 2. z0 ≤ z ≤ z1 : In this case, ψ is a polynomial on [z0 , z1 ]. Since all coefficients and the end points z0 , z1 of this polynomial are finite and depending only on p, the maximum and minimum 3 satisfies Lemma 5.3 144 values of ψ are finite and only depending on p. 3. z > z1 |ψ (z)| = C1 1+YΛ 2 (z − z1 )2 exp Λ 2 = C1 · ·Y 2 Λ2 A · exp − z − z1 2 · 1+YΛ ·Y A2 1 εp z − z1 1 √ (1 − √ + Z0 , and 0 < C1 = ) < 1. Next, we show where Λ = Y ln A 1 + η 2π Mκ |ψ (z)| ≤ 1 + e−1/2 for z > z1 in the following subintervals. (a) 0 < z − z1 ≤ A = = 2 Λ2 1+YΛ A · exp − · ·Y z − z1 2 A2     2 −Y 2 A Z0   1 + Y Λ ln z − z1 · exp ·Λ · − ·Y   z − z1 A 2 A2 Y z − z1 − 2 Λ−2 − Z0 Λ 1 + Y Λ ·e 2 · ·Y A A2 ≤ Y z − z1 − 2 Λ−2 − Z0 Λ · e 2 · |1 + Y Λ| · Y A ≤ Y z − z1 − 2 Λ−2 − Z0 Λ e 2 Y + A (1) (2) ≤ Y z − z1 − 2 Λ−2 − Z0 Λ e 2 + A Y z − z1 − 2 Λ−2 − Z0 Λ e 2 |Λ| Y 2 A Y z − z1 − 2 Λ−2 − Z0 Λ e 2 |Λ|, A where inequalities (1) holds because A > 2, and (2) holds since 0 < Y < 1. 145 Next, we find an upper bound for and Y z − z1 − 2 Λ−2 − Z0 Λ ·e 2 A Y z − z1 − 2 Λ−2 − Z0 Λ · e 2 · |Λ|. A i. Y z − z1 − 2 Λ−2 − Z0 Λ Λ2 2 ·e = exp − A 2 ≤ 1 in R. ii. Y z − z1 − 2 Λ−2 − Z0 Λ Λ2 · e 2 · |Λ| = −Λ · exp − A 2 . The last quality has maximum when its derivative, Λ exp − Λ2 2 (−1 + Λ2 ), + + equals to zero, or when z → z1 , i.e., when Λ = −1 4 , or z → z1 . Since Λ2 (−1)2 + = e−1/2 , and −Λ exp{− } → 0 as z → z1 , we get 1 · exp − 2 2 Y z − z1 − 2 Λ−2 − Z0 · e 2 · |Λ| ≤ e−1/2 . A 4 Since 0 < z − z1 ≤ A, Y > 0 and Z0 < 0, the only feasible solution to −1 + Λ2 = 0 is Λ = −1. 146 In conclusion, when 0 < z − z1 ≤ A, |ψ (z)| =C1 · 2 A Λ2 · exp − z − z1 2 · ≤ 2 Λ2 A · exp − z − z1 2 ≤ Y z − z1 − 2 Λ−2 − Z0 Λ e 2 ·Y + A · 1+YΛ ·Y A2 1+YΛ ·Y A2 Y z − z1 − 2 Λ−2 − Z0 Λ e 2 |Λ| · Y 2 A (1) ≤ 1 · Y + e−1/2 · Y 2 (2) ≤ 1 + e−1/2 . Inequality (1) is from the estimates on Y Z z−z1 − 2 Λ−2 − 0 Λ ·e 2 A and Y z−z1 − 2 Λ−2 · A Z0 √ e− 2 Λ ·|Λ|. Inequality (2) holds because 0 < Y < 1 when εp < min{1, 2/( 2πp)}. 
(b) z − z1 > A: |ψ (z)| ≤ ≤ 1+YΛ 2 (z − z1 )2 exp Λ 2 ·Y 1 |Y Λ| Y + 2 2 exp{Λ2 /2} (z − z1 ) (z − z1 ) (1) ≤1 + =1 + Y2 |Λ| · 2 exp{Λ2 /2} (z − z1 ) 2 A |Λ| Y2 · 2· z − z1 A exp{Λ2 /2} (2) ≤1 + Y 2 · 1 · 1 · (2) ≤ 1 + 1 · e−1/2 , 147 |Λ| exp{Λ2 /2} Y ≤ Y < 1; (2) exp Λ2 /2 |Λ| holds because z−z1 > A ≥ 2 and 0 < Y < 1; (3) holds because < e−1/2 exp{Λ2 /2} where inequalities (1) holds because |z − z1 | ≥ A ≥ 2, and as discussed in the previous case, and 0 < Y < 1. In summary, when z − z1 > A, |ψ (z)| ≤ 1 + e−1/2 . ˜ ˜ If one exam the proof for Proposition B.1, it is clear that z ψ (z) → 0, ψ (z) → 0, ˜ ψ (z) → 0 as z → +∞, and 1 ˜ lim ψ(z) = 1+η . These implies (ii), and (v) in Lemma 5.3. z→+∞ Now, we left to show (iii) and (iv) in Lemma 5.3. (iii): It is equivalent to show εp ψ≤√ M κ|z|p εp ψ≤√ Mκ z ∈ (−∞, −1), and z ∈ [−1, z1 + A). As usual, we estimate ψ in different intervals. A critical property is that ψ is non-decreasing in R. 148 1. z < z0 < −1: ψ ≤ ψ(z0 ) 1 εp 1 √ = 1 + η M κ |z0 |p εp <√ , Mκ since 0 < 1 1 , < 1. 1 + η |z0 |p 2. z0 ≤ z ≤ z1 : ψ ≤ ψ(z1 ) = 1 εp |z0 |p √ 1 + η M κ |z0 |p εp √ < , Mκ 3. z1 < z ≤ z1 + A: ψ ≤ ψ(z1 + A) εp εp 1− √ N (Z0 ) + √ Mκ Mκ √ p ε 1 Mκ =√ − 1 N (Z0 ) + 1 εp Mκ 1 + η √ (1) εp 1 Mκ ηεp ≤ √ −1 · √ +1 , εp Mκ 1 + η M κ − εp √ εp 1 M κ − εp ηεp =√ · ·√ +1 εp Mκ 1 + η M κ − εp = 1 1+η εp =√ Mκ 149 Inequality (1) holds because we choose Z0 so that N (Z0 ) ≤ √ ηεp . M κ − εp (iv): 1. z < z0 : zψ ψ = zpz p−1 = p. zp zψ ψ ≤ z0 ψ (z0 ) = p. ψ(z0 ) 2. z0 ≤ z ≤ 0: The first inequality holds because when z0 < z < 0, ψ > 0, ψ > 0, and ψ < 0. Therefore, |z|, |ψ (z)|, and 1/|ψ(z)| are decreasing as z ↑ 0. zψ 3. 0 < z ≤ z1 : since z, ψ > 0, and ψ > 0, it is equivalent to show ≤ p, i.e., ψ W (z) := zψ − pψ ≤ 0 for 0 < z ≤ z1 . We now exam the inequality over the following two sub-intervals5 . (a) z + z1 For 0 < z ≤ 0 , 2 W (z) = zg1 (z) − g1 (z). • Endpoints W z0 + z1 2 14/3p2 3 = z , 1+p 0 <0. 5 We z +z z −z p−1 2 frequently use the equalities 0 2 1 = p+1 z0 , and 1 2 0 = − p+1 z0 . 150 • Exam the interior extreme points. Since W (z) = (1 − p)g1 (z) + zg”1 (z) = p−3 2 p(p + 1)2 (z − z0 )2 + p(p + 1)(−3)z0 (z − z0 ) − 2p2 z0 , 2 W (˜) = 0 when z p−3 2 (p + 1)2 (z − z0 )2 − 3(p + 1)z0 (z − z0 ) − 2pz0 = 0 i.e., 2 3z ± (2p − 3) z − z0 = 0 ˜ z , i.e., (p − 3)(p + 1) 0 z= ˜ 2p + 1 z0 < 0, (p − 3)(p + 1) z= ˜ −2 z + z1 + 1 z0 = 0 . p+1 2 or We see that the two extreme points are either unfeasible, or an end point. z +z Therefore, we conclude W (z) < 0 for 0 < z ≤ 0 2 1 . (b) z + z1 When 0 < z ≤ z1 , 2 W (z) = zg2 (z) − pg2 (z) =z p(p + 1)2 (z − z1 )3 p(p + 1)2 (z − z1 )2 −p + |z0 |p+3 . 4 12 • Check endpoints: W z0 + z1 2 =− 14p2 |z |3 < 0 3(1 + p) 0 W (z1 ) = −p|z0 |p+3 < 0. 151 • Let us now check interior extreme points. Since W (z) 2 2 p(p + 1)2 (z − z1 )2 z 2 · 2(z − z ) − p (p + 1) · 3 · (z − z )2 = + p(p + 1) 1 1 4 4 12 p(p + 1)2 = (z − z1 ) [(3 − p)(z − z1 ) + 2z1 ], 4 W (˜) = 0 if z = z1 , or z = z ˜ ˜ 1−p z . Furthermore, 3−p 1 1−p z ) 3−p 1 −2 −2 p(p + 1)2 ( 3−p z1 )2 p(p + 1)2 ( 3−p z1 )3 1−p z · −p + |z0 |p+3 = 3−p 1 4 12 W( = p(p + 1)2 1 3 (1 − p)z1 − p|z0 |p+3 3 3 (3 − p) =− 14p2 |z |3 3(p + 1) 0 <0. 4. When z > z1 , zψ ψ where Λ = Y ln( • 6 Recall, = p 2 1 1 z · 1+η · (1 − √ε ) · √1 · e−Λ /2 · Y · z−z 2π Mκ 1 1 1+η p p (1 − √ε )N [Λ] + √ε Mκ , Mκ z − z1 ) + Z0 . A For 0 < z − z1 ≤ A, we have z1 < z < 2A 6 . By Proposition B.3, in Lemma 5.3, we assume A > z1 > 2. 152 ψ (z) ≤ p εp 1 · ·√ . So 2A 1 + η Mκ zψ ψ p p 1 2A · 2A · 1+η · √ε Mκ ≤ = p. 
Here the denominator \(\tfrac{1}{1+\eta}\,\tfrac{\varepsilon^p}{\sqrt{M\kappa}}\) is the lower bound for ψ(z) used above.

• For \(z - z_1 > A \ge z_1\) we have \(1 < \dfrac{z}{z-z_1} < 2\), and consequently
\[
\frac{z\,\psi'(z)}{\psi(z)}
\le \frac{z}{z-z_1}\cdot
\frac{\frac{1}{1+\eta}\Bigl(1-\frac{\varepsilon^p}{\sqrt{M\kappa}}\Bigr)\frac{1}{\sqrt{2\pi}}\,e^{-\Lambda^2/2}\,Y}
{\frac{1}{1+\eta}\,\frac{\varepsilon^p}{\sqrt{M\kappa}}}
\;\overset{(1)}{\le}\;
2\cdot\frac{\sqrt{M\kappa}-\varepsilon^p}{\sqrt{M\kappa}}\cdot\frac{1}{\sqrt{2\pi}}\cdot\frac{\sqrt{M\kappa}}{\varepsilon^p}\cdot Y
= \frac{2\bigl(\sqrt{M\kappa}-\varepsilon^p\bigr)}{\sqrt{2\pi}\,\varepsilon^p}\,Y
= p,
\]
where (1) holds because \(z/(z-z_1) < 2\) and \(e^{-\Lambda^2/2} \le 1\), and the last equality holds since \(Y = \dfrac{\sqrt{2\pi}\,p\,\varepsilon^p}{2\bigl(\sqrt{M\kappa}-\varepsilon^p\bigr)}\).

This completes the proof of (iii) and (iv), and hence the proof of Lemma 5.3.

BIBLIOGRAPHY

[1] T. M. Apostol, Mathematical Analysis, second printing, Addison-Wesley, 1965.

[2] D. G. Aronson and P. Besala, Parabolic equations with unbounded coefficients, Journal of Differential Equations 3 (1967), 1–14.

[3] D. G. Aronson, Non-negative solutions of linear parabolic equations, Annali della Scuola Normale Superiore di Pisa, Classe di Scienze 4 (1968), 607–694.

[4] D. G. Aronson and J. Serrin, Local behavior of solutions of quasilinear parabolic equations, Archive for Rational Mechanics and Analysis 25 (1967), 81–122.

[5] H. Berestycki, J. Busca, and I. Florent, Asymptotics and calibration of local volatility models, Quantitative Finance 2 (2002), 61–69.

[6] H. Berestycki, J. Busca, and I. Florent, Computing the implied volatility in stochastic volatility models, Communications on Pure and Applied Mathematics LVII (2004), 1352–1373.

[7] M. A. Bharadia, N. Christofides, and G. R. Salkin, Computing the Black–Scholes implied volatility, Advances in Futures and Options Research, vol. 8, JAI Press, London, 1996, 15–29.

[8] T. Björk, Arbitrage Theory in Continuous Time, Oxford University Press, 1998.

[9] T. Björk, Arbitrage in Continuous Time, World Scientific, Singapore, 2004.

[10] F. Black and M. Scholes, The pricing of options and corporate liabilities, Journal of Political Economy 81 (1973), no. 3, 637–654.

[11] I. Bouchouev and V. Isakov, Uniqueness, stability and numerical methods for the inverse problem that arises in financial markets, Inverse Problems 15 (1999), 95–116.

[12] I. Bouchouev and V. Isakov, The inverse problem of option pricing, Inverse Problems 13 (1997), 7–11.

[13] A. Brandt, Interior Schauder estimates for parabolic differential (or difference) equations via the maximum principle, Israel Journal of Mathematics, vol. III (1969), 254–263.

[14] M. Brenner and M. G. Subrahmanyam, A simple formula to compute the implied standard deviation, Financial Analysts Journal 44 (1988), 80–83.

[15] D. R. Chambers and S. K. Nawalkha, An improved approach to computing implied volatility, The Financial Review 38 (2001), 89–100.

[16] D. M. Chance, A generalized simple formula to compute the implied volatility, The Financial Review 31 (1996), no. 4, 859–867.

[17] C. J. Corrado and T. W. Miller, A note on a simple, accurate formula to compute implied standard deviations, Journal of Banking and Finance 20 (1996), 595–603.

[18] J. Cox, S. Ross, and M. Rubinstein, Option pricing: a simplified approach, Journal of Financial Economics 7 (1979), 229–263.

[19] Z.-C. Deng, J.-N. Yu, and L. Yang, An inverse problem of determining the implied volatility in option pricing, Journal of Mathematical Analysis and Applications 340 (2008), no. 1, 16–31.

[20] E. Derman and I. Kani, Riding on a smile, Risk, vol. 2 (1994), 9–32.

[21] N. Dunford and J. T. Schwartz, Linear Operators, Parts I and II, Wiley-Interscience, 1958.

[22] B. Dupire, Pricing with a smile, Risk 7 (1994), 18–20.

[23] S. Feinstein, A source of unbiased implied volatility, Working Paper 88-9, Federal Reserve Bank of Atlanta, December 1988.
[24] A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, Englewood Cliffs, NJ, 1964.

[25] R. Guenther, Some elementary properties of the fundamental solution of parabolic equations, Mathematics Magazine 39 (1966), no. 5, 294–298.

[26] S. L. Heston, A closed-form solution for options with stochastic volatility with applications to bond and currency options, Review of Financial Studies 6 (1993), 327–343.

[27] E. Hofstetter and M. J. P. Selby, The logistic function and implied volatility: quadratic approximation and beyond, online working paper, 2001.

[28] J. Hull, Options, Futures and Other Derivatives, Prentice-Hall, Englewood Cliffs, NJ, 1997.

[29] J. C. Hull and A. White, An analysis of the bias in option pricing caused by a stochastic volatility, Advances in Futures and Options Research 3 (1988), 29–61.

[30] A. M. Il'yin, A. S. Kalashnikov, and O. A. Oleynik, Second-order linear equations of parabolic type, Journal of Mathematical Sciences 108 (2002), no. 4, 435–542.

[31] V. Isakov and I. Bouchouev, Uniqueness, stability and numerical methods for the inverse problem that arises in financial markets, Inverse Problems 15 (1999), 95–116.

[32] J. Chargoy-Corona and C. Ibarra-Valdez, A note on Black–Scholes implied volatility, Physica A: Statistical Mechanics and its Applications 370 (2006), no. 2, 681–688.

[33] B. F. Knerr, Parabolic interior Schauder estimates by the maximum principle, Archive for Rational Mechanics and Analysis 75 (1980), 51–58.

[34] O. Ledoit, P. Santa-Clara, and S. Yan, Relative Pricing of Options with Stochastic Volatility, eScholarship, University of California, 2002.

[35] R. C. Merton, Theory of rational option pricing, Bell Journal of Economics and Management Science 4 (1973), no. 1, 141–183.

[36] R. Merton, Option pricing when underlying stock returns are discontinuous, Journal of Financial Economics 3 (1976), 125–144.

[37] M. Li, Approximate inversion of the Black–Scholes formula using rational functions, European Journal of Operational Research 185 (2008), no. 2, 743–759.

[38] J. Moser, A Harnack inequality for parabolic differential equations, Communications on Pure and Applied Mathematics 17 (1964), 101–134; correction, ibid. 20 (1967), 213–236.

[39] M. Rubinstein, Implied binomial trees, Journal of Finance 49 (1994), 771–818.

[40] S. Li, A new formula for computing implied volatility, Applied Mathematics and Computation 170 (2005), no. 1, 611–625.

[41] J. Teichmann and W. Schachermayer, How close are the option pricing formulas of Bachelier and Black–Merton–Scholes?, Mathematical Finance, 2006.

[42] L. Wang, On the regularity theory of fully nonlinear parabolic equations: I, Communications on Pure and Applied Mathematics XLV (1992), 27–76.

[43] P. Wilmott, S. Howison, and J. Dewynne, The Mathematics of Financial Derivatives, Cambridge University Press, Cambridge, UK, 1995.

[44] W. Zhao, L. Chen, and S. Peng, A new kind of accurate numerical method for backward stochastic differential equations, SIAM Journal on Scientific Computing 28 (2006), no. 4, 1563–1581.