ON THE ESTABLISHMENT OF HYPERBOLICITY OF SHALLOW WATER MOMENT EQUATIONS IN TWO DIMENSIONS

By

Matthew Bauerle

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Mathematics—Doctor of Philosophy

2023

ABSTRACT

In this thesis, we investigate the two-dimensional extension of a recently introduced set of shallow water models based on a regularized moment expansion of the incompressible Navier-Stokes equations [22, 21]. We show the rotational invariance of the proposed moment models with two different approaches. The first proof splits the coefficient matrix into conservative and non-conservative parts and proves the rotational invariance of each part, while the second relies on the special block structure of the coefficient matrices. With the aid of rotational invariance, the analysis of the hyperbolicity of the moment model in 2D is reduced to the real diagonalizability of the coefficient matrix in 1D. We then prove the real diagonalizability by deriving the analytical form of the characteristic polynomial. Furthermore, we extend the model to include a more general class of closure relations than the original model and establish that this class of closures retains both rotational invariance and hyperbolicity.

ACKNOWLEDGMENTS

This work was impacted by many people. It has been said that we stand on the shoulders of giants, so it only seems fair we mention them.

The main thanks for research goes to my advisor, Professor Andrew Christlieb, for organizing the research group that led to this thesis. I thank him for including me in the group and helping me through the struggles of graduate studies. The meetings and encouragement have kept me on track. The Christlieb group, AKA SPECTRE (it is from James Bond and we will tell the story if you wish), has been an enjoyable space in which to both work and spend free time. I look forward to seeing what other work and tales will come out from its members. My committee deserves thanks for reviewing this thesis and offering feedback. Thanks goes to Professor Juntao Huang for formalizing the proofs and to Dr. Mingchang Ding for the numerical analysis of the equations.

I would like to thank Michigan State University and Texas Tech University for the support of this work, through the ongoing support of the PIs and students. I would further like to thank Keith Promislow for his helpful discussions and insight during the development of this effort. The reading course we set up was also greatly appreciated as a way to finish out credits and review literature. I would like to thank the Office of Naval Research for the support of this work under grant number N00014-18-1-2552.

Last, I would like to thank my family for helping me through the ups and downs. My mother was my first teacher and my father first got me interested in mathematics and engineering. I still appreciate their encouragement and motivation (sometimes).

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
CHAPTER 2 SHALLOW WATER MOMENT EQUATIONS
2.1 Equation Derivation
CHAPTER 3 ANALYSIS OF ROTATIONAL INVARIANCE
3.1 Rotational invariance of the SWME and the HSWME
3.2 Other proof of rotational invariance of the SWME and the HSWME
3.3 General closure relation with the rotational invariance
CHAPTER 4 ANALYSIS OF THE HYPERBOLICITY
4.1 Hyperbolicity of the HSWME
4.2 Hyperbolicity of the β-HSWME
4.3 A framework for constructing general closure relations with rotational invariance and hyperbolicity
4.4 An example of constructing a general closure
CHAPTER 5 CONCLUDING REMARKS
BIBLIOGRAPHY
APPENDIX

CHAPTER 1
INTRODUCTION

Shallow water equations are widely used in modeling meteorological and oceanographic phenomena. They are most useful in scenarios where the vertical dimension is much smaller than the horizontal dimensions of the problem domain. This is often the case with simulations spanning thousands of kilometers horizontally but only a couple of kilometers vertically. An introduction to shallow water equations and a proof of their hyperbolicity in two dimensions is given in [29]. The definition of hyperbolicity for first-order systems of PDEs in multiple spatial dimensions is given in [25]. There, the Euler equations were shown to be rotationally invariant, and therefore that system is hyperbolic in multiple dimensions if it is hyperbolic in one direction.

While the vertical dimension is often the smallest dimension, vertical dynamics play a critical role in many scenarios. Shallow water moment equations provide an intermediate level of resolution between the shallow water equations and a full three-dimensional model. The first extension of the shallow water model is to replace the horizontal velocities with a polynomial expansion in the vertical direction [22]. This creates a first-order system of PDEs. For the zeroth-order model, the system is the shallow water equations, which form a conservative hyperbolic system. When one moment is added, the system is still globally hyperbolic, but it loses the form of a conservation law. For two or more moments the system ceases to be globally hyperbolic. Subsequent work revealed this could lead to instabilities in solutions, and the one-dimensional equations were altered to make them hyperbolic [21]. In [26] the same regularization was extended to the two-dimensional problem. While it is conjectured in that work that the two-dimensional regularized shallow water moment equations are globally hyperbolic, it is not proven.

Some extensions to shallow water equations are presented in [3]. Various boundary conditions and modifications are discussed. One of the most important modifications is the introduction of multilevel shallow water models. The examples given are of water dynamics in the Strait of Gibraltar and of submarine avalanches. Hyperbolicity loss is also a problem for multilevel shallow water equations. It is associated with a shear velocity differential between the layers that physically creates Kelvin-Helmholtz instabilities, which cannot be modeled with a piecewise constant flow [6].

This work deals with the two-dimensional extension of shallow water models based on a regularized moment expansion of the incompressible Navier-Stokes (NS) equations [22, 21]. Moment models have a long history of constructing computationally efficient representations of complex dynamics arising in a higher dimensional model.
Historically, this approach has been widely used in the reduction of kinetic equations to hierarchies of moment models, where the resulting evolution equation for the mth moment depends on the (m+1)st moment [14]. At some point, one needs to restrict the expansion to m moments and introduce a model for the (m+1)st moment in terms of lower moments to obtain a solvable system of equations. This leads to the well-known moment closure problem, which is the study of what kind of model preserves the desired hyperbolic structure [23, 4, 5, 11, 8].

In this work, we are looking at a new class of models for describing systems traditionally modeled with shallow water equations [28] and multi-layer shallow water equations [2, 24, 10, 9]. In [22], the new class of models, called the shallow water moment equations (SWME), was derived by taking moments with respect to the Legendre polynomials of the vertical direction of the three-dimensional (3D) incompressible NS equations. The first two moments of the system yield the traditional shallow water equations. In principle, higher-order moments offer an approach to include vertical information without introducing a mesh in the vertical direction, as one would need for the 3D NS equations. Fundamentally, this is seeking to address a multi-scale problem by providing a path to increased fidelity of the flow dynamics without needing to introduce a mesh to resolve the vertical direction. The system as originally proposed is mathematically elegant, but does not preserve hyperbolicity for higher numbers of moments in 1D and 2D. The developers of the model realized this and introduced a class of regularizations in 1D called the hyperbolic shallow water moment equations (HSWME) and proceeded to show that this regularized system is provably hyperbolic [21]. In addition, they have extended the 1D system to a variety of settings and compared depth-averaged results of the incompressible 2D Navier-Stokes equations to the regularized moment-based models. In their work, they demonstrated increased fidelity when adding additional moments [22, 21]. More recently, there have been several follow-up works, including equilibrium stability analysis [19], well-balanced schemes [20], efficient time discretizations [1], an axisymmetric model [27], multilayer-moment models [13], and an extension to sediment transport [12].

In this work, our main result is that we establish that the 2D extension of the regularized moment expansion of the incompressible NS equations is rotationally invariant and hyperbolic for an arbitrary number of moments. We present two different proofs of the rotational invariance. The first proof splits the coefficient matrix into conservative and non-conservative parts and proves the rotational invariance of each part, while the second relies on the special block structure of the coefficient matrix. With the aid of rotational invariance, the hyperbolicity in 2D reduces to the real diagonalizability of the matrix in 1D. To analyze the eigenvalues, we make use of the associated polynomial sequence and derive the characteristic polynomial of the coefficient matrix analytically. We find that the eigenvalues are related to the Gauss-Lobatto quadrature points (i.e., the zeros of the derivative of Legendre polynomials) and the Gauss-Legendre quadrature points (i.e., the zeros of Legendre polynomials). In particular, we show that the eigenvalues of the moment system are real and distinct for an arbitrary number of moments.
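This eigenvalue structure can be checked numerically once the one-dimensional coefficient blocks of Chapter 4 are in hand. The following is a minimal sketch, not part of the analysis itself: it transcribes the blocks $\tilde A_{11}$ and $\tilde A_{22}$ from (4.1.3) and (4.1.5) of Chapter 4, and the values of $N$, $h$, $u$, $\alpha_1$ are arbitrary test values.

```python
# Sketch: eigenvalues of the 1D HSWME blocks versus Gauss-Lobatto / Gauss-Legendre points.
# Entries are transcribed from (4.1.3) and (4.1.5); N, g, h, u, a1 are arbitrary test values.
import numpy as np
from numpy.polynomial.legendre import Legendre

N, g, h, u, a1 = 5, 9.81, 2.0, 0.7, 0.3

A11 = np.zeros((N + 2, N + 2))                 # block (4.1.3), size (N+2) x (N+2)
A11[0, 1] = 1.0
A11[1, :3] = [g*h - u**2 - a1**2/3, 2*u, 2*a1/3]
A11[2, :2] = [-2*u*a1, 2*a1]
A11[3, 0] = -2*a1**2/3
for i in range(2, N + 2):
    A11[i, i] = u
    if i >= 3:
        A11[i, i - 1] = (i - 2)/(2*i - 3)*a1
    if i <= N:
        A11[i, i + 1] = (i + 1)/(2*i + 1)*a1

A22 = u*np.eye(N + 1)                          # block (4.1.5), size (N+1) x (N+1)
for i in range(N):
    A22[i, i + 1] = (i + 1)/(2*i + 3)*a1
    A22[i + 1, i] = (i + 1)/(2*i + 1)*a1

lobatto = u + a1*Legendre.basis(N + 1).deriv().roots()   # zeros of P'_{N+1}
gauss = u + a1*Legendre.basis(N + 1).roots()             # zeros of P_{N+1}
lam11 = np.sort(np.concatenate(([u - np.sqrt(g*h + a1**2), u + np.sqrt(g*h + a1**2)], lobatto)))

print(np.allclose(np.sort(np.linalg.eigvals(A11).real), lam11))           # expected: True
print(np.allclose(np.sort(np.linalg.eigvals(A22).real), np.sort(gauss)))  # expected: True
```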
More importantly, we establish general closure relations such that the rotational invariance and hyperbolicity are guaranteed. This opens the door to the development of data-driven closures that preserve hyperbolicity, as in our past work for the radiation transfer equations [15, 17, 16]. Data-driven closures are the subject of our future work.

The remaining parts of this thesis are structured as follows: Chapter 2 introduces the models. In Chapter 3, we show two proofs of the rotational invariance of the models. Chapter 4 analyzes the eigenstructure of the models and establishes their hyperbolicity. In Chapter 5, we give conclusions and discuss future directions.

CHAPTER 2
SHALLOW WATER MOMENT EQUATIONS

2.1 Equation Derivation

In this section, we review the main ideas and the results for the derivation of the shallow water moment model in [22]. We also show the 1D hyperbolic shallow water moment model proposed in [21].

We start by considering the 3D incompressible Navier-Stokes equations:
$$
\nabla \cdot U = 0, \qquad
\partial_t U + \nabla \cdot (U U) = -\frac{1}{\rho}\nabla p + \frac{1}{\rho}\nabla \cdot \sigma + g.
\tag{2.1.1}
$$
Here, $U = (u, v, w)^T$ is the velocity vector, $p$ is the pressure and $\sigma$ is the stress tensor. The density $\rho$ is constant and $g = (e_x, e_y, e_z) g$ with $(e_x, e_y, e_z)$ a constant unit vector denotes the gravitational acceleration. The often used shallow water coordinate system is recovered by choosing $e_x = e_y = 0$ and $g = (0, 0, -1)^T g$.

Under the shallowness assumption, i.e., the horizontal scales of the flow are much larger than the vertical scale, the Navier-Stokes equations (2.1.1) can be reduced by an asymptotic analysis to:
$$
\begin{aligned}
\partial_x u + \partial_y v + \partial_z w &= 0, \\
\partial_t u + \partial_x(u^2) + \partial_y(uv) + \partial_z(uw) &= -\frac{1}{\rho}\partial_x p + \frac{1}{\rho}\partial_z \sigma_{xz} + g e_x, \\
\partial_t v + \partial_x(uv) + \partial_y(v^2) + \partial_z(vw) &= -\frac{1}{\rho}\partial_y p + \frac{1}{\rho}\partial_z \sigma_{yz} + g e_y,
\end{aligned}
\tag{2.1.2}
$$
where the hydrostatic pressure is given by
$$
p(x, y, z, t) = (h_s(x, y, t) - z)\rho g e_z,
\tag{2.1.3}
$$
with $h_s(x, y, t)$ being the profile of the upper free surface. A summary of this reduction is presented in the appendix of this thesis and details can be found in Appendix A of [22].

To derive the shallow water moment model from (2.1.2), the first idea in [22] is to introduce a scaled vertical variable $\zeta(x, y, t)$ given by
$$
\zeta(x, y, t) := \frac{z - h_b(x, y, t)}{h_s(x, y, t) - h_b(x, y, t)} = \frac{z - h_b(x, y, t)}{h(x, y, t)},
\tag{2.1.4}
$$
where $h(x, y, t) := h_s(x, y, t) - h_b(x, y, t)$ is the water height from the bottom $h_b(x, y, t)$ to the surface $h_s(x, y, t)$. This transforms the z-direction from the physical space $z \in [h_b, h_s]$ to a projected space $\zeta \in [0, 1]$. For any function $\psi = \psi(x, y, z, t)$, the corresponding mapped function $\tilde\psi = \tilde\psi(x, y, \zeta, t)$ is given by
$$
\tilde\psi(x, y, \zeta, t) := \psi(x, y, h(x, y, t)\zeta + h_b(x, y, t), t).
\tag{2.1.5}
$$

The complete vertically resolved shallow flow system has the form [22]
$$
\begin{aligned}
\partial_t h + \partial_x(h u_m) + \partial_y(h v_m) &= 0, \\
\partial_t(h\tilde u) + \partial_x\Big(h\tilde u^2 + \frac{g}{2} e_z h^2\Big) + \partial_y(h\tilde u\tilde v) + \partial_\zeta\Big(h\tilde u\omega - \frac{1}{\rho}\tilde\sigma_{xz}\Big) &= gh(e_x - e_z\partial_x h_b), \\
\partial_t(h\tilde v) + \partial_x(h\tilde u\tilde v) + \partial_y\Big(h\tilde v^2 + \frac{g}{2} e_z h^2\Big) + \partial_\zeta\Big(h\tilde v\omega - \frac{1}{\rho}\tilde\sigma_{yz}\Big) &= gh(e_y - e_z\partial_y h_b),
\end{aligned}
\tag{2.1.6}
$$
where $u_m(x, y, t) = \int_0^1 u(x, y, \zeta, t)\, d\zeta$ and $v_m(x, y, t) = \int_0^1 v(x, y, \zeta, t)\, d\zeta$ denote the mean velocities and $\omega$ is the vertical coupling
$$
\omega = \frac{1}{h}\,\overline{\partial_x(h\tilde u) + \partial_y(h\tilde v)},
\tag{2.1.7}
$$
with the average for any function $\psi = \psi(\zeta)$ defined by
$$
\bar\psi(\zeta) := \int_0^\zeta \left( \int_0^1 \psi(\check\zeta)\, d\check\zeta - \psi(\hat\zeta) \right) d\hat\zeta.
\tag{2.1.8}
$$
Note that, for a constant flow profile in $\zeta$, the vertical coupling coefficient $\omega$ vanishes. In that case, if in addition shear stresses are negligible, $\sigma_{xz} = \sigma_{yz} = 0$, the system reduces to the shallow water equations.

Before deriving the moment equations, we introduce some assumptions.
First, we use the Newtonian constitutive law: σxz = µ∂zu, σyz = µ∂zv, 5 where µ stands for the dynamic viscosity and ν = µ/ρ the kinematic viscosity. In order to solve it, we need to specify dynamic boundary conditions in the form of a velocity boundary condition both at the free-surface, and at the bottom topography. At the free-surface, the stress-free conditions are assumed: ∂zu = ∂zv = 0, at z = hs(x, y, t). At the basal surface, the slip boundary conditions are assumed: u − λ µ σxz = v − λ µ σyz = 0, at z = hb(x, y, t). Here, λ stands for the slip length. By assuming a polynomial expansion of the velocity components: u(x, y, z, t) = um(x, y, t) + v(x, y, z, t) = vm(x, y, t) + N (cid:88) j=1 N (cid:88) j=1 αj(x, y, t)ϕj(z), βj(x, y, t)ϕj(z), with the scaled Legendre polynomials ϕj, orthogonal on the interval [0, 1] and normalized by 6 ϕj(0) = 1, the shallow water moment equations (SWME) can be derived [22]: ∂th + ∂x(hum) + ∂y(hvm) = 0, (cid:32) ∂t(hum) + ∂x h(u2 m + N (cid:88) j=1 α2 j 2j + 1 ) + g 2 ezh2 (cid:33) (cid:32) + ∂y h(umvm + (cid:33) N (cid:88) j=1 αjβj 2j + 1 ) = − ν λ (um + N (cid:88) j=1 αj) + hg(ex − ez∂xhb), (cid:32) ∂t(hvm) + ∂x h(umvm + N (cid:88) j=1 αjβj 2j + 1 (cid:33) ) (cid:32) + ∂y h(v2 m + N (cid:88) j=1 β2 j 2j + 1 ) + g 2 ezh2 (cid:33) = − ν λ (vm + N (cid:88) j=1 βj) + hg(ey − ez∂yhb), (cid:32) ∂t(hαi) + ∂x h(2umαi + N (cid:88) j,k=1 (cid:33) (cid:32) Aijkαjαk) + ∂y h(umβi + vmαi + N (cid:88) j,k=1 = umDi − (cid:32) BijkDjαk − (2i + 1) ν λ (cid:32) um + N (cid:88) (1 + λ h (cid:33) j=1 (cid:32) ∂t(hβi) + ∂x h(umβi + vmαi + Aijkαjβk) + ∂y h(2vmβi + N (cid:88) j,k=1 = vmDi − N (cid:88) j,k=1 BijkDjβk − (2i + 1) (cid:32) vm + ν λ N (cid:88) j=1 (1 + λ h (cid:33) Aijkαjβk) N (cid:88) j,k=1 (cid:33) (cid:33) Aijkβjβk) N (cid:88) j,k=1 (cid:33) Cij)αj , i = 1, 2, · · · , N, Cij)βj , i = 1, 2, · · · , N. (2.1.9) Here the right-hand-side (RHS) contains non-conservative terms involving the expression Di := ∂x(hαi) + ∂y(hβi) (2.1.10) and the constants Aijk, Bijk, Cij are related to the integrals of the Legendre polynomials: Aijk = (2i + 1) Bijk = (2i + 1) (cid:90) 1 0 (cid:90) 1 0 ϕiϕjϕkdζ, i, j, k = 1, · · · , N, ϕ′ i (cid:18)(cid:90) ζ (cid:19) ϕjdˆζ 0 ϕkdζ, i, j, k = 1, · · · , N, (2.1.11) Cij = (cid:90) 1 0 iϕ′ ϕ′ jdζ, i, j = 1, · · · , N. The above system (2.1.9) can be written as ∂tU + A(U )∂xU + B(U )∂yU = S(U ), (2.1.12) 7 with the unknown variables U = (h, hum, hvm, hα1, hβ1, hα2, hβ2, · · · , hαN , hβN )T . (2.1.13) The coefficient matrices in (2.1.12) can be split into the conservative and non-conservative parts: A(U ) = ∂U F (U ) + P (U ), B(U ) = ∂U G(U ) + Q(U ). (2.1.14) Here the physical fluxes F (U ) and G(U ) for the conservative parts are F (U ) = (hu, h(u2 + N (cid:88) j=1 α2 j 2j + 1 ) + 1 2 gh2, h(uv + N (cid:88) j=1 αjβj 2j + 1 ), h(2uα1 + · · · , h(2uαn + N (cid:88) j,k=1 N (cid:88) j,k=1 A1jkαjαk), h(uβ1 + vα1 + N (cid:88) j,k=1 A1jkαjβk), (2.1.15) Anjkαjαk), h(uβN + vαn + N (cid:88) j,k=1 Anjkαjβk))T , G(U ) = (hv, h(uv + (cid:88) j αjβj 2j + 1 ), h(v2 + β2 j 2j + 1 (cid:88) j ) + 1 2 gh2, h(uβ1 + vα1 + · · · , h(uβn + vαn + (cid:88) j,k (cid:88) j,k A1jkαjβk), h(2vβ1 + (cid:88) j,k A1jkβjβk), (2.1.16) Anjkαjβk), h(2vβn + Anjkβjβk))T . (cid:88) j,k The matrices for the non-conservative part can be further decomposed into two parts: P (U ) = P1(U ) + P2(U ), Q(U ) = Q1(U ) + Q2(U ). 
(2.1.17) Here P1(U ) and Q1(U ) describe the terms umDi and vmDi on the RHS of (2.1.9): P1(U ) = diag(03×3, p(U ), · · · , p(U )) ∈ R(2N +3)×(2N +3) (2.1.18) and Q1(U ) = diag(03×3, q(U ), · · · , q(U )) ∈ R(2N +3)×(2N +3) (2.1.19) 8 with    p(U ) =   ∈ R2×2,  −u 0 −v 0 q(U ) =    The second part P2(U ) and Q2(U ) describe the terms (cid:80)N 0 −u   ∈ R2×2.  0 −v j,k=1 BijkDjαk and (cid:80)N (2.1.20) j,k=1 BijkDjβk on the RHS of (2.1.9): P2(U ) = diag(03×3, G(U )) ∈ R(2N +3)×(2N +3) (2.1.21) Q2(U ) = diag(03×3, H(U )) ∈ R(2N +3)×(2N +3) (2.1.22) G(U ) = (gij)1≤i,j≤N ∈ R2N ×2N , H(U ) = (hij)1≤i,j≤N ∈ R2N ×2N (2.1.23) and with and    gij(U ) = (cid:80) (cid:80) k Bijkαk 0 k Bijkβk 0   ∈ R2×2,  hij(U ) =    0 (cid:80) 0 (cid:80) k Bijkαk k Bijkβk   ∈ R2×2.  (2.1.24) This system (2.1.12) is called the shallow water moment equations (SWME). In the one-dimensional case, the SWME (2.1.12) with N ≥ 2 is not globally hyperbolic. In [21], the author proposed to linearize the system matrix around linear deviations from equilibrium/constant velocity. ∂tU + AH(U )∂xU + BH(U )∂yU = S(U ), (2.1.25) with and AH(U ) := A(h, hum, hvm, hα1, hβ1, 0, 0, · · · , 0, 0), BH(U ) := B(h, hum, hvm, hα1, hβ1, 0, 0, · · · , 0, 0). Keeping α1 allows to capture a large part of the structure despite its simplicity. For example, there will still be a coupling between the different higher order equations. In 1D, it is proved to be hyperbolic in [21]. This system is called the hyperbolic shallow water moment equations (HSWME). 9 CHAPTER 3 ANALYSIS OF ROTATIONAL INVARIANCE In this section, we present two approaches to prove the rotational invariance of the SWME (2.1.12) and the HSWME (2.1.25). The first approach is to first show that the SWME (2.1.12) is indeed rotational invariant by decomposing the convection term into conservative part and non-conservative part. The second approach is to exploit the systems block struc- ture. Motivated by the second approach, we propose the general closure relation such that the rotational invariance is satisfied. 3.1 Rotational invariance of the SWME and the HSWME In this part, we show that the SWME (2.1.12) is invariant under the rotation of the co- ordinate system. We first introduce the definition of rotational invariance, which guarantees that the form of the system remains unchanged under a new rotated coordinate system. Definition 3.1.1 (rotational invariance). Consider the first-order system ∂tU + A(U )∂xU + B(U )∂yU = S(U ) (3.1.1) with U = (h, hu, hv, hα1, hβ1, · · · , hαN , hβN )T ∈ R2N +3. It is said to satisfy the rotational invariance property if the following relation holds: cos θ A(U ) + sin θ B(U ) = T −1A(T U )T, (3.1.2) for any angle 0 ≤ θ < 2π and any vector U . Here T = T (θ) is the rotation matrix given by T (θ) = diag(1, T2(θ), T2(θ), · · · , T2(θ)) ∈ R(2N +3)×(2N +3) (3.1.3) with T2(θ) =    cos θ sin θ − sin θ cos θ   ∈ R2×2.  (3.1.4) Since the coefficient matrices in the SWME (2.1.12) can be split into the conservative part and non-conservative part in (2.1.14), we will prove the rotational invariance property in two steps. Before the proof, we prepare a set of equalities used in the proof. 10 Proposition 3.1.1. For u, v, α, β, θ ∈ R, we introduce the rotated variables in the new coordinate system: and uθ := cos θ u + sin θ v, vθ := − sin θ u + cos θ v, (3.1.5) αθ := cos θ α + sin θ β, βθ := − sin θ α + cos θ β. 
(3.1.6) Then the following equalities hold true: cos θ uθ − sin θ vθ = u, sin θ uθ + cos θ vθ = v, cos θ (uθ)2 − sin θ uθvθ = uuθ, sin θ (uθ)2 + cos θ uθvθ = vuθ, 2 cos θ uθαθ − sin θ (uθβθ + vθαθ) = 2 cos θ uα + sin θ (uβ + vα), 2 sin θ uθαθ + cos θ (uθβθ + vθαθ) = cos θ (uβ + vα) + 2 sin θ vβ. Proof. See the proof in Appendix A.2. We first prove the rotational invariance for the conservative part: (3.1.7) (3.1.8) (3.1.9) (3.1.10) (3.1.11) (3.1.12) Lemma 3.1.1 (rotational invariance for conservative part). The conservative part in (2.1.12) satisfies the rotational invariance: cos θ F (U ) + sin θ G(U ) = T −1F (T U ). (3.1.13) for any θ and U . 11 Proof. We first compute T U :                                             T U = = 1 cos θ sin θ − sin θ cos θ cos θ sin θ − sin θ cos θ . . .                                             h hu hv hα1 hβ1 ... hαN hβN                       cos θ sin θ − sin θ cos θ h  h(cos θu + sin θv) h(− sin θu + cos θv) h(cos θα1 + sin θβ1) h(− sin θα1 + cos θβ1) ... h(cos θαN + sin θβN ) h(− sin θαN + cos θβN )                       h huθ hvθ h(α1)θ h(β1)θ ... h(αN )θ h(βN )θ                       . =                      Here, for convenience, we introduce the notation uθ := cos θ u + sin θ v, vθ := − sin θ u + cos θ v, (3.1.14) and (αi)θ := cos θ αi + sin θ βi, (βi)θ := − sin θ αi + cos θ βi, i = 1, · · · , N. (3.1.15) 12 Next, we compute F (T U ): F (T U ) = (huθ, h(u2 θ + h(2uθ(α1)θ + (cid:88) j (cid:88) j,k · · · , h(2uθ(αn)θ + (cid:88) j,k (α1)2 θ 2j + 1 ) + 1 2 gh2, h(uθvθ + (cid:88) j (αj)θ(βj)θ 2j + 1 ), A1jk(αj)θ(αk)θ), h(uθ(β1)θ + vθ(α1)θ + (cid:88) j,k A1jk(αj)θ(βk)θ), Anjk(αj)θ(αk)θ), h(uθ(βn)θ + vθ(αn)θ + Anjk(αj)θ(βk)θ))T , (cid:88) j,k Then we compute T −1F (T U ) for each component and prove that it is equal to the LHS in (3.1.13). Notice that 1                       T −1 = cos θ − sin θ sin θ cos θ cos θ − sin θ sin θ cos θ . . .                       cos θ − sin θ sin θ cos θ . (3.1.16) We start with the first component in (3.1.13): RHS = huθ = h(cos θ u + sin θ v) = cos θ hu + sin θ hv = LHS. 13 Next, we compute the second component in (3.1.13): (cid:32) RHS = cos θ h(u2 θ + (cid:33) (α1)2 θ 2j + 1 (cid:88) j ) + 1 2 gh2 − sin θh(uθvθ + (cid:88) j (αj)θ(βj)θ 2j + 1 ) = h (cid:0)cos θu2 θ − sin θuθvθ (cid:1) + cos θ 1 2 gh2 + h (cid:88) j 1 2j + 1 (cid:0)cos θ(αj)2 θ − sin θ(αj)θ(βj)θ (cid:1) (3.1.9) = huuθ + cos θ 1 2 gh2 + h (cid:88) 1 2j + 1 αj(αj)θ = hu(cos θu + sin θv) + cos θ gh2 + h αj(cos θαj + sin θβj) j 1 2 (cid:88) j 1 2j + 1 (cid:32) + sin θ h(uv + (cid:33) ) (cid:88) j αjβj 2j + 1 (cid:32) = cos θ h(u2 + (cid:33) α2 j 2j + 1 (cid:88) j ) + 1 2 gh2 = LHS. The third component in the RHS of (3.1.13) is: (cid:32) RHS = sin θ h(u2 θ + (cid:33) (αj)2 θ 2j + 1 (cid:88) j ) + 1 2 gh2 + cos θh(uθvθ + (cid:88) j (αj)θ(βj)θ 2j + 1 ) = h (cid:0)sin θu2 θ + cos θuθvθ (cid:1) + sin θ 1 2 gh2 + h (cid:88) j 1 2j + 1 (cid:0)sin θ(αj)2 θ + cos θ(αj)θ(βj)θ (cid:1) (3.1.10) = hvuθ + sin θ 1 2 gh2 + h (cid:88) j 1 2j + 1 βj(αj)θ = hv(cos θu + sin θv) + h (cid:88) j 1 2j + 1 βj(cos θαj + sin θβj) + sin θ 1 2 gh2 (cid:32) = cos θ h(uv + (cid:33) ) (cid:88) j αjβj 2j + 1 (cid:32) + sin θ h(v2 + (cid:33) β2 j 2j + 1 (cid:88) j ) + 1 2 gh2 = LHS. 
14 The fourth component in the RHS of (3.1.13) is    RHS = cos θh 2uθ(α1)θ + (cid:88) j,k A1jk(αj)θ(αk)θ)  − sin θh uθ(β1)θ + vθ(α1)θ +  A1jk(αj)θ(βk)θ)  (cid:88) j,k = h (2 cos θuθ(α1)θ − sin θ(uθ(β1)θ + vθ(α1)θ)) + h (cid:88) j,k A1jk(αj)θ (cos θ(αk)θ − sin θ(βk)θ) (3.1.11),(3.1.7) = h (2 cos θuα1 + sin θ(uβ1 + vα1)) + h (cid:88) j,k A1jk(αj)θαk = h (2 cos θuα1 + sin θ(uβ1 + vα1)) + h (cid:88) j,k A1jk(cos θαj + sin θβj)αk = cos θh(2uα1 + (cid:88) j,k = LHS. A1jkαjαk) + sin θh((uβ1 + vα1) + (cid:88) j,k A1jkαjβk) Then we compute the fifth component:    RHS = sin θh 2uθ(α1)θ + (cid:88) j,k A1jk(αj)θ(αk)θ)  + cos θh uθ(β1)θ + vθ(α1)θ + (cid:88) j,k A1jk(αj)θ(βk)θ)   = h (2 sin θuθ(α1)θ + cos θ(uθ(β1)θ + vθ(α1)θ)) + h (cid:88) j,k A1jk(αj)θ (sin θ(αk)θ + cos θ(βk)θ) (3.1.12),(3.1.8) = h (cos θ(uβ1 + vα1) + 2 sin θvβ1) + h (cid:88) j,k A1jk(αj)θβk = h(cos θ(uβ1 + vα1) + 2 sin θvβ1) + h (cid:88) j,k A1jk(cos θαj + sin θβj)βk = cos θh((uβ1 + vα1) + (cid:88) j,k A1jkαjβk) + sin θh(2vβ1 + (cid:88) j,k A1jkβjβk) = LHS. For the remaining components, the proof is similar to the fourth and fifth ones and the details are omitted here. Next, we prove the rotational invariance for the non-conservative part: Lemma 3.1.2 (rotational invariance for non-conservative part). The non-conservative part in (2.1.12) satisfies the rotational invariance: cos θ P (U ) + sin θ Q(U ) = T −1P (T U )T, (3.1.17) 15 for any θ and U . Proof. The matrix for the non-conservative part in (2.1.17) consists of two parts. We start the proof with the first part. Notice that T (θ) has the same block diagonal structure as P1(U ) in (2.1.18) and Q1(U ) in (2.1.19). Therefore, it suffices to check that cos θ p(U ) + sin θ q(U ) = T −1 2 p(T U )T2, (3.1.18) where the matrices p(U ) and q(U ) are defined in (2.1.20) and T2 defined in (3.1.4). This equality can be easily verified by direct calculations. The proof of the second part P2(U ) in (2.1.21) and Q2(U ) in (2.1.22) follows similarly by using the multiplication of the block matrices and the rotational invariance property of each sub-block matrices gij(U ) and hij(U ) defined in (2.1.24). Combining the above two lemmas, we have the following theorem: Theorem 3.1.1 (rotational invariance of SWME). The SWME (2.1.12) satisfies the rota- tional invariance: cos θ A(U ) + sin θ B(U ) = T −1A(T U )T. (3.1.19) Proof. Taking the derivative with respect to U on both sides of (3.1.13) in Lemma 3.1.1, we have cos θ ∂U F (U ) + sin θ ∂U G(U ) = T −1∂U F (T U )T. (3.1.20) Combining this with (3.1.17) in Lemma 3.1.2 , one immediately obtains cos θ A(U ) + sin θB(U ) = T −1A(T U )T. (3.1.21) Since the HSWME (2.1.25) is obtained by evaluating the coefficient matrices in the SWME (2.1.12) at αi = βi = 0 for 2 ≤ i ≤ N , its rotational invariance follows immediately: 16 Theorem 3.1.2 (rotational invariance of HSWME). The HSWME (2.1.25) satisfies the rotational invariance: cos θ AH(U ) + sin θ BH(U ) = T −1AH(T U )T. (3.1.22) 3.2 Other proof of rotational invariance of the SWME and the HSWME In the previous part, we prove the rotational invariance of the HSWME (2.1.25) by first proving the rotational invariance of the SWME (2.1.12). In this part, we will show an alternative proof of the rotational invariance of the HSWME (2.1.25) with the aid of its block structure. We first show the explicit form of the coefficient matrices of the HSWME (2.1.25). This is also given in Theorem 4.3.1 and Theorem 4.3.2 in [26] with another order of variables. 
For completeness, we include the result and the proof here. Lemma 3.2.1 (coefficient matrices of HSWME). The coefficient matrices of the HSWME (2.1.25) are given by:                                 AH = 1 0 −u2 − α2 1 3 + gh 2u −uv − α1β1 3 −2uα1 v 2α1 0 0 u 0 −(uβ1 + vα1) β1 α1 2α1 3 β1 3 u 0 − 2 3α2 1 − 2 3α1β1 0 0 0 α1 3 0 u 0 0 1 3α1 0 − 1 3β1 2 3α1 17                                 0 0 0 0 0 N 2N +1α1 0 u (3.2.1) 3 5α1 1 5β1 u 0 . . . . . . 0 2 5α1 0 u . . . . . . N −1 2N −1α1 − 1 2N −1β1 · · · · · · · · · · · · . . . . . . 0 N 2N −1α1 0 0 0 0 N +1 2N +1α1 1 2N +1β1 u 0 in the x direction and 0 −uv − α1β1 3 −v2 − β2 1 3 + gh 0 0 v 1 u 2v −(uβ1 + vα1) β1 α1 −2vβ1 − 2 3α1β1 − 2 3β2 1 0 0 0 2β1 0 0                                 BH = β1 3 0 v 0 α1 3 2β1 3 0 v 2 3β1 − 1 3α1 1 3β1 0 2 5β1 0 v 0 . . . . . . 1 5α1 3 5β1 0 v . . . . . . · · · · · · . . . . . . N 2N −1β1 − 1 2N −1α1 N −1 2N −1β1 0 0 0 N 2N +1β1 0 v 0                                 0 0 1 2N +1α1 N +1 2N +1β1 0 v (3.2.2) in the y direction. Proof. See the proof in Appendix A.3. Now we present the alternative proof of Theorem 3.1.2 by using the coefficient matrices of the HSWME (2.1.25) given in Lemma 3.2.1. Alternative proof of Theorem 3.1.2. The coefficient matrix AH in (3.2.1) can be written as in the block form:  AH = 0 d1 d2 A11 A12 d3 A21 A22 . . . d4                                A23 . . . . . . AN,N −1 AN,N AN,N +1 AN +1,N AN +1,N +1 18 (3.2.3) with for i > 1 d1 = (1, 0), d2 =    −u2 − α2 1 3 + gh −uv − α1β1 3    ,    d3 = −2uα1 −(uβ1 + vα1)    ,    d4 =    , − 2 3α2 1 − 2 3α1β1    A11 =  2u 0   , A21 = u v    2α1 0 β1 α1    Aii =    u 0 0 u     , Ai,i+1 =   i+1 2i+1α1 1 2i+1β1 0 i 2i+1α1     , Ai+1,i =   i−1 2i−1α1 −1 2i−1β1 0 i 2i−1α1    . The coefficient matrix BH in (3.2.2) can be written as in the block form similarly: 0 f1 f2 B11 B12 f3 B21 B22 . . . f4                 BH = B23 . . . . . . BN,N −1 BN,N BN,N +1 BN +1,N BN +1,N +1 with f1 = (0, 1), f2 =    −uv − α1β1 3 −v2 − β2 1 3 + gh    ,    f3 = −(uβ1 + vα1) −2vβ1    ,                 f4 = (3.2.4)    − 2 3α1β1 − 2 3β2 1    ,    B11 =   v u 0 2v   , Bii =   v 0 0 v    B21 =    β1 α1 0 2β1     , Bi,i+1 =   i 2i+1β1 0 1 2i+1 α1 i+1 2i+1β1     , Bi+1,i =   i 2i−1β1 0    . −1 2i−1α1 i−1 2i−1β1 19 Next, we compute T −1AH(T U )T and verify that it is equal to cos θAH(U ) + sin θBH(U ). T −1AH(T U )T = diag(1, T −1 2 , · · · , T −1 2 ) 0 d1 d2 A11 A12 d3 A21 A22 . . . d4                                 T A23 . . . . . . AN,N −1 AN,N AN,N +1 AN +1,N AN +1,N +1 0 d1 T −1 2 d2 T −1 2 A11 T −1 2 A12 T −1 2 d3 T −1 2 A21 T −1 2 A22 . . . T −1 2 d4                 =                 T T −1 2 A23 . . . . . . T −1 2 AN,N −1 T −1 2 AN,N T −1 2 AN,N +1 2 AN +1,N T −1 T −1 2 AN +1,N +1 0 d1T2 T −1 2 d2 T −1 2 A11T2 T −1 2 A12T2 T −1 2 d3 T −1 2 A21T2 T −1 2 A22T2 . . . T −1 2 d4                 = T −1 2 A23T2 . . . . . . T −1 2 AN,N −1T2 T −1 2 AN,N T2 T −1 2 AN,N +1T2 T −1 2 AN +1,N T2 T −1 2 AN +1,N +1T2                 where the independent variable T U in di for 1 ≤ i ≤ 4 and Aij for 1 ≤ i, j ≤ N + 1 is omitted for simplicity. 
Now we compute each block of the matrix: (cid:18) (cid:19) d1T2 = 1 0    cos θ sin θ − sin θ cos θ    = (cid:18) (cid:19) cos θ sin θ = cos θ d1 + sin θ f1. (3.2.5) T −1 2 d2(T U ) = cos θ d2(U ) + sin θ f2(U ). T −1 2 d3(T U ) = cos θ d3(U ) + sin θ f3(U ). (3.2.6) (3.2.7) 20 T −1 2 AiiT2 =   T −1 2 d4(T U ) = cos θ d4(U ) + sin θ f4(U ).      cos θ − sin θ 2u 0         cos θ sin θ − sin θ cos θ  v u  sin θ  cos θ  = cos θ   2u 0   + sin θ u v   v u 0 2v   = cos θAii + sin θBii. T −1 2 Ai,i+1T2 = cos θAi,i+1 + sin θBi,i+1. T −1 2 Ai+1,iT2 = cos θAi+1,i + sin θBi+1,i.    (3.2.8) (3.2.9) (3.2.10) (3.2.11) Therefore, we have proved cos θ AH(U ) + sin θ BH(U ) = T −1AH(T U )T. (3.2.12) Remark 3.2.1. The rotational invariance of the SWME (2.1.12) can also be proved in the similar line as the above proof by using the block structure of the coefficient matrices (A.3.24) and (A.3.44) explicitly. We omit it here for space considerations. 3.3 General closure relation with the rotational invariance From the above alternative proof, we observe that the rotational invariance of the HSWME relies on the rotational invariance of each sub-block (of size 2 × 2) of the coefficient matrices. Motivated by this observation, we would like to analyze the rotational invariance of matrices of size 2 × 2 and find out some general relations which satisfy the rotational invariance. We note that this will be key in deriving our new model, which has a more general closure relation than that of HSWME. Definition 3.3.1 (rotational invariance of 2 × 2 matrices). Consider two matrices A(V ) and B(V ) of size 2 × 2 given by A(V ) =    a11(V ) a12(V ) a21(V ) a22(V )   ∈ R2×2,  (3.3.1) 21 and B(V ) =    b11(V ) b21(V )   ∈ R2×2,  b12(V ) b22(V ) (3.3.2) with V = (p, q)T ∈ R2. We say that A(V ) and B(V ) satisfy the rotational invariance if T −1 2 A(T2V )T2 = cos θ A(V ) + sin θ B(V ), (3.3.3) for any 0 ≤ θ < 2π and any V ∈ R2 with T2 being the rotational matrix    T2 = cos θ sin θ − sin θ cos θ    . (3.3.4) Note that, in the definition, the dummy variables (p, q) could be (u, v) or (αi, βi) for 1 ≤ i ≤ N in the shallow water moment model. In the following part, we will derive some conditions for the rotational invariance of 2 × 2 matrices to be satisfied. We first present several necessary conditions: Lemma 3.3.1. If the matrices A(V ) and B(V ) given by A(V ) =    a11(V ) a12(V ) a21(V ) a22(V )     , B(V ) =   b11(V ) b12(V ) b21(V ) b22(V )    (3.3.5) satisfy the rotational invariance of 2 × 2 matrices, then the following relations hold: 1. All the entries in B(V ) are determined by A(V ) in the following way: b11(p, q) = a22(q, −p), b12(p, q) = −a21(q, −p), b21(p, q) = −a12(q, −p), b22(p, q) = a11(q, −p). (3.3.6) 2. A(V ) is an odd function in the sense that: A(−V ) = −A(V ). (3.3.7) 22 Proof. 1. By taking θ = π 2 in (3.3.3), we have  T2 =   0 1 −1 0    , and T2V =    q −p    . Therefore, we have      T −1 2 A(T2V )T2 =       0 −1 a11(T V ) a12(T V ) a22(T V ) −a21(T V ) 1 0 a21(T V ) a22(T V ) −a12(T V ) a11(T V )     =       0 1 −1 0    . Then the relation (3.3.3) reduces to    b11(V ) b21(V )  b12(V ) b22(V )   =    a22(T V ) −a21(T V ) −a12(T V ) a11(T V )    , which completes the proof. 2. Taking θ to be (θ + π) in (3.3.3), we have cos(θ + π)A(V ) + sin(θ + π)B(V ) = (−T2)−1A(−T2V )(−T2) which implies − cos(θ)A(V ) − sin(θ)B(V ) = T −1 2 A(−T2V )T2. 
Comparing the above equation with (3.3.3), we have A(T2V ) = −A(−T2V ). Since T2 is invertible, we have that A is an odd function. Next, we will restrict to the case of linear functions and find out the necessary and sufficient conditions for the matrices to be rotational invariant. The conditions will be presented in Thereom 3.3.1. To prove the theorem, we prepare the following lemma: 23 Lemma 3.3.2. Assume that A(V ) and B(V ) satisfy the rotational invariance of 2 × 2 matrices and they are linear functions of V . 1. If A(V ) only has two non-zero entries in the first column: A(V ) =    a11(V ) 0 a21(V ) 0    , then A(V ) and B(V ) have to be of the form: A(V ) = c1     p 0 q 0   + c2    q −p 0  0   , and where c1, c2 ∈ R. B(V ) = c1    0 p 0 q     + c2   0 q 0 −p    , 2. If A(V ) only has two non-zero entries in the diagonal: A(V ) =    a11(V ) 0 0 a22(V )    , then A(V ) and B(V ) have to be of the form: A(V ) = c1    p 0 0 p      + c2   q 0 0 q   , and      q 0 0 q   − c2      , p 0 0 p B(V ) = c1 where c1, c2 ∈ R. 3. If A(V ) only has two non-zero entries in the second column:    , 0 a21(V ) 0 a22(V )    A(V ) = 24 (3.3.8) (3.3.9) (3.3.10) (3.3.11) (3.3.12) (3.3.13) (3.3.14) then A(V ) and B(V ) have to be of the form:    A(V ) = c1   0 p 0 q   + c2      , 0 q 0 −p and where c1, c2 ∈ R. B(V ) = c1    Proof. See the proof in Appendix A.4.    −p 0 −q 0   + c2   −q 0   , 0 p (3.3.15) (3.3.16) Theorem 3.3.1. Assume that the matrices A(V ) and B(V ) satisfy the rotational invariance of 2 × 2 matrices and they are linear functions of V = (p, q)T . Then they must be of the form            A(V ) = c1   p 0 q 0   + c2   q −p 0  0   + c3   p 0 0 p   + c4   q 0 0 q   + c5   0 p 0 q   + c6   0 q 0 −p   , (3.3.17) and B(V ) = c1    0 p 0 q        + c2   0 q 0 −p   + c3   q 0 0 q   − c4    p 0 0 p      − c5   p 0 q 0   + c6    p −q 0    , 0 where ci ∈ R for 1 ≤ i ≤ 6. Proof. Since the matrix B(V ) is determined by A(V ) by Lemma 3.3.1, we only need to (3.3.18) consider the form of A(V ). For any matrix A(V ) of the form    A(V ) = a11(V ) a12(V ) a21(V ) a22(V )    , we can decompose it as A(V ) =    ˜a21(V ) 0 a21(V ) 0     +   a11(V ) − ˜a21(V ) a12(V ) 0 a22(V )    , 25 where ˜a21(V ) is a linear function uniquely determined by a21(V ) such that P (V ) :=    ˜a21(V ) 0 a21(V ) 0    satisfies the form of rotational invariance given in Case 1 in Lemma 3.3.2. Since A(V ) and P (V ) both satisfy the rotational invariance, we have that the remaining part    Q(V ) := a11(V ) − ˜a21(V ) a12(V ) 0 a22(V )    also satisfies the rotational invariance. Next, we decompose Q(V ) into two parts:    Q(V ) = a11(V ) − ˜a21(V ) 0 0 ˜a22(V )     +   0 a12(V ) 0 a22(V ) − ˜a22(V )    (3.3.19) where ˜a22(V ) is a linear function uniquely determined by (a11(V ) − ˜a21(V )) such that    R(V ) := a11(V ) − ˜a21(V ) 0 0 ˜a22(V )    satisfies the rotational invariance given in Case 2 in Lemma 3.3.2. Lastly, the second part in (3.3.19)  0   S(V ) := a12(V ) 0 a22(V ) − ˜a22(V )    (3.3.20) (3.3.21) must satisfy the rotational invariance and falls into Case 3 in Lemma 3.3.2. The proof is completed. Motivated by the constraint given in Theorem 3.3.1, we can modify the coefficient matri- ces in the HSWME (2.1.25) in the following way to make it satisfy the rotational invariance property. 
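Before stating the modified system, the constraint in Theorem 3.3.1 is straightforward to check numerically. The following is a minimal sketch, not part of the proof: the coefficients $c_1,\dots,c_6$, the angle $\theta$ and the vector $V = (p, q)^T$ are arbitrary test values, and the $c_6$ block of $B$ is written out via the entry relations of Lemma 3.3.1.

```python
# Numerical check (sketch) of the rotational-invariance relation (3.3.3) for the
# 2x2 building blocks of Theorem 3.3.1; c1..c6, theta and V = (p, q) are arbitrary.
import numpy as np

def A_block(p, q, c):
    c1, c2, c3, c4, c5, c6 = c
    return (c1*np.array([[p, 0], [q, 0]]) + c2*np.array([[q, 0], [-p, 0]])
            + c3*np.array([[p, 0], [0, p]]) + c4*np.array([[q, 0], [0, q]])
            + c5*np.array([[0, p], [0, q]]) + c6*np.array([[0, q], [0, -p]]))

def B_block(p, q, c):
    # entries follow b11(p,q) = a22(q,-p), b12 = -a21(q,-p), b21 = -a12(q,-p), b22 = a11(q,-p)
    c1, c2, c3, c4, c5, c6 = c
    return (c1*np.array([[0, p], [0, q]]) + c2*np.array([[0, q], [0, -p]])
            + c3*np.array([[q, 0], [0, q]]) - c4*np.array([[p, 0], [0, p]])
            - c5*np.array([[p, 0], [q, 0]]) + c6*np.array([[-q, 0], [p, 0]]))

rng = np.random.default_rng(1)
c, (p, q), th = rng.standard_normal(6), rng.standard_normal(2), 0.8
T2 = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
p_th, q_th = T2 @ np.array([p, q])            # rotated variables T2 V

lhs = np.cos(th)*A_block(p, q, c) + np.sin(th)*B_block(p, q, c)
rhs = np.linalg.inv(T2) @ A_block(p_th, q_th, c) @ T2
print(np.allclose(lhs, rhs))                  # expected: True
```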
26 Theorem 3.3.2 (general closure relation with rotational invariance). Suppose that the ma- trices Aij(U ) ∈ R2×2 and Bij(U ) ∈ R2×2 satisfy the rotational invariance for the 2 × 2 matrices for 1 ≤ i, j ≤ N + 1. Then the matrices A and B given by A = AH + in the x direction and B = BH + 0             0             A11 A11 ... A12 A12 ... · · · A1,N +1 · · · A1,N +1 ... AN +1,1 AN +1,2 · · · AN +1,N +1             B11 B11 ... B12 B12 ... · · · B1,N +1 · · · B1,N +1 ... BN +1,1 BN +1,2 · · · BN +1,N +1             (3.3.22) (3.3.23) in the y direction, satisfy the rotational invariance. Here AH and BH are the coefficient matrices in the HSWME (2.1.25). Proof. The proof is similar to the alternative proof of Theorem 3.1.2 in Section 3.2. Remark 3.3.1. From Theorem 3.3.2, to preserve the rotational invariance of the moment model, we can modify any entries except the first row and the first column, as long as the sub-blocks satisfy the rotational invariance for the 2 × 2 matrices. However, in practice, we will only modify the last row blocks, i.e., AN +1,j and BN +1,j for 1 ≤ j ≤ N + 1, to guarantee provable hyperbolicity. This will be illustrated in the next section in detail. 27 CHAPTER 4 ANALYSIS OF THE HYPERBOLICITY In this section, we analyze the hyperbolicity of the moment models. With the aid of the rotational invariance, the hyperbolicity in 2D is equivalent to the hyperbolicity in the x direction or y direction. Therefore, it suffices to only analyze the real diagonalizability of the coefficient matrix in x direction. We will first prove the hyperbolicity of the HSWME (2.1.25) in 2D. Then we generalize the β-HSMWE in 1D proposed in [21] to 2D and show its hyperbolicity. Lastly, we propose the general framework for constructing provable hyperbolic moment models with specified propogation speeds. 4.1 Hyperbolicity of the HSWME In this part, we will prove the hyperbolicity of the HSWME (2.1.25) in 2D. This reduces to check the real diagonalizability of the coefficient matrix AH in the x direction. Note that the characteristic polynomial of AH was analyzed in Theorem 4.3.3 in [26]. However, the proof in [26] only shows that the eigenvalues of AH are real but not necessarily distinct. Therefore, the proof is incomplete since the real diagonalizability requires not only the real eigenvalues but also a complete set of eigenvectors. In this part, we will prove the real diagonalizability of AH with the aid of the associated polynomial sequence and show that the eigenvalues are related to the Gauss-Lobatto and Gauss-Legendre quadrature points. To analyze the hyperbolicity, we use another ordering of variables: W = (h, hu, hα1, · · · , hαN , hv, hβ1, · · · , hβN )T . (4.1.1) Note that using different order of variables will not change the rotational invariance or the hyperbolicity of the model, but it does simplify the analysis. Using this set of variables (4.1.1), the coefficient matrix (3.2.1) in the x direction in HSWME (2.1.25) can be written as ˜AH(W ) =    ˜A11(W ) ˜A21(W ) 28    , 0 ˜A22(W ) (4.1.2) where the block matrices ˜A11(W ) ∈ R(N +2)×(N +2), ˜A21(W ) ∈ R(N +1)×(N +2) and ˜A22(W ) ∈ R(N +1)×(N +1) are given by 0 gh − u2 − 1 3α2 1 1 2u 2 3α1 −2uα1 2α1 u − 2 3α2 1 0 1 3α1 ˜A11(W ) =                    −uv − α1β1 3 v −(uβ1 + vα1) β1 β1 3 0 − 2 3α1β1 0 − 1 3β1                 ˜A21(W ) = and                    3 5α1 u . . . 4 7α1 . . . N −2 2N −3α1 . . . 
u N +1 2N +1 α1 N −1 2N −1α1 u , (4.1.3) 1 5β1 0 . . . 1 7β1 . . . − 1 2N −1β1 . . . 0 1 2N +1 β1 − 1 2N −1β1 0                 , (4.1.4) ˜A22(W ) = u α1 α1 3 u 2 3α1 2α1 5 u . . .                                 . 3α1 7 . . . N −1 2N −3α1 . . . u N 2N +1α1 N 2N −1α1 u (4.1.5) Therefore, the characteristic polynomial of ˜AH(W ) is det(λI− ˜AH(W )) = det    λI − ˜A11(W ) − ˜A21(W ) 0 λI − ˜A22(W )   = det(λI− ˜A11(W )) det(λI− ˜A22(W )).  Next, we focus on the analysis of the characteristic polynomial of ˜A11(W ) and ˜A22(W ). 29 Notice that both ˜A11(W ) and ˜A22(W ) are lower Hessenberg matrices. Before the discus- sion, we review important properties of the Hessenberg matrix. These properties facilitate directly relating the eigenvalues of a Hessenberg matrix to the roots of some associated poly- nomial. We start with the definitions of the (unreduced) lower Hessenberg matrix and the associated polynomial sequence [7]: Definition 4.1.1 (lower Hessenberg matrix). The matrix A = (aij)n×n is called lower Hes- senberg matrix if aij = 0 for j > i + 1. It is called unreduced lower Hessenberg matrix if further ai,i+1 ̸= 0 for i = 1, 2, · · · , n − 1. Definition 4.1.2 (associated polynomial sequence [7]). Let A = (aij)n×n be an unreduced lower Hessenberg matrix. The associated polynomial sequence {qi}0≤i≤n is defined as follows: q0(x) = 1, qi(x) = (cid:32) 1 ai,i+1 xqi−1(x) − i (cid:88) j=1 (cid:33) aijqj−1(x) , 1 ≤ i ≤ n, (4.1.6) with an,n+1 := 1. Theorem 4.1.1 ([7]). Let A = (aij)n×n be an unreduced lower Hessenberg matrix and {qi}0≤i≤n is the associated polynomial sequence with A. The following conclusions hold true: 1. If λ is a root of qn, then λ is an eigenvalue of the matrix A and a corresponding eigenvector is (q0(λ), q1(λ), · · · , qn−1(λ))T . 2. If all the roots of qn are simple, then the characteristic polynomial of A is given by det(xI − A) = ρqn(x), with ρ = Πn−1 i=1 ai,i+1. With the aid of the associated polynomial sequence, we are able to obtain the analytical form of the characteristic polynomials of ˜A11 in (4.1.3) and ˜A22 in (4.1.5). 30 Lemma 4.1.1 (characteristic polynomial of ˜A11 in (4.1.3)). The associated polynomial se- quence of ˜A11 in (4.1.3) is given by q0(x) = 1, q1(x) = x, q2(x) = 3(x − u)2 − 3gh + α2 1 2α1 , qn(x) = 2n − 1 n(n − 1)α1 P ′ n−1 (cid:19) (cid:18) x − u α1 (cid:0)(x − u)2 − gh − α2 1 (cid:1) , 3 ≤ n ≤ N + 1, (4.1.7) and qN +2(x) = 1 N + 1 N +1(ξ) (cid:0)(x − u)2 − gh − α2 P ′ 1 (cid:1) . (4.1.8) Here Pn(ξ) is the Legendre polynomial on [−1, 1] with the standardization condition Pn(1) = 1. Proof. For convenience, we first introduce the notations: and ξ := x − u α1 , pg(x) := (x − u)2 − gh − α2 1. The first several associated polynomials can be obtained by direct computation: q0(x) = 1. q1(x) = q2(x) = 1 1 (x − 0) = x. (xq0(x) − a11q0(x)) = 1 a12 1 a23 1 2 3α1 3(x − u)2 − 3gh + α2 1 2α1 x2 − (gh − u2 − . = = 1 3 (xq1(x) − (a21q0(x) + a22q1(x))) (cid:19) (cid:18) α2 1 + 2ux) 31 (xq2(x) − (a31q0(x) + a32q1(x) + a33q2(x))) − (−2uα1 + 2α1x + u 3(x − u)2 − 3gh + α2 1 2α1 (cid:19) ) q3(x) = = = = = (cid:18) 1 a34 1 3 5α1 5(x − u)((x − u)2 − gh − α2 1) 2α2 1 3(x − u)2 − 3gh + α2 1 2α1 x 5ξ 2α1 5 6α1 pg(x) P ′ 2(ξ)pg(x). (xq3(x) − (a41q0(x) + a42q1(x) + a43q2(x) + a44q3(x))) pg(x) − (− 2 3 α2 1 + 0 + 1 3 α1 3(x − u)2 − 3gh + α2 1 2α1 (cid:19) ) 1)((x − u)2 − gh − α2 1) q4(x) = = = = = (cid:18) 1 a45 5ξ 1 4 2α1 7α1 7(5(x − u)2 − α2 (x − u) 8α3 1 7(5ξ2 − 1) 8α1 pg(x) 7 12α1 P ′ 3(ξ)pg(x). 
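To make Definition 4.1.2 and Theorem 4.1.1 concrete, here is a minimal computational sketch; the small unreduced lower Hessenberg matrix used below is an arbitrary test case, not one of the moment matrices.

```python
# Sketch: build the associated polynomial sequence (Definition 4.1.2) of a small
# unreduced lower Hessenberg matrix and check Theorem 4.1.1, i.e. that the roots
# of q_n coincide with the eigenvalues of the matrix.
import numpy as np
import sympy as sp

n = 4
rng = np.random.default_rng(0)
A = np.tril(rng.standard_normal((n, n)), k=1)    # zero above the first superdiagonal
for i in range(n - 1):
    A[i, i + 1] = 1.0 + abs(A[i, i + 1])         # unreduced: nonzero superdiagonal

x = sp.symbols("x")
q = [sp.Integer(1)]                              # q_0 = 1
for i in range(1, n + 1):
    a_sup = A[i - 1, i] if i < n else 1.0        # convention a_{n,n+1} := 1
    q.append(sp.expand((x*q[i - 1] - sum(A[i - 1, j]*q[j] for j in range(i)))/a_sup))

roots = sorted((complex(r) for r in sp.nroots(q[n])), key=lambda z: (z.real, z.imag))
eigs = sorted(np.linalg.eigvals(A), key=lambda z: (z.real, z.imag))
print(np.allclose(roots, eigs))                  # expected: True
```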
Now we prove by induction and assume that (4.1.7) holds for 4 ≤ n ≤ k with k ≤ N . We will prove it also holds for n = k + 1. qk+1(x) = (cid:32) 1 ak+1,k+2 xqk(x) − k+1 (cid:88) j=1 (cid:33) ak+1,jqj−1(x) (xqk(x) − (ak+1,kqk−1(x) + ak+1,k+1qk(x))) (x − u) 2k − 1 k(k − 1)α1 P ′ k−1(ξ)pg(x) − α1 2k − 3 (k − 1)(k − 2)α1 (cid:19) P ′ k−2(ξ)pg(x) k − 2 2k − 3 (cid:19) ξP ′ k−1(ξ) − 1 k − 1 P ′ k−2(ξ) pg(x) ((k − 1)P ′ k(ξ) + kP ′ k−2(ξ)) − (cid:19) P ′ k−2(ξ) pg(x) 1 k − 1 = = = = = (cid:18) 1 ak+1,k+2 1 k+1 2k+1α1 2k + 1 (k + 1)α1 2k + 1 (k + 1)α1 2k + 1 k(k + 1)α1 (cid:18) 2k − 1 k(k − 1) 1 k(k − 1) (cid:18) P ′ k(ξ)pg(x). where we use the relation (2k − 1)xP ′ k−1(x) = (k − 1)P ′ k(x) + kP ′ k−2(x). 32 Lastly, we compute qN +2(x): qN +2(x) = 1 aN +2,N +3 (xqN +1(x) − (aN +2,N +1qN (x) + aN +2,N +2qN +1(x))) = xqN +1(x) − (aN +2,N +1qN (x) + aN +2,N +2qN +1(x)) (4.1.9) = 1 N + 1 P ′ N +1(ξ)pg(x). The proof is completed. Corollary 4.1.1. Since the roots of P ′ N +1(ξ) are real and distinct, the characteristic poly- nomial of ˜A11 is det(xI − ˜A11) = N ! (2N + 1)!! αN 1 P ′ N +1 (cid:19) (cid:18) x − u α1 (cid:0)(x − u)2 − gh − α2 1 (cid:1) . (4.1.10) Threfore, the eigenvalues of ˜A11 are given by λ1,2 = u ± (cid:113) gh + α2 1, and λi+2 = u + riα1, i = 1, 2, · · · , N. where ri with i = 1, 2, · · · , N are the roots of P ′ N +1(ξ), i.e. the Gauss-Lobatto quadrature points in [−1, 1]. Lemma 4.1.2 (associated polynomial sequence of ˜A22). For the matrix ˜A22 given in (4.1.5), the associated polynomial sequences satisfy: qn(x) = (2n + 1)Pn (cid:19) (cid:18)x − u α1 , 0 ≤ n ≤ N. (4.1.11) and qN +1(x) = (N + 1)α1PN +1 (cid:18)x − u α1 (cid:19) . (4.1.12) Proof. We compute the associated polynomial sequence by recurrence relation: q0(x) = 1, 33 = (cid:18) x − u α1 3 1 2 5α1 q1(x) = 1 a12 (xq0(x) − a11q0(x)) = 3(x − u) α1 = 3ξ = 3P1(ξ), q2(x) = 1 a23 (xq1(x) − (a21q0(x) + a22q1(x))) = (x − u) 3(x − u) α1 (cid:19) − α1 = 5 2 (3ξ2 − 1), = 5P2(ξ) with ξ := x−u α1 . Now we prove by induction and assume that the formula (4.1.11) holds for 2 ≤ n ≤ k. We will prove it also holds for n = k + 1. qk+1(x) = (cid:32) 1 ak+1,k+2 xqk(x) − k+1 (cid:88) j=1 (cid:33) ak+1,jqj−1(x) (cid:18) = = = = 1 ak+1,k+2 1 k+1 2k+3α1 2k + 3 k + 1 2k + 3 k + 1 (xqk(x) − (ak+1,kqk−1(x) + ak+1,k+1qk(x))) (x − u)(2k + 1)Pk(ξ) − k 2k − 1 α1(2k − 1)Pk−1(ξ) (cid:19) ((2k + 1)ξPk(ξ) − kPk−1(ξ)) (k + 1)Pk+1(ξ) = (2k + 3)Pk+1(ξ), Lastly, qN +1(x) = xqN (x) − (aN +1,N qk−1(x) + aN +1,N +1qk(x)) = (N + 1)α1PN +1 (cid:18) x − u α1 (cid:19) . Corollary 4.1.2 (characteristic polynomial of ˜A22). The matrix ˜A22 is real diagonalizable and its characteristic polynomial is det(λI − ˜A22) = (N + 1)! (2N + 1)!! αN +1 1 PN +1 (cid:18) x − u α1 (cid:19) . (4.1.13) Moreover, the eigenvalue of ˜A22 is given by λi = siα1, i = 1, 2, · · · , N + 1, (4.1.14) where si for i = 1, 2, · · · , N + 1 are the roots of Legendre polynomial PN +1(ξ), i.e. the Gauss-Legendre quadrature points in [−1, 1]. 34 Since the roots of PN +1(ξ) are all distinct, P ′ N +1(ξ) and PN +1(ξ) have no common roots, we immediately have that all the eigenvalues of AH are real and distinct. The result is summarized as follows: Theorem 4.1.2 (real diagonalizability of AH). The matrix AH is real diagonalizable. Its characteristic polynomial is given by: det(λI − AH) = N !(N + 1)! ((2N + 1)!!)2 α2N +1 1 P ′ N +1 (cid:19) (cid:18) x − u α1 PN +1 (cid:19) (cid:18) x − u α1 (cid:0)(x − u)2 − gh − α2 1 (cid:1) . 
(4.1.15) Moreover, the eigenvalues are given by λ1,2 = u ± (cid:113) gh + α2 1, λi+2 = u + riα1, i = 1, 2, · · · , N, λi+N +2 = u + siα1, i = 1, 2, · · · , N + 1, where ri for i = 1, 2, · · · , N are the roots of the derivative of Legendre polynomial P ′ N +1(ξ) and si for i = 1, 2, · · · , N + 1 are the roots of Legendre polynomial PN +1(ξ). Combining the real diagonalizability of AH in Theorem 4.1.2 with the rotational invari- ance in Theorem 3.1.2, we have the hyperbolicity of the HSWME (2.1.25) in 2D: Theorem 4.1.3 (hyperbolicity of the HSWME). The HSWME model (2.1.25) in 2D is hyperbolic. Remark 4.1.1. The analytical form of the eigenvectors can be derived by Theorem 4.1.1. This will be useful in some Riemann solvers. 4.2 Hyperbolicity of the β-HSWME In [21], a new version of shallow water moment model, called the β-HSWME, is proposed by modifying the last row of the coefficient matrix, so that predefined propagation speeds can be obtained. 35 The coefficient matrix of the β-HSWME [21] reads as: ˜Aβ,11(W ) = 0 gh − u2 − 1 3α2 1 1 2u 2 3α1 −2uα1 2α1 u − 2 3α2 1 0 1 3α1                                       . (4.2.1) 3 5α1 u . . . 4 7α1 . . . N −2 2N −3α1 . . . u N +1 2N +1α1 (N −1)(2N +1) (N +1)(2N −1)α1 u In Theorem 3.5 in [21], it was shown that the characteristic polynomial of ˜Aβ,11 in (4.2.1) is related to the Legendre polynomial by numerical computation up to order N = 100. Here, we will prove that this holds true for any N ≥ 1: Lemma 4.2.1 (characteristic polynomial of ˜Aβ,11). The characteristic polynomial of ˜Aβ,11 in (4.2.1) is det(λI − ˜Aβ,11) = N ! (2N − 1)!! αN 1 PN (cid:19) (cid:18)x − u α1 (cid:0)(x − u)2 − gh − α2 1 (cid:1) . (4.2.2) Proof. Notice that the matrix ˜Aβ,11 in (4.2.1) only differs from ˜A11 in (4.1.3) in the last row. Thus, the associated polynomial sequences for two matrices are the same for qi with 0 ≤ i ≤ N + 1. We only need to compute qN +2(x): qN +2(x) = 1 aN +2,N +3 (xqN +1(x) − N +2 (cid:88) j=1 aN +2,jqj−1(x)) = xqN +1(x) − (aN +2,N +1qN (x) + aN +2,N +2qN +1(x)) = (x − u)qN +1(x) − (N − 1)(2N + 1) (N + 1)(2N − 1) qN (x) P ′ N (ξ)pg(x) − (N − 1)(2N + 1) (N + 1)(2N − 1) 2N − 1 N (N − 1)α1 P ′ N −1(ξ)pg(x) 2N + 1 N (N + 1)α1 (cid:0)ξP ′ N (ξ) − P ′ N −1(ξ)(cid:1) pg(x) = (x − u) = = = 2N + 1 N (N + 1) 2N + 1 N (N + 1) 2N + 1 N + 1 N PN (ξ)pg(x) PN (ξ)pg(x). 36 Here we denote ξ := x−u α1 P ′ n−1(ξ) = nPn(ξ). and pg(x) := (x − u)2 − gh − α2 1 and use the relation ξP ′ n(ξ) − Therefore, the characteristic polynomial of ˜Aβ,11 is det(λI − ˜Aβ,11) = (N + 1)! (2N + 1)!! αN 1 2N + 1 N + 1 PN (ξ)pg(x) = N ! (2N − 1)!! αN 1 PN (ξ)pg(x). (4.2.3) With the matrix ˜Aβ,11 at hand, there is still some degree of freedom to choose the matrix ˜Aβ,22 to make the matrix ˜Aβ hyperbolic. One simple choice is to keep ˜A22 unchanged. In this case, the corresponding matrix in x direction is ˜Aβ =    ˜Aβ,11 ˜Aβ,21    , 0 ˜A22 where ˜Aβ,21 is determined by the rotational invariance constraint given in Theorem 3.3.1: −uv − α1β1 3 v −(uβ1 + vα1) β1 β1 3 0 − 2 3 α1β1 0 − 1 3 β1 ˜Aβ,21(W ) =                 1 5 β1 0 . . . 1 7 β1 . . . − 1 2N −3 β1                 . (4.2.4) . . . 
0 − 1 2N +1 β1 N 2 (2N −1)(N +1) β1 0 37 We can also write the coefficient matrices in the original order of variables: 0 −u2 − α2 1 3 + gh −uv − α1β1 3 1 2u v −2uα1 2α1 0 0 u 0 −(uβ1 + vα1) β1 α1 2α1 3 β1 3 u 0 0 α1 3 0 u 0 Aβ = − 2 3 α2 1 − 2 3 α1β1 0 0 0 1 3 α1 0 − 1 3 β1 2 3 α1                                  in the x direction and 0 −uv − α1β1 3 −v2 − β2 1 3 + gh 0 v 0 1 u 2v −(uβ1 + vα1) β1 α1 −2vβ1 − 2 3 α1β1 − 2 3 β2 1 0 0 0 2β1 0 0                                  Bβ = β1 3 0 v 0 α1 3 2β1 3 0 v 2 3 β1 − 1 3 α1 1 3 β1 0 3 5 α1 1 5 β1 u 0 . . . . . . 0 2 5 α1 0 u . . . . . . (N −1)(2N +1) (2N −1)(N +1) α1 N 2 (2N −1)(N +1) β1 · · · · · · · · · · · · . . . . . . 0 N 2N −1 α1 0 0 0 0 N +1 2N +1 α1 1 2N +1 β1 u 0 2 5 β1 0 v 0 . . . . . . 1 5 α1 3 5 β1 0 v . . . . . . · · · · · · . . . . . . N 2N −1 β1 0 N 2 (2N −1)(N +1) α1 (N −1)(2N +1) (2N −1)(N +1) β1 0 0 N 2N +1 β1 0 v 0                                  0 0 0 0 0 N 2N +1 α1 0 u (4.2.5)                                  0 0 1 2N +1 α1 N +1 2N +1 β1 0 v (4.2.6) in the y direction. Since PN (ξ) in Lemma 4.2.1 and PN +1(ξ) in Corollary 4.1.2 both have real and distinct roots and there are no common roots due to the interlacing property of zeros of orthogonal 38 polynomials, we have that the coefficient matrix (4.2.5) in the x direction has real and distinct roots and thus real diagonalizable. Combining with the rotational invariance in Theorem 3.3.2, we immediately have the hyperbolicity of the β-HSWME model in 2D: Theorem 4.2.1 (hyperbolicity of the β-HSWME). The β-HSWME model in 2D with the coefficient matrices given by (4.2.5) in x direction and (4.2.6) in y direction is hyperbolic. 4.3 A framework for constructing general closure relations with rotational in- variance and hyperbolicity Besides the previous hyperbolic shallow water moment models, we can also modify both ˜A11 and ˜A22 in (4.1.2), as long as each of them has real and distinct eigenvalues and they have no common eigenvalues. In this case, the real diagonalizability of the matrix ˜A in (4.1.2) is guaranteed. Here, we can borrow the idea from [21] to modify the entries in the last row of the matrix such that the modified matrix has predefined propagation speeds. Since only the last row is modified, the associated polynomial sequence remains unchanged except for the last one. Therefore, it is easy to derive the analytical form of the characteristic polynomial where the coefficients have a linear dependence on the entries in the last row. Next, by matching the coefficients using Vieta’s formulas which relate the coefficients of a polynomial to sums and products of its roots, or using the appropriate recurrence relation, the entries in the last row can be solved analytically. Similar ideas are also applied in machine learning moment closures for radiative transfer equation [16] where the roots are represented by the neural networks. After the modified ˜A11 and ˜A22 are obtained, the next step is to determine the form of ˜A21 by the rotational invariance constraint in Theorem 3.3.1. The coefficient matrix B in the y direction can be derived by this constraint as well. Then we have a moment model in 2D with provable rotational invariance and hyperbolicity. 
4.4 An example of constructing a general closure To illustrate the framework in the previous part, we show an example by modifying the entries in the last row of ˜A22 in (4.1.5) such that its characteristic polynomial is related to 39 the derivative of the Legendre polynomial. Since only the last row is modified, the associated polynomial sequence given in Lemma 4.1.2 will not be changed except the last one. Therefore, we compute the last associated polynomial: qN +1(x) = xqN (x) − N +1 (cid:88) j=1 aN +1,jqj−1(x) = (x − aN +1,N +1)qN (x) − N (cid:88) j=1 aN +1,jqj−1(x) = ((x − u) − (aN +1,N +1 − u))(2N + 1)PN (ξ) − N (cid:88) j=1 aN +1,j(2j − 1)Pj−1(ξ) = ξα1(2N + 1)PN (ξ) − (aN +1,N +1 − u)(2N + 1)PN (ξ) − N (cid:88) j=1 aN +1,j(2j − 1)Pj−1(ξ) = α1(N PN −1(ξ) + (N + 1)PN +1(ξ)) − (aN +1,N +1 − u)(2N + 1)PN (ξ) − N (cid:88) j=1 aN +1,j(2j − 1)Pj−1(ξ) = α1(N + 1)PN +1(ξ) − (aN +1,N +1 − u)(2N + 1)PN (ξ) + (α1N − aN +1,N (2N − 1))PN −1(ξ) − N −1 (cid:88) j=1 aN +1,j(2j − 1)Pj−1(ξ), where we use the recursion relation (2n + 1)ξPn(ξ) = nPn−1(ξ) + (n + 1)Pn+1(ξ). Next, we would like to take appropriate values of aN +1,j for 1 ≤ j ≤ N + 1 such that qN +1(x) is proporional to P ′ N +2(ξ). We use the relation P ′ N +2(ξ) = (2N + 3)PN +1(ξ) + (2N − 1)PN −1(ξ) + (2N − 5)PN −3(ξ) + · · · . (4.4.1) Matching the coefficient of PN +1(ξ) in P ′ N +2(ξ) and qN +1(x), we have qN +1(x) = α1 N + 1 2N + 3 P ′ N +2(ξ), which can be expanded as α1(N + 1)PN +1(ξ) − (aN +1,N +1 − u)(2N + 1)PN (ξ) + (α1N − aN +1,N (2N − 1))PN −1(ξ) − N −1 (cid:88) j=1 aN +1,j(2j − 1)Pj−1(ξ) = α1(N + 1)PN +1(ξ) + α1 N + 1 2N + 3 (2N − 1)PN −1(ξ) + α1 N + 1 2N + 3 (2N − 5)PN −3(ξ) + · · · . 40 Now we match the remaining coefficients in the above equation. For the coefficient of PN (ξ), we have aN +1,N +1 = u. For the coefficient of PN −1(ξ), we have α1N − aN +1,N (2N − 1) = α1 N + 1 2N + 3 (2N − 1), from which we solve for aN +1,N and obtain aN +1,N = 2N + 1 (2N − 1)(2N + 3) α1. For the remaining entries, it is easy to obtain: aN +1,j = N + 1 2N + 3 α1,   −  0, if j ≡ N mod 2, otherwise. (4.4.2) for 1 ≤ j ≤ N − 1. Therefore, the modified ˜A22 is ˜A22(W ) =                 u α1 α1 3 u 2 3α1 2α1 5 u 3α1 7 . . . . . . N −1 2N −3α1 . . . u N 2N +1α1 · · · 0 − N +1 2N +3 α1 0 − N +1 2N +3α1 0 2N +1 (2N −1)(2N +3)α1 u                 . (4.4.3) Next, we keep ˜A11 in (4.1.3) unchanged and write down ˜A21 based on the rotational invariance constraint given in Theorem 3.3.1. We can also write the coefficient matrices in 41 the original order of variables:                                A = 0 −u2 − α2 −uv − α1β1 1 3 + gh 3 −2uα1 −(uβ1 + vα1) 3 α2 3 α1β1 − 2 − 2 1 1 2u v 2α1 β1 0 0 0 0 u 0 α1 0 0 2α1 3 β1 3 u 0 α1 3 − β1 3 0 α1 3 0 u 0 2α1 3 · · · · · · · · · · · · 0 0 (N +1)β1 2N +3 − (N +1)α1 2N +3 0 0 in the x direction and                                B = 0 −uv − α1β1 −v2 − β2 3 3 + gh −(uβ1 + vα1) 1 −2vβ1 − 2 3 α1β1 3 β2 − 2 1 0 v 0 β1 0 0 0 1 u 2v α1 2β1 0 0 β1 3 0 v 0 2β1 3 0 α1 3 2β1 3 0 v − α1 3 β1 3 · · · · · · · · · − (N +1)β1 2N +3 (N +1)α1 2N +3 · · · 0 0 0 0 in the y direction. 3α1 5 β1 5 u 0 . . . . . . 0 0 2β1 5 0 v 0 . . . . . . 0 0 0 2α1 5 0 u . . . . . . (N −1)α1 2N −1 (2N 2−N −4)β1 (2N −1)(N +1) · · · · · · · · · · · · . . . . . . 
0 (2N+1)α1/((2N−1)(2N+3)) 0 0 0 0 (N+1)α1/(2N+1) β1/(2N+1) u 0 0 0 0 0 0 Nα1/(2N+1) 0 u (4.4.4) 0 0 0 0 Nβ1/(2N+1) α1/(2N+1) 0 v 0 (N+1)β1/(2N+1) 0 v (4.4.5) α1/5 3β1/5 0 v ... (2N+1)β1/((2N−1)(2N+3)) 0 ... (2N²−N−4)α1/((2N−1)(N+1)) (N−1)β1/(2N−1)

Since P′_{N+1}(ξ) in Corollary 4.1.1 and P′_{N+2}(ξ) both have real and distinct roots, and they share no common roots (the derivatives of the Legendre polynomials are again a family of orthogonal polynomials, so the interlacing property of their zeros applies), the coefficient matrix (4.4.4) in the x direction has real and distinct eigenvalues and is therefore real diagonalizable. Combining this with the rotational invariance in Theorem 3.3.2, we have established the hyperbolicity of the new model in 2D:

Theorem 4.4.1 (hyperbolicity of an example general moment closure). The model in 2D with the coefficient matrices given by (4.4.4) in the x direction and (4.4.5) in the y direction is hyperbolic.

CHAPTER 5
CONCLUDING REMARKS

In this work, we investigate the rotational invariance and hyperbolicity of the shallow water moment equations in 2D. We establish a general closure relation such that the resulting shallow water moment equations are hyperbolic and rotationally invariant. Moreover, we propose a new class of shallow water moment equations that generalizes the HSWME model.

In future work, it remains to be seen whether the derived shallow water moment models are good approximations of the original incompressible NS equations, both numerically and theoretically. Another interesting direction is to apply data-driven methods to learn the closure relations while preserving the rotational invariance and hyperbolicity, as we have done in [18, 15, 17, 16]. These generalizations are the subject of our ongoing work.

BIBLIOGRAPHY

[1] A. Amrita and J. Koellermeier. Projective integration for hyperbolic shallow water moment equations. Axioms, 11(5):235, 2022.

[2] F. Bouchut and V. Zeitlin. A robust well-balanced scheme for multi-layer shallow water equations. Discrete and Continuous Dynamical Systems - Series B, 13(4):739–758, 2010.

[3] D. Bresch. Chapter 1 - Shallow-water equations and related topics. In C. Dafermos and M. Pokorný, editors, Handbook of Differential Equations: Evolutionary Equations, volume 5, pages 1–104. North-Holland, 2009.

[4] Z. Cai, Y. Fan, and R. Li. Globally hyperbolic regularization of Grad's moment system in one-dimensional space. Communications in Mathematical Sciences, 11(2):547–571, 2013.

[5] Z. Cai, Y. Fan, and R. Li. Globally hyperbolic regularization of Grad's moment system. Communications on Pure and Applied Mathematics, 67(3):464–518, 2014.

[6] M. Castro, J. Macías, and C. Parés. A Q-scheme for a class of systems of coupled conservation laws with source term. Application to a two-layer 1-D shallow water system. ESAIM: M2AN, 35(1):107–127, 2001.

[7] M. Elouafi and A. D. A. Hadj. A recursion formula for the characteristic polynomial of Hessenberg matrices. Applied Mathematics and Computation, 208(1):177–179, 2009.

[8] Y. Fan, J. Koellermeier, J. Li, R. Li, and M. Torrilhon. Model reduction of kinetic equations by operator projection. Journal of Statistical Physics, 162(2):457–486, 2016.

[9] E. D. Fernández-Nieto, J. Garres-Díaz, A. Mangeney, and G. Narbona-Reina. A multilayer shallow model for dry granular flows with the μ(I)-rheology: application to granular collapse on erodible beds. Journal of Fluid Mechanics, 798:643–681, 2016.
[10] E. D. Fernández-Nieto, E. H. Koné, and T. Chacón Rebollo. A multilayer method for the hydrostatic Navier-Stokes equations: a particular weak solution. Journal of Scientific Computing, 60:408–437, 2014.

[11] R. Fox and F. Laurent. Hyperbolic quadrature method of moments for the one-dimensional kinetic equation. arXiv preprint arXiv:2103.10138, 2021.

[12] J. Garres-Díaz, M. J. Castro Díaz, J. Koellermeier, and T. Morales de Luna. Shallow water moment models for bedload transport problems. Communications in Computational Physics, 30(3):903–941, 2021.

[13] J. Garres-Díaz, C. Escalante, T. M. de Luna, and M. C. Díaz. A general vertical decomposition of Euler equations: multilayer-moment models. Applied Numerical Mathematics, 183:236–262, 2023.

[14] H. Grad. On the kinetic theory of rarefied gases. Communications on Pure and Applied Mathematics, 2(4):331–407, 1949.

[15] J. Huang, Y. Cheng, A. J. Christlieb, and L. F. Roberts. Machine learning moment closure models for the radiative transfer equation I: directly learning a gradient based closure. Journal of Computational Physics, 453:110941, 2022.

[16] J. Huang, Y. Cheng, A. J. Christlieb, and L. F. Roberts. Machine learning moment closure models for the radiative transfer equation III: enforcing hyperbolicity and physical characteristic speeds. Journal of Scientific Computing, 94(1):7, 2023.

[17] J. Huang, Y. Cheng, A. J. Christlieb, L. F. Roberts, and W.-A. Yong. Machine learning moment closure models for the radiative transfer equation II: enforcing global hyperbolicity in gradient-based closures. Multiscale Modeling & Simulation, 21(2):489–512, 2023.

[18] J. Huang, Z. Ma, Y. Zhou, and W.-A. Yong. Learning thermodynamically stable and Galilean invariant partial differential equations for non-equilibrium flows. Journal of Non-Equilibrium Thermodynamics, 2021.

[19] Q. Huang, J. Koellermeier, and W.-A. Yong. Equilibrium stability analysis of hyperbolic shallow water moment equations. Mathematical Methods in the Applied Sciences, 45(10):6459–6480, 2022.

[20] J. Koellermeier and E. Pimentel-García. Steady states and well-balanced schemes for shallow water moment equations with topography. Applied Mathematics and Computation, 427:127166, 2022.

[21] J. Koellermeier and M. Rominger. Analysis and numerical simulation of hyperbolic shallow water moment equations. Communications in Computational Physics, 28(3):1038–1084, 2020.

[22] J. Kowalski and M. Torrilhon. Moment approximations and model cascades for shallow flow. Communications in Computational Physics, 25(3):669–702, 2018.

[23] C. D. Levermore. Moment closure hierarchies for kinetic theories. Journal of Statistical Physics, 83(5):1021–1065, 1996.

[24] A. L. Stewart and P. J. Dellar. Multilayer shallow water equations with complete Coriolis force. Part 1. Derivation on a non-traditional beta-plane. Journal of Fluid Mechanics, 651:387–413, 2010.

[25] E. F. Toro. Riemann Solvers and Numerical Methods for Fluid Dynamics. Springer-Verlag, Berlin Heidelberg, 1997.

[26] R. Verbiest. Analysis and numerical simulation of two-dimensional shallow water moment equations, 2022.

[27] R. Verbiest and J. Koellermeier. Hyperbolic axisymmetric shallow water moment equations. arXiv preprint arXiv:2302.07952, 2023.

[28] C. B. Vreugdenhil. Numerical Methods for Shallow-Water Flow, volume 13. Springer Science & Business Media, 1994.

[29] P. García-Navarro, P. Brufau, J. Burguete, and J. Murillo. The shallow water equations: An example of hyperbolic system. Monografías de la Real Academia de Ciencias de Zaragoza, 31, 2008.
APPENDIX

A.1 Discussion of Dimensional Scaling

Under the shallow water assumptions, certain terms are neglected on the grounds that the ratio of the characteristic vertical length scale H to the characteristic horizontal length scale L is small, i.e. H/L = ϵ ≪ 1. The starting point is the incompressible Navier-Stokes equations

∇ · U = 0,   ∂_t U + ∇ · (U U) = −(1/ρ) ∇p + (1/ρ) ∇ · σ + g.   (A.1.1)

We scale the spatial variables according to the characteristic length scales

x = L x̂,   y = L ŷ,   z = H ẑ.   (A.1.2)

The velocity variables are scaled by a characteristic horizontal velocity V. Because of the shallow water assumption, the characteristic vertical velocity is much smaller, i.e. of order ϵV:

u = V û,   v = V v̂,   w = ϵV ŵ.   (A.1.3)

Time is scaled by a factor involving the characteristic horizontal length and velocity scales,

t = (L/V) t̂.   (A.1.4)

A characteristic stress scale S is introduced, so the pressure and stress scalings are

p = ρgH p̂,   σ_basal = S σ̂_basal,   σ_other = Sϵ σ̂_other,   (A.1.5)

where the basal stresses are σ_xz and σ_yz, which are larger than the other stress components due to the shallow water assumption. Substituting these scalings gives

∂_x̂ û + ∂_ŷ v̂ + ∂_ẑ ŵ = 0,
F²ϵ (∂_t̂ û + ∂_x̂ û² + ∂_ŷ(û v̂) + ∂_ẑ(û ŵ)) = −ϵ ∂_x̂ p̂ + ϵ²G ∂_x̂ σ̂_xx + ϵG ∂_ŷ σ̂_xy + G ∂_ẑ σ̂_xz + e_x,
F²ϵ (∂_t̂ v̂ + ∂_x̂(û v̂) + ∂_ŷ v̂² + ∂_ẑ(v̂ ŵ)) = −ϵ ∂_ŷ p̂ + ϵG ∂_x̂ σ̂_xy + ϵ²G ∂_ŷ σ̂_yy + G ∂_ẑ σ̂_yz + e_y,
F²ϵ² (∂_t̂ ŵ + ∂_x̂(û ŵ) + ∂_ŷ(v̂ ŵ) + ∂_ẑ ŵ²) = −ϵ ∂_ẑ p̂ + ϵG ∂_x̂ σ̂_xz + ϵG ∂_ŷ σ̂_yz + ϵG ∂_ẑ σ̂_zz + e_z,   (A.1.6)

where F = V/√(gH) is the Froude number and G = S/(ρgH) is the ratio of the characteristic stress to the characteristic hydrostatic pressure. For shallow water, both ϵ and G are much smaller than 1, so terms with ϵ² and Gϵ are neglected. This allows the pressure to be solved for directly and leads to the equations at the start of the shallow water moment equation derivation.

A.2 Proof of Proposition 3.1.1

Proof. We prove the equalities in Proposition 3.1.1 by direct calculation:

1. LHS = cos θ (cos θ u + sin θ v) − sin θ (− sin θ u + cos θ v) = u = RHS.

2. LHS = sin θ (cos θ u + sin θ v) + cos θ (− sin θ u + cos θ v) = v = RHS.

3. LHS = cos θ (cos θ u + sin θ v)² − sin θ (cos θ u + sin θ v)(− sin θ u + cos θ v)
       = (cos θ u + sin θ v) (cos θ (cos θ u + sin θ v) − sin θ (− sin θ u + cos θ v))
       = u (cos θ u + sin θ v) = RHS.

4. LHS = sin θ (cos θ u + sin θ v)² + cos θ (cos θ u + sin θ v)(− sin θ u + cos θ v)
       = (cos θ u + sin θ v) (sin θ (cos θ u + sin θ v) + cos θ (− sin θ u + cos θ v))
       = v (cos θ u + sin θ v) = RHS.

5.

6.
LHS = 2 cos θ (cos2 θ uα + cos θ sin θ (uβ + vα) + sin2 θ vβ) − sin θ (cid:0)cos2 θ uβ + cos θ sin θ (vβ − uα) − sin2 θ vα(cid:1) − sin θ (cid:0)cos2 θvα + cos θ sin θ (−uα + vβ) − sin2 θuβ(cid:1) = uα cos θ (2 cos2 θ + sin2 θ + sin2 θ) + vα sin θ (2 cos2 θ + sin2 θ − cos2 θ) + uβ sin θ (2 cos2 θ − cos2 θ + sin2 θ) + vβ cos θ(2 sin2 θ − sin2 θ − sin2 θ) = 2uα cos θ + (uβ + vα) sin θ = RHS LHS = 2 sin θ (cos2 θ uα + cos θ sin θ (uβ + vα) + sin2 θvβ) + cos θ (cid:0)cos2 θ uβ + cos θ sin θ (vβ − uα) − sin2 θ vα(cid:1) + cos θ (cid:0)cos2 θ vα + cos θ sin θ (−uα + vβ) − sin2 θ uβ(cid:1) = uα sin θ (2 cos2 θ − cos2 θ − cos2 θ) + vα cos θ (2 sin2 θ − sin2 θ + cos2 θ) + uβ cos θ(2 sin2 θ + cos2 θ − sin2 θ) + vβ sin θ(2 sin2 θ + cos2 θ + cos2 θ) = (uβ + vα) cos θ + 2vβ sin θ = RHS A.3 Proof of Lemma 3.2.1 We split the proof into three parts: (1) the computation of the conservative part in the x-direction; (2) the computation of the conservative part in the y-direction; (3) the computation of the nonconservative part. Since the proof relies on the properties of the coefficients Aijk and Bijk, we summarize in the following: Lemma A.3.1 ([21]). For the coefficient Aijk given by Aijk = (2i + 1) (cid:90) 1 0 ϕi(x)ϕj(x)ϕk(x)dx (A.3.1) 49 we have the following properties: 1. Aijk = Aikj for any i, j, k. 2. Aij1 =    i 2i−1, i+1 2i+3, if if j = i − 1. j = i + 1. 0, otherwise. Lemma A.3.2 ([21]). For the coefficient Bijk given by it satisfies the properties: Bijk = (2i + 1) (cid:90) 1 0 ϕ′ i (cid:18)(cid:90) ζ (cid:19) ϕjdˆζ ϕkdζ, 0 Bij1 =    − i+1 2i−1, − i 2i+3, if if j = i − 1. j = i + 1. 0, otherwise. (A.3.2) (A.3.3) (A.3.4) A.3.1 The conservative part in the x-direction We first compute the Jacobian matrix in x-dimension ∂F ∂U where the physical flux F is given by (2.1.15): 1. For the first component we compute the gradient F1(U ) = hu, (A.3.5) ∂F1(U ) ∂U = (0, 1, 0, 0, · · · , 0)T . (A.3.6) 2. For the second component F2(U ) = h(u2 + α2 j 2j + 1 (cid:88) j ) + 1 2 gh2 = (hu)2 h (cid:88) + j 1 2j + 1 (hαj)2 h + 1 2 gh2, (A.3.7) 50 the derivatives are ∂F2(U ) ∂h = − (hu)2 h2 − (cid:88) j 1 2j + 1 (hαj)2 h2 + gh = −u2 − α2 j 2j + 1 (cid:88) j + gh, = 2u ∂F2(U ) ∂(hu) ∂F2(U ) ∂(hαj) ∂F2(U ) ∂(hv) = = = 2(hu) h 1 2j + 1 ∂F2(U ) ∂(hβj) = 0. 2(hαj) h = 2αj 2j + 1 Therefore, we have ∂F2(U ) ∂U = (−u2 − α2 j 2j + 1 (cid:88) j + gh, 2u, 0, 2α1 3 , 0, 2α2 5 , · · · , 0, 2αN 2N + 1 , 0)T . (A.3.8) 3. For the third component F3(U ) = h(uv + (cid:88) j αjβj 2j + 1 ) = (hu)(hv) h (cid:88) + j 1 2j + 1 (hαj)(hβj) h (A.3.9) the derivatives are ∂F3(U ) ∂h = − (hu)(hv) h2 (cid:88) − j 1 2j + 1 (hαj)(hβj) h2 = −uv − (cid:88) j αjβj 2j + 1 (A.3.10) = v = u ∂F3(U ) ∂(hu) ∂F3(U ) ∂(hv) ∂F3(U ) ∂(hαj) ∂F3(U ) ∂(hβj) = = = = (hv) h (hu) h 1 2j + 1 1 2j + 1 (A.3.11) (hβj) h (hαj) h = = βj 2j + 1 αj 2j + 1 We have ∂F3(U ) ∂U = (−uv − (cid:88) j αjβj 2j + 1 , v, u, β1 3 , α1 3 , β2 5 , α2 5 , · · · , βN 2N + 1 , αN 2N + 1 )T . (A.3.12) 4. For the remaining component, we denote gi(U ) := h(2uαi + (cid:88) j,k Aijkαjαk) = 2(hu)(hαi) h (cid:88) + Aijk j,k (hαj)(hαk) h (A.3.13) 51 for i = 1, 2, · · · , N . 
Then the gradient is ∂gi(U ) ∂h = − 2(hu)(hαi) h2 (cid:88) − Aijk j,k (hαj)(hαk) h2 = −2uαi − (cid:88) j,k Aijkαjαk (A.3.14) ∂gi(U ) ∂(hu) = 2(hαi) h = 2αi ∂gi(U ) ∂(hαl) = 2(hu) h δil + (cid:88) j,k Aijk(δjl (hαk) h + δkl (hαj) h ) = 2uδij + = 2uδij + (cid:88) k (cid:88) Ailkαk + Ailjαj + = 2uδij + 2 j (cid:88) j Ailjαj (cid:88) j (cid:88) j Aijlαj Aijlαj (A.3.15) (A.3.16) where in the last step we use Aijk = Aikj. ∂gi(U ) ∂(hv) = ∂gi(U ) ∂(hβj) = 0 (A.3.17) 5. For the component, we denote hi(U ) := h(uβi + vαi + (cid:88) j,k Aijkαjβk) = (hu)(hβi) h + (hv)(hαi) h (cid:88) + Aijk j,k (hαj)(hβk) h for i = 1, 2, · · · , N . Then the gradient is ∂hi(U ) ∂h = − (hu)(hβi) h2 − (hv)(hαi) h2 (cid:88) − Aijk j,k (hαj)(hβk) h2 = −uβi − vαi − ∂hi(U ) ∂(hu) ∂hi(U ) ∂(hv) = = (hβi) h (hαi) h = βi = αi ∂hi(U ) ∂(hαl) = (hv) h δil + (cid:88) j,k Aijkδjl (hβk) h = vδil + (cid:88) k Ailkβk 52 (A.3.18) (cid:88) j,k Aijkαjβk (A.3.19) (A.3.20) (A.3.21) (A.3.22) ∂hi(U ) ∂(hβl) = (hu) h δil + (cid:88) j,k Aijkδkl (hαj) h = uδil + = uδil + (cid:88) j (cid:88) j Aijlαj Ailjαj (A.3.23) Therefore, the Jacobian matrix ∂F (U ) ∂U is 0 α2 j 2j+1 + gh −u2 − −uv − αj βj 2j+1 −2uα1 − A1jkαjαk −(uβ1 + vα1) − A1jkαjβk −2uα2 − A2jkαjαk −(uβ2 + vα2) − A2jkαjβk ... 1 2u v 2α1 β1 2α2 β2 ... −2uαN − AN jkαjαk 2αN 0 0 u 0 α1 0 α2 ... 0 0 2α1 3 β1 3 2u + 2A11jαj 0 0 α1 3 0 v + A11jβj u + A11jαj 2A21jαj 0 A21jβj ... A21jαj ... 2AN 1jαj 0 −(uβN + vαN ) − AN jkαjβk βN αN AN 1jβj AN 1jαj · · · · · · · · · · · · · · · · · · · · · . . . · · · · · · 0 2αN 2N +1 βN 2N +1 2A1N jαj A1N jβj 2A2N jαj A2N jβj ... 0 0 αN 2N +1 0 A1N jαj 0 A2N jαj ... 2u + 2AN N jαj 0 v + AN N jβj u + AN N jαj (A.3.24)                              Here, for saving space, we use Einstein summation convention and the summation notation is omitted. Evaluating the above matrix at U0 = (h, hu, hv, hα1, hβ1, 0, 0, · · · , 0, 0)T , we have 0 −u2 − α2 1 3 + gh −uv − α1β1 3 −2uα1 − A111α2 1 −(uβ1 + vα1) − A111α1β1 −A211α2 1 −A211α1β1 ... −AN 11α2 1 −AN 11α1β1 1 2u v 2α1 0 0 u 0 0 2α1 3 β1 3 2u + 2A111α1 0 0 α1 3 0 β1 α1 v + A111β1 u + A111α1 0 0 ... 0 0 0 0 ... 0 0 2A211α1 0 A211β1 ... A211α1 ... 2AN 11α1 0 AN 11β1 AN 11α1 · · · · · · · · · · · · · · · · · · · · · . . . · · · · · · 0 0 0 2A1N 1α1 0 0 0 0 A1N 1β1 A1N 1α1 2A2N 1α1 A2N 1β1 ... 0 A2N 1α1 ... 2u + 2AN N 1α1 0 v + AN N 1β1 u + AN N 1α1                              (A.3.25) 53                                                           Next, we use the property of Aijk given in Lemma A.3.1 and further simplify it to 1 0 −u2 − α2 1 3 + gh 2u −uv − α1β1 3 −2uα1 v 2α1 0 0 u 0 0 2α1 3 β1 3 2u −(uβ1 + vα1) β1 α1 v − 2 3α2 1 − 2 3α1β1 0 0 0 0 4 3α1 2 3β1                                 0 0 α1 3 0 u 0 2 3α1                                 4 5α1 2 5β1 2u v . . . . . . 0 2 5α1 0 u . . . . . . 2N 2N −1α1 N 2N −1β1 . . . . . . . . . . . . 0 N 2N −1α1 2N 2N +1α1 N 2N +1 β1 0 N 2N +1α1 2u v 0 u (A.3.26) A.3.2 The conservative part in the y direction Then we compute the Jacobian matrix in the y direction. 1. For the first component we have 2. For the second component G1(U ) = hv (A.3.27) ∂G1(U ) ∂U = (0, 0, 1, 0, 0, · · · , 0)T (A.3.28) G2(U ) = h(uv + (cid:88) j αjβj 2j + 1 ) = (hu)(hv) h (cid:88) + j 1 2j + 1 (hαj)(hβj) h (A.3.29) it is the same as the third component F3(U ) in x direction. 3. 
For the third component G3(U ) = h(v2 + β2 j 2j + 1 (cid:88) j ) + 1 2 gh2 = (hv)2 h (cid:88) + j 1 2j + 1 (hβj)2 h + 1 2 gh2 (A.3.30) 54 the gradient is ∂G3(U ) ∂h = − (hv)2 h2 − (cid:88) j 1 2j + 1 (hβj)2 h2 + gh = −v2 − β2 j 2j + 1 (cid:88) j + gh (A.3.31) ∂G3(U ) ∂(hu) = 0 ∂G3(U ) ∂(hv) = 2(hv) h = 2v ∂G3(U ) ∂(hαj) = 0 ∂G3(U ) ∂(hβj) = 1 2j + 1 2(hβj) h = 2βj 2j + 1 (A.3.32) (A.3.33) (A.3.34) (A.3.35) Therefore, we have ∂G3(U ) ∂U = (−v2 − β2 j 2j + 1 (cid:88) j + gh, 0, 2v, 0, 2β1 3 , 0, 2β2 5 , · · · , 0, 2βN 2N + 1 )T (A.3.36) 4. For the component hi(U ) := h(uβi + vαi + (cid:88) j,k Aijkαjβk) (A.3.37) it is the same as the component in x direction. 5. For the component gi(U ) := h(2vβi + (cid:88) j,k Aijkβjβk) = 2(hv)(hβi) h (cid:88) + Aijk j,k (hβj)(hβk) h (A.3.38) the gradient is ∂gi(U ) ∂h = − 2(hv)(hβi) h2 (cid:88) − Aijk j,k (hβj)(hβk) h2 = −2vβi − (cid:88) j,k Aijkβjβk (A.3.39) ∂gi(U ) ∂(hu) = 0 ∂gi(U ) ∂(hv) = 2(hβi) h = 2βi ∂gi(U ) ∂(hαj) = 0 55 (A.3.40) (A.3.41) (A.3.42) ∂gi(U ) ∂(hβl) = 2(hv) h δil + (cid:88) j,k Aijk(δjl (hβk) h + δkl (hβj) h ) = 2vδij + = 2vδij + (cid:88) k (cid:88) Ailkβk + Ailjβj + = 2vδij + 2 j (cid:88) j Ailjβj (cid:88) j (cid:88) j Aijlβj Aijlβj (A.3.43) where in the last step we use Aijk = Aikj. Therefore, the Jacobian matrix ∂G(U ) ∂U                              0 −uv − αj βj 2j+1 β2 j 2j+1 + gh −v2 − (cid:80) j −(uβ1 + vα1) − A1jkαjβk −2vβ1 − A1jkβjβk −(uβ2 + vα2) − A2jkαjβk −2vβ2 − A2jkβjβk ... 0 v 0 β1 0 β2 0 ... −(uβN + vαN ) − AN jkαjβk βN 1 u 2v α1 2β1 α2 2β2 ... αN is 0 β1 3 0 α1 3 2β1 3 v + A11jβj u + A11jαj 0 2v + 2A11jβj A21jβj 0 ... A21jαj 2A21jβj ... AN 1jβj AN 1jαj −2vβN − AN jkβjβk 0 2βN 0 2AN 1jβj · · · · · · · · · · · · · · · · · · . . . · · · · · · βN 2N +1 0 A1N jβj 0 A2N jβj 0 ... αN 2N +1 2βN 2N +1 A1N jαj 2A1N jβj A2N jαj 2A2N jβj ... v + AN N jβj u + AN N jαj 0 2v + 2AN N jβj                              (A.3.44) Here, for saving space, we use Einstein summation convention and the summation notation is omitted. 56 Evaluating the above matrix at U0 = (h, hu, hv, hα1, hβ1, 0, 0, · · · , 0, 0)T , we have                              0 −uv − α1β1 −v2 − β2 3 1 3 + gh 0 v 0 1 u 2v 0 β1 3 0 0 α1 3 2β1 3 −(uβ1 + vα1) − A111α1β1 β1 α1 v + A111β1 u + A111α1 −2vβ1 − A111β2 −A211α1β1 −A211β2 1 ... −AN 11α1β1 −AN 11β2 1 0 0 0 ... 0 0 2β1 0 2v + 2A111β1 0 0 ... 0 0 A211β1 A211α1 0 ... 2A211β1 ... AN 11β1 AN 11α1 0 2AN 11β1 0 0 0 A1N 1β1 0 A2N 1β1 0 ... 0 0 0 A1N 1α1 2A1N 1β1 A2N 1α1 2A2N 1β1 ... v + AN N 1β1 u + AN N 1α1 0 2v + 2AN N 1β1                              · · · · · · · · · · · · · · · · · · · · · . . . · · · · · · (A.3.45) and further simplify it to 0 −uv − α1β1 3 −v2 − β2 1 3 + gh 0 0 v 1 u 2v −(uβ1 + vα1) β1 α1 −2vβ1 − 2 3α1β1 − 2 3β2 1 0 0 0 2β1 0 0                                 β1 3 0 v 0 α1 3 2β1 3 u 2v 2 3β1 0 2 3α1 4 3β1                                 2 5β1 0 v 0 . . . . . . 2 5α1 4 5β1 u 2v . . . . . . . . . . . . . . . . . . N 2N −1β1 0 N 2N −1α1 2N 2N −1β1 N 2N +1β1 0 v 0 N 2N +1 α1 2N 2N +1β1 u 2v (A.3.46) 57 A.3.3 The nonconservative part in x and y directions Evaluating the nonconservative part in x direction at the state (h, hu, hv, hα1, hβ1, · · · , 0, 0), we have 03×3                              P (U ) = −u −v 0 − 1 5α1 0 − 1 5β1 − 3 3α1 0 −u − 3 3β1 0 −v . . . 0 0 0 0 . . . . . . 
−u −v (A.3.47) 0 − N −1 2N +1α1 0 0 − N −1 2N +1β1 0 − N +1 2N −1α1 0 − N +1 2N −1β1 0 −u −v                            0   0 Here, we use the property of the coefficient Bijk. The nonconservative part in y direction is similar to the one in x direction, we have 0 −u 0 −v 0 − 3 3α1 0 − 3 3β1 0 − 1 5α1 0 − 1 5β1 0 0 . . . −u −v . . . 03×3                              Q(U ) =                              (A.3.48) . . . 0 0 −u −v 0 − N −1 2N +1α1 0 − N −1 2N +1β1 0 − N +1 2N −1α1 0 0 − N +1 2N −1β1 0 −u −v Combining the previous results, we obtain the explicit form given in (3.2.1). 58 (A.4.2)    A.4 Proof of Lemma 3.3.2 A.4.1 Proof of the first case For the first case, A(V ) only has two non-zero entries in the first column: A(V ) =    a11(V ) 0 a21(V ) 0    . (A.4.1) From the necessary condition given in Lemma 3.3.1, we have that B(V ) only has two non-zero entries in the second column:    B(V ) =    . 0 b12(V ) 0 b22(V ) Now we compute T −1 2 A(T2V )T2 = =       cos θ − sin θ sin θ cos θ       a11(T2V ) 0 a21(T2V ) 0       cos θ sin θ − sin θ cos θ    cos θ(cos θa11(T2V ) − sin θa21(T2V )) sin θ(cos θa11(T2V ) − sin θa21(T2V )) cos θ(sin θa11(T2V ) + cos θa21(T2V )) sin θ(sin θa11(T2V ) + cos θa21(T2V )) which should be equal to cos θA(V ) + sin θB(V ) =    cos θa11(V ) cos θa21(V ) Then we have    . sin θb12(V ) sin θb22(V ) cos θ(cos θ a11(T2V ) − sin θ a21(T2V )) = cos θ a11(V ), sin θ(cos θ a11(T2V ) − sin θ a21(T2V )) = sin θ b12(V ), cos θ(sin θ a11(T2V ) + cos θ a21(T2V )) = cos θ a21(V ), sin θ(sin θ a11(T2V ) + cos θ a21(T2V )) = sin θ b22(V ). The above equations can be further simplified as cos θa11(T2V ) − sin θa21(T2V ) = a11(V ), sin θa11(T2V ) + cos θa21(T2V ) = a21(V ), 59 (A.4.3) (A.4.4) (A.4.5) and a11(V ) = b12(V ), a21(V ) = b22(V ). Since a11(V ) and a21(V ) are linear functions of V , we have a11(V ) = c1p + c2q, a21(V ) = c3p + c4q, (A.4.6) (A.4.7) where c1, c2, c3, c4 are constants. Plugging (A.4.7) into (A.4.5), we have cos θ(c1(cos θp + sin θq) + c2(− sin θp + cos θq)) − sin θ(c3(cos θp + sin θq) + c4(− sin θp + cos θq)) = c1p + c2q sin θ(c1(cos θp + sin θq) + c2(− sin θp + cos θq)) + cos θ(c3(cos θp + sin θq) + c4(− sin θp + cos θq) = c3p + c4q which can be simplified as (c1 cos2 θ − (c2 + c3) cos θ sin θ + c4 sin2 θ)p + (c2 cos2 θ + (c1 − c4) cos θ sin θ − c3 sin2 θ)q (c3 cos2 θ + (c1 − c4) cos θ sin θ − c2 sin2 θ)p + (c4 cos2 θ − (c2 + c3) cos θ sin θ + c1 sin2 θ)q = c1p + c2q This implies c1 cos2 θ − (c2 + c3) cos θ sin θ + c4 sin2 θ = c1 c2 cos2 θ + (c1 − c4) cos θ sin θ − c3 sin2 θ = c2 Then we derive the following relations: = c3p + c4q (A.4.8) Therefore, we have c1 = c4, c2 = −c3 (A.4.9) a11(V ) = c1p − c2q a21(V ) = c2p + c1q 60 (A.4.10) Thus, the matrices A(V ) and B(V ) are    A(V ) = c1p + c2q −c2p + c1q 0  0   = c1     p 0 q 0   + c2    q −p 0  0   , and B(V ) =  0   c1p + c2q 0 −c2p + c1q       = c1   0 p 0 q   + c2      . 0 q 0 −p A.4.2 Proof of the second case In this case, we assume that A(V ) is a diagonal matrix   A(V ) =   a11(V ) 0   . 0 a22(V ) From Lemma 3.3.1, we have B(V ) is also a diagonal matrix   B(V ) =   b11(V ) 0 0 b22(V )   . 
Then we compute

T2^{-1} A(T2V) T2
  = [ cos θ , −sin θ ; sin θ , cos θ ] [ a11(T2V) , 0 ; 0 , a22(T2V) ] [ cos θ , sin θ ; −sin θ , cos θ ]
  = [ cos²θ a11(T2V) + sin²θ a22(T2V) , cos θ sin θ (a11(T2V) − a22(T2V)) ;
      cos θ sin θ (a11(T2V) − a22(T2V)) , sin²θ a11(T2V) + cos²θ a22(T2V) ],

which should be equal to

cos θ A(V) + sin θ B(V) = [ cos θ a11(V) + sin θ b11(V) , 0 ; 0 , cos θ a22(V) + sin θ b22(V) ].

Then we have

a11(T2V) = a22(T2V),
cos²θ a11(T2V) + sin²θ a22(T2V) = cos θ a11(V) + sin θ b11(V),   (A.4.13)
sin²θ a11(T2V) + cos²θ a22(T2V) = cos θ a22(V) + sin θ b22(V).

This is reduced to

a11(V) = a22(V),   b11(V) = b22(V),   a11(T2V) = cos θ a11(V) + sin θ b11(V).   (A.4.14)

Next, we assume the linear functions

a11(V) = a22(V) = c1 p + c2 q,   b11(V) = b22(V) = c3 p + c4 q,   (A.4.15)

where c1, c2, c3, c4 are constants. Then we have

c1 (cos θ p + sin θ q) + c2 (− sin θ p + cos θ q) = cos θ (c1 p + c2 q) + sin θ (c3 p + c4 q),   (A.4.16)

which means that

c1 = c4,   c2 = −c3.   (A.4.17)

Therefore, we have

A(V) = [ c1 p + c2 q , 0 ; 0 , c1 p + c2 q ] = c1 [ p , 0 ; 0 , p ] + c2 [ q , 0 ; 0 , q ],   (A.4.18)

and

B(V) = [ c3 p + c4 q , 0 ; 0 , c3 p + c4 q ] = −c2 [ p , 0 ; 0 , p ] + c1 [ q , 0 ; 0 , q ].   (A.4.19)

A.4.3 Proof of the third case

In the last case, we assume that A(V) only has non-zero entries in the second column:

A(V) = [ 0 , a12(V) ; 0 , a22(V) ],   (A.4.20)

and from Lemma 3.3.1, we have

B(V) = [ b11(V) , 0 ; b21(V) , 0 ].   (A.4.21)

We compute

T2^{-1} A(T2V) T2
  = [ cos θ , −sin θ ; sin θ , cos θ ] [ 0 , a12(T2V) ; 0 , a22(T2V) ] [ cos θ , sin θ ; −sin θ , cos θ ]
  = [ 0 , cos θ a12(T2V) − sin θ a22(T2V) ; 0 , sin θ a12(T2V) + cos θ a22(T2V) ] [ cos θ , sin θ ; −sin θ , cos θ ]
  = [ −sin θ (cos θ a12(T2V) − sin θ a22(T2V)) , cos θ (cos θ a12(T2V) − sin θ a22(T2V)) ;
      −sin θ (sin θ a12(T2V) + cos θ a22(T2V)) , cos θ (sin θ a12(T2V) + cos θ a22(T2V)) ],

which should be equal to

cos θ A(V) + sin θ B(V) = [ sin θ b11(V) , cos θ a12(V) ; sin θ b21(V) , cos θ a22(V) ].   (A.4.22)

Therefore, we have

−sin θ (cos θ a12(T2V) − sin θ a22(T2V)) = sin θ b11(V),
cos θ (cos θ a12(T2V) − sin θ a22(T2V)) = cos θ a12(V),
−sin θ (sin θ a12(T2V) + cos θ a22(T2V)) = sin θ b21(V),   (A.4.23)
cos θ (sin θ a12(T2V) + cos θ a22(T2V)) = cos θ a22(V),

which is reduced to

b11(V) = −a12(V),   b21(V) = −a22(V),
cos θ a12(T2V) − sin θ a22(T2V) = a12(V),   sin θ a12(T2V) + cos θ a22(T2V) = a22(V).   (A.4.24)

Next, we assume the linear functions

a12(V) = c1 p + c2 q,   a22(V) = c3 p + c4 q,   (A.4.25)

where c1, c2, c3, c4 are constants. Then we have

cos θ (c1 (cos θ p + sin θ q) + c2 (− sin θ p + cos θ q)) − sin θ (c3 (cos θ p + sin θ q) + c4 (− sin θ p + cos θ q)) = c1 p + c2 q,
sin θ (c1 (cos θ p + sin θ q) + c2 (− sin θ p + cos θ q)) + cos θ (c3 (cos θ p + sin θ q) + c4 (− sin θ p + cos θ q)) = c3 p + c4 q,

from which we solve out

c1 = c4,   c2 = −c3.   (A.4.26)

Therefore, we have

A(V) = [ 0 , c1 p + c2 q ; 0 , −c2 p + c1 q ] = c1 [ 0 , p ; 0 , q ] + c2 [ 0 , q ; 0 , −p ],   (A.4.27)

and

B(V) = [ −c1 p − c2 q , 0 ; c2 p − c1 q , 0 ] = c1 [ −p , 0 ; −q , 0 ] + c2 [ −q , 0 ; p , 0 ].   (A.4.28)
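To close, here is a small numerical sanity check (not part of the thesis; the script and its names are illustrative only) of the structure obtained in the first case: with A(V) = c1 [ p , 0 ; q , 0 ] + c2 [ q , 0 ; −p , 0 ] and B(V) = c1 [ 0 , p ; 0 , q ] + c2 [ 0 , q ; 0 , −p ], the rotational invariance condition T2^{-1} A(T2V) T2 = cos θ A(V) + sin θ B(V) should hold for every angle θ and every V = (p, q).

```python
# Numerical sanity check (illustrative, not part of the thesis) of the first-case
# structure from the proof of Lemma 3.3.2: verify the rotational-invariance identity
#   T2^{-1} A(T2 V) T2 = cos(t) A(V) + sin(t) B(V)
# for random V = (p, q), constants c1, c2, and angles t.
import numpy as np

def A(p, q, c1, c2):
    return c1 * np.array([[p, 0.0], [q, 0.0]]) + c2 * np.array([[q, 0.0], [-p, 0.0]])

def B(p, q, c1, c2):
    return c1 * np.array([[0.0, p], [0.0, q]]) + c2 * np.array([[0.0, q], [0.0, -p]])

rng = np.random.default_rng(0)
for _ in range(1000):
    p, q, c1, c2, t = rng.normal(size=5)
    T2 = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    pr, qr = np.cos(t) * p + np.sin(t) * q, -np.sin(t) * p + np.cos(t) * q   # T2 V
    lhs = np.linalg.inv(T2) @ A(pr, qr, c1, c2) @ T2
    rhs = np.cos(t) * A(p, q, c1, c2) + np.sin(t) * B(p, q, c1, c2)
    assert np.allclose(lhs, rhs)
print("rotational-invariance identity verified for 1000 random samples")
```

The analogous checks for the second and third cases only require swapping in the forms (A.4.18)–(A.4.19) or (A.4.27)–(A.4.28).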