't._ "I . a. fi. .3 | I :3..‘.....: .3 “Harri. .. .2. . u . : 3 x. . .‘hpnds ~.. 3:, axluahwni “.mwumv.mmmwu L luaufi.‘ 3.: him.“ In 5 I 13:, 3:. .xr k.nv may. szxfifi»§&§r. $4.? hwyww, mmfiflnfiafigfifliw. 03. “Mmflur .wyitl J 3-: ‘c it IP‘VA 1!: GANS Willll\llilllllll\iii‘llil 3 1293 01588 3865 This is to certify that the dissertation entitled CHAOS IN SEMICLASSICAL AND QUANTUM MODELS OF NUCLEI presented by DAVID A . MCGREW has been accepted towards fulfillment of the requirements for PhD PHYSICS degree in jo fessor Date LAL‘ u i%% ‘l 0 ‘ MS U is an Affirmative Action/Equal Opportunity Institution 0-12771 LIBRARY Michigan State University CHAOS IN SEMICLASSICAL AND QUANTUM MODELS OF NUCLEI by DAVID A. MCGREW A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the Degree of DOCTOR OF PHILOSOPHY Department of Physics and Astronomy 1996 ABSTRACT CHAOS 1N SEMICLASSICAL AND QUANTUM MODELS OF NUCLEI by DAVID A. MCGREW Nuclear chaos is investigated in a semiclassical model (with Bohr-Mottleson mul- tipole interactions) and a quantum model (the quantum billiard model). In the former, Bohr-Mottelson type Hamiltonians with multipole-multipole inter- actions are constructed to investigate the relation between the collective coordinate (the nuclear multipole) and single particle coordinates. The test particle solution to the Vlasov equation leads to dynamical equations involving the individual nucleons and a collective coordinate (the multipole moment), which are numerically integrated. The system is tested for chaos using the method developed by Wolf et. al., involv- ing computation and decomposition of the Jacobian matrix at each timestep. The model can have chaotic single particle dynamics; most interestingly, the single particle dynamics can be chaotic while there is regular motion of the collective coordinate. In the latter, a new method is introduced to solve Schrodinger’s equation for a free particle in an infinite well of arbitrary shape (the Helmholtz equation with Dirichlet boundary conditions), the ‘quantum billiard’ problem. The wavefunction is expanded in a basis of products of sine functions, then the constraint operator is used to contain the wavefunction to a region within the domainof the basis functions. In this manner, a quantum billiard problem of arbitrary shape can be solved. Several methods exist to solve problems of this sort, but as recent work reviewing these methods has shown, all have shortcomings. This work represents a new direction in the solution of these problems. This method is unique in that it provides a means of computing an eigenbasis. It is also interesting from a physical standpoint in that it can represent the Hamiltonian of a classically chaotic system in the basis of a classically regular system. Decay of a quantum billiard is investigated. The straightforward approach, a numerical solution using a truncated basis, is shown to fail due to a lack of convergence with increasing basis size. For my Family ACKNOWLEDGEMENTS I gratefully acknowdedge the help of Wolfgang Bauer, my advisor, and Alex Brown, Aaron Galonsky, S. D. Mahanti, and Dan Stump, my guidance committee. Vladimir Zelevinsky and Scott Pratt also deserve thanks for many helpful discussions, as do Torn Kaplan and Dan Stump, for an inspiring introduction to physics. Thanks also to all of my friends, for their support and good fellowship. ii Contents LIST OF TABLES LIST OF FIGURES 1 Introduction 1.1 1.2 Chaos .............................. L . . . . 
1.1.1 Flows, Maps, and Poincaré Sections
1.1.2 Exponential Sensitivity and Lyapunov Exponents
1.1.3 Divergence and Attractors
1.1.4 An Example: the Lorenz System
1.1.5 Metric Entropy
1.1.6 Fourier Spectra and Correlations
1.1.7 Hamiltonian Chaos
1.1.8 Hierarchy of Chaos
1.1.9 Billiard Models
1.2 Quantum Chaos
1.2.1 Energy Level Statistics
1.2.2 Random Matrix Theory
1.2.3 'Level Repulsion' and Level Spacing Distribution
1.2.4 Analysis of Energy Levels

2 Nuclear Multipole Oscillations and Chaos
2.1 Collective and Single Particle Motion
2.2 Dynamically Deformed Billiards
2.3 Introducing Self-Consistency
2.4 Bohr-Mottelson Type Hamiltonians
2.5 The Vlasov Equation and the Test Particle Solution
2.6 Numerical Integration
2.7 Initial Conditions
2.8 Multipole Oscillations
2.9 Single Particle Motion
2.9.1 The Lyapunov Spectrum
2.9.2 Finding the Lyapunov Spectrum via Orthogonalization
2.9.3 Constant Multipole Moments
2.9.4 Evidence for Chaos due to Energy Exchange
2.10 Single Particle Energy Distribution
2.11 Interpretation

3 The Constraint Operator Solution to Quantum Billiards
3.1 Introduction
3.2 The Constraint Operator
3.3 Green's Function Derivation
3.4 Numerical Calculation of Matrix Elements
3.5 Computational Algorithm
3.6 Comparison to an Analytic Case
3.7 Comparison with the Plane Wave Decomposition Method
3.8 Connection with the Finite Difference Method
3.9 Other Operators, Bases, and Boundary Conditions
3.10 Wavelet Bases
3.11 Approximating a Projector

4 The Constraint Function Expansion
4.1 A One-dimensional Example
4.1.1 The Rank of C
4.2 A Two-dimensional Example
4.3 Band Diagonality of C
4.4 Finding the Expansion Coefficients
4.5 Questions About the Constraint Operator

5 Matrix Structure and Eigenvalue Error of C
5.1 The Eigenvalue Spectrum
5.1.1 The 'Temperature' of C
5.2 Band Diagonality
5.2.1 The Wielandt-Hoffman Bound on Eigenvalue Error
5.2.2 Measuring Band Diagonality
5.2.3 Banded Random Matrices
5.3 Separability
5.3.1 Separability and Chaoticity
5.4 Error in Eigenvalue Estimates of Infinite Rank Operators in Finite Dimensional Bases
5.4.1 Truncation and Diagonalization
5.4.2 The Gersgorin Theorem
5.4.3 A Bound of the Eigenvalue Error
5.4.4 Projectors and Matrix Sections

6 Quantum Chaotic Decay
6.1 Finite Basis Approach
6.1.1 A Complete Basis
6.1.2 Non-Convergence

LIST OF REFERENCES

List of Tables

2.1 Values of the largest positive Lyapunov exponents obtained in the full selfconsistent calculations and in the calculations with static external multipole potentials.

List of Figures

1.1 The Lorenz attractor, in the $x$-$y$ plane, for $\sigma = 16$, $b = 4$, and $r = 45.92$, obtained by numerically integrating the Lorenz equations with the fourth-order Runge-Kutta method.
2.1 Time evolution of the collective coordinate quadrupole moment $Q_2$ and the collective momentum quadrupole moment $P_2$ during the ramping stage.
2.2 Time evolution of the collective quadrupole (top) and octupole (bottom) moments for all combinations of static potential and multipole interactions.
2.3 Convergence of the positive Lyapunov exponents with time for the single-particle trajectories of self-consistent nuclear Hamiltonians.
2.4 A comparison of the Lyapunov exponent convergence plots for the anharmonic ($r^6$) potential with a quadrupole interaction, for the static and self-consistent cases.
3.1 The included and excluded regions shown for the desymmetrized stadium.
3.2 The spectrum of $C$ for the desymmetrized stadium, for $m = 124$ and $m = 1004$, where $r/m = p = 0.8570$.
3.3 The convergence of the RMS fractional error in the energy eigenvalues as a function of the number of basis functions used for the disk problem.
3.4 The fractional eigenvalue error as a function of the fractional eigenvalue number for the unit disk with $m = 124$ (boxes) and $m = 316$ (pluses).
3.5 The volume fractional error as a function of the fractional eigenvalue number for the unit disk with $m = 124$ (boxes) and $m = 316$ (pluses).
3.6 The convergence of the RMS fractional error in the energy eigenvalues as a function of the number of grid points used in the numerical integration for the disk problem, using 400 basis functions.
3.7 $\Psi(\vec{r}\,)$ for the first four eigenfunctions over the desymmetrized stadium.
3.8 The distributions of normalized energy spacings for the desymmetrized disk problem (eigenvalues 100-200) and the Wigner distribution.
5.1 The rms value of the $b$th off-diagonal elements of $C$ as a function of $b$, for the disk (upper curve) and stadium (lower curve), demonstrating the band diagonal nature of $C$.

Chapter 1

Introduction

1.1 Chaos

Chaotic systems merit interest both because of their rich variety of behaviors and their prevalence.
In chaotic systems, determinism can lead to complicated motion that appears to be 'random' because it is non-repeating (it has no finite period) and is unpredictable (in the sense that accurate predictions about future behavior can only be made for the very near future). Many natural systems display the hallmark of chaos: a sensitive dependence on initial conditions. A small change in one of these systems can rapidly lead to a big change. This fact was appreciated as long ago as 1882 by Poincaré [1], in his study of celestial motion, and in 1907 by Lyapunov [2], who introduced the quantitative measure of sensitive dependence which we now call the Lyapunov exponents.

Chaotic physical systems are often referred to as non-integrable, as the Hamilton-Jacobi equation cannot be integrated in these cases. Physics has historically concentrated on integrable problems, as these are more tractable. With the advent of modern computers, the simulation of chaotic (non-integrable) systems became feasible. Even so, the simulations are quantitatively valid only for short timespans. John von Neumann, the mathematician and computer pioneer, predicted that computers would eventually reliably predict the weather [3]. This confident statement predates the modern appreciation of exponential sensitivity to initial conditions. The Earth's weather is a good example of a complicated non-linear system, and is quite sensitive to initial conditions, rendering long-term prediction of the weather infeasible.

The simulation and prediction of the weather motivated Lorenz to make a phenomenological model of a simple weather system [4]. This multilinear three dimensional model, now associated with his name, simulates two dimensional flow between a hot surface (the ground) and a cold surface (the sky). The model reduces the problem down to its barest essentials, yet can produce all the 'randomness' that is present in real weather systems, and a very complicated motion can ensue from its three simple nonlinear equations. At the time this was a very surprising result.

The Lorenz system can undergo either regular or chaotic motion, depending on its parameters. The transition of a system from regular to chaotic motion as a parameter is changed is called the 'route to chaos'. Many systems have complicated and interesting routes to chaos. Especially interesting was the discovery, by Feigenbaum [5, 6], of the 'period doubling' route to chaos, a universality class to which many nonlinear systems belong. This was the first use of renormalization group theory in nonlinear dynamics; other fruitful applications have followed.

Perhaps the largest stimulation that the study of chaos received was the development of methods to analyze chaotic behavior in a real system without any prior knowledge about the underlying mechanics of the system [7, 8, 9]. A complete representation of the dynamics of the system in phase space can be made from limited information about the phase space. The influential 1980 paper of Packard et al. [7] brought these methods, and chaos in general, to the attention of the physics community, and many physical systems were shown to contain an underlying chaotic dynamics. For example, the dripping of a faucet [10] was shown to obey a simple chaotic dynamics, through an analysis of the time between drips.
1.1.1 Flows, Maps, and Poincaré Sections

An important point in the study of nonlinear dynamics is that continuous time systems (flows) can be studied by considering discrete time systems (maps) of lower dimension. A lower-dimensional discrete system can always be constructed for a flow because of determinism. For example, a damped, driven oscillator has a two dimensional phase space consisting of the position $q$ and the momentum $p$. A map can be made by keeping track of the momentum $p_i$ every time the position $q$ is zero. Here $p_i$ is the momentum at the $i$th time the position is zero. By plotting $p_{i+1}$ vs. $p_i$, a return plot is constructed; it predicts the value of $p$ the next time that $q$ is zero. It works because the deterministic equations of motion can, in principle, always be integrated forward in time to find the next value. This correspondence is often used to simplify the study of flows, as maps are easier to handle.

1.1.2 Exponential Sensitivity and Lyapunov Exponents

The Lyapunov exponent for a one dimensional iterated system
$$x_{n+1} = f(x_n) \qquad (1.1)$$
can be computed elegantly by a summation over all trajectory points. Consider the linearized equation of motion
$$dx_{n+1} = \frac{\partial f}{\partial x}\bigg|_{x_n} dx_n \qquad (1.2)$$
which defines the dynamics of an infinitesimal displacement $dx$ from the point $x$. The Lyapunov exponent $\lambda$ for the system is defined as
$$\lambda = \lim_{n\to\infty}\lim_{dx\to 0}\frac{1}{n}\log_2\left|\frac{f^n(x+dx)-f^n(x)}{dx}\right| = \lim_{n\to\infty}\frac{1}{n}\log_2\left|\frac{\partial f^n}{\partial x}\right| \qquad (1.3)$$
where $f^n(x)$ denotes the $n$th iterate of $f$. By the chain rule,
$$\frac{\partial f^n}{\partial x} = \prod_{i=1}^{n}\frac{\partial f}{\partial x}\bigg|_{x_i} \qquad (1.4)$$
and
$$\lambda = \lim_{n\to\infty}\frac{1}{n}\log_2\left|\prod_{i=1}^{n}\frac{\partial f}{\partial x}\bigg|_{x_i}\right| = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\log_2\left|\frac{\partial f}{\partial x}\bigg|_{x_i}\right|. \qquad (1.5)$$

The simplest possible example of a dynamical system is
$$\dot{x} = f(x), \qquad (1.6)$$
where the dot indicates a time derivative and the domain and range of the function $f$ are the same. Consider what happens when the system (1.6) is integrated forward in time from two very close positions, $x_0$ and $x_0 + dx$, where $dx$ is small. $f(x+dx)$ can be expressed as
$$f(x+dx) = f(x) + \frac{\partial f}{\partial x}\bigg|_x dx + \frac{\partial^2 f}{\partial x^2}\bigg|_x \frac{(dx)^2}{2} + \ldots \qquad (1.7)$$
When $dx$ is small enough, everything but the linear term in (1.7) can be neglected, resulting in the 'linearized' equation
$$f(x+dx) = f(x) + \frac{\partial f}{\partial x}\bigg|_x dx. \qquad (1.8)$$
(This approach is properly called 'linear stability analysis'. See [11] for examples of when this analysis is insufficient.) We can derive the dynamics of an infinitesimal perturbation $dx$ by considering the difference of $f(x)$ and $f(x+dx)$:
$$\dot{dx} = \frac{\partial f}{\partial x}\bigg|_x dx. \qquad (1.9)$$
By integrating this equation forward in time while we integrate (1.6), we can accurately find the trajectory of the perturbation.

The measure of 'sensitivity to initial conditions' is the Lyapunov exponent. A Lyapunov exponent $\lambda$ is defined for this system as
$$\lambda = \lim_{t\to\infty}\frac{1}{t}\log_2\left[\frac{|dx(t)|}{|dx(0)|}\right]. \qquad (1.10)$$
If $\lambda > 0$, then the perturbation grows exponentially in time; if $\lambda < 0$, then the perturbation diminishes exponentially in time. If $\lambda = 0$, then the size of the perturbation is a polynomial in time. A system is chaotic when it has a Lyapunov exponent that is positive.

Lyapunov exponents can depend on the position in phase space of the initial value $x(0)$. When this is true, equation (1.10) will give a Lyapunov exponent averaged over a trajectory. This is equivalent to a phase-space average Lyapunov exponent.

Systems with more than one variable can be analyzed in much the same way. These systems will have as many Lyapunov exponents as they have variables; the set of Lyapunov exponents is called the Lyapunov spectrum.
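To make (1.5) concrete before moving to several variables, the following sketch (my illustration, not from the thesis) estimates the Lyapunov exponent of the fully chaotic logistic map $f(x) = 4x(1-x)$, for which the exact value is 1 bit per iteration:

```python
import numpy as np

def lyapunov_logistic(n=100_000, x=0.3):
    """Estimate the Lyapunov exponent of f(x) = 4x(1-x) via Eq. (1.5)."""
    total = 0.0
    for _ in range(n):
        # accumulate log2 |f'(x_i)| along the trajectory
        total += np.log2(abs(4.0 * (1.0 - 2.0 * x)))
        x = 4.0 * x * (1.0 - x)
    return total / n

print(lyapunov_logistic())  # converges to 1 bit per iteration
```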
Let $\mathbf{x}$ be a vector, with dynamics given by
$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) \qquad (1.11)$$
where $\mathbf{f}$ is a vector function of the vector $\mathbf{x}$. The infinitesimal displacement $d\mathbf{x}$ is called a tangent vector. Its dynamics are given by
$$\dot{d\mathbf{x}}(t) = J(t)\, d\mathbf{x}(t) \qquad (1.12)$$
where $J(t)$ is the Jacobian matrix of partial derivatives with elements given by
$$J_{ij} = \frac{\partial f_i}{\partial x_j}. \qquad (1.13)$$
The Lyapunov exponents are defined in a way similar to (1.10), as the eigenvalues of the matrix
$$\lim_{t\to\infty}\frac{1}{2t}\log_2\left(J^{\dagger}(t)J(t)\right). \qquad (1.14)$$
The matrix $J^{\dagger}(t)J(t)$ is positive definite, so its logarithm, defined as a matrix polynomial, always exists.

Consider a nonlinear dynamical system with $m$ coordinates. The time evolution of the $m$ dimensional tangent vector $\tilde{x}(t)$ is given by the linearized equations of motion
$$\frac{d\tilde{x}(t)}{dt} = J(t)\,\tilde{x}(t) \qquad (1.15)$$
where $J(t)$ is the Jacobian matrix of partial derivatives evaluated at time $t$. Define the flow matrix $M(t)$ by the equation $\tilde{x}(t) = M(t)\tilde{x}(0)$. The Lyapunov exponents of the system are found by considering the motion of $m$ vectors. Let $X$ denote the matrix with the $m$ tangent vectors as its columns. For simplicity, assume that $X(0)$ is the identity matrix, so that $X(t) = M(t)$. Then the Lyapunov exponents $\lambda_i$ are defined to be
$$\lambda_i = \lim_{t\to\infty}\frac{1}{t}\log_2 \Sigma_{ii} \qquad (1.16)$$
where $\Sigma_{ii}$ is the $i$th singular value of the matrix $M(t)$. (The singular values are defined by the relation $M^T M = U^T \Sigma^2 U$, where $\Sigma$ is diagonal and $U$ is unitary.) Thus $\Sigma_{ii}$ is the amount by which the $i$th tangent vector is stretched, and $\lambda_i$ measures its exponential growth.

1.1.3 Divergence and Attractors

Systems are classified as dissipative or conservative, depending on whether a phase space volume element in the system diminishes in size over time, or remains constant, respectively. If all the points of a region with volume $V$ are integrated forward using (1.12), then the divergence theorem gives us the time derivative of the volume $V$:
$$\dot{V} = \int_V dV\, \nabla\cdot\mathbf{f}. \qquad (1.17)$$
If $\nabla\cdot\mathbf{f}$ is independent of $\mathbf{x}$ (as it is for the Lorenz system), then
$$\dot{V} = (\nabla\cdot\mathbf{f})\,V, \qquad (1.18)$$
and the volume has the exponential behavior $V(t) = \exp(\nabla\cdot\mathbf{f}\; t)\,V(0)$. If $\nabla\cdot\mathbf{f}$ is negative, then the system is dissipative, and the volume element will shrink to zero exponentially fast. With the volume equal to zero, the trajectory of the system must occupy a region of lower dimension. This region is called the attractor, and it can have a fractional dimension, in which case it is called a strange attractor. Stable equilibrium points or surfaces are examples of non-strange attractors. Strange attractors are characterized by two relevant facts:

- The trajectory on a strange attractor has at least one positive Lyapunov exponent.
- A strange attractor has a fractional dimension.

Fractional dimension can be understood by considering the number of cells $N$ of volume $l^d$ in a $d$ dimensional space that are needed to enclose the attractor. As $l \to 0$, $N(l) \propto l^{-D}$, where $D$ is called the capacity dimension. When $D$ is a fraction, the attractor has fractional dimension and is called a fractal. If $\nabla\cdot\mathbf{f}$ is positive, then the system cannot be bound, which precludes any interest in this sort of system.

Conservative systems, in which $\dot{V} = \int_V dV\, \nabla\cdot\mathbf{f} = 0$ and volume elements maintain their size, do not have strange attractors. The physical systems that we will consider are conservative, so we will not discuss dissipative systems, attractors, or fractals in any more detail.
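For the Lorenz system, written out in the next subsection, $\nabla\cdot\mathbf{f}$ is indeed independent of $\mathbf{x}$, so (1.18) holds exactly. A quick symbolic check (my sketch, not the thesis's code):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
sigma, r, b = sp.symbols('sigma r b', positive=True)
f = [sigma*(y - x), -x*z + r*x - y, x*y - b*z]      # Lorenz right-hand side
div = sum(sp.diff(fi, v) for fi, v in zip(f, (x, y, z)))
print(sp.simplify(div))  # -b - sigma - 1: constant and negative, so dissipative
```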
1.1.4 An Example: the Lorenz System

The Lorenz system, discussed in section 1.1, typifies dissipative chaotic systems. The model consists of the three variables $x$, $y$, and $z$, whose dynamics are given by the equations
$$\frac{dx}{dt} = \sigma(y-x), \qquad \frac{dy}{dt} = -xz + rx - y, \qquad \frac{dz}{dt} = xy - bz, \qquad (1.19)$$
where $b$, $r$ and $\sigma$ are positive parameters. Fig. 1.1 shows the Lorenz attractor (projected onto the $x$-$y$ plane) for $\sigma = 16$, $b = 4$, and $r = 45.92$. The plot was generated by numerically integrating the Lorenz equations forward in time. The trajectory is clearly non-periodic.

Figure 1.1: The Lorenz attractor, in the $x$-$y$ plane, for $\sigma = 16$, $b = 4$, and $r = 45.92$, obtained by numerically integrating the Lorenz equations with the fourth-order Runge-Kutta method.

1.1.5 Metric Entropy

Metric entropy (also called Kolmogorov entropy, K-entropy, Kolmogorov-Sinai entropy, and KS entropy) is a measure of the rate at which information is destroyed (or created, depending on your perspective) [12]. Knowledge of a chaotic system's initial conditions allows predictions about the system that become increasingly worse with time; it is in this sense that information can be lost [13]. On the other hand, chaotic dynamical systems have non-repeating sequences, so it is apparent that they are making information in some sense [14].

A derivation of the metric entropy $K$ that is also a practical scheme for its computation [15, 16] is as follows. The information entropy $I$ is defined by
$$I = -\sum_{i=1}^{m} p_i \log_2 p_i \qquad (1.20)$$
where the sum runs over all $m$ possible states of the system. It is equal to 0 if there is perfect information ($p_i = \delta_{ik}$ for some $k$) and equal to $\log_2 m$ if there is no information ($p_i = 1/m$).

Assume the phase space is divided into cells of volume $l^d$, where $d$ is the dimension of the phase space and $l$ is the length of a side of a cell. Assume also that time is discrete, that is, $t = n\tau$ for integer $n$. Define $p_{i_1 \ldots i_n}$ to be the joint probability that the state vector of the system is in the $i_1$th cell at time $t = \tau$, the $i_2$th cell at time $t = 2\tau$, and so on, up to the $i_n$th cell at time $t = n\tau$. Thus $p_{i_1 \ldots i_n}$ is the probability that the system will evolve along the particular trajectory $i_1 \ldots i_n$, where the precision of location is $l$. The information (1.20) evaluated over all possible trajectories,
$$K_n = -\sum_{i_1 \ldots i_n} p_{i_1 \ldots i_n} \log_2 p_{i_1 \ldots i_n}, \qquad (1.21)$$
gives the change in the information in a system from time $t = 0$ to $t = n\tau$. The metric entropy $K$ is defined as the average rate of loss of information
$$K = \lim_{\tau\to 0}\lim_{l\to 0}\lim_{n\to\infty}\frac{1}{n\tau}\sum_{k=0}^{n-1}\left[K_{k+1} - K_{k}\right] = -\lim_{\tau\to 0}\lim_{l\to 0}\lim_{n\to\infty}\frac{1}{n\tau}\sum_{i_1 \ldots i_n} p_{i_1 \ldots i_n} \log_2 p_{i_1 \ldots i_n}. \qquad (1.22)$$

$K$ can be connected to the Lyapunov spectrum. It is the positive Lyapunov exponents that are responsible for the system spreading into new cells. A positive exponent $\lambda$ will cause spreading in the direction of its associated tangent vector; if we know that a system is in a certain cell initially, then it could be in a region of length $\exp(\lambda n\tau)\, l$ after $n$ timesteps. This gives $K = \lambda$. In general,
$$K = \sum_{\lambda_i > 0} \lambda_i \qquad (1.23)$$
where $\{\lambda_i\}$ is the Lyapunov spectrum of the system, and the sum runs over only the positive exponents. If the Lyapunov exponents are position dependent, then a phase space average must be introduced as well, changing (1.23) to
$$K = \left\langle \sum_{\lambda_i > 0} \lambda_i \right\rangle. \qquad (1.24)$$
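Since the Lorenz system has a single positive Lyapunov exponent, its metric entropy $K$ is just that exponent. The sketch below estimates it at the parameters of Fig. 1.1 by repeatedly renormalizing the separation of two nearby trajectories (Benettin's scheme); the scheme and all numerical values are my choices, not taken from the thesis:

```python
import numpy as np

def lorenz(u, sigma=16.0, r=45.92, b=4.0):
    x, y, z = u
    return np.array([sigma*(y - x), -x*z + r*x - y, x*y - b*z])

def rk4_step(f, u, dt):
    k1 = f(u); k2 = f(u + 0.5*dt*k1)
    k3 = f(u + 0.5*dt*k2); k4 = f(u + dt*k3)
    return u + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

dt, nsteps, d0 = 0.001, 200_000, 1e-8
u = np.array([1.0, 1.0, 1.0])
v = u + np.array([d0, 0.0, 0.0])        # nearby trajectory
total = 0.0
for _ in range(nsteps):
    u, v = rk4_step(lorenz, u, dt), rk4_step(lorenz, v, dt)
    d = np.linalg.norm(v - u)
    total += np.log2(d / d0)
    v = u + (d0 / d) * (v - u)          # rescale the separation back to d0
lam_max = total / (nsteps * dt)         # largest exponent, in bits per unit time
print(lam_max)                          # also the metric entropy K of Eq. (1.23)
```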
1.1.6 Fourier Spectra and Correlations

Power Spectra

The power spectrum of a time-varying variable $x(t)$ is given by $P(\omega) = |x(\omega)|^2$, where
$$x(\omega) = \int_0^\infty e^{i\omega t}\, x(t)\, dt. \qquad (1.25)$$
Multiply periodic trajectories $x(t)$, which are not chaotic, have power spectra that are non-zero at a finite set of points. In reality, the finite extent of the integral in (1.25) makes these spectra appear to be a finite set of spikes, with a small amount of background noise. Chaotic trajectories, on the other hand, have continuous power spectra, the result of the non-periodic nature of chaotic motion. These spectra can often be qualitatively distinguished from multiply periodic motion. The power spectrum of a trajectory can be used as an indicator of chaoticity, and is especially useful in high-dimensional cases where calculation of the Lyapunov spectrum is numerically infeasible (as the computational effort is an exponential function of the number of dimensions in the system).

Autocorrelations

In a one dimensional map system with dynamics given by $x_{i+1} = f(x_i)$, temporal correlations between the iterates can be computed as
$$C(m) = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} x_i x_{i+m} - \langle x\rangle^2. \qquad (1.26)$$
Here $\langle x\rangle$ is the average of $x_i$ over all iterates. In chaotic systems, $C(m)$ can decay exponentially or with a power law [13, 17], while non-chaotic systems have correlations that are constant or periodic as a function of $m$. It has been proven [18] that in Hamiltonian systems with $N$ degrees of freedom $C(m) \propto m^{(1-N)/2}$. The slow decay of the correlations in this system (a power law rather than an exponential) is seemingly at conflict with the uncertainty inherent in chaotic dynamics. However, correlations actually contain comparatively little information, for example, the fact that the system is limited to within a certain volume in phase space.

1.1.7 Hamiltonian Chaos

The dynamics of Hamiltonian systems have some unique characteristics. The nuclear systems discussed here are Hamiltonian systems, so I will describe their interesting aspects [14, 19, 20].

Hamiltonian systems are specified by a scalar function $H(\mathbf{p}, \mathbf{q}, t)$ of the 'momentum' $\mathbf{p}$, the 'position' $\mathbf{q}$, and the time $t$. The vectors $\mathbf{p}$ and $\mathbf{q}$ of size $n$ together describe the state of the system, and are said to exist in phase space. The dynamics of the system are defined by Hamilton's equations
$$\dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}, \qquad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}. \qquad (1.27)$$
When $H$ is not explicitly time dependent, it is a constant of the motion:
$$\dot{H} = \dot{\mathbf{q}}\cdot\frac{\partial H}{\partial \mathbf{q}} + \dot{\mathbf{p}}\cdot\frac{\partial H}{\partial \mathbf{p}} = \frac{\partial H}{\partial \mathbf{p}}\cdot\frac{\partial H}{\partial \mathbf{q}} - \frac{\partial H}{\partial \mathbf{q}}\cdot\frac{\partial H}{\partial \mathbf{p}} = 0. \qquad (1.28)$$
In physical systems, $H(\mathbf{p}, \mathbf{q}, t)$ is identified with energy.

We can write (1.27) in the same form as (1.12), where $\mathbf{x}$ is the partitioned vector
$$\mathbf{x} = \begin{pmatrix} \mathbf{p} \\ \mathbf{q} \end{pmatrix} \qquad (1.29)$$
and the vector function $\mathbf{f}$ is
$$\mathbf{f} = S\,\frac{\partial H}{\partial \mathbf{x}}, \qquad (1.30)$$
where the matrix $S$ has the simple partitioned form
$$S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \qquad (1.31)$$
where all of the blocks are of size $n \times n$. The matrix $S$ enables Hamiltonian dynamics to be written using partitioned matrices and vectors, reducing the number of equations. The manipulation of equations involving $S$ can be simplified by using the two relations $S^2 = -1$ and $S^T = -S$.

Hamiltonian systems are symplectic: they preserve the quantity $d\mathbf{x}^T S\, d\mathbf{x}$, the differential symplectic area, where $d\mathbf{x}$ is an infinitesimal displacement obeying (1.12) (this is the differential form of Poincaré's integral invariant law). This can be demonstrated by finding the time derivative of the differential symplectic area,
$$\frac{d}{dt}\left(d\mathbf{x}^T S\, d\mathbf{x}\right) = \dot{d\mathbf{x}}^T S\, d\mathbf{x} + d\mathbf{x}^T S\, \dot{d\mathbf{x}} = d\mathbf{x}^T\left(J^T S + SJ\right) d\mathbf{x} = 0, \qquad (1.32)$$
which is zero because
$$J^T S + SJ = \left(S\,\frac{\partial^2 H}{\partial \mathbf{x}^2}\right)^T S + S\left(S\,\frac{\partial^2 H}{\partial \mathbf{x}^2}\right) = \frac{\partial^2 H}{\partial \mathbf{x}^2} - \frac{\partial^2 H}{\partial \mathbf{x}^2} = 0, \qquad (1.33)$$
where we have used the fact that $\partial^2 H/\partial \mathbf{x}^2$ is hermitian.
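The condition (1.33) is easy to verify numerically for a concrete Hamiltonian. The sketch below (my example, not the thesis's) does so for a pendulum, $H = p^2/2 + (1 - \cos q)$, with $\mathbf{x} = (p, q)$:

```python
import numpy as np

S = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # Eq. (1.31) with n = 1 and x = (p, q)

def hessian(q):
    # d^2 H / dx^2 for H = p^2/2 + (1 - cos q)
    return np.array([[1.0, 0.0],
                     [0.0, np.cos(q)]])

for q in np.linspace(-3.0, 3.0, 7):
    J = S @ hessian(q)                # Jacobian of the flow, J = S d^2H/dx^2
    assert np.allclose(J.T @ S + S @ J, 0.0)   # Eq. (1.33)
```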
The Lyapunov spectra of Hamiltonian systems have a distinct pattern: there are $2n$ eigenvalues with $n$ distinct magnitudes, forming a 'mirror image' about zero. To see this, consider the differential symplectic area, which is independent of time:
$$d\mathbf{x}^T(0)\, S\, d\mathbf{x}(0) = d\mathbf{x}^T(t)\, S\, d\mathbf{x}(t) = d\mathbf{x}^T(0)\, J^T S J\, d\mathbf{x}(0), \qquad (1.34)$$
which must be true for arbitrary $d\mathbf{x}(0)$. Thus the Jacobian matrix $J$ must satisfy the equations
$$J^T S J = S \qquad (1.35)$$
and
$$J^{-1} = S^{-1} J^T S, \qquad (1.36)$$
the latter of which is derived from the former. The last equation states that the inverse of $J$ and the transpose of $J$ are related by a similarity transform, and therefore have the same eigenvalue spectrum. This implies that all of the eigenvalues of $J$ must come in pairs like $a$ and $a^{-1}$. From (1.14), the Lyapunov exponents are equal to the logarithm of the magnitude of these eigenvalues and therefore come in pairs like $\log_2 a$ and $-\log_2 a$.

Poincaré's Recurrence Theorem

The ergodicity of time-independent Hamiltonian systems with bounded orbits is manifested by the recurrence theorem: any phase space trajectory will come arbitrarily close to its initial position. This follows from considering the flow of the points within a sphere of radius $\epsilon$ in phase space, centered about an arbitrary point. If the flow is not confined in this sphere, it flows outward, and its volume is constant as it does so. Since all orbits are bounded, they are confined to some region of phase space. The total volume swept out by the flow from the small sphere cannot exceed the total volume of this region. Therefore the flow returns to within the sphere.

1.1.8 Hierarchy of Chaos

Hamiltonian systems can have varying degrees of chaos. In order of 'increasing randomness', the types of chaotic systems of interest to us are:

Ergodic systems are systems in which phase space averages are equivalent to time averages, as any trajectory in phase space comes arbitrarily close to any point in phase space. This weakest form of randomness is sometimes used to justify statistical mechanics. Actually, ergodicity is such a weak condition that stronger claims can be made for statistical mechanics [21].

K-systems have one or more positive Lyapunov exponents, or equivalently, have positive metric (Kolmogorov) entropy. This is the definition associated with chaoticity.

C-systems have one or more positive Lyapunov exponents for every initial condition, or equivalently, have positive metric entropy everywhere. The stadium and Sinai billiards are C-systems.

There are some other definitions of chaoticity of systems, namely Mixing systems and Bernoulli systems, but they involve ideas that are not of use to us.

The hierarchy of quantum chaotic systems is not so clear, and is still a matter of debate. Regular systems have Poisson level statistics, completely chaotic systems have GOE level statistics, and anything in between has a level distribution between those distributions. Whether or not there is a canonical 'path' from one extreme to the other has not yet been determined.

1.1.9 Billiard Models

A billiard system consists of a ball bouncing around in a box, with elastic collisions so that the angle of incidence equals the angle of reflection. These systems are simple and can be easily investigated numerically, as only the collisions between the ball and the retaining wall need be considered. At the same time, these systems have a rich variety of dynamics that can be regular or fully chaotic or anything in between. They have been incorporated into more complicated models with good results.
If the momentum of a billiard in a two dimensional rectangular box is represented by $\mathbf{p}(i)$ after the $i$th bounce, then
$$\mathbf{p}(i+1) = M\,\mathbf{p}(i) \qquad (1.37)$$
where the matrix $M$ is given by
$$M = \begin{pmatrix} \pm 1 & 0 \\ 0 & \mp 1 \end{pmatrix}. \qquad (1.38)$$
The momentum vector only visits the four points
$$(p_1(0), p_2(0)),\quad (-p_1(0), p_2(0)),\quad (p_1(0), -p_2(0)),\quad (-p_1(0), -p_2(0)). \qquad (1.39)$$
Most of the phase space remains unexplored, and the dynamics are regular, not chaotic.

The Sinai billiard model adds a circular scattering center to the rectangular domain. The matrix $M$ for reflections off of the circle is [22]
$$M = \frac{1}{R^2}\begin{pmatrix} q_2^2(i) - q_1^2(i) & -2\, q_1(i)\, q_2(i) \\ -2\, q_1(i)\, q_2(i) & q_1^2(i) - q_2^2(i) \end{pmatrix} \qquad (1.40)$$
where $R^2 = q_1^2 + q_2^2$. The Sinai billiard system is not limited to a finite set of points in phase space, but instead explores the entire space. The phase space density is uniform within regions allowed by geometry and energy conservation. There is sensitive dependence on initial conditions, and a positive metric entropy.

Billiard Decay

A billiard can be allowed to 'decay' out of its container by creating a small hole in the container. This changes the bound system into an unbound system. This was studied by Bauer and Bertsch [22], revealing that decay was quantitatively different for chaotic and regular billiards. Chaotic billiards have a probability of having decayed that is exponential in time, while that of regular billiards is a power law in time. The different decay laws for the chaotic and regular cases can be understood from phase space considerations. In the chaotic case, the phase space density 'leaks out' steadily, resulting in an exponential decay law. In the regular case, a power law holds for the decay.
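A small time-stepped simulation makes the contrast between the two billiards concrete. The sketch below implements the wall reflections of (1.38) and the scatterer reflection of (1.40), written in the equivalent form $\mathbf{p} \to \mathbf{p} - 2(\mathbf{p}\cdot\hat{n})\hat{n}$; the fixed-step integration and the geometry are simplifications of mine (exact event-by-event collision finding would be used in practice):

```python
import numpy as np

def step(q, p, dt, R=0.25):
    """One step in the unit square [-0.5, 0.5]^2 with a disk of radius R at the origin."""
    q = q + dt * p
    for i in (0, 1):                    # specular wall reflections, cf. Eq. (1.38)
        if abs(q[i]) > 0.5:
            q[i] = np.sign(q[i]) - q[i]
            p[i] = -p[i]
    if q @ q < R * R:                   # reflection off the scatterer, cf. Eq. (1.40)
        n = q / np.linalg.norm(q)
        p = p - 2.0 * (p @ n) * n
    return q, p

q, p = np.array([0.4, 0.1]), np.array([0.3, 0.7])
for _ in range(100_000):
    q, p = step(q, p, 0.001)
# With R = 0 the momentum only visits the four points of Eq. (1.39);
# with R > 0 it wanders over the whole momentum shell.
```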
1.2 Quantum Chaos

Quantum chaos is the study of quantum systems that are chaotic in the classical limit; these systems are distinguished not by sensitivity to initial conditions but by statistics of eigenvalues and eigenvectors [34, 23, 24, 25].

Classical chaos limited the development of semiclassical techniques early in the development of quantum mechanics; it prevented the quantization of the helium atom, for example. Semiclassical methods use classical trajectories, and chaos complicates the trajectories too much [25].

In 1970, Gutzwiller derived a semiclassical formula, the 'trace formula', that expresses the quantum energy eigenvalues of a chaotic system via a sum over the periodic orbits of the system [27]. Periodic orbits exist in chaotic systems, but are rare and unstable, and the trace formula is tricky to implement. Nevertheless, this reawakened the semiclassical agenda of the pioneers of quantum physics.

As classical chaos became more appreciated, the question of how chaotic dynamics arise from the underlying quantum mechanics was asked. The fact that quantum mechanical operators are linear and chaotic dynamical equations are nonlinear led some to suppose that the two would be irreconcilable. This is wrong, as semiclassical results can bridge both domains successfully, but it motivated consideration of the quantum physics of classically chaotic systems.

These systems, which we call quantum chaotic systems, have several identifying characteristics. The energy level spacings (the differences between two adjacent energy eigenvalues) of quantum chaotic systems have a probability distribution identical with the Wigner distribution. Regular quantum systems have energy level spacings that follow the Poisson distribution. The main difference between these two distributions is that the Wigner distribution is zero at the origin, while the Poisson distribution is equal to unity there.

The phenomenon of 'level repulsion' which happens in quantum chaotic systems was first identified by Wigner in the 1950s. The energy spectra of complicated nuclei, beyond the first hundred or so states, are unpredictable. This motivated the statistical treatment of nuclear spectra. The key observation was that the level spacings did not obey a Poisson distribution, as would be expected if the energy levels were randomly distributed. Rather, the observed distributions were close to what we now call the Wigner distribution. The Wigner distribution of energy eigenvalues is expected from matrices that are selected randomly with a probability that is a Gaussian function of the euclidean norm of the matrix.

In 1977, Berry [26] demonstrated that regular systems have energy spectra with a Poisson level spacing distribution. It was later demonstrated that chaotic systems have energy spectra with a Wigner-like level spacing distribution. This called attention to the broad applicability of the earlier random matrix theory results to quantum chaotic systems.

1.2.1 Energy Level Statistics

Quantum chaotic spectra do not obey a Poisson distribution, $p(s) = e^{-s}$, as would be expected if the energy levels were randomly distributed. Rather, the observed distributions are close to what we now call the Wigner distribution. The Wigner distribution of energy eigenvalues is expected from matrices that are selected randomly with a probability that is a Gaussian function of the euclidean norm of the matrix.

1.2.2 Random Matrix Theory

The central idea of what became known as random matrix theory was that complicated quantum systems have Hamiltonian matrices with statistics indistinguishable from those of matrices drawn at random from an ensemble. In this way we can consider complicated Hamiltonian matrices 'random' [29]. There are some subtle distinctions about the ensembles from which the matrices are drawn and the energy spectra that are analyzed.

GOE

The Gaussian Orthogonal Ensemble, or GOE, is the one in which we will be interested. Hamiltonians in this ensemble are invariant under rotations in Hilbert space, so that they have no 'preferred' basis. Wigner derived the eigenvalue distribution mentioned above from the assumption that the random matrices belonged to this ensemble. This distribution is easily found for $2 \times 2$ matrices: assume that the probability of a matrix $M$ being chosen from the ensemble is a Gaussian function of $\mathrm{Tr}(M^2)$ (which is basis independent, being equal to the sum of the squared eigenvalues). Then the probability $p(s)$ of the spacing between the two eigenvalues being equal to $s$ is given by
$$p(s) = \frac{\pi s}{2}\exp\left(-\pi s^2/4\right). \qquad (1.41)$$

GUE

The other ensemble of interest is the Gaussian Unitary Ensemble, or GUE. It applies in cases where there is a magnetic field, which destroys time reversal symmetry. The level spacing distribution that it leads to is
$$p(s) = \frac{32\, s^2}{\pi^2}\exp\left(-4 s^2/\pi\right). \qquad (1.42)$$
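The $2 \times 2$ result (1.41) can be checked by direct sampling. The sketch below (my implementation) draws $2 \times 2$ real symmetric matrices with Gaussian entries, using the GOE convention that the off-diagonal variance is half the diagonal variance, normalizes the spacings to unit mean, and compares the histogram with the Wigner form:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a = rng.normal(size=n)                          # diagonal entries ~ N(0, 1)
b = rng.normal(size=n)
c = rng.normal(scale=np.sqrt(0.5), size=n)      # off-diagonal ~ N(0, 1/2)
s = np.sqrt((a - b)**2 + 4.0*c**2)              # eigenvalue spacing of [[a, c], [c, b]]
s /= s.mean()                                   # normalize to unit mean spacing

hist, edges = np.histogram(s, bins=50, range=(0.0, 4.0), density=True)
mid = 0.5*(edges[1:] + edges[:-1])
wigner = 0.5*np.pi*mid*np.exp(-0.25*np.pi*mid**2)   # Eq. (1.41)
print(np.abs(hist - wigner).max())                  # small deviation
```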
1.2.3 'Level Repulsion' and Level Spacing Distribution

The principal qualitative difference between the Wigner and Poisson distributions is at the origin, where the former is zero and the latter is unity. The Wigner distribution is said to describe 'level repulsion', in that the levels avoid each other and are never degenerate.

It is possible to heuristically derive the Poisson and Wigner distributions by using the idea of level repulsion [30, 31]. For the nearest neighbor spacing distribution $P(s)$,
$$P(s)\,ds = P(1 \in ds \mid 0 \in s)\, P(0 \in s) \qquad (1.43)$$
where $P(n \in s)$ is the probability that the interval $s$ contains $n$ levels and $P(n \in ds \mid m \in s)$ is the conditional probability that the interval of length $ds$ contains $n$ levels, when that of length $s$ contains $m$ levels. The second factor, $P(0 \in s)$, the probability that the spacing is larger than $s$, is equal to $\int_s^\infty dz\, P(z)$. Assume that the first factor is equal to $r(s)\,ds$, which represents a 'repulsion'. Then
$$P(s) = r(s)\int_s^\infty dz\, P(z), \qquad (1.44)$$
which can easily be solved to find
$$P(s) \propto r(s)\exp\left(-\int_0^s dz\, r(z)\right). \qquad (1.45)$$
For $r(s)$ equal to a constant, $P(s) \propto \exp(-s)$, which is the Poisson distribution. For $r(s) = s$, $P(s) \propto s\exp(-s^2)$, which is the Wigner distribution.

1.2.4 Analysis of Energy Levels

The energy spectra must be considerably massaged before they are analyzed. The analysis can only be performed on sets of eigenvalues corresponding to states with the same parity. The eigenvalue distributions themselves are not studied, but rather the distribution of eigenvalue spacings. This is like studying the derivative of the distribution function rather than the function itself. The spacings are normalized so that the mean spacing is equal to one; this serves to make the distribution universal by removing a free parameter. The resulting normalized spacings are binned, and the numbers of spacings in each bin are themselves normalized to sum to one, so that they reflect the probabilities of the spacings occurring in the bins. These probability histograms are compared to theoretical probability distributions.

The distribution of energy level spacings is now central to quantum chaos. Regular quantum systems have Poisson distributions, as has been proven [32, 33], while having a GOE distribution is now an accepted criterion to be a quantum chaotic system [34].

Chapter 2

Nuclear Multipole Oscillations and Chaos

2.1 Collective and Single Particle Motion

This chapter is based on work I did with Bauer, Schuck, and Zelevinsky, originally published in Physical Review Letters [35]. Many-body systems with strong interactions, notably finite Fermi systems such as nuclei [36], can be well-represented by considering single particles moving in static potentials. When there is a physically important 'collective' coordinate, such as the total multipole moment of a nucleus, it is possible to represent this with some weaker interaction between the single particles and the collective coordinate. The origin of dissipation in these systems is not yet completely understood; in particular, the interaction of single-particle forces and collective forces is an open question.

Giant multipole moments in nuclei are harmonically oscillating collective moments; it has been hypothesized that the energy of these oscillations would be dissipated, through chaotic single particle motion, into thermal energy of the single particles. Our computations show that this picture is invalid, and that single particle chaos need not lead to dissipation of the collective coordinate.

2.2 Dynamically Deformed Billiards

A schematic model that represents the interaction between a collective multipole moment and the individual nucleons was developed by Blocki et al. [39, 40].
Nucleons are represented by a classical gas of particles, and they are contained in a billiard undergoing periodic deformations (which represent the collective multipole force). The deformations are multipole dependent and oscillate with frequencies much smaller than typical single particle frequencies. For octupole and higher deformations the kinetic energy of the particles increases and the particles move chaotically. However, this model is not consistent in that it has a 'collective coordinate' (the deformation of the wall) that is independent from, rather than a result of, the single particle coordinates.

Blocki et al. [39, 40] consider a classical gas of particles contained in a deformed billiard. The gas of particles initially has a momentum distribution equal to that of a Fermi gas. The deformation of the walls of the container oscillates in time, with a frequency much smaller than a typical single particle frequency. They study the increase of kinetic energy of the particles as a function of time, and find that for ellipsoidal shape deformations the particles act as a classical Knudsen gas.

2.3 Introducing Self-Consistency

These models are not self-consistent in that they do not conserve energy: the particles gain energy from their collisions with the wall, but the wall itself remains unaffected by these collisions. We introduce self-consistency into the problem by devising a nuclear Hamiltonian that features a multipole interaction but conserves the total energy of the system. We use the Bohr-Mottelson [41] type of interaction, with a static potential of the $r^2$ or $r^6$ form and a quadrupole or octupole interaction through which the nucleons can exchange energy, as studied by Stringari et al. [42, 43, 44]. Thus our model conserves energy, unlike the earlier models.

We also investigate the case with static $r^2$ or $r^6$ potentials and static quadrupole or octupole moments, which are similar to the Bohr-Mottelson model but do not allow the exchange of energy between nucleons. These cases provide comparison with the models of Blocki et al.

2.4 Bohr-Mottelson Type Hamiltonians

The single particle Hamiltonian $\mathcal{H}$ is
$$\mathcal{H} = H_0 + V^{(l)}(\mathbf{r},t) = \frac{p^2}{2m} + V_0 + V^{(l)}(\mathbf{r},t) \qquad (2.1)$$
where $V^{(l)}(\mathbf{r},t)$ is the potential leading to the multipole-multipole force and $V_0$ is the static potential. The multipole-multipole potential is taken to be
$$V^{(l)}(\mathbf{r},t) = \mu_l\, q_l(\mathbf{r})\, Q_l(t), \qquad (2.2)$$
as in [42, 45, 43]. The static potential is taken as $\frac{1}{2}m\omega_0^2 r^2$, resulting in the Bohr-Mottelson Hamiltonian [41], or as $\frac{1}{6}m\omega_0^2 r^6$, to investigate non-harmonic potentials. The coupling constants $\mu_l$ are calculated using a self-consistent normalization condition [41, 44],
$$\mu_2 = \frac{3\, m\omega_0^2}{A\langle r^2\rangle}, \qquad \mu_3 = \frac{15\, m\omega_0^2}{A\langle r^4\rangle}, \qquad (2.3)$$
where $A$ is the mass number of the nucleus. $q_l(\mathbf{r})$ is given by
$$q_2(\mathbf{r}) = r_x r_y, \qquad q_3(\mathbf{r}) = r_x r_y r_z. \qquad (2.4)$$

2.5 The Vlasov Equation and the Test Particle Solution

The multipole moments $Q_l(t)$ are given by
$$Q_l(t) = \frac{1}{(2\pi\hbar)^3}\int d^3r\, d^3p\; q_l(\mathbf{r})\, f(\mathbf{r},\mathbf{p},t). \qquad (2.5)$$
$f(\mathbf{r},\mathbf{p},t)$ is the one-body phase space distribution function, which is the Wigner transform of the one-body density. We treat these problems in a semi-classical approximation by a Wigner transform of the von Neumann equation of motion for the density matrix, $i\hbar\dot{\rho} = [\mathcal{H}, \rho]$, to obtain a Vlasov equation $\partial_t f = \{\mathcal{H}, f\}$. We then solve the Vlasov equation in the test particle method [46, 47] using a fourth-order Runge-Kutta algorithm with a typical timestep of 1 fm/c.

2.6 Numerical Integration

Hamilton's equations are integrated forward in time using a fourth-order Runge-Kutta algorithm [48].
The basic idea behind this algorithm is that Euler's method for solving $\dot{x} = f(x)$,
$$x(t + dt) = x(t) + \dot{x}(t)\,dt, \qquad (2.6)$$
can be improved by using not just information about $\dot{x}$ at the beginning of the interval $dt$, but also at points within the interval. Conceptually, the Runge-Kutta algorithm takes 'trial' steps inside the interval, and uses $\dot{x}$ at these new points to more accurately predict $x(t + dt)$ (in actuality, these steps are not carried out). The time steps in our implementation were all of order 1 fm/c. That the numerical calculation is fully self-consistent is shown by the fact that the system conserves total energy to better than 0.1%.
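A minimal sketch of such an integration, for a single test particle in the Hamiltonian (2.1) with $V_0 = \frac{1}{2}m\omega_0^2 r^2$ and $l = 2$, is given below. Holding $\mu_2 Q_2$ fixed is a simplification of mine (the full calculation evolves $Q_2(t)$ self-consistently), and the parameter values are arbitrary; the last line checks the energy-conservation diagnostic quoted above:

```python
import numpy as np

m, w0, muQ = 1.0, 1.0, 0.1          # mu_2 * Q_2 held fixed (simplification)

def deriv(s):
    # s = (x, y, z, px, py, pz); H = p^2/2m + m w0^2 r^2 / 2 + muQ * x * y
    r, p = s[:3], s[3:]
    force = -m*w0**2*r - muQ*np.array([r[1], r[0], 0.0])
    return np.concatenate([p/m, force])

def rk4(s, dt):
    # four 'trial' evaluations inside the interval, as described above
    k1 = deriv(s); k2 = deriv(s + 0.5*dt*k1)
    k3 = deriv(s + 0.5*dt*k2); k4 = deriv(s + dt*k3)
    return s + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

def energy(s):
    r, p = s[:3], s[3:]
    return 0.5*(p @ p)/m + 0.5*m*w0**2*(r @ r) + muQ*r[0]*r[1]

s = np.array([1.0, 0.0, 0.5, 0.0, 0.8, 0.0])
E0 = energy(s)
for _ in range(100_000):
    s = rk4(s, 0.01)
print(abs(energy(s) - E0) / abs(E0))   # tiny relative energy drift
```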
For V0 = %mw§r2 and I = 3, f 0 0 0 J _ —mwg —#3Q32 -#aQay For V0 = §mwgr6 and I: 2, 0 f 0 0 J: —m(.vgr2 0 \ 0 COO GOOD .1 oocccalw coccSIHc oooaluoo ¥ __/ ooocoa|~ cccoilwo cecal—co ¥ __/ coco coco occocslw cocoalwc oooalwoc \_ (2.3) (2.9) (2.10) For V0 = imw3r2 and I = 3, ( 0 0 0 i 0 0) 0 0 0 0 i 0 0 0 0 0 ~0 l J: m 2.11 -mw8r‘ ‘l‘aQsZ was 0 o 0 ( ) -,u3Q32 —m(.vgr4 —,u3Q3z 0 0 0 A ‘flsty ‘flstfE —-mw8r4 0 0 0 i Lyapunov Spectra with Time-Independent Jacobians When I = 2 and V0 = '12" 3r2, the Jacobian is independent of all position coordinates. Thus, when Q; is constant, the Jacobian is constant, and ( 1.12) can. be immediately solved to find the trajectory matrix M (t) = e‘", and we can analytically determine the presence or absence of chaos. To do this, we need to consider the eigenvalue spectrum of J. Physical systems with time-independent Hamiltonians, kinetic energy given by p2 / 2m, and no momentum dependence in the potential have Jacobians with the simple form 0 #1 J— (A 0). (2.12) where the matrix A is defined by A;,- = fi. The eigenvalues of J can be related to those of A by using the identity for determinants of partitioned matrices [78], giving |J—A|=(Ar(A—V|. (2m) Thus if the eigenvalues of A are negative, those of J are pure imaginary. The eigen- values of e" thus have unit magnitude, and the singular values of M (t) /t are unity. 33 The Lyapunov spectrum is entirely composed of zeros, and the system is not chaotic. If any eigenvalue of J is nonnegative, then the system must be chaotic. For V0 = % 3r2 and I = 2, A has eigenvalues equal to ~00”, —w2 — qu/m, and —w2 + Aqg/m. For the physical regime we are interested in, with the effect of the static potential dominating that of the multipole interaction, there is no chaos. 2.9.1 The Lyapunov Spectrum To investigate single particle motion, we test for sensitive dependence on initial con- ditions by finding the Lyapunov exponents. These exponents are a measure of the exponential divergence (or convergence) of initially nearby trajectories in phase space. A dynamical system has a positive Lyapunov exponent only if it is chaotic, and the lack of a positive Lyapunov exponent indicates the absence of chaos. To calculate the Lyapunov exponents, we consider the motion of infinitesimal displacements from a trajectory in phase space, called tangent vectors. These dis- placements obey the linearized equations motion dx(t) _ T — J(t)x(t) (2.14) where X(t) is the six-dimensional tangent vector and J (t) is the Jacobian matrix of partial derivatives evaluated along the trajectory in phase space. The ith Lyapunov exponent A,- is defined as ,=,m10 IIXe(t)II) " ‘l-'°°t132(llx.-(0)II ’ (2'15) where "x” denotes the norm of x and the tangent vectors x;(t),i = 1,...,6, are orthogonal. The logarithm base two is used so that the units of the exponents are 34 bits per time. 2.9.2 Finding the Lyapunov Spectrum via Orthogonalization We must adapt (1.12) to represent Lyapunov exponents for finite time, A.-(t), then consider limb",o A,-(t) = Ag. Exponents that are in reality zero will be represented by a A(t) that goes to zero like t'l. We follow the practical scheme of Wolf et al. [49, 50, 51] to numerically integrate the linearized equation for six orthogonal tangent vectors. Orthonormality of the vec- tors is maintained by applying the Gram - Schmidt algorithm between every timestep. The first tangent vector seeks the direction in which it can grow most rapidly, and determines the largest Lyapunov exponent. 
The second tangent vector cannot grow in the direction of the first, since at every timestep its component in that direction is eliminated. Thus it seeks the direction of largest growth in the space orthogonal to the first tangent vector. In general, the $i$th tangent vector seeks the direction of most rapid growth in the subspace orthogonal to the $i-1$ previous tangent vectors. The re-normalization of the vectors that occurs in the Gram-Schmidt algorithm keeps them bounded and minimizes numerical error.

That this method is correct can be seen by realizing that the Gram-Schmidt orthogonalization is equivalent to calculating a QR decomposition of the Jacobian matrix, assuming that we start with orthogonal tangent vectors, equivalent to the columns of the identity matrix. The QR decomposition of $J$ is computed at each timestep:
$$J_j = Q_j R_j \qquad (2.16)$$
where $J_j$ is the Jacobian at the $j$th timestep, $Q_j$ is orthogonal and $R_j$ is right (upper) triangular. The actual Jacobian is the product $\prod_j J_j$. The diagonal elements $R_{ii}^{(j)}$ are equal to the growth of the $i$th tangent vector, so that the $i$th Lyapunov exponent is given by
$$\lambda_i = \lim_{N\to\infty}\frac{1}{N\,\Delta t}\sum_{j=0}^{N}\log_2 R_{ii}^{(j)}. \qquad (2.17)$$
The Gram-Schmidt method is fine for our application, which has a six-dimensional phase space, but for larger systems the more numerically stable Householder transformation would be a better way to implement the QR algorithm. Consistent with an information-theoretic picture, we use $\log_2$ in our calculations, leading to the odd units of bits $c$/fermi.

We applied this method to twenty representative particles in the model and averaged their exponents together. The sum of the averaged exponents provides a check on accuracy; the sum should be zero, as phase space volume is conserved in our system. In all cases the magnitude of the sum of the exponents was less than $10^{-7}$ bits $c$/fm. The results are given in Table 2.1, which lists the largest Lyapunov exponent for each case. Note that the positive exponents are all significantly larger than $10^{-7}$. The convergence of the exponents with time can be seen in Fig. 2.3.

         selfconsistent                                    static
         $r^2$                     $r^6$                   $r^2$                     $r^6$
$Q_2$    0                         $(2 \pm 1)\times 10^{-3}$     0                         0
$Q_3$    $(4 \pm 1)\times 10^{-5}$    $(1.5 \pm 0.5)\times 10^{-3}$   $(8 \pm 2)\times 10^{-5}$    $(11 \pm 3)\times 10^{-3}$

Table 2.1: Values of the largest positive Lyapunov exponents obtained in the full selfconsistent calculations and in the calculations with static external multipole potentials. Units are bits $c$/fm.

The plots of $\lambda_i$ vs. $t$ where $V_0 = \frac{1}{2}m\omega_0^2 r^2$ appear to have 'icicles' dangling from them. This is due to the almost periodic nature of the particle motion in the harmonic potential. The linearized motion will be almost harmonic, so that periodically the tangent vectors will approach zero and the logarithm of their magnitudes will approach negative infinity. Odd looking Lyapunov spectrum convergence plots are common in weakly chaotic systems. The irregular appearance of these 'icicles' is due to the sampling period used to generate the plot being incommensurate with the harmonic period, and to the logarithmic scale of the plot.

Figure 2.3: Convergence of the positive Lyapunov exponents with time for the single-particle trajectories of self-consistent nuclear Hamiltonians. All combinations of static potential and multipole interaction are shown.
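The scheme above is compact enough to state in code. The sketch below applies it to the three-dimensional Lorenz system of section 1.1.4 rather than to the six-dimensional nuclear model; that substitution, the Euler steps, and the parameter values are my choices, made so the result can be checked against a known quantity. For a dissipative system the exponents should sum to the phase-space contraction rate $\nabla\cdot\mathbf{f} = -(\sigma + 1 + b)$ rather than to zero:

```python
import numpy as np

def f(u, s=16.0, r=45.92, b=4.0):
    x, y, z = u
    return np.array([s*(y - x), -x*z + r*x - y, x*y - b*z])

def jac(u, s=16.0, r=45.92, b=4.0):
    x, y, z = u
    return np.array([[-s,     s,    0.0],
                     [r - z, -1.0,  -x ],
                     [y,      x,    -b ]])

dt, nsteps = 0.001, 500_000
u = np.array([1.0, 1.0, 1.0])
Q = np.eye(3)                       # orthonormal tangent vectors, cf. Eq. (2.14)
sums = np.zeros(3)
for _ in range(nsteps):
    u = u + dt*f(u)                 # trajectory step (Euler for brevity)
    Q = Q + dt*(jac(u) @ Q)         # linearized flow applied to the frame
    Q, R = np.linalg.qr(Q)          # re-orthonormalize, cf. Eq. (2.16)
    sums += np.log2(np.abs(np.diag(R)))
lam = sums / (nsteps * dt)          # Lyapunov spectrum, bits per unit time
print(lam, lam.sum()*np.log(2))     # the sum, in nats, is near -(s + 1 + b)
```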
2.9.3 Constant Multipole Moments

When the 'collective' multipole moment is constant, the model consists of single particles moving in a static potential. We investigated static multipole models for all cases of multipole moment and static potential.

The static multipole model is especially interesting when $V_0 = \frac{1}{2}m\omega_0^2 r^2$ and $l = 3$. In this case, the Hamiltonian has the form of the Hénon-Heiles Hamiltonian; specifically, particles moving in the $x$-$y$ plane are governed by the Hénon-Heiles equations. This Hamiltonian has been well studied as a case of chaos in a physical system, and is known to be weakly chaotic.

Figure 2.4: A comparison of the Lyapunov exponent convergence plots for the anharmonic ($r^6$) potential with a quadrupole interaction. The case with the static multipole moment (left) is non-chaotic (as the Lyapunov exponents are all converging to zero), while the self-consistent case (right) is chaotic.

2.9.4 Evidence for Chaos due to Energy Exchange

An interesting result comes from comparing the selfconsistent and the static cases for $(Q_2, r^6)$. The former has chaos while the latter does not (Fig. 2.4). In the former model the particles can exchange energy with the collective quadrupole moment, whereas in the latter they cannot, as the quadrupole moment is static. Therefore, we attribute the origin of this chaoticity to the exchange of energy between the single particles and the collective coordinate.

2.10 Single Particle Energy Distribution

The energy density of the particles was computed by binning the number of particles $N$ with energies within a small range $E$ to $E + dE$, then dividing by the average energy $E + dE/2$ for that bin. The energy densities computed in this way all have large errors at small energy, as the error $\delta N$ is proportional to the square root of $N$ (assuming a Poisson distribution of energies) and the $E^{-1}$ factor causes large errors at small energies.

Before the ramping which introduces the multipole oscillation in the nucleus, the energy density is equal to a Fermi distribution at zero temperature, with some error due to the randomness in the initialization procedure. After ramping, a finite temperature has been introduced into the nucleus. This temperature remains constant afterwards, in direct contradiction to the model of Blocki et al.

2.11 Interpretation

We show ordered collective motion resulting from underlying chaotic dynamics, something qualitatively new in our investigation. This agrees with the idea of the mean field (static or coherently oscillating) generated by averaging out random features of the motion of individual constituents. The smoothest component of single-particle dynamics survives after such averaging and gives rise to the self-consistent mean field. Therefore, chaos on a microscopic level does not necessarily lead to a catastrophic breakdown of the system on the macroscopic scale.

Chapter 3

The Constraint Operator Solution to Quantum Billiards

This chapter, and parts of the two that follow, are based on a paper that I wrote with the help of Wolfgang Bauer, which has been accepted by Physical Review E [52].
We introduce a new method to solve Schrödinger's equation for a free particle in an infinite well of arbitrary shape (the Helmholtz equation with Dirichlet boundary conditions), a problem of interest in the area of quantum chaos. We expand the wavefunction in a basis of products of sine functions, then use the constraint operator to confine the wavefunction to a region within the domain of the basis functions. In this manner, a quantum billiard problem of arbitrary shape can be solved. Several methods exist to solve problems of this sort, but as recent work reviewing these methods has shown, all have shortcomings. Our work represents a new direction in the solution of these problems. Our method is unique in that it provides a means of computing an eigenbasis. It is also interesting from a physical standpoint in that it can represent the Hamiltonian of a classically chaotic system in the basis of a classically regular system.

3.1 Introduction

A billiard system consists of a particle bouncing around in a rigid box of arbitrary shape. Billiard systems are useful in the study of chaos, as the chaoticity of the system is determined by the shape of the box. Circles and squares give rise to regular motion; in more complicated shapes, like stadia, both regular and chaotic motion are possible, depending on the initial conditions [53, 54].

Quantum billiard systems are widely used in the study of quantum chaos. Quantum chaotic systems can be characterized by statistics. The distribution of normalized energy level spacings of a quantum system is one such characterization; chaotic systems have Wigner distributions and regular systems have Poisson distributions [26, 55, 84, 33, 34]. The wavefunctions of quantum chaotic systems qualitatively resemble a random superposition of plane waves, though 'scars' in the quantum wavefunctions corresponding to classical periodic orbits can appear [57].

The decay of quantum billiard systems through small exit channels is of current interest. The chaoticity of the billiard system controls the decay of the system: regular billiard systems decay algebraically in time, while chaotic billiard systems decay exponentially in time [22]. Recent work [58, 59] shows an even richer variety of behaviors. The quantum chaotic billiard decay problem is as yet unsolved; existing methods of solving quantum billiards are unsuited for it.

Many methods exist for solving quantum billiard problems. The most widely used are the Boundary Integral Method [32, 60, 61, 62, 63, 64, 65], the Plane Wave Decomposition Method [57, 66, 67], and the Conformal Mapping Diagonalization Method [68, 69, 70, 71]. However, recent work reviewing the Boundary Integral Method [72] and the Plane Wave Decomposition Method [73] demonstrates that both have weaknesses.

The Boundary Integral Method solves a two-dimensional billiard problem by deriving an integral equation for the normal derivative of the wavefunction using Green's theorem [32, 60, 61, 63, 64, 65]. Discretization of the boundary integral results in a complex determinant nonlinear in the wavevector magnitude, the zeros of which correspond to solutions of the wavefunction equation. It is widely used, but has recently been shown to have problems when the box geometry is nonconvex [72].

The Plane Wave Decomposition Method assumes an expansion in plane waves with the same wavevector magnitude, then tries to force the wavefunction to be zero along the boundary of the box by proper selection of the plane wave components [57, 66, 67].
If it succeeds in making the wavefunction approximately zero on the boundary, then it has found an approximate eigenfunction of \nabla^2. The procedure iterates over wavevector magnitudes, recording the eigenvalues that it finds. It is widely used to find quantum billiard wavefunctions, but cannot be relied upon for accurate spectra, as some eigenvalues can be stepped over in the iteration process. The wavefunctions it finds are not necessarily orthogonal to very good accuracy, as shown in [73].

The Conformal Mapping Diagonalization Method elegantly solves a billiard problem by finding a conformal map from the shape of the box to the unit circle [68, 69, 70, 71]. The problem is solved by the mapping, but this method is limited to two-dimensional problems with boxes for which a conformal mapping to the unit disk can be found.

We present a novel method that solves the problem in a more 'quantum mechanical' way. We find many eigenfunctions simultaneously by diagonalizing a Hamiltonian matrix. This results in a (truncated) complete set of eigenfunctions that are necessarily orthonormal (within the limitations of the diagonalization algorithm), unlike the methods mentioned above. The availability of a complete basis provides a straightforward approach to time-dependent problems like quantum chaotic decay. We also connect this method to some existing methods.

Random matrices that are band diagonal are of interest in quantum chaos [74]. Band diagonal Hamiltonians are 'natural' in the sense that many systems have localized interactions and localized wavefunctions. Our method, introduced below, results in an approximately band diagonal Hamiltonian. A connection between the two may prove revealing.

3.2 The Constraint Operator

Schrödinger's equation for the quantum billiard is

    \nabla^2 \Psi(\vec{x}) + \lambda \Psi(\vec{x}) = 0,    (3.1)

where \lambda is an energy eigenvalue of the system in some convenient units. The boundary condition, which characterizes the quantum billiard problem, is that \Psi(\vec{x}) = 0 on the boundary surface \partial I of an arbitrarily shaped region I, the included region. We express the shape of I by starting with a larger region T where we can solve (3.1), then 'cutting away' the unwanted parts of T to make I. We do this by constraining the wavefunction to be zero in the excluded region E = T \setminus I. Fig. 3.1 shows the regions I and E for the stadium billiard. (Most of our examples are two-dimensional, but the method can be applied to three dimensions or higher.) In practice, we choose the region T so that it has a boundary with surfaces of constant coordinates in a coordinate system where \nabla^2 is separable.

Define the constraint operator C for \vec{x} in T, which multiplies functions on its right by the constraint function c(\vec{x}):

    c(\vec{x}) = 1 if \vec{x} is in I,
    c(\vec{x}) = 0 if \vec{x} is in E.    (3.2)

[Figure 3.1: The included and excluded regions shown for the desymmetrized stadium. The total region is the entire rectangle. The ratio of included to total regions \mu here is 0.8570.]

The constraint operator is the projector for functions over the larger, simpler region T that are zero over E (it 'constrains' functions to be zero in E). Functions in the range of C are zero in E. Functions in the nullspace of C are zero in the region I. C^2 = C, so that C is idempotent and has eigenvalues 0 and 1. Therefore, it is the projector of its range, and 1 - C is the projector of its nullspace. We define the included fraction \mu = \int_I d\vec{x} / \int_T d\vec{x} as the ratio of the included volume to the total volume; we will use it below. A short numerical sketch of the constraint function for the stadium of Fig. 3.1 follows.
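As an illustration, here is a minimal sketch of the constraint function and the included fraction for a quarter stadium. The geometry is an assumption chosen to reproduce the quoted ratio: a straight section of half-length a = 0.5 plus a unit quarter circle, inside the rectangle T = [0, a+1] x [0, 1].

```python
import numpy as np

# Constraint function c(x, y) for a quarter (desymmetrized) stadium:
# included region I = {x <= a} union {(x - a)^2 + y^2 <= 1}, inside the
# total rectangle T = [0, a + 1] x [0, 1].  The half-length a = 0.5 is an
# assumption; it reproduces the included fraction quoted in Fig. 3.1.
a = 0.5

def c(x, y):
    """1 inside the included region I, 0 in the excluded region E."""
    return np.where((x <= a) | ((x - a)**2 + y**2 <= 1.0), 1.0, 0.0)

# Included fraction mu = (integral over I) / (integral over T),
# by midpoint quadrature on a regular grid.
n = 2000
x = (np.arange(n) + 0.5) * (a + 1.0) / n
y = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(x, y)
print(c(X, Y).mean())   # ~0.8569 = (a + pi/4) / (a + 1)
```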
C is represented in the basis {\phi_i(\vec{x})} of functions over T as the matrix C with elements

    C_{ij} = \int_I d\vec{x}\, \phi_i(\vec{x}) \phi_j(\vec{x}) = (\phi_i, c\, \phi_j).    (3.3)

We will return to the matrix C later to discuss its properties and its computation.

The solutions to the problem we are interested in are the eigenvectors of \nabla^2 that are in the range of C. Intuitively, it seems that we can solve our problem by finding the eigenvectors of C \nabla^2 C. This is almost correct. To be completely correct, we will derive the solutions using a Green's function [75, 76, 77], then simplify the result using our knowledge of the constraint operator.

3.3 Green's Function Derivation

We need a Green's function G(\vec{x}, \vec{x}_0) that satisfies the equation

    (\nabla^2 + \lambda)\, G(\vec{x}, \vec{x}_0) = -4\pi \delta(\vec{x} - \vec{x}_0).    (3.4)

Multiplying (3.1) by G(\vec{x}, \vec{x}_0) and (3.4) by \Psi(\vec{x}), then subtracting the second result from the first, gives

    G(\vec{x}, \vec{x}_0) \nabla^2 \Psi(\vec{x}) - \Psi(\vec{x}) \nabla^2 G(\vec{x}, \vec{x}_0) = 4\pi \Psi(\vec{x}) \delta(\vec{x} - \vec{x}_0).    (3.5)

Integrating (3.5) over the included region I, we obtain an expression for the wavefunction \Psi(\vec{x}_0):

    \frac{1}{4\pi} \int_I d\vec{x} \left[ G(\vec{x}, \vec{x}_0) \nabla^2 \Psi(\vec{x}) - \Psi(\vec{x}) \nabla^2 G(\vec{x}, \vec{x}_0) \right]
    = \Psi(\vec{x}_0) if \vec{x}_0 is in I, and 0 if \vec{x}_0 is in E.    (3.6)

To enforce the boundary condition that \Psi = 0 on \partial I, we use Green's theorem to change the volume integral into a surface integral, then apply the boundary condition, resulting in

    \Psi(\vec{x}_0) = \frac{1}{4\pi} \oint_{\partial I} d\vec{A} \cdot \left[ G(\vec{x}, \vec{x}_0) \nabla \Psi(\vec{x}) \right]    (3.7)

for \vec{x}_0 in I. We then use Green's first identity to return to a volume integral,

    \Psi(\vec{x}_0) = \frac{1}{4\pi} \int_I d\vec{x} \left[ \nabla G(\vec{x}, \vec{x}_0) \cdot \nabla \Psi(\vec{x}) + G(\vec{x}, \vec{x}_0) \nabla^2 \Psi(\vec{x}) \right],    (3.8)

which is true for \vec{x}_0 in I.

We choose the basis {\phi_i(\vec{x})} of eigenfunctions of \nabla^2 over T. To express G(\vec{x}, \vec{x}_0) in this basis, we use (3.4) to determine the expansion coefficients, utilizing the eigenfunction expansion of \delta(\vec{x} - \vec{x}_0). This results in

    G(\vec{x}, \vec{x}_0) = 4\pi \sum_i \frac{\phi_i(\vec{x})\, \phi_i(\vec{x}_0)}{\lambda_i - \lambda}.    (3.9)

The wavefunction \Psi has the expansion

    \Psi(\vec{x}) = \sum_k \psi_k \phi_k(\vec{x}).    (3.10)

Now we put everything together. By substituting (3.9) and (3.10) into (3.8), we get the eigenvalue equation

    \sum_k \psi_k \phi_k(\vec{x}_0) = \sum_{j,k} \psi_k \frac{\phi_j(\vec{x}_0)}{\lambda_j - \lambda} \int_I d\vec{x} \left[ \nabla\phi_j(\vec{x}) \cdot \nabla\phi_k(\vec{x}) + \phi_j(\vec{x}) \nabla^2 \phi_k(\vec{x}) \right].    (3.11)

This can be simplified by converting it to matrix form. To effect this change, we first multiply both sides by \phi_i(\vec{x}_0) and integrate, using the orthonormality of the basis functions over the region T. Then

    \psi_i = \sum_j \left[ \int_I d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j - C_{ij} \lambda_j \right] \frac{1}{\lambda_i - \lambda}\, \psi_j
           = \sum_j \left[ \lambda_i \delta_{ij} - S_{ij} - C_{ij} \lambda_j \right] \frac{1}{\lambda_i - \lambda}\, \psi_j,    (3.12)

where

    S_{ij} = \int_E d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j.    (3.13)

Here we have used the fact that

    \int_I d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j = \int_T d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j - \int_E d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j = \lambda_i \delta_{ij} - \int_E d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j.    (3.14)

The matrix C arises naturally in this derivation; the right-hand side of (3.6) reflects the fact that \Psi is in the range of C. We now rearrange the equation into conventional matrix form,

    (C\Lambda + S - \lambda)\psi = 0,    (3.15)

where \Lambda is the diagonal matrix containing the eigenvalues of the basis functions over T. Solving this eigensystem gives us a solution to (3.1) in the basis of eigenfunctions over T. Notice that the matrix in (3.15) is not hermitian.

We now use our knowledge of C to make the eigensystem hermitian. The wavefunction expansion (3.10) was equated to the right-hand side of (3.11), which is zero outside of I. Our wavefunction expansion actually represents a function that is equal to \Psi in I and equal to zero in E. Therefore the solutions to (3.15) are zero in E and are in the range of C; that is, C\Psi(\vec{x}) = \Psi(\vec{x}), or C\psi = \psi. We can introduce this fact on the left-hand side of (3.8) to make the eigensystem hermitian. The resulting generalized eigensystem is

    (C\Lambda + \Lambda C - \Lambda + S - \lambda C)\psi = 0,    (3.16)

or equivalently,

    \left( (C - \tfrac{1}{2}\,1) \circ H + S - \lambda C \right) \psi = 0,    (3.17)

where H_{ij} = \lambda_i + \lambda_j and \circ represents the Hadamard or elementwise product defined by (A \circ B)_{ij} = A_{ij} B_{ij}.
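The equivalence of the Hadamard form (3.17) and the symmetrized form (3.16) rests on the matrix identity (C - \tfrac{1}{2}\,1) \circ H = C\Lambda + \Lambda C - \Lambda, which holds for any symmetric C. A minimal numerical check (illustrative only; the matrices here are random, not constraint matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
A = rng.standard_normal((m, m))
C = A @ A.T                        # any symmetric matrix will do
lam = rng.uniform(1.0, 10.0, m)
Lam = np.diag(lam)
H = lam[:, None] + lam[None, :]    # H_ij = lam_i + lam_j

lhs = (C - 0.5 * np.eye(m)) * H    # Hadamard product, as in (3.17)
rhs = C @ Lam + Lam @ C - Lam      # as in (3.16)
print(np.allclose(lhs, rhs))       # True
```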
Alternatively, we can modify the eigenvalue equation (3.15) by inserting C before \psi and multiplying by C from the left:

    C(C\Lambda + S - \lambda)C\psi = 0
    (C\Lambda C + CSC - \lambda C)\psi = 0
    (C\Lambda C + CSC - \lambda)\psi = 0.    (3.18)

Here the idempotence of C makes the eigenproblem hermitian. The problem now looks 'quantum mechanical', as it has been reduced to finding the eigenvalues of a hermitian matrix. We can express the problem as a perturbation of the original Hamiltonian \Lambda by using the complementary projector 1 - C, which is a 'small' matrix in some problems. We will not pursue this here, as we are not interested in perturbative solutions.

Now we investigate the details involved in solving (3.18). As C is a projector, it can be decomposed as C = PP^T, where P^T P = 1_r. P is an m x r matrix, where r is the rank of C. Then the eigensystem (3.18) is equivalent to

    (P^T \Lambda P + P^T S P - \lambda)\psi' = 0,    (3.19)

where \psi' = P^T \psi is an r-dimensional vector; there are r non-trivial solutions to (3.18). As the constraint matrix for a region that is the union of two non-overlapping regions must be the sum of the constraint matrices for the two regions, from (3.3), we expect the rank of a constraint matrix to be proportional to the area of the region onto which it projects. When the included region is the total region, the constraint matrix is the identity matrix, with rank m. Thus we expect the rank r of C to be given by the closest integer to \mu m, which was confirmed numerically.

Before we solve (3.19), we need to find P; in other words, we need to find an orthonormal basis for the range of C. This matrix is only approximately a projector, due not to error in its elements but to the fact that it represents C in a truncated basis; therefore its range is not well defined. We use the r eigenvectors of C with the largest eigenvalues to define the range of C. This step is justified in appendices B and C. The spectrum of C proves to be close to that expected for it, so that the uncertainty in the range of C is not large. In Fig. 3.2, we plot the eigenvalues of C, in decreasing order, vs. the fractional eigenvalue number, which is merely the eigenvalue number divided by the number of basis functions m. The spectrum is quite close to that of a projector with fractional rank equal to the included fraction \mu = \int_I d\vec{x} / \int_T d\vec{x}.

[Figure 3.2: The spectrum of C for the desymmetrized stadium, for m = 124 and m = 1004, where r/m = \mu = 0.8570. Axes: eigenvalue of C vs. fractional eigenvalue number i/m.]

3.4 Numerical Calculation of Matrix Elements

Unless the boundary \partial I lies along surfaces of constant coordinates, we will need to use numerical integration to find the elements of C and S. This is because although the basis functions are analytically calculable, the integral of a basis function over an arbitrary region is not. We can integrate over E or I, as C and 1 - C are trivially related. We have integrated over E, as that region is smaller, necessitating fewer grid points in the numerical integration. Thus we will find C and S by approximating the integral (3.3), taken over E, as a sum over all grid points in E of the Taylor expansion of the integrand.

To facilitate simple notation, we will assume that \vec{x} is two-dimensional with components x_1, x_2. Consider a cell surrounding a grid point \vec{x}_{grid} with dimensions h_1 and h_2. The integral of \phi_i(\vec{x}) \phi_j(\vec{x}) over the cell is
    \int_{-h_1/2}^{h_1/2} dx_1 \int_{-h_2/2}^{h_2/2} dx_2 \left[ \phi_i \phi_j \big|_{\vec{x}_{grid}} + x_1 \frac{\partial}{\partial x_1}(\phi_i \phi_j)\big|_{\vec{x}_{grid}} + x_2 \frac{\partial}{\partial x_2}(\phi_i \phi_j)\big|_{\vec{x}_{grid}} + \cdots \right].    (3.20)

As we are using a regularly spaced grid, the integrals are over intervals centered around zero, so odd powers of x_1 and x_2 integrate to zero, and (3.20) becomes

    h_1 h_2\, \phi_i(\vec{x}) \phi_j(\vec{x}) \big|_{\vec{x}_{grid}},    (3.21)

neglecting terms of order h_1^3 h_2 + h_1 h_2^3. Summing over all grid points,

    C^E_{ij} = \sum_{k=1}^{n} \phi_i(\vec{x}_k) \phi_j(\vec{x}_k)\, h_1^k h_2^k,    (3.22)

where \vec{x}_k, k = 1, 2, ..., n is a grid in the region E, and h_1^k and h_2^k are the dimensions of the cell surrounding the k-th grid point. If we define the m x n matrix F_{ij} = \phi_i(\vec{x}_j)(h_1^j h_2^j)^{1/2}, then C^E = FF^T. Notice that FF^T is manifestly positive semidefinite, as expected. The matrices F and P should not be confused; even though both are rank factorizations, they cannot be equal, as F is m x n and P is m x r, and n >> m >= r.

We observed that in some situations using grids with an odd number of grid points per side gave significantly better results than grids with an even number of grid points per side. We attribute this to the greater number of coincidences between zeroes of the basis functions and grid points in the latter case, which degrades computation of the matrix elements. The number of such coincidences can be lessened by using grids that begin at h/2, not 0. Our grids are evenly spaced, except at the boundary, where we put a point on the boundary and give it the appropriate weighting. The two-dimensional numerical integration of the matrix elements merits no more consideration, as the line integral computation of the matrix elements (discussed later) is superior.

We can find the elements of S with a similar approach. From (3.13),

    S_{ij} \approx \sum_{k=1}^{n} h_1^k h_2^k \sum_d \frac{\partial \phi_i}{\partial x_d} \frac{\partial \phi_j}{\partial x_d} \Big|_{\vec{x}_k}, \qquad S = \sum_d D^d (D^d)^T,    (3.23)

where D^d_{jk} = \frac{\partial \phi_j}{\partial x_d} \big|_{\vec{x}_k} (h_1^k h_2^k)^{1/2}. All of the matrices D^d (D^d)^T are positive semidefinite.

For a function \Psi(\vec{x}) represented by a vector \psi to be in the nullspace of FF^T, the vector must satisfy

    \sum_{i=1}^{m} \psi_i F_{ik} = (h_1^k h_2^k)^{1/2} \sum_{i=1}^{m} \psi_i \phi_i(\vec{x}_k) = 0    (3.24)

for k = 1, ..., n. Demanding that \Psi be in the range of C is thus equivalent to demanding that \Psi(\vec{x}_k) = 0 at the n grid points \vec{x}_k used in the numerical integration. Only m - r of the equations (3.24) are linearly independent, leaving rank r for the matrix C.

3.5 Computational Algorithm

Here we present a straightforward algorithm that solves (3.18); it uses a subroutine that finds all of the eigenvalues and their associated eigenvectors. The only complication in this procedure is that finding all the elements of an m x m matrix that is equal to the product of three matrices (as CSC is) is an order m^4 process, which should be avoided. As the matrices will be large, the eigensystem-solving routine should be one that uses only one array, such as Householder tridiagonalization followed by QR iteration. An uncomplicated algorithm that follows this advice and uses three storage arrays is as follows:

- In the first storage array:
  1. Compute the elements of C using numerical integration.
  2. Compute the eigenvalues and eigenvectors of C.
  3. Truncate the spectrum of C to find P such that C = PP^T and P^T P = 1.
- In the second storage array:
  1. Compute the elements of S using numerical integration.
  2. Compute the eigenvalues and eigenvectors of S.
  3. Truncate the spectrum of S to find D such that S = DD^T.
- Compute P^T D and write it into the third storage array.
- Compute P^T \Lambda P (which is order m^3) and write the result in the second storage array.
- Compute (P^T D)(P^T D)^T and add the result to P^T \Lambda P in the second array.
- Compute the eigenvalues of P^T \Lambda P + P^T S P, which resides in the second array.

The algorithm solves three m x m eigensystems and three times computes the product of m x m matrices.

3.6 Comparison to an Analytic Case

To evaluate the accuracy of our method, we found the eigenfunctions of \nabla^2 over a unit disk; we used a unit square for T and a unit disk for I. As the disk is separable, the eigenfunctions are known to be products of sine functions and Bessel functions, and the eigenvalues are given by the zeroes of the integer-order Bessel functions. For comparison, we found the first 300 eigenvalues by finding the zeroes of the Bessel functions with a Newton-Raphson method (a numerical sketch of this comparison is given below).

Even though the circle and the square are both separable, their symmetries are quite different. Nevertheless, the constraint operator solution works. With 868 basis functions, we obtained an rms fractional eigenvalue error of 2.0138 x 10^{-3} for the first 300 eigenvalues (Fig. 3.3).

[Figure 3.3: The convergence of the RMS fractional error in the energy eigenvalues as a function of the number of basis functions m used for the disk problem, on logarithmic axes. Here the number of grid points and the number of basis functions are equal.]

The errors of the eigenvalues depend on the eigenvalue number in a consistent way. To show this, we introduce the fractional eigenvalue number, which is the eigenvalue number i divided by the number of basis functions m. This enables us to put data for solutions with different m values on the same plot. Fig. 3.4 shows the fractional error as a function of the fractional eigenvalue number. The dependence is nearly exponential for the higher eigenvalues, while the lower eigenvalues have fractional errors bounded by a constant.

[Figure 3.4: The fractional eigenvalue error as a function of the fractional eigenvalue number i/r for the unit disk with m = 124 (boxes) and m = 316 (pluses).]

We assume that \Psi = 0 in E; we can easily find out how bad this assumption is. Define the volume fractional error \nu to be the integral

    \nu = \int_E d\vec{x}\, |\Psi|^2.    (3.25)

Here 0 <= \nu <= 1, with only values near 0 being acceptable. The average volume fractional error for the first 300 eigenvalues was 7.113 x 10^{-3}. The volume fractional error also has a consistent distribution (Fig. 3.5): it is roughly exponential in the fractional eigenvalue number for low eigenvalue numbers. We observed that the rms energy eigenvalue error converges to a finite value when n is increased but m is held fixed (Fig. 3.6). This is expected, as the matrix elements converge to their correct values with increasing n, and there will always be a finite error when m is finite.
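For orientation, the comparison of this section can be reproduced in miniature. The sketch below is illustrative only and rests on stated assumptions: a disk of radius 1/2 centred in the unit square (one reading of 'unit disk' inside the unit square), a small basis (m = 144), and the same first-order midpoint quadrature as section 3.4. It builds C and S, follows the algorithm of section 3.5, and compares the lowest eigenvalues with the exact disk values (j_{nu,k}/R)^2 taken from SciPy's Bessel-zero routine in place of a hand-written Newton-Raphson search.

```python
import numpy as np
from scipy.special import jn_zeros

# Sine basis over the unit square T: phi_pq(x,y) = 2 sin(p pi x) sin(q pi y),
# with -Laplacian eigenvalues lam = pi^2 (p^2 + q^2).
K = 12                                        # modes per direction, m = K^2
P2, Q2 = np.meshgrid(np.arange(1, K + 1), np.arange(1, K + 1), indexing="ij")
p, q = P2.ravel(), Q2.ravel()
lam = np.pi**2 * (p**2 + q**2)
m = K * K

# Midpoint grid restricted to the excluded region E (outside the disk).
n = 200
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs, indexing="ij")
mask = (X - 0.5)**2 + (Y - 0.5)**2 > 0.25     # assumed disk: radius 1/2
xe, ye = X[mask], Y[mask]
w = 1.0 / n                                   # sqrt of the cell area

sx, cx = np.sin(np.pi*np.outer(p, xe)), np.cos(np.pi*np.outer(p, xe))
sy, cy = np.sin(np.pi*np.outer(q, ye)), np.cos(np.pi*np.outer(q, ye))
F  = 2 * sx * sy * w                          # C_E = F F^T, eq. (3.22)
Dx = 2*np.pi * p[:, None] * cx * sy * w       # S = sum_d D^d (D^d)^T, (3.23)
Dy = 2*np.pi * q[:, None] * sx * cy * w

C = np.eye(m) - F @ F.T                       # C = 1 - C_E
S = Dx @ Dx.T + Dy @ Dy.T

# Section 3.5: span the range of C by its r eigenvectors nearest unity.
r = round(np.pi / 4 * m)                      # r ~ mu m, with mu = pi/4
vals, vecs = np.linalg.eigh(C)
P = vecs[:, -r:]                              # m x r
H = P.T @ (lam[:, None] * P) + P.T @ S @ P    # P^T Lambda P + P^T S P
est = np.sort(np.linalg.eigvalsh(H))

# Exact disk eigenvalues (j_{nu,k}/R)^2 with R = 1/2; levels with nu > 0
# are doubly degenerate (sin/cos azimuthal modes).
ref = np.sort([4 * z**2 for nu in range(8) for z in jn_zeros(nu, 8)
               for _ in range(2 if nu else 1)])
for i in range(6):                            # lowest levels: rough agreement
    print(f"{est[i]:9.2f}  {ref[i]:9.2f}")
```

With so small a basis the agreement is only rough; the quoted 10^{-3}-level accuracy requires the larger m values used in the text.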
[Figure 3.5: The volume fractional error as a function of the fractional eigenvalue number i/r for the unit disk with m = 124 (boxes) and m = 316 (pluses).]

We observed that the rms energy eigenvalue error converges to zero as m goes to infinity. The exponent of m was approximately -0.4 for the disk (Fig. 3.3).

The application of the constraint operator method to one-dimensional problems is superfluous. For comparison's sake, we solved the one-dimensional problem with a sine function basis. The rms fractional eigenvalue error decreased as m^{-1}. When the included region I is separable, the Constraint Operator Method can be decomposed into two one-dimensional problems (see appendix C, equations C.7 and C.8). Thus we expect that the rms fractional eigenvalue error in the two-dimensional case can, at best, decrease as m^{-1/2}. The observed exponent of m in the convergence of the eigenvalue error of the unit disk is consistent with this.

3.7 Comparison with the Plane Wave Decomposition Method

The Plane Wave Decomposition Method devised by Heller [66] uses a basis of plane waves to express the wavefunctions, then demands that the wavefunctions be zero at some points evenly spaced around the boundary surface. The eigenfunctions are found by an iterative process: an eigenvalue is assumed, the linear equations that set the wavefunctions equal to zero at the points are solved, and then the error in the boundary condition is evaluated. The process is repeated until the error in the boundary condition vanishes; the wavefunction is then an eigenfunction of \nabla^2 that satisfies the necessary boundary conditions.

Because of its iterative nature, the method cannot be relied upon to find every eigenvalue [73], even though the error in the individual eigenvalues can be made arbitrarily small by varying the step size. The Plane Wave Decomposition Method finds eigenfunctions, but each eigenfunction is found in a different basis, as the wavevector magnitude is not the same for each eigenvector. This renders impracticable the task of manipulating the set of eigenvectors.

[Figure 3.6: The convergence of the RMS fractional error in the energy eigenvalues as a function of the number of grid points n used in the numerical integration for the disk problem, using 400 basis functions.]

The Constraint Operator Method has limited eigenvalue accuracy, but it finds a (truncated) complete set of eigenfunctions. The chore of knowing all m^2 matrix elements prevents the eigenvalue accuracy from being improved by using a large m. This is the price that is paid for finding a complete basis. The relative merits of the Plane Wave Decomposition Method and the Constraint Operator Method suggest that the methods could be used in conjunction.
The Plane Wave Decomposition Method can be used to improve the eigenvalue accuracy of the Constraint Operator Method, using the eigenvalues found by the latter as starting points. This would alleviate the Plane Wave Decomposition Method's problem of stepping over eigenvalues, and reduce the time spent searching for eigenvalues. The net result would be a complete set of eigenfunctions with accurate eigenvalues.

To check the method in a non-integrable case, we also solved the desymmetrized stadium problem, where the included region is as shown in Fig. 3.1. No analytical solution is possible, so we compared our eigenvalues to those produced by the Plane Wave Decomposition Method for the same problem. With 378 basis functions, we obtained an rms fractional energy eigenvalue deviation of 0.002 between the two methods for the first 100 eigenvalues. The first four wavefunctions are plotted in Fig. 3.7, which shows the wavefunctions over the total region. We checked the distribution of normalized eigenvalue spacings in both of our test cases. The disk spacings closely follow a Poisson distribution, and the stadium spacings closely follow a Wigner distribution, as expected (Fig. 3.8). We regard this as evidence that the constraint operator solution preserves the essential details of a system.

[Figure 3.7: \Psi(\vec{x}) for the first four eigenfunctions over the desymmetrized stadium (panels: first, second, third, and fourth wavefunction of the quarter stadium). Something like the Gibbs phenomenon can be seen, with the maximum error occurring near, but not on, the boundary \partial I.]

[Figure 3.8: The distribution of normalized energy spacings P(s) for the desymmetrized stadium problem (eigenvalues 100-200) and the Wigner distribution.]

3.8 Connection with the Finite Difference Method

The constraint operator solution can be connected to the Finite Difference Method. Consider the change of basis given by F^T \psi = \tilde{\psi}. From (3.10), the function

    \Psi(\vec{x}) = \sum_{i=1}^{m} \psi_i \phi_i(\vec{x}) = \sum_{k=1}^{n} \tilde{\psi}_k \sum_{i=1}^{m} \phi_i(\vec{x}_k) \phi_i(\vec{x}) (h_1 h_2)^{1/2} = \sum_{k=1}^{n} \tilde{\psi}_k\, \delta^m(\vec{x} - \vec{x}_k),    (3.26)

where \delta^m(\vec{x} - \vec{x}_k) is the unit-normalized delta function expanded in a size-m basis. This corresponds to changing to a basis of n delta functions, which is qualitatively similar to the finite difference method with an n x n array. n must be considerably larger than m in order for the numerical integration of the matrix elements to be accurate, and we have not neglected terms in representing \nabla^2, demonstrating the relative merit of the constraint operator method.

3.9 Other Operators, Bases, and Boundary Conditions

Although we used a basis such that \Lambda was diagonal, this is not necessary. It may be possible to get better results by using other bases; for example, wavelets are more 'localized' than plane waves and may better represent the constraint operator. Below we derive the correct eigensystem in a general basis. For complete generality, we assume that there is a quantum mechanical potential term V(\vec{x}). Now we need a Green's function G(\vec{x}, \vec{x}_0) that satisfies the equation

    (\nabla^2 + V(\vec{x}) + \lambda)\, G(\vec{x}, \vec{x}_0) = -4\pi \delta(\vec{x} - \vec{x}_0).    (3.27)

This leaves (3.5) through (3.7) unchanged.
As our basis does not diagonalize \nabla^2, this changes our expression for G(\vec{x}, \vec{x}_0) to

    G(\vec{x}, \vec{x}_0) = 4\pi \sum_{ij} \left( [T - V - \lambda]^{-1} \right)_{ij} \phi_i(\vec{x}) \phi_j(\vec{x}_0).    (3.28)

The matrices T and V are given by

    T_{ij} = -\int_T d\vec{x}\, \phi_i(\vec{x}) \nabla^2 \phi_j(\vec{x}),
    V_{ij} = \int_T d\vec{x}\, \phi_i(\vec{x}) V(\vec{x}) \phi_j(\vec{x}).    (3.29)

The expansion for \Psi is still

    \Psi(\vec{x}) = \sum_k \psi_k \phi_k(\vec{x}),    (3.30)

but here the basis functions are not eigenfunctions of \nabla^2. Substituting the expansions into (3.8),

    \sum_k \psi_k \phi_k(\vec{x}_0) = \sum_{j,l} \left( [T - V - \lambda]^{-1} \right)_{lj} \phi_j(\vec{x}_0) \oint_{\partial I} d\vec{A} \cdot \left[ \phi_l \nabla \Psi \right].    (3.31)

To convert this into matrix form, we multiply both sides by \phi_i(\vec{x}_0) and integrate, using the orthonormality of the basis functions, giving

    \psi_i = \sum_{j,l} \left( [T - V - \lambda]^{-1} \right)_{ij} A_{jl} \psi_l,
    (T - V - A - \lambda)\psi = 0,    (3.32)

where A_{ij} = \oint_{\partial I} d\vec{A} \cdot [\phi_i \nabla \phi_j]. A can be expressed in terms of C as

    A_{ij} = \oint_{\partial I} d\vec{A} \cdot [\phi_i \nabla \phi_j]
           = \int_I d\vec{x} \left[ \phi_i \nabla^2 \phi_j + \nabla\phi_i \cdot \nabla\phi_j \right]
           = \sum_k C_{ik}(-T_{kj}) + (T - S)_{ij},
    A = (1 - C)T - S,    (3.33)

and the final form of the eigenvalue equation is

    (C(T - V + S)C - \lambda)\psi = 0.    (3.34)

In the case of Neumann boundary conditions, \hat{n} \cdot \nabla\Psi(\vec{x}) = 0 on the boundary surface, and (3.7) is replaced by

    \Psi(\vec{x}_0) = -\frac{1}{4\pi} \oint_{\partial I} d\vec{A} \cdot \left[ \Psi(\vec{x}) \nabla G(\vec{x}, \vec{x}_0) \right],    (3.35)

changing (3.32) to

    \psi_i = -\sum_{j,l} \left( [T - V - \lambda]^{-1} \right)_{ij} A^T_{jl} \psi_l,
    (T - V + A^T - \lambda)\psi = 0.    (3.36)

The final form of the eigenvalue equation is then

    (C(T - V - S)C - \lambda)\psi = 0.    (3.37)

3.10 Wavelet Bases

Wavelets are sets of orthogonal functions that are simultaneously localized in both frequency space and coordinate space [79]. This suggests that they would be a better basis to use with the constraint operator method than the eigenfunctions of \nabla^2, which represent \nabla^2 perfectly but represent the constraint operator poorly. In fact, matrices which become smooth away from the diagonal can be shown to be well represented in a wavelet basis [48] (the constraint matrix C has this property, as will be shown in the next chapter). A discrete wavelet transform (which corresponds to changing to a wavelet basis) applied to a matrix of this type results in a sparse matrix. The \nabla^2 operator is also well represented by wavelets. The sparsity of the matrices resulting from this transform would greatly improve numerical efficiency and the accuracy of the results. Of course, it would be preferable not to use a discrete wavelet transform, but rather to find the matrix elements directly in a wavelet basis. This is certainly a promising avenue for future research.

3.11 Approximating a Projector

In (3.19) we chose the r eigenfunctions of C with the eigenvalues closest to unity to span the range of C. This is a sensible thing to do, but we would like a firmer justification for this step. Here we show that this step is equivalent to replacing C with the 'closest' rank-r projector C', namely the one that minimizes the Euclidean matrix norm

    \|C - C'\|_2.    (3.38)

To show this, we write C and C' in terms of their unitary decompositions:

    C = U^\dagger \Gamma U, \qquad C' = V^\dagger \Gamma' V,    (3.39)

where the diagonal matrix \Gamma contains the eigenvalues of C, and \Gamma'_{ii} = 1 if i <= r and is zero otherwise. \Gamma, \Gamma', and U are known to us; to solve the problem we want to find a unitary matrix V that minimizes

    \|C - C'\|_2^2 = \|\Gamma - U V^\dagger \Gamma' V U^\dagger\|_2^2
                   = \mathrm{Tr}\!\left[ (\Gamma - U V^\dagger \Gamma' V U^\dagger)(\Gamma - U V^\dagger \Gamma' V U^\dagger)^\dagger \right]
                   = \mathrm{Tr}\!\left[ \Gamma^2 + (\Gamma')^2 - 2\, \Gamma' V U^\dagger \Gamma U V^\dagger \right],    (3.40)

which is equivalent to maximizing

    \mathrm{Tr}\!\left[ \Gamma' V U^\dagger \Gamma U V^\dagger \right] = \sum_{i=1}^{r} \sum_{k} (V U^\dagger)_{ik}\, \Gamma_{kk}\, (U V^\dagger)_{ki} = \sum_{i=1}^{r} \sum_{k} |(V U^\dagger)_{ik}|^2\, \Gamma_{kk}.    (3.41)

Define the m x m matrix M by M_{ij} = |(V U^\dagger)_{ij}|^2, and note that it is orthostochastic (that is, \sum_i M_{ij} = \sum_j M_{ij} = 1). Define the vector \gamma by \gamma_i = \Gamma_{ii}. Then we want to maximize the sum
    \sum_{i=1}^{r} (M\gamma)_i.    (3.42)

As M is orthostochastic, M\gamma is majorized by \gamma: the sum of the r largest elements of M\gamma is less than or equal to the sum of the r largest elements of \gamma, with equality holding for M = 1. We minimize the norm by taking M = 1, which means that U = V. Thus C and C' are diagonal in the same basis, and replacing \Gamma with \Gamma' is equivalent to replacing C with C'.

Chapter 4

The Constraint Function Expansion

We are currently pursuing the approach of expanding the constraint function (3.2) in the eigenfunctions of \nabla^2. The matrix elements of C and S are completely determined by the coefficients of this expansion. Fewer numerical integrations are required, and Green's theorem can be used to express the expansion coefficients as integrals over the boundary surface \partial I. We expect better results from this approach, as the problem of coincidental zeroes and grid points that can arise with the area integration will not be present when surface integrals are used.

This approach connects the constraint operator with the theory of Fourier expansions. Some of the properties of C mentioned above can be proven using this approach. The band diagonality of C and S is made manifest, as a truncation of the constraint function expansion to m functions results in matrices with bandwidth 2m.

The Constraint Operator Method will work with arbitrary bases; a basis other than one that diagonalizes \nabla^2 may result in a higher computational load but greater accuracy. In particular, we speculate that wavelets might do a better job of representing the constraint operator than the Fourier basis does.

It is possible to expand the constraint function (3.2) in terms of the Neumann functions that also satisfy the Helmholtz equation over T. This expansion makes the computation of the constraint matrix C less numerically intensive, and also facilitates proving some elementary facts about that matrix. However, it is more complicated to implement, especially in that it requires the introduction of Neumann functions. Denoting the Neumann functions over T as v_i, we have 'co-orthogonal' bases, where

    \nabla^2 v_i - \lambda^v_i v_i = 0,
    \nabla^2 \phi_i - \lambda_i \phi_i = 0,
    \int_T d\vec{x}\, v_i v_j = \delta_{ij},
    \int_T d\vec{x}\, \phi_i \phi_j = \delta_{ij},
    \int_T d\vec{x}\, \phi_i v_j = M_{ij},    (4.1)

and M is a hermitian matrix that is non-diagonal, as the Dirichlet and Neumann functions are not mutually orthogonal. In practice, a sensible choice of bases can make the matrix M easily calculable.

Expanding the constraint function as c(\vec{x}) = \sum_j v_j(\vec{x}) c_j, we can find the expansion coefficients c_j in terms of integrals over the surface \partial I:

    c_j = -\frac{1}{\lambda^v_j} \oint_{\partial I / \partial T} d\vec{A}\, \hat{n} \cdot \nabla v_j,    (4.2)

where \hat{n} is the outward unit vector normal to the surface element d\vec{A}. The restriction of the integral comes from the fact that \hat{n} \cdot \nabla v_j = 0 on the boundary \partial T, which is satisfied for all Neumann functions. It is an interesting and useful result that the integral need be taken only over the segment of the boundary that does not coincide with the boundary of the total region.

Here it must be assumed that \lambda^v_j != 0, which is true for all but the constant function (which is the first Neumann function). Assuming that the Neumann functions are indexed in order of increasing \lambda^v_i, the expansion coefficient of the constant function is

    c_0 = \int_T d\vec{x}\, c(\vec{x}) = \int_I d\vec{x},    (4.3)

which is just the volume of the included region.

We can now solve for the constraint matrix C using this expansion:

    C_{ij} = \int_T d\vec{x}\, \phi_i\, c(\vec{x})\, \phi_j = \int_T d\vec{x}\, \phi_i \phi_j \sum_k v_k(\vec{x}) c_k = \sum_k c_k \int_T d\vec{x}\, \phi_i \phi_j v_k,    (4.4)

and the 'surface term' matrix S is given by

    S_{ij} = \int_T d\vec{x}\, c(\vec{x})\, \nabla\phi_i \cdot \nabla\phi_j,
    S_{ij} = \sum_k c_k \int_T d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j\, v_k.    (4.5)
The terms \int_T d\vec{x}\, \phi_i \phi_j v_k and \int_T d\vec{x}\, \nabla\phi_i \cdot \nabla\phi_j\, v_k can be simplified somewhat using relations between the Dirichlet and Neumann functions [78]. We will do this later for the particular case of sine and cosine functions.

4.1 A One-dimensional Example

Significant insight can be gained by considering a one-dimensional problem. The constraint operator solution is an unnecessary complication in this case, but the results we derive here can be related to the higher-dimensional cases, where similar but more complicated relations hold.

Consider the one-dimensional problem of finding the eigenfunctions of d^2/dx^2 on the interval [0, L] that are zero on the boundaries. We know the solutions to be the functions \phi_i = (2/L)^{1/2} \sin(i\pi x / L), with eigenvalues -(\pi/L)^2 i^2. Expanding the constraint function,

    c(x) = \sum_{i=0}^{\infty} c_i \cos\frac{i\pi x}{L},    (4.6)

where

    c_i = 1 - p for i = 0, and c_i = \frac{2}{L} \int_I dx\, \cos\frac{i\pi x}{L} for i != 0.    (4.7)

Here p is the excluded fraction \frac{1}{L} \int_E dx. From Bessel's equality, we know that

    c_0^2 + \frac{1}{2} \sum_{i=1}^{\infty} c_i^2 = 1 - p.    (4.8)

We can obtain a bound on the magnitude of the elements c_i by integrating (4.7) by parts, giving |c_i| <= 2n/(\pi i) for i != 0, where n is the number of points on the boundary of I.

The matrix elements of C in terms of the c vector are

    C_{ij} = c_0 - \frac{1}{2} c_{2i} for i = j, and C_{ij} = \frac{1}{2}\left( c_{|i-j|} - c_{i+j} \right) for i != j.    (4.9)

S can be similarly expressed as

    S_{ij} = \frac{i^2 \pi^2}{L^2} \left( c_0 + \frac{1}{2} c_{2i} \right) for i = j, and S_{ij} = \frac{ij \pi^2}{2 L^2} \left( c_{|i-j|} + c_{i+j} \right) for i != j.    (4.10)

If the constraint function c(x) is a finite sum \sum_{i=0}^{b} c_i \cos(i\pi x / L), then the matrices C and S are band diagonal with bandwidth b. If c(x) is well represented by a finite sum, then we expect C and S to be well represented by band-diagonal matrices. C can be expressed as the sum of a band-diagonal matrix C_b with bandwidth b and a residual \delta C = C - C_b. The Wielandt-Hoffman theorem [80] can now be used to bound the rms error of the eigenvalues of C_b:

    \left[ \frac{1}{m} \sum_{i=1}^{m} \left[ \lambda_i(C) - \lambda_i(C_b) \right]^2 \right]^{1/2} \le \frac{1}{\sqrt{m}} \|\delta C\|_2,
    \lim_{b \to \infty} \left[ \frac{1}{m} \sum_{i=1}^{m} \left[ \lambda_i(C) - \lambda_i(C_b) \right]^2 \right]^{1/2} \le \lim_{b \to \infty} \frac{1}{\sqrt{m}} \|\delta C\|_2 = 0.

The first term in (4.23) contributes along the diagonal. The last term in (4.23) provides the trailing edge of the band.

4.4 Finding the Expansion Coefficients

The constraint function's expansion coefficients can be obtained by numerically integrating the line integral in (4.2). For \lambda^v_{s,t} != 0,

    c_{s,t} = -\frac{1}{\lambda^v_{s,t}} \oint_{\partial I / \partial T} dl\, \hat{n} \cdot \nabla v_{s,t}
            = \frac{1}{\lambda^v_{s,t}} \oint_{\partial I / \partial T} dl \left[ n_x\, s\pi \sin(s\pi x) \cos(t\pi y) + n_y\, t\pi \cos(s\pi x) \sin(t\pi y) \right].    (4.24)

The basis functions can be rapidly oscillating, so care is needed in the numerical integration.

4.5 Questions About the Constraint Operator

The constraint operator determines whether or not the quantum billiard problem that it describes is chaotic, as it represents the geometry of the problem. Thus it is natural to ask: what is the difference between a constraint operator for a chaotic problem and one for a regular problem? This is an interesting open question.

Chapter 5

Matrix Structure and Eigenvalue Error of C

5.1 The Eigenvalue Spectrum

In practice, the eigenvalue spectrum of C is observed to closely follow the function

    \gamma(i') = \left[ \exp\!\left( \frac{i' - \mu}{T} \right) + 1 \right]^{-a},    (5.1)

where i' = i/m is the 'fractional eigenvalue number' and \mu = r/m (Fig. 3.2). This function closely approximates the unit step function, which is the eigenvalue distribution expected for a projector. When a = 1, \gamma is the Fermi function; for 0 < a < 1, \gamma is greater than the Fermi function, and when a > 1, \gamma is less than the Fermi function; all cases are possible.
In Fig. 3.2 we see that \gamma(\mu) is approximately the same for m = 124 and m = 1004, as expected, though the slope of \gamma at i' = \mu is larger for larger m.

5.1.1 The 'Temperature' of C

The 'temperature' T can be found, by analogy to the Fermi function, from the slope d\gamma/di' of (5.1) at i' = \mu:

    T = -\frac{a\, 2^{-a-1}}{\left. d\gamma/di' \right|_{i'=\mu}} = \frac{\gamma(\mu) \log_2 \gamma(\mu)}{2 \left. d\gamma/di' \right|_{i'=\mu}}.    (5.2)

We observed that T is inversely proportional to m, with a proportionality constant which can depend on the region I.

5.2 Band Diagonality

An essential fact about this method is that the matrices are approximately band diagonal in the basis of the eigenfunctions of \nabla^2 arranged with their eigenvalues in increasing order. More precisely, they are well approximated by a matrix with all elements zero when |i - j| > b, where b is the bandwidth. This is important, as we need to represent infinite matrices with finite ones, and would like to understand exactly what we can safely neglect.

5.2.1 The Wielandt-Hoffman Bound on Eigenvalue Error

C can be expressed as the sum of a band-diagonal matrix C_b with bandwidth b and a residual \delta C = C - C_b. The Wielandt-Hoffman theorem [80] can now be used to bound the rms eigenvalue error of C_b:

    \left[ \frac{1}{m} \sum_{i=1}^{m} \left[ \lambda_i(C) - \lambda_i(C_b) \right]^2 \right]^{1/2} \le m^{-1/2} \|\delta C\|_2.    (5.3)

Interestingly, m^{-1/2} \|\delta C\|_2 can be computed due to the idempotence of C:

    C_{ii} = \sum_j C_{ij}^2 = \sum_{|i-j| \le b} C_{ij}^2 + \sum_{|i-j| > b} C_{ij}^2,
    \|\delta C\|_2^2 = \sum_i \Big[ C_{ii} - \sum_{|i-j| \le b} C_{ij}^2 \Big].    (5.4)

Here we have neglected the fact that m is finite, but this analysis works for large m. It is easy to compute m^{-1/2} \|\delta C\|_2, and easy to update it if m is increased. A program can use this to determine the minimum value of m that achieves a given maximum allowed rms error in the spectrum of C.

5.2.2 Measuring Band Diagonality

To investigate the rate of convergence of m^{-1/2} \|\delta C\|_2 to zero as b increases, we need a measure of the size of the k-th off-diagonal. To account for the fact that different off-diagonals have different numbers of elements, we define \xi_k to be the rms value of the elements of the k-th off-diagonal:

    \xi_k = \left[ \frac{1}{N_k} \sum_{|i-j|=k} C_{ij}^2 \right]^{1/2},    (5.5)

where N_k is the number of elements in the k-th off-diagonal. For large m, the bound on the rms eigenvalue error of C_b is

    \frac{1}{m} \|\delta C\|_2^2 = 2 \sum_{k > b} \xi_k^2.    (5.6)

We observed that \xi_b \sim b^{-1} (Fig. 5.1), implying that the bound on the rms eigenvalue error of C_b is proportional to b^{-1/2}.

[Figure 5.1: The rms value \xi_b of the b-th off-diagonal elements of C as a function of b, for the disk (upper curve) and stadium (lower curve), on logarithmic axes, demonstrating the band-diagonal nature of C.]

5.2.3 Banded Random Matrices

There is current interest in banded random matrices, as these matrices correspond to physical systems with 'nearby' states that are strongly interacting [81, 82, 83]. The study of banded random matrices begs the question 'banded in what basis?' Every Hamiltonian matrix can be transformed into a banded matrix by a unitary transform (for example, an incomplete Jacobi rotation diagonalization). The idea is that the Hamiltonians of interest are banded in 'natural' bases, like the plane wave basis. Regardless of the interpretation, many naturally occurring Hamiltonians are like banded random matrices [84].

The constraint operator approach results in a banded Hamiltonian matrix for a quantum billiard problem. The matrix elements decay as a power law b^{-3/2} away from the diagonal, and increase as a power law along the diagonal.
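To make the banded structure concrete, here is a minimal sketch that builds a small constraint matrix by quadrature (under the same illustrative assumptions as the earlier disk example: a radius-1/2 disk in the unit square and a K x K sine basis), orders the basis by increasing eigenvalue as in the text, and prints the off-diagonal profile \xi_k of (5.5):

```python
import numpy as np

# rms off-diagonal profile xi_k (eq. 5.5) of a small disk constraint matrix.
K, n = 10, 300
P2, Q2 = np.meshgrid(np.arange(1, K + 1), np.arange(1, K + 1), indexing="ij")
p, q = P2.ravel(), Q2.ravel()

xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs, indexing="ij")
mask = (X - 0.5)**2 + (Y - 0.5)**2 > 0.25          # excluded region E
F = 2 * np.sin(np.pi*np.outer(p, X[mask])) \
      * np.sin(np.pi*np.outer(q, Y[mask])) / n     # F F^T = C_E
C = np.eye(K * K) - F @ F.T

order = np.argsort(p**2 + q**2)                    # eigenvalues increasing
C = C[np.ix_(order, order)]

for k in (1, 2, 4, 8, 16, 32, 64):
    xi = np.sqrt(np.mean(np.diagonal(C, offset=k)**2))
    print(k, xi)                                   # expect roughly xi ~ 1/k
```

The rough 1/k fall-off mirrors the behaviour reported in Fig. 5.1, though at this small m the trend is noisy.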
Quantum billiard Hamiltonians derived with the constraint operator method can themselves form an ensemble, for example, the ensemble of all Hamiltonians with the same total region T but different included regions I, with the included fraction \mu fixed.

5.3 Separability

Whenever the wavefunction is separable, the matrices C and S must be separable as well, reducing the two-dimensional problem to two one-dimensional problems. When (3.1) can be solved by separation of variables, the vector \psi will be decomposable into the Kronecker product \psi = \xi \otimes \eta. As there are separable eigenvectors \psi = \xi \otimes \eta, there are separable unitary matrices that diagonalize C and S. Therefore C can be written as the Kronecker product C = C^x \otimes C^y, and S can be written as the Kronecker sum S = S^x \otimes 1 + 1 \otimes S^y. As we choose the total region T to correspond to a separable case, we can always write T as the Kronecker sum of two matrices: T = T^x \otimes 1 + 1 \otimes T^y. Considering all of these separabilities, the matrix in (3.18) becomes

    C^x \otimes C^y \left( T^x \otimes 1 + 1 \otimes T^y + S^x \otimes 1 + 1 \otimes S^y - \lambda\, 1 \otimes 1 \right) C^x \otimes C^y
    = \left( C^x (T^x + S^x - \lambda^x) C^x \right) \otimes C^y + C^x \otimes \left( C^y (T^y + S^y - \lambda^y) C^y \right),    (5.7)

where we have written \lambda = \lambda^x \otimes 1 + 1 \otimes \lambda^y and made repeated use of the mixed-product identity. The solutions are given by the Kronecker product of the one-dimensional solutions \xi and \eta, where

    \left( C^x (T^x + S^x - \lambda^x) C^x \right) \xi = 0,
    \left( C^y (T^y + S^y - \lambda^y) C^y \right) \eta = 0.    (5.8)

5.3.1 Separability and Chaoticity

There is a well-recognized correspondence between non-chaoticity and separability; this correspondence extends to the matrices C and S.

5.4 Error in Eigenvalue Estimates of Infinite Rank Operators in Finite Dimensional Bases

5.4.1 Truncation and Diagonalization

An infinite or extremely large matrix can often be well approximated by a finite or smaller-dimensional matrix when the matrix to be approximated is nearly block diagonal or band diagonal. Suppose we have an m x m hermitian matrix H that we would like to approximate with an n x n matrix A. The matrix H will have the conformally partitioned form

    H = \begin{pmatrix} A & B \\ B^\dagger & C \end{pmatrix},    (5.9)

where A is n x n, B is n x (m - n), and C is (m - n) x (m - n). We can diagonalize our approximation A with a unitary matrix U, as it is hermitian as well:

    U^\dagger A U = \Lambda.    (5.10)

The entries of the diagonal matrix \Lambda are our estimates of the eigenvalues of H; assume without loss of generality that they are arranged in increasing order. Define the unitary matrix Q as

    Q = \begin{pmatrix} U & 0 \\ 0 & 1 \end{pmatrix},    (5.11)

partitioned in the same manner as H. The matrix H' = Q^\dagger H Q is unitarily equivalent to H, and therefore has the same eigenvalues. Writing out H' in partitioned form,

    H' = \begin{pmatrix} \Lambda & U^\dagger B \\ B^\dagger U & C \end{pmatrix}.    (5.12)

We can find a useful upper bound on the difference between a diagonal element of a matrix and the nearest eigenvalue using the Gersgorin disk theorem, or an adaptation of it.

5.4.2 The Gersgorin Theorem

Let \lambda be an eigenvalue of H, with the associated eigenvector x. Then

    \sum_j H_{ij} x_j = \lambda x_i    (5.13)

for i = 1, ..., m. Subtract the diagonal elements from both sides:

    \sum_{j \ne i} H_{ij} x_j = (\lambda - H_{ii}) x_i.    (5.14)

Square both sides and use the triangle inequality:

    \sum_{j \ne i} |H_{ij} x_j|^2 \ge |\lambda - H_{ii}|^2 |x_i|^2    (5.15)

for all i. Now choose i such that |x_i|^2 \ge |x_j|^2 for all j. Using the triangle inequality again,

    |x_i|^2 \sum_{j \ne i} |H_{ij}|^2 \ge |\lambda - H_{ii}|^2 |x_i|^2,    (5.16)

or

    \sum_{j \ne i} |H_{ij}|^2 \ge |\lambda - H_{ii}|^2.    (5.17)

Thus |\lambda - H_{ii}| \le r_i, where r_i = \left( \sum_{j \ne i} |H_{ij}|^2 \right)^{1/2}. In order to be able to associate the i-th largest eigenvalue \lambda_i with the i-th largest eigenvalue estimate \Lambda_{ii}, we must ensure that the disk |\lambda_i - \Lambda_{ii}| \le r_i does not overlap with any other disks.
This requires that

    \Lambda_{i+1,i+1} - \Lambda_{ii} > r_i + r_{i+1}    (5.18)

and

    \Lambda_{ii} - \Lambda_{i-1,i-1} > r_i + r_{i-1}.    (5.19)

When these requirements are met, the error bound for the i-th eigenvalue holds. This provides a criterion for the size of the matrix A: as n increases, all r_i decrease (or at least do not increase). Thus n can be chosen so that conditions (5.18) and (5.19) hold.

5.4.3 A Bound on the Eigenvalue Error

Apply equation (5.17) to H'; then

    |\lambda_i - \Lambda_{ii}| \le r_i,    (5.20)

and r_i is the bound of the error in our estimate of the i-th eigenvalue, where i = 1, ..., n. The relevant row sum of H' is now the row sum of U^\dagger B. Writing out the matrix product, then using the triangle inequality once again,

    r_i^2 = \sum_j |H'_{ij}|^2 = \sum_j |(U^\dagger B)_{ij}|^2 = \sum_j \Big| \sum_k U^\dagger_{ik} B_{kj} \Big|^2
          \le \sum_k |U_{ki}|^2 \sum_j |B_{kj}|^2    (5.21)
          = \sum_k M_{ik}\, \sigma_k,    (5.22)

where \sigma_k = \sum_j |B_{kj}|^2 is in some sense the error associated with the k-th row of A, and the orthostochastic matrix M is defined by M_{ij} = |U_{ij}|^2. To benefit from this analysis, we need a bound on \sigma_k, which depends of course on the individual matrix considered. As the equation

    r_i^2 = \sum_k M_{ik} \sigma_k    (5.23)

represents a convex linear combination of the \sigma's, it follows that r_i^2 \le \max_k \sigma_k, for i = 1, ..., n.

5.4.4 Projectors and Matrix Sections

A matrix P representing a projection operator necessarily satisfies the equation P^2 = P. When P is n-dimensional, yet is representing a rank-m operator with m > n, it cannot satisfy this equation exactly. We can, however, derive an expression for the row errors \sigma_k. Consider the diagonal element P_{ii}, which is

    P_{ii} = (P^2)_{ii} = \sum_{k=1}^{m} P_{ik} P_{ki} = \sum_{k=1}^{m} |P_{ik}|^2 = \sum_{k=1}^{n} |P_{ik}|^2 + \sigma_i,    (5.24)

using the fact that P is hermitian. The row errors are \sigma_i = P_{ii} - \sum_{k=1}^{n} |P_{ik}|^2.

The matrix section H' of H is PHP, where P is a projector. We can make a similar analysis of the error in this case. Assume that H is diagonal, then consider a diagonal element of H':

    H'_{ii} = \sum_{k=1}^{n} |P_{ik}|^2 H_{kk} + \sum_{k=n+1}^{m} |P_{ik}|^2 H_{kk} = \sum_{k=1}^{n} |P_{ik}|^2 H_{kk} + \sigma_i.    (5.25)

Unfortunately, we cannot find H'_{ii} directly; the finite sum on the right is our estimate of it. This means that we cannot solve for \sigma_i directly. However, we can estimate it by increasing n and observing the convergence of H'. The fact that we must make an estimate is unfortunate; it turns our error bounds into error estimates, but it still provides a guide to the eigenvalue error.

Chapter 6

Quantum Chaotic Decay

When a bound system is allowed to decay through a weak channel into a continuum, the results are quite different depending on whether the bound system is regular or chaotic. Bauer and Bertsch [22] demonstrated that the probability of decay for a chaotic billiard system is exponential in time, while that of regular billiard systems obeys a power law in time. Thus the chaotic systems decay faster, which is expected, as chaotic systems explore phase space faster. Bauer and I sought to establish these decay laws for a similar quantum system, without result. The problems inherent in connecting a discrete state to a continuum undermine the approach of using the constraint operator method. The work in this direction is outlined, and its failings are noted.

6.1 Finite Basis Approach

The approach of using quantized states as a basis fails in describing quantum chaotic decay. In order to allow a quantum billiard to decay into an exterior solution, assume that there is an infinitesimal hole in a wall of the confining box. The billiard wavefunction can be nonzero in this hole.
This points to a fundamental difference between bound billiard wavefunctions and decaying billiard wavefunctions: the former are Dirichlet functions (zero at the boundary), while the latter can include Neumann functions (with zero normal derivative at the boundary). Thus the space of decaying billiard wavefunctions is not spanned by the bound billiard wavefunctions.

6.1.1 A Complete Basis

The simplest possible complete basis for the decaying billiard system consists of all the bound-system eigenstates and a delta function at the hole in the wall. This ignores the connection between the hole and the outside, but it captures the essence of the problem. A basis of this sort can be constructed by including both Dirichlet and Neumann functions in the basis, then demanding that the wavefunction be zero everywhere on the boundary except for one point \vec{x}_0. We will use the definitions (4.1) for the Dirichlet and Neumann functions. The wavefunction \Psi(\vec{x}) is represented in this basis by the partitioned vector

    \psi = \begin{pmatrix} \psi_D \\ \psi_N \end{pmatrix},    (6.1)

where \psi_D is the vector of Dirichlet function coefficients and \psi_N is the vector of Neumann function coefficients. The Hamiltonian for this system will be

    H = \begin{pmatrix} \Lambda^D & X \\ X^\dagger & \Lambda^N \end{pmatrix},    (6.2)

where \Lambda^D and \Lambda^N are the diagonal matrices containing the eigenvalues of \nabla^2 in the Dirichlet and Neumann bases, respectively. X is the 'cross term' matrix resulting from the non-orthogonality of the Dirichlet and Neumann functions.

To impose the condition that \Psi on the surface \partial I is proportional to a delta function \delta(\vec{x}_0), \psi must be in the range of the projector

    P = \begin{pmatrix} 1 & 0 \\ 0 & a a^\dagger \end{pmatrix},    (6.3)

where a_i = v_i(\vec{x}_0) / \left( \sum_k v_k^2(\vec{x}_0) \right)^{1/2}. That P^2 = P follows directly from a a^\dagger a a^\dagger = a (a^\dagger a) a^\dagger = a a^\dagger. P has rank n + 1, where n is the number of Dirichlet functions. As P\psi = \psi, H is transformed into

    PHP = \begin{pmatrix} 1 & 0 \\ 0 & a a^\dagger \end{pmatrix}
          \begin{pmatrix} \Lambda^D & X \\ X^\dagger & \Lambda^N \end{pmatrix}
          \begin{pmatrix} 1 & 0 \\ 0 & a a^\dagger \end{pmatrix}
        = \begin{pmatrix} 1 & 0 \\ 0 & a \end{pmatrix}
          \begin{pmatrix} \Lambda^D & X a \\ a^\dagger X^\dagger & a^\dagger \Lambda^N a \end{pmatrix}
          \begin{pmatrix} 1 & 0 \\ 0 & a^\dagger \end{pmatrix}
        = Q H' Q^\dagger,    (6.4)

where Q is m x (m+1) and H' is (m+1) x (m+1). Define the (m+1) x 1 wavefunction vector \psi' in the 'contracted' basis as

    \psi' = \begin{pmatrix} \psi_D \\ \delta \end{pmatrix},    (6.5)

where \delta is the magnitude of the unit delta function at \vec{x}_0. The contracted basis consists of the Dirichlet functions and a delta function at \vec{x}_0 with unit height and unit norm. Two regions can be stitched together by asserting that \delta for one region equals \delta for the other; the Hamiltonian matrix describing the dynamics of this system is

    \begin{pmatrix} \Lambda_1^D & X_1 a_1 & 0 \\
                    a_1^\dagger X_1^\dagger & a_1^\dagger \Lambda_1^N a_1 & a_2^\dagger X_2^\dagger \\
                    0 & X_2 a_2 & \Lambda_2^D \end{pmatrix},    (6.6)

where the subscripts refer to the systems and

    a_1^\dagger \Lambda_1^N a_1 = a_2^\dagger \Lambda_2^N a_2.    (6.7)

This last condition can certainly be met if the two systems are identical. Therefore, we need consider only the interaction of a single system with the delta function on its boundary to understand the dynamics of quantum billiard decay through an infinitesimal hole.

6.1.2 Non-Convergence

The Hamiltonian above is useless, however, as the element which represents the interaction of the delta function with itself does not converge with m. Specifically,

    \lim_{m \to \infty} a^\dagger \Lambda^N a    (6.8)

is not finite. Any larger hole in the wall could be represented by an integral over delta functions. This would still have the problem of non-convergence, as it would include an integral over non-convergent quantities. Thus the problem of non-convergence is a fundamental one.

Bibliography

[1] H. Poincaré, Les Méthodes Nouvelles de la Mécanique Céleste (Gauthier-Villars, Paris, 1892); English translation: NASA Translation TT F-450/452, U.S. Federal Clearinghouse, Springfield, VA.
[2] A. M. Lyapunov (1907), Ann. Math. Studies 17, Princeton (1947)
[3] N. Macrae, John von Neumann (Pantheon Books, New York, 1992)
[4] E. N. Lorenz, J. Atmos. Sci. 20 (1963)
[5] M. J. Feigenbaum, J. Stat. Phys. 19, 25 (1978)
[6] M. J. Feigenbaum, Comm. Math. Phys. 77, 65 (1980)
[7] N. Packard, J. Crutchfield, D. Farmer, R. Shaw, Phys. Rev. Lett. 45, 712 (1980)
[8] D. S. Broomhead and G. P. King, Physica D 20, 217-236 (1986)
[9] E. Ott, T. Sauer, J. A. Yorke, Coping with Chaos (John Wiley & Sons, 1994)
[10] R. Shaw, The Dripping Faucet as a Model Chaotic System (Aerial Press, Santa Cruz, 1984)
[11] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos (Springer-Verlag, 1990)
[12] A. J. Lichtenberg and M. A. Lieberman, Regular and Stochastic Motion (Springer, New York, 1983)
[13] H. G. Schuster, Deterministic Chaos (VCH Verlagsgesellschaft, Weinheim, 1988)
[14] E. Ott, Chaos in Dynamical Systems (Cambridge University Press, 1993)
[15] J. D. Farmer, Z. Naturforsch. 37a, 1304 (1982)
[16] J. D. Farmer, Evolution of Order and Chaos (Springer, Heidelberg-New York, 1982)
[17] J. C. Roux, A. Rossi, S. Bachelart, C. Vidal, Physica 2D, 395 (1981)
[18] G. Casati, F. Valz-Gris, I. Guarneri, Physica 3D, 644 (1981)
[19] H. Goldstein, Classical Mechanics (Addison-Wesley, 1980)
[20] A. L. Fetter and J. D. Walecka, Theoretical Mechanics of Particles and Continua (McGraw-Hill, 1980)
[21] K. Huang, Statistical Mechanics (John Wiley & Sons, 1963)
[22] W. Bauer and G. Bertsch, Phys. Rev. Lett. 65, 2213 (1990)
[23] H. A. Cerdeira, R. Ramaswamy, M. C. Gutzwiller, G. Casati, Quantum Chaos (World Scientific Publishing, 1991)
[24] F. Haake, Quantum Signatures of Chaos (Springer-Verlag, 1991)
[25] E. Heller and S. Tomsovic, Physics Today, July 1993
[26] M. V. Berry and M. Tabor, Proc. Roy. Soc. (London) A 356 (1977)
[27] M. C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer, New York, 1990)
[28] A. Bohr and B. R. Mottelson, Nuclear Structure, vol. 1, p. 294 (W. A. Benjamin, Inc., 1969)
[29] M. L. Mehta, Random Matrices (Academic Press, 1991)
[30] E. P. Wigner, SIAM Rev. 9, 1 (1967)
[31] T. A. Brody, J. Flores, J. B. French, P. A. Mello, A. Pandey, and S. S. M. Wong, Rev. Mod. Phys. 53, No. 3 (1981)
[32] M. V. Berry and M. Wilkinson, Proc. R. Soc. Lond. A 392, 15 (1984)
[33] M. V. Berry, Ann. Phys. 131, 163 (1981)
[34] O. Bohigas, M. J. Giannoni, and C. Schmit, Phys. Rev. Lett. 52, 1 (1984)
[35] W. Bauer, D. McGrew, P. Schuck, and V. Zelevinsky, Phys. Rev. Lett. 72, 24 (1994)
[36] D. L. Hill and J. A. Wheeler, Phys. Rev. 89, 1102 (1953)
[37] J. Blocki, Y. Boneh, J. R. Randrup, M. Robel, A. J. Sierk, and W. J. Swiatecki, Ann. Phys. (NY) 113, 330 (1978)
[38] J. Randrup and W. Swiatecki, Ann. Phys. (NY) 125, 193 (1980)
[39] J. Blocki, F. Brut, T. Srokowski, and W. J. Swiatecki, Nucl. Phys. A545, 511c (1992)
[40] J. Blocki, J.-J. Shi, and W. J. Swiatecki, Nucl. Phys. A554, 387 (1993)
[41] A. Bohr and B. R. Mottelson, Nuclear Structure, vol. 2, p. 350 (W. A. Benjamin, Inc., 1969)
[42] S. Stringari, Nucl. Phys. A325, 199 (1979)
[43] H. Reinhardt and H. Schulz, Nucl. Phys. A391, 36 (1982)
[44] H. Kohl, P. Schuck, and S. Stringari, Nucl. Phys. A459, 265 (1986)
[45] S. Stringari, Phys. Lett. 103B, 5 (1981)
[46] G. P. Maddison and D. M. Brink, Nucl. Phys. A378, 566 (1982)
[47] C. Y. Wong, Phys. Rev. C 25, 1460 (1982)
[48] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in Fortran (Cambridge University Press, 1986, 1989)
[49] A. Wolf et al., Physica 16D, 285 (1985)
[50] A. V. Holden, Chaos, chapter 13 (Princeton University Press, 1986)
[51] K. Geist, U. Parlitz, W. Lauterborn, Prog. Theor. Phys. 83, No. 5 (1990)
[52] D. A. McGrew and W. Bauer, The Constraint Operator Solution to Quantum Billiard Problems, accepted by Phys. Rev. E
[53] Ya. G. Sinai, Russ. Math. Surveys 25, 137 (1970)
[54] L. A. Bunimovich, Comm. Math. Phys. 65, 295 (1979)
[55] S. W. McDonald and A. N. Kaufman, Phys. Rev. Lett. 42, 1189 (1979)
[56] G. Casati, B. V. Chirikov, I. Guarneri, F. M. Izrailev, Phys. Rev. E 48, 1613 (1993)
[57] E. Heller, P. O'Connor and J. Gehlen, Physica Scripta 40, 354 (1989)
[58] H. Alt, H.-D. Graf, H. L. Harney, R. Hofferbert, H. Lengeler, A. Richter, P. Schardt and H. A. Weidenmüller, Phys. Rev. Lett. 74, 62 (1995)
[59] H. Alt, H.-D. Graf, H. L. Harney, R. Hofferbert, H. Rehfeld, A. Richter and P. Schardt, to be published in Phys. Rev. E
[60] P. K. Banerjee, The Boundary Element Methods in Engineering (McGraw-Hill, 1994)
[61] G. De Mey, Int. J. Numer. Meth. Engng. 10, 59-66 (1976)
[62] R. E. Kleinman and G. F. Roach, SIAM Review 16, 214-235 (1974)
[63] Y. Niwa, S. Kobayashi, and M. Kitahara, Developments in Boundary Element Methods 2, 143-176 (Appl. Sci. Publications, London, 1980)
[64] R. J. Riddell, J. Comp. Phys. 31, 21-41, 42-59 (1979)
[65] G. R. C. Tai and R. P. Shaw, J. Acoust. Soc. Am. 56, 796-804 (1974)
[66] E. Heller, Phys. Rev. Lett. 53, 1515 (1984)
[67] E. Heller, in Chaos and Quantum Systems (Proc. NATO ASI Les Houches Summer School), eds. M.-J. Giannoni, A. Voros and J. Zinn-Justin (Elsevier, Amsterdam, 1991)
[68] M. Robnik, J. Phys. A: Math. Gen. 17, 1049 (1984)
[69] M. V. Berry and M. Robnik, J. Phys. A: Math. Gen. 19, 649 (1986)
[70] T. Prosen and M. Robnik, J. Phys. A: Math. Gen. 26, 2371 (1993)
[71] T. Prosen and M. Robnik, J. Phys. A: Math. Gen. 27, 8059 (1994)
[72] B. Li and M. Robnik, J. Phys. A: Math. Gen. 27, 5509 (1994)
[73] B. Li and M. Robnik, preprint CAMTP/95-5, submitted to Journal of Physics A
[74] G. Casati, I. Guarneri, F. M. Izrailev, L. Molinari, K. Zyczkowski, Phys. Rev. Lett. 72, 2697 (1994)
[75] P. M. Morse and H. Feshbach, Methods of Theoretical Physics (McGraw-Hill, 1953)
[76] G. Arfken, Mathematical Methods for Physicists (Academic Press, 1985)
[77] R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. 1 (Julius Springer, Berlin, 1937; John Wiley & Sons, 1989)
[78] W. H. Beyer, CRC Standard Mathematical Tables (CRC Press, 1974)
[79] M. DeVore, Wavelets: An Introduction (Addison-Wesley, 1991)
[80] R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, 1985)
[81] Y. V. Fyodorov, O. A. Chubykalo, F. M. Izrailev, and G. Casati, Phys. Rev. Lett. 76, 1603 (1996)
[82] M. Feingold, A. Gioletta, F. M. Izrailev, L. Molinari, Phys. Rev. Lett. 70, 2963 (1993)
[83] M. S. Hussein and M. P. Pato, Phys. Rev. Lett. 70, 1089 (1993)
[84] G. Casati, F. Valz-Gris, and I. Guarneri, Lett. Nuovo Cimento 28, 279 (1980)
[85] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis (Cambridge University Press, 1992)