TREES, PATHS AND AVALANCHES ON RANDOM NETWORKS

By

Radu Dobrin

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Physics and Astronomy

2002

ABSTRACT

TREES, PATHS AND AVALANCHES ON RANDOM NETWORKS

By Radu Dobrin

The investigation of equilibrium and non-equilibrium processes in disordered systems, and particularly the relation between them, is a complex problem that deserves attention. We concentrate on analyzing several relations of this type and on appropriate numerical methods.

The invasion percolation (IP) model was motivated by the problem of fluid displacement in disordered media, but in principle it can be applied to any invasion process which evolves along the minimum resistance path. Finding the invasion paths is a global optimization problem where the front advances by occupying the least resistant bond. Once the invasion is finished, the union of all the invasion paths on the lattice forms a minimum energy spanning tree (MST). We show that the geometry of a MST on random graphs is universal. Due to this geometric universality, we are able to characterize the energy of this optimal tree for any type of disorder using a scaling distribution obtained with uniform disorder. We therefore expect hopping transport in random media to have universal behavior.

Kinetic interfaces are an important branch of statistical mechanics, fueled by applications such as fluid-fluid displacement, imbibition in porous media, flame fronts, tumors, etc. These processes can be unified via the Kardar-Parisi-Zhang (KPZ) equation, which maps exactly to an equilibrium problem, the directed polymer in a random medium (DPRM). We are able to characterize both using Dijkstra's algorithm, which is known to generate the shortest path tree in a random network. We found that while constructing the polymers the algorithm develops a KPZ-type interface. We have extracted the interface exponents for both the 2d square lattice and the 3d cubic lattice, in agreement with previously reported results for KPZ. The IP and KPZ classes are known to be very different: while the first generates a distinct self-similar (fractal) interface, the second has a self-affine invasion front. Though they are different, we are able to construct a generalized algorithm that interpolates between these two universality classes. We discuss the relationship with IP and the directed polymer in a random medium, and the implications for the broader issue of universality in disordered systems.

The random field Ising model (RFIM) is one of the most important models of phase transitions in disordered systems. We present exact results for the critical behavior of the RFIM on complete graphs and trees, both at equilibrium and away from equilibrium, i.e., models for hysteresis and Barkhausen noise. We show that for stretched exponential and power-law distributions of random fields the behavior on complete graphs is non-universal, while the behavior on Cayley trees is universal even in the limit of large coordination.

Until recently, the evolution of the WWW, the Internet, etc., was thought to be highly complex. The model proposed by Barabasi and Albert shows that such networks can be modeled with the help of "preferential attachment", i.e.
a highly connected vertex has a higher chance to get further links compared with a weakly connected vertex. We find that the random network constructed from a self-organized critical mechanism, (1P), falls in the same class without imposing any "preferential" growth rule. The network obtained has a connectivity exponent '7 E 2.45, close to the WWW outgoing—links exponent. To my family iv Table of Contents List of Tables List of Figures 1 Introduction 2 Invasion Percolation and Minimum Spanning 'Ii'ees 2.1 2.2 2.3 2.4 2.5 Universality ............................ Invasion Percolation (IP) ..................... 2.2.1 The Invasion Percolation Model ............. 2.2.2 Acceptance Function ................... Minimum Spanning Trees .................... 2.3.1 MST Geometry ...................... 2.3.2 MST Cost ......................... The Physics of Strong Disorder .................. Conclusions ............................ Directed Polymers in Random Media and KPZ Equation 3.1 3.2 3.3 3.4 3.5 Models of Stochastic Growth ................... 3.1.1 Eden Model ........................ 3.1.2 Edwards-Wilkinson Equation .............. 3.1.3 Solid-On-Solid Models .................. Non-linear Stochastic Growth: KPZ Equation .......... 3.2.1 Scaling Functions ..................... 3.2.2 Relations Between Exponents .............. 3.2.3 Symmetry Breaking and Scaling Arguments ....... 3.2.4 Galilean Invariance .................... 3.2.5 Exact Result for KPZ equation in 1 + 10 ........ Directed Polymers in Random Media (DPRM) .......... 3.3.1 DPRM to KPZ Equation Mapping ............ 3.3.2 DPRM Scaling Functions ................ 3.3.3 Relations between exponents ............... Numerical Simulations ...................... 3.4.1 Invasion Algorithm with Overhangs ........... Conclusions ............................ Generalized Invasion 4.1 Analytical Derivations ...................... 4.1.1 Strong Disorder Distribution ............... 4.1.2 Invasion Percolation Limit ................ vii viii 58 61 4.1.3 Relevant Scales ............................ 62 4.2 Numerical Results ............................... 64 4.2.1 Spanning Tree Cost .......................... 64 4.2.2 Acceptance Function ......................... 66 4.2.3 Path Geometry and Fractal Dimension ................ 67 4.2.4 Trees Overlapping Function ..................... 71 4.3 Conclusions .................................. 71 5 Random Field Ising Model 74 5.1 Introduction .................................. 74 5.2 Non-equilibrium Random Field Ising Model ................. 76 5.2.1 Mean-Field Theory .......................... 78 5.2.2 Mean-Field Avalanches ........................ 79 5.2.3 Magnetization and Critical Field ................... 81 5.3 Equilibrium RFIM on Cayley Tree ...................... 84 5.4 Finite Temperature Expansion ........................ 89 5.5 Conclusions .................................. 94 6 Scale-Free Random Networks 95 6.1 Models for Random Networks ........................ 95 6.1.1 Erdos-Rényi Model ......................... 95 6.1.2 Barabasi’s Model for Scale-Free Networks ............. 99 6.2 Self-Organized Growth ............................ 103 6.2.1 Avalanche Distribution ........................ 104 6.2.2 From Avalanches to Networks .................... 106 6.2.3 New-network Topology ....................... 107 6.3 Conclusions .................................. 110 7 Outlook 111 APPENDICES 114 A Definitions 115 A.1 Graph definitions ............................... 
115 B Greedy Algorithms 119 B.1 Minimal Cost Path .............................. 120 B2 Minimal Spanning Trees ........................... 123 B3 Spin-flip Dynamics .............................. 125 3.4 Greedy Dynamics in Disordered Systems .................. 133 BIBLIOGRAPHY 135 vi 3.1 3.2 3.3 6.1 List of Tables DPRM critical exponents according to [72]. ................... 49 KPZ critical exponents according to [5]. ..................... 50 The critical exponents extracted with Dijkstra’s algorithm. ........... 53 Important constants for some well known random networks. .......... 99 vii 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 3.1 3.2 3.3 3.4 3.5 3.6 3.7 4.1 4.2 List of Figures Self-similar invasion using Prim’s algorithm. .................. Acceptance function for 2d square lattices, site IP ................. Acceptance function for bond IP on 2d square lattices and 3d cubic lattices. . . The minimum spanning tree for a 30 x 30 square lattice .............. The scaled distributions of path-lengths, 9(3), for spanning trees generated on 2d square lattices ................................ The scaled distributions of path-lengths, 9(5), for spanning trees generated on 3d cubic lattices. ............................... The probability, P(.r.), that a bond with energy .r lies on the minimum spanning tree for 2d square lattices ............................ The probability, P(.z:), that a bond with energy .1? lies on the minimum spanning tree for 3d cubic lattices. ........................... Self-affine growth using Dijkstra’s algorithm. .................. Lateral growth from KPZ equation ........................ Diagrams associated with one-loop expansion for the KPZ equation ...... Linear front invasion using Dijkstra’s algorithm .................. Plot of the Dijkstra interface roughness for 2d square lattices ........... Plot of the Dijkstra interface roughness for 3d cubic lattices. .......... Plot of the Dijkstra interface roughness for when the time unit is different from the natural time unit. ............................. Invasion Front. .................................. Tree cost Etree for various m exponents ...................... viii 11 12 13 17 19 25 33 52 53 65 4.3 4.4 4.5 4.6 4.7 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 5.10 6.1 6.2 6.3 6.4 Acceptance function for the first percolating tree on square lattices ........ 67 Plot of the acceptance function for the complete tree on square lattices ...... 68 The scaled distributions of path—lengths, g(.s), on spanning trees on square lat- tices ....................................... 69 Scaled plot of the total length 725, versus the Euclidian length l. ......... 70 The overlap of the generalized tree with MST (using a 64 bit and 32 bit ma- chines) and SPT (using only a 32 bit machines) ................ 72 The disorder distribution p1. ........................... 77 The disorder distribution p2. ........................... 78 The phase diagram of the non-equilibrium RFIM MFT using the disorder dis- tribution (5.3) .................................. 83 The magnetization jump at the phase boundary. ................. 84 The phase diagram for the non-equilibrium RFIM on a Cayley tree with coor- dination number a = 3 using the distribution (5.3) and taking the exchange constant J = 1. ................................ 86 The magnetization jump for the RFIM on Cayley trees for z = 4 and the distribution (5.3), with the exchange constant J = 1. ............ 87 Critical disorder versus temperature in the absence of the field (H = 0). . . . . 
92 Critical field as a function of temperature when disorder is suppressed (6h 2 0). 92 Finite temperature phase diagram using the distribution (5.3) for y = 0.5. . . . 93 Finite temperature phase diagram using the distribution (5.3) for y = 2.0. . . . 93 1928 United States of America, North-West highway map. ........... 100 Poisson distribution for k' = 6. .......................... 100 Hierarchical topology of the international web cache. .............. 102 The connectivity distribution for WW ...................... 103 ix 6.5 6.6 6.7 6.8 6.9 A.1 3.1 B2 Time signal from invasion percolation generated with Prim’s algorithm. Connectivity distribution P (A?) for avalanche network ............... The network and avalanches using Prim’s algorithm with 4 invasion sites. The network and avalanches using Prim’s algorithm with 10 invasion sites. The network and avalanches using Prim’s algorithm with 30 invasion sites. A connected graph example. ........................... Dijkstra’s algorithm example ............................ Prim’s algorithm example. ............................ . 104 108 . 109 . 109 . 109 117 122 124 Chapter 1 Introduction The dynamics of disordered systems is a very attractive topic of research. In many sys- tems, disorder leads to very interesting effects such as depinning, self-organized criticality (SOC) and roughening transition. Physical examples include: random magnets; flux lines in superconductors; growth and invasion processes; and avalanche dynamics. We could classify these effects into two very broad classes (1) equilibrium and (2) non-equilibrium processes. The main complication brought by disorder is that apart from thermal averages we have to do a second average over the disorder. A great deal of attention is dedicated to extracting critical exponents of relevant quantities because this enables us to classify sys- tems according to "universality classes". Associating each invasion process with a growth equation allows us to describe them as part of an universality class governed by that equa- tion. Unfortunately it is not always possible to map a random process to a growth equation. In that case, we consider that the set of critical exponents of the relevant process, defines the universality class. In the first chapter we discuss a non-equilibrium process called bond invasion percola- tion (BIP) in random networks and describe its connection with the minimum spanning tree (MST) [37], an equilibrium property. The second chapter is dedicated to the study of di- rected polymer in random media (DPRM) [40], and its associated non-equilibrium process the Kardar-Parisi-Zhang (KPZ) equation. We are able to investigate in the third chapter IP, KPZ, and relations between them, as limiting cases of a generalized algorithm. In the fourth chapter, we attempt to unravel both equilibrium and non-equilibrium effects in one of the most important models of disordered systems, the random field Ising model (RFIM). The fifth chapter introduces us to a very new topic called "scale-free random networks", in which the network itself grows without any "preferential attachment" rule. The invasion percolation (IP) model was introduced in 1983 by Wilkinson and Willem- sen [113] as a new form of percolation theory. The proposed model tried to explain the mechanism of one fluid displacing another from a porous medium in the presence of capil— laryforces, with immediate application to oil recovery. 
The invasion process automatically finds a critical point and is an example of self-organized criticality. IP defines a very broad universality class without an associated growth equation. The main problem when trying to derive a growth equation for IP is the fractal behavior of the interface, which is governed by the local dynamics. When studying percolation, one wants to investigate the clusters generated. This ap- proach makes it possible to extract critical exponents and thus to relate this problem to phase transitions. Later, it was realized that other geometrical properties of the percolation cluster are important such as the perimeter, backbone, elastic backbone, shortest path, dan- 2 gling ends, leading to new fractal dimensions and critical exponents. It is natural to extend these methods of analyzing clusters to something involving paths and trees since we have nodes as pores of the random media and bonds as channels connecting them. The process described originally as invasion percolation can also be found in the context of graph theory as the well known Prim’s algorithm. This algorithm is mapped exactly to IP [10] and it is also known to generate the minimum spanning tree (MST) in the network. We can easily prove that all the paths in the tree are those with the smallest barrier, making MST a valu- able tool in characterizing this class of invasion models. We will show in this chapter, using Prim’s algorithm, that we can solve the equilibrium process (MST) by a non-equilibrium invasion process and also that the geometry of MST is universal, and independent of the details of the randomness. Another important non-equilibrium universality class is the KPZ class associated with the growth equation of the same name [69]. Kardar—Parisi-Zhang is a non-linear model of interface growth in random media . The KPZ invasion is often described as driving an interface in an inhomogenous media subject to quenched or annealed forces. The noise- driven growth leads to self-affine interfaces described by the KPZ equation. A member of KPZ class is the directed polymer in random media (DPRM). Vortex-line wandering in disordered superconductors, the propagation of flame fronts and domain-wall roughening in impurity-stricken magnets are also known to be part of DPRM class. In equilibrium the "polymer" tries to minimize its free energy. If one maps the random interacting potentials to bond costs, minimizing the free energy is equivalent with minimizing the sum of the bond costs along the polymer’s path. In other words, the DPRM is equivalent to the "shortest- 3 path" problem in the computer science literature. One of the attractive features of the DPRM is that some beautiful analytical results may be derived for it. Using path-integral techniques it can be proved that the free-energy fluctuations of the polymer are analogous to the height fluctuation of the interface from the KPZ equation. Dijkstra’s algorithm is a well known algorithmic method, from computer science, to solve the "shortest-path" problem. Like Prim’s algorithm, this is an invasion algorithm but with a different invasion rule, one which generates the shortest path tree (SPT) in a random network. Our goal for chapter two and three is to prove that the invasion front generated by Dijkstra’s algorithm is in the KPZ universality class and also that in the strong disorder limit the KPZ type evolution changes to IP growth. 
We will also describe a generalized invasion process which interpolates continuously between these two limits and characterize its behavior using large scale simulations. The random field Ising model represents one of the best models to study phase transi- tions in random systems. It has a vast number of applications from solid-state physics to biology and recently to economics [32, 107]. The pioneering non-equilibrium work was done by Sethna et a1. [33] who studied the ferro-magnetic RFIM as a model for hysteresis and Barkhausen noise. Their study points to a critical point in the disorder space associated with a strong power law distribution for avalanches. The most controversial result is the hysteresis loop width at this point. In mean-field theory, the width of the loop is zero while from simulations we see a split, a non-zero width, at the same critical point. One proposed model to explain the existence of the hysteresis loop at the critical point is the soft—spin model where spins are allowed to take any value from minus to plus infinity. Since we 4 believe that the soft—spin model has some pathological properties, we will introduce our own model of the equilibrium RFIM on a Cayley tree and we will compare it with non- equilibrium results. Also we will analyse the mean-field limit of RFIM in both T = 0 and at finite temperatures, T 75 0, for various random field distributions, since universality is an important feature of this model. A recent topic of interest in disordered systems are networks such as the WWW, with- out a clear topological description but with large implications in every day life. These systems called random networks were believed to have ordinary [45] random like behavior, in other words the probability distribution of links per node was expected to follow a Pois- son distribution. However, a recent study [12] proved that this complex system has different behavior when compared to random networks. The calculated probability distribution for the Internet, WWW, etc. shows no resemblance to random graph theory but is rather similar to a scale-free graph topology in which the probability distribution is a power-law. These early models have strict rules for evolution allowing little or no flexibility. We will con- struct scale-free random networks from the avalanche structure of the invasion percolation model. This enables us to show that the most common, self-organized, invasion process has an underlying scale-free topology. Chapter 2 Invasion Percolation and Minimum Spanning Trees Percolation [104] is one of the most studied problems in physics, not only because of its fundamental nature, but also because of its applicability to a wide variety of systems. Resistor networks [34], forest fires [58], biological evolution [67] and epidemics [85] are some subjects where percolation has been used. Invasion percolation [113] is an extension of the static percolation to a dynamic process. The model was motivated by the problem of fluid displacement in disordered media but, in principle, it could be applied to any invasion process that evolves along the minimum resistance path. In this chapter we investigate the relation between invasion percolation and minimum spanning trees (MST) and also the properties of the minimum spanning tree. This approach is motivated by the fact that for the study of transport processes a detailed knowledge of the internal structure of the cluster is necessary, in particular the topology of the conducting 6 paths plays a critical role [20,42]. 
The tree's paths are found to be involved in hopping transport in semiconductors [1], where the electrons would follow similar trajectories. They have also been used when studying the magnetic properties of solid materials at low temperature [16,60].

2.1 Universality

One of the most important properties of disordered materials is universality. Universality is an unproven hypothesis in disordered systems, though it has been widely assumed in the development of theories for the random-field Ising model and for spin glasses [115]. The hope has been that scaling exponents in disordered systems should not depend on the nature of the disorder, provided it is uncorrelated, and provided that the disorder distribution is not too broad. Percolation [104] and the directed polymer in a random medium (DPRM) [56] reassure us that universality does hold. However, the random field Ising model (RFIM) has recently provided a counterexample [7, 38, 41, 105]. In particular we showed that the mean-field RFIM is non-universal in the ground state, as the order parameter exponent can depend continuously on the details of the disorder [38, 41]. In contrast, we show in this chapter that the MST is super-universal in the sense that the MST geometry is unaltered even if the distribution of disorder is made very broad. Due to this fact, the energy of a MST can be found from a universal function, for a given graph topology. The MST geometry is important in the strong disorder limit [25, 26], for example as a model for spin glasses [88], and it also relates to hopping transport at low temperatures [20, 25, 42]. As we shall discuss in this chapter, the paths on the minimum spanning tree are those on which the energy barrier is smallest, and for this reason the MST paths dominate the kinetics at low temperatures.

2.2 Invasion Percolation (IP)

Invasion processes are widely found in nature: paper burning, paper wetting, cell colony growth. Invasion percolation (IP) is an extension of percolation theory that takes into account the transport processes in the network. Depending on how the invasion takes place, there are two different types of IP, bond invasion percolation (BIP) and site invasion percolation (SIP). In the limit of an infinite lattice size IP is like ordinary percolation at its critical threshold. IP proved to be an important tool in the oil recovery problem [73, 74], where it can estimate the quantity of oil trapped inside a reservoir after water injection.

Figure 2.1: Invasion percolation cluster on a 200 x 200 square lattice generated using Prim's algorithm and the uniform distribution of real numbers in [0, 1].

If we represent the porous medium as a network of pores connected by pipes, we can view it as a regular lattice in which the sites are pores. If we consider the process of a non-wetting fluid, oil, being displaced in such a medium, at infinitesimal constant flow rate the viscous forces are completely dominated by the capillary forces at the oil-water interface. The forces are stronger in narrow places, so to make a simple model we represent the motion of water as a series of discrete jumps in which the smallest pore is invaded. The process described above, proposed for the first time by D. Wilkinson and J. F. Willemsen [113], is called invasion percolation (IP). One of the problems with IP is the existence of trapping if the fluid is incompressible (if the fluid is compressible, trapping cannot occur).
As the water advances, it is possible to completely trap oil, forming disconnected clusters. This problem is known as "residual oil", a problem well known in the oil industry. To create a model we have to impose a new rule: once a region is surrounded, all sites enclosed need to be removed from the growing sites list. The effect of this rule is important because it reduces the mass of the invading cluster.

2.2.1 The Invasion Percolation Model

The model for invasion percolation is very flexible. It consists of a graph (a graph G is a collection of vertices V and bonds B; a bond connects two vertices; in physics we usually deal with sparse graphs, in which the number of bonds is of the same order as the number of vertices) on which random numbers can be distributed on bonds (bond IP) or on vertices (site IP). We can consider any type of geometry for the graph in any dimension (for example in 2d: honeycomb, Kagome, square, diced, etc.; in 3d: fcc, bcc, cubic, etc.) and we can vary the distribution of the random numbers. Even though the model looks very rich, in 2d it has been proved [117] that the correction to the average cluster density (number per site) behaves as

n = n_c + \frac{b}{S} + \ldots   (2.1)

where S is the number of sites and b is a function of the system only, and thus a universal constant. The most studied model in 2d is the square lattice (the cubic lattice in 3d) where the distribution is uniform, drawn from the interval [0, 1].

2.2.2 Acceptance Function

In analogy with the percolation model, Wilkinson and Willemsen were interested in the behavior of the first IP cluster to percolate, because in the infinite size limit it was believed (and later proved to be true [113]) that the IP cluster and the percolation cluster are identical. In order to capture the essence of IP they extracted the acceptance function, defined as follows:

a(r) = \frac{\text{number of random numbers accepted in the cluster from } [r, r+dr]}{\text{total available numbers in } [r, r+dr]}   (2.2)

We see that as we increase the lattice size, the acceptance function evolves to a step function exactly as in ordinary percolation (Fig. 2.2).

Figure 2.2: Acceptance function for 2d square lattice site IP for two different lattice sizes according to the legend. As the lattice size increases the acceptance function asymptotically evolves to a step function whose integral equals the critical threshold, as in ordinary percolation.

To test the assumption that the invasion percolation cluster is asymptotically identical to the percolation cluster we have calculated, following Wilkinson et al. [113], the acceptance fraction p, defined as the area under a(r),

p = \int_0^1 a(r) \, dr   (2.3)

We see that p → p_c as L → ∞. The calculated value for site percolation, p_c = 0.593(1), was very accurate and coincides with p_c from ordinary percolation. Following the same steps as before we calculated the acceptance function in the BIP case and also p_c for this case, which is 0.5 (from conjecture). As we can see from Fig. 2.3, a(r) will never evolve to a step function, since there are always bonds with cost less than p_c which, when added to the tree, would make cycles, and others that are part of the front.

Figure 2.3: Acceptance function for BIP on 2d square lattices (solid line) and 3d cubic lattices (dotted line). Similar to Fig. 2.2, for bond invasion percolation the acceptance function shows a jump at the critical threshold, smaller than the jump for site invasion percolation since a tree is loop-less.
Because of this, the acceptance fraction in the limit L → ∞ is not p_c. In this case, we can define p_c as the maximum bond value included in the cluster, ε_max. This new parameter can play the role of p in the site case because as L → ∞ the maximum value of the bond, ε_max → p_c. The same arguments are valid for site IP because for asymptotically large samples the probability to add a site with a value bigger than p_c is zero.

2.3 Minimum Spanning Trees

Perhaps the simplest non-trivial optimization problem, the minimum spanning tree (MST) is a tree which visits every site in a graph so that the sum of the costs of the edges in the tree is minimal (see Fig. 2.4). In physics terminology, each edge (ij) has an energy ε_ij, and the total energy is the sum of the energies of the bonds that make up the minimum spanning tree, i.e.

E_{MST} = \sum_{(ij) \in \text{tree}} \epsilon_{ij}   (2.4)

Figure 2.4: The minimum spanning tree (MST) for a 30 x 30 square lattice. Each edge has an energy drawn from the uniform distribution on the interval [0, 1]. Only the bonds on the minimum spanning tree are drawn. The heavy line is one path on the MST, starting at the center of the square lattice. The Euclidian distance between the two ends of this path is also indicated.

Due to its practical applications in a variety of contexts, including image analysis, transportation networks, etc., this problem has been heavily studied by the engineering community. It is also one of the most fundamental problems in combinatorial optimization and has been intensively studied in the computer science and applied math communities [31]. The physics community has been less aware of this problem, with notable exceptions [4, 10, 24], though it has close connections to the problem of a fluid invading a porous medium, as modeled by the invasion percolation (IP) process. However, invasion percolation is a dynamical process which grows minimum spanning trees, whereas the MST itself is a global minimum of a cost function. The MST must visit every site in the graph, and so corresponds to continuing the invasion process until every site in a finite graph is reached. This is not usually studied in invasion percolation, in which the steady state regime in a very large lattice is of most interest [102,111].

2.3.1 MST Geometry

For simplicity, consider square and cubic lattices whose edges are assigned costs (energies) drawn from a uniform distribution on the interval [0, 1]. In order to find the minimum spanning tree on such graphs, we use Prim's algorithm, a greedy algorithm (in physics these are called growth, invasion or extremal algorithms) which chooses the best site for advance at each time-step. In the computer science literature [31] Prim's method starts by choosing the cheapest bond in the whole graph, and then grows outward to the cheapest bond that is adjacent to the starting bond. Each bond that is invaded is added to the growth cluster and the process is iterated until every site has been reached. Bonds can only be invaded if they do not produce a cycle, so that the tree structure is maintained. However, it is not essential to start from the cheapest bond. Growth starting from any site leads to the same MST. This latter process is identical to bond invasion percolation, which then finds the MST exactly. Intuitively, the invasion algorithm finds the exact minimum spanning tree because each site in a MST must be visited at least once and the IP algorithm selects the best way to make this choice at each site. It is a standard exercise in algorithm theory to prove this rigorously [31].
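The growth rule just described is compact enough to state in a few lines of code. The following is a minimal illustrative sketch of bond invasion percolation on an L x L square lattice with uniform bond costs (it is not the implementation used for the simulations reported in this chapter); run until every site is invaded, it returns the minimum spanning tree.

import heapq, random

def invade_mst(L, seed_site=(0, 0), seed=0):
    """Bond invasion percolation (Prim's algorithm) on an L x L square lattice.
    Bond costs are uniform on [0, 1]; invading until every site is reached
    yields the minimum spanning tree."""
    rng = random.Random(seed)
    cost = {}                          # bond -> cost; a bond is a sorted pair of sites

    def bond(a, b):
        return (a, b) if a < b else (b, a)

    def neighbors(s):
        x, y = s
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:
                yield (nx, ny)

    def push_front_bonds(s):
        for n in neighbors(s):
            b = bond(s, n)
            if b not in cost:
                cost[b] = rng.random()
            heapq.heappush(front, (cost[b], b))

    invaded = {seed_site}
    front = []                         # heap of (cost, bond) on the growth front
    tree = []                          # invaded bonds, in order of invasion
    push_front_bonds(seed_site)
    while len(invaded) < L * L:
        c, b = heapq.heappop(front)    # cheapest bond on the front
        new = [s for s in b if s not in invaded]
        if not new:                    # both ends already invaded: would close a cycle
            continue
        invaded.add(new[0])
        tree.append((c, b))
        push_front_bonds(new[0])
    return tree

tree = invade_mst(100)                 # e.g. 100 x 100 lattice; len(tree) == 100*100 - 1

The acceptance function a(r) of (2.2) can then be estimated by comparing the histogram of invaded bond costs with the histogram of all the costs generated.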
The key observation is that the geometry of the MST on a graph depends only on the ordering of the bond energies on that graph. It does not matter if the bond energies are nearly the same, or wildly different; it is only the ordering that matters. This can be intuitively understood by imagining making a list of the bonds ordered from the smallest in energy to the largest. Now start removing the largest energy bonds, with the rule that the removal of a bond cannot disconnect the graph into two pieces. Continue this process until no bonds can be removed. This final state is the minimum spanning tree, and this procedure is very similar to the algorithm suggested by Cieplak et al. for this problem [25] (the invasion method is much more efficient though). All that matters for this bond removal process is the ordering of the bond energies, and hence the geometry of the final tree so formed only depends on that ordering. Thus if we make a transformation ε → f(ε) that preserves the ordering of the bond energies (e.g., the bond that has the fiftieth largest energy is the same before and after the transformation), then the MST geometry is unaltered. Now note that if f(x) is a non-decreasing function of x, the ordering of a set {x_1, ..., x_n} is unaltered under the transformation to {f(x_1), ..., f(x_n)}. This observation is germane to the issue of universality, due to the fact that we can use the uniform distribution to sample according to a general distribution F(x), by assigning F(x)dx = dy, which transforms the interval dy of the uniform distribution into the interval F(x)dx of the general distribution. Thus if we randomly choose a number y from the uniform distribution, the corresponding random number from the distribution F(x) is x = G^{-1}(y), where

G(x) = \int_0^x F(x') \, dx'   (2.5)

Now note that G(x) is a non-decreasing function of x because F(x) is a probability density and hence positive, and therefore G^{-1}(y) is a non-decreasing function of y. Thus the transformation (2.5) preserves the ordering of the bond costs, and hence the geometry of the MST is the same for any probability distribution. Note that even if the bond costs were negative, we could choose a uniform shift to make the bond costs positive. Since this uniform shift does not alter the ordering of the bond costs, the MST geometry is universal even when there are negative bond costs.
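As a concrete illustration of this argument (an example added here for clarity, with an arbitrarily chosen disorder distribution), consider bond energies drawn from an exponential distribution F(x) = \lambda e^{-\lambda x} for x \ge 0. Then

G(x) = \int_0^x \lambda e^{-\lambda x'} \, dx' = 1 - e^{-\lambda x}, \qquad x = G^{-1}(y) = -\frac{1}{\lambda} \ln(1 - y)

Since G^{-1}(y) is strictly increasing in y, the ranking of the bond energies, and hence the MST geometry, is identical to that obtained from the uniform random numbers y themselves; only the value attached to each bond changes.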
The calculation of the geometry of the paths on the MST is carried out as illustrated in Fig. 2.4, where we have shown both the total length n between two points in the lattice and the Euclidian length l between the same points. We plot the scaled distributions of path-lengths, g(s), on spanning trees on a) square and b) cubic lattices. The scaling variable is s = n/l^{D_f}, where n is the number of bonds in the MST path, l is the Euclidian distance and D_f is the scaling dimension (D_f = 1.22 ± 0.01 for the square lattice and D_f = 1.42 ± 0.02 for the cubic lattice). The dashed line in these figures is the scaling distribution on the MST. For comparison we also give the scaling distribution for the steady state during growth of the MST, i.e. invasion percolation (solid line). In both cases the paths scale with the same fractal dimension; in fact this holds at all stages of growth of the MST. The results are found from averaging over 2000 realizations of 401 x 401 square lattices and over 1500 realizations of 101 x 101 x 101 cubic lattices.

Figure 2.5: The scaled distributions of path-lengths, g(s), for spanning trees generated on 2d square lattices. The scaling variable is s = n/l^{D_f}, where n is the number of bonds in the MST path, l is the Euclidian distance and D_f is the scaling dimension (D_f = 1.22 ± 0.01 for the square lattice). The dotted line in the figure is the scaling distribution on the MST. For comparison we also give the scaling distribution for the steady state during growth of the MST, i.e. invasion percolation (solid line). The results are found from averaging over 2000 realizations of 401 x 401 square lattices.

The number of bonds, n(l), which lie on a MST path between two sites separated by a Euclidian distance l, is found to scale as l^y, where y = 1.22 ± 0.01 (square lattice) and y = 1.42 ± 0.02 (cubic lattice) (see Fig. 2.5 and Fig. 2.6). The fractal geometry of paths (strands) in invasion percolation [26] gives exactly the same scaling. Thus the paths are fractal even away from the invasion percolation critical point, as seen in the scaling plots of Figs. 2.5 and 2.6 and as is qualitatively evident from Fig. 2.4.

Figure 2.6: The scaled distributions of path-lengths, g(s), for spanning trees generated on 3d cubic lattices. The scaling variable is s = n/l^{D_f}, where n is the number of bonds in the MST path, l is the Euclidian distance and D_f is the scaling dimension (D_f = 1.42 ± 0.02 for the cubic lattice). The dotted line in the figure is the scaling distribution on the MST. For comparison we also give the scaling distribution for the steady state during growth of the MST, i.e. invasion percolation (solid line). The results are found from averaging over 1500 realizations of 101 x 101 x 101 cubic lattices.

2.3.2 MST Cost

Having proven in the previous section (Section 2.3.1) that the geometry of the MST is fractal and universal, we now show that it is possible to find the energy of the MST from one universal function, for a given graph topology. Numerical results for the appropriate universal function are presented in Figs. 2.7 and 2.8 for minimum spanning trees on square and cubic lattices. The function we plot is the probability, P(ε), that a bond of cost ε, for a uniform distribution on the interval [0, 1], lies on the minimum spanning tree.

Figure 2.7: The probability, P(x), that a bond with energy x lies on the minimum spanning tree for 2d square lattices. The curving dashed lines are for the MST while the solid lines are for the steady state during growth of the MST, i.e. invasion percolation. The results are found from averaging over 1000 realizations of 401 x 401 square lattices.

The dashed lines in Fig. 2.7 and Fig. 2.8 are the cost distributions for the MST, while the solid lines are for invasion percolation. In invasion percolation P(ε) is the acceptance function for the case of bond invasion.
In this case, it has the interesting property that in the scaling limit P(ε) → 0 for ε > p_c (p_c is the bond percolation threshold). However, the MST must reach every site of the graph, in which case it is necessary to include bonds which have ε > p_c (see the dashed lines in Figs. 2.7 and 2.8). From the acceptance probability P(ε), the total energy of the minimum spanning tree with bonds drawn from a uniform distribution is simply (in the scaling limit)

E = \int_0^1 \epsilon P(\epsilon) \, d\epsilon   (2.6)

Figure 2.8: The probability, P(x), that a bond with energy x lies on the minimum spanning tree for 3d cubic lattices. The curving dashed lines are for the MST while the solid lines are for the steady state during growth of the MST, i.e. invasion percolation. The results are found from averaging over 1000 realizations of 101 x 101 x 101 cubic lattices.

As shown above, the geometry of the minimum spanning tree is unaltered if we make a transformation ε → f(ε) of the bond costs, provided f(ε) is a non-decreasing function of the bond costs. After making this transformation, the energy is simply E = \int_0^1 f(\epsilon) P(\epsilon) \, d\epsilon. In addition, using the arguments given above, we may generalize to the case of an arbitrary distribution F(ε), in which case (from (2.5))

E_F = \int_0^1 G^{-1}(\epsilon) P(\epsilon) \, d\epsilon   (2.7)

for any disorder distribution F(x).

As stated above, paths on the MST are those on which the bond of maximum energy is minimal. In physical terms, MST paths are those on which the barrier is minimal. The barrier on such a path is the largest cost bond that lies on that path. In the steady state limit (i.e., on the IP percolation cluster, see Figs. 2.7, 2.8)

\epsilon_{barrier} = p_c   (2.8)

that is, the barrier on invasion percolation paths, in a graph with edge costs drawn from a uniform distribution, takes a value equal to the percolation threshold on that graph. For other distributions of disorder, the barrier on IP paths becomes ε_barrier = G^{-1}(p_c) (from (2.5)). However, the barriers on typical MST paths (i.e. on the whole lattice, see Figs. 2.7, 2.8) have ε_barrier > p_c for l → ∞, and it is only on the fractal IP subset of paths that (2.8) holds. Thus hopping transport at low temperatures will typically occur on IP paths, which justifies the use of percolation models in the calculation of diffusivity and conductivity in the strong disorder limit [20,42].

2.4 The Physics of Strong Disorder

It is interesting to compare the behavior of paths on the MST with the behavior of the directed polymer in a random medium (DPRM) (see Chapter 3). The DPRM, and the associated Kardar-Parisi-Zhang growth process [69], has become a paradigm in the study of disordered systems. More recently it has been noted that the DPRM is a subset of the shortest path (SP) problem in computer science and engineering [31,40,82]. The statement of the SP problem is deceptively similar to that of the MST problem discussed above, however its properties are radically different. The shortest path problem seeks to find a path between two sites in a graph such that the sum of the bond costs is minimal, so that

E_{SP} = \sum_{(ij) \in \text{path}} \epsilon_{ij}   (2.9)

If one seeks the shortest path from a starting source site to all other sites in a graph, then a shortest path tree (SPT) is formed. The total cost of the shortest path tree is the sum of the costs of all of the paths in the tree. Note, however, that in this sum it is inevitable that some of the bond costs will appear more than once. In fact, bonds near the source site will be counted many times.
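The SPT can also be grown by an invasion algorithm, namely Dijkstra's algorithm (discussed further in Chapter 3 and Appendix B). For completeness, here is a minimal illustrative sketch, not the production code used for the simulations reported later, of Dijkstra's algorithm building the shortest path tree from a single source on a square lattice with uniform bond costs.

import heapq, random

def dijkstra_spt(L, source=(0, 0), seed=0):
    """Shortest path tree on an L x L square lattice with uniform bond costs.
    Returns the minimal path energy E_SP to every site and, for each site,
    its predecessor on the shortest path tree."""
    rng = random.Random(seed)
    cost = {}                                   # bond -> cost in [0, 1]

    def bond_cost(a, b):
        key = (a, b) if a < b else (b, a)
        if key not in cost:
            cost[key] = rng.random()
        return cost[key]

    def neighbors(s):
        x, y = s
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:
                yield (nx, ny)

    dist = {source: 0.0}                        # minimal path energy to each site
    parent = {source: None}                     # predecessor site on the SPT
    heap = [(0.0, source)]
    finished = set()
    while heap:
        d, s = heapq.heappop(heap)
        if s in finished:
            continue
        finished.add(s)
        for n in neighbors(s):
            nd = d + bond_cost(s, n)
            if nd < dist.get(n, float("inf")):  # relax the bond (s, n)
                dist[n] = nd
                parent[n] = s
                heapq.heappush(heap, (nd, n))
    return dist, parent

dist, parent = dijkstra_spt(100)                # SPT rooted at (0, 0)

Note the contrast with the sketch of Prim's rule given earlier: Prim's algorithm compares single bond costs on the growth front, while Dijkstra's algorithm compares accumulated path energies. This difference is at the origin of the distinct universality classes explored in the following chapters.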
The SP problem is in the DPRM universality class except for the limit of strong disorder, when it approaches the MST problem [26,92]. The crossover to the strong disorder limit can be analysed explicitly using the generalized energy \sum_{(ij)} \epsilon_{ij}^m. In the limit m → ∞ the largest energy dominates (the sum is then controlled by its largest term) and hence the largest barrier is all that matters. It is possible to show (Chapter 4), using (2.5), that m → ∞ corresponds to the strong disorder limit. The SPT is also distinguished by the fact that there is a different SPT for each starting site in the graph, whereas there is only one MST for a finite graph with continuous disorder.

2.5 Conclusions

In summary, the geometry of minimum spanning trees (MST) is universal for all disorder distributions, because a MST is invariant under the transformation of its edge costs ε → f(ε), where f is a non-decreasing function of the edge cost ε. This universality enabled us to find a universal cost function for the uniform distribution (see Fig. 2.8) that can be used to calculate the cost of minimum spanning trees, on the same graph, for any other distribution (using (2.7)). Paths on the MST are those with minimal barrier, and in the steady state IP process this barrier approaches p_c for a uniform distribution and G^{-1}(p_c) for an arbitrary distribution (where G is given in (2.5)). The MST geometry underlies the physics in the strong disorder limit, implying that in that limit there is a strong universality.

Chapter 3

Directed Polymers in Random Media and KPZ Equation

The Kardar-Parisi-Zhang equation [69] describes the movement of an elastic interface when lateral growth is allowed. Introduced originally to model non-equilibrium interface growth, it acquired a much larger importance as one of the simplest examples of strong coupling. In this chapter we investigate the shortest path (SP) problem and its relation with the KPZ equation. Many phenomena with similar behavior can be found not only in different areas of physics but also in biology: domain growth in the 2d Ising model [62], the Eden model for bacterial growth [43], ballistic deposition [47], magnetic flux lines in superconductors [94]. We are motivated by the fact that the shortest path problem is of great interest in computer science. As a result, fast algorithms have been developed that can help us understand the nature of the KPZ front. We study the path geometry in the weak disorder limit, where overhangs play a minor role.

Figure 3.1: Invasion using Dijkstra's algorithm on a 200 x 200 square lattice. The invasion front is self-affine.

Another important feature of the KPZ equation is the interesting mapping to the equilibrium properties of the directed polymer in random media problem (DPRM) [68] and also to the noisy Burgers equation [14, 19]. Though it is easy to understand the source of the equation, its solution has proven to be a nemesis for many years. Most of the work has tried to explain properties of the equation in the strong coupling regime. The dynamical renormalization group formalism, which can be applied for studying time-dependent fluctuations, failed in finding an exact solution or a systematic approximation in dimensions d ≥ 3. For d = 1 + 1 the exact exponents are obtained, while d = 2 + 1 is the critical dimension at which the existing perturbation theory first fails.
In d = 2 + 1, the mapping to DPRM at least makes KPZ accessible through numerical simulations, though there is a large crossover regime before the asymptotic scaling occurs. In dimensions d > 3 the fixed point is again not accessible; the scaling exponents can be obtained only through numerical simulations. Another open question is the existence of a finite upper critical dimension for the KPZ equation. Arguments for [29, 30, 54, 55, 77] and against [5, 21, 22] d_c = 4 + 1 as the upper critical dimension have been given, but the debate is not over since all standard perturbative methods fail.

3.1 Models of Stochastic Growth

3.1.1 Eden Model

One of the earliest models of stochastic growth, the Eden model was originally developed for simulating the appearance of various biological patterns; in particular, describing the growth of bacterial colonies, tissue or cancer colonies [43]. Although it received little attention from biologists, it was adopted later by solid state researchers, other physicists and chemists. The model is easy to grasp: start a colony on a square lattice with a single cell (even though growth from multiple sites is possible) and add, with equal probability, a neighboring site to grow the colony. Apply this growth rule to the two-site cluster just formed. In this way it is possible to generate a compact, yet rough cluster as in Fig. 3.1.

There are three versions of the Eden model with different rules for choosing the next site, but all belong to the same class even though the crossover behavior is not similar. Version (i) is the one described above, which we call Site Eden Growth. The second version (ii) is Bond Eden Growth, where instead of choosing sites we choose bonds to grow along. Both methods above are one-step growth, since we choose the next growing site with equal probability among all possible ones. The third version (iii) is the one with the weakest finite-size effects, and its rule states that we choose the site to grow from with equal probability from the cluster's perimeter, and we grow to any of the possible neighbors with equal probability. Compared with the first two variations, (iii) is a two-step process: first we choose randomly from the cluster's perimeter a site to grow from, and second, from that perimeter site we grow with equal probability in any possible direction.

One of the biggest steps toward an analytical solution was the mapping of Eden growth to the problem of the directed polymer in random media [97]. The mapping is possible since the cluster is compact and every site of the lattice will sooner or later be part of the cluster. We introduce a new variable for each site, the waiting time τ_i of site i, which is the time interval between the moment site i first becomes available for growth and the actual event of incorporating site i. To compute the real time t_i when site i becomes part of the cluster, we observe that

t_i = t_j + \tau_i   (3.1)

where t_j is the minimum real time over all sites j which neighbor i. Using (3.1) recursively we can write t_j in the same fashion as t_i. By doing this we obtain the following expression

t_i = \min_{P_i} \Big\{ \sum_{j \in P_i} \tau_j \Big\}   (3.2)

where P_i are all possible paths from the initial growth site to site i.

The energy of a polymer interacting with the substrate through the random potential energies x_j is the sum of all the energies along its path P. At zero temperature the polymer is in the state with the minimum possible energy, thus

E = \min_{P} \Big\{ \sum_{j \in P} x_j \Big\}   (3.3)

We see that (3.2) and (3.3) have exactly the same form: the energy E of a polymer spanning between site i and the starting seed corresponds to the total time t_i. It is possible [97] to construct a dictionary between these problems. Numerical simulations for the Eden model give a roughness exponent of the interface α ≈ 0.5 for d = 1 + 1, in agreement with the DPRM predictions, while in higher dimensions strong crossover effects give scattered results.

3.1.2 Edwards-Wilkinson Equation

Edwards and Wilkinson [44] derived a linear continuum partial differential equation as an approach to kinetic roughening by sedimentation. If we start from an initially flat interface, the roughness of the growth front moving at constant velocity v increases in time as w ~ t^β, while it saturates at a size-dependent value w_sat ~ L^α. The equation and its noise correlator are

\frac{\partial h(\vec{x},t)}{\partial t} = \nu \nabla^2 h + \eta(\vec{x},t)   (3.4)

\langle \eta(\vec{x},t) \, \eta(\vec{x}',t') \rangle = 2D \, \delta^d(\vec{x} - \vec{x}') \, \delta(t - t')   (3.5)

where h(\vec{x},t) is the interface height at coordinate \vec{x} in d-1 dimensional space and time t, η is the stochastic noise, and ν is the surface tension, since the term ν∇²h tends to smooth the interface by redistributing the interface irregularities while maintaining the average height unchanged. The average interface velocity is zero, since the periodic boundary conditions assure that the Laplacian contribution vanishes while the stochastic noise is white:

\frac{\partial \langle h \rangle}{\partial t} = \frac{1}{L} \int_0^L \langle \nu \nabla^2 h + \eta(\vec{x},t) \rangle \, d\vec{x} = 0   (3.6)

For a moving interface we have to add the velocity to the original equation (3.4),

\frac{\partial h(\vec{x},t)}{\partial t} = \nu \nabla^2 h + \eta(\vec{x},t) + v   (3.7)

The motion of the interface does not affect the scaling relations, since we can recover the original EW equation by performing the change of variables

h \rightarrow h + vt   (3.8)

The EW equation (3.4) can be solved in two ways: using scaling arguments or directly. By using scaling, the interface should be invariant under the following scaling relations
The energy E of a polymer span- ning between i and the starting seed is compared to the total time t,-. It is possible [97] to construct a dictionary between these problems. Numerical simulations for Eden model proved that the roughness exponent of the interface a z 0.5 for d = 1+1 in agreement with the DPRM predictions, while in higher dimensions strong crossover effects give scattered results. 3.1.2 Edwards-Wilkinson Equation Edwards-Wilkinson [44] derived a linear continuum partial differential equation as an ap- proach for sedimentary kinetic roughening. If we start from an initial flat interface, the roughness of the growth front moving at constant velocity v increases in time as to ~ t5, while it saturates at a size-dependent rem ~ L". __> w = W21 + +725}. 1) (34) (4(4. the“. t’)> = 206%: — ar’wt — t’) (3.5) 28 where h(7, t) is the interface height at coordinate E} in d — 1. dimensional space and time t; ’7 is the stochastic noise; and u is the surface tension since the term z/Vzh tends to smooth the interface by redistributing the interface irregularities while maintaining the av- erage height unchanged. The average interface velocity is zero since the periodic boundary conditions assure that the Laplacian contribution is zero while the stochastic noise is white: L ‘ —} 0 0t For a moving interface we have to add velocity to the original equation (3.4), (”1(7), 1’) at = r/V2h + +n(7, t) + v (3.7) The motion of the interface does not affect the scaling relations since we can obtain the original EW equation by performing the following change of variables h —> h + "at (3.8) The EW equation (3.4) can be solved in two ways: using scaling arguments or directly. By using scaling, the interface should be invariant under the following scaling relations 29 (keep in mind that the interface is self-affine): Substituting (3.9) into (3.4) we obtain: a} . . . ._£ : Vba—2v2h + b—d/Z—z/Zn bo—z ()t We have used the general property of the delta function, 6d(a4r) = —1—dd(;r) ad d/‘Z—z/Q. and 7} —> b" 7] Since (7](b1'.bzi)t](b.t",bzt’)) = sob-s’(.r — with — t’) (3.9) (3.10) (3.11) (3.12) The EW equation must be invariant under these transformations. To find the relations between exponents we multiply (3.10) with 123‘“, (9 h. at : Vbz-2V2h + b-d/2Tz/2-at] 30 (3.13) and require that the coefficients are independent of b which implies: 2—(1 , 2—d . a: 2 , l3: 4 , 2:22 (3.14) For (i > 2 + 1 the exponents become negative, meaning that the interface is flat. Also for d = 1 + 1, a = 1/ 2 and i3 z 1/ 4 are close to those found for random deposition with surface relaxation [46]. 3.1.3 Solid-On-Solid Models The Solid-On-Solid models have been introduced in order to minimize scaling corrections [14]. These models have two basic properties: (i) they do not allow overhangs (the interface is single valued) and (ii) eliminate large slopes by limiting the height difference between neighboring sites. The single-step SOS model is valuable since by mapping it to an Ising model we are able to extract some analytical parameters. The model can be described by adding squares to all local minima with equal probability 1).]. on a (1, 1) oriented square lattice on an origi- nal "flat" interface (the "flat" interface in (1, 1) oriented lattice looks like a zig-zag, in fact it is composed of identical squares aligned along their diagonals), or removing squares from the local maxima with equal probability p_. 
This ensures that at any time the height dif- ference between two neighboring sites is always unity, even though we change the growth site height by 2, h, —> h, i 2. Introduced by Kim and Kosterlitz [71], the restricted solid-on-solid model allowed for growth exponents investigation in higher dimensions. The algorithm picks randomly a site 31 from the interface and increases its height by one unit, with the condition that the height difference between neighbors is not bigger than one (it can be 0). The model shows very good scaling properties. Using a fitting ansatz [5] for the correlation function, it is possible to precisely extract the scaling exponents. 3.2 Non-linear Stochastic Growth: KPZ Equation The Kardar-Parisi-Zhang equation [69] is a non-linear model of interface growth in random media: I, Tflt A __._3 “8: l = N2}. + 3m)? + mm) (3.15) The first term on the right-hand side describes the relaxation of the interface caused by an interface tension V, the second term reflects the presence of lateral growth and 77 is the noise assumed for simplicity to be white: (77(1', t)77(:r:', t')) = 2D5d(:1: — :r')6(t — t') (3.16) The important feature of this model is the addition of the non-linear term (Vh)2 re- sponsible for the symmetry-breaking; i.e. (3.15) is not symmetric under h -> —h. transfor- mation. The symmetry breaking implies that the positive h site of the interface represents growth while the negative h is appropriate for erosion processes. Eden growth, Solid-On- Solid (SOS) model, paper wetting, paper burning, driven charged density waves (CDW) are some models proved to be part of the KPZ class. The non-linear term from the KPZ equation accounts for the lateral growth [56] of the 32 h(x) A Figure 3.2: Lateral growth from KPZ equation front such that it is important to be included it in any surface growth model. The KPZ equation looks like a Langevin equation if one finds the appropriate Hamiltonian, alum) 6% ,—.> If we add a new particle in a direction normal to the interface wit, the increase (iii in the interface height along the h axis (Fig. 3.2), can be easily expressed as, 612. :t'dh/1+(\7h)2 (3.18) Since in (3.17) the (SH/6h term is already in the normal direction to the interface, we have to correct the noise 77 and the left-hand side of the equation since its direction is along h 33 axis. The corrected equation should read: h(7,t) : _6__H+ 77(J'at) (3,19) ,/1+(Vh)2 M 1+ (Vh)2 The correct Hamiltonian for the KPZ equation is, HUI )=/dx\/1 (V12)2 —-/dxh(x (3.20) where the first term accounts for surface tension, while the second one is the bulk contri- bution, i.e. the area below the interface. In order to verify that H( h) recovers the original KPZ equation (3. 15) we substitute (3.20) into (3.19) obtaining 814?, t) V2}; T: 1———+(W)2 +A./1+ (Vh) +n(?t) (3.21) In the limit IVhI << 1 one can expand (3.21) and, by keeping the leading order terms we recover the KPZ equation (3.15) implying that the (Vh)2 has to be present in order to incorporate the lateral growth. 3.2.1 Scaling Functions In order for us to "tackle" the problem correctly we have to start by choosing the correct tools. In order to analyze universality classes and to determine critical exponents, we are using the well known formalism of scaling functions [47,48]. 
The critical exponents ob- tained in this fashion help us to classify growth/invasion (which at first sight might look 34 completely different) into universality classes, proving that the intricate details of the mo- tion can be cleverly conceded into a few relevant parameters. The prOperties of the KPZ equation are usually described through the first two moments of the associated probability function of 12(7, t), h(L,t) s (h(L,t)) = 0 (3.22) which is the average height of the front, and is zero since the equation is written in the center of mass, and tr:2(L,t) = ((h(L, t) — (h(L,t)))2) (3.23) which has a nontrivial scaling as a function of system size and time: At 'w(L, t) ~ L" f(%;) (3.24) where the scaling function f (.r) is given by, 2’3 if (Ii—>0 f(:rf) = (3.25) constant if :1? —> 00 Following this hypothesis, there should be a cross-over time t1. : L2, such that the rough- ness of the front for times less than the cross-over time t1. is increasing as a power-law function of time, while in the long time regime the roughness is time independent and only scales with the size of the lattice. 35 3.2.2 Relations Between Exponents We are left with a major problem: once all critical exponents are extracted from simula- tions, how can we ensure that the values obtained are the ones we intended to measure? To solve this problem one can look for relations between exponents, a practice widely encoun- tered in RG. The first relation can be easily extracted using the definition for the scaling function from (3.24) and (3.25), i.e. t .13 — ~19 3.26 L (1.) < > leadingto a 13 ( ) Using symmetry invariance, we could attempt to find other relations between expo- nents. Unfortunately, since KPZ equation is non-linear this method (which worked well for Edwards-Wilkinson equation (3.4)) does not work, as we will see in the next section, instead the renormalization group approach should be used. 36 3.2.3 Symmetry Breaking and Scaling Arguments We can try to find other relations, between exponents, in a similar fashion as we did for the Edwards-Wilkinson equation. With the help of the following scaling relations, h —> h’Ebah (3.28) KPZ equation (3.15) is transformed into or . . /\ . , . Ira-27: = uba‘ZVzh + 5Macaw.)2 + b-d/Z-z/Zn (3.29) The first observation regards the b ——> oo limit when the non-linear term dominates only if a > 0. By multiplying both sites with (flu—3) we obtain, 8h ~—2 2 A + —2 2 —d/2+~/2- Dt— : ub“ V h + §ba z (Vh) + b “ an (3.30) If we would consider that this equation behaves trivially, in order for the scaling to work the equation (3.30) should be independent of b. A close inspections reveals that we have been provided with three equations for only two exponents. A first approximation would be to consider that the non-linear term dominates over the surface tension. The exponent 37 equations then lead to the following formulas: 2—d _ 2—d d 4 a=—-—, l3=——-. Z=—.+- (331) For d :2 1 + 1 the exponents are a = 1/3 and i3 = 1 / 5 which can not be right since simulations [5] give a = 049(1) and ,B 2 033(1). The question arising is what could be wrong? The parameters of the KPZ equation {14 A, D} are coupled to each other which is the answer to the previous question. To derive the relation between exponents and dimen- sionality, one would have to use a renormalization group approach, since it is the only one to carefully treat the strong coupling regime. 
3.2.4 Galilean Invariance We were unable to use invariance to extract exponent relations, since we are interested in the strong coupling regime. However, we have one more resource left, the mapping of KPZ to Burgers equation. It has been rigorously proved [14,56] that the KPZ equation maps exactly to a Burgers equation for flow with random noise in a vorticity free field [19], and thus both the KPZ equation and Burgers equation, should have related critical exponents and relations between them. We see that KPZ maps onto Burgers equation with noise for a vorticity-free field by changing v —> —Vh: 8o 67 + /\(v . V)v : uvzv — V'7](;l?, t) (332) where v(:r., t) is the velocity of the fluid and V7/(r, t) is the random force. The left side of 38 the equation originates from the total derivative 'Dt’ (9t? (3.33) As we can see /\ : 1, but we will include it only for convenience. We expect that the total derivative after re-scaling remains unchanged. The re—scaling relations are as follows: It —-> bah v —+ b“"1v In the equations above the scaling relations lead to Do at“ , ba—l-z b2o—3(U E or 'V )v This invariance leads to a second relation between exponents (3.34) (3.35) (3.36) It has been also confirmed, using renormalization group [14], that the scaling relation given by (3.36) is valid in any dimension. 39 3.2.5 Exact Result for KPZ equation in 1 + 1D A more detailed analyses of KPZ solutions can be obtained using renormalization group theory (RG). Proposed by Wilson [114], RG is a systematic way of obtaining the critical exponents. RG can be tooled to work in either real or Fourier space. In real space, RG is more intuitive, while the Fourier space is more mathematically accessible. For partial differential equations, such as the non-equilibrium KPZ, the standard meth- ods were developed decades ago. The idea is to write the KPZ equation in Fourier space, d" d9 l,(k w)=hk0( 43/!“ ——q——2d+lq(k—q)h(q,§l)h(k—q,w—Q) (3.37) where ho(k,w) = G0(k,w)n(k,w) (3.38) the free propagator, 1 C;0(k,UJ) = —-,———.— (3.39) r/kz — 2a) and the noise correlation function, < 71(k,w)7)(k',w') >= 206(1(k + k’)d(w + w') (3.40) In order to simplify the calculations it is more convenient to use diagrams for representing (3.37), as indicated in Fig. 3.3. 40 a) 6) Mafia *4? ~44. 4:: Figure 3.3: Diagrams associated with one-loop expansion for the KPZ equation RG anal- ysis. The propagator (a); noise D (b); and vertex (c). The double arrowed line represents the propagator G, while the single arrowed line is Go. The rules for the diagrams in Fig. 3.3 are as follow: dd 2 ——/., mk-q) W) -7T In order to calculate physical observables, we have to perform averages over the stochastic noise for every diagram. If there is only one noise term in the diagram its value is zero since we have considered the noise to be white. The most important step in the 41 calculations is the "elimination of the fast modes" by integration over momentum A7, in the range Ae" g l: S A. By doing this, we eliminate the small wavelength components while the long wavelength part is left intact. The complete derivation of the flow equations [14] is simple in principle, but in fact is a very tedious mathematics. After performing the integration in the limit 1 << 1 the flow equations for the parameter set {1/, A, D} are: dr/ ‘ , .22 — d W — I/ [(~ '— 2) + [\dg 4d :l (341) ll) 2 5—-= D(:—®—20+Kfl— GAD (ll 4 [A L.: Am+z—m 04» (ll where the coupling constant is defined to be, . A2D g’ = V3 (3.44) and It'd = 5.1/(27r)d, while 5'.) 
is the surface area of the unit sphere in d-dimensional space. We note that the diagrams contributing to the vertex renormalization cancel in this case, and in fact they will cancel in higher order expansion too, because the Galilean invariance enforces the exponent relation a + z : 2. The flow equation for the effective coupling constant can thus be written with the help of (3.44) and (3.4l)-(3.43) as, 2d_3, (lg 2 — d — ‘ 3.45 4d a ( ) 27— 2 £1 + K01 The RG procedure requires the flow equation to be independent of scale such that the 42 right side of (3.4l)-(3.43) should equal zero. (1V (1 1) (1A _ _(17 _—_ W = W _ (3.46) By doing this the exponent relations read, . 2 — d z — 2 I ' 2 = . + my 4d 0 (3 47) g2 z_d—20+Kd—2f:0 (3.48) a+z—2=0 (3.49) As we can see the flow equations have different solutions for different dimension. In d = 1 the flow equation for the effective coupling constants has two solutions, 2 1/2 91 = 0 92 = (-—) (3.50) [\1 The first solution 91 is repulsive meaning that starting with a solution different than 91 it flows away from the fixed point while the second solution 92 is an attractive fixed point the flow is toward 9; if g ¢ 9;. From equations (3.47)-(3.49) we can extract the scaling exponents, N [\D‘IOO (3.51) [\UlH We have extracted the exact scaling exponents in d = 1, as a consequence of the Galilean invariance and validity of the fluctuation-dissipation theorem. If we investi- gate further RG for d = 2, the effective coupling constant grows under rescaling, being 43 marginally stable. The only fixed point is g = 0, and the perturbative RG fails to give physical predictions. If d > 2 we have two fixed points but the only stable one is 91 = 0. The other fixed point 9;, sets the boundary between weak and strong coupling regimes: if we start with g < 92 the flow takes us to gl 2 0 while if g > 92 the flow diverges. The strong coupling regime can not be accessed through current perturbative methods. 3.3 Directed Polymers in Random Media (DPRM) The problem of finding the optimal path is frequently encountered in the physics of dis- ordered media, and is known as the directed polymer in random media (DPRM). If we consider a random graph (as defined in section A.1) the optimal (shortest) path between two sites is the path P on which the sum of bond costs is minimal, Ep = 13111 2 q,- (3.52) In the literature there are two different paths defined: directed paths (DPRM) which can not turn backwards (given a direction of growth), and non-directed paths (N-DPRM) which have no preferential direction a priori. A recent paper [100] makes the observation that the directed polymers and the non-directed polymers have the same critical exponents, proving the fact that both are in the same universality class. The overhangs present in N-DPRM, therefore, play an unimportant role in the geometry of the polymer, provided the disorder is weak enough. Some well known members of the DPRM class are domain wall roughening in 44 impurity-sticken magnets [56], vortex-line wandering in disordered superconductors [75, 93], tearing or cracks [70], etc.. 3.3.1 DPRM to KPZ Equation Mapping In the continuum limit DPRM is described by, H. = /d-.14 [5,-(Vb)? + V(:1:,h) (3.53) The random pinning potential V(.r, h) is chosen to be white noise. The partition function of the DPRM can thus be written as, (3.54) (2'.h) 1 1‘ Zr(h) = / Dh'(;r')exp {——/ d:r' [:(Vh')2 + V(:L", h')]} 0.0) which can be viewed as the world line of a particle moving in a d —- 1 dimensional space. 
It is easy to show that the partition function defined above obeys a Schrodinger-like equation, 8 1T 82 1 —ZI h = —-———Z_1r h —V . ,h Zr] 3.55 8:0 () 2V8/12 ()+T (I: ) (I) ( ) by simply substituting Z,.(h) in the equation above or observing that if we map, 1 _> _ 1 h, T u —> m (3.56) we get exactly the Schrodinger equation and Z1. ( h) becomes the transition amplitude of a quantum system. The free energy of DPRM f (1:, h) = —Tan,,.( h) obeys the KPZ non- linear diffusion equation 8 . A . .1 = DVZf + .-(Vf)2+11(h.a:) (357) ()1: 2 were D = T / 21/ can be seen as the diffusion constant, A = —T / 1/ describes the strength of coupling to nonlinearity and 1) = V/ T is the re-scaled interaction potential. This mapping [68] f ——> h demonstrates that the height fluctuations during KPZ growth are identical to free energy fluctuations in the DPRM, and also makes it possible to relate an equilibrium configuration (the polymer) to a non-equilibrium invasion process (the front). 3.3.2 DPRM Scaling Functions Because the free energy of a polymer interacting with random media through a random potential at zero temperature was mapped to the KPZ equation, the scaling functions and scaling exponents should be related. In order to distinguish between problems the expo- nents were chosen to have different notations. Following Kim et al. [72] the energy of a polymer of length (number of sites along the polymer’s path; sometimes referred as the number of steps) t confined in a lattice of size Ld, is expected to scale as, 46 4mm = «E - >2>W ~ L‘fU/Lz) 19 , t<< L3 ~ (3.58) LX , t>> LZ The scaling function has the same behavior as the one previously discussed for KPZ (3.24), 2:6 if :r -—> 0 f (.14) = (3.59) constant if .r —> 00 Here LZ is the cross-over length, i.e. once the polymer exceeds it, its energy fluctuations become constant, only affected by the size of the system. The exponents are expected to be dimension dependent. 3.3.3 Relations between exponents The relations between DPRM exponents are obtained from the scaling function (3.59) and also from the DPRM to Burgers equation mapping. From the scaling functions in the limit t << L2 we obtain, t 1% 47 ) ~ 19 (3.60) leading to, 93 = x (3.61) The second relation is obtained from the Galilean invariance and reads, (1+ 3 z 2 (3.62) The relations between the DPRM exponents defined in (3.59) and the KPZ exponents from (3.25) can be found via the well known mapping described in Section 3.3.1: (3.63) N I” N In the next Section, with the help of (3.63), we are able to compare exponents obtained with different numerical methods for both KPZ and DPRM. 3.4 Numerical Simulations The scaling exponents for the KPZ equation are exactly known in d = 1 + 1 from Burgers equation, a = 1 / 2, 13 = 1 /3 and z = 3 / 2. These values have been confirmed by numerical calculations such as direct integration of discretized KPZ equation [23], and simulations of ballistic growth models [50]. Even though extensive work has been done there is no analytical solution for KPZ equation in dimensions d 2 2. It was believed that the upper critical dimension d, = 4 + 1 48 [29, 30, 54, 55, 77] but numerical simulations up to d = 7 + 1 [5] and some analytical calculations (non-perturbative renormalization) [21,22] showed no sign of an upper critical dimension. With no strong theoretical arguments, the problem of finding (IC is still under strong debate, and no solution was found, leaving this problem very attractive for future studies. 
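Before quoting the literature values, it is worth noting that the basic DPRM scaling is easy to reproduce with a zero-temperature transfer matrix. The sketch below is only an illustration: the lattice width, the polymer lengths, the step set {-1, 0, +1} and the sample counts are arbitrary choices of mine, and the site energies are taken uniform on [0, 1] rather than from any particular disorder studied later. It propagates the optimal energy E(t, x) forward in t and prints the disorder-averaged energy fluctuation, which should grow roughly as t^θ with θ ≈ 1/3 in 1+1 dimensions.

import numpy as np

def dprm_energy_fluctuations(t_max=400, width=2001, samples=200, seed=2):
    """Zero-temperature transfer matrix for a directed polymer in 1+1 dimensions.

    Prints the standard deviation over disorder realisations of the optimal
    energy E(t) of a polymer of length t started from a point.
    """
    rng = np.random.default_rng(seed)
    times = [25, 50, 100, 200, 400]
    E_samples = {t: [] for t in times}
    for _ in range(samples):
        E = np.full(width, np.inf)
        E[width // 2] = 0.0                       # polymer starts from a single point
        for t in range(1, t_max + 1):
            stay, left, right = E, np.roll(E, 1), np.roll(E, -1)
            E = np.minimum(stay, np.minimum(left, right)) + rng.random(width)
            if t in E_samples:
                E_samples[t].append(E.min())      # optimal free-end energy at length t
    for t in times:
        print(t, np.std(E_samples[t]))            # expect growth roughly as t**(1/3)

dprm_energy_fluctuations()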
Numerical simulations [72] give the following values for the DPRM exponents:

Dimension      θ                1/z
d = 1+1        0.332 ± 0.003    0.664 ± 0.003
d = 2+1        0.248 ± 0.004    0.662 ± 0.01
d = 3+1        0.20 ± 0.01      0.59 ± 0.01

Table 3.1: DPRM critical exponents according to [72].

This was the first serious attempt to extract the exponents. The conjecture (3.64) derived by Kim et al. [72] is verified by their numerical simulations,

\theta = \frac{1}{d+1}, \qquad \chi = \frac{2}{d+2}, \qquad z = \frac{2(d+1)}{d+2}    (3.64)

For comparison, Table 3.2 collects the values of the KPZ critical exponents reported by Ala-Nissila et al. [5]. Their numerical simulations were done in higher dimensions, the highest considered being d = 7+1, using a new fitting ansatz for the equal-height correlation function of restricted solid-on-solid growth. Using the "dictionary" given in Section 3.3.3, we can see that the mapping between KPZ and DPRM is in complete agreement for dimensions up to d = 2+1, while for d > 3 the measured values fall below the predictions of (3.64). One possible explanation is that the DPRM results of [72] are affected by finite-size effects (the lattice size is of order 10^8, but the number of runs is only of order 10^2).

Dimension      α       β       z
d = 1+1        1/2     1/3     3/2
d = 2+1        0.38    0.24    1.58
d = 3+1        0.30    0.18    1.66

Table 3.2: KPZ critical exponents according to [5].

3.4.1 Invasion Algorithm with Overhangs

In graph-theory language the DPRM is the shortest path in a network in which each bond carries a cost representing the strength of the pinning potential. An important numerical method for calculating shortest paths on a graph with positive bond costs(1) is Dijkstra's algorithm. The algorithm calculates the shortest paths between a source and all the sites in the network by a global optimization procedure that always grows to the site with the smallest tentative distance. To do this, we divide the sites into two categories: (1) labeled sites, for which the shortest distance is already known, and (2) unlabeled sites, the remainder of the graph vertices. The algorithm repeats the following steps until there is no unlabeled vertex. A more detailed review of the algorithm is presented in Appendix B.

1. Initially all bonds and vertices are unlabeled. We denote by s the source (we will measure distances from this source vertex to all vertices of the graph). Assign tentative distances such that d(s) = 0 and d(x ≠ s) = ∞. Let y = s.

2. For each unlabeled vertex x we redefine

   d(x) = min{d(x), d(y) + c(x, y)}.

   Continue if there are any unlabeled vertices, otherwise stop.

3. Let y be the unlabeled vertex with the smallest tentative distance. Label y and go to step 2.

(1) We need only positive costs, since the distance should be a monotonically increasing function. For negative costs there are other algorithms, such as negative-cycle canceling.

When the viscosity of the displacing fluid is greater than that of the invaded fluid, the displacement front in a random medium is known to be smooth [99]; otherwise it develops fingers. If a long finger tries to develop from the interface, it is slowed down by the difficulty of moving through the viscous fluid. In the same way, in our algorithm, when a very favorable spatial region is encountered a finger tries to develop, but it is slowed down by the global nature of the distance computation, which acts here as a "viscosity". This suggests that Dijkstra's algorithm and this special case of invasion should give the same result.
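To make the procedure above concrete, the sketch below is a minimal Python implementation of Dijkstra's algorithm as just described. It is illustrative only: the heap-based bookkeeping is equivalent to the label/unlabel scheme of steps 1-3 (it merely locates the smallest tentative distance with a priority queue instead of a linear scan over all unlabeled vertices), and the small dictionary graph at the end is an invented example, not one of the lattices used in the next section, where the adjacency would be built from the lattice bonds together with zero-cost bonds to source and sink supernodes.

import heapq

def dijkstra(adjacency, source):
    """Shortest-path distances from `source` on a graph with positive bond costs.

    `adjacency` maps each vertex to a list of (neighbour, cost) pairs.
    """
    dist = {v: float("inf") for v in adjacency}
    dist[source] = 0.0
    labeled = set()
    heap = [(0.0, source)]
    while heap:
        d_y, y = heapq.heappop(heap)          # step 3: unlabeled vertex with the smallest distance
        if y in labeled:
            continue
        labeled.add(y)                        # label y
        for x, cost in adjacency[y]:          # step 2: relax only the neighbours of y
            if x not in labeled and d_y + cost < dist[x]:
                dist[x] = d_y + cost
                heapq.heappush(heap, (dist[x], x))
    return dist

# Tiny usage example on an invented 4-vertex graph.
graph = {
    "s": [("a", 0.2), ("b", 0.7)],
    "a": [("s", 0.2), ("b", 0.1), ("t", 0.9)],
    "b": [("s", 0.7), ("a", 0.1), ("t", 0.3)],
    "t": [("a", 0.9), ("b", 0.3)],
}
print(dijkstra(graph, "s"))    # {'s': 0.0, 'a': 0.2, 'b': 0.3, 't': 0.6}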
In order to find the universality class of the front developed by Dijkstra's algorithm, we have performed simulations in both 2d and 3d, on square and cubic lattices. In order to obtain a linear interface, two opposing sides of the lattice are virtually connected through zero-cost bonds to a source and a sink vertex, while on the remaining sides we impose periodic boundary conditions (Fig. 3.4 shows snapshots of a linear front invasion on a 2d square lattice with periodic boundary conditions). The bond costs are real numbers drawn from a uniform distribution on [0, 1].

Figure 3.4: Linear front invasion using Dijkstra's algorithm. The front, shown at several snapshots, is self-affine. It can be compared with the non-directed polymer (solid line), which is known to have only few overhangs and belongs to the KPZ/DPRM class for weak disorder.

The linear front in Fig. 3.4, and likewise the shortest path (solid line), have a rough appearance but are not fractal. The lattice has periodic boundary conditions, so the front maintains its linear aspect during the evolution. In the case of free boundaries the front "sticks", due to the smaller number of degrees of freedom at the lattice edges. For the simulations we have used lattices periodic in all directions except along the propagation axis. We know that the shortest path belongs to the KPZ/DPRM class; its exponent values are presented in Table 3.2, and we do not extract its critical exponents again here. For the linear front obtained in the invasion process, the roughness is similar to that of the polymer and is shown in Figs. 3.5 and 3.6.

As we can see from Figs. 3.5 and 3.6, the scaling exponents can be extracted using the previously defined scaling functions (3.24) and (3.25).

Figure 3.5: Plot of the Dijkstra interface roughness for 2d square lattices. The front roughness increases as a power law with exponent β = 0.33(3) at early times, saturating after the crossover time t_× at a value that scales with the lattice size as L^α with α = 0.49(1). The averages have been done over 10000 realizations on lattice sizes from 10 × 10 to 100 × 100. The long crossover is believed to be caused by overhangs.

In order to evaluate the exponents precisely, we have done simulations on lattices with a maximum of 10^9 sites, with averages taken over 10^5 realizations. As we can see from Table 3.3, the Dijkstra front is in the KPZ/DPRM universality class. The known KPZ front exponents in 2d are α_2d = 1/2 and β_2d = 1/3.

Dimension     α          β          z
d = 2         0.49(1)    0.33(3)    1.47(4)
d = 3         0.35(3)    0.22(3)    1.68(2)

Table 3.3: The critical exponents extracted with Dijkstra's algorithm.

Figure 3.6: Plot of the Dijkstra interface roughness for 3d cubic lattices. Similar to Fig. 3.5, the roughness initially behaves as a power law with exponent β = 0.24(1), while after the crossover the saturation value is L^α with α = 0.35(3). We have done over 10000 averages at each lattice size, on lattices ranging from 10 × 10 × 10 to 100 × 100 × 100.

The Dijkstra invasion-front exponents extracted by us in 2d, α_2d = 0.49(1) and β_2d = 0.33(3), are close enough to the KPZ exponents that we can safely say that Dijkstra's front is in the KPZ class.
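The exponent extraction behind Table 3.3 can be illustrated with a short post-processing sketch. The function below is not the analysis actually used for the table (which relied on the fitting ansatz of Ala-Nissila et al. [5]); it simply performs the naive log-log fits suggested by (3.24) and (3.25), with β taken from the early-time growth of the largest system and α from the saturated width versus L. The synthetic Family-Vicsek data at the end, the fit windows and the function names are my own illustrative assumptions; in practice the input would be the width curves recorded during the Dijkstra invasion.

import numpy as np

def estimate_alpha_beta(width_curves):
    """Extract alpha and beta from interface-width curves w(L, t).

    `width_curves` maps a lattice size L to an array of widths sampled at
    t = 1, 2, 3, ...  beta is the early-time log-log slope of the largest
    system; alpha is the slope of the saturated width versus L.
    """
    L_max = max(width_curves)
    w = width_curves[L_max]
    t = np.arange(1, len(w) + 1)
    early = t < L_max ** 1.5 / 50                      # crude "well before crossover" window
    beta = np.polyfit(np.log(t[early]), np.log(w[early]), 1)[0]
    sizes = np.array(sorted(width_curves))
    w_sat = np.array([width_curves[L][-100:].mean() for L in sizes])
    alpha = np.polyfit(np.log(sizes), np.log(w_sat), 1)[0]
    return alpha, beta

# Synthetic Family-Vicsek data, w(L, t) = L^alpha f(t / L^z), just to exercise the fit.
rng = np.random.default_rng(3)
def toy_width(L, t, alpha=0.5, z=1.5):
    x = t / L ** z
    return L ** alpha * np.where(x < 1.0, x ** (alpha / z), 1.0) * (1 + 0.02 * rng.standard_normal(len(t)))

curves = {L: toy_width(L, np.arange(1, 40 * L)) for L in (16, 32, 64)}
print(estimate_alpha_beta(curves))    # should come out close to (0.5, 0.33)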
Regarding the front in 3d, there is no unanimous agreement between simulation extracted exponents, however Ala-Nissila et al. [5] have obtained c134 = 038(1) and 133" 2 024(1), close to the values obtained by us a“ 2 035(3) and 63" 2 022(3). We note that the Galilean invariance is quite closely satisfied, (13" + 23“ 2 193(5) and a“ + 22" 2 196(5) in both dimensions of interest. In order to obtain smaller errors for exponents, much larger system sizes are needed. In our simulations the system size has been limited by the size of the current memory. The time variable in the plots shown in Figs. 3.5 and 3.6 has unity equal with 1 T = (d — 1)L algorithm cycles, were L is the size of the lattice, meaning that in order to measure roughness correctly we have wait for the interface to 54 -- 3d __ 1d .- -d 1.5 l l 0.5 Time Figure 3.7: Plot of the roughness of the interface in 2d and 3d. The natural time unit, T, is the time for a complete layer to be added. In 2d (L = 1000) the time unit in figure is T / 20. In 3d (L = 100) the time unit in figure is T/4. move by (d — 1)L sites. This time unit allows for the front center of mass to move linearly in time. The effect of changing the time unit becomes clear in Fig. 3.7. We can see oscillations in the front roughness in both 2d and 3d, which occur only if the time unit is less than (d — 1)L. This effect is equivalent to the one observed in layer- by-layer crystal etching/growth [91] where the experiment showed that in a certain range of temperature and etching/growth rate, the surface can heal itself as the etching/growth is canied out. If one measures the "true" (complete) interface, the roughness function should be monotonically either increasing or decreasing. The natural explanation for oscillations occurrence is that we did not measure the height fluctuations for a "true" interface, instead we took snapshots at intermediate growth steps, between the complete movement of the interface. 55 3.5 Conclusions In summary our results show that the shortest path problem is in the DPRM class in the case of uniformly distributed energies. The calculated exponents for non-directed polymer prob— lem are very close to the DPRM exponents suggesting that overhangs play an unimportant role. The simulations have been done using a greedy algorithm well known in computer science as Dijkstra’s algorithm. Since shortest paths (a global optimization problem) are constructed using local dynamics it is natural to study the nature of the growth process. The extracted exponents for the interface generated by the greedy algorithm are a?" : 0.49( 1) and ,8?" = 0.233(3), which are close to the KPZ exponents. The interface roughness shows early time oscillations characteristic of layer-by—layer growth. 56 Chapter 4 Generalized Invasion To investigate the strong disorder limit of the DPRM/KPZ problems, we introduce a gen- eralized invasion process which enables us to interpolate between the minimum spanning tree (MST) and shortest path tree (SPT). These two universality classes are very similar from the point of view of the invasion process and at the same time have completely differ- ent properties. KPZ invasion is a model for interface growth/movement in random media taking into account lateral growth, while IP invasion is a self-organized greedy process with applications in oil recovery, generating a fractal invasion front. 
Both algorithms used for modeling are able to generate paths and corresponding trees relevant for polymer equilibrium structure (shortest paths) and for hopping-transport paths (paths on the minimum spanning tree). The trees they generate differ in an essential way: the MST is a super-universal tree, in the sense that its geometry is independent of the randomness, and it is non-degenerate, since we obtain the same tree regardless of the starting point, while the SPT has a different structure when the starting seed is changed. Precisely because the two classes are so different, it would be very interesting to find a mechanism that continuously interpolates between the MST and the SPT.

It has been shown [27] that in the strong disorder limit the shortest path is dramatically different from the weak disorder limit. For weak disorder the shortest path has fractal dimension d_f = 1, i.e. the path is not fractal at all. In the case of strong disorder (i.e. distributions with long tails) the bond energies are so widely spread that the cost of the shortest path is dominated by the largest cost. Interestingly enough, the fractal dimension of the shortest path is then identical to the fractal dimension of the paths on the minimum spanning tree, with d_f^{2d} = 1.22 ± 0.01 for square lattices and d_f^{3d} = 1.42 ± 0.02 for cubic lattices. We will show that our generalized algorithm interpolates between the strong and weak disorder limits by tuning an additional parameter defined through a generalized energy function.

4.1 Analytical Derivations

4.1.1 Strong Disorder Distribution

Using the generalized energy function,

d_{st} = \min \Big\{ \sum_{i,j \in \mathrm{path}} c_{ij}^{m} \Big\}    (4.1)

additional information about invasion processes can be gained. It is trivial to see that for m = 1 the generalized energy function gives exactly the DPRM problem. In the limit m → ∞, we argue that the functional corresponds to the invasion percolation (IP) problem and is relevant for "strong disorder" physics. Equation (4.1) with a probability distribution P(c_{ij}) which is uniform on the interval [0, 1] is equivalent to

d_{st} = \min \Big\{ \sum_{i,j \in \mathrm{path}} y_{ij} \Big\}    (4.2)

with the new variables y_{ij} = c_{ij}^{m} randomly distributed according to

G(y_{ij}) = \frac{1}{m} \, y_{ij}^{-1 + 1/m}    (4.3)

To prove this, note that the new probability distribution has to satisfy

G(y_{ij}) \, dy_{ij} = P(c_{ij}) \, dc_{ij}    (4.4)

Since y and c are related (we drop the subscripts for convenience), we know that

dc = \frac{1}{m} \, y^{-1 + 1/m} \, dy    (4.5)

By combining (4.5) with (4.4) we obtain (4.3). In the limit m → ∞ the new probability distribution G becomes

G(y) \sim \frac{1}{y}, \qquad m \to \infty    (4.6)

which has a long tail. The mean and standard deviation of G(y) are (note that the integration limits for G(y) are the edges of [0, 1]),

\langle y \rangle = \frac{1}{1+m}    (4.7)

\langle y^{2} \rangle = \frac{1}{1+2m}    (4.8)

\sigma_{y} = \big( \langle y^{2} \rangle - \langle y \rangle^{2} \big)^{1/2} = \frac{m}{(1+m)\sqrt{1+2m}}    (4.9)

thus we expect −1/2 < m < ∞ for the second moment of the distribution to exist. In the case −1/2 < m < 0, Marconi and Zhang [81] found that the corresponding DPRM problem has a wandering exponent ν which varies continuously with m.

A strong-disorder distribution changes the nature of the invasion algorithm by simplifying the energy functional. The bond costs are spread in such a manner that the cost of the path is dominated by the largest bond cost,

d_{st}^{1/m} = c_{\max} \Big[ 1 + \sum_{c_{ij} \neq c_{\max}} \Big( \frac{c_{ij}}{c_{\max}} \Big)^{m} \Big]^{1/m}    (4.10)

It follows that the energy functional

d_{st} = \min \Big\{ \sum c_{ij}^{m} \Big\} \;\to\; \min \, c_{\max}^{m}    (4.11)

describes a path along which the maximum barrier is minimal.
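The statements above are easy to check numerically. The short sketch below draws uniform costs c on [0, 1], forms y = c^m, compares the sample mean of y with 1/(1+m) from (4.7), and measures what fraction of the path energy Σ c^m is carried by the single largest bond, a fraction that approaches one as m grows, which is the strong-disorder limit invoked in (4.10) and (4.11). The sample sizes and the path length are arbitrary choices of mine, and the bonds are drawn independently rather than taken from actual lattice paths.

import numpy as np

rng = np.random.default_rng(4)

def check_strong_disorder(m, n_paths=100000, path_length=50):
    """Monte-Carlo check of the change of variables behind (4.1)-(4.11)."""
    c = rng.random((n_paths, path_length))            # uniform bond costs on [0, 1]
    y = c ** m                                        # transformed variables y = c^m
    mean_y = y.mean()                                 # compare with 1 / (1 + m), Eq. (4.7)
    share_of_max = (y.max(axis=1) / y.sum(axis=1)).mean()   # weight of the largest bond in the sum
    return mean_y, 1.0 / (1.0 + m), share_of_max

for m in (1, 4, 16, 64):
    print(m, check_strong_disorder(m))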
60 Figure 4.1: Invasion Front. 4.1.2 Invasion Percolation Limit To prove that the generalized algorithm in the limit 771 —+ 00 becomes invasion percolation we proceed as follow. Let us consider two sites, 11 and t2, on the invasion front (Fig. 4.1), and t’l and if, any two neighboring sites of 11 respectively t2, such that Ctlt] < CtQtlz (4.12) We can write the generalized energies d5); and d3”) from the source to the respective sites as follows: __ 111 m 371 dst’l — dstl + Ctlt] —‘> C + Ctlt] (4.13) max 'max __ 7n 7". 71 61 Helped by (4.12) we see that: dst] < (151,; (4.15) The algorithm invades choosing the smallest energy bond. This is equivalent with choosing the smallest cost (regardless of the previous history), which is identical with the invasion percolation rule. 4.1.3 Relevant Scales We have proved that the generalized algorithm interpolates between the DPRM (for m = 1) and IP (for m ——> 00). It is known [100] that the directed polymer (DP) problem and non- directed polymer (NDP) problem are in the same universality class (DPRM), provided the disorder is weak. The overhangs, which occur in NDP, are not important. The explanation comes from the definition of the polymer energy in which all terms are equally important, meaning that the polymer’s "memory" is strong. Contrary, when m —> 00, we have proved that the energy functional has no "memory", because the algorithm chooses the next bond regardless the polymer cost. The algorithm is now identical with invasion percolation. Thus, we can associate the size of the overhangs with the inverse "memory" strength. This takes us to the conclusion that by increasing m, the polymer looses "memory" and from self—affine evolves into a self-similar (fractal) structure. The crossover length 1.1,, which is the Euclidian length scale on which the polymer is fractal, can be evaluated from the generalized energy function (4.1). We have made the 62 following approximation: CU” m 1 171 Z — ~ 1 —- f (4.16) Cmax 1V C1] #Crnax were N is the total number of elements in the sum, thus the length of the polymer. The polymer energy for any 171, (1,, = min{ 2 03‘} i,j€pat.h . { m ( CU >171 = min cmax 1 + Z: C11¢Cmax max . 1 m = rn1n{cn":ax [1+ (1— N) ]} ~ rnin {€13.61 [1+ exp {—fléH} (4.17) ‘ 1 This simple argument implies that the crossover length scale 12,, = m. In the simulations we assume, 71.1. : 171’“ (4.18) were k. is an exponent which is to be determined. It is believed [92] that its value is k = 1.6 regardless of the dimension (the simulations were done only for d = 2,3). Because the polymer is fractal its fractal dimension d; is defined by, nb : [df (4.19) 63 were I is the Euclidian length and m, is the total length, the Euclidian crossover length [1. is, "I 2 [if m. = i’ 1,. = (771k)1/df (4.20) 4.2 Numerical Results 4.2.1 Spanning Tree Cost We can analyze the crossover explicitly given the generalized energy function, (131 = min{ 2 c3} (4.21) i,j€path In the limit m —> 00, the largest term dominates the sum and can be interpreted as an energy barrier. In order to test our new invasion algorithm, we have done simulations using an uniform distribution of real numbers in [0, 1] with variable weight factor m. The largest value of m that we can use is in the range of 30 for single precision while for double precision we have been able to successfully work with exponents m = 60. For values larger than these it becomes very difficult to do any mathematical operations on the variables due to the round-off. From Fig. 
4.2 we can see that the cost of the shortest path tree, E_SPT, increases linearly in time, an effect of the direct proportionality between the cost of the paths and their length. The "time" increases by 1 every time we add a new bond to the tree (it is set to 0 when the algorithm starts), and the cost of the tree is calculated using (4.1). The cost of the minimum spanning tree, E_MST, is linear in time until the cluster first percolates the lattice. After this point, since it is necessary to explore unfavorable cost regions, E_MST rises faster than linearly. The exact cost function eludes us, but we point to the fact that in the large-m limit we recover the IP class.

Figure 4.2: Plot of the tree cost E_tree versus time for the shortest path tree (SPT) (solid line) and the minimum spanning tree (MST) (long dashed line). For m = 33 the tree cost function of the generalized algorithm is almost identical to that of the MST. All simulations have been done on 401 × 401 square lattices, averaging over 100 configurations.

4.2.2 Acceptance Function

To further clarify the transition, we have extracted (similarly to Figs. 2.7 and 2.8) the acceptance function a(r), since it tells us how close to the IP phase we get as we increase m. The definition of the acceptance function reads

a(r) = \frac{\text{number of random numbers accepted in the cluster from } [r, r+dr]}{\text{total available numbers in } [r, r+dr]}    (4.22)

In other words, the acceptance function gives the probability for a bond to be in the tree/cluster given that its weight is in the interval [r, r+dr], where dr is the size of the bin. In percolation we are interested in the properties of the first percolating cluster. In this case, for site IP the acceptance function for the first percolating cluster is a step function (in the infinite-size limit) at the critical threshold (p_c ≈ 0.593(1)). In contrast, for bond IP the acceptance function shows a jump at the critical threshold, but its size is less than 1. Up to the critical threshold a(r) ≤ 1, because some bonds with weight c_ij < p_c would make cycles, destroying the tree property, which is not a valid option for our bond IP algorithm.

As we can see in Figs. 4.3 and 4.4, the acceptance function for both the first percolating tree and the complete tree gets closer to the IP curve as we increase m. For m = 33 the acceptance function of the generalized algorithm moves close to the one representing bond IP. If we increase the generalized weight exponent m further, the algorithm suffers from real-number round-off and gives incorrect data. For the simulations we have used square lattices of size 401 × 401 and averaged over 1000 realizations.

Figure 4.3: Acceptance function for the first percolating tree (cluster) on square lattices. The function approaches the MST acceptance function (long dashed line) as we increase the exponent m. The solid line is the acceptance function for the shortest path tree. The lattice size used in the simulations is 401 × 401, while the averaging is done over 2000 realizations.
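A minimal sketch of how such an acceptance curve can be measured is given below. It builds the complete minimum spanning tree of a small square lattice with Kruskal's algorithm, which for distinct weights yields the same tree as the invasion construction, and then forms the histogram ratio of (4.22). The lattice size, the number of bins and the free boundaries are arbitrary choices of mine and do not reproduce the 401 × 401 runs quoted above.

import numpy as np

def mst_acceptance(L=60, bins=25, seed=5):
    """Acceptance function a(r), Eq. (4.22), for the complete minimum spanning tree."""
    rng = np.random.default_rng(seed)
    bonds = []
    for x in range(L):                                  # uniform weights on an L x L square lattice
        for y in range(L):
            i = x * L + y
            if x + 1 < L:
                bonds.append((rng.random(), i, (x + 1) * L + y))
            if y + 1 < L:
                bonds.append((rng.random(), i, x * L + y + 1))

    parent = list(range(L * L))
    def find(a):                                        # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    accepted = []
    for w, a, b in sorted(bonds):                       # Kruskal: accept a bond unless it closes a cycle
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            accepted.append(w)

    offered = np.array([w for w, _, _ in bonds])
    accepted = np.array(accepted)
    edges = np.linspace(0.0, 1.0, bins + 1)
    a_r = np.histogram(accepted, edges)[0] / np.histogram(offered, edges)[0]
    return 0.5 * (edges[1:] + edges[:-1]), a_r

r, a_r = mst_acceptance()
print(np.round(a_r, 2))

The printed values should be close to one well below the bond-percolation threshold (p_c = 1/2 on the square lattice) and drop toward zero above it, which is the qualitative shape of the MST curve in Fig. 4.4.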
4.2.3 Path Geometry and Fractal Dimension We have argued in previous chapters (Chapter 2, 3) that the paths have important roles in transport, flux lines in superconductors, etc.. It is of great importance to see how the struc- ture of the paths change from polymers at small m with no fractal dimension (d, = 1), to a fractal paths (0'; 2 122(1) in 2d, and d; = 1.42(2) in 3d) at large m. We have been able to extract the geometrical structure evolution by measuring the probability distribution g(.s) for a path to have s, where s is the path length scaled with the Euclidian length raised to the power 01, 2 122(1), as seen in Fig. 4.5. Since we argued that our generalized algorithm gives both IP and KPZ limits we have added as an extra proof the distribution of scaled 67 (L8 (l6 a(r) (l4 ()2 Figure 4.4: Plot of the acceptance function for the complete tree on square lattices. We can see that with increasing m we get closer to the IP (long dashed line). The solid line is the acceptance function for the shortest path tree. The averages are done over 2000 realizations of a 401 x 401 lattice. paths length as seen in Fig. 4.5. While increasing the weight exponent m the distribu- tion function 9(3) approaches the IP limit, where paths are fractal with fractal dimension d f 2 122(1). To better understand the transition from DPRM to IP we have studied the paths fractal dimension as a function of m. The distribution of the weights is again uniform in [0, 1]. We find that the paths are self-similar for length l < 11., the cross-over length while self-affine for l > [1,. The cross-over length scales as, [I ~ (7721‘)le (4.23) 68 0.01 1 g(S) * 0.001) 0.0001 1 Figure 4.5: The scaled distributions of path-lengths, 9(3), on spanning trees on square lattices. The scaling variable is s = n / [DI where n is the number of bonds in the MST path, I is the Euclidian distance and D f is the scaling dimension (D f = 1.22 i 0.01 for square lattices with m = 10,90 and D f = 1.00 :t 0.01 for square lattices with m = 1). The dotted line in these figures is the scaling distribution on the trees for various m = 1, 10, 90. For comparison we also give the scaling distribution for the steady state during growth of the trees, i.e. invasion percolation like (solid line). In both cases the paths scale with the same fractal dimension, in fact this holds at all stages of growth of the trees. The results are found from averaging over 90000 realizations of 201 x 201 square lattices. were 01; is the fractal dimension and k = 1.60 :l: 0.03 is an exponent that does not depend on dimensionality [92]. The simulations were done for 2d square lattices with L = 401 and over 1000 simulations. The fractal dimension on short length scales proved to be self- similar behaving like 1 1/ d! with (If 2 1.22(1), while over large scales the fractal dimension of the path changes continuously from d, = 1 for m = 1 (DPRM) to d, = 1.22(1) for m —> 00 (IP). The conclusion is that if I < [I the path is dominated by the largest bond while for l > [x the path is no longer self-similar, showing self-affine behavior depending 69 1 0000 100’ nb /mk E 0.01 0.0001 1/ (:nk l/df Figure 4.6: Scaled plot of the total length 72;, versus the Euclidian length l. The constant k = 1.6 from [92]. The fractal dimension has been assumed to have the value d f = 1.22(1). The dotted line has slope equal to 1 and is for invasion percolation (the IP paths have fractal dimension d, 2 122(1)). on the value of m. 
Because [I is the only relevant scale we expect that, l l = [If (r) (4.24) where f (:1?) is a scaling function. The same behavior is expected for the polymer roughness: t w = trg (T) (4.25) Here g(x) is the roughness scaling function and t1. = 11,. From Fig. 4.6 we conclude that overhangs come in to play as we transition from KPZ to IP paths. While for the KPZ we have been able to prove that overhangs play no role (as they are almost nonexistent) the 70 bond IP paths are completely dominated by them. 4.2.4 Trees Overlapping Function An important part of our study was to perform numerical simulations for a high general- ized exponent m. As we have already seen in previous sections a very good overlap with IP curves occurred at a value m = 33 at L = 401. By further increasing m we did not increase the quality of the overlap, rather we have decreased it. To verify this hypothesis, we have calculated the overlap function F (772) between the tree obtained from the general- ized algorithm and either the shortest path tree or the minimum spanning tree. We define the overlap as the number of bonds part of a tree but not part of the other one (uncommon bonds), divided by the total number of bonds in the tree. As seen in the log-log plot from Fig. 4.7 the overlap with the SPT increases with the increase of m. At the same time, the overlap with the MST shows a minimal value for m above which the trees start to differ more than before, contrary to our analytical calculations which suggested that in the limit m —> co the generalized tree and MST become identical. To explain the minima in the MST overlap we have run the same simulations on two different machines. The results (Fig. 4.7) proves that a round-off for reals occurs at the approximate value m = 33 on a 32 bit machine, while on the 64 bit machine the value is doubled m = 60. 4.3 Conclusions We have found that the generalized algorithm given by (4.1) is able to continuously inter- polate between two important universality classes: KPZ when m = 1 and IP for m —> oo. 71 .‘.-.-._l-l—l-U-I-I-D-I-O—D-I-I-I—I-O-D-._IO ’ a' " . °-- SPT O 11— -- MST on 32bit CPU ' r -- MSTon64bitCPU r— lLll 0.01 1T1] 0.001 ' 4 L Figure 4.7: The fraction F of uncommon bonds to the total number of bonds in the tree ver- sus the generalized exponent m for a 201 x 201 square lattices (10 simulations) on different processors; the solid line is for simulation running on a Digital UNIX V4.0E, while the long dashed line is for simulation running on an Intel Pentium HI. We can clearly see that the machine round-off for the real numbers accounts for the valley in the overlap function, while the maximum overlap it is achieved for m z 30 for the 32 bit machine, on a 64 bit machine the maximum overlap occurs at m z 60 value of the generalized exponent. The dot-dashed line is for overlap between the generalized tree and SPT (on an Intel Pentium III); all other curves are for MST overlap with the generalized tree. The paths are characterized by only one scale the crossover length (,5, which divides the paths into self-similar for l < [1. and self-affine l > 11.. The paths fractal dimension in the IP limit is (If : 1.22( 1) for 2d square lattices while (1) 2 142(2) for 301 square lattices. The overhangs behavior is responsible for the path geometry. In weak disorder they are almost nonexistent contrary to the strong disorder case when they dominate the paths. 
Our results are in agreement with similar calculations done for DPRM problem in the strong disorder limit [26, 27,92,100], where, using Dijkstra’s algorithm, they have obtained paths with the same geometrical structure as MST. We have been able to prove that our generalized algo- 72 rithm with weak disorder is equivalent to the DPRM problem in the strong disorder case, which in turn, for m —> 00 is identical with invasion percolation. Furthermore the path length scaling proves that our analytical results are true. One problem is the round-off for real numbers once they are raised to high powers (remember our original real numbers are uniformly distributed in [0, 1]). The tree overlap function shows a minimum as a function of m, which could be attributed to the machine precision. 73 Chapter 5 Random Field Ising Model 5.1 Introduction The central issue in the equilibrium random field Ising model(RFIM) is the nature of the phase transition from the ferromagnetic state at weak disorder to the frozen paramagnetic state at high disorder. The existence and universality class of the RFIM transition, is key as the best experimental tests of RFIM theory are diluted antiferromagnets in a field, which are believed to be in the same universality class as the RFIM [49]. After some controversy, it was rigorously demonstrated that the RFIM transition occurs at a finite width of the distribution in three dimensions [63] and at an infinitesimal width in one and two dimen- sions. Moreover, Aharony [2] showed that within mean field theory at low temperatures, the transition is first order for bimodal disorder distributions but second order for unimodal distributions. Numerical studies at zero temperature suggest, that in four dimensions, the bimodal case is first order and the Gaussian case is second order. The analysis in three di— 74 mensions is less conclusive [105]. The difference between the Gaussian and bimodal cases has been attributed to percolative effects [106]. We have recently shown that at zero tem- perature, the mean-field theory is non-universal [41] in the sense that the order parameter exponent may vary continuously with the disorder. Exact optimization calculations [6,94] in three dimensions have also suggested that the correlation length exponent, as deduced from finite size scaling, is non-universal [7]. Motivated by the fact that the RFIM is non-universal within mean-field theory for the stretched exponential distribution, we have analyzed the RFIM on complete graphs with disorder distribution, (612/ |h|)r (0 < :1: < l, h| < 6h). We find that this distribution is anomalous in the sense that this sort of disorder never destroys the spontaneously mag- netized state, at least within mean-field theory. The behavior of the RFIM on complete graphs is thus quite varied and anomalous. To determine whether this non-universality extends to other lattices, we have analyzed the zero temperature RFIM on a Bethe lattice for the stretched exponential and power law distributions of disorder. We prove that the Bethe lattice is universal, provided the transition is second order, even in the limit of large co-ordination. This is surprising since in this limit the Bethe lattice usually approaches the mean-field limit. We also extend the results outlined above to the non-equilibrium case. Ground state cal- culations of hysteresis and Barkhausen noise in the RFIM have demonstrated that the spin avalanches are controlled by the equilibrium RFIM critical point [33,101]. 
It is thus not sur- prising, and we confirm, that the magnetization jump in the hysteresis loop is non-universal for the stretched exponential disorder distribution. The integrated avalanche distribution 75 also has a non-universal exponent due to the non-universality of the order parameter. But the “differential" mean-field avalanche exponent is universal, even in cases where the order parameter exponent is not. In contrast, as expected from the equilibrium results, the Bethe lattice exhibits universal non-equilibrium critical behavior. 5.2 N on-equilibrium Random Field Ising Model The Lenz-Isin g model is probably the oldest and simplest non-trivial model for cooperative behavior that shows spontaneous symmetry breaking. It has a vast number of applications from solid state physics to biology and recently to economics. The Hamiltonian is, . 3 H = — 2: 2,5,5, — DH + has. (5.1) 0' where the exchange is ferromagnetic (Jij > 0) and the fields h,- are random and uncorre- lated. In the non-equilibrium problem we sweep the applied uniform field, H, from ~00 to co and monitor the magnetization at a fixed J.)- = J and for a fixed disorder configura— tion {/2,}. This model has been proposed as a model for Barkhausen noise by Dahmen et al. [33]. The local effective field responsible for a spin-flip is hf” = J 2 s, + h.- + H (5.2) #i The condition for a spin to flip is that hf” > 0. The random fields are drawn from a specified distribution p(h). To test universality, we use the following distributions which 76 2 1 1 1 —- y=0.5 I H I\ I \ I \ I \ II \\ 1— , \ _ I \ I \ I \ ,. I \ I \ I \ ll \\ 0.5 — l’ \\ "‘ II \\ I’ \\ y— l’ \\ I” “\ ”/ \\\ 0 ’ 1 l l l \ -l -0 5 0 0 5 1 Figure 5.1: The disorder distribution p1 given by (5.3) for 6h 2 1.0. The most important feature of this function is the behavior near the origin for various y exponents. are defined on the interval «Sh g h 3 6h, y+1 |h| y I, :2 — — ' . pl( 1) 23161111 (6h 0 < y < 00 (5 3) and (1)-filmy —1<‘< (54) p2 " ‘ 26h 6h y 00 ' We have shown that m, which is the low field expansion of a stretched exponential disorder distribution, leads to non-universality in the ground state of the equilibrium mean-field RFIM [41]. Here, we extend that result to the non-equilibrium case. We then show that the distribution [)2 destroys the RFIM phase transition, within mean-field theory but not on Bethe lattices, for —1 < y < 0. 77 Figure 5.2: The disorder distribution p2 given by (5.4) for 6h = 1.0. 5.2.1 Mean-Field Theory Mean-field results can provide us with insight to the problem. Using the Hamiltonian defined above and mean-field (MF) formalism we discuss the behavior of the ground state at zero-te mperature . H = — 2 1251’s., (5.5) i The magnetization is given by hc(m) «x; m = —/ p(h)dh + / p(h)(lh (5.6) 00 . hC(-m) 78 were hc(m) = —Jm — H. The energy at a given magnetization is .11722 0° h“ E('m) : 9 —/ Il‘2.|/)(/2)(1h + 2/ hp(lz)rlh. (5.7) — 1 .0 H (Y; 4 Extremizing with respect to the order parameter, m, yields the ground-state mean-field equafion, Jme+H me = 2/ p(h)(lh (5.8) 0 The non-equilibrium critical points are found from the susceptibility X : 8772/8H, which from (5.8) is given by, , _ 2p(Jm + H) \ ‘ 1 — 2Jp(.]m + H) (5.9) 5.2.2 Mean-Field Avalanches Using the same framework one can also calculate the scaling exponent for the avalanche size distribution. If a spin is flipped the average number of spins that will be triggered is given by nm-g : 2.] P(—J M — H). 
If 72mg < 1 the avalanche will die—out, (will have a finite size) while if nm-g : 1 the avalanche will be infinite sweeping the entire system. The probability 11(5) for an avalanche of size S starting with a spin flip at h,- = —J M — H is given by the Poisson distribution with the average value /\ = 2.] SP(—J M — H) divided by 1/5' 0(5): , L exp(—A) (5.10) Expanding near the critical point and also using Stirling’s formula we obtain the scaling form for avalanche size distribution D ~ 57-3/2 (5.11) More generally, the avalanche distribution, a(s, t) gives the probability of finding an avalanche of size 5 at parameter value 1, found using a Poisson statistics argument [33], yields, d(3,1)~ s-Te-“s = s'Tg(s"t), (5.12) where g(x) is a scaling function and t = 1 — 2J p( J m,3 + H). Experimentally, it is more natural to make a histogram of all avalanches up to the critical applied field at which the magnetization changes sign. This “integrated" distribution behaves as, D(s,0h) : 3—7-0/35g(s”7') (5.13) where r 2 W2 — 612,]. For a Gaussian distribution of disorder, /3 = 1/2, a = 1/2, T 2 3 / 2. We have shown, however, that in the ground state for the distribution (5.3), the equilibrium order parameter exponent, )3 = 1 /y. In contrast, it is evident from (5.12) that the exponents o and r are universal. The non-universality in non-equilibrium behavior arises in the magnetization jump and in the shape of the non-equilibrium phase boundary, as we will demonstrate in this chapter. 80 11‘ 5.2.3 Magnetization and Critical Field Consider the distribution (5.3). Integrating (5.8) yields the mean field equations, 1 _ _ 1 _ _ m = y + (Jm + H) — —|Jm + Hly“ (5.14) y y for J-m. + H > 0, and ' 1 - __ 1 _ _ m = y + (Jm + H) + —|Jm + Hly+1 (5.15) y y for Jm + H < 0. Here we have defined, 7 = J/(Sh, H = H/(Sh. Setting H = 0 in either (5.14) or (5.15) yields the equilibrium magnetization [41], 1/ m,q : ih/ +1]l/y [I — _—y—] y (5.16) " J J(y +1) At the critical point, the magnetization scales with the magnetic field as 712.6(7‘ = 0, H) ~ H1”. From (5.15) it is evident that 6 z y + 1. The susceptibility x = (7771/071— diverges when the barrier between the two local magnetization minima of the ground state energy ceases to exist. From (5 .9), we have (y+ 1)[1 — (7n1+H)y] *2 _ _ __ . 5.17 \ y-(y+1)J[1—(Jm+H)3/] ( ) and the critical condition y = (y + l).7[1 -— (jmmq + EV]. (5.18) 81 This equation has the simple solution, _ _ y 1/y we = Jmncq + H: = [1 — :——J . (5.19) J(y +1) Substituting (5.19) into (5.14), we find that the non-equilibrium magnetization jump is positive and has the value . 1 _ y 1/y mm, = 1 + =——-— 1 — _—— , (5.20) J(y+1) J(y+l) for H —> 11:. Substituting this into (5.19), the critical field is found to be, I 1+1/y y 61 )] . (5.21) HC=—.]1——— [ J(y+1 This negative critical field is expected when starting with the positive magnetized state. By symmetry, the negative magnetization solution is at —HC (figure 5.3). The value of the magnetization at that point is —-m,,.q. Note that 177171qu is not the size of the magne- tization jump in the hysteresis loop. The jump in magnetization in the hysteresis loop is 6mm,“ 2 [mnwl + m(|HC ) (figure 5.4), where m(|HC|) is found by solving (5.15). The critical exponent associated with the jump in magnetization is determined by the behavior of the distribution p(h) at small fields, so that the critical exponents found here apply to distributions of the form p(/r) = exp(—( lh | / H )3”). 
For y < 1 these are the stretched expo- nential distributions ubiquitous in glasses, while for y > 2 they are more concentrated near the origin. 82 1 -- y=0.5 — y=2.0 ‘ 0.5 '_ \\\\ _ 0 i % ’:3::::4::—;m.+——+—_ 5h 0 1 I ,,,, 2 3 4|. -0.5 "" ’1” — Figure 5 .3: The phase diagram of the non-equilibrium RFIM MFT using the disorder dis- tribution (5.3). In this figure, we took the exchange constant J = 1. The dotted line is for y = 0.5 while the solid line is for y = 2. Note that at the equilibrium critical disorder, (She, the hysteresis loop disappears. Below the critical disorder point the hysteresis loop is finite while above it disappears. The model is not universal; as we can see from equation (5.21) the critical field has an associated exponent dependent on the disorder exponent y. Now we briefly consider the distribution p201) given in (5.4). For y > 0 this distribution is bimodal and it is easy to confirm the conclusion of Aharony [2] that the transition is first order. However the cases —1 < y < 0 are more interesting. In these cases the disorder is dominated by small random fields, as the distribution is singular at the origin. It is easy to carry out the mean-field calculation (5.8) with the result, (7] 1+1/y mm] = (7:) 6h > J (5.22) By comparing the energies of E(m = 0), E(m : l) and E(m,q) (using (5.7)), we find that 83 —- y=0.5 _ L 1 5h 4 Figure 5.4: The magnetization jump at the phase boundary. In this figure, we took the exchange constant J = 1. The dotted line is for y = 0.5 while the solid line is for y = 2. Note that at the equilibrium critical disorder, (She, the hysteresis loop disappears. The critical exponent associated with the jump in magnetization is determined by the behavior of the distribution p(h) and has different values when changing y as seen above. for 6h < J, the ground state is fully magnetized, while for 6h > J the ground state has magnetization (5.22). The interesting feature of the result (5.22) is that there is no phase transition at finite 612, and the system is always ordered. The disorder distribution (5.4) thus destroys the ground state phase transition, due to the large number of small random fields. 5.3 Equilibrium RFIM on Cayley Tree Now, we determine whether the non-universal results found above for the mean—field theory extend to the ground state of the RFIM on a Cayley tree. The coordination number of a 84 tree is taken to be 2, while the probability that a spin is up is P+ and the probability that a spin is down is P_. The probability that a spin is up at level 1 can be written in terms of the probabilities at the level which is one lower down in the tree, this yields [18,36] P+(l) : Z (Q)Pi(l—1)Pf_g(l—1)a+(a,g) (5.23) where a+(a, g) is the probability that the local effective field is positive when 9 neigh- bors are up. If we know the distribution p(h,~) we can compute a+(a, 9). Analyzing the equilibrium behavior, we have, a:’(a,g) = /(00 p(h)dh (5.24) a-2g)J—H The equilibrium Cayley tree model has been extended to the non-equilibrium case by considering a growth problem in which the spin above the currently considered level in the tree is pinned in the down position [36, 98]. This models the growth of a domain. The formalism is the same as in (5.23), with the modification that (1.:6q(a,g) = a.:q(z,g). (5.25) From this equality and the form (5.23) it is easy to derive all of the non-equilibrium results from the equilibrium results found using (5.23) and (5.24). 
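The recursion (5.23) with the kernel (5.24) is straightforward to iterate numerically. The sketch below does this for the distribution p1 of (5.3), whose cumulative can be written in closed form; following the z = 3 example discussed next, the sum is taken over all z neighbours with a+(z, g). The coordination number, exchange constant, disorder exponent and field values are arbitrary choices of mine, and the fixed-point iteration is only meant to show the ordered state disappearing as the disorder width grows.

import numpy as np
from math import comb

def a_plus(g, z, J, H, delta_h, y):
    """Probability that the local field is positive when g of the z neighbours are up:
    integral of p1, Eq. (5.3), from (z - 2g) J - H to infinity."""
    lo = np.clip((z - 2 * g) * J - H, -delta_h, delta_h)
    # cumulative of p1 on [-delta_h, delta_h]: F(h) = (1 + sign(h) (|h|/delta_h)**(y+1)) / 2
    F = 0.5 * (1.0 + np.sign(lo) * (np.abs(lo) / delta_h) ** (y + 1))
    return 1.0 - F

def cayley_magnetization(z=4, J=1.0, H=0.0, delta_h=1.5, y=2.0, iters=4000):
    """Fixed point of the recursion (5.23) and the resulting m = 2 P+ - 1."""
    P = 0.9                                              # start from an up-biased state
    for _ in range(iters):
        P = sum(comb(z, g) * P ** g * (1 - P) ** (z - g) * a_plus(g, z, J, H, delta_h, y)
                for g in range(z + 1))
    return 2 * P - 1

for dh in (1.0, 1.5, 2.0, 2.5, 3.0, 3.5):
    print(dh, round(cayley_magnetization(delta_h=dh), 3))

For these parameters the ordered solution disappears between δh = 2.5 and δh = 3.0, and on approaching that point the magnetization vanishes continuously, consistent with the universal exponent β = 1/2 derived below.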
To find the hysteresis curve on a Cayley tree, we just shift the equilibrium magnetization as a function of field: by H ——> H — J when sweeping from large positive fields and by H —> H + J when sweeping from large negative fields (figure 5 .5). The behavior is evident in previous numerical work, 85 5h Figure 5.5: The phase diagram for the non-equilibrium RFIM on a Cayley tree with coordi- nation number a 2 3 using the distribution (5.3) and taking the exchange constant J = 1. The dotted line is for y = 0.5 and solid line is for y = 2. The initial, linear part, of the phase boundary is due to the finite cutoff of the distribution (5.3). There is a discontinuity in slope of HC(6h) at the equilibrium critical disorder (She. but does not seem to have been noticed before. By direct iteration of the recurrence relation (5.23) we show that a stable steady state solution, P; = 1 — PI, exists. It is easy to solve equation (5.23) in the steady state limit, at least for small values of a. For a : 1, 2 Cayley trees have no ordered state for any finite 6h, for the disorder distribution (5.3). But for a = 3 a ferromagnetic state does exist for a range of disorder. As we see from (5.23), the a = 3 case leads to a polynomial of order 3 which can be simplified to, :34sz — 3b + a) — 1 + 3a + 3b] = 0 (5-26) 86 2 1 T 1 l 1.5 - _ 1_ _ 0.5 — _ 00 i i 55“ Figure 5.6: The magnetization jump for the RFIM on Cayley trees for z = 4 and the distribution (5.3), with the exchange constant J = 1. The dotted line is for y = 0.5 while the solid line is for y = 2. In both cases we find the same critical exponent, for example 13 = 1/ 2. In contrast, the mean-field result is H = 1 / y. were m = 2(1); —1/2),a : aff(3, 0) and b = (11"(3, 1). (5.26) has the following solutions: (5.27) 3a+3b—1 ”2 Bb—l—a ' m = O; and m = :1: ( These solutions apply for any disorder distribution. For the distribution p1(h), performing the integrals yields, (5.28) —y+1 _ _ l ‘2 4y — 12(1) +1)! + 3039+l +1)Jerl / 771: 3(1 — 321).] 87 We can now expand the magnetization around the critical point, JC, 7 2 Jo — e. We find, 1/2 y +1)(1+ 3y+1)7y)c] (5.29) 772 ~ [(—1‘2(y +1) +3( 3] Thus 772 ~ 61/2 for any y, so that ,6 = 1/ 2 is universal (figure 5.6). Since the non- equilibrium behavior on trees is related to that of the equilibrium behavior in such a simple manner, this universality extends to the hysteresis and avalanche exponents. It is easy to confirm numerically that the behavior extends to large values of the branch co-ordination number 0. Moreover by doing an expansion of (5.23) using R, = 1/ 2 + m, it is possible to show analytically that only the first and third order terms in m exist, regardless of the value of y in the disorder distribution (5.3). This confirms that for this distribution, the behavior is universal for all coordination numbers. For the distribution p202.) and a! : 3 we get from (5.27) (5.30) --y+1 4 —- 3(3y+1+1)7"+1 ”2 777. Z . Just like we have done before we can expand 771 around the critical point, J 2 JC — e: . . __ 1/2 m ~ [(1) +1)13y“+1))«16y‘€1 ' y Thus for p2, I3 = l / 2 is an universal exponent, too. 88 5.4 Finite Temperature Expansion The analysis above can be extended to finite temperature.The mean field equation in this case is, m = / thUz) tanh[(Jm + H + h)/T] (5.31) 00 Carrying out the same procedure like in the T = 0 case the susceptibility is, 1 00 T/ th(h)sech2[(Jm + H + (1)/T1 X Z . 
_,,0 (5.32) J 00 1— 77/ th(h)sech2[(Jm + H + h)/T] 00 The poles of the susceptibility will give us the critical condition, 1 = 7‘]: f... th(h)sech2[(ar + h)/T] (5.33) 00 where .1: = J m + H. Integrating by parts and using the symmetry of P(h) yields, _ 0° 8PM) 2' +12 2‘ — h 1 — J/_OO dh—af—l— [tanh( T > — tanh< T )] (5.34) from which it is easy to see that when T —> 0 this reduces to the ground state result,(5.18). Now we want to find the behavior at finite temperature case, especially near the critical point. In the case of no random fields the critical point is determined by, 1 = —% sech2 (Q (5.35) 89 To leading order this leads to 3: ~ T(1 — T/JW'Z. Substituting this in the mean-field equation m = tanh(.r-/T), yields mm, ~ (1 — T/J)1/2. Using this result for :r', the critical field for this case turns to be HC 2 T(1 —— J/T)(1 — T/J)”2 Now we seek to find generalizations to the case of finite disorder. A more convenient form, found using some trigonometric identities is, 22 2h 1 + cosh ? cosh 7,— 1 = 2%] (Input) 2 (5.36) 0 21: 2h cosh ? +cosh T For finite temperature and near the critical point, there is a regime in which 2- / T is small. In that limit (5.36) reduces to, 1 = 2 f? [(0, T) sech2 G) (5.37) where o. 1 [(H. T) = / (0119(1)) (5.38) 0 2(h + H) l + cosh 7— Using the fact that the field distribution is symmetric, and the properties of the hyper- 90 bolic functions, we can write magnetization as, 00 PU!) m z ‘2 tanh(2J7n/T)/ dh (5.39) 0 cosh (2(h + H)/T) cosh (2m/T) 1+ As we can see there is a regime when the magnetization is much smaller than the tempera- ture, m / T << 1. The magnetization becomes in this case, 772 = 2 [(11, T) tanh(2Jm/T) m/T << 1 (5.40) Providing there is a regime where m << T we can expand to third order in magnetization, and we find, 1 = 4 [(H, T) 55‘ (5.41) We also solved (5.39) numerically using Mathematica. The results show indeed an universal behavior as soon as temperature is turned on. The surface in Figs. 5.9 and 5.10 are the phase diagrams in the 3d parametric space {H, 0h, T} for y = 0.5, 2.0. In Figs. 5.7 and 5.8 we have solved (5.39) for two special cases when the field H or the disorder 0h have been suppressed giving us a clear look at the surface boundaries from Figs. 5.9 and 5.10. 91 3 f l l l l \ \ " \ \ 2 5 - ‘~\\ _- 51:05 T ‘s“ — y=2.0 \“ 2— ““‘ — ‘\ ‘ ‘\“ “‘ 1.5 ~\ _ ‘\ \ \\ '1 \ ‘\ 1* \ _3 \ \ \ \ \\ 0.5“ \— \ 0 l I I l T 0 0 2 0.4 0.6 0 8 1 Figure 5.7: Critical disorder versus temperature in the absence of the field (H = 0). We can see that for different disorders both functions have the same behavior approaching the temperature critical point (T = J = 1). The plot has been obtained by numerical solving (5.39) in the limit H = 0. Hc 1 I l ' l ' T I l l l 0.8 0.6 l l l 0.4 0.2 l Figure 5.8: Critical field as a function of temperature when disorder is suppressed (072. = 0). The RFIM is now the regular ferromagnetic Ising model at finite temperatures. 92 O) {3" 4' Figure 5.9: The phase diagrams for T # 0 RFIM for y = 0.5. In order to obtained the surface we have fixed T and solved (5.39). Figure 5.10: The phase diagrams for T # 0 RFIM for y = 2.0. This figure is obtained identical to Fig. 5.9. 93 5.5 Conclusions In summary, on complete graphs (i.e., in mean-field theory) the RFIM at T = 0 is non- universal. In particular, the stretched exponential disorder distribution leads to a non- universal order parameter exponent and non-universal integrated avalanche exponent. 
5.5 Conclusions

In summary, on complete graphs (i.e., in mean-field theory) the RFIM at T = 0 is non-universal. In particular, the stretched exponential disorder distribution leads to a non-universal order parameter exponent and a non-universal integrated avalanche exponent. In addition, the power-law distribution has a regime in which a predominance of small random fields destroys the transition and the RFIM always has a finite magnetization. In contrast, the Cayley tree shows neither of these behaviors. Even in the limit of large coordination it is universal, with the usual mean-field order-parameter exponent 1/2. We have carried out some preliminary numerical studies of the behavior in three dimensions (with short-range interactions) and find that the power-law distribution of random fields does not destroy the transition. Moreover, the exceedingly small value of β in three dimensions renders any non-universality in β a moot point. However, the behavior in dimensions higher than three, or for longer-range interactions in three dimensions, could be more interesting. Finally, even for short-range interactions in three dimensions there have been suggestions of non-universality in the finite-size scaling behavior [7]. It is unclear, as yet, whether that behavior is related to the non-universality seen here.

Chapter 6

Scale-Free Random Networks

Complex interacting networks are a common theme in diverse disciplines. One example is the world wide web (WWW), with great impact on the economy and also on Internet security. Another example is the large-scale study of biological networks, which is only now becoming accessible experimentally and, remarkably, raises considerable interest in the statistical physics community.

6.1 Models for Random Networks

6.1.1 Erdős-Rényi Model

The first model for random networks, proposed by Erdős and Rényi [45], is known to be inadequate as a representation of real networks for many reasons, in particular because it has a Poisson degree distribution. Regardless of its inability to model real networks, it was the first work to be considered "the first conscious application of the probabilistic method" [65]. This simple model is the basis of the two most basic random graph models, the binomial model and the uniform model. We denote by G(n, p) a graph with n vertices in which each possible edge between two vertices is present with probability p; Ω the set of all graphs with n vertices; and e_G the number of edges in the graph G. Given 0 ≤ p ≤ 1, the binomial model defines G(n, p) as part of the ensemble Ω of all graphs with n vertices, in which each of the n(n-1)/2 possible edges is drawn independently with probability p, so that a given graph G occurs with probability p^{e_G} (1-p)^{n(n-1)/2 - e_G}; in the uniform model, G(n, M), a graph is chosen uniformly at random among all graphs with n vertices and exactly M edges. The average degree¹ in the binomial model is z = p(n - 1); for n -> ∞ this becomes

    z \sim np, \qquad n \to \infty        (6.4)

so instead of talking in terms of p we can express everything in terms of z. The Erdős-Rényi model has the ability to give exact average properties in the large-n limit [17, 65]. In particular, it has been demonstrated that the model exhibits a phase transition with increasing z. Initially (z < 1), in the incipient stages of the evolution, G(n, M) is formed by components² whose maximum size is of order O(log n). As we increase the number of edges the size of the maximum component changes sharply, and for M > n/2 (z > 1) it scales linearly with the graph size, becoming the giant component of order O(n). The small components do not completely disappear, but their average size remains constant as the graph grows; the size of the next largest component is at most O(log n).

¹The degree of a vertex is the number of edges connecting that vertex.
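The transition just described is easy to see in a small simulation. The sketch below is mine, not the thesis code: it draws roughly z n/2 random edges (so repeated pairs and self-loops, which are rare, are simply ignored) and measures the largest component with a union-find structure.

import random

def giant_component_fraction(n, z, seed=0):
    """Fraction of vertices in the largest component of an Erdos-Renyi graph
    with mean degree z; the fraction jumps from ~0 to O(1) at z = 1."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i
    m = int(z * n / 2)                        # approximately the expected edge number
    for _ in range(m):
        i, j = rng.randrange(n), rng.randrange(n)
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

if __name__ == "__main__":
    for z in (0.5, 1.0, 1.5, 2.0):
        print(z, giant_component_fraction(20000, z))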
This phase transition has been thoroughly investigated [17, 65], and the detailed phase diagram is somewhat complicated by the existence of a "double jump". As proved by Erdős and Rényi, the size of the largest component changes twice near the critical point: from O(log n) in sparse graphs with M < n/2 to O(n^{2/3}) at M = n/2 (sub-critical phase), and from O(n^{2/3}) to O(n) for M > n/2 (super-critical phase). This phase transition falls in the same universality class as the mean-field percolation transition, with max S, the size of the giant component, taken as the order parameter.

However important these models are, there are some fundamental differences between them and real-world networks. The first one, noted by Watts and Strogatz [108, 109], is a difference in clustering³. Real-world random networks have a very high clustering coefficient C compared with the Erdős-Rényi model, in which the edges are drawn independently, giving C = p. The second, found by Barabási and Albert [11, 13] and described in the next section, concerns the degree distribution.

²The components of a graph G are its maximal connected sub-graphs [110].
³The average probability that two neighbors of a given vertex are also neighbors of one another [108].

    Network              n        z      C      C (random graph)
    WWW                  153127   35.2   0.11   0.00023
    Power Grid           4941     2.7    0.08   0.00054
    Food Web             134      8.7    0.22   0.065
    Metabolic Network    315      28.3   0.59   0.09

Table 6.1: The number of sites n, the vertex degree z and the clustering coefficient C for some known networks, compared with the clustering coefficient of a random network with the same n and z.

6.1.2 Barabási's Model for Scale-Free Networks

In the Erdős-Rényi model the probability that a vertex has degree k is

    p_k = \binom{n-1}{k} p^{k} (1-p)^{n-1-k}        (6.5)

which in the limit n >> k becomes

    p_k = \frac{z^{k} e^{-z}}{k!}        (6.6)

the Poisson distribution (Fig. 6.2). It is well known that both the binomial and the Poisson distribution are strongly peaked at the mean value z, with rapidly decaying tails. In contrast, in real-world networks the degree distribution is nowhere near the Poisson distribution. Many well known networks (WWW, Internet (also see Fig. 6.3), etc.) have a power-law degree distribution (Fig. 6.4), which means that there is a small fraction of highly connected vertices while the majority of vertices are weakly connected. Barabási and Albert [11, 13] proposed an improved model to account for the scaling properties of these systems, which tries to partially answer an important question: what is the mechanism that leads to a power-law connectivity distribution?

Figure 6.1: 1928 United States of America, North-West highway map. The mapping to a graph is trivial in this case; note that the connectivity distribution follows a Poisson distribution as in Fig. 6.2.

Figure 6.2: The Poisson distribution for z = 6 has a clear maximum, which tells us that most of the vertices of a random graph obeying such a distribution have connectivities in the range of the peak value.

The Erdős-Rényi model is static, i.e. the number of nodes N is always fixed and the probability p that two vertices are connected is uniform. In contrast, most real-life networks are open, i.e. they continuously grow by the addition of new vertices, and the connectivity is not uniform: for example, a newly created web page will more likely include links to popular documents with already high connectivity.
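Before turning to the growth model defined next, the Poisson form (6.6) quoted above is easy to check numerically for the static model. The sketch below is my own illustration, not the thesis code; the sample size and names are arbitrary choices.

import math, random

def degree_histogram(n=3000, z=6.0, seed=1):
    """Sample G(n, p) with p = z/n and compare the empirical degree
    distribution with the Poisson form (6.6)."""
    rng = random.Random(seed)
    p = z / n
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    kmax = max(deg)
    emp = [deg.count(k) / n for k in range(kmax + 1)]
    poi = [math.exp(-z) * z**k / math.factorial(k) for k in range(kmax + 1)]
    return list(range(kmax + 1)), emp, poi

if __name__ == "__main__":
    ks, emp, poi = degree_histogram()
    for k in range(13):
        print(k, round(emp[k], 4), round(poi[k], 4))   # both peak near k = z = 6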
A simple model incorporating both of these ingredients can be defined in two steps:

1. Growth: starting with a small number m_0 of vertices, at every time step we add a new vertex with m (≤ m_0) edges that connect it to vertices already present in the system.

2. Preferential attachment: when choosing the vertices to which the new vertex connects, we assume that the probability Π that the new vertex will be connected to vertex i depends on the connectivity k_i of that vertex, such that Π(k_i) = k_i / Σ_j k_j.

The model evolves to a scale-free network, with the probability that a vertex has k links following a power law with exponent γ_model = 2.9 ± 0.1 [13]. This model tries to capture the essence of scale-free networks but cannot explain the exact topology of the WWW (γ_out = 2.45, γ_in = 2.1). Naturally the WWW has a much richer structure; for example, links are not invariant in time and documents are not stable, so in order to obtain accurate results one would have to incorporate these ingredients.

Through a simple mean-field type of calculation it is possible to evaluate the power-law exponent [11]. Keeping in mind the evolution rules of this model, the connectivity growth rate of a vertex is

    \frac{\partial k_i}{\partial t} = m \, \Pi(k_i) = \frac{k_i}{2t}        (6.7)

which integrated gives

    k_i(t) = m \left( \frac{t}{t_i} \right)^{1/2}        (6.8)

with m the initial number of edges the added vertex has, and t_i the time when vertex i was added to the network. Thus older vertices, with small t_i, become richer at the expense of the newly added vertices with larger t_i. This phenomenon, rich-get-richer, can be tested in real-life networks and makes possible the analytical determination of the power-law exponent γ. Assuming that vertices are added at a constant rate, the probability that a vertex i has connectivity smaller than k, P[k_i(t) < k], can be written as

    P[t_i > m^2 t / k^2] = 1 - P[t_i \le m^2 t / k^2] = 1 - \frac{m^2 t}{k^2 (t + m_0)}        (6.9)

since after t steps the network has t + m_0 vertices and tm edges. The probability density is

    P(k) = \frac{\partial P[k_i(t) < k]}{\partial k} = \frac{2 m^2 t}{(t + m_0)} \frac{1}{k^3}        (6.10)

which in the long-time limit gives P(k) ~ 2 m^2 k^{-3}, i.e. a power law with exponent γ = 3, independent of m.
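The two growth rules above are straightforward to simulate. The following sketch is mine, not the thesis code; it uses the common trick of sampling a vertex uniformly from the list of all edge endpoints, which automatically selects vertex i with probability k_i / Σ_j k_j.

import random
from collections import Counter

def barabasi_albert(n=20000, m=2, seed=0):
    """Grow a scale-free graph by growth plus preferential attachment.
    Returns a Counter mapping vertex -> degree."""
    rng = random.Random(seed)
    initial = list(range(m))          # start from m_0 = m vertices
    endpoints = []                    # each vertex appears once per edge end
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:        # m distinct targets for the new vertex
            pool = endpoints if endpoints else initial
            chosen.add(rng.choice(pool))
        for t in chosen:
            endpoints.extend((new, t))
    return Counter(endpoints)

if __name__ == "__main__":
    deg = barabasi_albert()
    hist = Counter(deg.values())
    for k in sorted(hist)[:12]:
        print(k, hist[k])             # the tail decays roughly as k^(-3)

The measured exponent from runs of this kind is close to 3, consistent with the mean-field estimate above and with γ_model = 2.9 ± 0.1.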
6.2 Self-Organized Growth

6.2.1 Avalanche Distribution

In the limit of an infinite lattice there exists a critical value c_critic: for c_max > c_critic we can always find an avalanche of infinite size percolating the lattice, while for c_max < c_critic there is no infinite avalanche. Thus c_critic can be identified with the threshold for bond percolation and has the value c_critic = 0.5.

The avalanche distribution can be assumed to be similar to that of directed percolation [53]. It is possible to define forward and backward avalanche probability distributions relative to the time axis [83],

    P_f(s, f_0) = s^{-\tau} \, G_f\!\left( s (f - f_0)^{1/\sigma} \right)        (6.11)

    P_b(s, f_0) = s^{-\tau_b} \, G_b\!\left( s (f - f_0)^{1/\sigma} \right)        (6.12)

where τ and τ_b are the forward and backward avalanche exponents, and σ is a model-dependent exponent. The scaling functions G_f and G_b decay rapidly if the argument is large and are constant if the argument is small enough. For invasion percolation the forward avalanche exponent is known to be τ = 1.60. The equations above give the probability that an avalanche of size s and initiator cost f_0 can take place in the system. The scaling functions have been confirmed by numerical simulations [79, 96]. The overall avalanche distribution has a universal exponent τ_all = 2 [83] for a broad class of self-organized models such as invasion percolation, interface depinning, and a model for evolution.

6.2.2 From Avalanches to Networks

An interesting problem can be posed using the avalanche structure: what if we consider the avalanches as separate entities, part of a network in which the nodes are avalanches and links are present only if two avalanches have a common boundary? This construction is equivalent to contracting each avalanche into a hub connected to all neighboring avalanches (see Figs. 6.7-6.9). The network emerging from this construction can be a model for Internet (or WWW) growth in the following way: the big hubs correspond to a big investment, such that after they have been built there are no more financial resources to create new ones. These large hubs can be naturally identified with the biggest avalanches in the lattice. After the early building phase, in which big avalanches take place and occupy most of the lattice, we see a connection phase, dominated by small avalanches, in which many other groups or individuals connect to the big hubs, since their financial power is smaller and they are unable to build new large hubs. The scale-free type of connectivity is automatically ensured, since most of the small avalanches occur in trapped regions, always in the neighborhood of a large avalanche. The network matures, i.e. becomes scale-free, only during the connection phase; the building phase ensures only the existence of the highly connected hubs. This model differentiates itself from other scale-free network models [12, 13] by evolving naturally to its final state in a nondeterministic fashion, from a SOC process such as BIP.

The invasion process, widely encountered in many aspects of life, turns out to have two very important characteristics: (1) it is SOC, and (2) its avalanche structure has an underlying scale-free network associated with it, the most frequent network topology found in nature or society.

An optimized method for studying BIP is to use Prim's algorithm [10, 37], which is known to generate the minimal spanning tree (MST) on the lattice. We can decompose the tree into sub-trees, such that each sub-tree represents a different avalanche. In all our simulations we have taken the avalanche threshold to be c_max ≃ c_critic, by choosing the injection sites to have the smallest bond cost in the lattice. By contracting each sub-tree into a node and linking it with all neighboring nodes, we create a self-similar network.
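A minimal sketch of the contraction just described is given below. It assumes the avalanche decomposition has already been done and is available as an integer label per lattice site (the decomposition of the Prim tree into sub-trees, described above, is not shown); the function and variable names are mine.

from collections import defaultdict

def avalanche_network(labels):
    """Contract avalanches into nodes of a network.
    labels[x][y] is the avalanche index of lattice site (x, y); two avalanches
    are linked whenever they touch along a lattice bond."""
    L = len(labels)
    neighbors = defaultdict(set)
    for x in range(L):
        for y in range(len(labels[x])):
            a = labels[x][y]
            for dx, dy in ((1, 0), (0, 1)):          # right and up bonds only
                nx, ny = x + dx, y + dy
                if nx < L and ny < len(labels[x]):
                    b = labels[nx][ny]
                    if a != b:
                        neighbors[a].add(b)
                        neighbors[b].add(a)
    return neighbors                                  # avalanche -> linked avalanches

def degree_distribution(neighbors):
    """P(k) of the contracted network, to be compared with P(k) ~ k^(-2.45)."""
    counts = defaultdict(int)
    for node, nbrs in neighbors.items():
        counts[len(nbrs)] += 1
    total = sum(counts.values())
    return {k: c / total for k, c in sorted(counts.items())}

if __name__ == "__main__":
    toy = [[0, 0, 1], [0, 2, 1], [2, 2, 1]]           # a 3x3 toy labeling
    net = avalanche_network(toy)
    print(net)
    print(degree_distribution(net))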
6.2.3 New-network Topology

Following the work of Barabási et al. [12], we have investigated the obtained structure by calculating the probability P(k) that a vertex in the network is connected to k other vertices. As for the WWW, the Internet and many other systems, the probability distribution for our network is a power law, P(k) ~ k^{-γ} (Fig. 6.6), with an exponent very close to the measured WWW outgoing-links exponent γ_out = 2.45 [12], but only if one starts the invasion from the lowest-cost site. It is important to ask what happens if we increase the number of injection sites, observing that network maturity occurs at the end of the evolution, while the tail properties of P(k) are defined by the early growth. An intuitive explanation would be that the big clusters fragment into smaller domains, in this way destroying the scale-free topology. Increasing the number of injection sites is also equivalent to decreasing c_max below the critical value; in this way the big avalanches are hierarchically decomposed into smaller sub-avalanches.

Figure 6.6: P(k) calculated on a 200 x 200 lattice from 5000 runs, for 4, 10 and 30 seeds; the fitted exponent is γ = 2.45. The increase in the tail is due to the small size of the square lattice. The only difference in using bigger lattices is the reduction of finite-size effects on the distribution. Unfortunately, using big lattices makes the simulations much slower because of the sorting.

Figure 6.7: The network (right) obtained from the avalanches (left) using Prim's algorithm with 4 invasion sites.

Figure 6.8: The network (right) obtained from the avalanches (left) using IP with 10 invasion sites.

Figure 6.9: The network (right) obtained from the avalanches (left) using IP with 30 invasion sites.

6.3 Conclusions

In summary, we found that invasion percolation growth can be viewed as a more general scale-free network growth. In the most popular models, in order to obtain a power-law distribution for the connectivity it is necessary to impose an additional rule based on the rich-get-richer paradigm, called preferential attachment. We have found that this rule does not always have to be imposed by hand as part of the growth rules: using a self-organized critical growth process (invasion percolation in our case), a scale-free network is grown, and the preferential attachment can be extracted a posteriori. The scale-free network constructed on the avalanches has an exponent γ = 2.4, close to the WWW outgoing-links exponent γ_WWW = 2.45.

Chapter 7

Outlook

Disordered materials have become the focus of great interest in research. Amorphous materials, like glasses, polymers, gels, colloids, ceramic superconductors and random alloys or magnets, do not have a homogeneous microscopic structure. At a microscopic level the system's properties vary from site to site, and this randomness adds to the complexity and the richness of the properties of these materials. The dynamical behavior is particularly challenging; relaxation in disordered systems generally follows an unusual time-dependent trajectory. Our work is theoretical in nature, but the questions we address are relevant to a broad area of interdisciplinary research, involving studies of the physics, chemistry, mathematics, biology and engineering aspects of random systems.

Nature is rather good at producing statistically non-trivial objects. One of the first features encountered in surfaces is self-similar (fractal) scaling. Later, invasion percolation was proposed as a new dynamical percolation theory in which one could study the behavior of such interfaces. Another feature often encountered in the properties of surfaces is self-affine scaling of the surface wandering. Such a phenomenon can arise in fracture processes or cracks (the fracture lines of paper have been claimed to belong to the Kardar-Parisi-Zhang universality class in 2D) and, in a somewhat more esoteric context, when studying domain walls in Ising ferromagnets. The latter problem is formally equivalent to perfect plasticity in random systems, and is an example of minimum energy interfaces. The mapping between KPZ and the directed polymer in random media (DPRM) makes KPZ accessible through numerical simulations. As described in Chapter 3, in graph theory language the DPRM problem is equivalent to finding the shortest path on a random weighted network. One advantage of making contact with the computer science community is that useful strategies exist for optimizing the algorithms' efficiency. Numerical simulations show a change in behavior when using strong disorder for DPRM.
The polymers obtained in this limit are no longer part of the KPZ/DPRM class; rather, judging from the extracted fractal dimension, they belong to the IP class. The disorder dependence can be studied further (see Chapter 4) using a generalized algorithm.

The random field Ising model, one of the most important models for phase transitions in disordered systems, has long been believed to be universal. Numerical simulations in finite dimensions confirmed that to be true, while RG calculations did not even take into consideration the possibility of disorder-dependent properties. Our analytical mean-field calculations show that the universality is broken in the ground state, while the model at finite temperature is universal. Another remarkable fact is that calculations on the Cayley tree for the random field Ising model confirm the universality hypothesis, leading to the conclusion that the finite connectivity is the source of the universality. A further direction would be to perform numerical simulations for finite-dimensional systems using our custom-made random distributions. It is known that at the critical point in disorder space the avalanche distribution is a power law, which is the only indication of the transition. The lack of precision in determining the critical point suggests that one should look for other properties, such as percolation effects, which would give a better understanding of the transition's nature.

The random network model in the last chapter shows that scale-free random networks can be obtained from the simplest self-organized critical process, invasion percolation. A natural extension would be to study the random networks that could be similarly obtained from the random field Ising model, since that model has a much richer phase diagram, with the avalanche distribution changing as a function of disorder. In order to complete the study, we would also have to investigate the evolution of the connectivity and the robustness of the network.

APPENDICES

Appendix A

Definitions

A.1 Graph definitions

Graph: A graph G(V, E) is a collection of two sets: (1) V(G) = {v_1, v_2, ..., v_n}, a vertex set, and (2) E(G) = {e_1, e_2, ..., e_m}, an edge set. Each edge has two end-points. If the end-points are equal the edge is called a loop. We denote an edge e = (ij) ∈ E(G) with end-points i, j ∈ V(G).

Weighted Graph: A graph G(V, E) in which we assign to each bond (ij) a cost c_ij ∈ R.

Directed Graph: The graph G(V, E) is called directed if each edge is an ordered pair (ij), with i the tail and j the head.

Connected Graph: The graph G(V, E) is called connected if there is a path between any two vertices v_i and v_j; otherwise it is disconnected.

Tree: A tree is a connected graph with no cycles.

Spanning Tree: A spanning tree of a graph G(V, E) is a sub-graph T(V, E'), E' ⊆ E, with the tree property.

Distance: If there is a path in G between two vertices s and t, we call the distance from s to t, written d_st or d(s, t), the length of the least-cost path between s and t. If there is no such path in G then d_st = ∞:

    d_{st} = \min \sum_{(ij) \in \mathrm{path}} c_{ij}        (1.1)

All the graphs used in our research are finite and undirected. A lattice is a regular graph with precise rules for assigning bonds between vertices. For the simulations we used the well known square, cubic and triangular lattices. The random costs c_ij are drawn from a uniform distribution on [0, 1]. All the algorithms used in the simulations treat the graph as a rich environment which can be further characterized.
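A lattice graph of this kind can be represented very simply. The sketch below is my own choice of representation (an adjacency list in Python), not the thesis code; it builds an L x L square lattice with uniform random bond costs in [0, 1].

import random

def square_lattice(L, seed=0):
    """Build an L x L square lattice as an adjacency list.
    graph[v] is a list of (neighbor, cost) pairs; vertices are numbered v = x + L*y
    and the bond costs c_ij are uniform random numbers in [0, 1]."""
    rng = random.Random(seed)
    graph = {v: [] for v in range(L * L)}
    for x in range(L):
        for y in range(L):
            v = x + L * y
            for dx, dy in ((1, 0), (0, 1)):            # right and up bonds
                nx, ny = x + dx, y + dy
                if nx < L and ny < L:
                    w = nx + L * ny
                    c = rng.random()
                    graph[v].append((w, c))            # undirected: store both ways
                    graph[w].append((v, c))
    return graph

if __name__ == "__main__":
    g = square_lattice(5)
    print(len(g), sum(len(nbrs) for nbrs in g.values()) // 2)   # 25 vertices, 40 bonds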
We define the two most important trees, the minimum spanning tree (MST) and the shortest path tree (SPT).

Minimum Spanning Tree: The minimum spanning tree (MST) on a weighted graph G(V, E) is the spanning tree with minimum total weight E_MST,

    E_{MST} = \min \sum_{(ij) \in \mathrm{tree}} c_{ij}        (1.2)

The minimum in (1.2) is taken over all spanning trees. Prim's algorithm generates the MST on any graph.

Shortest Path Tree: The shortest path tree (SPT) on a weighted graph G(V, E) is the spanning tree rooted at a vertex s such that any path on the tree, d_si, is the minimum distance between vertex i and vertex s,

    d_{si} = \min \sum_{(ij) \in \mathrm{path}} c_{ij}        (1.3)

The minimum in (1.3) is taken over all possible paths between the two considered vertices. It is known that Dijkstra's algorithm generates the SPT on any graph with positive weights.

Figure A.1: A connected graph. The dotted and solid lines are the edges of the graph. Between the vertices s and t we have drawn a path (solid lines) and the Euclidean distance (long dashed line) l_st.

Following these definitions we can clearly see that between any two vertices on the lattice we have a Euclidean distance,

    l_{st} = \sqrt{(x_s - x_t)^2 + (y_s - y_t)^2}        (1.4)

where x_{s,t} and y_{s,t} are the coordinates of the vertices s and t, and a path, of length d_st. The number of bonds in the path, N_st, and the Euclidean distance l_st are related to one another through the fractal dimension d_f of the path,

    N_{st} \sim l_{st}^{d_f}        (1.5)

Appendix B

Greedy Algorithms

There are a large number of well developed graph algorithms in computer science [3, 31, 57, 80, 89] which are useful in the study of disordered systems [95]. One advantage of making contact with the computer science community in this area is that useful strategies exist for optimizing the efficiency of these algorithms. In many cases algorithm libraries exist (e.g. [78]), although many of these algorithms are rather simple, in which case writing one's own code is not too arduous. However, it is important to check that one is using the optimal strategy, as there are enormous differences in speed between some of the "generic" implementations and the optimal implementations. We discuss two classes of graph algorithms [3, 31, 57, 80, 89]: shortest path algorithms (Section B.1) and minimal cost spanning tree algorithms (Section B.2). We also clarify the relation of these methods to standard [59, 61, 62, 78, 112, 113] and more recent algorithms [52, 87, 95, 100] for percolation and minimal path problems. Since it has the same flavor, the non-equilibrium dynamics of the Random Field Ising Model [76] is discussed in Section B.3.

These greedy algorithms [31, 80, 89] work by choosing moves which are locally optimal. This locality condition makes their implementation very efficient. Although greedy methods solve some optimization problems exactly, they are also useful as approximate methods for hard computational problems. Greedy algorithms are similar in spirit to hottest-bond [35, 39] or extremal [9, 84, 103, 116] algorithms, as will be discussed briefly in Section B.4. Two other physically important classes of graph algorithms [15, 64, 86, 95] (namely flow [3] and matching [80] methods) rely upon efficient implementations of some of the algorithms described here.

B.1 Minimal Cost Path

Consider a graph G(V, A) containing a vertex set V and an arc set A, with Size[V] = N and Size[A] = E. The pair (ij) ∈ A identifies the arc between vertices i ∈ V and j ∈ V. A connected graph has sufficient arcs such that a connected path exists between any two vertices.
Now each bond is assigned a cost c_ij. We seek the minimal cost path between a site s and any site i; the cost of this path is d_si. To identify the path we also need a "predecessor index" pred_i, and a "label index" label_i which has only two possible values: 1 if the site has been visited by the algorithm, so that its path cost is known, or 0 if the site is unvisited. Dijkstra's method [31, 110] is an example of a greedy algorithm which accepts the best point of advance on a "growth front".

Algorithm: Minimum Cost Path (Dijkstra's Algorithm)

1. Initially:
   d(s) = 0 and d(i ≠ s) = ∞
   label_s = 1 and label_{i≠s} = 0
   pred_s = 0

2. CONTINUE if there are any unlabeled vertices, STOP otherwise.
   For each unlabeled vertex i adjacent to s we update d(i):
   d(i) = min{ d(i), d(s) + c_si }
   and set pred_i = s whenever d(i) is decreased.

3. Let i be the unlabeled vertex with the smallest overall distance,
   d(i) = min{ d(j) | d(j) < ∞ and label_j = 0 }.
   Set label_i = 1, s = i, and GO TO Step 2.

The critical and nontrivial programming step in this algorithm is maintaining an ordered list of "active" or potential growth sites. That is, we must keep track of the sites at the growth front and choose among them the point of advance that costs the least. It is straightforward to prove that this program finds the minimal distance between the source and all points in the graph. Dijkstra's algorithm is O(N^2) for generic implementations and O(E log E) for implementations that maintain a heap of the sites at the growth front. However, for sparse graphs we use a priority list based on buckets, which is O(E) in practice. An example of Dijkstra's algorithm is presented in Fig. B.1.

Figure B.1: Example of Dijkstra's algorithm on a 5 x 5 square lattice. Only the tree bonds are drawn. In the lower right corner of each square is the step number, while in the middle of each square we have written the distance to that vertex, i.e. the sum of the costs of all the bonds on the path. The random bond costs were chosen as integers in [1, 10].

Dijkstra's algorithm generates undirected shortest paths with "overhangs" [28, 95, 100]. The transfer matrix method is an alternative shortest path algorithm [62], which works for solid-on-solid interfaces that have no overhangs [95]. In fact, if we restrict Dijkstra's algorithm to layer-by-layer growth it is exactly the same as the transfer matrix method. The interface energy is equivalent to the distance in Dijkstra's algorithm.
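A compact realization of the steps above is sketched below. It is my own illustration, not the thesis code, and it uses a binary heap (Python's heapq) rather than the bucket-based priority list described in the text; it builds its own small random lattice so that it is self-contained.

import heapq, random

def dijkstra(graph, source):
    """Shortest-path tree from `source` on an adjacency list
    graph[v] = [(neighbor, cost), ...] with positive costs.
    Returns (dist, pred); a heap plays the role of the ordered growth-front list."""
    dist = {v: float("inf") for v in graph}
    pred = {v: None for v in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    visited = set()                       # the "label index"
    while heap:
        d, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        for w, c in graph[v]:
            if d + c < dist[w]:
                dist[w] = d + c
                pred[w] = v
                heapq.heappush(heap, (dist[w], w))
    return dist, pred

if __name__ == "__main__":
    # small self-contained test on a random 5 x 5 square lattice
    rng = random.Random(0)
    L = 5
    graph = {v: [] for v in range(L * L)}
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):
                if x + dx < L and y + dy < L:
                    v, w = x + L * y, (x + dx) + L * (y + dy)
                    c = rng.randint(1, 10)            # integer costs as in Fig. B.1
                    graph[v].append((w, c))
                    graph[w].append((v, c))
    dist, pred = dijkstra(graph, 0)
    print(dist[L * L - 1])                # minimal path cost from corner to corner

Changing the key pushed onto the heap from d + c to just the bond cost c turns the same loop into Prim's algorithm of Section B.2, which is the invasion algorithm.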
B.2 Minimal Spanning Trees

We seek a tree that reaches each node of a connected graph and that minimizes Σ_{tree} c_ij. Prim's algorithm and Kruskal's algorithm are two methods for finding the minimal spanning tree [31, 110]. Prim's algorithm is very similar in structure to Dijkstra's algorithm, although the physics is very different. In Prim's algorithm we start by choosing the lowest cost bond in the graph and growing outward from this seed site. This leads to a growth front which sweeps through the lattice.

Algorithm: Minimum Cost Spanning Tree (Prim's Algorithm)

1. Initially:
   d(s) = 0 and d(i ≠ s) = ∞
   label_s = 1 and label_{i≠s} = 0

2. CONTINUE if there are any unlabeled vertices, STOP otherwise.
   For each unlabeled vertex i adjacent to s we update d(i):
   d(i) = min{ d(i), c_si }

3. Let i be the unlabeled vertex with the smallest overall distance,
   d(i) = min{ d(j) | d(j) < ∞ and label_j = 0 }.
   Set label_i = 1, s = i, and GO TO Step 2.

In Fig. B.2 we show an example of Prim's algorithm on a 5 x 5 square lattice, where for simplicity we have chosen the random costs to be integers.

Figure B.2: Example of Prim's algorithm. Only the tree bonds are drawn. In the lower right corner of each square is the step number, while in the middle of each square we have written the cost of the bond used to reach that vertex.

In Dijkstra's method one chooses the site at the growth front that has the minimum cost path to the source, while in Prim's algorithm one simply chooses the minimum cost bond. Both of these algorithms lead to spanning trees, but Prim's has lower total cost (it is less constrained). In Dijkstra's algorithm one is checking the cost of the path from the tested site all the way back to the "source" or "root" of the tree; it is thus more non-local. As has been noted recently [10], Prim's algorithm is trivially equivalent to the invasion algorithm for percolation. Note that this corresponds to a "perfectly compressible" fluid and does not consider "trapping", which is important in more realistic models of fluid invasion in porous materials [112]. An alternative method for finding the minimal spanning tree is Kruskal's algorithm, which nucleates many trees and then lets them merge successively into one tree by the end of the procedure. Using the same strategy as for Dijkstra's method (i.e. a priority list based on buckets), we have an implementation of Prim's algorithm which is O(E).

B.3 Spin-flip Dynamics

Consider the zero temperature Random Field Ising Model, where we assign to each vertex i a spin and also a random field. The interaction between spins is modeled through the bonds. We are interested in finding the sequence of spin flips as the applied field H is swept from large negative to large positive values. As discussed in Chapter 5, the Hamiltonian reads

    \mathcal{H} = - \sum_{\langle ij \rangle} J_{ij} S_i S_j - \sum_i (h_i + H) S_i        (2.1)

where the spin-spin interactions all have the same strength, J_ij = J, and the random fields h_i are distributed according to ρ(h_i). The spin flips are governed by the change in sign of the local effective field; that is, a spin flips when

    h_i^{eff} = J \sum_{j \in nn(i)} S_j + h_i + H        (2.2)

changes sign. The sign change can occur in two different ways: the spin flip can be part of an avalanche, so that the spin flips when some of its neighbors flip, or the spin can be the avalanche initiator, in which case the flip is triggered by the increase of the external field H. This model has been used by H. Ji and M.O. Robbins [66] for fluid invasion and interface movement in random media, and also by Sethna et al. [33, 76, 90, 101] as a model for Barkhausen noise and noisy hysteresis loops.

The simplest method to flip the spins is to store the spins and the corresponding random fields in a list and apply an increasing field with a constant step ΔH. First we check which spins have flipped, then we check their neighbors, and so on until there are no spins left to flip.

Algorithm: Brute Force Algorithm

1. Initially: H = large negative, ΔH = fixed.

2. CONTINUE if there are any unflipped spins, STOP otherwise.
   Calculate the new effective fields h_i^{eff} at the current field H.
   (a) Flip all the spins that need to flip and re-calculate the h^{eff} of their neighbors.
   (b) Repeat the previous step (a) until there are no spins left to flip.

3. H := H + ΔH, GO TO Step 2.

This algorithm continues until there are no spins left to flip in the lattice.
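The brute-force sweep above is simple enough to write down in full. The sketch below is my own illustration, not the thesis code; Gaussian random fields of width R stand in for the disorder distribution, and the lattice size, step and names are arbitrary choices.

import random

def hysteresis_sweep(L=20, J=1.0, R=1.5, dH=0.02, seed=0):
    """Brute-force T = 0 RFIM sweep: raise H in fixed steps and relax all
    unstable spins after each step. Returns the (H, magnetization) curve."""
    rng = random.Random(seed)
    N = L * L
    h = [rng.gauss(0.0, R) for _ in range(N)]
    S = [-1] * N                                   # start fully down at large negative H
    def neighbors(i):
        x, y = i % L, i // L
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < L and 0 <= y + dy < L:
                yield (x + dx) + L * (y + dy)
    def local_field(i, H):
        return J * sum(S[j] for j in neighbors(i)) + h[i] + H
    curve = []
    H = -(4.0 * J + 5.0 * R)                       # safely below the first flip
    while H < 4.0 * J + 5.0 * R:
        unstable = [i for i in range(N) if S[i] < 0 and local_field(i, H) > 0]
        while unstable:                            # relax: each flip may trigger neighbors
            for i in unstable:
                S[i] = 1
            unstable = [i for i in range(N) if S[i] < 0 and local_field(i, H) > 0]
        curve.append((H, sum(S) / N))
        H += dH
    return curve

if __name__ == "__main__":
    for H, m in hysteresis_sweep()[::100]:
        print(round(H, 2), round(m, 3))            # the rising branch of the loop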
It is possible to change the increment ΔH; the only problem is that if the increment is too large, several avalanches may occur within one step, resulting in a loss of information about the individual avalanches. The running time is of order O(NXT), where N is the number of spins, X is the number of fields at which we measure the magnetization, and T is the average time needed to check the neighbors of a flipped spin. It is possible to refine this method by computing the field increments ΔH_k such that only one avalanche occurs at a time. This can be done by keeping all the spins' effective fields in an ordered list of variable length, from which the spins that have flipped are removed. The avalanche initiator always has the largest effective field among the unflipped spins and sits at one end of the list (top or bottom, depending on the ordering), which makes it possible to calculate the next field increment according to

    \Delta H_k = - \max_i \{ h_i^{eff} \}        (2.3)

Algorithm: Effective Field Sorted List

1. Initially: for H = 0 compute and order the h_i^{eff}.
   H = - max{ h_i^{eff} }
   h_i^{eff} := h_i^{eff} + H

2. CONTINUE if there are any unflipped spins, STOP otherwise.
   (a) Flip all the spins that need to flip and re-calculate the h^{eff} of their neighbors.
   (b) Repeat the previous step (a) until there are no spins left to flip.

3. ΔH = - max{ h_i^{eff} }
   H := H + ΔH
   GO TO Step 2.

The brute force algorithm is very inefficient compared with the sorted list, which has running time O(N log N) but is memory costly. In order to obtain a very efficient algorithm, Sethna et al. [76] combined the sorted list with a one-bit-per-spin algorithm, described next. The key is to recognize that the random fields only have to be generated along the interface, similar to the invasion percolation (IP) problem. Furthermore, it is possible to compute the probability that a spin will flip given the change in its local effective field. It is then unnecessary to store the random fields, since only the field H, the status of the spin (flipped/unflipped), and the orientation of the spins are needed to calculate the flipping probability. The spins can be stored in single bits, making the algorithm very memory-efficient. The probability P_↓ that a spin is still down at external field H, given that n_↑ of its neighbors are up, is

    P_\downarrow(n_\uparrow, H) = \int_{-\infty}^{-(H + (2 n_\uparrow - z) J)} \rho(h) \, dh        (2.4)

where z is the coordination number of the lattice. To find the field increment ΔH such that only one avalanche occurs is the key of the algorithm. The probability that a spin with n_↑ neighboring up spins does not flip when the field is increased by ΔH is

    P_{unflip} = 1 - P_{flip} = \frac{P_\downarrow(n_\uparrow, H + \Delta H)}{P_\downarrow(n_\uparrow, H)}        (2.5)

Thus the probability P^{none}_{n_↑} that no spin with n_↑ neighboring up spins has flipped after the increase ΔH is

    P^{none}_{n_\uparrow} = \left[ \frac{P_\downarrow(n_\uparrow, H + \Delta H)}{P_\downarrow(n_\uparrow, H)} \right]^{N_{n_\uparrow}}        (2.6)

with N_{n_↑} the total number of unflipped spins with n_↑ neighbors up. Furthermore, the probability that no spin at all has flipped in the interval between H and H + ΔH is

    P^{none} = \prod_{n_\uparrow = 0}^{z} P^{none}_{n_\uparrow}        (2.7)
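Equations (2.4)-(2.7) are easy to evaluate for a concrete disorder distribution. The sketch below is mine, not the thesis code; it takes ρ(h) to be a Gaussian of width R (so that (2.4) is an error function) and treats the counts N_{n_↑} as given.

import math

def P_down(n_up, H, J=1.0, z=4, R=1.0):
    """Eq. (2.4) for a Gaussian rho(h) of width R: probability that a spin
    with n_up neighbors up is still down at field H."""
    x = -(H + J * (2 * n_up - z))
    return 0.5 * (1.0 + math.erf(x / (R * math.sqrt(2.0))))

def P_none(dH, H, N_up, J=1.0, z=4, R=1.0):
    """Eqs. (2.5)-(2.7): probability that no spin flips when H -> H + dH.
    N_up[n] is the number of unflipped spins with n neighbors up."""
    prob = 1.0
    for n in range(z + 1):
        p0, p1 = P_down(n, H, J, z, R), P_down(n, H + dH, J, z, R)
        if N_up[n] > 0 and p0 > 0.0:
            prob *= (p1 / p0) ** N_up[n]
    return prob

if __name__ == "__main__":
    N_up = [200, 120, 60, 15, 5]          # example counts for a z = 4 lattice
    H = -1.0
    for dH in (1e-4, 1e-3, 1e-2):
        print(dH, P_none(dH, H, N_up))    # decreases from 1 as dH grows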
We can now proceed to calculate ΔH. Since the next spin to flip is determined by first choosing n_↑ at random and then randomly choosing a spin with n_↑ neighbors up, one has to generate a random number r from a uniform distribution on [0, 1] such that ΔH is obtained by solving the equation

    P^{none}(\Delta H) = r        (2.8)

The solution is found numerically, using the initial guess

    \Delta H_1 = - \frac{\log(r)}{\Gamma(H)}        (2.9)

Here Γ is the flip rate: in analogy with nuclear decay, if the spins flip with rate Γ the probability that no spin has flipped is expected to be P^{none} = e^{-Γ ΔH}, giving

    \Gamma(H) = - \left. \frac{d \log(P^{none})}{d(\Delta H)} \right|_{\Delta H = 0} = - \sum_{n_\uparrow = 0}^{z} N_{n_\uparrow} \left. \frac{d \log P_\downarrow(n_\uparrow, H + \Delta H)}{d(\Delta H)} \right|_{\Delta H = 0}        (2.10)

so that

    \Gamma(n_\uparrow, H) = N_{n_\uparrow} \, \frac{\rho\!\left( H + (2 n_\uparrow - z) J \right)}{P_\downarrow(n_\uparrow, H)}, \qquad \Gamma(H) = \sum_{n_\uparrow = 0}^{z} \Gamma(n_\uparrow, H)        (2.11)

A second, better guess can be made by looking at the error of the first guess:

    \Delta r = P^{none}(\Delta H_1) - r        (2.12)

    \Delta H_2 = - \frac{\log(r - \Delta r)}{\Gamma(H)}        (2.13)

Together, ΔH_1 and ΔH_2 can be used as input to a root-finding subroutine. The authors of this algorithm found that for very large guesses ΔH, choosing ΔH_1 = 0 and ΔH_2 = 20 works much better.

Algorithm: One Spin Per Bit

1. Generate a random number r uniformly in [0, 1].
2. Pre-calculate P_↓(n_↑, H) from (2.4).
3. Use a root-finding method to calculate ΔH, with ΔH_1 and ΔH_2 given by (2.9) and (2.13) as starting points.
4. H := H + ΔH.
5. Calculate ProbFlip[n_↑], the probability for a spin to flip at the current field H when its n_↑ increases by 1, for n_↑ = 1, ..., z.
6. Calculate the spin flipping rates Γ(n_↑, H) at field H and the total rate Γ(H):

    \Gamma(n_\uparrow, H) = N_{n_\uparrow} \, \frac{\rho\!\left( H + (2 n_\uparrow - z) J \right)}{P_\downarrow(n_\uparrow, H)}, \qquad \Gamma(H) = \sum_{n_\uparrow = 0}^{z} \Gamma(n_\uparrow, H)

7. Choose a number at random, uniformly in [0, Γ(H)], and use it to select n_↑ with probability Γ(n_↑, H)/Γ(H).
8. Locate at random a spin in the lattice with n_↑ neighbors up.
9. START the avalanche with that spin.
10. Flip all the necessary spins:
    (a) Push the first spin onto the queue.
    (b) Pop the top spin off the queue.
    (c) If it is unflipped, flip it and decrease N_{n_↑} := N_{n_↑} - 1; otherwise GO TO Step (e).
    (d) For each unflipped neighbor, find its new n_↑ and update
        N_{n_↑ - 1} := N_{n_↑ - 1} - 1,  N_{n_↑} := N_{n_↑} + 1;
        push the spin onto the queue with probability ProbFlip[n_↑ - 1] as in Step 5.
    (e) If any spins are left in the queue GO TO Step (b).
11. GO TO Step 1 if there are any unflipped spins in the lattice.

An implementation of the algorithm is available at http://www.lassp.cornell.edu/sethna/sethna.html.

B.4 Greedy Dynamics in Disordered Systems

In "greedy" algorithms one chooses and updates locally optimal sites [31, 80, 89, 110]. The greedy strategy is useful in solving exactly some important theoretical and practical optimization problems (e.g. telephone distribution networks), and it forms a general strategy for non-linear optimization. In the physics community the greedy strategy is actually relevant to the dynamics nature chooses, and it has been called extremal dynamics [9, 84, 103, 116]. Extremal dynamics occurs in systems that are driven weakly and so evolve slowly. In these systems there is plenty of time for the system to explore phase space and to choose the optimal way in which to advance. The hottest-bond algorithm for fuse networks [35, 39] and the invasion algorithm for percolation are two important examples of extremal dynamics. However, two key differences between fracture and invasion percolation are load conservation and load redistribution. In the case of fuse networks, and fracture in general, the load (e.g. the applied stress) is conserved, so that when a bond breaks the load it was carrying must be passed on to the other bonds in the network. In contrast, in percolation there is no such load redistribution.
When load is conserved, the loading condition in combination with the load redistribution law determine whether it undergoes a jump discontinuity to an unstable S1816. To this point we have not included earthquake models, evolution models, and sand- pile models in the discussion. These systems have an additional feature in their dynamics, namely renewal, that is, each region in the system may return to its original state given the correct conditions. In fracture models (invasion percolation), a bond remains broken 133 (invaded), however renewal would allow these bonds to reform (reverse their invasion). A popular account of the application of self-organized criticality and extremal dynamics to a wide variety of physical systems is now available [8]. 134 BIBLIOGRAPHY 135 Bibliography [1] V. Abegaokar, B.I. Halperin, and J.S. Langer. Phys. Rev. B, 4:2612, 1971. [2] A. Aharony. Phys. Rev. B, 1823318, 1978. [3] RV. Ahuja, T.L. Magnanti, and 1B. Orlin. SIAM Review, 33, 1991. [4] M. Aizenman, A. Burchard, C.M. Newman, and DB. Newman. Rand. Struct. Alg., 152319, 1999. [5] T. Ala-Nissila, T. Hjelt, J. M. Kosterlitz, and O. Ven‘al'ainen. J. of Stat. Phys, 72:207. 1993. [6] M.J. Alava, P.M. Duxbury, C.F. Moukarzel, and H. Rieger. Phase Transitions and Critical Phenonema Volume 18. Academic Press, 2001. [7] J.C. Anges, D’Auriac, and N. Sourlas. Europhys. Lett., 391473, 1997. [8] P. Bak. How Nature Works. Springer-Verlag, New York, 1996. [9] P. Bak and K. Sneppen. Phys. Rev. Lett., 71, 1993. [10] A.-L. Barabasi. Phys. Rev. Lett., 76:3750, 1996. [l 1] A.-L. Barabasi and R. Albert. Science, 286:509, 1999. [12] A.-L. Barabasi, R. Albert, and H. Jeong. Physica A, 281:69, 2000. [13] A.-L. Barabasi, E. Ravasz, and T. Vicsek. Physica A, 2992559, 2001. [14] A.-L. Barabasi and HE. Stanley. Fractal Concepts in surface growth. Cambridge University Press, 1995. [15] S. Bastea. Phys. Rev. E, 58, 1998. [16] RN. Bhatt and RA. Lee. Phys. Rev. Lett., 48:344. 1982. [17] Béla Bollobas. Random Graphs. Academi Press INC. (London) LTD., 1985. [18] R. Bruinsma. Phys. Rev. B, 30:289, 1984. 136 [19] J .M. Burgers. The Non-linear Diffusion Equation. Reidel, Boston, 1974. [20] S. Tyé and BI. Halperin.‘ Phys. Rev. B, 39:877, 1989. [21] C. Castellano, A. Gabrielli, M. Marsili, M.A. Munoz, and L. Pietronero. Phys. Rev. E, 58:R5209, 1998. [22] C. Castellano, M. Marsili, and L. Pietronero. Phys. Rev. Lett., 8023527, 1998. [23] A. Chakrabarty and R. Toral. Phys. Rev. A, 40:11419, 1989. [24] J. Chayes, L. Chayes, and CM. Newman. Commun. Math. Phys, 101:383, 1985. [25] M. Cieplak, A. Maritan, and J.R. Banavar. Phys. Rev. Lett., 72:2320, 1994. [26] M. Cieplak, A. Maritan, and J.R. Banavar. Phys. Rev. Lett., 76:3754, 1996. [27] M. Cieplak, A. Maritan, and J.R. Banavar. Physica A, 266:291, 1999. [28] M. Cieplak, A. Maritan, M.R. Swift, A. Bhattacharya, A.L. Stella, and J.R. Banavar. J. Phys. A, 28, 1995. [29] J. Cook and B. Derrida. Europhys. Lett., 102195, 1989. [30] J. Cook and B. Derrida. J. Phys. A, 23:1523, 1990. [31] TH. Cormen, C.E. Leiserson, and R.L. Rivest. Introduction to algorithms. MIT press (Cambridge), 1990. [32] LR. da Silva and D. Stauffer. Physica A, 294:253, 2001. [33] K. Dahmen and J .P. Sethna. Phys. Rev. B, 53214872, 1996. [34] L. de Arcangelis, S. Redner, and A. Coniglio. Phys. Rev. B, 3114725, 1985. [35] L. de Arcangelis, S. Redner, and H]. Herrmann. J. de Physique, 46, 1985. [36] D. Dhar, P. Shukla, and JP. Sethna. J. Phys. A, 30:5259, 1997. [37] R. Dobrin and P. M. Duxbury. Phys. Rev. Lett., 86:5076, 2001. 
[38] R. Dobrin, J .H Meinke, and RM. Duxbury. J. of Phys. A: Math. Gen, 35:L247, 2002. [39] RM. Duxbury, RD. Bealy, and PL. leath. Phys. Rev. B, 36, 1987. [40] RM. Duxbury and R. Dobrin. Physica A, 2702263, 1999. [41] RM. Duxbury and J .H. Meinke. Phys. Rev. E, 6403:6112, 2001. [42] J.C. Dyre and TB. Schroder. Rev. Mod. Phys, 72:873, 2000. 137 [43] M. Eden. Proceedings of the Fourth Berkeley Symposionum on Mathematical Statis- tics and Probabilities. edited by F. Neyman (University of California Press, Berke- ley), 1961. [44] SF. Edwards and DR. Wilkinson. Proc. Roy. Soc. London, A381:17, 1982. [45] P. Erdos and A. Re’nyi. Publ. Math. Inst. Hung. Acad. Sci, 5:17, 1960. [46] F. Family. J. Phys. A, 19:L441, 1986. [47] F. Family and T. Vicsek. J. Phys. A, l8:L75, 1985. [48] F. Family and T. Vicsek. Dynamics of Fractal Surfaces. World Scientific, Singapore, 1991. [49] S. Fishman and A. Aharony. J. Phys. C, 12:L729, 1979. [50] BM. Forrest and L.-H. Thang. Phys. Rev. Lett., 64:1405, 1990. [51] A. Gabrielli, R. Cafiero, and G. Caldarelli. J. Phys. A, 31:7429, 1998. [52] P. Grassberger. J. Phys. A, 25, 1992. [53] P. Grassberger and K. Sundermeyer. Phys. Lett. B, 771220, 1978. [54] T. Halpin-Healy. Phys. Rev. Lett., 62:442, 1989. [55] T. Halpin-Healy. Phys. Rev. A, 42:711, 1990. [56] T. Halpin-Healy and Y.-C. Zhang. Phys. Rep., 254:215, 1995. [57] F. Harary. Graph Theory. Addison-Weseley, London, 1969. [58] CL. Henlet. Phys. Rev. Lett., 71:2741, 1993. [59] H]. Herrmann, D.C. Hong, and HE. Stanley. J. Phys. A, 17, 1984. [60] J. Hirsch and IV Jose. Phys. Rev. B, 2215339, 1980. [61] J. Hoshen and R. Kopelman. Phys. Rev. B, 14, 1976. [62] DA. Huse and CL. Henley. Phys. Rev. Lett., 54, 1985. [63] 1.2. Imbrie. Phys. Rev. Lett., 53:1747, 1984. [64] DJ. Jacobs and B. Hendrickson. J. Comput. Phys, 137, 1997. [65] S. Janson, T. Luczak, and A. Rucinski. Random Graphs. John Wiley & Sons, INC., 2000. [66] H. Ji and MO. Robbins. Phys. Rev. B, 46:14519, 1992. 138 [67] B. Jovanovic, S.V. Buldyrev, S. Havlin, and HE. Stanley. Phys. Rev. E, 50:2403, 1994. [68] M. Kadar and Y.-C. Zhang. Phys. Rev. Lett., 58:2087, 1987. [69] M. Kardar, G. Parisi, and Y.-C. Zhang. Phys. Rev. Lett., 582889, 1986. [70] J. Kertesz, V.K. Horvath, and F. Weber. Fractals, 1:67, 1992. [71] J.M. Kim and J.M. Kosterlitz. Phys. Rev. Lett., 6222289, 1989. [72] J.M. Kim, MA. More, and A]. Bray. Phys. Rev. A, 4422345, 1991. [73] RR. King, S. V. Buldyrev, N.V. Dokholyan, S. Havlin, Y. Lee, G. Paul, and HE. Stanley. Physica A, 274260, 1999. [74] RR. King, J.S. Andrade Jr., 8. V. Buldyrev, N. Dokholyan, Y. Lee, S. Havlin, and HE. Stanley. Physica A, 266:107, 1999. [75] T. Knetter, G. Schroder, M.J. Alava, and H. Rieger. Europhys. Lett., 55:719, 2001. [76] MC. Kunz, O. Petrovic, K.A. Dahmen, B.W. Roberts, and JP. Sethna. Computer Simulations, July/Augustz73, 1999. [77] M. Missing and H. Kinzelbach. Phys. Rev. Lett., 782903, 1997. [78] P. Leath. Phys. Rev. Lett., 36, 1986. [79] H. Leschhom and L.—H. Tang. Phys. Rev. E, 49:1238, 1994. [80] L. Lovasz and MD. Plummer. Matching Theory. North-Holland, Amsterdam, 1986. [81] U.M.B. Marconi and Y.-C. Zhang. J. of Stat. Phys, 61:885, 1990. [82] M. Marsili and Y.-C. Zhang. Phys. Rev. E, 57:4814, 1998. [83] S. Maslov. Phys. Rev. Lett., 742562, 1995. [84] S.L. Miller, W.M. Miller, and RI. McWorter. J. Appl. Phys, 73, 1993. [85] C. Moore and ME]. Newman. Phys. Rev. E, 61:4678, 2000. [86] C. Moukarzel. J. Phys. A, 29, 1996. [87] C. Moukarzel. Int. J. Mod. Phys. C, 8, 1998. [88] CM. Newman and D.L. Stein. J. Stat. 
Phys, 82:1113, 1996. [89] CH. Papadimitriou and K. Steiglitz. Combinatorial Optimization: ALgorithms and Complexity. Prentice Hall, London, 1982. [90] O. Petrovic, K. Dahmen, and J .P. Sethna. Phys. Rev. Lett., 7524528, 1995. 139 [91] A. Pimpinelli and J. Villain. Physics of Crystal Growth. Cambridge University Press, 1998. [92] M. Porto, N. Schwartz, S. Havlin, and A. Bunde. Phys. Rev. E, 602R2448, 1999. [93] H. Rieger. Phys. Rev. Lett., 8124488, 1998. [94] Heiko Rieger. Frustrated Systems: Ground State Properties via Combinatorial Opti— mization, volume 501 of Lecture Notes in Physics. Springer Verlag, Editor J. Kertesz and I Kondor, 1997. [95] Heiko Rieger. Advances in Computer Simulation. Lecture Notes in Physics. Springer Verlag, Editor J. Kertesz and I Kondor, 1998. [96] S. Roux and E. Guyon. J. Phys. A, 2223693, 1989. [97] S. Roux, A. Hansen, and EL. Hinrichsen. J. Phys. A, 242L295, 1991. [98] S. Sabhapandit, P. Shukla, and D. Dhar. J. Stat. Phys, 982103, 2000. [99] AB. Scheidegger. The physics of flow in porous media. University of Toronto Press, Toronto, 3rd edition, 1984. [100] N. Schwartz, A.L. Nazaryev, and S. Havlin. Phys. Rev. E, 5827642, 1998. [101] JP Sethna, K. Dahmen, and CR. Myers. Nature, 4102242, 2001. [102] AP. Sheppard, M.A. Knackstedt, W.V. Pinczewski, and M. Sahimi. J. Phys. A, 322L521, 1999. [103] K. Sneppen. Physica A, 221, 1995. [104] D. Stauffer and A. Aharony. Introduction to Percolation Theory. Taylor and Francis, London, 1995. [105] MR. Swift, A.J. Bray, A. Maritan, M. Cieplak, and J .R. Banavar. Europhys. Lett., 382273, 1997. [106] MR. Swift, A. Maritan, M. Cieplak, and J.R. Banavar. J. Phys. A, 2721525, 1994. [107] N. Vandewalle, P. Boveroux, A. Minguet, and M. Asloos. Physica A, 2552201, 1998. [108] DJ. Watts. Small Worlds: The Dynamics of Networks between Order and Random- ness. Princeton University Press, 1999. [109] DJ. Watts and SH. Strogatz. Nature, 3932440, 1998. [110] D.B. West. Introduction to Graph Theory. Prentice Hall, 1996. [111] D. Wilkinson. Phys. Rev. A, 3421380, 1986. 140 [112] D. Wilkinson and M. Barsony. Phys. A, 17, 1984. [113] D. Wilkinson and J.F. Willemsen. J. Phys. A, 1623365, 1983. [114] K.G. Wilson and J. Kogut. Phys. Rep., 12:75, 1974. ' [115] AP. Young. Spin Glasses and Random Fields. World Scientific, Singapore, 1997. [116] 8.1. Zaitsev. Physica A, 189, 1992. [117] RM. Ziff, S.R. Finch, and VS. Adamchik. Phys. Rev. Lett., 7923447, 1997. 141 S llitilllitiil