This is to certify that the dissertation entitled ON GRAPHS WITH LARGE NUMBERS OF SPANNING TREES presented by Andrew Chen has been accepted towards fulfillment of the requirements for the Ph.D. degree in Computer Science. Major professor. Date: 2005.

ON GRAPHS WITH LARGE NUMBERS OF SPANNING TREES

By

Andrew Chen

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Computer Science

2005

ABSTRACT

ON GRAPHS WITH LARGE NUMBERS OF SPANNING TREES

By

Andrew Chen

Communication network topology designs modeled by graphs with a large number of spanning trees tend towards higher reliability. Given the number of vertices n and the number of edges m, we have studied graphs with a large number of spanning trees and have characterized (n, m) graphs that have the maximum number of spanning trees. We studied (n, n + 4) graphs, a formerly open case of this problem, as well as other aspects of this problem. Let t(G) denote the number of labeled spanning trees of a connected graph G. Given G, it is known how to compute t(G). However, little is known about the extremal version of the problem, that is: given the number of vertices n and the number of edges m, find a connected (n, m) graph G such that t(G) ≥ t(H), where H is any other connected (n, m) graph. Such a graph G is called a t-optimal graph.
We present a summary of the known results on t-optimal graphs, present some new results for the case of (n, n + 4) t-optimal graphs, and begin to develop techniques for exploring (n, m) graphs in general. Let t(n, m) be the number of spanning trees that a t-optimal (n, m) graph has. We present (obtained using the software nauty) the values of t(n, m) for n ≤ 12. We also present histograms of t(G) for G ∈ (n, n + 4) for n < 14. Additionally we present information about the few non-isomorphic t-optimal graphs that we have encountered. The Feussner formula provides a well-known recursive relationship for the number of spanning trees of a graph G in terms of the deletion and contraction of an edge e, namely, t(G) = t(G − e) + t(G · e). A common technique for finding the number of spanning trees of a graph is finding the minor of a matrix form of the graph. However, the impact of relatively simple changes to a graph (such as edge contraction or edge deletion) is not readily visible when that technique is used. We show a technique for computing the number of spanning trees of a graph which clearly shows the impact of graph operations such as edge deletion, subdivision, and contraction on the number of spanning trees of a graph. Additionally we extend that result to include edge addition. Our technique makes use of reductions of the graph by repeatedly contracting cycles, and is demonstrated to be advantageous over current techniques for the computation of t(G) when the modeling of such graph operations is desired.

ACKNOWLEDGMENTS

"If I have seen further it is by standing on ye shoulders of Giants" - Isaac Newton

First and foremost, I could not have made the progress I did on this problem if it had not been for those that have worked on it and in this field before. They are too numerous to mention, but a small sampling of them can be found in the bibliography.
Gratitude is due to Professor McKay for making the nauty software [1] available. For every other piece of software, there may have been a (perhaps more time consuming) alternative that could have been used, but nauty exists in a class unto itself. Without the suggestion from my advisor, Dr. Esfahanian, that I consider this problem, I might very well have bitten off far more than I could chew by pursuing another problem, so great thanks is due to him, not only for suggesting the problem, but for great understanding, patience, and wise counsel with regard to both this and other things.

Friends and acquaintances far too numerous to mention have been very encouraging in both reminding me to work on this and in reminding me to take a break from it. For those who gave me appropriate amounts of encouragement of each, I thank them greatly. Connie deserves special mention, as do the residents of Owen Graduate Center, the Society of Success and Leadership's 2004–2005 Success Networking Team, and the campus ministry of The People's Church. All of these groups, the people in them, and others helped me to cultivate the appropriate mindset to carry me through this task and retain my sanity. The task of naming everyone that helped is an arduous one; I am tempted to dump my entire address book here. That might be excessive. I wouldn't feel right if I didn't mention my parents, whose concern for my progress was genuinely felt, and that higher power which no words can completely describe.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
1 Introduction
1.1 Problem Statement
1.2 Definitions And Notations
1.3 Executive Summary of Contributions
1.4 Overview
2 Previous Work
2.1 General Theorems
2.2 The Case of Dense Graphs
2.3 The Case of Sparse Graphs
2.4 Literature Review Summary
3 Preliminary Observations And Findings
3.1 Previous Conjectures Revisited
3.2 Experimental Data
3.3 Motivation for Pursuing Previous Conjectures
3.4 Consequences of Conjectures
3.5 The Average Number of Spanning Trees upon Deletion Or Contraction
4 Further Results
4.1 A Lemma Concerning Distillations
4.2 Chain Classes
4.3 The Formula for t(n, n + 3)
4.4 Theoretical Results
4.5 Empirical Results
4.6 The Impact of Edge Addition
4.6.1 The Impact of Cojoining Vertices
4.6.2 The Reverse Application of The Feussner Formula to A Cycle Contraction Formula
5 Future Work
APPENDICES
A Figures
B Tables
BIBLIOGRAPHY

LIST OF FIGURES

1.1 Examples of graphs that are not distillations
1.2 Examples of trivial and non-trivial chains
3.1 A cube with two parallel edges swapped. This graph is isomorphic to Figure A.6.
3.2 An embedding of the t-optimal graph with 8 vertices and 12 edges. The edges are labeled in the form av1,v2 : e, where v1 and v2 are the numbers corresponding to the adjacent vertices and e is the number associated with that edge.
4.1 In a θ graph, possible ways to have two chains with one edge removed.
4.3 An example of two cycle contractions of a graph.
4.4 An example of the cojoining of two vertices.
4.5 An expression for the number of trees of Figure 4.4(a), where xi,j is the length of the partial chain from vertex i to vertex j.
4.6 An expression for the number of trees of Figure 4.4(b), where xi,j is the length of the partial chain from vertex i to vertex j.
4.7 A matrix.
5.1 The t-optimal (12, 18) graph.
A.1 A complete 5-regular 1-partite graph.
A.2 A complete 3-regular 2-partite graph.
A.3 A complete 4-regular 3-partite graph. Note how every vertex is connected to any vertex not in the same partition.
A.4 A complete 4–5-regular 4-partite graph.
A.5 The Petersen graph, a t-optimal graph with 10 vertices and 15 edges.
A.6 A spring embedding of the t-optimal graph with 8 vertices and 12 edges. This graph is more commonly drawn as in Figure 3.1.
A.7 An embedding of the t-optimal graph with 9 vertices and 13 edges.
A.8 An embedding of the t-optimal graph with 10 vertices and 14 edges.
A.9 An embedding of the t-optimal graph with 11 vertices and 15 edges.
A.10 An embedding of the t-optimal graph with 12 vertices and 16 edges.
A.11 An embedding of the t-optimal graph with 13 vertices and 17 edges.
A.12 A histogram of non-isomorphic graphs and their number of spanning trees with 8 vertices and 12 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.
A.13 A histogram of non-isomorphic graphs and their number of spanning trees with 9 vertices and 13 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.
A.14 A histogram of non-isomorphic graphs and their number of spanning trees with 10 vertices and 14 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.
A.15 A histogram of non-isomorphic graphs and their number of spanning trees with 11 vertices and 15 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.
A.16 t-optimal (8, 18) graphs.
A.17 t-optimal (10, 27) graphs.
A.18 t-optimal (11, 18) graphs.
A.19 t-optimal (11, 42) graphs.
A.20 t-optimal (12, 22) graphs.
A.21 The result of applying Algorithm 1 0 times to the Petersen graph.
A.22 The result of applying Algorithm 1 1 time to the Petersen graph.
A.23 The result of applying Algorithm 1 2 times to the Petersen graph.
A.24 The result of applying Algorithm 1 3 times to the Petersen graph.
A.25 The result of applying Algorithm 1 4 times to the Petersen graph.
A.26 The result of applying Algorithm 1 5 times to the Petersen graph.
A.27 The result of applying Algorithm 1 6 times to the Petersen graph.
A.28 The result of applying Algorithm 1 7 times to the Petersen graph.
A.29 The result of applying Algorithm 1 8 times to the Petersen graph.
A.30 The result of applying Algorithm 1 9 times to the Petersen graph.
A.31 The result of applying Algorithm 1 10 times to the Petersen graph.
A.32 The result of applying Algorithm 1 11 times to the Petersen graph.
A.33 The result of applying Algorithm 1 12 times to the Petersen graph.
A.34 The result of applying Algorithm 1 13 times to the Petersen graph.
A.35 The result of applying Algorithm 1 14 times to the Petersen graph.
A.36 The result of applying Algorithm 1 15 times to the Petersen graph.

LIST OF TABLES

2.1 Summary of some values of k = m − n for t-optimal (n, m) graphs and their current status as of this writing.
2.2 Summary of ranges and their current status as of this writing.
3.1 Tuples of chain lengths for t-optimal graphs with distillation and labeling as found in Figure 3.2. Each position in the 12-tuple is the length of a chain that corresponds to a labelled edge in Figure 3.2 as per ((a1,2 : 0), (a2,3 : 1), (a3,4 : 2), (a4,1 : 3), . . . ). The sum of the entries in the tuple is the number of edges.
4.1 The piecewise formula for the number of spanning trees of t-optimal (n, n + 3) graphs. The left column is the formula, and the right is the remainder upon division of n − 6 by 9.
B.1 A table of some of the results we obtained. (I)
B.2 A table of some of the results we obtained. (II)
B.3 Tuples of chain lengths, number of spanning trees, and diagram figures for t-optimal graphs with distillation and labeling as found in Figure A.5.

Chapter 1

Introduction

Communication network topology designs modeled by graphs with a large number of spanning trees tend towards higher reliability [2]. Graphs with the most trees give the best topology when the edges are sufficiently unreliable [3]. This makes intuitive sense because the number of spanning trees is the number of ways we can have a maximal number of link failures in the network while preserving connectivity. A network can be modeled by a graph G = (V, E), where V is the vertex set and E is the edge set¹. A tree is a connected graph that would become disconnected if any edge were removed. A spanning tree of a graph G is a subgraph of G which is a tree and includes all vertices of G. We are addressing the problem of finding graphs with the maximum number of spanning trees. Given a graph, we know how to find the number of spanning trees of that graph. Given a number of vertices and a number of edges, what graph has those numbers and has the maximum number of spanning trees?
This question is the problem we are addressing.

¹An edge is often thought of as "connecting" two vertices. We are describing undirected edges, and we allow multiple edges between the same two vertices, also known as parallel edges. We also allow edges from a vertex to itself, also known as loops. If a graph has no parallel edges or loops, the graph is known as a simple graph. Technically speaking, E, the set of edges, is a multi-set of unordered subsets of size 2 of V. Sometimes graphs that are not simple are called multigraphs. In our work we are looking for t-optimal simple graphs, with the realization that certain graph operations may result in multigraphs.

For a graph G = (V, E), G is called dense if m = O(n²), and sparse if m = O(n), where m = |E| and n = |V|. Much work has been done ([4], [5], [6], [7], and [8]) regarding dense cases, but little work has been done regarding most sparse cases. We have focused on the sparsest case which had not been done prior to our work, and we continue to investigate alternative aspects of this problem.

1.1 Problem Statement

Let t(G) be the number of possible labeled spanning trees in a connected graph G. A graph with n vertices and m edges is called an (n, m) graph. An (n, m) graph G is said to be t-optimal if and only if t(G) ≥ t(H), where H is any other (n, m) graph. We study the problem of finding t-optimal (n, m) graphs. The case of m ≤ n + 3 has been previously studied; we studied (n, n + 4) graphs, a formerly open case of this problem, as well as other aspects of this problem. As part of our study of this problem, we provide a summary of known results, including many known conjectures. This problem is one for which there are a variety of long-standing conjectures, consideration of which is useful for illuminating the search for solutions. We sought to determine if, for every (n, m), there is one and only one non-isomorphic t-optimal graph.
We also sought to determine the impact of simple operations such as edge deletion, subdivision, contraction, and addition upon the number of spanning trees and the possible preservation of the property of t-optimality. These questions and many more we seek to answer herein.

1.2 Definitions And Notations

We will use S(n, m) to denote the set of all t-optimal (n, m) graphs. The number of spanning trees of graphs in S(n, m) will be denoted by t(n, m).

To contract an edge is to replace an edge and its end vertices with a single new vertex; any other edges that were incident to either of those two vertices are changed to instead be incident with the new vertex. The graph obtained by contracting the edge e in G is denoted by G · e. To subdivide an edge uv in a graph G is to create a new graph G' by replacing the edge uv with the path u − w − v, where w is a new vertex. Contraction may be thought of as the inverse of subdivision. A loop is an edge with identical end vertices. A contraction of an edge may produce a loop. Edges that share the same end vertices are called parallel edges. A contraction may also produce parallel edges. To contract a cycle is to contract all the edges in the cycle. We denote the contraction of cycle c in graph G as G · c. For our purposes, loops count as cycles, as does a single pair of parallel edges. A formal definition of how we use the term cycle is as follows: a cycle is a sequence of edges and their incident vertices {e0, v0, e1, v1, . . . , ek, vk} such that ei is incident to vi−1 and vi, e0 is incident to v0 and vk, and no vertex occurs in the sequence more than once. Thus, a loop is a cycle because it is a single edge and a single vertex, whereas a single pair of parallel edges is a cycle because it is two edges and two vertices that can be arranged in such a sequence. Figure 4.3(b) is an example of Figure 4.3(a) with the cycle ABE contracted.
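The deletion and contraction operations defined above pair naturally with the Feussner recurrence t(G) = t(G − e) + t(G · e) quoted in the abstract. As a concrete illustration (our own sketch, not code from the dissertation; representing a multigraph as a vertex set plus an edge list is an assumption), the following computes t(G) by deletion and contraction, discarding loops as they arise, since a loop can never lie in a spanning tree:

```python
def spanning_trees(vertices, edges):
    """Count labeled spanning trees of a multigraph via the Feussner
    recurrence t(G) = t(G - e) + t(G . e).  Exponential in m, so only
    suitable for small graphs; edges is a list of vertex pairs."""
    vertices = set(vertices)
    edges = [(a, b) for (a, b) in edges if a != b]  # loops never lie in a tree
    if not edges:
        # a single isolated vertex has exactly one (empty) spanning tree;
        # more than one vertex with no edges means the graph is disconnected
        return 1 if len(vertices) == 1 else 0
    (u, v), rest = edges[0], edges[1:]
    without_e = spanning_trees(vertices, rest)       # trees avoiding e
    # contract e: relabel v as u in the remaining edges (parallel edges survive)
    merged = [(u if a == v else a, u if b == v else b) for (a, b) in rest]
    with_e = spanning_trees(vertices - {v}, merged)  # trees using e
    return without_e + with_e
```

On a 4-cycle this returns 4, and on K4 it returns 16 = 4^(4−2), matching Cayley's formula; a pair of parallel edges gives 2, since each edge is its own spanning tree.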
A chain in a graph G is a path in G in which every internal vertex in the path has degree exactly 2 and the two end vertices of the path have degree strictly greater than two ([9]). An example of a graph with some chains can be found in Figure 1.1(a). One chain in that graph is g − a − b; another is g − f − e. An example of a path in that graph that is not a chain is a − b − c − e, because b is not a vertex of degree 2 and a is not a vertex of degree greater than 2. Another example of a path in that graph that is not a chain is e − d, because d is not a vertex of degree greater than 2.

Figure 1.1: Examples of graphs that are not distillations. (b) An example of a graph that is not the distillation of Figure 1.1(a).

An edge whose end vertices are of degree three or greater is a trivial chain, an example of which is b − e. If two edge-disjoint chains share the same end vertices then they are parallel chains. The length of a chain is the number of edges in it. For a graph G, the distillation D(G) of G is the result of repeatedly contracting any edge incident to a vertex of degree 2 until there are no vertices of degree two ([9]). An example of the distillation of the graph in Figure 1.2(a) can be found in Figure 1.2(b). The distillation of a graph can be seen as replacing every chain with a single edge. Figure 1.1(b) is not a distillation of Figure 1.1(a) because vertex c has degree 2. The distillation of Figure 1.1(a) would have two parallel trivial chains between b and e and no other chains between b and e. If we take the distillation of a graph, we can subdivide some number of edges some number of times to recreate the original graph. When we do this, we are merely changing the lengths of the chains. The cyclespace C(G) of a graph G is the set of all cycles that are subgraphs of G.
Figure 1.2: Examples of trivial and non-trivial chains. (a) An example of a graph with a non-trivial chain; (b) an example of a graph with only trivial chains, and the distillation of Figure 1.2(a).

If B is a basis for the cyclespace of G then B is a least-sized set of cycles such that every cycle in C(G) can be generated by the symmetric difference of some subset of B. The dimension of a cyclespace is the size of B. The dimension of the cyclespace of a graph G is sometimes called the cycle rank of G. The cycle rank [10] of an (n, m) graph G is m − n + w, where w is the number of components in G. Thus the cycle rank of a connected (n, m) graph G is m − n + 1, as there is only one component. A common technique for finding t(G) is finding the minor of a matrix form of G, known as the Kirchhoff matrix. (The minor of a matrix is the determinant of the matrix after the removal of a row and a column.) In other literature, the Kirchhoff matrix is sometimes referred to as the Laplacian or admittance matrix, as found in [8]. An example of a Kirchhoff matrix can be found in Figure 4.7. In [9], several useful theorems are provided, including a reference to the Feussner formula [11], which is t(G) = t(G − e) + t(G · e), where e is any edge in G which is not a loop. This formula makes intuitive sense: it is the sum of the number of spanning trees that do not include e and the number that do include e. If e is a loop then t(G) = t(G − e) = t(G · e), because e cannot be in a spanning tree if it is a loop. A multipartite graph has its vertex set partitioned into p subsets called partitions. There are no edges joining vertices within a partition. A complete multipartite graph is one in which there is always an edge between any two vertices in different partitions. A complete r-regular multipartite graph can be completely described by two numbers, namely k and p, where k is the number of vertices in a partition, and p is the number of partitions.
There is an edge between any two vertices not in the same partition, and there are no edges between vertices in the same partition. We have n = kp, r = k(p − 1), and m = nr/2 = k²p(p − 1)/2, where r is the degree of a vertex. If we consider m − n, we get m − n = (n/2)(r − 2). If r is 2, we have the case of the single cycle (m = n). If r is n − 1 then we have the complete graph. So, different values of r cover a variety of ranges of possible values of m and n, but not all. In fact, the values of m and n which are not covered are those values for which no complete regular multipartite graph exists. Figure A.1 is an example of a complete regular graph. Figure A.2 and Figure A.3 are examples of complete regular multipartite graphs. The degree of a vertex is the number of incident edges. Amongst all the vertices in a graph, the degree of the vertex of greatest degree is referred to as Δ(G), and the degree of the vertex of least degree is referred to as δ(G). An almost regular graph is a graph for which the degrees of no two vertices differ by more than one, alternately phrased as Δ(G) − δ(G) ≤ 1. Thus, a complete multipartite almost regular graph is a graph with p partitions, no edges within a partition, edges between any two vertices in different partitions, and vertex degrees differing by at most one. Note that this is equivalent to being a complete multipartite graph in which partition sizes differ by at most one. An example of a complete multipartite almost regular graph can be found in Figure A.4.

1.3 Executive Summary of Contributions

In our study of this problem we have accomplished many things, including: summarizing the known results on t-optimal graphs, finding some new results for the case of (n, n + 4) t-optimal graphs, and beginning to develop techniques for exploring (n, m) graphs in general.
In our investigation of the problem we obtained, with the help of the software nauty, the values of t(n, m) for n ≤ 12 and histograms of t(G), G ∈ (n, n + 4), for n < 14. We have found a few non-isomorphic t-optimal graphs. We have proven that for every distillation there is an expression for the number of spanning trees in terms of the chain lengths. We describe how to find these expressions and the advantages of working with them as opposed to the matrix technique for finding the number of spanning trees. It is known that for each k such that k ≤ 3 and k ≤ n/3, all graphs in S(n, n + k) have the same distillation ([12], [13], [9] and [14]). This yields a method of finding (n, n + k) t-optimal graphs when k ≤ 3, by taking a "seed" graph (dependent on k) and subdividing its edges in a specific order. We have formulated the problem of finding t-optimal (n, n + k) graphs as a polynomial integer optimization problem. Using the method of subdividing edges of a "seed" and the polynomial integer optimization formulation, we have found a "seed" graph that gives an infinite number of members of S(n, n + 4). It has been conjectured [15] that all graphs in S(n, n + k) have the same distillation for k ≤ n/3. Within that range we also present a heuristic for generating graphs that tend to have a large number of spanning trees. As a consequence of some of our work, we also prove that t-optimal graphs do not have distillations in which there is an edge whose removal results in a graph that is not biconnected. The impact of relatively simple changes to a graph (such as edge contraction or edge deletion) is not readily visible when the technique of finding the minor of a matrix form of a graph is used.
The expression for the number of spanning trees in terms of the chain lengths provides us with a technique for computing the number of spanning trees of a graph which clearly shows the impact of graph operations such as edge deletion, subdivision, and contraction on the number of spanning trees of a graph. We then extended that result to include edge addition. In this technique we make use of reductions of the graph by repeatedly contracting cycles.

1.4 Overview

This chapter (Chapter 1) serves as an introduction to the problem (Section 1.1), provides some definitions and notations that are used throughout the remainder of this work (Section 1.2), provides an executive summary of the contributions found in this work (Section 1.3), and provides an overview of the other chapters (Section 1.4). The next chapter (Chapter 2) provides an overview of the previous work that has been done by others on this problem. Following that we have Chapter 3, which describes some of the preliminary results that had been obtained as of the time we formally proposed pursuing this avenue of research. The newest results can be found in Chapter 4. Some avenues to continue pursuit of this problem are mentioned in Chapter 5.

Chapter 2

Previous Work

In this chapter we outline some of the previous work that others have done on this problem. This problem is one for which there are a variety of long-standing conjectures, consideration of which is useful for illuminating the search for solutions. Section 2.1 provides some known results, including many of the longstanding conjectures pertaining to this problem. Section 2.2 focuses on results pertaining to dense graphs and Section 2.3 focuses on results pertaining to sparse graphs. A simple overview of some of the ranges for which this problem has been solved can be found in Section 2.4.

2.1 General Theorems

We have an upper bound for t(n, m) in terms of the number of vertices (Cayley's Theorem [16] for complete graphs), which is n^(n−2).
Since a spanning tree consists of n − 1 edges, C(m, n − 1) is an upper bound as well. If m = n − 1 we can have at most one spanning tree, and hence that number is the maximum number of spanning trees. If m = n then, considering the C(m, n − 1) upper bound, we see that we have n as an upper bound for t(n, n). A simple cycle achieves this upper bound because the removal of any edge creates a spanning tree, and thus n = t(n, n). When n is fixed, increasing m can only increase the number of spanning trees. Thus we have the bounds n ≤ t(n, m) ≤ n^(n−2) if m ≥ n. The Feussner formula was used substantially in the contributions of Boesch et al. in "On the existence of uniformly optimally reliable networks" [9]; those contributions are fundamental to the approach we take. They prove that for any t-optimal (n, m) graph G, if m ≥ n ≥ 3 then G is biconnected [9], where by biconnected we mean that between any two vertices there are at least two vertex-disjoint paths. In that same work Boesch et al. also prove that if an (n, m) graph G is t-optimal and m ≥ n + 2 ≥ 6 and G has two parallel chains, then both chains are trivial [9]. The Kirchhoff matrix technique has been applied most successfully to specific classes of graphs with known structure. The square of a graph G is defined as follows: H is the square of a graph G if V(H) = V(G) and E(H) = {e = uv | d(u, v) ≤ 2}, where d(u, v) is the length of a shortest path between u and v in G. In [17] the technique of the Kirchhoff matrix is applied to the square of a cycle to determine that the number of spanning trees in the square of a cycle is nF_n² (where F_n is the nth Fibonacci number). Presumably this technique could be applied to other graphs that have the same kind of symmetry as the square of a cycle does, for example as in [18]. Hitherto, the sort of symmetry for which this technique is easily applicable has been elusive. We can get more precise bounds by concentrating on the dense or the sparse cases.
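The Kirchhoff matrix technique just described can be mechanized directly. The sketch below is ours, not the dissertation's (the function names are our own, and it uses only the standard library): it builds the Kirchhoff (Laplacian) matrix, takes the minor obtained by deleting one row and column as in Section 1.2, and checks the nF_n² count for the square of a cycle:

```python
from fractions import Fraction

def det(matrix):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n, sign = len(m), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            sign = -sign
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return int(d)

def kirchhoff_count(n, edges):
    """t(G) = any cofactor of the Kirchhoff matrix D - A."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:          # parallel edges simply accumulate
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    minor = [row[1:] for row in L[1:]]  # delete row 0 and column 0
    return det(minor)

def cycle_square(n):
    """Edge list of the square of a cycle: i joined to i+1 and i+2 (mod n)."""
    return [(i, (i + 1) % n) for i in range(n)] + \
           [(i, (i + 2) % n) for i in range(n)]

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

For n = 5 the square of the cycle is K5, and the count is 125 = 5³ = 5 · F_5²; for other small n the cofactor likewise agrees with nF_n².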
Gilbert and Myrvold in [15] have a recent summary of research done on both the sparse and dense cases, and they have Conjecture 1, Conjecture 2, and Conjecture 3 as follows.

Conjecture 1: t-optimal graphs are almost regular.

Conjecture 2: t-optimal graphs do not have multiple edges as long as m ≤ C(n, 2).

Conjecture 3: t-optimal graphs with maximum degree three and the same number of vertices of degree three have the same distillation.

Some statistics work done in the field of design of experiments by Cheng in [19] was reinterpreted for graph theory in [20]. The relevant result is that all complete r-regular multipartite graphs are t-optimal. Petingi et al. [21] prove that Conjecture 1 is asymptotically true. Petingi et al. [8] also prove that complete multipartite almost regular graphs are t-optimal.

2.2 The Case of Dense Graphs

If m is C(n, 2) then t(n, m) = n^(n−2). If m is in the range [C(n, 2) − n/2, C(n, 2)] then the graph G with the maximum number of spanning trees has, as a complement, a set of independent edges ([4] and also in [5]). Petingi et al. have done a number of other dense cases ([6] and [7]), extending the cases all the way through C(n, 2) − n. Gilbert and Myrvold in [15] extend the range of dense graphs covered, subject to Conjecture 1. For values of n above a certain constant, Petingi et al. [8] handle a further range. So in summary, the dense range has been covered, largely by Petingi et al., for all but a finite number of values of n, for simple graphs.

2.3 The Case of Sparse Graphs

The sparse cases have crept along. The m = n − 1 and m = n cases have been described in Section 2.1. If m = n + 1, the graph with the maximum number of spanning trees is a connected graph with two vertices of degree three and all other vertices of degree two (it looks like a θ), as described in [12]. The m = n + 2 case was handled in [13], and the solution is a particular subdivision of K4.
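For the m = n + 1 case just described, a spanning tree of a θ graph is obtained by deleting one edge from each of two of its three chains (deleting two edges from the same chain would disconnect the chain's interior), so with chain lengths a, b, c the count is t = ab + ac + bc. A brute-force check (our own sketch, not code from [12]) confirms that the t-optimal θ graph balances its chain lengths:

```python
def theta_trees(a, b, c):
    """Spanning trees of a theta graph with chain lengths a, b, c:
    delete one edge from each of two distinct chains."""
    return a * b + a * c + b * c

def best_theta(m):
    """Chain-length triple (a <= b <= c, a + b + c = m) maximizing t."""
    triples = ((a, b, m - a - b)
               for a in range(1, m) for b in range(a, m - a)
               if m - a - b >= b)
    best = max(triples, key=lambda t: theta_trees(*t))
    return best, theta_trees(*best)
```

best_theta(9) returns ((3, 3, 3), 27) and best_theta(10) returns ((3, 3, 4), 33): the chains are as equal as possible, and for these small m the maxima match m²/3 when 3 divides m and (m² − 1)/3 otherwise, consistent with a formula that is piecewise by m mod 3.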
In [9] the m = n + 3 case is stated as solvable via an outlined technique, but the paper doesn't actually prove the case using the technique; instead the paper illustrates the technique on the m = n + 2 case. The m = n + 3 case was proven in [14]. These cases can be found summarized in Table 2.1.

2.4 Literature Review Summary

We have compiled in Table 2.2 a summary of known results on t-optimal graphs. In Table 2.1 we have compiled a summary of known results on t-optimal (n, n + k) graphs.

k : distillation | |chains| | equation
k < −1 : N/A | N/A | t = 0
k = −1 : any tree | N/A | t = 1
k = 0 : a simple cycle | N/A | t = m
k = 1 : two vertices of degree 3 joined by three chains (this is known as a θ graph) | 3 | piecewise by the remainder upon dividing m by 3 [12]
k = 2 : a subdivision of K4 | 6 | piecewise by the remainder upon dividing m by 6 [13]
k = 3 : known [14] | 9 | we determine this; see Table 4.1
k = 4 : previously unknown | N/A | previously unknown
k = 5 : previously unknown | N/A | previously unknown

Table 2.1: Summary of some values of k = m − n for t-optimal (n, m) graphs and their current status as of this writing.

0 ≤ m ≤ n − 2 : For t-optimal graphs, t(G) = 0.
m = n − 1 : For t-optimal graphs, t(G) = 1.
m = n : For t-optimal graphs, t(G) = n.
m = n + 1 : The structure of t-optimal graphs is known. The formula is known [12] and derivable, but is piecewise by the remainder upon dividing m by 3.
m = n + 2 : The structure of t-optimal graphs is known. The formula is known [13], but is piecewise by the remainder upon dividing m by 6.
m = n + 3 : The structure of t-optimal graphs is known [14]. The formula is piecewise by the remainder upon dividing m by 9. See Table 4.1.
m = n + 4 : The structure of t-optimal graphs was previously unknown or unpublished, except for when n ≤ 5.
n + 5 ≤ m < C(n, 2) − n : The structure of t-optimal graphs is unknown except when G is a complete regular [20] or almost regular [8] multipartite graph, or in certain other cases.
A formula for t(G) is known for some cases, but not in terms of m and n, except for certain cases such as bipartite graphs.

n(n − 1)/2 − n ≤ m ≤ n(n − 1)/2 : The structure of t-optimal graphs is known, with explicit closed formulas for the extremely dense cases being derivable in some cases.

Table 2.2: Summary of ranges and their current status as of this writing.

Chapter 3

Preliminary Observations And Findings

In this chapter we describe some of the preliminary research that had been done prior to formally proposing pursuing this avenue of research. Section 3.1 explains some observations upon considering some of the already existing conjectures. Section 3.2 describes some of the experimental data we had obtained as of the original proposal of pursuit of this avenue of research. Reasons to further consider these conjectures can be found in Section 3.3. The previous sections are utilized to make some new conjectures in Section 3.4. Section 3.5 describes an observation on the average number of spanning trees when we contract or delete an edge.

3.1 Previous Conjectures Revisited

In this section we consider the implications of some of the conjectures mentioned in Section 2.1. Specifically, Conjecture 1 and Conjecture 3 are used to determine the properties of the degree sequences of t-optimal graphs if those conjectures hold.

Conjecture 1 states that all graphs with the maximum number of spanning trees are almost regular. Let us explore the consequences of Conjecture 1 on the degree sequence of an (n, m) almost regular graph G. If G is almost regular then there are k vertices with degree r and n − k vertices with degree r + 1. Since the sum of the degrees of the vertices is twice the number of edges, we have

2m = kr + (n − k)(r + 1) = kr + nr − kr + n − k = nr + n − k = n(r + 1) − k.

From this and the fact that k ≤ n we can see that the integers k and r are uniquely determined by m and n. If k = n this is an r-regular graph and if k = 0 this is an (r + 1)-regular graph. In all other cases, 0 < k < n.
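The degree-sequence computation just described is mechanical enough to sketch in a few lines. The following is our illustration, not the dissertation's code (almost_regular_degrees is a hypothetical name), and it assumes Conjecture 1 holds:

```python
def almost_regular_degrees(n, m):
    """Degree sequence forced by Conjecture 1 (almost regular):
    k vertices of degree r and n - k vertices of degree r + 1,
    where r + 1 = ceil(2m / n) and k = n(r + 1) - 2m."""
    r_plus_1 = (2 * m + n - 1) // n   # integer ceiling of 2m / n
    k = n * r_plus_1 - 2 * m
    degrees = [r_plus_1 - 1] * k + [r_plus_1] * (n - k)
    assert sum(degrees) == 2 * m      # handshake check: degrees sum to 2m
    return degrees

# A (10, 14) graph, i.e. (n, n + 4) with n = 10:
# 2(m - n) = 8 vertices of degree 3, the remaining 2 of degree 2.
print(almost_regular_degrees(10, 14))  # [2, 2, 3, 3, 3, 3, 3, 3, 3, 3]
```

When 2m is divisible by n the computed k is 0 and the sequence is (r + 1)-regular, matching the boundary cases noted above.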
That is, r + 1 is the ceiling upon dividing 2m by n, and k is n(r + 1) − 2m. The degree sequence is thus completely determined by m and n. For example, consider t(n, n + 4) graphs with n ≥ 8. Note that n ≥ 8 forces m into the range n < m ≤ 3n/2 for t(n, n + 4). The reasoning found in Section 3.1 applies when m is in that range, and we thus (by Conjecture 1) have 2(m − n) vertices of degree 3, which is 8, and all other vertices are of degree 2. Thus t(n, n + 4) graphs with n ≥ 8 have the same number of vertices of degree 3, and so (if Conjecture 1 applies) Conjecture 3 applies.

The brute force analysis of the next several graphs that have 4 more edges than vertices has been conducted, and so far all (n, n + 4) graphs that have been analyzed (up through 13 vertices) abide by the conjectures mentioned in [15]. They can be found in Figures A.6, A.7, A.8, A.9, A.10, and A.11.

Figure 3.2: An embedding of the t-optimal graph with 8 vertices and 12 edges. The edges are labeled in the form a v1,v2 : e, where v1 and v2 are the numbers corresponding to the adjacent vertices and e is the number associated with that edge.

As you can tell, inferring characteristics of the graphs given in Figures A.6, A.7, A.8, A.9, A.10, and A.11 when using those embeddings is not exactly straightforward. Fortunately, we can embed the graph found in Figure 3.1 as in Figure 3.2. This embedding has a few advantages. The first advantage is that the graph has a small crossing number. Another advantage that will be important to us later is that we can see that the edges form two different classes: "spokes" and "cycle-edges". The advantage which is of the greatest concern to us right now is that for the cases with the number of vertices ranging from 8 through 13, we can easily see the same underlying structure.
Here is how: if we represent the lengths of the chains (where chains are as described in Section 1.2), as given in Figure 3.2, as a twelve-tuple (one chain per edge in the distillation), then the t-optimal graphs for those cases can be found in Table 3.1.

(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) corresponds to the chain lengths of Figure A.6.
(2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) corresponds to the chain lengths of Figure A.7.
(2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1) corresponds to the chain lengths of Figure A.8.
(2, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1) corresponds to the chain lengths of Figure A.9.
(2, 1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1) corresponds to the chain lengths of Figure A.10.
(2, 2, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1) corresponds to the chain lengths of Figure A.11.

Table 3.1: Tuples of chain lengths for t-optimal graphs with distillation and labeling as found in Figure 3.2. Each position in the 12-tuple is the length of the chain that corresponds to a labelled edge in Figure 3.2, as per ((a1,2:0), (a2,3:1), (a3,4:2), (a4,1:3), ...). The sum of the entries in the tuple is the number of edges.

This representation in terms of chain lengths also works for representing the (n, n + 3), (n, n + 2), and (n, n + 1) cases, which have tuples of length 9, 6, and 3, respectively. As they have periodic subdivisions, those cases can be more concisely represented via the order of the chains in which the subdivisions occur. The distillations for those cases are indicated in Table 2.1, and the order in which the edges should be subdivided is alphabetical by edge label as found in the figures in the table.

The conjectures mentioned in this section are provided purely to inform us of the direction we might take in our investigation of this problem. Their correctness is in no way assumed nor proven in the results we have obtained as of this writing.

3.4 Consequences of Conjectures

Conjectures 1, 2, and 3 hold for all graphs that have m < n + 4.
In all such graphs, by considering each chain in turn, we may find the graph with the maximum number of spanning trees ([15] provides a summary, but the proofs and some formulas for the number of trees are actually found in [12], [13], [9] and [14]). A summary of those cases and others can be found in Table 2.1.

When we consider each chain of an (n, m) graph in turn, as can be done in the m ≤ n + 3 cases to find the t-optimal graphs, we consider what the number of spanning trees would be if we were to increase the length of that chain by one, that is, subdivide an edge in the chain. By choosing the chain whose length, if increased by one, would give us the maximum number of spanning trees, we then have a technique that will give us the next graph with the same number of excess edges (relative to vertices). This technique will provide us with t-optimal graphs for the m ≤ n + 3 cases. This technique can be found summarized in Algorithm 1, and is known to be correct for m ≤ n + 3 because it corresponds to the previously published solutions for those cases as found in [12], [13], [9] and [14].

Algorithm 1 Pseudocode for a technique to obtain graphs with maximum number of spanning trees. Known to be correct for m ≤ n + 3.

1. Consider a t-optimal graph G with n < m ≤ 3n/2.
2. Let maximumSoFar = 0.
3. Let bestGraphSoFar = G.
4. For each chain c in the distillation of G:
   (a) Consider G' which is G with the length of c increased by 1.
   (b) If t(G') > maximumSoFar, set maximumSoFar = t(G') and set bestGraphSoFar = G'.
5. bestGraphSoFar now contains the next graph that this algorithm produces.

Lemma 1 Algorithm 1, for all graphs with m ≤ n + 3, has a periodic pattern consisting of all of the chains. For each chain in the sequence, we increase the length of the chain, and go on to the next chain in the sequence. (Proved in [12], [13], [9] and [14]) □

Lemma 1 is true for all graphs with m ≤ n + 3.
This gives us reason to conjecture that it is true whenever started from a t-optimal graph for which m ≤ 3n/2, and thus we make the following conjecture:

Conjecture 4 Algorithm 1 always generates graphs with maximum number of spanning trees.

On a per distillation basis, a formula for t(G) in terms of the chain lengths of G can be found by considering that for every spanning tree in a graph, there is a corresponding tree in the distillation of the graph. If all of the edges in a chain c are in a spanning tree T of the graph G, then the edge e in the distillation D(G) that corresponds to c is in the spanning tree of D(G) that corresponds to T. Likewise, if any edge in the chain is not in that spanning tree of the graph (there can be only one such edge per chain, or else the "spanning tree" would be disconnected and hence not a spanning tree), then the edge in the distillation that corresponds to that chain is not in the spanning tree in the distillation. Thus, the formula for t(G) has a number of terms equal to the number of spanning trees of the distillation, and each term is the product of the chain lengths of the chains that correspond to edges not in the tree in the distillation.

Using Algorithm 1 results in a linear operation to find the (n, n + 4) graph (because the number of chains has a constant upper bound, namely 12 in this case). Without the realization that subdividing any edge in a chain increases the length of the chain, we would need to consider every edge instead of every chain, and thus, since the number of edges m grows, finding a t-optimal graph would be of order m^2, still a polynomial time operation. There are certain conditions necessary to ensure that this realization works for finding t-optimal graphs. Those conditions are that all (n, n + k) graphs have the same distillation and are such that for any n, the chain length tuples for n and n + 1 differ in only one chain. Those conditions are true for the simpler cases (m ≤ n + 3).
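For the θ distillation the per-distillation formula just described works out to t = ab + bc + ca (a spanning tree must omit one edge from exactly two of the three chains), so Algorithm 1 can be sketched concretely. The following is our illustrative Python, not code from the dissertation:

```python
def t_theta(chains):
    # Per-distillation formula for the theta graph: a spanning tree
    # omits one edge from exactly two of the three chains, so
    # t = ab + bc + ca for chain lengths (a, b, c).
    a, b, c = chains
    return a * b + b * c + c * a

def algorithm1_step(chains):
    # One iteration of Algorithm 1: subdivide an edge in whichever
    # chain maximizes the resulting spanning-tree count.
    best = None
    for i in range(len(chains)):
        candidate = chains[:i] + [chains[i] + 1] + chains[i + 1:]
        if best is None or t_theta(candidate) > t_theta(best):
            best = candidate
    return best

chains = [1, 1, 1]  # the theta graph with m = 3
for _ in range(6):
    chains = algorithm1_step(chains)
    print(sum(chains), chains, t_theta(chains))
```

Running this prints t = 5, 8, 12, 16, 21, 27 for m = 4 through 9, and the lengthened chain cycles through the three chains in turn, which is the periodic behavior Lemma 1 describes for the m = n + 1 case.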
When we consider Lemma 1, we are naturally led to consider what happens for (n, n + 4) graphs when Algorithm 1 is applied. We find the sequence of chains is non-periodic.

Consider the set of permutations that are graph automorphisms. We say two edges are in the same class if and only if there exists some graph automorphism permutation that maps one edge to the other. Let m_{i,G}(e) denote the edge that edge e of graph G is mapped onto using automorphism i. Let M_G(e) denote the set of m_{i,G}(e) for all possible i that map edges onto one another. Including the identity permutation, M_G(e) defines a class of edges to which e belongs, and the application of M_G to any edge in a class will result in the same class.

If we apply this way of finding classes of edges to the distillation of a graph, because each edge in the distillation corresponds to a chain, we then find classes of chains in the graph. Let d_G(c) represent the edge in the distillation D(G) of G that the chain c of G corresponds to. Thus, we can find classes of chains in the graph G by considering the classes of edges of D(G) and then applying the inverse of d_G(c) to each element of these classes.

In all the simpler cases (m ≤ n + 3), all edges in the distillation were part of the same class. But in the (n, n + 4) case there are two classes instead of one. These are the classes of chains mentioned in Section 3.3.

Knowing that we have two classes of chains, we can investigate how they should relate to each other. Knowing that in all the graphs with m ≤ n + 3 the chains are about the same in length, we make the following conjecture (Conjecture 5):

Conjecture 5 In a t-optimal graph, all chains in the same class will have lengths that differ by no more than 1.

We have discovered via brute force that the t-optimal (8, 12) graph has two and only two classes of chains.
By using the formula for the number of spanning trees in terms of the chain lengths, which is unique for each distillation, and the fact that the sum of the chain lengths is the number of edges, and Conjecture 5, we created a system of equations, two equations, two unknowns, and so we solved and optimized for the number of spanning trees. When we optimized for the number of spanning trees given the equations, we found that the optimal ratio of the number of edges in these different classes is an irrational number, namely √5 − √2. This finding would explain why we do not have the same kind of periodically repeating ordered progression as found in the m ≤ n + 3 cases.

The set S(10, 15) contains one graph, which is the Petersen graph. It has one class and thus, by Conjecture 5, ought to have a periodic progression when we apply Algorithm 1. When we apply the algorithm, we get the results found in Table 8.3, expressed in terms of chain lengths. From this we can see that we have such a periodic progression (not shown in the table for the sake of brevity, but the progression does repeat periodically). The periodic progression is the following:

1, 9, 15, 2, 10, 12, 3, 6, 14, 4, 7, 11, 5, 8, 13

The labeling of the edges of the Petersen graph is as found in Figure A.5 and follows the same convention as in Figure 3.2.

In summary, we conjecture that chains in the same class have lengths that differ by no more than one (Conjecture 5). We also conjecture that the technique described in Algorithm 1 always gives a t-optimal graph if started from a t-optimal graph with maximum degree of 3 (Conjecture 4). These conjectures are not necessary for the results we obtain.

3.5 The Average Number of Spanning Trees upon Deletion or Contraction

If we consider a graph and construct a matrix with every spanning tree of the graph, one per column, and every edge, one per row, then we can count the edges in each tree by summing up each column separately, and we get (n − 1)t(G).
If we sum up each row separately, we get, for each edge, the number of spanning trees containing that edge. If we use t(G · ē) to represent the average, over all edges, of the number of spanning trees upon contracting an edge, then we get the equality

(n − 1)t(G) = m t(G · ē).

Similarly, if we count the number of edges that are not in each tree, and we use t(G − ē) to represent the average number of spanning trees upon deleting an edge, we get

(m − (n − 1))t(G) = m t(G − ē).

These two equations can both be rewritten as

t(G · ē) = ((n − 1)/m) t(G) and t(G − ē) = ((m − (n − 1))/m) t(G).

Putting these equations together, we get

t(G · ē)/t(G − ē) = (n − 1)/(m − (n − 1)).

This equation can be useful when considering the Feussner equation mentioned in Section 1.2 [11]. Note that there may be no graph that has t(G − ē) or t(G · ē) spanning trees, but since these are averages we know there is at least one graph with number of spanning trees greater than or equal to the average, and at least one graph with number of spanning trees less than or equal to it. By applying these equations we may be able to construct bounds on extremal values of t(G).

Chapter 4

Further Results

With the advent of modern computers, as well as a piece of software called nauty [1], we were able to do analyses of a large number of graphs. Some of the data we accumulated from that can be found in [23]. The data and analyses thereof help to inform some of the following few sections. In Section 4.1 we prove an important lemma concerning distillations. We share some observations concerning classes of chains in Section 4.2. In Section 4.3 we describe our technique for proving the formula for t(n, n + 3). We present, with proof, our results for the (n, n + 4) case in Section 4.4. The more recent results of our empirical analysis are presented in Section 4.5. In Section 4.6 we begin to lay a possible groundwork for being easily able to extend some of our results beyond the (n, n + 4) case.

We might wonder if the removal of an edge from a t-optimal (n, m) graph will always produce a t-optimal graph.
We do not always get a t-optimal graph under such circumstances, as it depends on which graph we are removing the edge from. For example, any t-optimal θ graph (a θ graph is a biconnected graph with two vertices of degree three and all other vertices of degree two) for which n ≥ 5 has the property that removing a single edge will result in a graph with a vertex of degree one. However, graphs with a vertex of degree one cannot be t-optimal, because t-optimal graphs are biconnected [9]. On the other hand, complete graphs are t-optimal, and removing any single edge from one of them results in a t-optimal graph.

Similarly, we might wonder if the addition of an edge to a t-optimal graph always results in a t-optimal graph. Clearly we would want to add the edge in the right "spot". We might also wonder if there is always a "right spot" to add the edge. There is not always such a "right spot". Again, if we have a cycle, which is a t-optimal graph, and we add an edge connecting any two non-adjacent vertices, we get a θ graph; but if the cycle is larger than size 4, the resulting graph will not be t-optimal. However, sometimes adding any non-parallel edge will give us a t-optimal graph, as in the case of (n, n(n − 1)/2 − 1) graphs.

4.1 A Lemma Concerning Distillations

The concept of the distillation, described earlier and in [9], is useful to us. We can categorize all biconnected graphs of interest to us in terms of their distillations. The following lemma helps us do that, and from there we are able to explore the consequences of a graph having a particular distillation.

Lemma 2 For any biconnected (n, n + k) graph there is a corresponding (2k, 3k) graph with the same distillation.

Proof: A biconnected graph has no vertices of degree zero or one. The distillation process results in a graph with no vertices of degree 2, by definition. Thus, distillations of biconnected graphs have a minimum degree of 3.
We prove Lemma 2 by contradiction; thus assume n ≠ 2k and consider two cases.

Case 1: Assume n > 2k, and consider biconnected (n, n + k) graphs with more than 2k vertices. Suppose h = n − 2k. Thus we should consider biconnected (2k + h, 3k + h) graphs. Each of these biconnected graphs must have at least h vertices of degree 2. Suppose not, so that we have only h − 1 vertices of degree 2. Since all the other vertices are of degree 3 or greater, we would have at least 3k + h + 1/2 edges, which is greater than 3k + h. Thus there are at least h vertices of degree 2, and we can contract an edge incident to each of them and get a (2k, 3k) graph with the same distillation.

Case 2: Assume n < 2k, and consider biconnected (n, n + k) graphs with fewer than 2k vertices. Suppose h = 2k − n. Thus we consider biconnected (2k − h, 3k − h) graphs. If we consider an edge and subdivide it, we get a (2k − h + 1, 3k − h + 1) graph. If we repeat this h times, we get a (2k, 3k) graph with that same distillation.

So, in either case, for any (n, n + k) graph there is a (2k, 3k) graph with the same distillation. □

4.2 Chain Classes

We have found the techniques that were used to prove the optimal solutions for the (n, n + k) cases when k ≤ 3 to be difficult to apply to the (n, n + 4) case. We believe this difficulty is due to an important difference between the (n, n + 4) case and the other cases; this difference can be seen by considering equivalence classes, under isomorphic mappings of edges, of the edges in the distillation, and thus of the chains in the undistilled graph (as described in Section 3.4). By drawing the graph found in Figure 3.1 as in Figure 3.2, we can see that the edges form two different classes: "spokes" and "cycle-edges", as mentioned in Section 3.3.
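These edge classes are just the orbits of the edges under the automorphism group (Section 3.4), and for small distillations they can be computed by brute force. The sketch below is our illustration (edge_classes is a hypothetical name), using the triangular prism rather than the distillation of Figure 3.2, since the prism is small and also has exactly two classes of edges:

```python
from itertools import permutations

# Triangular prism: two triangles {0,1,2} and {3,4,5} joined by "rungs".
# (An illustrative stand-in; the (8, 12) distillation of Figure 3.2 is
# analogous, with its "cycle-edges" and "spokes".)
edges = {frozenset(e) for e in
         [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5),
          (0, 3), (1, 4), (2, 5)]}

def edge_classes(vertices, edges):
    """Edge orbits under the automorphism group, found by brute force:
    two edges are in the same class iff some automorphism maps one
    onto the other."""
    orbit = {e: {e} for e in edges}
    for perm in permutations(vertices):
        image = {frozenset(perm[v] for v in e) for e in edges}
        if image == edges:  # perm is an automorphism of the graph
            for e in edges:
                orbit[e].add(frozenset(perm[v] for v in e))
    # Collapse the per-edge orbits down to the distinct classes.
    return {frozenset(c) for c in orbit.values()}

orbits = edge_classes(range(6), edges)
print(len(orbits))  # 2: the six triangle edges and the three rungs
```

Checking all 720 vertex permutations is crude but fine at this size; for the dissertation's larger searches a tool such as nauty [1] does this far more efficiently.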
For the cases with the number of vertices ranging from 8 through 13 (that is, Figures A.6, A.7, A.8, A.9, A.10, and A.11), we easily see the same distillation by representing the lengths of the chains, as given in Figure 3.2, as a twelve-tuple (one chain per edge in the distillation). Thus the t-optimal graphs for those cases are found in Table 3.1.

For an (n, n + k) graph G, if k > n/2 then by necessity G will be forced to have a vertex of degree greater than 3. Algorithm 1 will not be able to remedy this if the optimal distillation has vertices of degree 3 only. Thus, to ensure that Algorithm 1 is started from a graph that could have vertices of degree 3 only, we only start it when k ≤ n/2.

If we start with the graph found in Figure 3.2 and iterate Algorithm 1 a few times, then Table 3.1 contains the resulting chain length tuple values. Between each pair of consecutive tuples, the changed element is the one whose increase gave the maximum number of spanning trees.

For a particular distillation, we only need to concern ourselves with chain length value assignments in the attempt to find a t-optimal graph with that distillation. This is due to the formula mentioned in Section 3.4. Using Algorithm 1 to determine these chain length value assignments results in a linear operation to find the (n, n + k) graph (because the number of chains has an upper bound of 3k when m ≤ 3n/2, which follows from Lemma 2).

As mentioned, we conjecture that chains in the same class have lengths that differ by no more than one (Conjecture 5). Everything mentioned in this section would support, but not prove, that conjecture.
4.3 The Formula for t(n, n + 3)

Given the formula mentioned at the beginning of Section 3.4 and a repeating ordered progression of all chains in a graph, we can see that increasing the lengths of the chains in accordance with the repeating ordered progression will result in polynomial expressions, one for each remainder upon division of the number of edges by the length of the period. Furthermore, that formula also enables us to see that the maximum degree of a term in these polynomial expressions is 1 + k = 1 + (m − n). This insight is consistent with previously published formulas for t(n, m) in the m ≤ n + 2 cases. While the distillation for the optimal (n, n + 3) case has been shown in previous work [14], as far as we know, no formula has been previously published for t(n, n + 3). Thus, we can use this insight to determine a formula for t(n, n + 3) when n ≥ 6.

Let r be the remainder upon dividing m by the number of chains c in an (n, m) graph. When the pattern of edge subdivision as described in Algorithm 1 periodically repeats and the length of the period is c, then for two graphs with m values m0 and m1 that differ by exactly c, the two graphs will be the same except that the larger will have had an edge from every chain subdivided with respect to the smaller. For this reason, as found in the m = n + 3 case, it is convenient to represent the formula for t(n, n + k) as a piecewise function depending on the value of r.

Given the algorithm for the m = n + 3 case [14] and the information in the previous paragraph, we now have the formula for t(n, n + 3) (obtained by iterating Algorithm 1, looking at the result, and determining the polynomial from the result), as found in Table 4.1.

4.4 Theoretical Results

In this section we utilize our observations in terms of chain length tuples and per distillation expressions.
We determine that the expression for the number of spanning trees in terms of the chain lengths has an optimization solution equation that is a polynomial whose local maximum is also the global maximum. We also determine that the optimal integer valued solution will be within decreasing bounds of the optimal continuous valued solution, which is expressible in terms of per distillation fixed chain length ratios. Altogether, this enables us to state that Algorithm 1, applied to the correct distillation, will always result in an optimal assignment of values to chain lengths eventually.

Number of spanning trees | remainder upon division of n − 6 by 9
n^4/81 + 4n^3/27 + 2n^2/3 + 4n/3 + 1 | 0
n^4/81 + 4n^3/27 + 16n^2/27 + 80n/81 + 16/27 | 1
n^4/81 + 4n^3/27 + 5n^2/9 + 58n/81 + 8/27 | 2
n^4/81 + 4n^3/27 + 5n^2/9 + 2n/3 | 3
n^4/81 + 4n^3/27 + 14n^2/27 + 38n/81 − 4/27 | 4
n^4/81 + 4n^3/27 + 14n^2/27 + 52n/81 + 7/27 | 5
n^4/81 + 4n^3/27 + 14n^2/27 + 4n/9 | 6
n^4/81 + 4n^3/27 + 5n^2/9 + 50n/81 | 7
n^4/81 + 4n^3/27 + 16n^2/27 + 64n/81 | 8

Table 4.1: The piecewise formula for the number of spanning trees of t-optimal (n, n + 3) graphs. The left column is the formula, and the right is the remainder upon division of n − 6 by 9.

There are some properties of optimal chain length value assignments that we would like to know. A chain length value assignment local maximum is a global maximum; this is important because it helps us determine some of those properties.

Lemma 3 Consider the set of all vectors V of length γ, the sum of whose elements is a constant τ, that is,

V = {v_i | v_i = (x_{i,1}, x_{i,2}, ..., x_{i,γ}) and Σ_{j=1}^{γ} x_{i,j} = τ}

where x_{i,j} ∈ R. Under such conditions, V is a convex set.

Proof: The sum of the average is the average of the sum, because the denominators in computing the average (the length of the vector, which is fixed) are all the same, and because addition is commutative and division is left distributive over addition.
Therefore the average of the vectors will have the same sum as the vectors themselves have, and thus will be in the same set. More generally, any convex combination λu + (1 − λ)v of two vectors u, v ∈ V has element sum λτ + (1 − λ)τ = τ, and so also lies in V. Thus the definition of convex set applies. □

Lemma 4 Consider the set of all vectors V of length γ, the sum of whose elements is a constant τ, that is,

V = {v_i | v_i = (x_{i,1}, x_{i,2}, ..., x_{i,γ}) and Σ_{j=1}^{γ} x_{i,j} = τ}

where x_{i,j} ∈ R. Let us further add the restriction that x_{i,j} > 0. Consider the set of functions F on vectors v ∈ V such that

F = {f | f(x_1, x_2, ..., x_γ) = Σ_{J ∈ P([γ])} a_J Π_{x_i ∈ J} x_i}

where a_J ∈ {0, 1}; [γ] is the set of all the x_i for 1 ≤ i ≤ γ; and P(s) is the power set of set s. Under these conditions, every such function f ∈ F is a concave function.

Proof: Each term in the sum evaluated at the average is greater than or equal to the average of the corresponding terms, as shown by a proof by induction on the length of the vector:

Base case 1 They are equal because they are the same.

Base case 2

f((x_1, y_1) + (x_2, y_2))/2 ≥ (f(x_1, y_1) + f(x_2, y_2))/2
⟺ (x_1 + x_2)(y_1 + y_2)/2 ≥ (x_1 y_1 + x_2 y_2)/2
⟺ (x_1 y_1 + x_2 y_1 + x_1 y_2 + x_2 y_2)/2 ≥ (x_1 y_1 + x_2 y_2)/2
⟺ (x_2 y_1 + x_1 y_2)/2 ≥ 0,

and thus we are done, since x_1, y_1, x_2, y_2 are all ≥ 0.

Inductive step The induction hypothesis is true for n elements in the vector. Now suppose we have n + 1 elements and j is element n + 1. Factor the elements in a manner similar to how the base case 2 step was done. Observe that we will have all the terms on the right on the left, as well as terms on the left involving the product of a j with elements that were not in the same vector as that j. Thus Lemma 4 is proven. □

A concave function on a convex set has the property that a local maximum is a global maximum. That fact is useful to us. Lemma 3 and Lemma 4 together mean that we can apply that fact to the problem of determining values for chain lengths.

Let G_{d(g)}(u) be the graph with distillation g and chain length tuple u.
Imagine, for a given distillation g, we have a contour plot of t(G_{d(g)}(u)), where the dimensions are the chain lengths and with contour lines drawn at the integer values (that is a way to visualize this with only two variables). If we look at the subspace defined by any straight line, we see that the values on the contour plot that the line intersects form a parabola. This is due to the fact that we have two variables, call them x and y; since the line defines values for all the variables other than x and y, and the sum of the variables is constant (that is the restriction that makes this a convex set), we have a + bx + cy + dxy over t = x + y, where t is the constant (after the values for all the other variables have been fixed to determine a, b, c, and d). So if we express a + bx + cy + dxy in terms of x and t (with no y), then we have

a + bx + c(t − x) + dx(t − x) = a + bx + ct − cx + dtx − dx^2 = (a + ct) + (b + dt − c)x − dx^2.

Clearly this is, expressed in terms of x, a parabola, with the local maximum being the global maximum. This should give us a more intuitive understanding of the consequences of, and rationale for, understanding that V (as mentioned in Lemma 3) is a convex set.

Lemma 5 For a graph with a specific distillation, the optimal assignment of continuous values to chain lengths is expressible in terms of fixed ratios between chain length values.

Before we prove Lemma 5, let us consider an example. The θ graph is a graph with 3 chains. Consider the (n, m) graph found in Figure 4.1. Let a, b, and c represent the lengths of chains A, B, and C, respectively.

Figure 4.1: A θ graph.

Figure 4.2: In a θ graph, possible ways to have two chains with one edge removed.

One way of getting a spanning tree would be to remove edge uv from chain A and edge wx from chain C. That way of getting a spanning tree is just a special case of removing any edge from chain A and any edge from chain C. This can be found depicted in Figure 4.2(b).
There are ac such possible spanning trees. Similarly, the chain pairs B, C and A, B yield bc and ab spanning trees, respectively, as depicted in Figures 4.2(a) and 4.2(c). For each set of chains which must have edges removed, the number of possible spanning trees is the product of the chain lengths; adding these up over each such set, we get ab + bc + ac, which we want to maximize subject to a + b + c = m. Thus, clearly in this case, the optimal continuous solution ratio of chain length values to m is 1/3.

Proof: First observe that, since the expression for the number of spanning trees in terms of the chain lengths is a polynomial, that which uniquely optimizes the leading coefficient optimizes the solution. We induct on the number of variables (which is the number of chains).

Base cases When there is one chain, the ratio between the chain length and the number of edges will be a ratio of 1, by definition. For two chains, see the argument given in the "more intuitive understanding" discussion after Lemma 3 regarding the two-dimensional case being a parabola.

Inductive step Assume the claim for n chains and prove it for n + 1. Amongst the n + 1 chains, take an arbitrary chain (call it k) and fix its length. Applying the induction hypothesis, solve for the other n chains. The optimal value is thus expressed in terms of k and the total t. The optimal value of k can be found symbolically because of the nature of the expression: the expression is f_0(t − k) + f_1(t − k)k, where f_0 and f_1 are increasing polynomial functions; let the degree of the leading term of f_0 be d, so that d is one greater than the degree of the leading term of f_1. Let a be the leading coefficient of f_0 and b be the leading coefficient of f_1. Bear in mind that a and b are independent of t and k. Now we determine the k that maximizes a(t − k)^d + b(t − k)^(d−1) k.
We have

a(t − k)^d + b(t − k)^(d−1) k = (t − k)^(d−1) (a(t − k) + bk), where a(t − k) + bk = at + (b − a)k.

So we take (t − k)^(d−1) (at + (b − a)k), differentiate it with respect to k, set the derivative equal to zero, and solve for k in terms of t. The solution is

k = t (ad − b)/(ad − bd),

which clearly indicates that the value of k has a fixed ratio with respect to t, as d is fixed and a and b are the values of fixed ratios with respect to t due to the induction hypothesis. Thus, since we have proven it for the inductive step, the induction proof is complete and we have proven Lemma 5. □

Lemma 6 Let x_m be the length of a chain in an (n, m) graph produced by Algorithm 1. Let x_{m+1} be the length of the same chain when the number of edges is m + 1, that is, after one iteration of Algorithm 1. Then

|x_m/m − x_{m+1}/(m + 1)| ≤ 1/m.

Proof: When applying a step in Algorithm 1, x_{m+1} is either x_m or x_m + 1, and thus the difference is, respectively, either x_m/m − x_m/(m + 1) = x_m/(m(m + 1)), which is clearly less than 1/m for all positive m because x_m is less than m, or (x_m + 1)/(m + 1) − x_m/m = (m − x_m)/(m(m + 1)), which is less than 1/m because x_m is less than m and an integer strictly greater than zero. □

Let the chain length tuple α be given by Algorithm 1, starting with the distillation, and let chain length tuple β be a least integer unique optimal solution such that for all elements in α, the corresponding element in β is greater or equal. Define dis(α, β) as

dis(α, β) = sqrt( Σ_i (β_i − α_i)^2 ),

the sum taken over all positions i (noting |α| = |β|).

Lemma 7 Given a chain length tuple α_0 such that α is α_0 with one iteration of Algorithm 1 applied to it, Algorithm 1 will choose an edge that minimizes dis(α, β).

Proof: Since all possible edge choices are equidistant in the chain length tuple space, and Algorithm 1 chooses the one with the greatest number of spanning trees, and β is optimized, it chooses the one that minimizes dis(α, β) (if not, then the solution space is not convex). □

Lemma 8 The edge that minimizes dis(α, β) is an edge in the chain i for which, before the subdivision, (β_i − α_i)^2 has the greatest value.
Proof: Any other edge, besides the one that minimizes dis(α, β), will not reduce dis(α, β) as much as the one with the greatest value of (β_i − α_i)². □

Lemma 9: Let the chain length tuple α be given by Algorithm 1, starting with the distillation, and let chain length tuple β be a least integer unique optimal solution such that for all elements in α, the corresponding element in β is greater or equal. Applying Algorithm 1 to chain length tuple α will eventually yield β.

Proof: We prove this by induction on the sum of the differences between the elements of β and α.

Base case: The sum of the differences is 1. This case follows from the definition of optimal and the greedy nature of Algorithm 1.

Inductive step: Since the lemma holds for n as per the induction hypothesis, the induction hypothesis applies to α′, which is α after one iteration of the algorithm has been applied, as long as the edge added in that iteration is in one of the chains for which the chain length in β is greater than in α. If we see this as a multidimensional space, then Algorithm 1 always chooses the point that "climbs the mountain" the fastest. If we were to increase the length of some chain that would not lead us to the optimum, it would have to be because of a "curve in the mountain," which cannot exist because the set is convex (Lemma 3). More formally, this is a consequence of Lemma 7 and Lemma 8. Thus Algorithm 1 will choose a chain for which the number of edges in β is greater than the number of edges in α. Thus Lemma 9 is proven. □

Corollary 1: For each chain x, the ratio x_m/m will converge to the continuous optimal solution as m → ∞, where x_m is as defined in Lemma 6.

Proof: The convergence is a consequence of Lemma 9 and Lemma 6. □
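Returning to the closed form in the proof of Lemma 5, it can be checked numerically. The Python sketch below uses invented coefficients a, b, d, t (nothing here is taken from a particular distillation; we need b > ad so the critical point lands inside (0, t)) and compares the closed-form maximizer with a brute-force grid search:

```python
# Numerical check of k* = t(ad - b)/(ad - bd) from the proof of Lemma 5.
# The coefficient values are illustrative assumptions, not from any distillation.
a, b, d, t = 1.0, 4.0, 3, 12.0   # b > a*d so that k* lies in (0, t)

def f(k):
    # the expression being maximized: a(t-k)^d + b(t-k)^(d-1) k
    return a * (t - k) ** d + b * (t - k) ** (d - 1) * k

k_star = t * (a * d - b) / (a * d - b * d)   # closed-form critical point

# brute-force grid search over [0, t]
grid = [i * t / 100000 for i in range(100001)]
k_grid = max(grid, key=f)

print(k_star, k_grid)  # both close to 4/3
```

With these values the closed form gives k* = 12(3 − 4)/(3 − 12) = 4/3, and the grid search agrees to within its step size, as the lemma predicts.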
Lemma 10: Zero is the optimal ratio of the length of a chain x in a graph G to the number of edges in G if and only if removal of the edge e (the edge in the distillation D(G) that corresponds to the chain x in G) results in a graph that is not biconnected.

Proof: Removal of an edge e in the distillation results in a connected graph G − e that is not biconnected. Since G − e is not biconnected, but G was, G − e must be connected and it must have a cut vertex v. Let A be the part of G − e on one side of the cut vertex (and containing one of the end vertices of e) and let B be the part of G − e on the other side of the cut vertex (and containing the other end vertex of e). Then

t(G) = t(A)t(B)x + t_{2,x,v}(A)t(B) + t_{2,x,v}(B)t(A),

where t_{2,y,w}(H) is the number of two-component forests of H in which y is in one component and w is in another. Without loss of generality, let |E(A)| − |V(A)| ≥ |E(B)| − |V(B)|. Let us rewrite the above equation, but reflecting that A is a function of the number of edges u that it is allowed:

t(G) = t(A(u))t(B)x + t_{2,x,v}(A(u))t(B) + t_{2,x,v}(B)t(A(u)).

The optimal ratio having the value zero is the same as

t(A(u+1))t(B)(x − 1) + t_{2,x,v}(A(u+1))t(B) + t_{2,x,v}(B)t(A(u+1)) > t(A(u))t(B)x + t_{2,x,v}(A(u))t(B) + t_{2,x,v}(B)t(A(u))

⇔ (t(A(u+1)) − t(A(u)))t(B)(x − 1) − t(A(u))t(B) + (t_{2,x,v}(A(u+1)) − t_{2,x,v}(A(u)))t(B) + t_{2,x,v}(B)(t(A(u+1)) − t(A(u))) > 0.

The latter is clearly true, as t(A(u)) and t_{2,x,v}(A(u)) must be increasing functions of u, and since x ≥ 1 (otherwise it is zero and the proof is finished either way). □

Corollary 2: t-optimal graphs do not have distillations in which there is an edge whose removal results in a graph that is not biconnected. □

We will now resume a series of proofs and corollaries relating to how Algorithm 1 and the (optimal) chain length ratios interplay with one another.
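The greedy step that drives this interplay (Lemmas 7 and 8) can be sketched in a few lines of Python. The tuples α and β below are invented for illustration; the sketch confirms that incrementing the chain with the greatest (β_i − α_i)² is the increment that minimizes dis(α′, β):

```python
import math

# Sketch of the edge choice in Lemmas 7 and 8, on made-up tuples:
# subdividing the chain with the largest (beta_i - alpha_i)^2 minimizes
# the distance dis(alpha', beta) to the optimal tuple.
def dis(alpha, beta):
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(alpha, beta)))

alpha = [3, 5, 2]   # current chain lengths (illustrative)
beta  = [4, 6, 5]   # target optimal chain lengths (illustrative)

# distance to beta after incrementing each chain in turn
after = []
for i in range(len(alpha)):
    a2 = list(alpha); a2[i] += 1
    after.append(dis(a2, beta))

best   = min(range(len(alpha)), key=lambda i: after[i])          # Lemma 7's choice
lemma8 = max(range(len(alpha)), key=lambda i: (beta[i] - alpha[i]) ** 2)
print(best, lemma8)  # both pick chain 2
```

Both criteria select the third chain, whose deficit (β_i − α_i = 3) is largest, matching Lemma 8.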
As before, let the chain length tuple α be given by Algorithm 1, starting with the distillation, and let chain length tuple β be a least integer unique optimal solution such that for all elements in α, the corresponding element in β is greater or equal. Lemma 5 and Lemma 9 mean that Algorithm 1 is "self-correcting": for every α there exists some β, and thus after some finite number of iterations applied to any α, we will have an optimal solution β.

From Lemma 9 we have the following corollary:

Corollary 3: If Algorithm 1 is applied to a graph whose integer chain length tuple value assignments are optimal, then Algorithm 1 retains the property of having optimal integer chain length value assignments if there is a repeating and periodic sequence of chains that are to be subdivided. Even if there is no such sequence, the greediness of the algorithm combined with the convexity of the space means Algorithm 1 will produce a sequence of graphs whose number of spanning trees will, subject to integer restrictions, be asymptotic to a polynomial with the same leading coefficient as the not-necessarily-integer optimal solution.

Corollary 4: After some number of iterations, for a chain c which Algorithm 1 would choose to increment in length, assuming Algorithm 1 was initially started with a distillation, prior to any series of incrementations of c, the ratio of the length of c to the overall number of edges is, within the bounds of Lemma 6, a lower bound on the optimal ratio.

Proof: Corollary 4 follows from Lemma 7 and Lemma 8. □

Similar to Corollary 4, we can infer from Lemma 7 and Lemma 8 a technique for finding an upper bound as well.

Corollary 5: For a chain whose length Algorithm 1 has finished incrementing (that is, the next iteration would not increment the same chain length again), an upper bound on the optimal ratio can be determined, within the range specified by Lemma 6.
□

By using this upper bound (from Corollary 5) in the equation, we can infer an upper bound on the leading coefficient (and similarly a lower bound, using Corollary 4). Thus, for a given distillation, from Corollary 4 and Corollary 5 we can determine lower and upper bounds on the leading coefficient for an optimal assignment of values to chain lengths.

Given a particular distillation, and that the local solution is the global solution in the continuous case, we can use the Algorithm 1 sequence to determine bounds on the leading coefficient of the continuous optimal solution polynomial expression. Given those bounds, if the lower bound of one application to a particular distillation is greater than the upper bound of all the others, then that is the optimal distillation.

We ran enough iterations for the lower bound of one distillation to be greater than the upper bound of all the other possible distillations of (n, n + 4) graphs, excluding the cases described in Lemma 10 and Corollary 2. That one distillation is the twisted 3-cube. Thus, by applying Algorithm 1 to the twisted 3-cube, we either get the optimal solution or, by Corollary 3, something very close.

The sequence of chains is not periodic when Algorithm 1 is applied to the twisted 3-cube. We endeavor to consider the consequences of this. First we will consider when the sequence of chains might be periodic, and then we will be better able to discuss why the sequence of chains is not periodic when Algorithm 1 is applied to the twisted 3-cube. Given that we have two classes of chains, we can further investigate the conjecture that chains in the same class differ in length by no more than 1.
Given that all the previous t-optimal (n, m) graphs (m ≤ n + 3) have chains of about the same length, the following lemmas make sense:

Lemma 11: If the optimal ratios of chain lengths for a particular distillation are rational values, then the continuous optimal chain lengths are integer values for some infinite subset of the optimal solution (usually at least one piece of a piecewise function).

Proof: The least common multiple of the denominators of the ratios, when expressed as fractions in lowest terms, will, by definition, be an integer such that, for every multiple thereof that is a number of edges, the chain lengths will have integer values that achieve the optimal ratios. □

Lemma 12: If the optimal ratios of chain lengths for a particular distillation are rational values, then there is a periodic progression of chains to subdivide.

Proof: The least common multiple of the denominators of the ratios, when expressed as fractions in lowest terms, will, by definition, be an integer such that, for every multiple thereof that is a number of edges, the chain lengths will have the same ratios. Thus, the progression of chains to subdivide will be periodic, with the period being that least common multiple. □

Lemma 13: If, in a t-optimal graph, there is a periodic progression of chains to subdivide, then all chains in the same class have lengths that differ by no more than 1.

Proof: The period will not repeat until all the chains have been incremented at least once (every chain must be represented due to Lemma 10, as Corollary 1 means that if a chain were not represented in the period then its optimal ratio would be zero, which does not happen). Corollary 1 also means that the periodic progression will dominate the determination of the ratio over time.
□

If the optimal ratios are rational values then, as stated earlier, Lemma 3 and Lemma 4 together allow us to apply the fact that a concave function on a convex set attains its global maximum at any local maximum to the problem of determining values for chain lengths, and so Lemma 13 holds under those circumstances.

Now that we have established that rational optimal chain length ratios in the continuous solution mean a periodic progression of chain subdivisions, we can explore why we do not see this in the (n, n + 4) case. Restating from Section 3.4: by using the equation for t(G) in terms of the chain lengths of G, which is unique for each distillation up to isomorphism, together with the conjecture that all chain lengths in the same class are about the same length¹ and the fact that the sum of the chain lengths is the number of edges, we can create a system of equations (to be specific, two equations in two unknowns) and thus solve and optimize for t(G) (as mentioned in Section 3.4).

¹which is shown to be true for certain cases, such as in Lemma 13

When we, given the equations, optimize for t(G), the optimal ratio of the number of edges in these different classes is an irrational number. This suggests why we do not have the same kind of periodic progression of chain lengths to increase as found in the m ≤ n + 3 cases². Thus, all we can work with is Corollary 3 in the (n, n + 4) case.

As mentioned in Section 3.4, we have considered the (n, n + 5) case and do have a conjecture as to the distillation (the Petersen graph) and an orderly periodic subdivision progression. However, we do not yet have a proof of this via the same means as we obtained for the (n, n + 4) case. Specifically, application of Corollary 4 and Corollary 5 to the (n, n + 5) case is problematic because of the vastly larger number of possible distillations to consider.
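The exhaustive searches reported in the next section can be sketched as follows. This minimal Python version has no isomorphism filtering, so it is far less efficient than nauty; it counts spanning trees with Kirchhoff's matrix-tree theorem, using fraction-free (Bareiss) elimination so that everything stays in integers:

```python
import itertools

def connected(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

def num_spanning_trees(n, edges):
    # Kirchhoff's matrix-tree theorem on an integer Laplacian minor;
    # determinant by fraction-free (Bareiss) elimination, so no floats
    if not connected(n, edges):
        return 0
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    A = [row[1:] for row in L[1:]]       # delete row and column 0
    size, prev = n - 1, 1
    for k in range(size - 1):
        for i in range(k + 1, size):
            for j in range(k + 1, size):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return A[size - 1][size - 1]

def t_opt(n, m):
    # brute force over all labeled (n, m) graphs -- feasible only for tiny n
    pairs = list(itertools.combinations(range(n), 2))
    return max(num_spanning_trees(n, es)
               for es in itertools.combinations(pairs, m))

print(t_opt(4, 5), t_opt(4, 6), t_opt(5, 10))  # 8 16 125
```

The last two values agree with Cayley's formula n^(n−2) for the complete graphs K4 and K5, which are the only (4, 6) and (5, 10) graphs.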
4.5 Empirical Results

We used nauty to analyze as many graphs as we could in order to find which ones were t-optimal. Our data for the values of t(n, m) can be found in Tables B.1 and B.2. During our investigation using that technique, we determined that t-optimal graphs are not unique; see Section 3.2 for some of the earlier results we obtained regarding this. Since then we have found some additional pairs of non-isomorphic t-optimal graphs, as found in Figure A.19 and Figure A.20.

Alternatively, this can be viewed as considering |S(n, n + k)| to determine how unique the t-optimal graphs are. Our results are listed in greatest detail in [24]. As you can see, for most (n, n + k), |S(n, n + k)| = 1. Reiterating from before, the five cases encountered so far where |S(n, n + k)| > 1 can be found in Figure A.16, Figure A.17, Figure A.18, Figure A.19, and Figure A.20.

²It also means that Lemma 13 does not necessarily apply to the (n, n + 4) case, as the subdivisions are not in a periodic pattern, one of the conditions for that lemma to apply.

4.6 The Impact of Edge Addition

In this section, we propose some techniques to help model the impact of edge addition on t(G). As demonstrated in Corollary 3, we have a way of finding the seed graph, if one exists, for any k. If there exists a technique to find a seed for k + 1 based on a seed for k, it would seem natural to contemplate the addition of an edge.

We generalize the notion of a spanning tree to include some other structures as follows: Given a graph G, a k-dispasugra³ is a spanning subgraph of G whose cyclespace has dimension k. Thus, a spanning tree is a 0-dispasugra. Any spanning cycle is a 1-dispasugra, but the converse is not true. Consider a 1-dispasugra with cycle c. It is clear that by removing any edge from c we can obtain a 0-dispasugra.
In fact, this observation generalizes: for any k-dispasugra, removing any edge from any cycle results in a (k − 1)-dispasugra, due to the fact that the cyclespace dimension of a connected graph is m − n + 1 [10] and removing any edge from any cycle in a k-dispasugra results in a subgraph that is still connected and spanning.

Let t_k(G) represent the number of k-dispasugras of a graph G. By definition, t_0(G) = t(G), the number of spanning trees of G. Let S_k(G) represent the set of k-dispasugras of a graph G. Let C(G) be the set of all cycles of G and let E(C(G)) be the union of the edge sets of the cycles of C(G). The set of cycles that an edge e is in is C(e, G). As a reminder from Section 1.2, to contract a cycle c in a graph G is to contract all the edges in the cycle c, which we denote with G · c. Thus, the number of k-dispasugras that result from the contraction of cycle c is t_k(G · c).

Consider an (n, m) graph G with edge set E = E(G) and let T be a spanning tree of G. For each edge e ∈ E − E(T), the graph T + e is a 1-dispasugra. Thus, for each spanning tree of G there are m − (n − 1) ways to get a 1-dispasugra. We can extend this notion: for a k-dispasugra p, for each edge e ∈ E − E(p), the graph p + e is a (k + 1)-dispasugra. A k-dispasugra p has (n − 1) + k edges by definition. Thus, there are m − (n − 1) − k elements in E − E(p), implying that there are m − (n − 1) − k ways we can add an edge to a k-dispasugra to get a (k + 1)-dispasugra. This observation leads to the following equation:

(m − n + 1 − k) t_k(G) = Σ_{H ∈ S_{k+1}(G)} |E(C(H))|    (4.1)

The left-hand side of Equation 4.1 counts the (k-dispasugra, additional edge) pairs in G. The right-hand side counts the ((k + 1)-dispasugra, edge from a cycle in the dispasugra) pairs in G.

³The word k-dispasugra is a made-up word, a shortened form of k-dimension-spanning-subgraph.
The right-hand side is the same as the left-hand side because any edge e from G not in a k-dispasugra p that is added to p will result in a (k + 1)-dispasugra μ, and that edge e will also be in a cycle in μ.

As a reminder, we use C(e, G) to represent the set of cycles of G containing the edge e. Thus, we have Σ_{e ∈ E(C(G))} |C(e, G)| = Σ_{c ∈ C(G)} |c|, because each cycle c contains |c| edges. For each pair e ∈ E(C(G)), c ∈ C(e, G), find t_k(G · c) and add them up. Thus we get the following equation:

Σ_{c ∈ C(G)} |c| t_k(G · c) = Σ_{e ∈ E(C(G))} Σ_{c ∈ C(e,G)} t_k(G · c)    (4.2)

The left-hand side of Equation 4.2 counts the (cycle, edge from that cycle, k-dispasugra) triples, which is the same as the number of (edge in a cycle, cycle containing that edge, k-dispasugra) triples, which is the right-hand side.

The following equation is a form of double-counting the number of (edge e, (k + 1)-dispasugra involving edge e) tuples (on the left, the edge first and then the dispasugra; on the right, for every such dispasugra, all the edges).

Σ_{e ∈ E(C(G))} t_{k+1}(G · e) = Σ_{H ∈ S_{k+1}(G)} |E(C(H))|    (4.3)

The following equation is a consequence of the fact that every edge that is in a cycle in a (k + 1)-dispasugra is in a cycle, and that the contraction of every such cycle results in some k-dispasugra.

∀ e ∈ E(C(G)):  t_{k+1}(G · e) = Σ_{c ∈ C(e,G)} t_k(G · c)    (4.4)

The following equation is merely the summation of the above equation over all e ∈ E(C(G)).

Σ_{e ∈ E(C(G))} t_{k+1}(G · e) = Σ_{e ∈ E(C(G))} Σ_{c ∈ C(e,G)} t_k(G · c)    (4.5)

The following equation results from applying Equation 4.1, Equation 4.2, Equation 4.3, and Equation 4.5 together.

(m − n + 1 − k) t_k(G) = Σ_{c ∈ C(G)} |c| t_k(G · c)    (4.6)

By a cycle reduction of a connected graph G we mean recursively contracting cycles (or loops or parallel edges), in some order, until we get a single vertex (or a tree if G is not biconnected).
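Before turning to cycle reductions, Equation 4.1 can be sanity-checked by brute force on a small graph. The Python sketch below enumerates k-dispasugras of K4 directly from the definition (spanning connected subgraphs with (n − 1) + k edges) and detects cycle edges as non-bridges; K4 is chosen only because the enumeration stays tiny:

```python
import itertools

n = 4
E = list(itertools.combinations(range(n), 2))    # edge set of K4
m = len(E)

def connected(edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

def dispasugras(k):
    # spanning connected subgraphs whose cyclespace has dimension k,
    # i.e. spanning connected subgraphs with (n - 1) + k edges
    return [H for H in itertools.combinations(E, n - 1 + k) if connected(H)]

def cycle_edges(H):
    # an edge of H lies on a cycle of H iff it is not a bridge
    return [e for e in H if connected([f for f in H if f != e])]

k = 0
lhs = (m - n + 1 - k) * len(dispasugras(k))
rhs = sum(len(cycle_edges(H)) for H in dispasugras(k + 1))
print(lhs, rhs)  # 48 48
```

K4 has sixteen 0-dispasugras (spanning trees) and fifteen 1-dispasugras (three 4-cycles with four cycle edges each, and twelve triangle-plus-pendant subgraphs with three cycle edges each), so both sides of Equation 4.1 come to 3 · 16 = 48.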
Note that the contraction of a cycle may result in loops or parallel edges, and that for our purposes, loops count as cycles, as does a single pair of parallel edges. An ordered list of cycles which renders a cycle reduction of G will be referred to by β. Each cycle being contracted in a cycle reduction process is a cycle at the time it is contracted, but not necessarily prior to that. Note that not every list of cycles will result in a cycle reduction of G. When we contract a cycle, the labels for the vertices that were part of the cycle all become synonymous with the vertex that results from the contraction.

Figure 4.3: An example of two cycle contractions of a graph: (a) graph G; (b) G1; (c) G2.

We will use the notation CR_β(G) to represent the cycle reduction of G with respect to a particular β, where CR_β(G) = {G0, G1, ..., G_|β|} such that G0 is G and G_i is G_{i−1} with the ith element (cycle) of β having been contracted, that is, G_i = G_{i−1} · β_i. Note that what is a cycle c in G_i may not be a cycle in G_j (where j < i), as it may involve two edges that are incident in G_i only because the vertex that they are both incident to is the result of the contraction of a previous cycle, and prior to such a contraction those edges were incident to different vertices.

Consider the graph G as found in Figure 4.3(a). An example β for G would be the ordered list (ABE, DEC, L1, L2), where L1 and L2 are as in Figure 4.3(c). In CR_β(G), G0 = G, which is Figure 4.3(a). First, cycle ABE would be contracted, which yields graph G1 in Figure 4.3(b). Then cycle DEC would be contracted, which yields graph G2 in Figure 4.3(c). After that, cycle L1 (as found in Figure 4.3(c)) would be contracted, resulting in a graph with a single vertex and loop for G3, and finally cycle L2 (which also can be found in Figure 4.3(c)) would be contracted, so that we end with a single vertex for G4.
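A reduction like the one just described can be imitated in code. The sketch below runs a cycle reduction of K4 rather than of the graph in Figure 4.3 (which is not reproduced here); edges are kept as a plain list of pairs so that the loops and parallel edges produced along the way survive:

```python
def contract_cycle(edges, cycle_idx):
    # contract the cycle formed by the edges at positions cycle_idx:
    # those edges disappear and all their vertices merge into one
    cyc_vs = {v for i in cycle_idx for v in edges[i]}
    rep = min(cyc_vs)
    out = []
    for i, (u, v) in enumerate(edges):
        if i in cycle_idx:
            continue
        u2 = rep if u in cyc_vs else u
        v2 = rep if v in cyc_vs else v
        out.append((u2, v2))     # may now be a loop or a parallel edge
    return out

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
g1 = contract_cycle(K4, {0, 1, 3})   # triangle on 0,1,2: leaves 3 parallel edges
g2 = contract_cycle(g1, {0, 1})      # a parallel pair is a 2-cycle: leaves a loop
g3 = contract_cycle(g2, {0})         # the loop is a 1-cycle: a single vertex remains
print(g1, g2, g3)
```

Along this β the contracted cycle sizes are 3, 2, and 1; these are the quantities that enter the product in the closed form derived next.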
The order in which the loops L1 and L2 are contracted matters, as they are different cycles (as we are defining the notion of cycle) and a β is an ordering of these cycles, so order matters.

It can be shown that an ordered basis of the cyclespace does not necessarily define a β, nor does a β necessarily define an ordered basis of the cyclespace. Showing this is left as an exercise for the reader.

Let Γ(G) be the set of all possible β for G. We can apply Equation 4.6 iteratively, contracting cycles along the way, until there are no more cycles (or loops) left in G. Since we contract all cycles, in all possible orders, we apply Equation 4.6 in this way for every β in Γ(G). For each β, we multiply the sizes of the cycles, since Equation 4.6, for each cycle contraction, contributes a factor of that cycle's size. Both sides are divided by the factor (m − n + 1 − k) on the left-hand side of Equation 4.6, and accumulated over the reduction these factors form the |β|! found in the denominator of the resulting closed form, which follows:

t(G) = t_0(G) = Σ_{β ∈ Γ(G)} ( Π_{c ∈ β} |c| ) / |β|!    (4.7)

Consider Equation 4.7. If we rewrite the cycle sizes as sums of chain lengths and then expand, we can see that this expression is the same as the formula in terms of chain lengths. The numerator will have every term of the chain length formula in every possible ordering, and the denominator will have the number of possible orderings, which cancels out the fact that all possible orderings are represented.

Define a set of partial chains of a graph to be an edge-disjoint set of paths such that every chain of the graph is expressible as a combination of some number of partial chains.

Lemma 14: Equation 4.7 can be rewritten in terms of sums of partial chains. □

Lemma 14 is true by definition of partial chains. To rewrite it, just replace every chain with the sum of its partial chains.

4.6.1 The Impact of Cojoining Vertices

We can cojoin two vertices of degree two that are end vertices of partial chains.
This has an interpretation with respect to Lemma 14 as follows: For each term in an expression from Lemma 14, the two vertices u and v to be cojoined will always be in the same cycle after the contraction of all the cycles prior to the last cycle y that contains both vertices to be cojoined as distinct vertices that have not already been contracted together. This is because each contraction of a cycle c involving a vertex v puts that vertex v into any cycle that had involved any of the other vertices on that cycle c. That last cycle will thus, due to the cojoining, be forced to no longer be a cycle: both v and the vertex u that it is being cojoined with will be in the cycle, but cycles cannot have repeated vertices, and u and v are the same vertex after cojoining. If u and v are not both on y after the contraction of all cycles prior to y, then, after the contraction of y, there is still a cycle which contains u and v as distinct vertices, which contradicts the definition of y as the last cycle that contains u and v as distinct vertices.

Since u and v are in the same cycle after contraction of the prior cycles, that cycle can be split into two smaller cycles. As order matters (by the definition of β), there will be two new terms (one for each arrangement of the two smaller cycles), each of which will have a greater degree (a greater number of cycles). The two vertices to be cojoined will not be contracted into being the same vertex after the contraction of the prior cycles; if they were, then they are both in some prior cycle c′ (after the contraction of the cycles prior to c′), and thus that is the factor in the term that should be affected.

So that we may keep track of which partial chains go into which cycle, when we express a cycle as a sum of partial chains, we always arrange the terms in order such that any two adjacent partial chain terms are next to each other in the algebraic expression.
To facilitate finding out which terms involve which vertices, we subscript every partial chain term with labels for the end vertices of that partial chain.

Figure 4.4: An example of the cojoining of two vertices. (a) K4 with two subdivisions; (b) A and D cojoined in Figure 4.4(a).

For example, the graph in Figure 4.4(a) has the terms found in Figure 4.5 in its expression for the number of spanning trees, using the previously described labeling notation, with the terms in lexicographic order. You will notice that only the first two terms have vertices A and D in the same cycle. Now, if we cojoin vertices A and D as in Figure 4.4(b), then we have Figure 4.6, as per the previous description of how to adjust the expression. Note that the expressions found in Figure 4.5 and Figure 4.6 are both expressions in terms of partial chain lengths. Thus, the determination of the number of spanning trees for a given distillation is the evaluation of the expression and is largely irrespective of the number of vertices or edges (as long as the graph has that distillation).

For enough partial chains of sufficient length, the computation of the number of spanning trees through the Kirchhoff matrix technique will be substantially more complex. The computational complexity of finding the determinant of a matrix is at least order n² (and is typically closer to n³).

Figure 4.5: An expression for the number of trees of Figure 4.4(a), where x_{i,j} is the length of the partial chain from vertex i to vertex j.

The number of arithmetic operations needed for the evaluation of the expression in Figure 4.6 is less than 300. Thus, if the partial chains were all of length three or greater, then clearly evaluation of the expressions found in Figure 4.5 and Figure 4.6 would be more efficient than utilizing the Kirchhoff matrix technique. Utilization of the Kirchhoff matrix technique would result in a matrix larger than 20 by 20 and thus definitely take more computations than the technique described here. See such a matrix in Figure 4.7.

Consider the chain form of the Feussman formula: t(G) = t(G − c)|c| + t(G · c), where c is a chain. This chain form of the formula is easily verified from the original Feussman formula, t(G) = t(G − e) + t(G · e), and it is easy to see that it is applicable to partial chains as well. If we have a graph H and an edge e = uv which is not in E(H), and G = H + e, then G · e is H with the vertices u and v cojoined.
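The Feussman formula itself is easy to confirm mechanically. The Python sketch below computes t via the Kirchhoff (matrix-tree) determinant, handles the multigraph produced by contraction by accumulating parallel edges in the Laplacian, and checks t(G) = t(G − e) + t(G · e) on K4; the choice of K4 is ours, purely for illustration:

```python
from fractions import Fraction

def t(vertices, edges):
    # matrix-tree theorem; exact elimination over the rationals, with
    # parallel edges accumulating in the Laplacian so multigraphs work
    vs = sorted(vertices)
    idx = {v: i for i, v in enumerate(vs)}
    n = len(vs)
    if n == 1:
        return 1
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        if u == v:
            continue                      # loops never affect spanning trees
        L[idx[u]][idx[u]] += 1; L[idx[v]][idx[v]] += 1
        L[idx[u]][idx[v]] -= 1; L[idx[v]][idx[u]] -= 1
    A = [row[1:] for row in L[1:]]        # any cofactor of the Laplacian
    det = Fraction(1)
    for k in range(n - 1):
        piv = next((r for r in range(k, n - 1) if A[r][k]), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]; det = -det
        det *= A[k][k]
        for r in range(k + 1, n - 1):
            f = A[r][k] / A[k][k]
            for c in range(k, n - 1):
                A[r][c] -= f * A[k][c]
    return int(det)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
e = (0, 1)
minus = [f for f in K4 if f != e]                        # G - e
contracted = [(0 if u == 1 else u, 0 if v == 1 else v)   # G . e: merge 1 into 0
              for u, v in minus]
print(t(range(4), K4), t(range(4), minus), t({0, 2, 3}, contracted))
# 16 = 8 + 8
```

Contracting an edge of K4 leaves a triangle with two doubled sides, and the parallel edges are exactly why the Laplacian must count multiplicities.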
We will denote the cojoining of vertices u and v of H via H · uv.

Figure 4.6: An expression for the number of trees of Figure 4.4(b), where x_{i,j} is the length of the partial chain from vertex i to vertex j.

Figure 4.7: A matrix.

Cojoining can be thought of as the contraction of a non-existent edge. Thus, we can think of adding a chain c to a graph G as the following:

t(G) = t(G − c)|c| + t(G · c)

We can apply the above regardless of whether or not c is a chain in G; if not, the internal vertices of the chain (if any) must be new vertices and the end vertices must be vertices that already exist in G. The G · c term represents the cojoining. The t(G − c)|c| term represents the unaffected terms, in which the chain c is a cycle after the contraction of all other edges. Thus, we can use the cycle contraction formula and the Feussman formula to find a new chain form of the formula. There is a way we can use the cycle contraction formula and the Feussman formula to obtain a cycle contraction formula (without needing to factor the chain form of the formula), and we describe it next.
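Before doing so, the chain form just used can be exercised on a theta graph (two branch vertices joined by three chains of lengths p, q, and r): deleting the length-p chain leaves a cycle of length q + r, and contracting it leaves two cycles of lengths q and r sharing a vertex, so the chain form predicts t = (q + r)p + qr, matching the classical count pq + qr + rp. A Python check, with a graph construction of our own devising:

```python
from fractions import Fraction

def t(n, edges):
    # matrix-tree theorem; exact Gaussian elimination over the rationals,
    # parallel edges accumulate in the Laplacian so multigraphs are fine
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    A = [row[1:] for row in L[1:]]
    det = Fraction(1)
    for k in range(n - 1):
        piv = next((r for r in range(k, n - 1) if A[r][k]), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]; det = -det
        det *= A[k][k]
        for r in range(k + 1, n - 1):
            f = A[r][k] / A[k][k]
            for c in range(k, n - 1):
                A[r][c] -= f * A[k][c]
    return int(det)

def theta(p, q, r):
    # two branch vertices 0 and 1 joined by three chains of the given lengths
    edges, nxt = [], 2
    for length in (p, q, r):
        prev = 0
        for _ in range(length - 1):
            edges.append((prev, nxt)); prev = nxt; nxt += 1
        edges.append((prev, 1))
    return nxt, edges

p, q, r = 2, 3, 4
n, edges = theta(p, q, r)
print(t(n, edges), (q + r) * p + q * r, p * q + q * r + r * p)  # 26 26 26
```

All three quantities agree, as the chain form requires.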
4.6.2 The Reverse Application of the Feussman Formula to a Cycle Contraction Formula

Every chain and every edge is represented in every β, since every β results in the contraction of every cycle; if a chain or edge were not contracted, then β would not reduce the graph down to a single vertex. Consider the application of Equation 4.7 to Equation 4.6:

(m − n + 1 − k) t(G) = Σ_{c ∈ C(G)} |c| Σ_{β ∈ Γ(G·c)} ( Π_{c′ ∈ β} |c′| ) / |β|!    (4.8)

This shows that we can factor out any cycle. For every β, there is a cycle γ that contains a given partial chain φ. Let us factor out that cycle γ. For every such β with the φ-containing cycle γ factored out, we may apply the Feussman formula with regard to φ. This use of the Feussman formula in the context of a cycle contraction formula is something that, similar to the addition of a chain to a chain length formula to find a new chain length formula, can be applied in reverse, with the caveat that every new β must be a valid β and that no β be overlooked.

To ensure that every new β is a valid β, it suffices to ensure that φ only be added to terms for which it is part of the corresponding cycle, which may include previously empty cycles (previously empty terms). By "previously empty cycle," we mean a cycle c containing the end vertices of this new chain (but not the edges of the new chain), all of whose (c's) edges have been contracted in prior cycles of the β. To ensure that no β is overlooked, it suffices to show that consideration of the addition of φ to all terms (including the empty ones implicit between all other terms) will cover all possible β. This is clearly the case, as every φ is either in a term by itself (an empty term) or not (an already existent term).

Thus, for a given cycle contraction formula for a graph G, we can add a chain between any two vertices that are partial chain endpoints and get a new cycle contraction formula.
Thus, we have a way to model the impact of chain addition on the expression for the number of spanning trees in terms of chain lengths when it is in the cycle contraction formula form.

Chapter 5

Future Work

In the future, we would like to prove or at least further examine many of the conjectures mentioned. Amongst those conjectures would be those found in Section 3.4 and this section. We would also like to further investigate the lower bounds possible through the techniques described in Section 3.5.

Previous algorithms for finding t-optimal (n, m) graphs when m ≤ n + 3 have been published, as have formulas for t(n, m) when m ≤ n + 2. We have an algorithm that is accurate within certain bounds, when m ≤ 3n/2 and k ≥ 2, for (n, n + k), which provides us with polynomial formulas in some cases and bounds on the leading coefficients of these polynomials in all others. This not only gives us an explicit formula for the (n, n + 3) case and the distillation of the (n, n + 4) case but, up to computational feasibility of algorithm execution, all m ≤ 3n/2 cases.

We also have another conjecture, which all observed (n, n + k) follow, namely, Conjecture 6.

Conjecture 6: A technique to create 3-regular graphs with large numbers of spanning trees:

• Find an optimal (n, n + k) graph for some n (where n ≥ 2k).
• Find the distillation and then iterate Algorithm 1 twice.
• There will be two non-adjacent vertices of degree 2; join them with a new edge.
• You now have the distillation of a t-optimal (n, n + k + 1) graph.

Conjecture 6 holds for those cases we have seen so far, such as the simple cycle, the θ graph, K4, K_{3,3}, and the twisted 3-cube. Based on this conjecture, the optimal distillations for (n, n + 5) and (n, n + 6) are the following¹, respectively: the Petersen graph, and then the (12, 18) graph found in Figure 5.1.

Figure 5.1: The t-optimal (12, 18) graph.
In addition to Conjecture 6 and the other conjectures that we have mentioned earlier, we also conjecture that all t-optimal (n, m) graphs with m > 3n/2 can be found by taking a 3-regular t-optimal graph and contracting a certain spanning forest in which the tree sizes differ by no more than one. As we now have a means to understand better, more efficiently, and more concisely the impact of edge operations such as addition, deletion, subdivision, and contraction on the number of spanning trees of a graph, we now plan to investigate some of these conjectures more thoroughly, specifically Conjecture 6.

¹The distillations are t-optimal graphs; we have already verified this through brute force. It remains to be seen whether they are the distillations of all t-optimal (n, n + k) graphs with the same k, where k ≥ 5.

Appendix A

Figures

Figure A.1: A complete 5-regular 1-partite graph.

Figure A.2: A complete 3-regular 2-partite graph.

Figure A.3: A complete 4-regular 3-partite graph. Note how every vertex is connected to every vertex not in the same partition.

Figure A.4: A complete 4- and 5-regular 4-partite graph. This graph is K1,1,2,2.

Figure A.5: The Petersen graph, a t-optimal graph with 10 vertices and 15 edges.

Figure A.6: A spring embedding of the t-optimal graph with 8 vertices and 12 edges. This graph is more commonly drawn as in Figure 3.1.

Figure A.7: An embedding of the t-optimal graph with 9 vertices and 13 edges.

Figure A.8: An embedding of the t-optimal graph with 10 vertices and 14 edges.

Figure A.9: An embedding of the t-optimal graph with 11 vertices and 15 edges.

Figure A.11: An embedding of the t-optimal graph with 13 vertices and 17 edges.
Figure A.12: A histogram of non-isomorphic graphs and their number of spanning trees with 8 vertices and 12 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.

Figure A.13: A histogram of non-isomorphic graphs and their number of spanning trees with 9 vertices and 13 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.

Figure A.14: A histogram of non-isomorphic graphs and their number of spanning trees with 10 vertices and 14 edges. The x axis is the number of spanning trees. The y axis is the number of non-isomorphic graphs with that number of spanning trees.

Figure A.15: A histogram of non-isomorphic graphs and their number of spanning trees with 11 vertices and 15 edges. The x axis is the number of spanning trees.
The y axis is the number of non-isomorphic graphs with that number of spanning trees.

Figure A.16: t-optimal (8, 18) graphs: (a) a t-optimal (8, 18) graph; (b) another t-optimal (8, 18) graph.

Figure A.17: t-optimal (10, 27) graphs: (a) a t-optimal (10, 27) graph; (b) another t-optimal (10, 27) graph.

Figure A.18: t-optimal (11, 18) graphs: (a) a t-optimal (11, 18) graph; (b) another t-optimal (11, 18) graph.

Figure A.19: t-optimal (11, 42) graphs: (a) a t-optimal (11, 42) graph; (b) another t-optimal (11, 42) graph.

Figure A.20: t-optimal (12, 22) graphs: (a) a t-optimal (12, 22) graph; (b) another t-optimal (12, 22) graph.

Figure A.21: The result of applying Algorithm 1 0 times to the Petersen graph.

Figure A.22: The result of applying Algorithm 1 1 time to the Petersen graph.

Figure A.23: The result of applying Algorithm 1 2 times to the Petersen graph.

Figure A.24: The result of applying Algorithm 1 3 times to the Petersen graph.

Figure A.25: The result of applying Algorithm 1 4 times to the Petersen graph.

Figure A.26: The result of applying Algorithm 1 5 times to the Petersen graph.

Figure A.27: The result of applying Algorithm 1 6 times to the Petersen graph.

Figure A.28: The result of applying Algorithm 1 7 times to the Petersen graph.

Figure A.29: The result of applying Algorithm 1 8 times to the Petersen graph.
Figure A.30: The result of applying Algorithm 1 9 times to the Petersen graph.

Figure A.31: The result of applying Algorithm 1 10 times to the Petersen graph.

Figure A.32: The result of applying Algorithm 1 11 times to the Petersen graph.

Figure A.33: The result of applying Algorithm 1 12 times to the Petersen graph.

Figure A.34: The result of applying Algorithm 1 13 times to the Petersen graph.

Figure A.35: The result of applying Algorithm 1 14 times to the Petersen graph.

Figure A.36: The result of applying Algorithm 1 15 times to the Petersen graph.

Appendix B

Tables

m \ n     3     4     5     6     7     8      9     10     11     12
 3        3
 4              4
 5              8     5
 6             16    12     6
 7                   24    16     7
 8                   45    36    21      8
 9                   75    81    51     27      9
10                  125   135   117     72     33     10
11                        225   231    168     96     40     11
12                        384   432    392    240    128     48     12
13                        576   720    720    560    328    160     56
14                        864  1200   1280   1200    800    448    200
15                       1296  1840   2304   2223   2000   1120    600
16                             2800   4096   4032   3672   2800   1568
17                             4200   6144   7168   7000   6020   3920
18                             6125   9216  12480  12584  11760   9800
19                             8575  13825  19760  22720  21952  19208
20                            12005  21025  32000  40960  40365  37632

Table B.1: A table of some of the results we obtained. (I) The entries are the values of t(n, m).
m \ n        7        8        9        10        11        12
21       16807    30000    48000     64125     72657     68992
22                42000    72000    101250    130691    127764
23                58800   105000    159375    208320    233037
24                82944   148625    250000    332800
25               110592   210125    390625    521284
26               147456   295488    546875    810000
27               196608   419904    765625   1260000
28               262144   559872   1071875   1878000
29                        746496   1500625   2740500
30                       1000188   2116800   4050000
31                       1333584   2861568   5670000
32                       1750329   3852576   7938000
33                       2250423   5186160  11113200
34                       2893401   6914880  14929920
35                       3720087   9219840  20030976
36                       4782969  11908960  26873856
37                                15366400  35831808
38                                19756800  48185669
39                                25401600  63518455
40                                32768000  82824896
41                                40960000 106489152
42                                51200000 136914624
43                                64000000 176046080
44                                80000000 227127296
45                               100000000 285474816
46                                         356843520
47                                         446054400
48                                         558892224
49                                         698615280
50                                         864536409
51                                        1056655611
52                                        1291467969
53                                        1578460851
54                                        1929229929
55                                        1937019605

Table B.2: A table of some of the results we obtained. (II) The entries are the values of t(n, m).

Table B.3: Tuples of chain lengths, number of spanning trees, and diagram figures for t-optimal graphs with distillation and labeling as found in Figure A.5.

Bibliography

[1] B. D. McKay, nauty user's guide (version 1.5), Technical Report TR-CS-90-02, Computer Science Department, Australian National University, 1990.

[2] D. Cvetkovic, M. Doob, and H. Sachs, Spectra of Graphs. VEB Deutscher Verlag der Wissenschaften, Berlin, 1982.

[3] A. K. Kel'mans, "The number of trees in a graph I," Automation and Remote Control, translated from Avtomatika i Telemekhanika (Russian), vol. 26, no. 12, pp. 2194-2204, 1965.

[4] D. R.
Shier, "Maximizing the number of spanning trees in a graph with n nodes and m edges," J. Res. Nat. Bur. Stand., Sect. B, vol. 78, pp. 193-196, 1974.

[5] A. K. Kel'mans and V. M. Chelnokov, "A certain polynomial of graphs and graphs with an extremal number of trees," Journal of Combinatorial Theory, Series B, vol. 16, pp. 197-214, 1974.

[6] L. Petingi, F. Boesch, and C. Suffel, "On the characterization of graphs with maximum number of spanning trees," Discrete Mathematics, 1997.

[7] L. Petingi, On the characterization of graphs with maximum number of spanning trees. PhD thesis, Stevens Institute of Technology, Hoboken, NJ, 1991.

[8] L. Petingi and J. Rodriguez, "A new technique for the characterization of graphs with a maximum number of spanning trees," Special Issue on Algebraic and Topological Methods in Graph Theory, Discrete Mathematics, vol. 244, pp. 351-373, 2002.

[9] F. T. Boesch, X. Li, and C. Suffel, "On the existence of uniformly optimally reliable networks," Networks, vol. 21, pp. 181-194, 1991.

[10] D. B. West, Introduction to Graph Theory. Prentice Hall, Upper Saddle River, 1996.

[11] W. Feussner, "Zur Berechnung der Stromstärke in netzförmigen Leitern," Ann. Phys., vol. 15, pp. 385-394, 1904.

[12] J. F. Wang and M. H. Wu, "Network reliability analysis: On maximizing the number of spanning trees," Proceed. Nat. Sci. Coun. Repub. China, Part A: Phys. Sci. Eng., vol. 11, no. 3, pp. 193-196, 1987.

[13] S. S. Tseng and L. R. Wang, "Maximizing the number of spanning trees of networks based on cycle basis representation," Int. J. Comput. Math., vol. 28, pp. 49-56, 1989.

[14] G. Wang, "A Proof of Boesch's Conjecture," Networks, vol. 24, pp. 277-284, 1994.

[15] B. Gilbert and W. Myrvold, "Maximizing spanning trees in almost complete graphs," Networks, vol. 30, pp. 23-30, 1997.

[16] A. Cayley, "A theorem on trees," Quarterly Journal of Mathematics, vol. 23, pp. 376-378, 1889.

[17] G. Baron, H. Prodinger, R. F. Tichy, F. T. Boesch, and J. F.
Wang, "The number of spanning trees in the square of a cycle," The Fibonacci Quarterly, pp. 258-264, August 1985.

[18] Z. Lonc, K. Parol, and J. M. Wojciechowski, "On the asymptotic behavior of the maximum number of spanning trees in circulant graphs," Networks, vol. 30, pp. 47-56, January 1997.

[19] C.-S. Cheng, "Optimality of certain asymmetrical experimental designs," Annals of Statistics, vol. 6, no. 6, pp. 1239-1261, 1978.

[20] C.-S. Cheng, "Maximizing the total number of spanning trees in a graph: Two related problems in graph theory and optimum design theory," Journal of Combinatorial Theory, Series B, vol. 31, pp. 240-248, 1981.

[21] L. Petingi and J. Rodriguez, "On the conjecture that among all graphs on n nodes and e edges, one with maximum number of spanning trees must be almost-regular," Congressus Numerantium, vol. 136, pp. 207-214, 1999.

[22] A.-H. Esfahanian, L. M. Ni, and B. Sagan, "The twisted n-cube with application to multiprocessing," IEEE Transactions on Computers, vol. 40, pp. 88-94, January 1991.

[23] A. Chen and A.-H. Esfahanian, "A demography of t-optimal (n, m) graphs where n ≤ 12," Proceedings of The 2005 International Conference on Algorithmic Mathematics and Computer Science, 2nd Edition, 2005.

[24] A. Chen and A.-H. Esfahanian, "A demography of t-optimal (n, m) graphs where n ≤ 12," Tech. Rep. MSU-CSE-04-50, Computer Science and Engineering, Michigan State University, East Lansing, Michigan, December 2004.