ON SOME ASPECTS OF CLUSTER ALGEBRAS AND COMBINATORIAL HOPF ALGEBRAS By John Machacek A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Mathematics — Doctor of Philosophy 2018 ON SOME ASPECTS OF CLUSTER ALGEBRAS AND COMBINATORIAL HOPF ABSTRACT ALGEBRAS By John Machacek This dissertation deals with problems in cluster algebras and combinatorial Hopf algebras. Total positivity has been closely related to cluster algebras since their inception. Postnikov’s totally nonnegative Grassmannian is a concrete example of total positivity with rich combi- natorics. Our first problem is the computation of Pl¨ucker coordinates inside a generalization of the totally nonnegative Grassmannian. We provide a combinatorial formula in terms of edge weighted directed graphs embedded on a surface. The next problem we consider is the equality of a cluster algebra and its upper cluster algebra. Particular attention is paid to the coefficient ring of the cluster algebra. We give a sufficient condition for the cluster algebra and upper cluster algebra to coincide while allowing greater generality of coefficient ring than was previous known. The final problem we consider in cluster algebras is showing that log-canonical coordinates are as simple as possible (in a certain precise sense). Log-canonical coordinates are a fundamental part of the Poisson geometry approach to cluster algebras put forth by Gekhtman, Shapiro, and Vainshtein. In the theory of combinatorial Hopf algebras we compute a formula for the antipode in a Hopf algebra on simplicial complexes. This antipode formula generalizes Humpert and Martin’s formula for graphs. We then use the character theory of Aguiar, Bergeron, and Sottile to realize a version of Stanley’s chromatic symmetric function for simplicial complexes. We prove that the degree sequence of a uni- form hypertree can be recovered from its chromatic symmetric function. We also show the chromatic symmetric function is not a complete invariant for uniform hypertrees. To Melinda Pearl iii ACKNOWLEDGMENTS I thank the many professors, colleagues, and staff who have helped make my time at Michigan State University enjoyable and productive. A special thanks to my collaborators Carolina Benedetti, Eric Bucher, Josh Hallam, Nick Ovenhouse, and Michael Shapiro. This disserta- tion contains work joint with each of them. I thank my committee members Rajesh Kulkarni, Bruce Sagan, and Linhui Shen. A special thanks to Rajesh Kulkarni for leading the reading of toric varieties prior to my comprehensive exam. Also I wish to give a special thanks to Bruce Sagan for the graduate combinatorics course I took from him as well as many helpful conversations. I would like to express my most sincere gratitude to my advisor Michael Shapiro. It has been a pleasure and honor to work with you these past years. Your many forms of support have been invaluable. I thank my parents and grandparents for encouraging my education. I thank my wife Kalia for her support and for joining me on this journey. iv TABLE OF CONTENTS LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii Chapter 1 Cluster algebra overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1 Total positivity and scattering amplitudes . . . . . . . . . . . . . . . . . . 
1.2 Coordinate rings and upper cluster algebras . . . 3
1.3 Cluster algebras and Poisson geometry . . . 4
Chapter 2 The Nonnegative Grassmannian . . . 6
2.1 Positroids . . . 7
2.2 Boundary measurement . . . 9
Chapter 3 Cluster Algebras . . . 12
3.1 Defining mutation and cluster algebras . . . 12
3.2 The Laurent phenomenon and the upper cluster algebra . . . 14
3.3 Cluster algebras of geometric type . . . 16
3.4 An example cluster algebra . . . 18
Chapter 4 Poisson geometry . . . 22
4.1 Poisson algebras and varieties . . . 22
4.2 Poisson brackets compatible with cluster algebras . . . 28
4.3 Poisson geometry for networks of the disk . . . 29
Chapter 5 Boundary Measurement . . . 32
5.1 Generalized boundary measurement definition . . . 32
5.1.1 Weighted Path Matrices . . . 33
5.1.2 Boundary Measurement Matrices . . . 35
5.2 Boundary Measurement Independence . . . 41
5.3 Signing Perfectly Oriented Networks . . . 46
5.4 A Formula for Plücker Coordinates . . . 56
5.5 The Gauge Action and Uniqueness of Signs . . . 60
Chapter 6 Upper cluster algebras and ground rings . . . 66
6.1 Locally isolated and Locally acyclic cluster algebras . . . 66
6.2 The Cremmer-Gervais example . . . 72
6.3 Relationship with reddening . . . 73
Chapter 7 Log-Canonical Coordinates . . . 76
7.1 Nonexistence of Constant Bracket . . . 77
7.2 Nonexistence of Linear Bracket . . . 83
7.3 Generalizations . . . 85
Chapter 8 Combinatorial Hopf Algebras . . . 90
8.1 Hopf algebras . . . 90
8.2 Combinatorial Hopf algebras . . . 92
Chapter 9 Simplicial Complexes . . . 97
9.1 The Hopf algebra structure . . . 97
9.2 A cancellation-free formula for the antipode . . . 99
Chapter 10 The chromatic symmetric function . . . 107
10.1 Coloring in simplicial complexes . . . 107
10.2 Acyclic orientations and evaluations . . . 111
10.3 The f-vector . . . 113
10.4 Hypertrees . . . 115
BIBLIOGRAPHY
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 vi LIST OF TABLES Table 9.1: Information to compute the antipode of Γ. . . . . . . . . . . . . . . . . . 104 Table 10.1: Vertices and facets of the complexes X, Y, Z, and W. . . . . . . . . . . . . 115 vii LIST OF FIGURES Figure 2.1: An edge weighted directed graph on the disk. . . . . . . . . . . . . . . . 10 Figure 3.1: A quiver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Figure 3.2: A quiver on the left and the resulting quiver after mutation at 1 on the right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Figure 3.3: The two seeds for the cluster algebra structure on Gr(2, 4). . . . . . . . . 21 Figure 4.1: Coordinates on (R \ {0})3 v. . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 5.1: An example of the edge order induced by the boundary vertex order. Here we say e(cid:48) < e(cid:48)(cid:48) given the boundary vertices are ordered as usual with 1 < 2 < 3 < 4. The direction of the edges is irrelevant for the induced edge ordering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 35 Figure 5.2: Smoothing a piecewise smooth curve at a vertex. . . . . . . . . . . . . . 36 Figure 5.3: On the left we have a punctured fundamental polygon for S a torus with two boundary components with the cut between boundary components shown as a dashed line. On the right we have a closed curve around the boundary of T drawn inside the fundamental domain. . . . . . . . . . . . 38 Figure 5.4: Two networks on the annulus each with a different choice of cut. . . . . . 42 Figure 5.5: An example of a closed curve C on the torus and the corresponding curves ˆC, C(cid:48) and C(cid:48)(cid:48). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Figure 5.6: A perfectly oriented network on the disk. . . . . . . . . . . . . . . . . . . 46 Figure 5.7: Splitting a white vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Figure 5.8: Splitting a black vertex in the case that i(cid:48)0 is a sink and i(cid:48)(cid:48)0 is a source . . Figure 5.9: Closing paths in ˜N . The case i ≺ i0 ≺ j is shown on the right while the case j ≺ i0 ≺ i is shown on the left. . . . . . . . . . . . . . . . . . . . . . Figure 5.10: Splitting a black vertex in the case that i(cid:48)0 is a source and i(cid:48)(cid:48)0 is a sink . . 51 53 55 Figure 5.11: A network which is not perfectly oriented . . . . . . . . . . . . . . . . . 55 viii Figure 5.12: Crossings on the disk and the annulus. Here boundary vertices are ordered . . . . . . . . . . . . . . . . . . . . . . . . . . . i1 < i2 < π(i1) < π(i2). 58 Figure 6.1: An acyclic source freezing quiver . . . . . . . . . . . . . . . . . . . . . . 72 Figure 6.2: The quiver Q(CG, 3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Figure 6.3: The reduced Banff algorithm applied to the quiver Q(CG, 3). . . . . . . 74 Figure 6.4: A framed quiver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 Figure 6.5: A maximal green sequence. . . . . . . . . . . . . . . . . . . . . . . . . . 75 Figure 6.6: The mutable part of the quiver Q(CG, 3). . . . . . . . . . . . . . . . . . 75 Figure 9.1: OneSkelEx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Figure 9.2: antipode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 Figure 10.1: A 2-coloring of Γ in (a) and a 3-coloring of Γ in (b). . . . . . . . . . . . . 
108 Figure 10.2: Z (left) has 2-simplicies 123, 124, 134, 234 and Y (right) has 2-simplices 123, 124, 134, 235.114 Figure 10.3: The hypertree H1 above and the hypertree H2 below which are not iso- morhpic but have the same chromatic symmetric function. . . . . . . . . 123 ix Chapter 1 Cluster algebra overview This document consists of work in various areas of cluster algebras and combinatorial Hopf algebras. The overview in this chapter discusses the cluster algebra portion of this disserta- tion. The original research contributions in the area of cluster algebra theory are contained in Chapters 5, 6, and 7. Chapters 1–4 serve as background material putting the original results into context. Chapter 2 reviews Postnikov’s totally nonnegative Grassmannian. Necessary notions in the theory of cluster algebras will be defined in Chapter 3. Poisson structures compatible with cluster algebra structures will be discussed in Chapter 4. We now explain some of the larger context of the particular results which will appear later. This chapter will be a general big picture overview. Any necessary definitions and precise statements will be given later. Fomin and Zelevinsky’s cluster algebras [FZ02] provide a framework to study a commu- tative algebra by dividing its generators into groups called clusters. Cluster algebras have an axiomatic definition through a process known as mutation which transforms one cluster to another. At first glance the axioms of a cluster algebra may seem artificial, but they are in fact natural. Cluster algebras were originally defined to study the canonical basis in Lusztig’s theory of total positivity [Lus94, Lus98]. In addition to the original goal of studying the canonical basis in Lusztig’s theory of total positivity and explaining relations between generators in certain coordinate rings, cluster algebra structures have since been 1 found to naturally appear many places in algebra, geometry, and physics. A (nonexhaus- tive) list of places where mutation and cluster algebras show up in other areas include: Seiberg duality in quiver gauge theories [Sei95], representation theory of quivers [BM06], Higher Teichm¨uller theory [FG06], Poisson geometry [GSV10], and computation of scatter- ing amplitudes [GGS+14]. We now further expand on the aspects of cluster algebras we will focus on. 1.1 Total positivity and scattering amplitudes Postnikov’s totally nonnegative Grassmannian [Pos06] is a particular example of Lusztig’s more general theory of total nonnegativity. The totally nonnegative Grassmannian comes with a rich combinatorial structure. One aspect of this combinatorial structure is a pa- rameterization of the totally nonnegative Grassmannian by edge weighted directed graphs embedded on a disk. These edge weighted directed graphs give rise to another combinatorial object, called an alternating strand diagram or Postnikov diagram, which can be used to de- fine a cluster algebra structure on the homogenous coordinate ring of a Grassmannian [Sco06]. These graphs have also become useful in high energy physics for the computation of scat- tering amplitudes [AHBC+16]. We will study a related construction which considers edge weighted directed graphs embedded on an orientable surface in Chapter 5. The main content of Chapter 5 is a proof of a conjecture, appearing the physics litera- ture, on a combinatorial formula for minors of a matrix associated to edge weight directed graphs embedded on an orientable surface [FGPW15]. 
Physicists have proposed a construction generalizing Postnikov's parameterization of the totally nonnegative Grassmannian by replacing the disk with any orientable surface. It is hoped this more general construction will further aid in the computation of scattering amplitudes. For any orientable surface we show that there exists a combinatorial formula for Plücker coordinates which is similar to Talaska's formula in the case of the disk [Tal08]. Our proof makes use of Talaska's generalization of the Lindström-Gessel-Viennot lemma [Tal12]. We also remark that prior to the generalization to any orientable surface in the physics literature, Postnikov's parameterization of the totally nonnegative Grassmannian was first generalized to edge weighted directed graphs on the annulus in the context of Poisson geometry [GSV08].

1.2 Coordinate rings and upper cluster algebras

Cluster structures can be observed in coordinate rings of many natural algebraic varieties. Some examples include Grassmannians [Sco06] and double Bruhat cells of complex simple Lie groups [BFZ05]. A closely related object called the upper cluster algebra can, as it does for double Bruhat cells [BFZ05, Theorem 2.10], coincide with the (homogeneous) coordinate ring of the variety. The cluster algebra is contained in the corresponding upper cluster algebra, and in general this can be a proper containment. When we have equality of the cluster algebra and upper cluster algebra, such coordinate rings can be approached by utilizing the explicit combinatorial description of the generators of the cluster algebra. So, determining when we have equality of the cluster algebra and upper cluster algebra is an important problem in the theory.

The question of equality of the cluster algebra and upper cluster algebra is considered in Chapter 6. An important choice in defining a cluster algebra is deciding which ground ring to generate the cluster algebra over. When considering cluster algebra structures on a coordinate ring, the choice of ground ring affects whether the cluster algebra structure is actually on the coordinate ring of the variety or instead on the coordinate ring of an open subset. In their paper introducing upper cluster algebras, Berenstein, Fomin, and Zelevinsky showed, with a coprimeness assumption, that there is equality of the cluster algebra and upper cluster algebra for acyclic cluster algebras [BFZ05, Corollary 1.19]. Muller has shown that this coprimeness assumption is not needed and developed the theory of locally acyclic cluster algebras [Mul13, Mul14]. A locally acyclic cluster algebra is known to coincide with its upper cluster algebra for a certain choice of ground ring. Showing a cluster algebra is locally acyclic is a practical way to show it is equal to its upper cluster algebra. An example application is Muller and Speyer's result which states that the cluster algebra structure on the homogeneous coordinate ring of the Grassmannian is locally acyclic [MS16]. This can be used to conclude that the cluster algebra coincides with the upper cluster algebra for the coordinate ring of an open subvariety known as a positroid variety. In Chapter 6 we address how the choice of ground ring affects the question of equality of the cluster algebra and upper cluster algebra. A condition for when there is equality of the cluster algebra and upper cluster algebra is given by using a variation of Muller's theory of cluster localization for more general ground rings.
An explicit example exhibiting dependence on the ground ring is also provided.

1.3 Cluster algebras and Poisson geometry

Gekhtman, Shapiro, and Vainshtein have created an approach to the theory of cluster algebras considering certain compatible Poisson brackets [GSV10]. This approach makes use of Poisson brackets of a certain form called log-canonical. Given a cluster algebra, the log-canonical Poisson brackets give a "nice" coordinate system on an associated geometric object. In Chapter 7 we provide an answer to how nice this coordinate system is. It turns out that, in a certain precise sense, it is the "nicest" possible coordinate system when one is only allowed a rational change of coordinates. In light of this we can consider log-canonical coordinates as an algebraic analog of Darboux coordinates from symplectic geometry.

Chapter 2

The Nonnegative Grassmannian

This chapter reviews the totally nonnegative Grassmannian and Postnikov's boundary measurement parameterization of the totally nonnegative Grassmannian [Pos06] via edge weighted directed graphs embedded on the disk.

We let Gr(k, n) denote the real Grassmannian of k-dimensional subspaces of R^n. Each subspace V ∈ Gr(k, n) can be represented by a k × n matrix A of rank k such that the rows of A span V. In this case we write V = [A]. The matrix A is unique up to left multiplication by elements of the general linear group. Thus,

Gr(k, n) = GL_k \ {k × n matrices of rank k}

and we obtain the Plücker embedding

Gr(k, n) → P^{\binom{n}{k} − 1},   [A] ↦ (∆_I(A))_I

where ∆_I(A) denotes the maximal minor with columns indexed by a k-element subset I of [n]. A minor ∆_I(A) is called a Plücker coordinate. The totally nonnegative Grassmannian, denoted Gr_{≥0}(k, n), consists of all V ∈ Gr(k, n) such that there exists A where V = [A] and ∆_I(A) is nonnegative for each I. Postnikov introduced the totally nonnegative Grassmannian in the seminal preprint [Pos06]. There is a wealth of combinatorial objects which play important roles in the study of Gr_{≥0}(k, n) including: positroids, decorated permutations, directed networks, and more. We will be mostly interested in the boundary measurement of directed networks. Boundary measurement will be introduced on the disk in Section 2.2 and Chapter 5 will focus on a generalization of boundary measurement. First we will describe positroids and certain stratifications of the Grassmannian in Section 2.1.

2.1 Positroids

Let \binom{[n]}{k} denote the collection of k-element subsets of [n] = {1, 2, . . . , n}. A matroid of rank k on the set [n] is a subset M ⊆ \binom{[n]}{k} such that for all I, J ∈ M and i ∈ I there exists j ∈ J for which (I \ {i}) ∪ {j} ∈ M. Matroids generalize the concept of linear independence in vector spaces. We will be particularly interested in matroids that come from linear independence in real vector spaces. Given a full rank k × n real matrix A we obtain a matroid of rank k on [n] known as an R-linear matroid defined by

M_A := {I : ∆_I(A) ≠ 0}.

Considering the columns v_1, v_2, . . . , v_n of A as vectors in R^k, a subset I ∈ M_A if and only if {v_i : i ∈ I} is a basis of R^k. Given any V ∈ Gr(k, n) we define M_V := M_A if V = [A]. Note this matroid is well-defined since A is unique up to multiplication by elements of the general linear group, and multiplication by an invertible matrix will not change the underlying matroid. The Grassmannian then has the matroid stratification

Gr(k, n) = ⋃_M S_M

where the matroid strata are S_M = {V ∈ Gr(k, n) : M_V = M}.
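The definition of M_A lends itself to direct computation. The following short Python sketch is our own illustration (using sympy for exact determinants; the function name r_linear_matroid is ours and not notation from this dissertation) and simply lists the nonvanishing maximal minors of a matrix; Example 2.1.1 below carries out the same computation by hand.

import itertools
import sympy as sp

def r_linear_matroid(A):
    """Return the R-linear matroid M_A = {I : Delta_I(A) != 0} of a full rank k x n matrix A."""
    k, n = A.shape
    bases = set()
    for I in itertools.combinations(range(1, n + 1), k):
        # Delta_I(A) is the maximal minor on the columns indexed by I (1-based labels).
        if A[:, [i - 1 for i in I]].det() != 0:
            bases.add(I)
    return bases

# The matrix appearing in Example 2.1.1 below:
A = sp.Matrix([[1, 0, 1, 0],
               [0, 1, 0, 1]])
print(r_linear_matroid(A))  # {(1, 2), (1, 4), (2, 3), (3, 4)}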
Example 2.1.1 (A real linear matroid). If we consider the matrix

A = [ 1  0  1  0 ]
    [ 0  1  0  1 ]

we then have M_A = {{1, 2}, {2, 3}, {3, 4}, {1, 4}}.

A matroid M is called a positroid if M = M_A for a matrix A which has ∆_I(A) ≥ 0 for all I. We then get a decomposition of the totally nonnegative Grassmannian

Gr_{≥0}(k, n) = ⋃_{positroids M} S^{≥0}_M

where S^{≥0}_M = S_M ∩ Gr_{≥0}(k, n). The next example shows that not all matroids are positroids, or equivalently it shows that S_M ∩ Gr_{≥0}(k, n) may be empty.

Example 2.1.2 (Non-positroid). We claim the R-linear matroid from Example 2.1.1

M = {{1, 2}, {2, 3}, {3, 4}, {1, 4}}

of rank 2 on [4] is not a positroid. If M were a positroid, we would have to have M = M_A for

A = [ 1  0  a  0 ]
    [ 0  1  0  b ]

with a, b ∈ R satisfying a < 0, b > 0, and ab > 0, which is impossible.

Each S^{≥0}_M is homeomorphic to an open ball, and the decomposition of Gr_{≥0}(k, n) into the cells S^{≥0}_M is a CW-complex [Pos06, Theorem 3.5]. Postnikov's approach of giving a CW-structure on Gr_{≥0}(k, n) agrees with a more general construction of cells in the theory of total positivity given by Marsh and Rietsch [MR04]. The positroid envelope of a matroid M is denoted by E(M) and is the unique smallest positroid containing the matroid [KLS13, Section 3]. Given a positroid M the corresponding open positroid variety is

Π°(M) := ⋃_{M' : E(M') = M} S_{M'}.

This decomposition groups matroid strata whose matroids have the same positroid envelope. We include this discussion of open positroid varieties because it will be relevant later when we consider cluster algebras and upper cluster algebras as coordinate rings in Section 3.4 and Chapter 6.

2.2 Boundary measurement

Postnikov [Pos06] has shown how to construct a matrix A such that [A] = V for each V ∈ Gr_{≥0}(k, n). If V ∈ Gr_{≥0}(k, n), then the matrix A is constructed by considering an edge weighted directed graph embedded on the disk with n boundary vertices, exactly k of which are sources. Such graphs on other surfaces will be considered in Chapter 5. We now briefly illustrate boundary measurement on the disk with an example.

Figure 2.1: An edge weighted directed graph on the disk.

We consider a directed graph G, modulo homotopy, embedded on a disk such that G has n vertices on the boundary of the disk which are labeled by K = {1, 2, . . . , n} in counterclockwise order. Each boundary vertex is either a source or sink. An example of such a graph can be found in Figure 2.1.

Remark 2.2.1. The directed graph in Figure 2.1 is an example of what is known as a perfectly oriented network. The definition of a perfectly oriented network will be given in Chapter 5. Also, the coloring of the vertices in Figure 2.1 will be explained in Chapter 5.

Let I ⊆ K denote the set of boundary sources. So, I = {1, 3} and K = {1, 2, 3, 4} for the graph in Figure 2.1. The weight of a directed path is taken to be the product of the weights of the edges in the path. Assume G has no directed cycles. (This assumption is not necessary; we make it now for simplicity of presentation. Directed cycles will be allowed in the more general treatment in Chapter 5.) We then form the I × K boundary measurement matrix B(G) with entry B(G)_{ik} given by the sum of weights of all directed paths from i ∈ I to k ∈ K multiplied by (−1)^{s_{ik}}, where s_{ik} is the number of elements of I strictly between i and k. For the graph in Figure 2.1 we obtain the matrix
B = [ 1  x1x5x2  0  −x1x8x4 ]
    [ 0  x3x6x2  1   x3x7x4 ].

The first row of B records paths beginning at boundary vertex 1, while the second row of B records paths beginning at boundary vertex 3. We consider the empty path, which has weight equal to 1, to be the only path from a boundary source to itself. The only path from boundary vertex 1 to 2 has weight x1x5x2 and the only path from boundary vertex 1 to 4 has weight x1x8x4. We see x1x5x2 has a positive sign in B since s_{12} = 0 while x1x8x4 has a negative sign in B since s_{14} = 1. The matrix B is a full rank 2 × 4 matrix, and hence represents an element of Gr(2, 4). The Plücker coordinates (listed in lexicographic order) in this case are

(x3x6x2, 1, x3x7x4, x1x5x2, x1x2x3x4(x5x7 + x6x8), x1x8x4)

which all evaluate to nonnegative values when each x_i is given a nonnegative real value. We can observe that the inclusion of the signs (−1)^{s_{ij}} has, in this example, made all Plücker coordinates subtraction-free expressions. Boundary measurement matrices will be explored further in Chapter 5 where we study edge weighted directed graphs on surfaces other than the disk. Our main result in Chapter 5 will be a formula for the Plücker coordinates of these boundary measurement matrices which extends work of Talaska on the disk [Tal08].

Chapter 3

Cluster Algebras

In this chapter we give some background information on Fomin and Zelevinsky's cluster algebras [FZ02]. We will begin by defining the essential notion of mutation and then proceed to formally defining a cluster algebra. After defining a cluster algebra we will discuss the Laurent phenomenon, the upper cluster algebra, and cluster algebras of geometric type. Once we have established these key objects in the theory of cluster algebras we will give an example of a cluster algebra and upper cluster algebra.

3.1 Defining mutation and cluster algebras

Let P be a semifield. This means that P is a torsion-free abelian group whose operation is written multiplicatively. Additionally, P is equipped with an auxiliary addition ⊕, which is commutative, associative, and distributive over the multiplication of P. We now want to consider a ground ring, Z ⊆ A ⊆ ZP. This is sometimes called the coefficient ring of the cluster algebra. Let F be a field which contains ZP. A seed of rank n in F is a triple (x, y, B) consisting of three parts.

• The cluster x = (x1, x2, . . . , xn) is an n-tuple in F which freely generates F as a field over the fraction field of ZP.

• The coefficients y = (y1, y2, . . . , yn) are an n-tuple in P.

• The exchange matrix B is an integral, skew-symmetrizable n × n matrix.

For a real number x we define [x]_+ := max{0, x}. A seed (x, y, B) may be mutated at an index 1 ≤ k ≤ n to produce a new seed (µ_k(x), µ_k(y), µ_k(B)). We say that the seed has been mutated in the direction k, meaning that there are n different mutations that can be applied to a given seed. The mutation is given by the following rules:

• µ_k(x) := (x1, x2, . . . , x_{k−1}, x'_k, x_{k+1}, . . . , xn), where

x'_k := ( y_k ∏_j x_j^{[B_{jk}]_+} + ∏_j x_j^{[−B_{jk}]_+} ) / ( (y_k ⊕ 1) x_k )

• µ_k(y) := (y'_1, y'_2, . . . , y'_n), where

y'_i := { y_k^{−1}                                if i = k
        { y_i y_k^{[B_{ki}]_+} (y_k ⊕ 1)^{−B_{ki}}  if i ≠ k

• µ_k(B) is defined by

µ_k(B)_{ij} = { −B_{ij}                                      if i = k or j = k
             { B_{ij} + (|B_{ik}| B_{kj} + B_{ik} |B_{kj}|)/2  otherwise

Notice that µ_k^2 is the identity. We say that two seeds are mutation equivalent if one can be obtained by a sequence of mutations, up to permuting the indices of the seed.
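The matrix mutation rule is simple enough to implement directly. The following Python sketch is our own illustration, not code from this dissertation; the function name mutate is ours, and indices are 0-based rather than the 1-based indices used above. It computes µ_k(B) for a skew-symmetrizable integer matrix and checks that mutating twice at the same index returns the original matrix, as noted above.

def mutate(B, k):
    """Return mu_k(B), the mutation of the exchange matrix B (a list of lists) at index k."""
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # |B_ik| B_kj + B_ik |B_kj| is always even, so integer division is exact.
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

B = [[0, 1, 0],
     [-1, 0, 1],
     [0, -1, 0]]
print(mutate(B, 0))                  # [[0, -1, 0], [1, 0, 1], [0, -1, 0]]
assert mutate(mutate(B, 0), 0) == B  # mutation at a fixed index is an involution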
Now that we have established the fundamental definitions of seeds and mutation we can define a cluster algebra. Given a seed (x, y, B) we will call the union of the clusters of all seeds which are mutation equivalent to (x, y, B) the set of cluster variables in F. The cluster algebra, A_A(x, y, B), is the unital A-subalgebra of F generated by the cluster variables. Notice that since we are allowed to freely mutate when generating the cluster variables, two mutation equivalent seeds will generate the same cluster algebra. For this reason it is common to leave the seed out of the notation and simply refer to the algebra as A_A, or just A when the choice of ground ring is clear.

3.2 The Laurent phenomenon and the upper cluster algebra

The Laurent phenomenon [FZ02, Theorem 3.1] is a very important property of cluster algebras. It states that if our ground ring for A is ZP, then A is a subalgebra of the Laurent polynomial ring ZP[x_1^{±1}, x_2^{±1}, . . . , x_n^{±1}] for any cluster x. In the initial work of Fomin and Zelevinsky they give a more general Laurent phenomenon, and outline conditions that allow for the Laurent phenomenon to hold even if we work over a potentially smaller ground ring A ⊊ ZP [FZ02, Theorem 3.2]. Throughout this dissertation, when we discuss the cluster algebra over a ground ring A, we will assume that the Laurent phenomenon holds over A and that A contains all coefficients appearing in exchange relations. In this case we have

A ↪ A[x_1^{±1}, x_2^{±1}, . . . , x_n^{±1}]     (3.1)

and

y_i / (1 ⊕ y_i),  1 / (1 ⊕ y_i)  ∈  A     (3.2)

for any seed (B, x, y) of the cluster algebra.

Now we will consider an algebra closely related to A. This is the upper cluster algebra denoted by U or by U_A(B, x, y). The upper cluster algebra is defined by

U := ⋂_x A[x_1^{±1}, x_2^{±1}, . . . , x_n^{±1}]

where the intersection is taken over all clusters x. Since we have chosen a ground ring, A, where the criteria for the Laurent phenomenon are met, we see from the injections (3.1) for each seed that A ⊆ U. In fact there are many occurrences where A = U, which gives an alternative way of understanding the cluster algebra A. The choice of ground ring plays a substantial role in deciding whether or not we are in this nice situation. In Chapter 6 we will explore exactly how the choice of ground ring impacts whether A = U.

We now provide a proposition on normality generalizing [Mul13, Proposition 2.1] to our situation. Recall, an integral domain is called a normal domain if it is integrally closed in its field of fractions. A semifield P is always torsion-free, and thus any Z ⊆ A ⊆ ZP is an integral domain.

Proposition 3.2.1. If A is a normal domain, then U_A is a normal domain.

Proof. Assume A is a normal domain. From the definition it follows that the intersection of normal domains is again a normal domain. So, it suffices to show that A[x_1^{±1}, x_2^{±1}, . . . , x_n^{±1}] is a normal domain for any seed (B, x, y). This is true since any polynomial ring over a normal domain is a normal domain [Sta18, Lemma 10.36.8] and any localization of a normal domain is a normal domain [Sta18, Lemma 10.36.5].

3.3 Cluster algebras of geometric type

The tropical semifield Trop(z_1, z_2, . . . , z_m) is the free abelian group generated by z_1, z_2, . . . , z_m with auxiliary addition given by

∏_{i=1}^{m} z_i^{a_i} ⊕ ∏_{i=1}^{m} z_i^{b_i} = ∏_{i=1}^{m} z_i^{min(a_i, b_i)}.

A cluster algebra is said to be of geometric type if it is defined over a tropical semifield.
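Tropical semifield arithmetic only manipulates exponent vectors: multiplication adds them and the auxiliary addition takes componentwise minima. A minimal Python sketch (our own, purely illustrative) of this arithmetic:

def trop_mul(a, b):
    # Multiplication in Trop(z_1, ..., z_m): add exponent vectors.
    return tuple(x + y for x, y in zip(a, b))

def trop_add(a, b):
    # Auxiliary addition: take the componentwise minimum of the exponents.
    return tuple(min(x, y) for x, y in zip(a, b))

# In Trop(z1, z2): (z1^2 z2) (+) (z1 z2^3) = z1 z2 and (z1^2 z2)(z1 z2^3) = z1^3 z2^4.
print(trop_add((2, 1), (1, 3)))  # (1, 1)
print(trop_mul((2, 1), (1, 3)))  # (3, 4)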
For a cluster algebra of geometric type, ZP = Z[z_1^{±1}, z_2^{±1}, . . . , z_m^{±1}] is the Laurent polynomial ring in z_1, z_2, . . . , z_m. We are particularly interested in the polynomial ground ring Z[z_1, z_2, . . . , z_m], which we will denote by ZP_+. For cluster algebras of geometric type there is a sharpening of the Laurent phenomenon giving A ⊆ ZP_+[x_1^{±1}, x_2^{±1}, . . . , x_n^{±1}]. This sharpening of the Laurent phenomenon can be found in [FWZ, Theorem 3.3.6].

A quiver is a directed graph without loops or directed 2-cycles, but parallel arrows are allowed. When (B, x, y) is a seed for a cluster algebra of geometric type and the matrix B is skew-symmetric, we will often consider a quiver which is equivalent data to the seed (B, x, y). The n × n skew-symmetric matrix B is considered as a signed adjacency matrix of a quiver. That is, the quiver Q has B_{ji} arrows i → j, where a negative number of arrows corresponds to reversing their direction. The generators z_1, z_2, . . . , z_m of P correspond to additional vertices of Q called frozen vertices. Non-frozen vertices are known as mutable vertices. Mutable vertices of the quiver will be depicted as circles while frozen vertices are represented by squares. There are no arrows between frozen vertices. Arrows between mutable and frozen vertices are obtained by considering y. If y_i = z_1^{a_1} z_2^{a_2} ··· z_m^{a_m}, then the mutable vertex i has a_j arrows i → z_j. As an example, if P = Trop(z_1, z_2) while our seed has y = (z_1 z_2^{−1}, z_2^{−1}) and

B = [ 0  −2 ]
    [ 2   0 ]

the corresponding quiver Q would be the quiver pictured in Figure 3.1.

Figure 3.1: A quiver.

For a quiver Q we define quiver mutation at a vertex k to be the following process which produces a new quiver Q':

• For each oriented 2-path i → k → j add an arrow i → j unless i and j are both frozen.

• Reverse each arrow incident to k. That is, any arrow i → k is removed and replaced by k → i while any arrow k → i is also removed and replaced by i → k.

• Repeatedly remove pairs of arrows i → j, j → i forming a 2-cycle until there are no longer any such pairs of arrows.

The process of quiver mutation exactly matches the process of seed mutation in the sense that if Q corresponds to a seed (B, x, y) then Q' corresponds to the seed (B', x', y'). An example of quiver mutation can be found in Figure 3.2.

Figure 3.2: A quiver on the left and the resulting quiver after mutation at 1 on the right.
The only other mutation 14 ∆−1 equivalent seed is (B(cid:48), x(cid:48), y(cid:48)) with B(cid:48) = [0], x(cid:48) = (x(cid:48)), y = (∆−1 12 ∆−1 34 ∆14∆23). The cluster variables x and x(cid:48) satisfy the relation xx(cid:48) = ∆12∆34 + ∆14∆23 18 which is exactly the Pl¨ucker relation in Gr(2, 4) when we let x = ∆13 and x(cid:48) = ∆24. Hence, AZP+ = Z[∆12, ∆13, ∆14, ∆23, ∆24, ∆34]/(∆12∆34 − ∆13∆24 + ∆14∆23) which (after tensoring with the field) is the homogeneous coordinate ring of the Grassmannian Gr(2, 4). In this case our upper cluster algebra is UZP+ = Z[∆12, ∆14, ∆23, ∆34, ∆±1 13 ] ∩ Z ∆12, ∆14, ∆23, ∆34, (cid:34) (cid:18)∆12∆34 + ∆14∆23 (cid:19)±1(cid:35) ∆13 which satisfies AZP+ AZP = UZP is the homogeneous coordinate ring not of Gr(2, 4) but rather of the open If we work on the ground ring ZP instead we find that = UZP+ . positroid variety Π◦((cid:0)[4] 2 (cid:1)). Indeed we have AZP = Z[∆±1 12 , ∆13, ∆±1 14 , ∆±1 23 , ∆24, ∆±1 34 ]/(∆12∆34 − ∆13∆24 + ∆14∆23) and so AZP is of coordinate ring for (cid:91) M SM where the union is taken over all matroids M with {{1, 2},{1, 4},{2, 3},{3, 4}} ⊆ M. We saw in Example 2.1.2 that {{1, 2},{1, 4},{2, 3},{3, 4}} is not a positroid. It can be checked that the smallest positroid containing {{1, 2},{1, 4},{2, 3},{3, 4}} is(cid:0)[4] the coordinate ring of the open positroid variety Π◦((cid:0)[4] (cid:1)). (cid:1) and hence AZP is 2 2 The homogeneous coordinate ring of any Grassmannian is known to have a cluster algebra structure [Sco06]. It is also known that coordinate rings of open positroid varieties are cluster algebras which coincide with their upper cluster algebras [MS16]. This equality of cluster 19 algebra and upper cluster algebra can be shown using the theory of locally acyclic cluster algebras. In Section 6.1 we will further discuss locally acyclic cluster algebras. We conclude this section by explaining how the cluster algebra structure on the homoge- neous coordinate ring of Gr(2, 4) generalizes to Gr(2, n). The homogeneous coordinate ring of Gr(2, n) in generated by {∆ij : 1 ≤ i < j ≤ n} subject to the 3-term Pl¨ucker relations ∆ik∆j(cid:96) = ∆ij∆k(cid:96) + ∆i(cid:96)∆jk for 1 ≤ i < j < k < (cid:96) ≤ n. To realize the cluster algebra structure we consider a regular n-gon with vertices labeled by 1, 2, . . . , n in clockwise order. Clusters will correspond to triangulations of the polygon. We can draw a quiver associated to a triangulation so that mutable vertices are diagonals of the triangulation and frozen vertices are sides of the poly- gon. The cluster variable corresponding to the diagonal between i and j in the triangulation will be ∆ij. A tropical variable ∆i,i+1 corresponds to each side of the polygon (with indices taken modulo n). In this way we will identify veritices are the quiver and edges in the trian- gulation. Given two vertices of our quiver in the same triangle we place are arrow between them so that each triangle is a clockwise oriented 3-cycle (with arrows omitted between frozen vertices). In this way quiver mutation will exactly correspond to removing a diagonal and replacing it with the (unique) another possible diagonal in the quadrilateral created. The two quivers for Gr(2, 4) are shown in Figure 3.3 on top of the associated triangulations. 20 1 4 2 3 1 4 2 3 Figure 3.3: The two seeds for the cluster algebra structure on Gr(2, 4). 21 Chapter 4 Poisson geometry In this chapter we first define Poisson algebras, Poisson manifolds, and Poisson varieties. 
We then discuss an approach to studying cluster algebras using Poisson geometry where mutation appears as birational coordinate transformations respecting a Poisson structure.

4.1 Poisson algebras and varieties

Let P be an associative algebra. A Poisson bracket on P is a skew-symmetric bilinear map {·,·} : P × P → P such that for any a, b, c ∈ P both the Leibniz identity

{ab, c} = a{b, c} + {a, c}b

and the Jacobi identity

{a, {b, c}} + {b, {c, a}} + {c, {a, b}} = 0

hold. A Poisson algebra is a pair (P, {·,·}) where P is an associative algebra and {·,·} is a Poisson bracket.

Notice that {·,·} makes P a Lie algebra. So, we get the adjoint representation of P on itself sending a ∈ P to ad_a ∈ End(P), where ad_a(b) = {a, b}. Note that the Jacobi identity implies that ad_a is a Lie algebra derivation. Also observe that ad_a is a derivation of the associative algebra P by the Leibniz identity. If a ∈ P* is a unit, then the Leibniz identity implies that ad_{a^{−1}} = −a^{−2} ad_a. In particular, this implies that if {a, b} = 0 for some a ∈ P* and b ∈ P, then {a^{−1}, b} = 0.

Let M be a smooth manifold, and let C^∞(M) denote its algebra of smooth functions. A Poisson structure on M is a bracket {·,·} : C^∞(M) × C^∞(M) → C^∞(M) such that (C^∞(M), {·,·}) is a Poisson algebra. In this case we call (M, {·,·}) a Poisson manifold. For local coordinates (x_1, . . . , x_n) and f, g ∈ C^∞(M) the Poisson bracket is given by

{f, g} = Σ_{i,j=1}^{n} (∂f/∂x_i)(∂g/∂x_j) {x_i, x_j}     (4.1)

and so the bracket is completely determined by the \binom{n}{2} "structure functions" {x_i, x_j}, for i < j. Following [GSV10], a system of coordinates (x_1, . . . , x_n) is called log-canonical with respect to a Poisson bracket {·,·} if there is a matrix of scalars Ω = (ω_{ij}) (necessarily skew-symmetric) such that the structure functions are given by {x_i, x_j} = ω_{ij} x_i x_j. We note that this Poisson structure goes by many names in the literature. For example, it is called a diagonal Poisson structure in [LGPV13], Poisson n-space in [Oh06], and a semi-classical limit of quantum affine space in [GL09].

Example 4.1.1 (Standard Poisson-Lie). Consider the special linear group SL_n, with coordinates (matrix entries) x_{ij}. The standard Poisson-Lie structure on SL_n is the quadratic bracket given by

{x_{ij}, x_{kℓ}} = c^{ij}_{kℓ} x_{iℓ} x_{kj}

where the coefficients are given by

c^{ij}_{kℓ} = (1/2)(sign(k − i) + sign(ℓ − j)) = { 1/2  if k > i, j = ℓ or k = i, ℓ > j
                                                 { 1    if k > i, ℓ > j
                                                 { 0    if k > i, ℓ < j

For instance, when n = 2 we have SL_2 = { ( a b ; c d ) : ad − bc = 1 } with bracket relations:

{a, b} = (1/2)ab    {a, c} = (1/2)ac    {a, d} = bc
{c, d} = (1/2)cd    {b, d} = (1/2)bd    {b, c} = 0

If we consider the Borel subgroup of upper triangular matrices in SL_2,

B = { ( α β ; 0 α^{−1} ) },

then the bracket is given by {α, β} = (1/2)αβ. In particular, the standard Poisson-Lie structure gives a log-canonical bracket on the Borel subgroup.

In general, the local structure of Poisson manifolds is described by the following theorem of Weinstein.

Theorem ([Wei83]). Let M be a Poisson manifold, and p ∈ M. Then there exists a neighborhood U containing p with coordinates (x_1, y_1, . . . , x_r, y_r, z_1, . . . , z_s), such that the bracket takes the form

{x_i, x_j} = {y_i, y_j} = {x_i, z_j} = {y_i, z_j} = 0
{x_i, y_j} = δ_{ij}
{z_i, z_j} = ϕ_{ij}

where ϕ_{ij} ∈ C^∞(U) depend only on z_1, . . . , z_s, and ϕ_{ij}(p) = 0.

Example 4.1.2. If (M^{2n}, ω) is a symplectic manifold, then there is a standard Poisson structure induced by ω.
In this special case, Weinstein's theorem is the classical Darboux theorem which says that locally ω has the form

ω = Σ_{i=1}^{n} dx_i ∧ dy_i.

The local coordinates (x_1, y_1, . . . , x_n, y_n) are commonly called canonical coordinates or Darboux coordinates.

Note that on a smooth Poisson manifold with a log-canonical system of coordinates (x_1, . . . , x_n) the system of coordinates (y_1, . . . , y_n) = (log x_1, . . . , log x_n), defined on the open set where all x_i are positive, is similar to a system of canonical coordinates in the sense that the structure functions

{y_i, y_j} = {log x_i, log x_j} = ω_{ij}

are all constants. This is indeed the intuition behind the terminology log-canonical. From Theorem 7.1.8 it will follow that there does not exist any rational change of coordinates on any Zariski open subset such that the structure functions are constant in the new coordinates.

Similarly, let M be an algebraic variety and O(M) its algebra of regular functions. If there is a bracket making O(M) into a Poisson algebra, then we call (M, {·,·}) a Poisson variety. Suppose there is a system of coordinates (x_1, . . . , x_n) on some Zariski open subset of a Poisson variety M; then the bracket is given by Equation (4.1) just as in the smooth case (see for example [LGPV13] for details). We wish to investigate whether such a "simplification" of the structure functions is possible (analogous to the simplification in the Darboux/Weinstein Theorem, in the sense that all structure functions become lower degree polynomials), allowing only birational changes of coordinates. It is suggested/conjectured in [Van01] that there are no canonical coordinates in general for an arbitrary Poisson variety, but no specific counterexample has been demonstrated. In [GL11], it was shown that affine space with a log-canonical bracket is such a counterexample. We wish to demonstrate that this same example has the additional property that no rational change of coordinates can make the structure functions linear. The following example is given in [Van01] and demonstrates some of the nuances of the problem of finding canonical coordinates on an open set of a Poisson variety.

Example 4.1.3 ([Van01]). Consider affine space C^2 with coordinates (x, y) and Poisson bracket given by {x, y} = x. Viewing C^2 as a smooth manifold, there is a system of canonical local coordinates (log x, y) that is not algebraic. However, there is also (1/x, −xy), which is a system of canonical local coordinates that is algebraic. That is, it is a system of canonical coordinates consisting of rational functions in x and y defined on the Zariski-open subset {(x, y) : x ≠ 0} of the variety C^2. The example illustrates that there do exist Poisson varieties which admit a rational coordinate change on an open subset making the structure functions constant.

Example 4.1.4. More generally, consider C^2 with coordinates (x, y) and Poisson bracket given by {x, y} = x^a y^b for (a, b) ∈ N × N. The case (a, b) = (1, 1) gives a system of log-canonical coordinates. In all other instances, we can find a system of canonical coordinates as follows:

• If a ≠ 1 and b ≠ 1, then {x^{−(a−1)}, y^{−(b−1)}} = (a − 1)(b − 1) is a nonzero constant.

• If a = 1 and b ≠ 1, then {x^{−1}, xy^{−(b−1)}} = (b − 1) is a nonzero constant.

The case a ≠ 1 and b = 1 is similar using the fact that the bracket is antisymmetric. Both computations follow from the Leibniz identity; for instance, {x^{−(a−1)}, y^{−(b−1)}} = (a − 1)(b − 1) x^{−a} y^{−b} {x, y} = (a − 1)(b − 1). Note that the previous example is the special case when (a, b) = (1, 0).
Although the specific example (a, b) = (1, 0) does give a birational change of coordinates, this is not in general true for this family of examples. For instance, when either a or b is greater than 2, the inverse of the coordinate change is not a rational function. Thus for (a, b) (cid:54)= (1, 1) we can always find a pair of algebraically independent rational functions in two variables such that the bracket between these two functions is a nonzero constant. It is still unclear whether this example can be generalized to dimensions higher than 2. It will follow from Theorem 7.1.8 that (a, b) = (1, 1) is the unique exception to the existence of two rational functions with nonzero constant bracket between them. This begs the following interesting, and more general, question. Question 4.1.5. Given a Poisson bracket whose structure functions are all (homogeneous) polynomials of a given degree, when is it possible to find a birational change of coordinates making the structure functions (homogeneous) polynomials of a smaller degree? We will demonstrate in Chapter 7 that the quadratic log-canonical Poisson structure has 27 the property that there is no rational change of coordinates making the Poisson bracket linear or constant. 4.2 Poisson brackets compatible with cluster algebras In this section we will follow [GSV10] and discuss Poisson brackets compatible with cluster algebras of geometric type. Let F be the field of rational functions in n + m independent variables with rational coefficients. Consider a Poisson bracket {·,·} on F. In the context of cluster algebras and Poisson geometry we call functions f1, f2, . . . , fn+m log-canonical if there exists ωij ∈ Z such that {fi, fj} = ωijfifj for all 1 ≤ i, j ≤ n + m. This is a special case of our previous definition of log-canonical where now we want the scalars ωij to be integers. Let A be a rank n cluster algebra of geometric type over P = Trop(z1, z2, . . . , zm). Take F to be the ambient field of the cluster algebra. A Poisson bracket on F is called compatible with A if every extended cluster is log-canonical. Clearly the trivial Poisson bracket with {f, g} = 0 for all f, g ∈ A is always compatible. If the extended exchange matrix of A is full rank, then there exist many nontrivial compatible Poisson brackets [GSV10, Theorem 4.3]. Moreover, the collection of compatible Poisson brackets forms of vector space which can be completely described. A cluster structure in the field of rational functions of an algebraic variety is called regular if all cluster variables are regular functions. A main question is to construct explicitly a compatible regular cluster structure corresponding to a given variety equipped with an algebraic Poisson structure. For a simple complex Lie group, Belavin and Drinfled have given 28 a classification of Poisson-Lie structures coming from classical R-matrices [BD82]. The main conjecture of Gekhtman, Shapiro, and Vainshtein on cluster algebras and Poisson geometry states that for complex simple Lie groups the classification of regular cluster structures parallels the Belavin-Drinfled classification [GSV12, Conjecture 3.2]. A map ϕ : M → N between two Poisson manifolds (or Poisson varieties) is called a Poisson map if the pullback map ϕ∗ is a homomorphism of Poisson algebras. A Lie group G is called a Poisson-Lie group if the multiplication map G × G → G is a Poisson map. For further details, see [CP94]. 
The cluster structure on double Bruhat cells from [BFZ05] is known to correspond to a cluster structure compatible with the standard Poisson-Lie structure [GSV10, Chapter 4.3]. Other Poisson-Lie compatible cluster structures are called exotic. Thus an important problem in cluster algebras and Poisson geometry is to find exotic cluster structures. In Section 6.2 we will consider a particular exotic cluster structure which provides an example of the sensitivity of the A = U question to the choice of ground ring.

4.3 Poisson geometry for networks of the disk

In this section we explain the connection between the log-canonical Poisson structures which play a role in cluster algebras and the boundary measurement map that was defined in Chapter 2. We will focus our attention here on edge weighted directed graphs on the disk [GSV09], though we remark that Poisson structures also exist for edge weighted directed graphs on the annulus, which was the context for the first generalization of the boundary measurement map to a surface beyond the disk [GSV08]. In Chapter 5 we will consider a generalized version of the boundary measurement valid for any orientable surface.

As in [Pos06], we will use the term directed network N = (V, E) to refer to the directed graph N = (V, E) along with edge weights. For any directed network N we defined its boundary measurement matrix B(N) in Chapter 2. We will now describe Poisson structures on the space of edge weights and on matrices so that the map associating a collection of edge weights to the boundary measurement matrix evaluated at those edge weights is a Poisson map. Here we will only consider directed networks such that each internal vertex is trivalent and neither a source nor sink, and each boundary vertex is univalent. Internal vertices are of two possible types: either incident on one incoming edge and two outgoing edges, or incident on two incoming edges and one outgoing edge.

Remark 4.3.1. We can restrict to such directed networks without loss of generality. Postnikov [Pos06] has shown how to transform any directed network, without changing the boundary measurement, to one of the trivalent networks we consider in this section.

Given such a trivalent directed network N, to each internal vertex v we assign a 3-dimensional space (R \ {0})^3_v along with a Poisson bracket {·,·}_v. We assign coordinates x^1_v, x^2_v, x^3_v to each (R \ {0})^3_v, with a coordinate corresponding to each edge incident on v as depicted in Figure 4.1. To each boundary vertex v we assign a 1-dimensional space (R \ {0})_v with coordinate x^1_v. We then define R to be the direct sum of the spaces associated to all vertices. The Poisson bracket {·,·}_R is the direct sum of brackets, so that {x, y}_R = 0 whenever x and y are defined on different spaces.

Figure 4.1: Coordinates on (R \ {0})^3_v.

We next define edge weights w_e = x^i_u x^j_v where e = (u, v) and x^i_u, x^j_v are the coordinates corresponding to the edge e in the spaces (R \ {0})^3_u and (R \ {0})^3_v. The space of edge weights E_N is then (R \ {0})^{|E|} and inherits a Poisson bracket {·,·}_N from the pushforward of the weight map w : (R \ {0})^{2|E|} → (R \ {0})^{|E|}. Considering the case when the Poisson brackets {·,·}_v depend only on the type of the vertex, there is a 2-parameter family of Poisson brackets on the space of matrices such that the boundary measurement N → B(N) is a Poisson map [GSV10, Theorem 8.6].
Proceeding in this way Gekhtman, Shapiro, and Vainshtein are able to use directed networks to produce Poisson structures compatible with the standard cluster algebra structure on the Grassmannian [GSV10, Theorem 8.20]. This construction shows a connection between various topics (cluster algebras, log-canonical Poisson brackets, the Grassmannian, and boundary measurement) and provides a partial explanation for their inclusion together in this dissertation. We omit further details of this compatible Poisson structure via networks as details will not be needed in the remaining chapters. 31 Chapter 5 Boundary Measurement This chapter is based on the article [Mac18]. 5.1 Generalized boundary measurement definition The totally nonnegative Grassmannian was defined by Postnikov [Pos06] and can be studied using edge weighted planar graphs embedded on a disk. These edge weighted planar graphs and the totally nonnegative Grassmannian are connected to the physics of scattering ampli- tudes and N = 4 super Yang-Mills [AHBC+16]. In the context of physics, the edge weighted planar graphs are usually called “on-shell diagrams.” A key element of Postnikov’s study of the totally nonnegative Grassmannian is the boundary measurement map which produces an element of the totally nonnegative Grassmannian for any edge weighted directed graph embedded in the disk. Under a mild hypothesis on the graph, Talaska [Tal08] gives a formula for the Pl¨ucker coordinates of the element of the totally nonnegative Grassmannian corre- sponding to a given graph. In [FGM14, FGPW15] a boundary measurement map for graphs on more general surfaces is proposed with the hopes of going beyond the “planar limit” of N = 4 super Yang-Mills. The definition of the boundary measurement map will be given later in this section, and in defining the boundary measurement we must make a choice of how to represent our directed graph in the plane. The boundary measurement map turns out to be independent 32 of this choice as we will see in Section 5.2. We will show in Section 5.3 how boundary measurement map can be obtained by signing the edges of a directed graph. This technique of signing edges will allow us to unify two formulas of Talaska [Tal08, Tal12]. A formula for the Pl¨ucker coordinates corresponding to the boundary measurement map is given in Section 5.4. In Section 5.5 we will show that the signs used in Section 5.3 are unique up to the gauge action. 5.1.1 Weighted Path Matrices Let N = (V, E) be a directed graph with finite vertex set V and finite edge set E. This means an edge e ∈ E is an ordered pair e = (i, j) for i, j ∈ V . If e = (i, j) then the edge e is said to be directed from vertex i to vertex j. For each edge e ∈ E of N we associate a formal variable xe. We will work in R[[xe : e ∈ E]] the ring of formal power series in the variables {xe}e∈E with coefficients in R. Recall, as in [Pos06], we will use the term directed network N = (V, E) to refer to the directed graph N = (V, E) along with edge weights {xe}e∈E. A path is a finite sequence of edges P = (e1, e2,··· , el) where ek = (ik−1, ik) for 1 ≤ k ≤ l. If P = (e1, e2,··· , el) where e1 = (i0, i1) and el = (il−1, il), then P is said to be a path from i0 to il. The path P is said to be self avoiding if ik (cid:54)= ik(cid:48) for k (cid:54)= k(cid:48). The path P is called a cycle if i0 = il, and we say the cycle is a simple cycle when ik = ik(cid:48) if and only if k = k(cid:48) or {k, k(cid:48)} = {0, l}. We use the notation P : i (cid:32) j to denote a path from i to j. 
When P = (e_1, e_2, ··· , e_l) we let

wt(P) = ∏_{i=1}^{l} x_{e_i}

denote the weight of the path P. We order our vertex set V and consider the V × V weighted path matrix M = M(N, {x_e}_{e∈E}) with entries given by

M_{ij} = Σ_{P : i ⇝ j} wt(P)

for all (i, j) ∈ V × V.

We let C(N) denote the set of all collections C which consist of simple cycles that are pairwise vertex disjoint. For C ∈ C(N) we define its weight as

wt(C) = ∏_{C∈C} wt(C)

and its sign as sgn(C) = (−1)^{|C|}, where |C| denotes the number of cycles in the collection C. The empty collection ∅ is in C(N) with wt(∅) = 1 and sgn(∅) = 1.

We let S_n denote the symmetric group on [n] = {1, 2, . . . , n} and consider elements π ∈ S_n as bijections π : [n] → [n]. For π ∈ S_n and any I, J ⊆ V with I = {i_1 < i_2 < ··· < i_n} and J = {j_1 < j_2 < ··· < j_n} we let P_{I,J,π} denote the set of collections P = (P_1, P_2, . . . , P_n) such that P_k : i_k ⇝ j_{π(k)} is self avoiding for each k ∈ [n], and P_k and P_{k'} are vertex disjoint whenever k ≠ k'. For P ∈ P_{I,J,π} we define its weight as

wt(P) = ∏_{P∈P} wt(P)

and its sign as sgn(P) = sgn(π). Note if π(k) = k we can have P_k : i_k ⇝ i_k be the empty path P_k from i_k to i_k consisting of no edges, and in this case wt(P_k) = 1. We then let F_{I,J}(N) denote the collection of flows from I to J. A flow from I to J is a pair F = (P, C) such that P ∈ P_{I,J,π} for some π ∈ S_n, C ∈ C(N), and all paths in P and cycles in C are pairwise vertex disjoint. For F ∈ F_{I,J}(N) with F = (P, C) we define its weight as wt(F) = wt(P) wt(C) and its sign as sgn(F) = sgn(P) sgn(C). Talaska's formula [Tal12] states

∆_{I,J}(M) = ( Σ_{F∈F_{I,J}(N)} sgn(F) wt(F) ) / ( Σ_{C∈C(N)} sgn(C) wt(C) )     (5.1)

where ∆_{I,J}(M) denotes the minor of M with rows indexed by I and columns indexed by J. Equation (5.1) generalizes the Lindström-Gessel-Viennot lemma [Lin73, GV85], which only applies to directed networks without directed cycles. Fomin also provides a generalization of the Lindström-Gessel-Viennot lemma which allows for directed cycles [Fom01], where the sum is indexed by a minimal, but infinite, collection of paths.

5.1.2 Boundary Measurement Matrices

Now consider the directed network N = (V, E) embedded in a closed orientable surface with boundary S. We call a vertex on the boundary of S a boundary vertex and an edge which is incident on a boundary vertex an external edge. We assume each boundary vertex is either a source or sink and that edges are embedded as smooth curves. Let K ⊆ V be the collection of boundary vertices. Let b denote the number of boundary components of S and assume each boundary component is a smooth curve diffeomorphic to a circle. We make b − 1 cuts between pairs of boundary components on the surface S to obtain a new surface T with a single boundary component. The cuts are made such that each cut is a smooth curve, no cut intersects any vertex of N, and cuts intersect edges of N transversally. The boundary ∂T is then a piecewise smooth curve homeomorphic to a single circle. We choose a piecewise smooth parameterization φ : [0, 1] → ∂T with φ(0) = φ(1) (i.e. φ : S^1 → ∂T).
Throughout we will assume all parameterizations are piecewise smooth with nowhere zero derivative. We order the boundary vertices so that they appear in order when traversing ∂T according to φ. Thus we have a linear ordering of the vertices in K which we denote by < so that K = {i1 < i1 < ··· < in} with ij = φ(tj) for 0 ≤ t1 < t2 < ··· < tn < 1. The linear ordering of the boundary vertices induces an ordering on the set of edges incident on some external edge as demonstrated in Figure 5.1. We also have a cyclic ordering which we denote ≺. For i, j, k ∈ K we write i ≺ j ≺ k if i < j < k, k < i < j, or j < k < i. When S is a closed orientable surface with boundary of genus g = 0 any network embed- ded on S can be drawn in the plane. In order to draw the directed network in the plane we must choose a boundary component of S called external and identify this external boundary component with a circle bounding a disk in the plane. We then draw the directed network 36 inside this disk. In Section 5.2 we will show that this choice of external boundary component does not have an impact on our results. Consider a network N on S embedded in the plane and overlay the cuts used to construct T . We will make use of both S and T . For any path P : i (cid:32) j where (i, j) ∈ I × K we form a closed curve C(P ) in the plane as follows: 1. Traverse the path P from i to j in S. 2. Follow the boundary of T in our specified direction from j to i. We want C(P ) to be a smooth curve. Since we have assumed that all edges and boundary components are smooth curves the curve C(P ) will be piecewise smooth. In order to work with a smooth curve we will approximate C(P ) by a smooth curve at cut points and around each vertex as in Figure 5.2. We will make no distinction between C(P ) and the smooth curve we approximate it by, and in some cases we may draw a piecewise smooth curve in place of a smooth curve. Given any smooth closed curve in the plane define its rotation number to be the degree of the map T ◦ ψ : S1 → S1 where ψ : S1 → C is a parameterization of C and T : C → S1 gives the unit tangent vector of each point. The choice of which smooth curve is used as an approximation will not effect the rotation number. Now consider the case where S is a closed orientable surface with boundary S of genus g > 0. Similarly to the genus zero case, we want to construct a closed curve in the plane for each path P : i (cid:32) k where (i, k) ∈ I × K. We choose generators of the first homology group for the underlying closed surfaced without boundary. The choice of homology generators does not affect our results as we will see in Section 5.2. The homology generators are chosen so that they do not intersect any vertices of N and so that all intersections with edges of N are transversal. Also, the homology generators are chosen so that they intersect transversally with the cuts used to form T . We then consider the punctured fundamental polygon of S 37 Figure 5.3: On the left we have a punctured fundamental polygon for S a torus with two boundary components with the cut between boundary components shown as a dashed line. On the right we have a closed curve around the boundary of T drawn inside the fundamental domain. which is the usual fundamental polygon of the underlying closed surface without boundary where the sides of the polygon correspond to homology generators, but we must remove some number of disks to create the boundary of the surface. 
The punctured fundamental polygon has 4g sides with each corresponding to a homology generator, and when sides corresponding to the same homology generator are identified we obtain the surface S. Note no vertex appears on any side of the punctured fundamental polygon and no edge or cut ever runs parallel to any side of the punctured fundamental polygon. The punctured fundamental polygon represents a fundamental domain of our surface. See Figure 5.3 for an example of a punctured fundamental polygon. When we have a network N on S with genus g > 0 we draw N in the plane inside a single fundamental domain of S and overlay the cuts used to construct the surface T . Given any path P : i (cid:32) j for (i, j) ∈ I × K we form a closed curve C(P ) in the plane, similarly to the genus zero case, by first traversing the path P from i to j and then following the boundary of T in our specified order from j to i. However, each time the closed curve leaves the chosen fundamental domain we connect the exit and entry points by following the sides of the punctured fundamental polygon clockwise from the exit point to the entry point. 38 Let I ⊆ K be the collection of boundary vertices which are sources. We consider the I × K matrix A = A(N,{xe}e∈E) with entries given by Aij = Mij for all (i, j) ∈ I × K. So, A is obtained from M by restricting to rows I and columns K. We also consider the I × K boundary measurement matrix B = B(N,{xe}e∈E) with entries given by (cid:88) P :i(cid:32)j Bij = (−1)sij +rP +1 wt(P ) for all (i, j) ∈ I × K. Here sij denotes the number of elements of I strictly between i and j with respect to <, and rP denotes the rotation number of C(P ). This definition of the boundary measurement matrix for any closed orientable surface with boundary S is due to Franco, Galloni, Penante, and Wen [FGPW15]. Postnikov [Pos06] gave the original definition on the boundary measurement matrix in the case where the surface is a disk. The boundary measurement matrix was considered for networks on the annulus by Gekhtman, Shapiro, and Vainshtein [GSV08] and for networks on any closed orientable genus zero surface with boundary by Franco, Galloni, and Mariotti [FGM14]. Consider specializing the formal variables xe to real weights. Notice the boundary mea- surement matrix is then a real |I| × |K| matrix of rank |I|. Hence, for any directed network N and choice of real weights, the boundary measurement matrix B(N ) describes an ele- ment of the real Grassmannian Gr(|I|,|K|). This association of a directed network with real weights to an element of the Grassmannian is the known as the boundary measurement map. One feature of Postnikov’s boundary measurement map applied to a directed network N embedded in the disk is that when the edge weights are positive real numbers, the boundary measurement matrix B(N ) represents an element of the totally nonnegative Grassmannian. The totally nonnegative Grassmannian is defined to be elements of the Grassmannian such 39 that all Pl¨ucker coordinates are nonnegative or nonpositive. That is, elements of the Grass- mannian that can be represented by a matrix where each maximal minor is nonnegative. When S is a disk it is shown in [Pos06] that the maximal minors of B(N ) are subtraction- free rational expressions in the edge weights. We call a directed network N perfectly oriented if each boundary vertex is a univalent source or sink and each interior vertex is trivalent and neither a source nor sink. 
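The characterization of the totally nonnegative Grassmannian by maximal minors is straightforward to test numerically. The sketch below is an illustration only; the 2 × 4 matrix is an arbitrary example rather than one produced by a boundary measurement.

```python
import itertools
import numpy as np

def plucker_coordinates(A):
    """All maximal minors of a full-rank k x n matrix A, indexed by the chosen column sets."""
    k, n = A.shape
    return {J: np.linalg.det(A[:, J]) for J in itertools.combinations(range(n), k)}

def is_totally_nonnegative_point(A, tol=1e-9):
    """True if the maximal minors of A are all >= 0 or all <= 0."""
    vals = np.array(list(plucker_coordinates(A).values()))
    return bool(np.all(vals >= -tol) or np.all(vals <= tol))

A = np.array([[1.0, 0.0, -1.0, -2.0],
              [0.0, 1.0,  1.0,  1.0]])
print(plucker_coordinates(A))            # every 2 x 2 minor of this matrix is positive
print(is_totally_nonnegative_point(A))   # True: A represents a point of the totally nonnegative Gr(2, 4)
```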
When a network is perfectly oriented, the interior vertices are of one of two types. We distinguish the two types of interior vertices by coloring each interior vertex white or black. White vertices have one incoming edge and two outgoing edges, and black vertices have two incoming edges and one outgoing edge. For an example of a perfectly oriented network see Figure 5.6. Remark 5.1.1. In [Pos06] it is shown how to transform any directed network N on the disk to a perfectly oriented network N(cid:48) so that the boundary measurement matrix B(N ) is a specialization of the boundary measurement matrix of B(N(cid:48)). All transformations needed take place locally around a vertex, and hence will work on more general surfaces. If N is perfectly oriented Talaska [Tal08] gives the following formula (cid:80) (cid:80) F∈FI,J (N ) wt(F) C∈C(N ) wt(C) ∆I,J (B) = (5.2) where J ⊆ K with |I| = |J|. We notice Equation (5.2) is very similar to Equation (5.1) even though they describe minors of different matrices. In Theorem 5.3.3 we will show that the boundary measurement matrix B can be obtained from A by a simple change of variables which explains the similarity of the formulas. This theorem will also allow us to prove the following conjecture. Conjecture 5.1.2 ([FGPW15]). If N = (V, E) is a perfectly oriented network embedded on 40 a closed orientable surface with boundary, then for any J ⊆ K with |I| = |J| (cid:80) (cid:80) F∈FI,J (N ) σ(F) wt(F) C∈C(N ) σ(C) wt(C) ∆I,J (B(N,{xe}e∈E)) = for some σ : FI,J (N ) ∪ C(N ) → {±1}. Our main result is that Conjecture 5.1.2 is true. It follows from Equation (5.1) and Theorem 5.3.3 which will be proven in the Section 5.3. Corollary 5.4.2 gives a formula for the maximal minors of the boundary measurement matrix where we explicitly describe the sign function σ in Conjecture 5.1.2. Recall, if we specialize are formal variables to take real values, the boundary measurement matrix represents an element of the real Grassmannian. In this context Conjecture 5.1.2 and Corollary 5.4.2 are formulas for the Pl¨ucker coordinates of this element of the Grassmannian. 5.2 Boundary Measurement Independence Given a directed network N embedded on a closed orientable surface with boundary S, we must make some choices when computing the boundary measurement matrix B(N ). The first choice we must make is how to place the cuts on the surface S to obtain the surface T with a single boundary component. The boundary measurement does depend on this choice. For example, boundary measurement matrices are (cid:20) (cid:21) B(N ) = 1 x for the directed networks in Figure 5.4. (cid:20) (cid:21) 1 −x B(N(cid:48)) = Another choice we must make when computing the boundary measurement is how to 41 Figure 5.4: Two networks on the annulus each with a different choice of cut. represent the closed orientable surface with boundary in the plane. For genus g = 0, we make a choice of which boundary component corresponds to the circle bounding the disk we draw our network inside. For genus g > 0, we choose a fundamental domain. In this section we will show that the boundary measurement does not depend on how we represent the surface in the plane. Let ψ : S1 → R2 define a smooth closed curve C. When ψ(t1) = ψ(t2) for t1 (cid:54)= t2 we call ψ(t1) a self intersection point of C. If ψ(t1) is a self intersetion point of C such that there exists a unique t2 (cid:54)= t1 with ψ(t1) = ψ(t2) and {ψ(cid:48)(t1), ψ(cid:48)(t2)} are linearly independent, we then call the self intersection point ψ(t1) simple. 
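The rotation number entering the sign (−1)^(sij + rP + 1) can be approximated numerically from any parameterization of a smooth closed curve. The sketch below is an illustration on standard test curves rather than on curves of the form C(P); it estimates the total turning of the tangent vector and divides by 2π.

```python
import numpy as np

def rotation_number(x, y, n=4000):
    """Approximate the rotation number of a smooth closed curve t -> (x(t), y(t)), t in [0, 1)."""
    ts = np.linspace(0.0, 1.0, n, endpoint=False)
    px, py = x(ts), y(ts)
    # tangent directions from centered differences around the closed loop
    vx = np.roll(px, -1) - np.roll(px, 1)
    vy = np.roll(py, -1) - np.roll(py, 1)
    theta = np.arctan2(vy, vx)
    # total turning: sum of angle increments, each wrapped to (-pi, pi]
    dtheta = np.diff(np.append(theta, theta[0]))
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi
    return dtheta.sum() / (2 * np.pi)

# a circle traversed once counterclockwise has rotation number 1 (odd, with 0 self intersections)
print(round(rotation_number(lambda t: np.cos(2 * np.pi * t), lambda t: np.sin(2 * np.pi * t))))
# a figure-eight has rotation number 0 (even, with 1 self intersection)
print(round(rotation_number(lambda t: np.cos(2 * np.pi * t), lambda t: np.sin(4 * np.pi * t) / 2)))
```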
A smooth curve whose only self intersection points are simple is called normal. The rotation number of a normal curve differs in parity from the number of self intersections. This was proven by Whitney in [Whi37] where it is also proven that any smooth curve can be transformed into a normal curve by small deformations without changing the curves rotation number. When drawing closed curves inside a fundamental domain we sometimes may not connect exit and entry points along the sides of the punctured fundamental polygon, but rather draw some curve in the interior or exterior of the punctured fundamental polygon which has the same rotation number as the curve following the sides of the polygon. This will be done to simplify the drawing of the 42 12xN12xN0 curve and in some cases will be necessary to transform the curve into a normal curve. Observe for any closed curve C on a closed orientable surface with boundary S, we can construct a closed curve in the plane in the same way we do for the closed curves which come from paths in our oriented network. Our next lemma will consider an arbitrary closed curve C on S. Given some representation of our surface in the plane, we will let ˆC denote the corresponding closed curve in the plane. Also, in the proof of the lemma we will consider a lift C(cid:48) of the closed curve C to the universal cover of S when the surface S has genus g > 0. Recall that each time a closed curve C leaves the fundamental domain, we connected the exit and entry points along the boundary of the punctured fundamental polygon when constructing the closed curve ˆC which lives in a single fundamental domain. When doing this the tangent vector will make exactly one complete rotation. To account for this we construct another curve C(cid:48)(cid:48) on the universal cover of S. The curve C(cid:48)(cid:48) agrees with the curve C(cid:48) except we add a loop each time it crosses a homology generator. See Figure 5.5 for an example of C, ˆC, C(cid:48) and C(cid:48)(cid:48). Lemma 5.2.1. Let C be a closed curve on a closed orientable surface with boundary S and let ˆC be the closed curve corresponding to C for some choice of representation of S in the plane. The parity of the rotation number of ˆC does not depend on the choice of representation of S in the plane. Proof. Recall, the parity of the rotation number of a closed curve in the plane depends only on the number of self intersections of the curve. For the case genus g = 0, it is clear the number of self intersections of ˆC does not depend of the representation of S in the plane. Now consider the case genus g > 0. We denote the rotation number of ˆC by r. We let C(cid:48) be a lift of C to the universal cover of S. The universal cover is homeomorphic to R2. 43 Figure 5.5: An example of a closed curve C on the torus and the corresponding curves ˆC, C(cid:48) and C(cid:48)(cid:48). 44 CˆCC0C00 Let h denote the number of times C intersects any homology generator. Notice the parity of h is determined by the homology class of C, and hence the parity of h is independent of the choice of fundamental domain used to represent S in the plane. If C is null homologous, then C(cid:48) is a closed curve in R2. If C is not null homologous, then C(cid:48) is not a closed curve in R2. However, we can still define the rotation number of C(cid:48) since the unit tangent vector at the starting point and ending point of C(cid:48) will be the same. In any case, let r(cid:48) denote the rotation number of C(cid:48). 
Notice each time C intersects any homology generator, the tangent vector to the curve ˆC we make a complete rotation in the clockwise direction on the portion of the curve which connects the entry and exit points of the fundamental domain. We can modify the curve C(cid:48) by adding a small loop in the clockwise direction each time C(cid:48) intersects a lift of a homology generator. We let C(cid:48)(cid:48) denote this modified curve. We can construct C(cid:48)(cid:48) such that there is then a map φ : C(cid:48)(cid:48) → ˆC such that T = T ◦ φ. Let r(cid:48)(cid:48) denote the rotation number of C(cid:48)(cid:48) It then follows that r = r(cid:48)(cid:48) and that r(cid:48)(cid:48) = r(cid:48) + h. Therefore r = r(cid:48) + h, and the parity of the rotation number of ˆC is independent of how S is represented in the plane. Theorem 5.2.2. The boundary measurement of a directed network N on a closed orientable surface with boundary S is independent of how we represent S in the plane. Proof. The only part of the boundary measurement matrix that depends on the representaion of S in the plane is the rotation numbers rP of the closed curves C(P ) which correspond to paths P in N . In fact, the boundary measurement matrix only depends on the parity of rP . Therefore, the theorem then follows immediately from Lemma 5.2.1. We have also made a choice to connect exit and entry points of a closed curve along the sides of the punctured fundamental polygon in the clockwise direction. Observe the 45 Figure 5.6: A perfectly oriented network on the disk. proof of Lemma 5.2.1 can easily be modified if we chose to connect the exit points in the counterclockwise direction. 5.3 Signing Perfectly Oriented Networks In this section we prove a theorem which shows a relationship between the weighted path matrix and the boundary measurement matrix. We show the boundary measurement matrix is the weighted path matrix with some edge weights thought of as negative. That is, by replacing xe with −xe for some edges in A(N,{xe}e∈E) we obtain B(N,{xe}e∈E). We first look at an example of signing the edges of a network. Let N be the network in Figure 5.6. Here the boundary vertices are labeled to respect the usual ordering of the natural numbers so that 1 < 2 < 3 < 4. The weighted path matrix 46 1234x1x2x3x4x5x6x7x8 and boundary measurement matrix for N are 1 1 0 A = B = x1x5x2 1 − x5x6x7x8 x3x7x8x5x2 1 − x5x6x7x8 x1x5x2 1 + x5x6x7x8 0 x1x5x6x7x4 1 − x5x6x7x8 x3x7x4 1 1 − x5x6x7x8 0 −x1x5x6x7x4 1 + x5x6x7x8 0 x3x7x8x5x2 1 + x5x6x7x8 1 x3x7x4 1 + x5x6x7x8   respectively. Notice that B can be obtained from A by replacing x6 with −x6. Theorem 5.3.3 shows that when N is perfectly oriented B can always be obtained from A by a change a variable which gives each edge of N a sign. However, there is not a unique way to obtained B from A. For example, replacing x2 and x5 with −x2 and −x5 respectively is another possibility. Theorem 5.5.1 characterizes all possible ways to sign the edges of N . Before stating the main theorem of this section we prove two lemmas which will be needed. Lemma 5.3.1. Let N be a directed network embedded on a closed surface with boundary S and let T be the surface obtained after making cuts. If rT is the rotation number of the closed curve which following the boundary of T in a chosen fundamental domain, then rT ≡ 1 (mod 2). Proof. For genus g = 0 it is clear that rT ≡ 1 (mod 2). 
For genus g > 0 we can choose In this case it is homology generators so that they do not intersect the boundary of T . again clear the rT ≡ 1 (mod 2). The general case for genus g > 0 then follows from Lemma 5.2.1. For any path P : i (cid:32) j we can form the closed curve C(cid:48)(P ) by traversing the path P from 47 i to j and then following the boundary of T from j to i opposite to our choosen direction. We let r(cid:48)P denote the rotation number of C(cid:48)(P ). Our next lemma shows that we can use r(cid:48)P in place of rP and the boundary measurement matrix will not change. Lemma 5.3.2. Let N be a directed network embedded on a closed surface S with boundary and P is a path in N , then r(cid:48)P ≡ rP (mod 2). Proof. Let P : i (cid:32) j be a path from i to j in N . We claim rP − r(cid:48)P = rT ± 1. To see this draw C(P ) and C(cid:48)(P ) together in the same fundamental domain. We then reverse the direction of C(cid:48)(P ) and observe that we traverse the boundary of T once and also traverse the path P once from i to j as well as once in reverse from j to i. Hence we can compute rP − r(cid:48)P by considering the rotation number of the closed curve obtained by first traversing P , then traversing the boundary of T , and finally traversing the path P in reverse. Thus rP − r(cid:48)P = rT ± 1 and it follows by Lemma 5.3.1 that r(cid:48)P ≡ rP (mod 2). So, Lemma 5.3.2 shows that the direction in which we parameterize the boundary of S does not affect the boundary measurement. We now state and prove our theorem on signing edges. Theorem 5.3.3. If N = (V, E) is a perfectly oriented network embedded on a closed ori- entable surface with boundary, then there exists a collection {e}e∈E ∈ {±1}E such that B(N,{xe}e∈E) = A(N,{exe}e∈E). Proof. Let N be a perfectly oriented network with vertex set V and edge set E. To show B(N,{xe}) = A(N,{exe}) it suffices to show that the path P : i (cid:32) j for any (i, j) ∈ I × K 48 has the following property: wt(P )|{exe}e∈E = (−1)sij +rP +1 wt(P ) ((cid:5)) When this is true for a choice of signs {e}e∈E we will say the path P has property ((cid:5)). We assume the network has at least one boundary source, or else the in no boundary measure- ment matrix. Recall that K denotes the set of boundary vertices of N and we have an ordering of the boundary vertices. We fix the following notation, if j ∈ K is a boundary vertex we let . It can so ej denote the unique external edge which is incident on j and write j for ej happen that ej1 = ej2 for j1 (cid:54)= j2, in this case we will consider distinct signs j1 and j2 on half edges with the sign on the edge being the product j1 j2 . We induct on the number of interior vertices. If there are no interior vertices, then the result is true since each path consists of a single edge. For the inductive step we chose any boundary source i0 and construct a network ˜N with one fewer interior vertex. The edge set of ˜N will be denoted ˜E. We will inductively chose so that each path in ˜N has property ((cid:5)) and show how to modify these signs signs {˜e}e∈ ˜E to give a collection {e}e∈E so that each path in the N has property ((cid:5)). Recall that for two boundary vertices i and j of N we let sij denote the number of boundary sources strictly between them in N . For two boundary vertices i and j of ˜N we let ˜sij denote the number of boundary sources strictly between them in ˜N . The inductive step falls into one of three cases depending on the boundary source i0 and its unique neighboring vertex. 
If i0 is adjacent to a white vertex with outgoing edges e(cid:48) and e(cid:48)(cid:48) we then remove the white vertex and split i0 into two boundary sources i(cid:48)0 < i(cid:48)(cid:48)0 as shown in Figure 5.7. Choose signs 49 Figure 5.7: Splitting a white vertex {˜e} for the edges of ˜N by induction so that all paths in ˜N have property ((cid:5)). We define the signs {e} as follows j = ˜j for j ∈ K with j < i0 e(cid:48) = ˜e(cid:48) = +1 i0 e(cid:48)(cid:48) = −˜e(cid:48)(cid:48) j = −˜j e = ˜e for j ∈ K with j > i0 otherwise and now verify the collection of signs {e} are valid. Consider a path P : i (cid:32) j in N with i (cid:54)= i0. The path P corresponds to a path ˜P : i (cid:32) j in ˜N with rP = r ˜P . If i, j < i0, then sij = ˜sij and P has property ((cid:5)) since the modification does not introduce any sign change to P . If i < i0 < j or j < i0 < i, then sij = ˜sij − 1 and P has property ((cid:5)) since the modification introduces one sign change to P . If i, j > i0, then sij = ˜sij and P has property ((cid:5)) since the modification introduces two sign changes to P . Next consider a path P : i0 (cid:32) j in N . The path P corresponds either to a path (cid:32) j. First consider the case P corresponds to ˜P(cid:48). If j < i0, then (cid:32) j or ˜P(cid:48)(cid:48) : i(cid:48)(cid:48)0 ˜P(cid:48) : i(cid:48)0 sij = ˜sij and P has property ((cid:5)) since the modification does not introduce any sign change to 50 i0wi0we0we00Ni00i000we0we00˜N Figure 5.8: Splitting a black vertex in the case that i(cid:48)0 is a sink and i(cid:48)(cid:48)0 is a source P . If i0 < j, then sij = ˜sij − 1 and P has property ((cid:5)) since the modification introduces one sign change to P . Next consider the case P corresponds to ˜P(cid:48)(cid:48). If j < i0, then sij = ˜sij − 1 and P has property ((cid:5)) since the modification introduces one sign change to P . If i0 < j, then sij = ˜sij and P has property ((cid:5)) since the modification introduces two sign changes to P . Therefore the signs {e} are valid in this case. If i0 is adjacent to a black vertex we then remove the black vertex and split i0 into two boundary vertices i(cid:48)0 < i(cid:48)(cid:48)0 one of which will be a sink and the other of which will be a source. We now consider the case where i(cid:48)0 is a sink and i(cid:48)(cid:48)0 is a source as shown in Figure 5.8. Choose signs {˜e} for the edges of ˜N by induction so that all paths in ˜N have property ((cid:5)). We define the signs {e} as follows e(cid:48) = −˜e(cid:48) = +1 i0 e = ˜e, otherwise and now verify the collection of signs {e} as defined satisfy our rule. Consider a path P : i (cid:32) j in N with i (cid:54)= i0. If P does not use the edge e(cid:48), then P corresponds to a path ˜P : i → j in ˜N with rP = r ˜P and sij = ˜sij. If this is the case, then it is clear P has property ((cid:5)). Otherwise P traverses the edge e(cid:48) some number of times. 51 i0wi0we0we00Ni00i000we0we00˜N Let l + 1 be the number of times P traverses e(cid:48) for l ≥ 0. The path P corresponds to the concatenation of paths ˜P(cid:48) : i (cid:32) i(cid:48)0, ˜Pk : i(cid:48)(cid:48)0 (cid:32) j. Now the sign of the product of the weights of these paths in ˜N is (cid:32) i(cid:48)0 for 1 ≤ k ≤ l, and ˜P(cid:48)(cid:48) : i(cid:48)(cid:48)0 (cid:80)l k=1(r ˜Pk (−1) ˜s ii(cid:48)0 +r ˜P(cid:48) +1 (−1) +1) ˜s i(cid:48)(cid:48)0j (−1) +r ˜P(cid:48)(cid:48)+1 since ˜s i(cid:48)0i(cid:48)(cid:48)0 = 0. 
The sign of the path P in N will be (cid:80)l k=1(r ˜Pk ˜s ii(cid:48)0 (−1) +r ˜P(cid:48) +1 (−1) +1) ˜s i(cid:48)(cid:48)0 j (−1) +r ˜P(cid:48)(cid:48) +1 (−1)l+1 since we pick up an addition factor of −1 each time we traverse e(cid:48). Simplifying the sign of P is +1+r ˜P(cid:48) +(cid:80)l k=1 r ˜Pk +r ˜P(cid:48)(cid:48) . ˜s ii(cid:48)0 +˜s i(cid:48)(cid:48)0j (−1) We observe that sij = ˜s ii(cid:48)0 + ˜s i(cid:48)(cid:48)0 j + 1 if i ≺ i0 ≺ j if j ≺ i0 ≺ i, and so the sign of P is Finally we observe that sij = ˜s ii(cid:48)0 + ˜s i(cid:48)(cid:48)0 j (−1) (−1) sij +r ˜P(cid:48) +(cid:80)l sij +1+r ˜P(cid:48) +(cid:80)l rP + 1 ≡ r ˜P(cid:48) +(cid:80)l rP ≡ r ˜P(cid:48) +(cid:80)l k=1 r ˜Pk k=1 r ˜Pk +r ˜P(cid:48)(cid:48) k=1 r ˜Pk +r ˜P(cid:48)(cid:48) if i ≺ i0 ≺ j if j ≺ i0 ≺ i. k=1 r ˜Pk + r ˜P(cid:48)(cid:48) (mod 2) + r ˜P(cid:48)(cid:48) (mod 2) if i ≺ i0 ≺ j if j ≺ i0 ≺ i and it follows that P has property ((cid:5)). See Figure 5.9 for the case of the disk. More generally when the surface is not the disk the boundary will still be a circle and Lemma 5.3.1 shows 52 Figure 5.9: Closing paths in ˜N . The case i ≺ i0 ≺ j is shown on the right while the case j ≺ i0 ≺ i is shown on the left. that the rotation number of traversing the boundary will always be odd, and hence can be thought of as shown in Figure 5.9. Now consider a path P : i0 (cid:32) j in N . If P does not use the edge e(cid:48), then P corresponds and sij = ˜s i(cid:48)(cid:48)0j to a path ˜P : i(cid:48)(cid:48)0 → j in ˜N with rP = r ˜P P has property ((cid:5)). Otherwise P traverses the edge e(cid:48) some number of times. Let l be the number of times P traverses e(cid:48) for l > 0. In this case P corresponds to the concatenation (cid:32) j. Now the sign of the product of the of paths ˜Pk : i(cid:48)(cid:48)0 (cid:32) i(cid:48)0 for 1 ≤ k ≤ l, and ˜P(cid:48)(cid:48) : i(cid:48)(cid:48)0 . If this is the case, then it is clear 53 ii00i000j˜P0˜P00˜Ni≺i0≺jii0jPNji00i000i˜P0˜P00˜Nj≺i0≺iji0iPN weights of these paths in ˜N is (cid:80)l k=1(r ˜Pk (−1) +1) ˜s i(cid:48)(cid:48)0j (−1) +r ˜P(cid:48)(cid:48)+1 since ˜s i(cid:48)0i(cid:48)(cid:48)0 = 0. The sign of the path P in N will be (cid:80)l k=1(r ˜Pk (−1) +1) ˜s i(cid:48)(cid:48)0 j (−1) +r ˜P(cid:48)(cid:48) +1 (−1)l since we pick up an addition factor of −1 each time we traverse e(cid:48). Simplifying, the sign of P is k=1 r ˜Pk +r ˜P(cid:48)(cid:48) +1 sij +(cid:80)l (−1) since sij = ˜s i(cid:48)(cid:48)0j . The equality implies that P has property ((cid:5)). l(cid:88) k=1 rP = r ˜Pk + r ˜P(cid:48)(cid:48) The final case is again i0 is adjacent to a black vertex, and we remove the black vertex and split i0 into two boundary vertices i(cid:48)0 < i(cid:48)(cid:48)0. This time we consider the case where i(cid:48)0 is a source and i(cid:48)(cid:48)0 is a sink as shown in Figure 5.10. This case will be identical to the previous case of splitting a black vertex, after applying Lemma 5.3.2 and forming closed curves in the opposite direction, with the subcases i ≺ i0 ≺ j and j ≺ i0 ≺ i reversed. Theorem 5.3.3 need not be true when N is not a perfectly oriented network. See Fig- ure 5.11 for an example of a network for which Theorem 5.3.3 does not hold. The boundary 54 Figure 5.10: Splitting a black vertex in the case that i(cid:48)0 is a source and i(cid:48)(cid:48)0 is a sink Figure 5.11: A network which is not perfectly oriented measurement matrix for the network in Figure 5.11 is 1 x1x2 0 −x1x4  . 
B = 0 x2x3 1 x3x4 The matrix B cannot be obtained from the weighted path matrix for this example. Notice the second column of B would require x1 and x3 receive the same sign, while the fourth column of B would require x1 and x3 receive opposite signs. However, as mentioned in Remark 5.1.1 the network in Figure 5.11 can be transformed to a perfectly oriented network. In this case it turns out the perfectly oriented network we get after the transformation is the network in Figure 5.6 for which we have already seen how to sign the edges. We include Algorithm 1 for finding a signing of edges as in Theorem 5.3.3. Algorithm 1 makes use of 55 i0wi0we0we00Ni00i000we0we00˜N1234x1x2x3x4 helper functions given in Algorithm 2 This recursive algorithm exactly corresponds to the induction used in the proof. Algorithm 1 Signing edges of a perfectly oriented network Require: A perfectly oriented network N = (V, E). function FindSigns(N ) if int(V ) = ∅ then for e = (i, j) ∈ E do e = (−1)sij +re+1 else return {e}e∈E Choose boundary source i0 ∈ I adjacent to some interior vertex. Let e0 = (i0, v0) be the unique edge incident on i0. Let e(cid:48) and e(cid:48)(cid:48) be to two edges different from e0 incident on v0 with e(cid:48) < e(cid:48)(cid:48). ˜N ← Split(N, i0, v0, e0, e(cid:48), e(cid:48)(cid:48)) {˜e}e∈E( ˜N ) ← FindSigns( ˜N ) return ModifySigns(N, i0, v0, e0, e(cid:48), e(cid:48)(cid:48),{˜e}e∈E( ˜N ) ) 5.4 A Formula for Pl¨ucker Coordinates Notice that from Theorem 5.3.3 it follows that Conjecture 5.1.2 is true. We now want to give an explicit formula for the minors of the boundary measurement matrix. In order to do this we must first review some concepts and results that can be found in [Pos06] and [Tal08]. Take I, J ⊆ [n] with |I| = |J|. Let π : I → J be a bijection with π(i) = i for all i ∈ I ∩ J. A pair (i1, i2) ∈ I × I where i1 < i2 is called a crossing of π if the following condition holds (i1 − π(i2))(π(i2) − π(i1))(π(i1) − i2))(i2 − i1) < 0 This condition is equivalent to the chord for i1 to π(i1) crossing the chord from i2 to π(i2) when the elements of [n] are placed in cyclic order on the boundary of a disk. 
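The displayed inequality can be checked mechanically. In the sketch below the bijection π is an arbitrary small example given as a dictionary; the functions simply evaluate the crossing condition and count crossings.

```python
from itertools import combinations

def is_crossing(i1, i2, pi):
    """(i1, i2) with i1 < i2 is a crossing of the bijection pi (given as a dict on I)."""
    return (i1 - pi[i2]) * (pi[i2] - pi[i1]) * (pi[i1] - i2) * (i2 - i1) < 0

def xing(pi):
    """Number of crossings of pi."""
    return sum(1 for i1, i2 in combinations(sorted(pi), 2) if is_crossing(i1, i2, pi))

# with boundary vertices 1, 2, 3, 4 on a disk: the chords 1-3 and 2-4 cross,
# while the chords 1-4 and 2-3 are nested and do not cross
print(xing({1: 3, 2: 4}))   # 1
print(xing({1: 4, 2: 3}))   # 0
```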
Thinking of I as a collection of boundary sources and J as a collection of boundary vertices, the condition 56 Algorithm 2 Helper functions for signing edges of a perfectly oriented network function Split(N ,i0,v0,e0,e(cid:48),e(cid:48)(cid:48)) ˜V ← (V \ {i0, v0}) ∪ {i(cid:48)0, i(cid:48)(cid:48)0} ˜K = (K \ {i0}) ∪ {i(cid:48)0, i(cid:48)(cid:48)0} Order ˜K by i1 < i(cid:48)0 < i(cid:48)(cid:48)0 < i2 for all i1, i2 ∈ K such that i1 < i0 < i2 if e(cid:48) = (v0, x) then ˜e(cid:48) ← (i(cid:48)0, x) if e(cid:48) = (x, v0) then ˜e(cid:48) ← (x, i(cid:48)0) if e(cid:48)(cid:48) = (v0, x) then ˜e(cid:48)(cid:48) ← (i(cid:48)(cid:48)0, x) if e(cid:48)(cid:48) = (x, v0) then ˜e(cid:48)(cid:48) ← (x, i(cid:48)(cid:48)0) ˜E ← (E \ {e0, e(cid:48), e(cid:48)(cid:48)}) ∪ {˜e(cid:48), ˜e(cid:48)(cid:48)} ˜N ← ( ˜V , ˜E) return ˜N function ModifySigns(N, i0, v0, e0, e(cid:48), e(cid:48)(cid:48),{˜e}e∈E( ˜N ) if ˜e(cid:48) = (v0, x) and ˜e(cid:48)(cid:48) = (v0, y) then ) if ˜e(cid:48) = (x, v0) and ˜e(cid:48)(cid:48) = (v0, y) then if ˜e(cid:48) = (v0, x) and ˜e(cid:48)(cid:48) = (y, v0) then i0 ← +1 e(cid:48) ← ˜˜e(cid:48) e(cid:48)(cid:48) ← ˜˜e(cid:48)(cid:48) for j ∈ K with j < i0 do j ← ˜j for j ∈ K with j > i0 do j ← −˜j for All other edges e ∈ E do e ← ˜e i0 ← +1 e(cid:48) ← −˜˜e(cid:48) e(cid:48)(cid:48) ← ˜˜e(cid:48)(cid:48) for All other edges e ∈ E do e ← ˜e i0 ← +1 e(cid:48) ← ˜˜e(cid:48) e(cid:48)(cid:48) ← −˜˜e(cid:48)(cid:48) for All other edges e ∈ E do e ← ˜e 57 Figure 5.12: Crossings on the disk and the annulus. Here boundary vertices are ordered i1 < i2 < π(i1) < π(i2). of being a crossing means that a path P1 : i (cid:32) π(i1) must intersect any path P2 : i2 (cid:32) π(i2) in any network embedded on a disk. However, when our surface is not a disk it can happen that (i1, i2) is a crossing of π but paths P1 : i (cid:32) π(i1) and P2 : i2 (cid:32) π(i2) do not intersect. See Figure 5.12 for a pictorial representation of a crossing on the disk and an example of paths of the annulus which come from a crossing but do not intersect. We let xing(π) denote the number of crossings of π. If |I| = |J| = k then a bijection π : I → J determines a unique permutation π ∈ Sk by standardizing I and J. We let inv(π) denote the number of inversion of π when view as an element of Sk. We let si,j denote the number of elements of I strictly between i and j. Lemma 5.4.1 ([Tal08]). If I, J ⊂ [n] with |I| = |J| and π : I → J is a bijection such that π(i) = i for all i ∈ I ∩ J, then (−1)xing(π) = (−1)inv(π)(cid:89) i∈I si,π(i). (−1) Proof. This is shown during the proof of [Tal08, Proposition 2.12]. We now give our formula for the Pl¨ucker coordinates of the boundary measurement map. 58 i1i2π(i1)π(i2)i1i2π(i1)π(i2) Corollary 5.4.2. If N = (V, E) is a perfectly oriented network embedded on a closed ori- entable surface with boundary, then for any J ⊆ K with |I| = |J| (cid:80) (cid:80) F∈FI,J (N )(−1)c(F) wt(F) C∈C(N ) wt(C) ∆I,J (B(N,{xe}e∈E)) = where c(F) = xing(π) +(cid:80) P∈P(rP + 1) if F = (P, C). Proof. We first take {e}e∈E such that B(N,{xe}e∈E) = A(N,{exe}e∈E) which necessarily exists by Theorem 5.3.3. Using Equation (5.1) we obtain F∈FI,J (N ) sgn(F)(cid:0)(cid:81) (cid:80) C∈C(N ) sgn(C)(cid:0)(cid:81) (cid:80) e∈F e e∈C e (cid:1) wt(F) (cid:1) wt(C) . ∆I,J (B(N,{xe}e∈E)) = number by exactly one, it follows that(cid:81) Since traversing a cycle in a perfectly oriented network will always change the rotation e∈C e = (−1)|C| for any C ∈ C(N ). 
Also, sgn(C) = (−1)|C| for any C ∈ C(N ) and thus sgn(C)  = 1 e (cid:89) e∈C and the denominator in the corollary is correct. sgn(F)(cid:0)(cid:81) (cid:1) = (−1)xing(π)+(cid:80) e∈F e It remains to show the numerator in the corollary is correct. That is we must show P∈P(rP +1) for any F ∈ FI,J (N ). Take F = (P, C) ∈ 59 FI,J (N ) and let π : I → J be the bijection determined by P, then  si,π(i) (−1)rP +1 (cid:89) e∈F sgn(F) e e e∈P  = sgn(P) sgn(C) (cid:89)  (cid:89) (cid:89) = (−1)xing(π)+(cid:80) = (−1)inv(π) = sgn(π) e∈P (−1) i∈I e e (cid:89)  (cid:89) e∈C P∈P P∈P(rP +1) where we have made use of Lemma 5.4.1. In the case our surface S is a disk it is easy to see that the formula in Corollary 5.4.2 contains no negative terms. On the disk for any flow F = (P, C) we must have xing(π) = 0 and rP = ±1 for all P ∈ P. Thus, c(F) is even for any flow F in the disk. Hence we recover Equation (5.2). For more general surfaces we no longer have positivity, for example see Figure 5.4. 5.5 The Gauge Action and Uniqueness of Signs Given a directed network N = (V, E) embedded on a surface the gauge group G = G(N ) := (R∗)int(V ) where int(V ) = V \ K denotes the set of interior vertices of N and R∗ denotes the nonzero real numbers. We also define the weight space X = X (N ) to be the set of all collections {aexe}e∈E where ae ∈ R∗. Notice here to each edge e ∈ E we associate a nonzero real number ae and a formal variable xe. An element of the gauge group g = (gv)v∈int(V ) ∈ G 60 acts on an element of the weight space X = {aexe}e∈E ∈ X as follows g · X = {(g · ae)xe}e∈E where if e = (i, j) then g · ae = g−1 j aegi (with the convention that gi = 1 if i ∈ K is a boundary vertex). It follows that A(N, X) = A(N, g · X) for all g ∈ G and X ∈ X . When X, Y ∈ X (N ) are such that Y = g · X for some g ∈ G(N ) we call X and Y gauge equivalent. Algorithm 3 Finding Gauge Transformation Require: A directed network N = (V, E) embedded on a closed orientable surface with boundary such that each vertex is contained in some path between boundary vertices and X, Y ∈ X (N ) are such that A(N, X) = A(N, Y ). function FindGauge(X,Y ) Let X = {aexe}e∈E and Y = {bexe}e∈E g ← (1)v∈int(V ) O ← int(V ) C ← V \ int(V ) while O (cid:54)= ∅ do Choose (u, v) ∈ E such that (u, v) ∈ C × O g·a(u,v) gv ← b(u,v) C ← C ∪ {v} O ← O \ {v} return g (cid:46) g returned will be such that g · X = Y Theorem 5.5.1. Let N = (V, E) be a directed network embedded on a closed orientable surface with boundary such that every vertex in contained in some path between boundary vertices, then A(N, X) = A(N, Y ) for X, Y ∈ X (N ) if and only if X and Y are gauge equivalent. 61 Proof. Our proof will show that Algorithm 3 returns g ∈ G(N ) such that g · X = Y . Let X = {aexe}e∈E and Y = {bexe}e∈E. First note that Algorithm 3 will always terminate since each vertex of N is contained in some path between boundary vertices and C initially consists of all the boundary vertices. Furthermore, when the algorithm terminates C = V . Also, observe that if v ∈ C ∩ int(V ) at some stage of the algorithm there is a directed path from some boundary vertex to the vertex v passing through only vertices in C. Lastly, we note that at a given stage of the algorithm gv = 1 whenever v (cid:54)∈ C. It suffices to show that at each step of Algorithm 3 we have the following property: g · ae = be for all e ∈ C × C ((cid:63)) Initially C consists of only the boundary vertices and g = (1)v∈int(V ). 
At this stage we have g · ae = ae for all e ∈ E, and ae = be whenever e ∈ C × C by the assumption that A(N, X) = A(N, Y ). So, initially we have property ((cid:63)). We now consider extending the set of vertices C. Suppose we are at some stage of the algorithm where g · ae = be for all e ∈ C × C. Consider (u, v) ∈ C × O and let C(cid:48) = C ∪ {v} and g(cid:48) be such that g(cid:48)v = g · a(u,v)/b(u,v) and g(cid:48)x = gx for x (cid:54)= v. Now we must show for all e ∈ C(cid:48) × C(cid:48) that g(cid:48) · ae = be. We need only consider edges e ∈ C(cid:48) × C(cid:48) incident on v as 62 g(cid:48) · ae = g · ae = be for e ∈ C(cid:48) × C(cid:48) with e not incident on v. First we compute g(cid:48) · a(u,v) = (g(cid:48)v)−1a(u,v)g(cid:48)u b(u,v)a(u,v)g(cid:48)u = = g · a(u,v) b(u,v)(g · a(u,v)) g · a(u,v) = b(u,v) and conclude g(cid:48) · a(u,v) = b(u,v). Consider (w, v) ∈ E such that w ∈ C. We can find paths Pu : iu (cid:32) u and Pw : iw (cid:32) w passing through only vertices of C for iu, iw ∈ I. Choose some path P : v (cid:32) j for j ∈ K so we get paths P1 = Pu(u, v)P : iu (cid:32) j and P2 = Pw(w, v)P : iw (cid:32) j. It then follows that (cid:89) (cid:89) ae = be (cid:89) (cid:89) ae = be e∈P1 e∈P1 e∈P2 e∈P2 and so also (cid:89) e∈P1 g(cid:48) · ae = (cid:89) e∈P1 be Considering ratios we see (cid:16)(cid:81) (cid:16)(cid:81) (cid:17) (cid:17) (g(cid:48) · a(u,v))(cid:0)(cid:81) (g(cid:48) · a(w,v))(cid:0)(cid:81) (cid:1) (cid:1) = e∈P g(cid:48) · ae e∈P g(cid:48) · ae g(cid:48) · ae g(cid:48) · ae e∈Pu e∈Pw (cid:89) e∈P2 g(cid:48) · ae = (cid:89) e∈P2 be. (cid:16)(cid:81) (cid:16)(cid:81) e∈Pu e∈Pw (cid:17) (cid:17) be be (b(u,v))(cid:0)(cid:81) (b(w,v))(cid:0)(cid:81) (cid:1) (cid:1) e∈P be e∈P be 63 and recalling g(cid:48) · ae = be for e ∈ Pu ∪ Pw ⊆ C we can conclude g(cid:48) · a(w,v) = b(w,v) as desired. Consider (v, w) ∈ E such that w ∈ C. We can find paths Pu : iu (cid:32) u and Pw : iw (cid:32) w passing through only vertices of C for iu, iw ∈ I. Choose some path P : w (cid:32) j for j ∈ K so we get paths P1 = Pu(u, v)(v, w)P : iu (cid:32) j and P2 = PwP : iw (cid:32) j. It then follows that (cid:89) (cid:89) ae = be (cid:89) (cid:89) ae = be e∈P1 e∈P1 e∈P2 e∈P2 and so also (cid:89) e∈P1 g(cid:48) · ae = (cid:89) e∈P1 be (cid:89) e∈P2 g(cid:48) · ae = (cid:89) e∈P2 be. Considering ratios we see (cid:16)(cid:81) g(cid:48) · ae e∈Pu (cid:17) (g(cid:48) · a(u,v))(g(cid:48) · a(v,w))(cid:0)(cid:81) (cid:17)(cid:0)(cid:81) (cid:16)(cid:81) (cid:1) e∈P g(cid:48) · ae (cid:17) (cid:16)(cid:81) (b(u,v))(b(v,w))(cid:0)(cid:81) g(cid:48) · ae (cid:17)(cid:0)(cid:81) (cid:16)(cid:81) (cid:1) e∈P be be e∈Pu e∈P g(cid:48) · ae e∈Pw = (cid:1) (cid:1) be e∈Pw e∈P be and recalling g(cid:48) · ae = be for e ∈ Pu ∪ Pw ⊆ C and we can conclude g(cid:48) · a(w,v) = b(w,v) as desired. Therefore property ((cid:63)) extends at each step of Algorithm 3 and the theorem is proven. Theorem 5.5.1 has the following corollary which says that the choice of signs guaranteed by Theorem 5.3.3 is unique up to gauge transformation provided each vertex is contained in some path between boundary vertices. 64 Corollary 5.5.2. If N = (V, E) is a directed network embedded on a closed orientable surface with boundary such that every vertex in contained in some path between boundary vertices and there exists a collections {e}e∈E,{(cid:48)e}e∈E ∈ {±1}E such that B(N,{xe}e∈E) = A(N,{exe}e∈E) and B(N,{xe}e∈E) = A(N,{(cid:48)exe}e∈E) then {e}e∈E and {(cid:48)e}e∈E are gauge equivalent. 
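The gauge action is also easy to experiment with numerically: rescaling weights around interior vertices cancels telescopically along any path between boundary vertices, which is the reason A(N, X) = A(N, g · X). The sketch below uses a small hypothetical network and random real weights.

```python
import random

# hypothetical network: boundary vertices 1, 2 and interior vertices 'a', 'b'
edges = [(1, 'a'), ('a', 'b'), ('b', 'a'), ('b', 2)]
interior = {'a', 'b'}
weights = {e: random.uniform(0.5, 2.0) for e in edges}   # a choice of real edge weights

def gauge(weights, g):
    """Gauge action: for e = (i, j) the weight a_e becomes g_j^{-1} a_e g_i, with g_v = 1 on the boundary."""
    gv = lambda v: g.get(v, 1.0)
    return {(i, j): a / gv(j) * gv(i) for (i, j), a in weights.items()}

def path_weight(weights, path):
    w = 1.0
    for e in path:
        w *= weights[e]
    return w

g = {v: random.uniform(0.5, 2.0) for v in interior}      # an element of the gauge group (R*)^{int(V)}
new = gauge(weights, g)

p = [(1, 'a'), ('a', 'b'), ('b', 2)]                      # a path between boundary vertices
c = [('a', 'b'), ('b', 'a')]                              # a cycle through interior vertices
print(abs(path_weight(weights, p) - path_weight(new, p)) < 1e-12)   # True: boundary-to-boundary weights are unchanged
print(abs(path_weight(weights, c) - path_weight(new, c)) < 1e-12)   # True: cycle weights are unchanged as well
```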
65 Chapter 6 Upper cluster algebras and ground rings This chapter is based on the preprint [BMS18] which is joint work with Eric Bucher and Michael Shapiro. 6.1 Locally isolated and Locally acyclic cluster alge- bras We first review Muller’s notion of locally acyclic cluster algebras [Mul13]. This section closely follows [Mul14] while describing some changes needed to adapt the theory of locally acyclic cluster algebras for ground rings other than ZP. Let (B, x, y) be a seed of rank n and A be a ground ring satisfying (3.1) and (3.2). Denote the corresponding cluster algebra by A = AA(B, x, y) and upper cluster algebra by U = UA(B, x, y). The freezing of A at xn ∈ x is the cluster algebra A† = AA†(B†, x†, y†) defined as follows • The new semifield is P† = P × Z with xn as the generator of the free abelian group Z. The auxiliary addition is extended as (p1xa n) ⊕ (p2xb n) = (p1 ⊕ p2)xmin(a,b) n . 66 • The new ground ring A† ⊆ ZP† is A† = A[x±1 n ]. • The new ambient field is F† = Q(P†, x1, x2, . . . , xn−1) and x† = (x1, x2, . . . , xn−1). • The new coeffients are y† = (y†1, y†2, . . . , y†n−1) where y†i = yix • The new exchange matrix B† is obtained from B by deleting the nth row and column. Bni n . In this setting the upper cluster algebra corresponding to A† will be denoted U†. Notice in a cluster algebra arising from a quiver that a freezing exactly corresponds to replacing the mutable vertex labeled by xn with a frozen vertex. By considering a permutation of indices we can freeze at any xi ∈ x. We can freeze at some subset of cluster variables by iteratively freezing at individual cluster variables. This process is independent of the order of freezing. A freezing A† of A at {xi1 , . . . , xim} ∈ x is called a cluster localization if xi2 ··· xim)−1]. A cover of A is a collection {Ai}i∈I of cluster localizations such A† = A[(xi1 that for any prime ideal P ⊆ A there exists i ∈ I where AiP (cid:40) Ai. Lemma 6.1.1 below is , xi2 not exactly [Mul14, Lemma 1], but follows immediately and will be all that is used for our purposes. Lemma 6.1.1 ([Mul14, Lemma 1]). If A† is a freezing of a cluster algebra A at cluster , . . . , xim)−1] is a cluster variables {xi1 localization. , . . . , xim} and A† = U†, then A† = A[(xi1 , xi2 , xi2 The seed (B, x, y) is called isolated if B is the zero matrix. A cluster algebra defined by an isolated seed is also referred to as isolated. In terms of quivers, isolated means that there are no arrows between mutable vertices. The seed (B, x, y) is said to be acyclic if there are > 0 for 1 ≤ j < (cid:96) and i1 = i(cid:96). A cluster algebra not i1, i2, . . . , i(cid:96) ∈ {1, 2, . . . , n} with Bij+1ij defined by an acyclic seed is called an acyclic cluster algebra. A locally isolated, respectively 67 locally acyclic, cluster algebra is a cluster algebra for which there exists a cover by isolated, respectively acyclic, cluster algebras. In [Mul14] the only ground ring considered is ZP; however, proofs of the following results go through without change for more general ground rings. Lemma 6.1.2 ([Mul14, Lemma 2]). Let {Ai}i∈I be a cover of A. If Ai = Ui for all i ∈ I, then A = U. Proposition 6.1.3 ([Mul14, Proposition 3]). Let A be an isolated cluster algebra. Then A = U. This immediately gives the following result. Theorem 6.1.1. If A is a locally isolated cluster algebra, then A = U. The definition of a locally isolated cluster algebra and Theorem 6.1.1 are not explicitly stated in [Mul14]. 
This is because over the ground ring ZP being locally isolated is equivalent to being locally acyclic as every acyclic cluster algebra over ZP admits a cover by isolated cluster algebras [Mul14, Proposition 4]. The equivalence is not true over other ground rings. In Section 6.2 we give an example of a cluster algebra of geometric type which is locally acyclic over ZP, but for which the cluster algebra and upper cluster algebra do not coincide over a different natural choice of ground ring. We show this example cluster algebra is locally acyclic by using Muller’s Banff algorithm [Mul13, Theorem 5.5]. A pair of vertices (i1, i2) in a quiver Q is called a covering pair if (i1, i2) is an arrow of Q that is not contained in any bi-infinite path of mutable vertices. The notion of covering pair is needed to run the Banff algorithm. The (reduced) Banff algorithm can be found as Algorithm 4. The reduced version of the algorithm deletes vertices rather then freezes them. This makes for a simpler check 68 that a cluster algebra is locally acyclic, but has the down side that it is does not compute the actual cover. Algorithm 4 The (reduced) Banff algorithm to determine if a quiver is locally acyclic. Require: A (mutable part of a) quiver Q0 function Banff(Q0) A ← ∅ B ← {Q0} while B (cid:54)= ∅ do Remove a quiver Q from B if Q is mutation equivalent to an acyclic quiver then else if Q is mutation equivalent to a quiver Q(cid:48) with a covering pair (i1, i2) then A ← A ∪ {Q} Let Qj be the quiver obtained from Q(cid:48) by freezing (deleting) ij for j = 1, 2 B ← B ∪ {Q1, Q2} The algorithm fails else return A (cid:46) A returned will be a finite set of acyclic quivers In the remainder of this section we analyze the proof of [Mul14, Proposition 4] consider conditions which imply a cluster algebra is locally isolated. This allows the application of Theorem 6.1.1 to conclude the cluster algebra is equal to its upper cluster algebra. An index i ∈ {1, 2, . . . , n} is a source in the seed (B, x, y) if Bki ≥ 0 for all 1 ≤ k ≤ n. In this case mutation in the direction i gives (cid:89) xix(cid:48)i = yi yi ⊕ 1 Bki>0 x Bki k + 1 yi ⊕ 1 . (6.1) A key step in showing an acyclic cluster algebra over ZP is covered by isolated cluster algebras is as follows1. Let i be a non-isolated source choose j with Bji > 0, then after multiplying 1The argument that follows is the “source version” and a corresponding “sink version” holds with corre- sponding modification. The “sink version” is used in [Mul14]. 69 by yi ⊕ 1 and rearranging (6.1) we obtain yi  x x Bki k (cid:89) Bki>0 k(cid:54)=j Bji j = 1. (6.2) ((yi ⊕ 1)x(cid:48)i)xi − It is implied by (6.2) that 1 ∈ Axi + Axj and it follows that {A[x−1 ]} cover A provided these are cluster localizations. Here we see that do not necessarily need to be ],A[x−1 j i working over ZP. The essential fact is that 1 ∈ Axi + Axj. This leads to the following definition. Call the seed (B, x, y) a source freezing seed with respect to a ground ring A if yi ⊕ 1 ∈ A for all 1 ≤ i ≤ n. Lemma 6.1.4. Let (B, x, y) be a source freezing seed with respect to A. If i is a source and Bji > 0, then {A[x−1 Proof. In the case (B, x, y) a source freezing seed with respect to A, i is a source, and Bji > 0 ]} cover A provided they are cluster localizations. ],A[x−1 j i we have that 1 ∈ Axi +Axj by (6.2). Given any prime A-ideal P we must have either xi (cid:54)∈ P or xj (cid:54)∈ P since P (cid:54)= A is a proper ideal. If xi (cid:54)∈ P , then A[x−1 ]. 
If xj (cid:54)∈ P , then A[x−1 ] are cluster localization they form a ]. Thus if A[x−1 ]P (cid:40) A[x−1 ]P (cid:40) A[x−1 ] and A[x−1 j j j i i i cover. Lemma 6.1.5. Let (B, x, y) be a source freezing seed, and let (B†, x†, y†) the freezing at xi for any 1 ≤ i ≤ n. The seed (B†, x†, y†) is a source freezing seed with respect to A† = A[x±1 ]. i Proof. Since (B, x, y) is a source freezing seed, 1 ⊕ yj ∈ A for each 1 ≤ j ≤ n. We must 70 show that 1 ⊕ y†j ∈ A† for each 1 ≤ j ≤ n where y†j = yjx Bij i . We compute Bij i 1 ⊕ y†j = 1 ⊕ yjx = (1 ⊕ yj)x min(0,Bij ) i ∈ A† = A[x±1 i ], and the lemma is proven. Theorem 6.1.2. If A = AA(B, x, y) where (B, x, y) is an acyclic source freezing seed with respect to A, then A = U. Proof. Assume (B, x, y) is an acyclic source freezing on rank n. We will induct on the rank. If n ≤ 1, we are done by Proposition 6.1.3 since B must be isolated. Now we may assume the seed in non-isolated, otherwise we are done by Proposition 6.1.3. Since the seed in non- isolated and acyclic there must be a non-isolated source. We choose a non-isolated source i and then pick j with Bji > 0. Freezing at xi, the seed (B†, x†, y†) is an acyclic source freezing seed with respect to A† by Lemma 6.1.5, and hence A† = U† by induction since A† is of rank n− 1. Similarly freezing at xj, the seed (B††, x††, y††) is an acyclic source freezing seed with respect to A†† by Lemma 6.1.5 and A†† = U†† also. Now A† and A†† both must be cluster localizations by Lemma 6.1.1. Thus Lemma 6.1.4 says that {A†,A††} is a cover of A. We conclude A = U using Lemma 6.1.2. Theorem 6.1.2 can be used to produce example of geometric type with A = U over the polynomial ground ring ZP+. Call a quiver a source freezing quiver if all arrows involving frozen vertices are directed from a mutable vertex to a frozen vertex. An acyclic source freezing quiver is pictured in Figure 6.1. Combining Theorem 6.1.2 with the sharpening of the Laurent phenomenon [FWZ, Theorem 3.3.6] for cluster algebras of geometric type we 71 Figure 6.1: An acyclic source freezing quiver obtain the following corollary since any source freezing quiver defines a source freezing seed with respect to ZP+. Corollary 6.1.6. If Q is an acyclic source freezing quiver defining a cluster algebra A, then A = U over the ground ring ZP+. 6.2 The Cremmer-Gervais example In this section we consider an exotic cluster structure constructed by Gekhtman, Shapiro, and Vainshtein known as the Cremmer-Gervais cluster structure on SLn [GSV14, GSV17]. This cluster structure extends to a cluster structure on the set of n × n matrices denoted Matn. We let AA(CG, n) and UA(CG, n) denote the cluster algebra and upper cluster algebra over the ground ring A coming from the Cremmer-Gervais cluster structure on Matn. The quiver Q(CG, n) defines this cluster algebra. Figure 6.2 shows the quiver Q(CG, 3). It is know that for any n the upper cluster algebra UZP+ regular functions on Matn [GSV17, Theorem 3.1]. The following result shows the sensitivity (CG, n) is naturally isomorphic to the ring of of the A = U question on the ground ring Proposition 6.2.1. We have equality AZP(CG, 3) = UZP(CG, 3) over ZP but strict con- tainment AZP(CG, 3) (cid:40) UZP(CG, 3) over ZP+. Proof. The equality AZP(CG, 3) = UZP(CG, 3) follows from the fact that AZP(CG, 3) is a locally acyclic cluster algebra over ZP. This can be checked by applying the Banff algorithm. 72 Figure 6.2: The quiver Q(CG, 3). 
A visual representation of the reduced Banff algorithm applied to Q(CG, 3) is shown in Figure 6.3. In Figure 6.3 mutation equivalence is denoted by ⇔ and the covering pairs used are displayed in thick red. The strict containment AZP(CG, 3) ⊊ UZP(CG, 3) is [GSV14, Theorem 4.1].

Figure 6.3: The reduced Banff algorithm applied to the quiver Q(CG, 3).

6.3 Relationship with reddening

A maximal green sequence or reddening sequence is a special sequence of mutations whose existence gives rise to additional properties of the underlying cluster algebra. These sequences of mutations were introduced by Keller to study quantum dilogarithm identities and Donaldson-Thomas invariants [Kel11]. It has been observed in the literature that the existence of a maximal green sequence or reddening sequence seems to coincide with equality of the cluster algebra and upper cluster algebra [CLS15]. We will exhibit a maximal green sequence for Q(CG, 3) in this section. The existence of such a sequence depends only on the mutable part of a quiver.

Given a mutable quiver Q, the framed quiver ˆQ is obtained by adding a new frozen vertex i′ for each (mutable) vertex i along with an arrow i → i′. An example of a framed quiver is given in Figure 6.4. A mutable vertex i is called green if all arrows involving i and a frozen vertex j′ are directed i → j′. Otherwise the vertex i is called red. All mutable vertices of a framed quiver start as green. A maximal green sequence is a sequence of mutations starting with ˆQ such that each mutation occurs at a green vertex and all mutable vertices are red after applying the sequence of mutations. The quiver with vertices {1, 2} and arrow set {1 → 2} has exactly 2 maximal green sequences, which are the mutation sequences (1, 2) and (2, 1, 2). The maximal green sequence (2, 1, 2) is illustrated in Figure 6.5.

Figure 6.4: A framed quiver.

Figure 6.5: A maximal green sequence.

The quiver Q(CG, 3) provides an interesting case regarding the connection between maximal green sequences and the upper cluster algebra. The mutable part of the quiver Q(CG, 3) with vertices labeled is shown in Figure 6.6. With this labeling of vertices it can be checked that (2, 3, 4, 1, 5, 1, 2, 6, 3) is a maximal green sequence. Hence, we see the relationship of reddening sequences and equality of the cluster algebra and upper cluster algebra is again sensitive to the choice of ground ring, since AZP+(CG, 3) ≠ UZP+(CG, 3) but AZP(CG, 3) = UZP(CG, 3). The maximal green sequence exhibited for Q(CG, 3) can be found using a new technique called component preserving mutations. The idea of component preserving mutations, along with further examples, will be given in [BMR+].

Figure 6.6: The mutable part of the quiver Q(CG, 3).

Chapter 7

Log-Canonical Coordinates

This chapter is based on the article [MO17] which is joint work with Nicholas Ovenhouse. We will be interested in Poisson algebras of rational functions. Let K be a field and Ω = (ωij) a skew-symmetric matrix. Consider the algebra RΩ = K(x1, . . . , xn) of rational functions in n variables with a Poisson bracket in which the functions x1, . . . , xn form a system of log-canonical coordinates:

{xi, xj} = ωij xi xj.

Here we wish to show that the bracket {·, ·} has the simplest expression in the coordinates x1, . . . , xn.
In particular, we want to show that no rational change of coordinates can make the structure functions constant or linear (homogeneous or non-homogeneously linear). We wish to investigate the following conjecture of Michael Shapiro. Conjecture 7.0.1. If f1,··· , fn ∈ RΩ are rational functions such that n(cid:88) k=1 {fi, fj} = ck ijfk + dij with ck ij, dij ∈ K for 1 ≤ i, j, k ≤ n, then {fi, fj} = 0 for 1 ≤ i, j ≤ n. We prove this conjecture in Theorem 7.2.4. Note that the conjecture implies that for any log-canonical Poisson structure on affine space, the answer to Question 4.1.5 is “no.” That is, there is no system of coordinates whose structure functions are polynomials of degree less 76 than two. 7.1 Nonexistence of Constant Bracket We have discovered that some of the results of this section have already appeared in [GL11]. However, we include this section for completeness. The results in this section will be built upon to prove our main theorem. Given some n× n skew-symmetric matrix Ω = (ωij), recall that RΩ = K(x1,··· , xn) is the algebra of rational functions in n variables with the Poisson bracket given by {xi, xj} = ωijxixj for 1 ≤ i, j ≤ n. For I = (i1, . . . , in) ∈ Zn, the corresponding Laurent monomial is written xI = xi1 J be the 2-by-n matrix n . For I = (i1, . . . , in) and J = (j1, . . . , jn) in Zn, let AI 1 ··· xin whose rows are I and J. Let ∆ij(AI J ) be the 2-by-2 minor of AI J with columns indexed by i and j. Also define M I J to be the weighted sum of the ∆ij(AI J ) given by the following formula (cid:88) k<(cid:96) M I J := ωk(cid:96)∆k(cid:96)(AI J ) = (cid:88) k<(cid:96) ωk(cid:96) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ik jk (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = i(cid:96) j(cid:96) (cid:88) k<(cid:96) ωk(cid:96)(ikj(cid:96) − i(cid:96)jk). Note that if e1, . . . , en is a basis for Zn, with e1, . . . , en the dual basis, we can define the two-form (cid:88) k<(cid:96) ω = ωk(cid:96) ek ∧ e(cid:96), and then M I J is Z-bilinear and skew-symmetric with respect to I and J. We now compute an explicit formula for the bracket of two Laurent J = ω(I, J). In particular, the expression M I polynomials. 77 Lemma 7.1.1. If I, J ∈ Zn, then {xI , xJ} = M I J xI+J . Proof. Let I = (i1, . . . , in) and J = (j1, . . . , jn). For 1 ≤ k ≤ n let and Ik = (i1, . . . , ik−1, 0, ik+1, . . . , in) Jk = (j1, . . . , jk−1, 0, jk+1, . . . , jn). By using Equation 4.1, we find {xI , xJ} = = = = = (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) 1≤k,(cid:96)≤n 1≤k,(cid:96)≤n 1≤k,(cid:96)≤n ∂xI ∂xk ∂xJ ∂x(cid:96) {xk, x(cid:96)} ik j(cid:96) xIk+J(cid:96){xk, x(cid:96)} ik j(cid:96) xIk+J(cid:96)ωk(cid:96)xk x(cid:96) ωk(cid:96)ik j(cid:96) xI+J 1≤k,(cid:96)≤n 1≤k<(cid:96)≤n J xI+J = M I ωk(cid:96)(ikj(cid:96) − i(cid:96)jk)xI+J 78 In order to prove our first theorem we want to work with iterated Laurent series. We will give a brief overview of the theory of iterated Laurent series which will be needed for our purpose. For a more in depth treatment of iterated Laurent series we refer the reader to [Xin04]. For us a formal Laurent series in variables x1, . . . , xn over K will mean any formal sum (cid:88) I∈Zn f = αI xI with αI ∈ K for all I ∈ Zn. For any I ∈ Zn let [xI ]f denote the coefficient of xI in f . In particular, [1]f denotes the constant term of f . Also, we let supp(f ) denote the set I ∈ Zn such that [xI ]f (cid:54)= 0. 
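Before turning to iterated Laurent series, Lemma 7.1.1 admits a quick symbolic sanity check: the bracket of two Laurent monomials computed from the derivation rule {f, g} = Σ ω_kℓ x_k x_ℓ (∂f/∂x_k)(∂g/∂x_ℓ) agrees with M^I_J x^(I+J). The skew-symmetric matrix Ω and the exponent vectors in the sketch are arbitrary illustrative choices.

```python
import sympy as sp

n = 3
xs = sp.symbols('x1:4')                                    # x1, x2, x3
Omega = sp.Matrix([[0, 1, -2], [-1, 0, 3], [2, -3, 0]])    # an arbitrary skew-symmetric matrix

def bracket(f, g):
    """Log-canonical bracket via the derivation rule {f, g} = sum omega_kl x_k x_l df/dx_k dg/dx_l."""
    return sum(Omega[k, l] * xs[k] * xs[l] * sp.diff(f, xs[k]) * sp.diff(g, xs[l])
               for k in range(n) for l in range(n))

def M(I, J):
    """M^I_J = sum_{k < l} omega_kl (i_k j_l - i_l j_k)."""
    return sum(Omega[k, l] * (I[k] * J[l] - I[l] * J[k]) for k in range(n) for l in range(k + 1, n))

def monomial(I):
    m = sp.Integer(1)
    for k in range(n):
        m *= xs[k] ** I[k]
    return m

I, J = (2, -1, 3), (0, 4, 1)                               # arbitrary exponent vectors in Z^3
lhs = bracket(monomial(I), monomial(J))
rhs = M(I, J) * monomial(tuple(I[k] + J[k] for k in range(n)))
print(sp.simplify(lhs - rhs) == 0)                         # True, as predicted by Lemma 7.1.1
```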
In order to prove our first theorem we want to work with iterated Laurent series. We will give a brief overview of the theory of iterated Laurent series which will be needed for our purpose. For a more in depth treatment of iterated Laurent series we refer the reader to [Xin04]. For us a formal Laurent series in variables $x_1, \dots, x_n$ over K will mean any formal sum
\[ f = \sum_{I \in \mathbb{Z}^n} \alpha_I x^I \]
with $\alpha_I \in K$ for all $I \in \mathbb{Z}^n$. For any $I \in \mathbb{Z}^n$ let $[x^I]f$ denote the coefficient of $x^I$ in f. In particular, $[1]f$ denotes the constant term of f. Also, we let $\operatorname{supp}(f)$ denote the set of $I \in \mathbb{Z}^n$ such that $[x^I]f \neq 0$.

The set of formal Laurent series is a K-vector space, but it is not a K-algebra since we cannot multiply two arbitrary formal Laurent series. However, certain subsets of the set of formal Laurent series do form a K-algebra. Define $K\langle\langle x \rangle\rangle := K((x))$ to be the field of Laurent series in a single variable. That is, $K((x))$ consists of formal Laurent series $\sum_{i \in \mathbb{Z}} a_i x^i$ containing only finitely many negative exponents. Now define
\[ K\langle\langle x_1, \dots, x_{i+1} \rangle\rangle := K\langle\langle x_1, \dots, x_i \rangle\rangle((x_{i+1})) \]
iteratively. We then let $L = K\langle\langle x_1, \dots, x_n \rangle\rangle$ be the field of iterated Laurent series in n variables.

We have the following immediate corollary of Lemma 7.1.1 which holds for Laurent polynomials. In the remainder of this section we will show that this corollary can be extended to hold for any iterated Laurent series.

Corollary 7.1.2. Let $f, g \in K[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ be Laurent polynomials, with $\mathcal{I} = \operatorname{supp}(f)$ and $\mathcal{J} = \operatorname{supp}(g)$. Then their bracket is given by
\[ \{f, g\} = \sum_{(I,J) \in \mathcal{I} \times \mathcal{J}} \alpha_I \beta_J M^I_J x^{I+J}. \]

Remark 7.1.3. Note that we have the inclusion $K[x_1, \dots, x_n] \hookrightarrow L$. Since L is a field and $R_\Omega$ is the field of fractions of $K[x_1, \dots, x_n]$, we also have the inclusion $R_\Omega \hookrightarrow L$. Hence, $R_\Omega$ is a K-subalgebra of L.

Remark 7.1.4. Notice the order in which we adjoin our variables is relevant. For instance, consider the rational function $\frac{1}{x+y}$. As an element of $K\langle\langle x, y \rangle\rangle$, it can be written as
\[ \frac{1}{x+y} = \sum_{n \ge 0} (-1)^n x^{-(n+1)} y^n. \]
However, since there is no lower bound on the powers of x, this does not give an element of $K\langle\langle y, x \rangle\rangle$. Instead, to represent it as an element of the latter field, we must write
\[ \frac{1}{x+y} = \sum_{n \ge 0} (-1)^n x^n y^{-(n+1)}. \]

Remark 7.1.5. Any iterated Laurent series $f \in L$ can be expressed as a formal Laurent series. That is, we can write
\[ f = \sum_{I \in \operatorname{supp}(f)} \alpha_I x^I. \]
Given $f, g \in L$ where
\[ f = \sum_{I \in \operatorname{supp}(f)} \alpha_I x^I \qquad g = \sum_{J \in \operatorname{supp}(g)} \beta_J x^J \]
their product is
\[ fg = \sum_{(I,J) \in \operatorname{supp}(f) \times \operatorname{supp}(g)} \alpha_I \beta_J x^{I+J}. \]
This product fg is also an iterated Laurent series, since L is a field. In particular this means fg is a formal Laurent series with the property that for any $K \in \mathbb{Z}^n$, the set
\[ \{(I, J) \in \operatorname{supp}(f) \times \operatorname{supp}(g) : I + J = K\} \]
is finite.

In fact, we have the following result, which will be useful later. (We have chosen to use the iterated Laurent construction, and hence must show the well-ordered support property of Proposition 7.1.6; alternatively, we could have started from the well-ordered support property and shown that we obtain a ring structure. A formal series with well-ordered support is sometimes called a Hahn series or a Mal'cev-Neumann series, and exponents can be taken from any ordered abelian group.)

Proposition 7.1.6 ([Xin04, Proposition 2-2.1]). Let f be a formal Laurent series. Then $f \in L$ if and only if $\operatorname{supp}(f)$ is well-ordered with respect to the reverse lexicographic ordering.
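The two expansions in Remark 7.1.4 can be checked mechanically; the following sketch (an added illustration using sympy, not part of the original text) expands 1/(x + y) first as an element of $K\langle\langle x, y \rangle\rangle$, a series in y whose coefficients are Laurent series in x, and then with the roles of the variables exchanged.

import sympy as sp

x, y = sp.symbols('x y')
# expansion in K<<x, y>> = K<<x>>((y)): finitely many negative powers of y
print(sp.series(1/(x + y), y, 0, 4))   # 1/x - y/x**2 + y**2/x**3 - y**3/x**4 + ...
# expansion in K<<y, x>> = K<<y>>((x)): finitely many negative powers of x
print(sp.series(1/(x + y), x, 0, 4))   # 1/y - x/y**2 + x**2/y**3 - x**3/y**4 + ...

Neither expansion lies in the other field, as the remark explains: the first has unboundedly negative powers of x and the second unboundedly negative powers of y.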
Lemma 7.1.7. The Poisson bracket on $R_\Omega$ extends uniquely to a Poisson bracket on L.

Proof. Note that by bilinearity, any Poisson bracket on L is determined by the brackets of all Laurent monomials. Thus by Lemma 7.1.1, any bracket extending the one on $R_\Omega$ must be given by the same formula on monomials. We claim that the formula in Corollary 7.1.2 gives the bracket on L. It suffices to show that for $f, g \in L$ we have $\{f, g\} \in L$. That is, we must show that given $f, g \in L$, the formula from Corollary 7.1.2 yields an element of L.

Let $f, g \in L$, and again use the notation $\mathcal{I} = \operatorname{supp}(f)$ and $\mathcal{J} = \operatorname{supp}(g)$. Note that since $fg \in L$, the support $\operatorname{supp}(fg)$ is well-ordered by Proposition 7.1.6. The formula from Corollary 7.1.2 also implies that $\operatorname{supp}(\{f,g\}) \subseteq \operatorname{supp}(f) + \operatorname{supp}(g)$, where "+" is used to denote Minkowski addition:
\[ \operatorname{supp}(f) + \operatorname{supp}(g) = \{I + J \mid I \in \operatorname{supp}(f),\ J \in \operatorname{supp}(g)\}. \]
Being a subset of a well-ordered set, we see that $\operatorname{supp}(\{f, g\})$ is itself well-ordered. Hence, $\{f, g\} \in L$ by Proposition 7.1.6.

The remaining results in this section are restatements of the indicated results from [GL11]. The next theorem is a simple but powerful observation which is an essential ingredient in our proof of Conjecture 7.0.1.

Theorem 7.1.8 ([GL11, Proposition 5.2 (a)]). If $f, g \in L$, then $[1]\{f, g\} = 0$.

Proof. As usual, let $\mathcal{I}$ and $\mathcal{J}$ be the supports of f and g, and let
\[ f = \sum_{I \in \mathcal{I}} \alpha_I x^I \qquad g = \sum_{J \in \mathcal{J}} \beta_J x^J \]
be expressions for f and g as formal Laurent series. Computing using Corollary 7.1.2 we see that
\[ \{f, g\} = \sum_{(I,J) \in \mathcal{I} \times \mathcal{J}} \alpha_I \beta_J M^I_J x^{I+J} \]
and so
\[ [1]\{f, g\} = \sum_{\substack{(I,J) \in \mathcal{I} \times \mathcal{J} \\ I + J = 0}} \alpha_I \beta_J M^I_J. \]
However, if $I + J = 0$, then $J = -I$ and $M^I_J = 0$. Here we have used that $M^I_J$ is skew-symmetric with respect to I and J.

Corollary 7.1.9 ([GL11, Corollary 5.3]). If $f_1, \dots, f_n \in R_\Omega$ are rational functions such that $\{f_i, f_j\} = c_{ij}$ with $c_{ij} \in K$ for $1 \le i, j \le n$, then $c_{ij} = 0$ for $1 \le i, j \le n$.

Proof. By Lemma 7.1.7, $R_\Omega$ is a Poisson subalgebra of L. The corollary then follows immediately from Theorem 7.1.8.
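The content of Theorem 7.1.8 can be observed directly in small cases. The following sketch (an added illustration using sympy, not part of the original text; bracket is an ad hoc helper) uses the two-variable log-canonical bracket {x, y} = xy and checks that the bracket of two Laurent polynomials has vanishing constant term.

import sympy as sp

x, y = sp.symbols('x y')

def bracket(f, g):
    # two-variable log-canonical bracket with omega_{12} = 1, i.e. {x, y} = x*y
    return sp.expand(x * y * (sp.diff(f, x) * sp.diff(g, y)
                              - sp.diff(f, y) * sp.diff(g, x)))

f = x + 1/y
g = y + 1/x
b = bracket(f, g)
print(b)                                          # x*y - 1/(x*y)
print(b.as_coefficients_dict().get(sp.S.One, 0))  # the constant term is 0

The same vanishing persists for arbitrary iterated Laurent series, which is exactly what the proof of Theorem 7.1.8 establishes.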
7.2 Nonexistence of Linear Bracket

As in the previous section, we consider the Poisson algebra of rational functions $R_\Omega$, in n variables, with bracket given by $\{x_i, x_j\} = \omega_{ij} x_i x_j$ for some skew-symmetric matrix $\Omega = (\omega_{ij})$ with coefficients in K. It is the goal of this section to prove the aforementioned Conjecture 7.0.1, which states that there is no rational change of coordinates making the bracket linear. That is, if there are rational functions $f_1, \dots, f_n$ such that $\{f_i, f_j\} = \sum_{k=1}^n c_{ij}^k f_k + d_{ij}$ for constants $c_{ij}^k, d_{ij} \in K$, then in fact all the coefficients $c_{ij}^k$ and $d_{ij}$ must be zero. We now prove a lemma which will be used later.

Lemma 7.2.1. There do not exist linearly independent $f, g \in R_\Omega$ such that $\{f, g\} = af + bg$ for $a, b \in K$ with a and b not both zero.

Proof. Assume there do exist linearly independent rational functions f and g so that $\{f, g\} = af + bg$ for some $a, b \in K$. Then the linear span of f and g is a two-dimensional Lie subalgebra of $R_\Omega$. Up to isomorphism, there is a unique two-dimensional non-abelian Lie algebra, with the bracket given by $\{f, g\} = f$. Explicitly, the isomorphism is given by
\[ f \mapsto f + \frac{b}{a} g, \qquad g \mapsto \frac{1}{a} g \]
(assuming $a \neq 0$). So, we may assume without loss of generality that $a = 1$ and $b = 0$, thus $\{f, g\} = f$. But then we have that $\frac{1}{f}\{f, g\} = 1$. Note that since $\operatorname{ad}_f = \{f, \cdot\}$ is a derivation, we have
\[ \frac{1}{f}\{f, g\} = \left\{ f, \frac{g}{f} \right\}. \]
This in turn implies that $\{f, g/f\} = 1$. But this directly contradicts Corollary 7.1.9, which says that the bracket of any two rational functions cannot be a nonzero constant.

A useful consequence of this lemma is that the adjoint maps $\operatorname{ad}_f$ can have no non-zero eigenvalues.

Corollary 7.2.2. If $f, g \in R_\Omega$ with $g \neq 0$ and $\{f, g\} = \lambda g$ for some $\lambda \in K$, then $\lambda = 0$.

The next result says that the adjoint maps $\operatorname{ad}_f$ cannot be nonzero and nilpotent.

Lemma 7.2.3. If $f, g \in R_\Omega$ and $\{f, g\} \neq 0$, then $\{f, \{f, g\}\} \neq 0$.

Proof. Take $f, g \in R_\Omega$ and assume that $\{f, g\} \neq 0$ but $\{f, \{f, g\}\} = 0$. Then we know that $\left\{ f, \frac{1}{\{f,g\}} \right\} = 0$. Computing, we see that
\[ \left\{ f, \frac{g}{\{f, g\}} \right\} = g \left\{ f, \frac{1}{\{f, g\}} \right\} + \frac{1}{\{f, g\}}\{f, g\} = 1 \]
which is a contradiction to Corollary 7.1.9.

We are now ready to prove the main result.

Theorem 7.2.4. If $f_1, \dots, f_n \in R_\Omega$ are rational functions such that
\[ \{f_i, f_j\} = \sum_{k=1}^{n} c_{ij}^k f_k + d_{ij} \]
with $c_{ij}^k, d_{ij} \in K$ for $1 \le i, j, k \le n$, then $\{f_i, f_j\} = 0$ for $1 \le i, j \le n$.

Proof. Assume first that $K = \overline{K}$ is algebraically closed. Let $f_1, \dots, f_n \in R_\Omega$ be rational functions such that $\{f_i, f_j\} = \sum_{k=1}^n c_{ij}^k f_k + d_{ij}$ for some $c_{ij}^k, d_{ij} \in K$. This means that $1, f_1, \dots, f_n$ generate a finite dimensional Lie algebra inside $R_\Omega$. Let $F \le R_\Omega$ denote this finite dimensional Lie algebra generated by $1, f_1, \dots, f_n$. For any $f \in F$ we have the linear map $\operatorname{ad}_f : F \to F$, and by Corollary 7.2.2 all eigenvalues of $\operatorname{ad}_f$ are zero. It follows, since K is algebraically closed, that $\operatorname{ad}_f$ is nilpotent. However, Lemma 7.2.3 implies that if $\operatorname{ad}_f$ is nilpotent we must have $\operatorname{ad}_f = 0$. The theorem then follows.

In the case that K is not algebraically closed, consider $\overline{R}_\Omega := \overline{K} \otimes_K R_\Omega = \overline{K}(x_1, \dots, x_n)$. The relations $\{f_i, f_j\} = \sum_{k=1}^n c_{ij}^k f_k + d_{ij}$ still hold. Thus $1, f_1, \dots, f_n$ will generate some finite dimensional Lie algebra inside $\overline{R}_\Omega$, and we can complete the argument just as in the algebraically closed case.

Remark 7.2.5. Given a Poisson algebra P, the quadratic Poisson Gel'fand-Kirillov problem is to determine if the field of fractions of P is isomorphic to $R_\Omega$ for some Ω. This problem was first defined in [GL11] and further studied in [LL16]. In this section, we have shown a number of properties of the Poisson algebra $R_\Omega$. Hence, any Poisson algebra isomorphic to $R_\Omega$ must also have these properties, and the results in this section can be viewed as necessary conditions for a Poisson algebra to be a solution to the quadratic Poisson Gel'fand-Kirillov problem.

7.3 Generalizations

The results of the previous section are not specific only to the Poisson algebra $R_\Omega$. Let P be a Poisson algebra with the following two properties:

• P is a field.

• For any $a, b \in P$ we have $\{a, b\} = 0$ whenever $\{a, b\} \in K$.

Call such an algebra a nonconstant Poisson field (since there are no elements with $\{f, g\} = 1$). Then versions of the results in the previous section hold for P since the proofs only use the conditions above. In particular we will have a version of Theorem 7.2.4 which says that P can have no finite dimensional non-abelian Lie subalgebra. Before proving this theorem, let us collect some of the essential parts of the proofs from the previous section into a useful general lemma:

Lemma 7.3.1. Let P be a Poisson K-algebra which is a field. Then the following are equivalent:

(a) There exist $f, g \in P$ such that $\{f, g\} = 1$.

(b) There exist $f, g \in P$ such that $\{f, g\} = g$.

(c) There exist $f, g \in P$ with $\{f, g\} \neq 0$ but $\{f, \{f, g\}\} = 0$.

Proof. (b) ⇒ (a): Follows from an argument identical to the proof of Lemma 7.2.1.

(c) ⇒ (a): Follows from an argument identical to the proof of Lemma 7.2.3.

(a) ⇒ (c): If $\{f, g\} = 1$, then $\{f, \{f, g\}\} = \{f, 1\} = 0$.

(a) ⇒ (b): Suppose that $\{f, g\} = 1$, and define $x = fg$ and $y = g$. Then
\[ \{x, y\} = \{fg, g\} = \{f, g\}g = g = y. \]

Analogous to the above definition, define a nonlinear Poisson field as a Poisson field P which has no finite-dimensional nonabelian Lie subalgebras.
This means there are no finite 86 collections f1, . . . , fk ∈ P and constants c(cid:96) says that for a Poisson field, being nonconstant is a sufficient condition to being nonlinear. ijf(cid:96). The next result (cid:96) c(cid:96) ij such that {fi, fj} =(cid:80) Theorem 7.3.2. Any nonconstant Poisson field is also nonlinear. Proof. We assume that we are working over an algebraically closed field, if not we can modify just as in the proof of Theorem 7.2.4. Suppose that there exist some f1, . . . , fk ∈ P for some k > 1 and constants c(cid:96) ij so that (cid:88) (cid:96) {fi, fj} = c(cid:96) ijf(cid:96) Then f1, . . . , fk generate a finite-dimensional Lie subalgebra F ≤ P . Each map adfi endomorphism of F . Note that adfi would be some g ∈ F and λ (cid:54)= 0 so that {fi, g} = λg. Then for ˜fi = 1 λ fi, we have { ˜fi, g} = g. By the previous theorem, there must also exist some u, v ∈ P so that {u, v} = 1. But this can have only zero eigenvalues, and hence contradicts the assumption on P . So in fact adfi cannot have any nonzero eigenvalues. If it did, there is an must be nilpotent. Again, by the previous theorem, if adfi there would exist u, v ∈ P with {u, v} = 1. So it must be that adfi abelian Lie algebra. is nonzero and nilpotent, then = 0, and thus F is an In the spirit of Question 4.1.5, let us call a system of coordinates (homogeneous) alge- braically reduced if all structure functions are (homogeneous) polynomials of a given degree, and there does not exist any rational change of coordinates making the structure functions (homogeneous) polynomials of a smaller degree. In Theorem 7.2.4 we provided an answer to Question 4.1.5 for any log-canonical system of coordinates and showed that they are algebraically reduced. It is natural to look for other (homogeneous) algebraically reduced coordinate systems. 87 Let us now consider systems of coordinates for which all structure functions are mono- mials. In dimension 2 with coordinates (x, y) so that {x, y} = xayb we have seen that such a monomial system of coordinates is algebraically reduced if and only if (a, b) = (0, 0) or (a, b) = (1, 1). In dimension 3 with coordinates (x, y, z) and bracket relations {x, y} = Axa1ya2za3 {x, z} = Bxb1yb2zb3 {y, z} = Cxc1yc2zc3 we can extend by skew-symmetry, but must also ensure the Jacobi identity holds. Computing we obtain {x,{y, z}} + {y,{z, x}} + {z,{x, y}} = (b1 − a1)ABxa1+b1−1ya2+b2za3+b3 + (c2 − a2)ACxa1+c1ya2+c2−1za3+c3 + (c3 − b3)BCxb1+c1yb2+c2zb3+c3−1. If a1 = b1, a2 = c2, and b3 = c3 the Jacobi identity will hold. In that case the bracket relations are {x, y} = A(xa1ya2zb3)za3−b3 {x, z} = B(xa1ya2zb3)yb2−a2 {y, z} = C(xa1ya2zb3)xc1−a1 Consider the simplest example of the case above, where (a1, a2, b3) = (0, 0, 0). One such 88 example is the bracket on K(x, y, z) given by {x, y} = az2 {x, z} = by2 {y, z} = cx2 for some a, b, c ∈ K∗. This bracket gives a candidate for another homogeneous quadratic algebraically reduced system of coordinates which differs from the log-canonical case. By the above discussion, it would suffice to show that this bracket makes K(x, y, z) a nonconstant Poisson field. 
However, unlike the log-canonical case, this bracket can produce non-zero constant terms, as exhibited by the following examples: (cid:110) (cid:111) y z2 x, (cid:111) (cid:110)x z , y z (cid:16) y z (cid:17)3 (cid:16) x z (cid:17)3 = a − 2b (cid:16)y (cid:17)3 = a − b z + c As such, the arguments used previously do not apply, since everything followed from Theorem 7.1.8, which said that the constant term of {f, g} (viewed as a Laurent series) is always zero. However, it is possible that this bracket makes K(x, y, z) a nonconstant Poisson field, despite the fact that Theorem 7.1.8 does not hold. It seems to be an interesting problem to find other algebraically reduced brackets on K(x1, . . . , xn). 89 Chapter 8 Combinatorial Hopf Algebras This chapter considers combinatorial Hopf algebras as defined by Aguiar, Bergeron, and Sot- tile [ABS06]. A particular Hopf algebra of simplicial complexes will be studied in Chapter 9. Hopf algebras in combinatorics predate the 2006 Aguiar, Bergeron, and Sottile definition. In 1979 Joni and Rota [JR79] showed that Hopf algebra structures can be a valuable tool in studying the assembly and disassembly of many combinatorial objects. A key addition to the study of Hopf algebras in combinatorics made by Aguiar, Bergeron, and Sottile was a map called a character. The character allows one to associate a (quasi)symmetric function to each combinatorial object underlying the Hopf algebra. We will study (a generalization of) the symmetric function the arises from the Hopf algebra in Chapter 9. 8.1 Hopf algebras We now define (combinatorial) Hopf algebras. Let H be a vector space over a field K. We will assume that char(K) = 0. Let Id be the identity map on H. We call H an associative K-algebra with unit 1 when H is equipped with a K-linear map m : H ⊗ H → H and an 90 element 1 ∈ H satisfying m ◦ (m ⊗ Id) = m ◦ (Id ⊗ m); m ◦ (Id ⊗ u) = m ◦ (u ⊗ Id) = Id. Here, u stands for the K-linear map K → H defined by t (cid:55)→ t · 1. A coassociative K-coalgebra with counit  is a K-vector space D over K equipped with a coproduct ∆ : D → D ⊗ D and a counit  : D → K. Both ∆ and  must be K-linear maps. The coproduct is coassociative so that (∆ ⊗ Id) ◦ ∆ = (Id ⊗ ∆) ◦ ∆ and must be compatible with  so that ( ⊗ Id) ◦ ∆ = (Id ⊗ ) ◦ ∆ = Id. If an algebra (H, m, u) is also equipped with a coalgebra structure given by ∆ and , then we say that H is a bialgebra provided ∆ and  are algebra homomorphisms. The maps m and ∆ can be applied iteratively as follows. Letting H⊗k = H ⊗ ··· ⊗ H denote the k-fold tensor, define the iterated product map m(k−1) : H⊗k → H inductively by setting m(−1) = u, m(0) = Id and for k ≥ 1 let m(k) = m ◦ (Id ⊗ m(k−1)). Similarly, the iterated coproduct map ∆(k−1) : H → H⊗k is given inductively by ∆(k) = (Id⊗∆(k−1))◦∆, where ∆(−1) =  and ∆(0) =Id. Definition 8.1.1. A Hopf algebra H is a K-bialgebra together with a K-linear map S : H → H called the antipode. This map must satisfy the following m ◦ (S ⊗ Id) ◦ ∆ = m ◦ (Id ⊗ S) ◦ ∆ = u ◦ . Example 8.1.2 (Group Hopf algebra). If G is a group, then the group algebra KG is Hopf 91 algebra with ∆(g) = g ⊗ g (g) = 1 S(g) = g−1 for all g ∈ G. Example 8.1.3 (Polynomial Hopf algebra). The polynomial ring K[x] is a Hopf algebra with ∆(x) = 1 ⊗ x + x ⊗ 1 (x) = 0 S(x) = −x. 8.2 Combinatorial Hopf algebras We say that a bialgebra H is graded if it is decomposed into a direct sum (cid:77) n≥0 H = Hn (cid:76)n where m(Hi ⊗ Hj) ⊆ Hi+j, u(K) ⊆ H0, ∆(Hn) ⊆ i=0 Hi ⊗ Hn−i, and (Hn) = 0 for n ≥ 1. We call H connected if H0 ∼= K. 
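For instance (an illustration added here, not part of the original text), the polynomial Hopf algebra K[x] of Example 8.1.3 is graded and connected, with the grading by degree:
\[ K[x] = \bigoplus_{n \ge 0} K \cdot x^n, \qquad \Delta(x^n) = \sum_{k=0}^{n} \binom{n}{k} x^k \otimes x^{n-k} \in \bigoplus_{k=0}^{n} K x^k \otimes K x^{n-k}, \qquad \epsilon(x^n) = 0 \text{ for } n \ge 1, \]
and $K[x]_0 = K$. Its antipode is given on these basis elements by $S(x^n) = (-x)^n = (-1)^n x^n$.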
For each n ≥ 0 we refer to elements in Hn as homogeneous elements of degree n. Any graded and connected K-bialgebra is a Hopf algebra since the antipode can be 92 defined recursively. In many instances computing the antipode of a given Hopf algebra is a very difficult problem. However, we will provide an explicit cancellation-free formula for the antipode in the Hopf algebra of finite simplicial complexes in Chapter 9. We will compute the antipode using Takeuchi’s formula for the antipode in a graded connected Hopf algebra [Tak71]. This formula states (cid:88) k≥0 S = (−1)kmk−1π⊗k∆k−1 (8.1) where π : H → H is the projection given by linearly extending 0 π|Hn = n = 0 . Id n > 0. A combinatorial Hopf algebra is a pair (H, ζ) where H is a graded connected Hopf algebra and ζ : H → K is a algebra morphism called a character. Given combinatorial Hopf algebras (H, ζ) and (H(cid:48), ζ(cid:48)) a map φ : H → H(cid:48) is a combinatorial Hopf algebra morphism if φ is a Hopf algebra morphism and ζ = ζ(cid:48) ◦ φ. In the category of combinatorial Hopf algebras the Hopf algebra of quasisymmetric functions with a canonical character is a terminal object [ABS06, Theorem 4.1]. The Hopf algebra of symmetric functions with a canonical character is a termi- nal object in the catergory of cocommutative combinatorial Hopf algebas [ABS06, Theorem 4.3]. This gives one explanation for the ubiquity of symmetric functions and quasisymmetric functions. 93 The Hopf algebra of quasisymmetric functions QSym is graded as (cid:77) n≥0 QSym = QSymn where QSymn is spanned linearly over K by {Mα}α(cid:15)n. Here Mα is the formal power series is countable many variables x1, x2, . . . defined by (cid:88) Mα := i1 1 define ιO(σ) = (V1, . . . , Vj−1, Vj − {vi},{vi}, Vj+1, . . . , V(cid:96)) Otherwise, if |Vj| = 1 define ιO(σ) = (V1, . . . , Vj−2, Vj−1 ∪ Vj, Vj+1, . . . , V(cid:96)). In the latter case, since vi is the largest source in Oi, the vertices in Vj−1 are vertices in Oi as well and hence, Vj−1 ∪ Vj is a stable set of vertices. Notice that in both cases, sign(ιO(σ)) = −sign(σ). Moreover, ιO(ιO(σ)) = σ and πO is the unique fixed point of ιO. We conclude that for each acyclic orientation O, the involution ιO has a unique fixed point. Hence the coefficient of Γ[n],∅ in (9.1) is (−1)na(Γ(1)). The proof for the coefficient of ΓV,F when F (cid:54)= ∅ can be done using the same argument as above with slight modifications. Namely, each connected component of ΓV,F can be identified with a single vertex and a similar sign reversing involution can be defined for the graph Γ(1)/F whose vertex set has cardinality c(F ). 102 Remark 9.2.2. Obtaining a cancellation-free a formula is, in general, a difficult problem when studying Hopf algebras arising in combinatorics. Theorem 9.2.1 fits into a family of results on antipode formulas for graphs and various generalizations of graphs. In [HM12, Theorem 3.1] Humpert and Martin, the authors provide a cancellation-free formula for the antipode of graphs using induction. On the other hand, in [BS17, Theorem 7.1] Benedetti and Sagan make use of sign-reversing involutions on combinatorial objects to obtain cancellation-free formulas for antipodes of several Hopf algebras including the graph Hopf algebra. Our proof makes use of a sign-reversing involution for graphs generalized to simplicial complexes. Aguiar and Ardila [AA17] have also recovered the antipode formula for both graphs and simplicial complexes using a Hopf monoids of generalized permutahedra [AA17]. 
In this context the similarity of the formulas and proofs is explained by the fact that the generalized permutahedra associated to a simplicial complex and to its 1-skeleton have the same normal fan. Benedetti and Bergeron [BB16] have given a simplified, but not cancellation-free, formula for the antipode of hypergraphs. The fact that the formula is not cancellation-free has since been explained by Benedetti, Bergeron, and Machacek in [BBM17] where it is shown that the coefficients in the antipode formula of a hypergraph are Euler characteristics of certain polyhedral complexes.

Let us return to our previous example with Γ generated by the facets {1, 2, 3} and {3, 4}. Using the information in Table 9.1 we obtain the expression in Figure 9.2. Looking at the expression for the antipode in this example, we see that if we add all the coefficients together we obtain 1. It turns out that the sum of the coefficients of the antipode of a simplicial complex is always $(-1)^n$ where n is the number of vertices of the simplicial complex. We will derive this fact using characters and quasisymmetric functions in the next section (see Corollary 10.2.2).

F ∈ F(Γ^(1))                      (−1)^{c(F)}   a(Γ^(1)/F)
∅                                  (−1)^4        12
{1, 2}                             (−1)^3         4
{1, 3}                             (−1)^3         4
{2, 3}                             (−1)^3         4
{3, 4}                             (−1)^3         6
{1, 2}, {3, 4}                     (−1)^2         2
{1, 3}, {3, 4}                     (−1)^2         2
{2, 3}, {3, 4}                     (−1)^2         2
{1, 2}, {1, 3}, {2, 3}             (−1)^2         2
{1, 2}, {1, 3}, {2, 3}, {3, 4}     (−1)^1         1

Table 9.1: Information to compute the antipode of Γ.

Figure 9.2: Antipode of an element in A_4.

Now, note that once we have computed S(Γ), we can easily find the antipode of the simplicial complex $\Gamma^{(1)}$ by just taking the 1-skeleton of each of the terms in the sum for the antipode. So we immediately get that
\[ S(\Gamma^{(1)}) = 12\,\overline{K_4} - 18(K_2 \uplus \overline{K_2}) + 2(K_2 \uplus K_2) + 4(P_3 \uplus K_1) + 2(K_3 \uplus K_1) - \Gamma^{(1)}, \]
where $K_n$ is the complete graph on n vertices, $\overline{K_n}$ is the complement of the complete graph on n vertices, and $P_n$ is the path on n vertices.

More generally, let $A^{(k)}$ be the K-linear span of isomorphism classes of simplicial complexes of dimension at most k. That is, complexes $\Gamma \in A$ such that $\Gamma^{(k)} = \Gamma$. For each $k \ge 0$, we define the map
\[ \phi_k : A \to A^{(k)}, \qquad \Gamma \mapsto \Gamma^{(k)} \]
which takes the k-skeleton of a simplicial complex. We extend this map linearly to all of A.

Proposition 9.2.3. For any nonnegative integer k, $A^{(k)}$ is a Hopf subalgebra of A and the map $\phi_k : A \to A^{(k)}$ is a Hopf algebra homomorphism.

Proof. Let Γ and Θ be simplicial complexes. Since $\dim(\Gamma \uplus \Theta) = \max\{\dim \Gamma, \dim \Theta\}$ and $\dim \Gamma_T \le \dim \Gamma$ for any $T \subseteq V(\Gamma)$ it follows that $A^{(k)}$ is a Hopf subalgebra. Observe that
\[ (\Gamma \uplus \Theta)^{(k)} = \{X : X \in \Gamma \uplus \Theta,\ |X| \le k+1\} = \{X \in \Gamma : |X| \le k+1\} \cup \{X \in \Theta : |X| \le k+1\}. \]
Therefore $(\Gamma \uplus \Theta)^{(k)} = \Gamma^{(k)} \uplus \Theta^{(k)}$ and $\phi_k$ is an algebra homomorphism. Next, since
\[ (\Gamma_T)^{(k)} = \{X \in \Gamma : X \subseteq T,\ |X| \le k+1\} = (\Gamma^{(k)})_T \]
we have
\[ \sum_{T \subseteq V(\Gamma)} (\Gamma_T)^{(k)} \otimes (\Gamma_{V(\Gamma)-T})^{(k)} = \sum_{T \subseteq V(\Gamma)} (\Gamma^{(k)})_T \otimes (\Gamma^{(k)})_{V(\Gamma)-T} \]
and so $\phi_k$ is also a coalgebra homomorphism. We conclude that $\phi_k$ is a Hopf algebra homomorphism.

Using Proposition 9.2.3 along with the fact that for any Hopf algebra homomorphism $\beta : H_1 \to H_2$ one has $\beta(S_{H_1}(h)) = S_{H_2}(\beta(h))$ for all $h \in H_1$ (see [GR, Proposition 1.46]) we can conclude that $S \circ \phi_k = \phi_k \circ S$. This means that once we know S(Γ) for some simplicial complex Γ we can find $S(\Gamma^{(k)})$ for any k.
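As a cross-check of Table 9.1 above (an added illustration, not part of the original text; contract and acyclic_orientations are ad hoc helper names), the quantities $a(\Gamma^{(1)}/F)$ can be reproduced by brute force: contract the edges of a flat F in the 1-skeleton and count the acyclic orientations of the resulting graph.

from itertools import product

def contract(vertices, edges, F):
    """Merge the endpoints of each edge in F; return the contracted simple graph."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for (u, v) in F:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    verts = {find(v) for v in vertices}
    edgs = {frozenset((find(u), find(v))) for (u, v) in edges if find(u) != find(v)}
    return verts, [tuple(e) for e in edgs]

def acyclic_orientations(vertices, edges):
    """Count the orientations of the edges that contain no directed cycle."""
    count = 0
    for dirs in product([0, 1], repeat=len(edges)):
        arcs = [(e[d], e[1 - d]) for e, d in zip(edges, dirs)]
        verts, remaining = set(vertices), list(arcs)
        # an orientation is acyclic iff repeatedly deleting sources empties the graph
        while True:
            sources = {v for v in verts if all(h != v for (_, h) in remaining)}
            if not sources:
                break
            verts -= sources
            remaining = [(t, h) for (t, h) in remaining if t not in sources]
        if not verts:
            count += 1
    return count

V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (2, 3), (3, 4)]   # the 1-skeleton of our running example
for F in [[], [(1, 2)], [(3, 4)], [(1, 2), (3, 4)], [(1, 2), (1, 3), (2, 3)]]:
    print(F, acyclic_orientations(*contract(V, E, F)))

Running this reproduces the corresponding rows of Table 9.1: 12 for F = ∅, 4 for {1, 2}, 6 for {3, 4}, 2 for {1, 2}, {3, 4}, and 2 for the triangle flat.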
Chapter 10

The chromatic symmetric function

Now that we have endowed A with a Hopf algebra structure, we will proceed to define a family of characters on A. This will give rise to a family of combinatorial Hopf algebras. The morphism to the Hopf algebra of symmetric functions will map each simplicial complex to a generalization of Stanley's chromatic symmetric function [Sta95].

For each s > 0, define the map $\zeta_s : A \to K$ by
\[ \zeta_s(\Gamma) = \begin{cases} 1 & \dim \Gamma < s, \\ 0 & \dim \Gamma \ge s, \end{cases} \]
and extend linearly to A. Each map $\zeta_s$ is multiplicative, i.e. $\zeta_s(\Gamma \uplus \Theta) = \zeta_s(\Gamma)\zeta_s(\Theta)$. Thus, for each s the pair $(A, \zeta_s)$ is a combinatorial Hopf algebra. Moreover, since A is cocommutative, equation (8.2) implies that $\Psi_\zeta$ is actually a symmetric function.

10.1 Coloring in simplicial complexes

Let $\mathbb{P}$ denote the set of positive integers and let G be a graph with vertex set V. A coloring of G is a map $f : V \to \mathbb{P}$. We refer to f(u) as the color of u. A proper coloring of G is a coloring such that $f(u) \neq f(v)$ whenever uv is an edge of G. Given a simplicial complex Γ and $s \in \mathbb{N}$, define an s-simplicial coloring to be a coloring of V(Γ) such that there is no monochromatic face of dimension s. (In [DMN] the authors use the term (P, s)-coloring for an s-simplicial coloring which uses some palette of colors $P \subseteq \mathbb{P}$. To avoid confusion with terminology for graphs, we have adopted the term s-simplicial coloring.) Notice that any 1-simplicial coloring of Γ is simply a proper coloring of its 1-skeleton $\Gamma^{(1)}$. In Figure 10.1 we use our earlier example and depict two colorings of Γ using the colors $\{x, y, z\} \subseteq \mathbb{P}$.

Figure 10.1: A 2-coloring of Γ in (a) and a 3-coloring of Γ in (b).

Given a graph G, the number of proper colorings $f : V(G) \to \{1, 2, \dots, t\}$ is the well-known chromatic polynomial, χ(G; t). For a simplicial complex Γ the number of s-simplicial colorings $f : V(\Gamma) \to \{1, 2, \dots, t\}$ is called the s-chromatic polynomial, $\chi_s(\Gamma; t)$, defined in [Nor12, MN16]. Although it is not obvious that $\chi_s(\Gamma; t)$ is a polynomial, we will see that this is the case once we realize it as an evaluation of a certain symmetric function.

Stanley provided a generalization (see [Sta95]) of the chromatic polynomial of a graph G by defining
\[ \psi(G; x_1, x_2, \dots) = \sum_{f} \prod_{i \ge 1} x_i^{|f^{-1}(i)|} \]
where the sum is over proper colorings $f : V \to \mathbb{P}$. This formal power series is known as Stanley's chromatic symmetric function. For a simplicial complex Γ we define the s-chromatic symmetric function as
\[ \psi_s(\Gamma; x_1, x_2, \dots) = \sum_{f} \prod_{i \ge 1} x_i^{|f^{-1}(i)|} \]
where now the sum is over s-simplicial colorings $f : V \to \mathbb{P}$ and V = V(Γ). Notice that when s = 1 we obtain Stanley's chromatic symmetric function.

Given $f(x_1, x_2, \dots) \in \mathrm{QSym}$ its principal specialization at t is defined by $\mathrm{ps}^1(f)(t) = f(1, \dots, 1, 0, 0, \dots)$ where $t \in \mathbb{P}$ and only the first t variables are specialized to 1. It turns out that $\mathrm{ps}^1(f)(t)$ gives rise to a unique polynomial in t (see [GR, Proposition 7.7]). The s-chromatic polynomial $\chi_s(\Gamma; t)$ is the polynomial determined by $\mathrm{ps}^1(\psi_s(\Gamma))(t)$.

We now show how the s-chromatic symmetric function arises from the CHA $(A, \zeta_s)$.

Theorem 10.1.1. Fix s and consider the combinatorial Hopf algebra $(A, \zeta_s)$. If Γ is a simplicial complex, then $\Psi_{\zeta_s}(\Gamma) = \psi_s(\Gamma; x_1, x_2, \dots)$.

Proof. Consider the formula in equation (8.2). Given a simplicial complex $\Gamma \in A_n$ and a composition $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_\ell) \vDash n$, we get that the coefficient of $M_\alpha$ is the number of ordered set partitions $V_1 \uplus V_2 \uplus \cdots \uplus V_\ell$ of V(Γ) such that $|V_i| = \alpha_i$ and $\dim \Gamma_{V_i} < s$ for each i.
In an s-simplicial coloring, every element of a subset T of V(Γ) can be assigned the same color if and only if $\dim \Gamma_T < s$. Thus the coefficient of $M_\alpha$ counts s-simplicial colorings using only colors $\{j_1 < j_2 < \cdots < j_\ell\} \subseteq \mathbb{P}$ where $|f^{-1}(j_i)| = \alpha_i$ for each i. The result follows.

We discuss now the expansion of the symmetric function $\Psi_{\zeta_s}(\Gamma; x_1, x_2, \dots)$ in terms of the power sum basis. The power sum symmetric function of degree n, denoted by $p_n$, in the variables $x_1, x_2, \dots$ is given by $p_n = \sum_{i \ge 1} x_i^n$, and for an integer partition $\lambda = (\lambda_1, \dots, \lambda_\ell)$ define $p_\lambda := p_{\lambda_1} \cdots p_{\lambda_\ell}$.

Take a simplicial complex Γ and let V = V(Γ). For any s > 0, we denote the collection of s-simplices of Γ by $F_s(\Gamma)$. Given any $A \subseteq F_s(\Gamma)$ we let $\Gamma_{V,A}$ be the simplicial complex on the vertex set V generated by A. That is, the faces of $\Gamma_{V,A}$ of dimension greater than 0 are the subsets $X \subseteq V$ such that $X \subseteq Y$ for some $Y \in A$. For $A \subseteq F_s(\Gamma)$ we define an integer partition λ(A) whose length is the number of connected components of $(\Gamma_{V,A})^{(1)}$ and whose parts are given by the number of vertices in each connected component. The s-chromatic symmetric function has the following expansion in the power sum basis
\[ \Psi_{\zeta_s}(\Gamma) = \sum_{A \subseteq F_s(\Gamma)} (-1)^{|A|} p_{\lambda(A)} \qquad (10.1) \]
which can be proven analogously to [Sta95, Theorem 2.5].

Let Γ denote the (n−1)-simplex, i.e., the simplicial complex on [n] whose only facet is the set [n] itself. We now look at the monomial and Schur expansions of $\Psi_{\zeta_s}(\Gamma)$. We have
\[ \Psi_{\zeta_s}(\Gamma) = \sum_{\substack{\mu \vdash n \\ \mu_1 \le s}} \binom{n}{\mu_1, \dots, \mu_\ell} m_\mu \]
where, for every $s \ge 1$, the sum is over partitions $\mu = (\mu_1, \dots, \mu_\ell)$ of n such that $\mu_1 \le s$. In the above case when s = n we get
\[ \Psi_{\zeta_n}(\Gamma) = \sum_{\lambda \vdash n} f^\lambda s_\lambda \]
where $f^\lambda$ is the number of standard Young tableaux of shape λ. This follows since
\[ \sum_{\mu \vdash n} \binom{n}{\mu_1, \dots, \mu_\ell} m_\mu = (m_{(1)})^n = (s_{(1)})^n = \sum_{\lambda \vdash n} f^\lambda s_\lambda. \]
Moreover, since $m_{(n)} = p_n = \sum_{i=0}^{n-1} (-1)^i s_{(n-i,1^i)}$ we conclude that the symmetric function
\[ \Psi_{\zeta_{n-1}}(\Gamma) = \sum_{\lambda \vdash n} f^\lambda s_\lambda - m_{(n)} \]
is Schur positive as well. Unfortunately, the functions $\Psi_{\zeta_s}(\Gamma)$ are not always Schur positive. An instance of this is when n = 4 and s = 2. In this case,
\[ \Psi_{\zeta_s}(\Gamma) = 6m_{(2,2)} + 12m_{(2,1,1)} + 24m_{(1,1,1,1)} = 6s_{(2,2)} + 6s_{(2,1,1)} - 6s_{(1,1,1,1)}. \]
It would be interesting to determine other families of simplicial complexes that give Schur positivity of the functions $\Psi_{\zeta_s}$ for different values of s.

10.2 Acyclic orientations and evaluations

In this section, we use our antipode formula along with the characters defined above to interpret certain evaluations of the s-chromatic polynomial. Given any character $\zeta : A \to K$, the following identity holds (see [ABS06, Section 1])
\[ \zeta^{-1} = \zeta \circ S \]
where S is the antipode of A and $\zeta^{-1}$ is the inverse of ζ under convolution. In other words, $\zeta^{-1}\zeta = u \circ \epsilon$ where $\zeta^{-1}\zeta = m \circ (\zeta^{-1} \otimes \zeta) \circ \Delta$.

Now, since $\mathrm{ps}^1(\psi_s(\Gamma))(t) = \chi_s(\Gamma; t)$, using [GR, Proposition 7.7 (iii)] yields
\[ \zeta_s \circ S(\Gamma) = \zeta_s^{-1}(\Gamma) = \mathrm{ps}^1(\psi_s(\Gamma))(-1) = \chi_s(\Gamma; -1). \qquad (10.2) \]
This allows us to prove the following theorem.

Theorem 10.2.1. Let $\Gamma \in A_n$ be a simplicial complex and let s be a positive integer. Then
\[ \chi_s(\Gamma; -1) = \sum_{\substack{F \in F(\Gamma^{(1)}) \\ \dim \Gamma_{V,F} < s}} (-1)^{c(F)} a(\Gamma^{(1)}/F). \]

If s > dim Γ, then $\chi_s(\Gamma; t) = t^n$ since there is no restriction on coloring. So, $\chi_s(\Gamma; -1) = (-1)^n$. Meanwhile, the sum in Theorem 10.2.1 runs over all $F \in F(\Gamma^{(1)})$ because the condition $\dim \Gamma_{V,F} < s$ is always satisfied.
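As a worked illustration of Theorem 10.2.1 (added here; the values of $a(\Gamma^{(1)}/F)$ are those of Table 9.1), take the running example Γ generated by the facets {1, 2, 3} and {3, 4} with s = 2. The only 2-dimensional face of Γ is {1, 2, 3}, so a coloring is 2-simplicial exactly when that triangle is not monochromatic, giving $\chi_2(\Gamma; t) = t^4 - t^2$ and hence $\chi_2(\Gamma; -1) = 0$. On the other side, the flats F with $\dim \Gamma_{V,F} < 2$ are exactly those not containing all three edges of the triangle, and Table 9.1 gives
\[ \sum_{\substack{F \in F(\Gamma^{(1)}) \\ \dim \Gamma_{V,F} < 2}} (-1)^{c(F)} a(\Gamma^{(1)}/F) = 12 - 4 - 4 - 4 - 6 + 2 + 2 + 2 = 0, \]
matching the direct evaluation.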
10.3 The f-vector

Given a simplicial complex Γ, the f-vector of Γ is defined to be $(f_0, f_1, \dots)$ where $f_s$ is the number of s-simplices in Γ. For example, if Γ is the simplicial complex generated by the facets {1, 2, 3} and {3, 4}, then Γ has f-vector $(4, 4, 1, 0, 0, \dots)$. In this section we show how to obtain the f-vector of a simplicial complex from the symmetric functions $\{\Psi_{\zeta_s}\}_{s>0}$.

Let $[p_\lambda]\Psi_{\zeta_s}(\Gamma)$ denote the coefficient of $p_\lambda$ in the power sum expansion of $\Psi_{\zeta_s}(\Gamma)$. If Γ is a simplicial complex on n vertices and $A \subseteq F_s(\Gamma)$, then $\lambda(A) = (s+1, 1^{n-s-1})$ if and only if A consists of a single s-simplex. By considering equation (10.1) we obtain the following proposition.

Proposition 10.3.1. If Γ is a simplicial complex with |V(Γ)| = n and s > 0, then
\[ f_s = -[p_{(s+1,1^{n-s-1})}]\Psi_{\zeta_s}(\Gamma). \]

Given a simplicial complex Γ, denote its sth homology group by $H_s(\Gamma)$ for each $s \ge 0$. The sth Betti number is denoted $\beta_s(\Gamma)$ and defined to be the rank of $H_s(\Gamma)$. One useful fact about homology groups is that if dim Γ = k, then $H_s(\Gamma) = 0$ for s > k. In particular, this means $\beta_s(\Gamma) = 0$ for s > k. Hence, Proposition 10.3.1 allows us to recover the Euler characteristic $\chi_\Gamma$ of Γ, since $\chi_\Gamma = \sum_{s \ge 0} (-1)^s f_s = \sum_{s \ge 0} (-1)^s \beta_s$ where $\beta_s = \beta_s(\Gamma)$.

Since we can determine the f-vector from the s-chromatic symmetric functions, it is natural to wonder if we can also determine the Betti numbers. If Γ is a graph, i.e. if dim(Γ) = 1, then $\beta_0$ equals the number of its connected components. This number can also be recovered by means of the chromatic polynomial of Γ. Thus, in this case we recover the sequence of Betti numbers $(\beta_0, \beta_1, 0, 0, \dots)$. However, for higher dimensional simplicial complexes this is not always the case, as we see in the next example.

Example 10.3.2. We now consider two simplicial complexes Γ and Θ such that $\Psi_{\zeta_s}(\Gamma) = \Psi_{\zeta_s}(\Theta)$ for all s > 0, but Γ and Θ have different Betti numbers. We set $\Gamma = X \uplus Y$ and $\Theta = Z \uplus W$ where X, Y, Z, and W are given in Table 10.1. The simplicial complexes Y and Z are shown in Figure 10.2.

Figure 10.2: Z (left) has 2-simplices 123, 124, 134, 234 and Y (right) has 2-simplices 123, 124, 134, 235.

     Vertices         Facets
X    1, 2, 3, 4       12, 13, 14, 23, 24, 34
Y    1, 2, 3, 4, 5    123, 124, 134, 235, 45
Z    1, 2, 3, 4       123, 124, 134, 234
W    1, 2, 3, 4, 5    12, 13, 14, 23, 24, 25, 34, 35, 45

Table 10.1: Vertices and facets of the complexes X, Y, Z, and W.

Since $\Gamma^{(1)} = \Theta^{(1)}$, it follows that $\Psi_{\zeta_1}(\Gamma) = \Psi_{\zeta_1}(\Theta)$. Also,
\[ \Psi_{\zeta_2}(\Gamma) = (p_{(1^4)})(p_{(1^5)} - 4p_{(3,1^2)} + 3p_{(4,1)}) = p_{(1^9)} - 4p_{(3,1^6)} + 3p_{(4,1^5)} \]
\[ \Psi_{\zeta_2}(\Theta) = (p_{(1^4)} - 4p_{(3,1)} + 3p_{(4)})(p_{(1^5)}) = p_{(1^9)} - 4p_{(3,1^6)} + 3p_{(4,1^5)} \]
and Γ and Θ have the same 2-chromatic symmetric function. Since Γ and Θ are both 2-dimensional simplicial complexes, we conclude $\Psi_{\zeta_s}(\Gamma) = \Psi_{\zeta_s}(\Theta)$ for all s > 0. However, the Betti numbers of Γ and Θ are not the same since $\beta_2(\Theta) = 1$ while $\beta_2(\Gamma) = 0$.

10.4 Hypertrees

In this final section of this chapter we turn our attention to the chromatic symmetric function of hypertrees. Results in this section can also be found in the article [Mac17]. A hypergraph H is a pair H = (V, E) where E is a collection of nonempty subsets of V. We call the elements of V vertices and the elements of E hyperedges. If |e| = s for each $e \in E$, then we call H an s-uniform hypergraph. A map $f : V \to \mathbb{P}$ is a proper coloring of H if it produces no monochromatic hyperedge. We observe that coloring in simplicial complexes can be thought of as coloring in uniform hypergraphs and conversely.
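Before making this correspondence precise, here is a small brute-force sketch (added for illustration, not from the original; faces and count_colorings are ad hoc helper names): the s-simplicial colorings of a complex are exactly the colorings with no monochromatic s-face, i.e. the proper colorings of the (s + 1)-uniform hypergraph whose hyperedges are the s-faces.

from itertools import combinations, product

def faces(facets, dim):
    """All faces of the given dimension of the complex generated by `facets`."""
    out = set()
    for F in facets:
        for X in combinations(sorted(F), dim + 1):
            out.add(X)
    return out

def count_colorings(vertices, facets, s, t):
    """Number of s-simplicial colorings with t colors, computed as proper
    colorings of the (s+1)-uniform hypergraph of s-faces."""
    hyperedges = faces(facets, s)
    total = 0
    for coloring in product(range(t), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        if all(len({color[v] for v in e}) > 1 for e in hyperedges):
            total += 1
    return total

V = [1, 2, 3, 4]
facets = [{1, 2, 3}, {3, 4}]
print([count_colorings(V, facets, 1, t) for t in range(5)])  # proper colorings of the 1-skeleton
print([count_colorings(V, facets, 2, t) for t in range(5)])  # expected t**4 - t**2

For the running example with facets {1, 2, 3} and {3, 4}, the s = 1 values agree with the chromatic polynomial of the 1-skeleton and the s = 2 values with $t^4 - t^2$.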
Given a simplicial complex Γ and a nonnegative integer s we get an (s + 1)-uniform hypergraph H(s)(Γ) = (V (Γ), E(s)(Γ)) with edge set defined by E(s)(Γ) := {A ∈ Γ : |A| = s + 1}. Lemma 10.4.1. A map f is an s-simplicial coloring of a simplicial complex Γ if and only if f is a proper coloring of H(s)(Γ). Next we show that coloring in a uniform hypergraph can be thought of as an instance of coloring in a simplicial complex. Given any H = (V, E) we get a simplicial complex Γ(H) on the vertex set V defined by Γ(H) = {A : A ⊆ e ∈ E} ∪ {{v} : v ∈ V }. Lemma 10.4.2. A map f is a proper coloring of an (s + 1)-uniform hypergraph H if and only if f is an s-simplicial coloring of Γ(H). It is an open problem, first considered in [Sta95], to determine if the chromatic symmet- ric function distinguishes trees up to isomorphism. For some partial results on this problem see [MMW08, APZ14]. Russel has verified that the chromatic symmetric function distin- guishes trees on 25 or fewer vertices up to isomorphism [Rus12]. We will investigate the analogous question for uniform hypertrees. Throughout the remainder of this section section let H = (V, E) be a hypergraph on n vertices. Let E be a set of subsets of V and assume that |e| > 1 for all e ∈ E. A walk of 116 length (cid:96) > 0 between v1 ∈ V and v(cid:96) ∈ V is a sequence (v1, e1, v2, e2, . . . , v(cid:96), e(cid:96), v(cid:96)+1) such that ei ∈ E with vi, vi+1 ∈ ei for all i. If all the vertices and hyperedges are distinct, then the walk is called a path. In the case all vertices and hyperedges are distinct with the exception that v1 = v(cid:96)+1 we call the walk a cycle. The hypergraph H is connected if for any v, v(cid:48) ∈ V there exists a path between v and v(cid:48). A hypertree is a connected hypergraph with no cycles. We call H a linear hypergraph if |e1 ∩ e2| ≤ 1 for all e1, e1 ∈ E such that e1 (cid:54)= e2. Notice a hypertree is necessarily linear, otherwise for distinct hyperedges e1, e2 ∈ E and distinct vertices v1, v2 ∈ e1 ∩ e2 there is a cycle a length 2 (v1, e1, v2, e2, v1). For H let (ai)n i=2 be the sequence defined by ai := |{e ∈ E : |e| = i}| which records the number of hyperedges of each size in the hypergraph. The hyperedge magnitude of H is defined to be the sum n(cid:88) i=2 (i − 1)ai. In [GK05] is it shown that a connected hypergraph on n vertices is a hypertree if and only if the hyperedge magnitude is n − 1. We give the following lemma which extends this result 117 and is a generalization of the corresponding well known fact for trees. Lemma 10.4.3. Let H = (V, E) be a hypergraph on n vertices, and consider the following conditions: (i) H is connected. (ii) H is acyclic. (iii) H has hyperedge magnitude equal n − 1. Any two of the above conditions together imply the third. Hence, to show that a hypergraph H is a hypertree is suffices to prove that any two of the above conditions hold for H. Proof. From [GK05] we already know that (i) and (ii) together imply (iii), and also that (i) and (iii) together imply (ii). It remains to show that (ii) and (iii) together imply (i). Assume that H = (V, E) is a acylic hypergraph on n vertices with hyperedge magintude equal n − 1. We order the hyperedges E = {e1, e2, . . . , em} and let Hi = (V, Ei) for where Ei = {e1, e2, . . . , ei} for 1 ≤ i ≤ m. Also let H0 = (V,∅) Notice Hi will be an acyclic hypergraph for 1 ≤ i ≤ m. Since each hypergraph is acyclic it follows that if Hi has c connected components, then Hi+1 has c − |ei+1| + 1 connected components. 
Now H0 has n connected components and so it follows that H = Hm has c connected components where m(cid:88) i=1 c = n − (|ei| − 1) = n − (n − 1) = 1. Here we have used the assumption that H has edge magintude n − 1. Therefore we have shown H is connected and completed the proof. If G is a graph on n vertices, then G is a tree if and only if χG(t) = t(t− 1)n−1. There is a similar result for s-uniform hypertrees when we restrict to linear hypergraphs. It is proven 118 in [B(cid:32)L07, Theorem 5] that if H is a linear hypergraph on n vertices, then H is an s-uniform hypertree with m hyperedges if and only if χH (t) = t(ts−1−1)m. Here we observe some sim- ilar behavior between trees and uniform hypertrees when we restrict to linear hypergraphs. In what follows we will show some of the results on the chromatic symmetric which can be proven from trees can also be proven for uniform hypertrees. However, we will also exhibit two 3-uniform hypertrees which are not isomorphic yet have the same chromatic symmetric function. Given any set partition π = B1/B2/··· /B(cid:96) of [n] we let type π be the integer partition of n given by the sizes of the blocks in π. Given A ⊆ E we let λ(A) = type π(A). The chromatic symmetric function XH of a hypergraph H has the expansion (cid:88) A⊆E XH := (−1)|A|pλ(A) which we will take as a definition. Of course the chromatic symmetric function can also be defined in the monomial basis as a sum of proper coloring, but we will only need the chromatic symmetric of a hypertree in the powersum basis. Let cλ(H) denote the coefficient of pλ is the powersum expansion of XH so that (cid:88) XH = cλ(H)pλ, λ and let ci(H) = c(i,1,1,...,1)(H). Notice that XH is homogeneous of degree |V | and when H is s-uniform −cs(H) = |E|. Thus, we can always recover the number of vertices from XH , and we can recover the number of hyperedges in the case of uniform hypergraphs. Now assume that H is s-uniform and acyclic. For every A ⊆ E the hypergraph (V, A) 119 has n − (s − 1)|A| connected components. For any integer partition λ we let len λ denote the length of the partition. Thus for s-uniform acyclic hypergraphs len λ(A) = n − (s − 1)|A| for any A ⊆ E. It then follows cλ = (−1) n−k s−1 |{A ⊆ E : λ(A) = λ}| for λ (cid:96) n with len λ = k. This implies the relation s−1 (cid:88) n−k λ(cid:96)n len λ=k (−1) cλ(H) = (cid:18) m (cid:19) n−k s−1 where m = |E|. For a vertex v ∈ V , the degree of v in H is deg v := |{e ∈ E : v ∈ e}|. The degree sequence of H is the collection of the degrees of all vertices of H arranged in weakly decreasing order. Our next result shows that the chromatic symmetric function of a uniform hypertree determines its degree sequence. In [MMW08, Corollary 5] it was shown that the chromatic symmetric function determines the degree sequence of a tree. Proposition 10.4.4. If H is a uniform hypertree, then the degree sequence of H can be determined from XH . Proof. Let H = (V, E) be an s-uniform hypertree on n vertices. Thus H must have m = n−1 s−1 hyperedges. Let XH = (cid:80) cλpλ and let Di denote the number of vertices of a degree i in H. It suffices to show that we can determine the numbers Di for 1 ≤ i ≤ m. Since H is a 120 hypertree and hence connected, we must have D0 = 0. For any λ (cid:96) n let 1(λ) denote the number of parts of size 1 in λ. Recall that if A ⊆ E, then len λ(A) = n − (s − 1)|A|. Any 1 in the partition λ(A) must come from a vertex of degree at most m − |A|. 
Now for any integer 0 ≤ i ≤ m let us consider partitions λ with len λ = ki where ki = n − (s − 1)(m − i). Exactly the vertices of H of degree at most i will contribute to the sum (−1)m−i (cid:88) λ(cid:96)n cλ · 1(λ). i−j (cid:19) (cid:18)m − j i − j Dj. Note that a vertex of degree j will contribute to the sum exactly (cid:0)m−j len λ=ki (cid:1) times. It follows that (−1)m−i (cid:88) λ(cid:96)n len λ=ki i(cid:88) j=1 cλ · 1(λ) = This gives a triangular system that we can solve for each Di. Therefore XH determines the degree sequence of a hypertree H. We conclude this section by showing that the chromatic symmetric function is not a complete invariant among uniform hypertrees. We give two pairs on 3-uniform hypertrees on 21 vertices which are not isomorphic, but have the same chromatic symmetric function. These hypertrees were found by using nauty [MP14] to enumerate all 3-uniform hypertrees up to isomorphism and then using SageMath [Dev16] to compute the chromatic symmetric functions. The computation indicates that the examples are minimal. That is, there does not exist a pair of hypertrees on fewer than 21 vertices which are not isomorphic but have the same chromatic symmetric function. Let H1 = (V, E1), H2 = (V, E2), H3 = (V, E3), 121 and H4 = (V, E4) where V = {0, 1, . . . , 20} and E1 = {{0, 1, 2},{0, 3, 4},{1, 5, 6},{0, 7, 8},{2, 9, 10},{1, 11, 12},{9, 13, 14}, {16, 3, 15},{17, 18, 7},{19, 20, 13}} E2 = {{0, 1, 2},{0, 3, 4},{1, 5, 6},{0, 7, 8},{2, 9, 10},{1, 11, 12},{9, 13, 14}, {16, 3, 15},{17, 18, 5},{19, 20, 15}} E3 = {{0, 1, 2},{0, 3, 4},{1, 5, 6},{0, 7, 8},{5, 9, 10},{5, 11, 12},{0, 13, 14}, {16, 2, 15},{1, 17, 18},{19, 20, 15}} E4 = {{0, 1, 2},{0, 3, 4},{1, 5, 6},{0, 7, 8},{2, 9, 10},{1, 11, 12},{0, 13, 14}, {16, 9, 15},{17, 18, 9},{3, 19, 20}}. One can check that H1, H2, H3, and H4 are all 3-uniform hypertrees on 21 vertices and that XH1 = XH2 and XH3 = XH4 However, H1 is not isomorphic to H2 and H3 is not isomorphic to H4. The hypertrees H1 and H2 are shown in Figure 10.3. 122 Figure 10.3: The hypertree H1 above and the hypertree H2 below which are not isomorhpic but have the same chromatic symmetric function. 123 1615340121091413192011125687171820191516340121091314781112651718 BIBLIOGRAPHY 124 BIBLIOGRAPHY [AA17] Marcelo Aguiar and Federico Ardila. Hopf monoid of generalized permutahedra, 2017. arXiv:1709.07504 [math.CO]. [ABS06] M. Aguiar, N. Bergeron, and F. Sottile. Combinatorial Hopf algebras and generalized Dehn-Sommerville relations. Compositio Mathematica, 142:1–30, 2006. [AHBC+16] Nima Arkani-Hamed, Jacob Bourjaily, Freddy Cachazo, Alexander Goncharov, Alexander Postnikov, and Jaroslav Trnka. Grassmannian geometry of scattering amplitudes. Cambridge University Press, Cambridge, 2016. [APZ14] Jos´e Aliste-Prieto and Jos´e Zamora. Proper caterpillars are distinguished by their chromatic symmetric function. Discrete Math., 315:158–164, 2014. [BB16] Nantel Bergeron and Carolina Benedetti. Cancelation free formula for the an- tipode of linearized Hopf monoid, 2016. arXiv:1611.01657 [math.CO]. [BBM17] Nantel Bergeron, Carolina Benedetti, and John Machacek. Hypergraphic polytopes: combinatorial properties and antipode, 2017. arXiv:1712.08848 [math.CO]. [BD82] A. A. Belavin and V. G. Drinfeld. Solutions of the classical Yang-Baxter equa- tion for simple Lie algebras. Funktsional. Anal. i Prilozhen., 16(3):1–29, 96, 1982. [BFZ05] Arkady Berenstein, Sergey Fomin, and Andrei Zelevinsky. Cluster algebras. III. Upper bounds and double Bruhat cells. Duke Math. 
J., 126(1):1–52, 2005. [BHM16] Carolina Benedetti, Joshua Hallam, and John Machacek. Combinatorial Hopf Algebras of Simplicial Complexes. SIAM J. Discrete Math., 30(3):1737–1757, 2016. [B(cid:32)L07] [BM06] Mieczys(cid:32)law Borowiecki and Ewa (cid:32)Lazuka. On chromaticity of hypergraphs. Dis- crete Math., 307(11-12):1418–1429, 2007. Aslak Bakke Buan and Robert Marsh. Cluster-tilting theory. In Trends in representation theory of algebras and related topics, volume 406 of Contemp. Math., pages 1–30. Amer. Math. Soc., Providence, RI, 2006. 125 [BMR+] Eric Bucher, John Machacek, Evan Runberg, Abe Yeck, and Ethan Zewde. Building maximal green sequences via component preserving mutations. In preparation. [BMS18] Eric Bucher, John Machacek, and Michael Shapiro. Upper cluster algebras and choice of ground ring, 2018. arXiv:1802.04835 [math.AC]. [BS17] [CLS15] [CP94] [Dev16] [DMN] [FG06] Carolina Benedetti and Bruce E. Sagan. Antipodes and involutions. J. Combin. Theory Ser. A, 148:275–315, 2017. Ilke Canakci, Kyungyong Lee, and Ralf Schiffler. On cluster algebras from unpunctured surfaces with one marked point. Proc. Amer. Math. Soc. Ser. B, 2:35–49, 2015. Vyjayanthi Chari and Andrew Pressley. A Guide to Quantum Groups. Cam- bridge University Press, 1994. The Sage Developers. SageMath, the Sage Mathematics Software System (Ver- sion 7.3), 2016. http://www.sagemath.org. N. Dobrinskaya, J. M. Møller, and D. Notbohm. Vertex colorings of simplicial complexes. arXiv:1007.0710, version 1. Vladimir Fock and Alexander Goncharov. Moduli spaces of local systems and higher Teichm¨uller theory. Publ. Math. Inst. Hautes ´Etudes Sci., (103):1–211, 2006. [FGM14] Sebasti´an Franco, Daniele Galloni, and Alberto Mariotti. The geometry of on-shell diagrams. Journal of High Energy Physics, 2014(8):1–72, 2014. [FGPW15] Sebastian Franco, Daniele Galloni, Brenda Penante, and Congkao Wen. Non- planar on-shell diagrams, 2015. arXiv:1502.02034v2. [Fom01] Sergey Fomin. Loop-erased walks and total positivity. Transactions of the American Mathematical Society, 353:3563 – 3583, 2001. [FWZ] [FZ02] [GGS+14] Sergey Fomin, Lauren Williams, and Andrei Zelevinsky. Introduction to cluster algebras. chapters 1-3. arXiv:1608.05735 [math.CO]. Sergey Fomin and Andrei Zelevinsky. Cluster algebras. I. Foundations. J. Amer. Math. Soc., 15(2):497–529, 2002. J. K. Golden, A. B. Goncharov, M. Spradlin, C. Vergu, and A. Volovich. Motivic amplitudes and cluster coordinates. Journal of High Energy Physics, 2014(1):91, Jan 2014. 126 [GK05] [GL09] [GL11] Ira M. Gessel and Louis H. Kalikow. Hypergraphs and a functional equation of Bouwkamp and de Bruijn. J. Combin. Theory Ser. A, 110(2):275–289, 2005. K. R. Goodearl and E. S. Letzter. Semiclassical limits of quantum affine spaces. Proc. Edinb. Math. Soc. (2), 52(2):387–407, 2009. K. R. Goodearl and S. Launois. The Dixmier-Moeglin equivalence and a Gel’fand-Kirillov problem for Poisson polynomial algebras. Bull. Soc. Math. France, 139(1):1–39, 2011. [GR] D. Grinberg and V. Reiner. Hopf algebras in combinatorics. arXiv:1409.8356, version 3. [GSV08] Michael Gekhtman, Michael Shapiro, and Alek Vainshtein. Poisson geometry of directed networks in an annulus. Journal of the European Mathematical Society, 14:541–570, 2008. [GSV09] Michael Gekhtman, Michael Shapiro, and Alek Vainshtein. Poisson geometry of directed networks in a disk. Selecta Math. (N.S.), 15(1):61–103, 2009. [GSV10] [GSV12] [GSV14] [GSV17] [GV85] [HM12] [JR79] Michael Gekhtman, Michael Shapiro, and Alek Vainshtein. 
Cluster algebras and Poisson geometry, volume 167 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2010. M. Gekhtman, M. Shapiro, and A. Vainshtein. Cluster structures on sim- ple complex Lie groups and Belavin-Drinfeld classification. Mosc. Math. J., 12(2):293–312, 460, 2012. Michael Gekhtman, Michael Shapiro, and Alek Vainshtein. Cremmer-Gervais cluster structure on SLn. Proc. Natl. Acad. Sci. USA, 111(27):9688–9695, 2014. M. Gekhtman, M. Shapiro, and A. Vainshtein. Exotic cluster structures on SLn: the Cremmer-Gervais case. Mem. Amer. Math. Soc., 246(1165):v+94, 2017. Ira Gessel and G´erard Viennot. Binomial determinants, paths, and hook length formulae. Advances in Mathematics, 53:300–321, 1985. B. Humpert and J. Martin. The incidence Hopf algebra of graphs. SIAM J. Discrete Math., 26(2):555–570, 2012. S. A. Joni and G.-C. Rota. Coalgebras and bialgebras in combinatorics. Stud. Appl. Math., 61(2):93–139, 1979. 127 [Kel11] Bernhard Keller. On cluster theory and quantum dilogarithm identities. In Representations of algebras and related topics, EMS Ser. Congr. Rep., pages 85–116. Eur. Math. Soc., Z¨urich, 2011. [KLS13] Allen Knutson, Thomas Lam, and David E. Speyer. Positroid varieties: juggling and geometry. Compos. Math., 149(10):1710–1752, 2013. [LGPV13] Camille Laurent-Gengoux, Anne Pichereau, and Pol Vanhaecke. Poisson struc- tures. Springer, 2013. [Lin73] [LL16] [Lus94] [Lus98] [Mac17] [Mac18] Bernt Lindstr¨om. On the vector representations of induced matroids. Bulletin of the London Mathematical Society, 48:85–90, 1973. St´ephane Launois and C´esar Lecoutre. A quadratic Poisson Gel’fand-Kirillov problem in prime characteristic. Trans. Amer. Math. Soc., 368(2):755–785, 2016. G. Lusztig. Total positivity in reductive groups. In Lie theory and geometry, volume 123 of Progr. Math., pages 531–568. Birkh¨auser Boston, Boston, MA, 1994. G. Lusztig. Total positivity in partial flag manifolds. Represent. Theory, 2:70– 78, 1998. John Machacek. Plurigraph coloring and scheduling problems. Electron. J. Combin., 24(2):Paper 2.29, 2017. John Machacek. Boundary measurement matrices for directed networks on surfaces. Adv. in Appl. Math., 93:69–92, 2018. [MMW08] Jeremy L. Martin, Matthew Morin, and Jennifer D. Wagner. On distinguish- ing trees by their chromatic symmetric functions. J. Combin. Theory Ser. A, 115(2):237–253, 2008. [MN16] [MO17] [MP14] [MR04] Jesper M. Møller and Gesche Nord. Chromatic Polynomials of Simplicial Com- plexes. Graphs Combin., 32(2):745–772, 2016. John Machacek and Nicholas Ovenhouse. Log-canonical coordinates for Poisson brackets and rational changes of coordinates. J. Geom. Phys., 121:288–296, 2017. Brendan D. McKay and Adolfo Piperno. Practical graph isomorphism, II. J. Symbolic Comput., 60:94–112, 2014. R. J. Marsh and K. Rietsch. Parametrizations of flag varieties. Represent. Theory, 8:212–242, 2004. 128 [MS16] Greg Muller and David E. Speyer. Cluster algebras of Grassmannians are locally acyclic. Proc. Amer. Math. Soc., 144(8):3267–3281, 2016. [Mul13] Greg Muller. Locally acyclic cluster algebras. Adv. Math., 233:207–247, 2013. [Mul14] [Nor12] [Oh06] [Pos06] [Rus12] [Sco06] [Sei95] [Sta73] [Sta95] [Sta18] [Tak71] [Tal08] [Tal12] Greg Muller. A = U for locally acyclic cluster algebras. SIGMA Symmetry Integrability Geom. Methods Appl., 10:Paper 094, 8, 2014. G. Nord. The s-chromatic polynomial. Master’s thesis, Universiteit van Ams- terdam, 2012. Sei-Qwon Oh. 34(4):12651277, 2006. Poisson polynomial rings. 
Communications in Algebra, Alexander Postnikov. Total positivity, Grassmannians, and networks, 2006. arXiv:0609764v1. Keeler Russell. csf. https://github.com/keeler/csf, 2012. Github reposi- tory. Joshua S. Scott. Grassmannians and cluster algebras. Proc. London Math. Soc. (3), 92(2):345–380, 2006. N. Seiberg. Electric-magnetic duality in supersymmetric non-abelian gauge theories. Nuclear Phys. B, 435(1-2):129–146, 1995. Richard P. Stanley. Acyclic orientations of graphs. Discrete Math., 5:171–178, 1973. Richard P. Stanley. A symmetric function generalization of the chromatic poly- nomial of a graph. Advances in Math., 111:166–194, 1995. The Stacks Project Authors. Stacks Project. http://stacks.math.columbia. edu, 2018. M. Takeuchi. Free Hopf algebras generated by coalgebras. J. Math. Soc. Japan, 23:561–582, 1971. Kelli Talaska. A formula for Pl¨ucker coordinates associated with a planar net- work. Int. Math. Res. Not. IMRN, pages Art. ID rnn 081, 19, 2008. Kelli Talaska. Determinants of weighted path matrices, 2012. arXiv:1202.3128 [math.CO]. [Van01] Pol Vanhaecke. Integrable systems in the realm of algebraic geometry. Springer Science & Business Media, 2001. 129 [Wei83] [Whi37] [Xin04] Alan Weinstein. The local structure of Poisson manifolds. J. Differential Geom., 18(3):523–557, 1983. Hassler Whitney. On regular closed curves in the plane. Compositio Math., 4:276–284, 1937. Guoce Xin. The Ring of Malcev-Neumann Series and the Residue Theorem. PhD thesis, Brandeis University, 2004. arXiv:math/0405133 [math.CO]. 130