This is to certify that the dissertation entitled

THEMATIC INDICES AND SUPEROPTIMAL SINGULAR VALUES OF MATRIX FUNCTIONS

presented by

ALBERTO A. CONDORI

has been accepted towards fulfillment of the requirements for the Ph.D. degree in Mathematics.

Major Professor's Signature

Date

MSU is an Affirmative Action/Equal Opportunity Employer

THEMATIC INDICES AND SUPEROPTIMAL SINGULAR VALUES OF MATRIX FUNCTIONS

By

Alberto A. Condori

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Mathematics

2009

ABSTRACT

THEMATIC INDICES AND SUPEROPTIMAL SINGULAR VALUES OF MATRIX FUNCTIONS

By

Alberto A. Condori

In this dissertation, we discuss a number of results on superoptimal approximation by analytic and meromorphic matrix-valued functions on the unit circle. We first prove the existence of a monotone non-decreasing thematic factorization for admissible (e.g. continuous) very badly approximable matrix functions. Unlike the case of monotone non-increasing thematic factorizations, it is shown that thematic indices in a monotone non-decreasing thematic factorization are not uniquely determined. We then consider the problem of characterizing superoptimal singular values. An extremal problem is introduced and its connection with the sum of superoptimal singular values is explored by considering a new class of operators: Hankel-type operators on Hardy spaces of matrix functions. Lastly, we consider approximation by meromorphic matrix-valued functions; the so-called Nehari-Takagi problem. We provide a counterexample that shows that the index formula in connection with meromorphic approximation, which is well known to hold in the case of scalar-valued functions, fails in the case of matrix-valued functions.

DEDICATION

To Papa Alberto and "Don Manuel con el lago y la montaña."

ACKNOWLEDGMENT

First and foremost, I would like to express my gratitude to my dissertation adviser, Professor Vladimir V. Peller. You have been a constant inspiration to me since my first day at Michigan State University. I thank you for seeing potential in me and taking me as your student. Without your support, patience and helpful remarks, this dissertation would not have been possible. To you, I am forever indebted.

I thank my fiancée and best friend, Cara. Without your continuous love and support, I would not have seen the sunshine in my darkest days at MSU, especially during my physical and metaphysical struggles. Thank you for believing in my mathematical ability in the times that I could not.

During the many times I felt like surrendering, I have always remembered my dad telling me "los Condori nunca mueren." Dad, I thank you for teaching me to never give up. Mom, Sandro and Elthon, I thank you for always reminding me about living life, especially at the times when I was consumed by my mathematical and personal problems.

Lastly, I express my appreciation to Professors A. Volberg, M. Frazier, and V. Zeidan. You have been an inspiration to me as well. I thank you for believing in my potential and for stimulating conversations that led to both my mathematical and personal growth.

TABLE OF CONTENTS

Introduction ................................. 1
1.1 Best and superoptimal approximation in $H^\infty_{(k)}$ ........... 4
1.2 Badly and very badly approximable matrix functions ....... 8
1.3 Thematic factorizations ....................... 9
1.4 Other notation and terminology ................... 14

Monotone thematic factorizations ................... 16
2.1 Introduction .............................. 16
2.2 Invertibility of Toeplitz operators and factorization of certain unimodular functions ........................... 19
2.3 Badly approximable matrix functions ................ 21
2.4 Sequences of thematic indices .................... 35
2.5 Unitary-valued very badly approximable 2 x 2 matrix functions . 42

On the sum of superoptimal singular values ............. 48
3.1 An extremal problem ......................... 48
3.2 Best approximation in $L^q(S_1^{m,n})$ and dual extremal problems ... 53
3.3 $\sigma_k(\Phi)$ as the norm of a Hankel-type operator and $k$-extremal functions .................................. 55
3.4 How about the sum of superoptimal singular values? ....... 61
3.5 Unitary-valued very badly approximable matrix functions ..... 72

An index formula in connection with meromorphic approximation 80

Bibliography ................................. 85

Chapter 1

Introduction

The problem of approximating a continuous function on the unit circle $\mathbb{T}$ by bounded analytic functions in the unit disk $\mathbb{D}$ with respect to the uniform norm has been studied for quite some time. A simple compactness argument reveals that any bounded measurable function $\varphi$ on $\mathbb{T}$ has a best uniform approximation $\mathcal{A}\varphi$ by bounded analytic functions, i.e.
$$\|\varphi-\mathcal{A}\varphi\|_\infty=\operatorname{dist}_{L^\infty}(\varphi,H^\infty)=\inf\{\|\varphi-f\|_\infty : f\in H^\infty\}.$$
The uniqueness of a best approximation for continuous functions was first proved by S. Khavinson in [Kh] and rediscovered later by several mathematicians. Different authors have studied the error function $\varphi-\mathcal{A}\varphi$, or equivalently, functions $\psi$ for which the zero function is a best approximation. These functions $\psi$ are called badly approximable. For example, it was proved by Poreda in [Po] that a continuous function $\psi$ is badly approximable if and only if it has constant modulus and negative winding number.

A new light on the best approximation problem was shed by Nehari [Ne]. He found the following formula for the distance from a bounded measurable function $\varphi$ to the Banach algebra $H^\infty$ of bounded analytic functions in $\mathbb{D}$:
$$\operatorname{dist}_{L^\infty}(\varphi,H^\infty)=\|H_\varphi\|. \qquad (1.0.1)$$
Here, the Hankel operator $H_\varphi : H^2\to H^2_-=L^2\ominus H^2$ with symbol $\varphi$ is defined by
$$H_\varphi f=P_-\varphi f, \quad\text{for } f\in H^2,$$
where $P_-$ denotes the orthogonal projection of $L^2$ onto $H^2_-$. (Throughout, $\mathcal{H}\ominus\mathcal{K}$ denotes the orthogonal complement of a subspace $\mathcal{K}$ of a Hilbert space $\mathcal{H}$.) Therefore, formula (1.0.1) motivated the consideration of Hankel operators in the study of the best approximation problem. In the years to follow, further evidence of the intimate connection between Hankel operators and the best approximation problem was revealed through many beautiful results. For instance, Adamyan, Arov and Krein found a more general condition that guarantees the uniqueness of the best approximation. In [AAK1], they showed that if $\varphi$ is admissible, then $\varphi$ has a unique best approximation in $H^\infty$. A function $\varphi\in L^\infty$ is said to be admissible if the essential norm $\|H_\varphi\|_e$ of the Hankel operator $H_\varphi$ is strictly less than its operator norm $\|H_\varphi\|$. As usual, $\|T\|_e$ denotes the essential norm of an operator $T:\mathcal{H}\to\mathcal{K}$ between Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$.

Even though the notion of winding number is not available for the class of admissible functions, the classification of badly approximable functions given by Poreda was also extended to this class of functions by using Hankel and Toeplitz operators. It is well-known now (e.g.
see Chapter 7 in [Pe1]) that an admissible function 1,9 is badly approximable if and only if a has constant modulus, the Toeplitz operator 7“,). is Fredholm, and ind 7“,), > 0. Here, the Toeplitz operator T10 : H 2 ——> H 2 with symbol it'- E L00 is defined by m = aw. for f 6 H2. 2 where 19+ denotes the orthogonal projection of L2 onto H2. Moreover, in [PK], Peller and Khruschev also used Hankel operators to prove many general hereditary properties of the non-linear operator A of best approximation; loosely speaking, if «p E X, then .499 E X holds for many “large” classes of function spaces X on T. The problem of best uniform approximation by meromorphic functions in D and its connection to Hankel (and Toeplitz) operators was also considered. In [AAK2], Adamyan, Arov and Krein showed that if go 6 L00, then the kth singular value sk(H,.;) of the Hankel operator HP is given by the formula where H (00k) denotes the collection of meromorphic functions in ID) bounded near T and having at most k poles in ID) (counting multiplicities). Moreover, if so is k-admissible 00 (W the function u = s;1(H¢)(,.-9 — q) has modulus equal to 1 a.e. on T, the Toeplitz (i.e. IlanHe < sk(H¢)), then (,9 has a unique meromorphic approximation q E H operator T u is Fredholm and lIld T”, = 2k + IL, where ,u denotes the multiplicity of the singular value sk(H,p) of the Hankel oper- ator H99. As usual, indT denotes the index of Fredholm operator T, i.e. indT dgf dim kerT —— dim ker T“. Moreover, for n _>_ 0, the singular value sn(T) of a bounded operator T : H —> IC between two Hilbert spaces H and IC is defined by sn(T) = inf{|lT — RH : R a bounded operator from H to K, rank R g n}. Besides being of mathematical interest, the problem of best approximation by analytic and meromorphic functions is also important in applications. Since most en- 3 gineering systems have several inputs and outputs, it is of interest to find analogous results in the case of matrix—valued functions. (For instance, see [F] and [Pel].) Un- fortunately, there are significant complications in the case of matrix-valued fin‘ictions. In this dissertation, we continue the study of the best approximation problem in the case of matrix-valued functions (for short, matrix functions). In order to appropriately discuss our results, we first introduce notation and recall several results. 1.1 Best and superOptimal approximation in H 8:) Throughout, Mm,” denotes the space of m x n matrices equipped with the operator norm H - ”Mm,n (under the usual identification of elements in Mmm and operators from C" to Cm.) In the case m = n, we use the notation Min to denote Mnm. For A E Mm," andj 2 0, we denote by sj(A) the jth—singular value of A, i.e. the distance (under the operator norm) from A to the set of matrices of rank at most j. For any space X of scalar functions on T, X (Mm,n) denotes the space of m x n matrix-valued functions on 'I‘ whose entries belong to X. We also use the notation X (C") for X (Mn,1)- In the case of the space of (essentially) bounded m x n matrix functions L°°(Mm,n), we use the norm || - “LOO(Mm.n) defined by d f ll‘IjllLOO(Mm‘n) __€_ 883 211,11; II\II(<)lle,n' ‘ E A matrix function B E H (”(1%”) is called a finite Blaschke-Potapou product if it admits a factorization of the form B: (13132.nem, 4 where U is a unitary matrix and, for each 1 S j _<_ m, B — Z—AjP +(I 13-) J_1—5\jz] J for some Aj E D and orthogonal projection Pj on C”. The degree of the Blaschke— Potapov product B is defined to be 171 deg B déf 2 rank Pj. 
j=1 It turns out that every subspace I; of finite codimension invariant under multipli- cation by z on H 2((C") is of the form BH2((C”) for some Blaschke-Potapov product of finite degree codim £ (e.g. see Lemma 5.1 in Chapter 2 of [Pe1]). Let H (of) (Mmm) denote the collection of matrix functions Q E LOO(Mm,n) that admit a factorization of the form Q = FB“ for some F E H w(Mm,n) and Blaschke— Potapov product B of degree at most k. Alternatively, the class H as) (Mmm) consists of matrix functions ‘1! E L-°O(Mm7n) which can be written in the form ‘11 = R + F for some F E H 00(Mmm) and some rational m x n matrix function R with poles in ID and whose McMillan degree is at most k. Here we do not need the notion of McMillan degree and so we refer the interested reader to consult Chapter 2 in [Pel] for more information. Definition.1.1.1. Let k 2 0. Given an m xn matrix-valued function (I) E [DOWN/Jim”), we say that Q is a best approximation in H if) (Mmm) to (I) if Q belongs to H (of) ( Mm”) and “‘1’ — Ql|L00(MmJ,) = diStLqumanfig H§)(Mm,n))' Note that, by a compactness argument, a matrix function (I) E L°°(Mm,n) always 5 has a best approximation in H (5);)(Mmfl). In other words, the set ngl(q)) déf {Q E H&)(Mm,n) : Q minimizes 6882161? ”(NO - Q(C)[[an} is non-empty. As in the case of scalar—valued bounded functions, Hankel operators on Hardy spaces are a very useful tool in the study of best approximation by matrix functions in H (OE)(Mm,n). For a matrix function (I) E L°°(Mm,n), we define the Hankel operator Hq, by HQf = rum, for f e H2(C”), where P- denotes the orthogonal projection of L2(Cm) onto H3(Cm) = L2(Cm) {—3 H 2(Cm). By a matrix analog of a theorem of Adamyan, Arov and Krein (see [Trl] or Section 3 of Chapter 4 in [Pe1]), it is known that diSt'LOO(Mm,n)(q)t Ha?) (Mmmn = 8k(Hq)). (lull) However, in contrast to the case of scalar-valued functions, a best approximation is rarely unique (even under the assumption sk(Hq,) > ||H¢|Ie). Example 1.1.2. Consider the problem of best approximation of the matrix function l/z2 O O O (I): in H "(0;)(M2) for k = 0 and k = 1, respectively. In this case, it is easy to see that any matrix function of the form O O Pk: , ‘0 fl: is a best approximation in H (Cf)(M2) to (I), where fk is any (scalar—valued) function 6 in H8?) such that ||kaOO g 1, for k = 0,1. Notice that in Example 1.1.2, for k = 0 and k = 1, it is likely to say that the “very best” approximant to Q, in the respective H (0,3)(M2) class, should be the zero matrix function. Therefore it is natural to impose additional conditions in order to distinguish a “very best” approximant among all best approximants to Q in H (1):)(M2) for each k =-- 0,1. This idea led N.J. Young [Y] to the notion of “superoptimal” approximation in the case k = 0, i.e. the case of analytic approximation. Definition 1.1.3. Let k 2 0 and Q E L00(Mm,n). For j > 0, define the sets Q§kl(Q) déf {Q e 951:)1(Q) : Q minimizes ess 22g sj(Q(() — Q(C))}. We say that Q is a superoptimal approximation of Q in HE’E)(Mm,n) if Q belongs to n ng)(Q) 2: (2331 {m n}_1(Q) and in this case we define the superoptimal singular .720 values on in H§)(Mm,n) by tg-k)(Q) = ess sup sj((Q — Q)(C)) for j 2 0. (ET In the case k = 0, we also use the notations fly-(Q) and tj(Q) to denote Q§O)(Q) and t§0)(Q), respectively, for j Z 0. The uniqueness of a superoptimal analytic approximation F for matrix functions Q E (H °° + C)(Mm,n) was first established by Peller and Young in [PYl]. 
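The degeneracy described in Example 1.1.2, and the way Definition 1.1.3 resolves it, can be checked directly; the short computation below is an added illustration (it is not part of the original text) and uses only the facts already quoted above. On $\mathbb{T}$ we have $1/z^{2}=\bar z^{2}$, and the scalar Hankel operator $H_{\bar z^{2}}$ maps $1\mapsto\bar z^{2}$ and $z\mapsto\bar z$ while annihilating $z^{j}$ for $j\ge 2$, so $s_{0}(H_{\bar z^{2}})=s_{1}(H_{\bar z^{2}})=1$. Consequently $s_{0}(H_{\Phi})=s_{1}(H_{\Phi})=1$ for $\Phi=\operatorname{diag}(\bar z^{2},0)$, and (1.1.1) gives $\operatorname{dist}_{L^{\infty}(M_{2})}\bigl(\Phi,H^{\infty}_{(k)}(M_{2})\bigr)=1$ for $k=0,1$. For $Q=\operatorname{diag}(0,f_{k})$ with $f_{k}\in H^{\infty}_{(k)}$ and $\|f_{k}\|_{\infty}\le 1$,
$$(\Phi-Q)(\zeta)=\begin{pmatrix}\bar\zeta^{\,2}&0\\[2pt]0&-f_{k}(\zeta)\end{pmatrix},\qquad s_{0}\bigl((\Phi-Q)(\zeta)\bigr)=1,\quad s_{1}\bigl((\Phi-Q)(\zeta)\bigr)=|f_{k}(\zeta)|,$$
so every such $Q$ is a best approximant, while minimizing $\operatorname{ess\,sup}_{\zeta}s_{1}\bigl((\Phi-Q)(\zeta)\bigr)$ within this family forces $f_{k}=0$. This is consistent with the superoptimal approximant of $\Phi$ in $H^{\infty}(M_{2})$ being the zero matrix function, with $t_{0}(\Phi)=1$ and $t_{1}(\Phi)=0$.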
(Recall that H 00 + C denotes the closed subalgebra of L00 that consists of functions of the form f + g with f E H 00 and g E C.) Their method was based on a diagonalization of the error term Q —— F; the so—called thematic factorization (see Section 1.3). In [Tr2], Treil proved that a unique superoptimal meromorphic approximation Q in H§)(Mm,n) exists for matrix functions Q E (H00 + C)(Mm,n) such that sk(Hq)) < sk_1(H¢,) by using geometric arguments and operator weights. Shorty 7 after, Peller and Young also proved this result in [PY2] by using a diagonalization argument that also constructs (in principle) the superoptimal meromorphic approxi- mant in H 8%(an). More generally, it is now known (see Section 17 of Chapter 14 in [Pe1]) that if Q E L°°(Mm.n) is k—admissible, then Q has a unique superoptimal approximation Q in H (015(an) and sJ-((Q — Q)(()) = fink)(Q) holds for a.e. C E T, j 2 0. A matrix function Q E L°°(Mm,n) is k-admissible if sk(Hq,) < sk_1(Hq)) and [[Hq)[[e is strictly less than the smallest nonzero number in the set {t§k)(Q)}J-ZO. (Note that the statement on the singular values of the Hankel operator is vacuous when k = 0 and the essential norm of the Hankel Operator Hq) equals zero for continuous matrix functions Q.) We also refer to O-admissible matrix functions as admissible. More— over, in the case of scalar-valued functions, to say that a function go is k-admissible simply means that lleolle < sk(H¢) and sk(H,p) < sk_1(H¢). Thus the notion of k-admissibility for matrix functions (and so the uniqueness of a superoptimal ap— proximation) is a natural extension of the notion (and results) for scalar functions mentioned at the beginning of this chapter. 1.2 Badly and very badly approximable matrix func- tions A matrix function G E L°°(Mm,n) is called badly approximable if the zero matrix function is a best approximation in H 00(Mm.n) of G. If, in addition, the zero matrix function is a superoptimal approximation in H 0C’(Mmm) of G, we say G is very badly approximable. In particular, a matrix function is very badly approximable if and only if it is the difference of a bounded matrix function and its superoptimal approximant 8 in H 0°(Mmm). For Q E L°O(Mm,n) and fixed E Z 0, it is easy to observe from Definition 1.1.3 that if Q E ng)(Q), then the zero matrix function belongs to Qg(Q — Q), tj(Q - Q) = tg-k)(Q) and Q + F E Ilgk)(Q) whenever F E Sly-(Q — Q) for 0 S j g f. Therefore, if Q has superoptimal approximant Q in HE’EflMmfl), then Q - Q is very badly approximable. 1.3 Thematic factorizations In [PYl], very badly approximable matrix functions in (H 00 + C )(Mmfi) were char- acterized algebraically in terms of thematic factorizations. It turns out that this same algebraic characterization remains valid for very badly approximable matrix functions which are only admissible. To appropriately discuss these factorizations, we first re- call several definitions and refer the reader to [API] and [PT2] for other algebraic and geometric characterizations of admissible very badly approximable matrix functions. Let In denote the matrix function that equals the n X n identity matrix on T. Recall that a matrix function G E H°°(Mm,n) is called inner if 6*9 = In a.e. on T. A matrix function F E H 00(It/11mm) is called outer if FH2(C") is dense in H 2(Cm). Lastly, a matrix function O E H 00(It/ling”) is called co-outer whenever the transposed function 9’ is outer. 
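As a concrete illustration of the last three definitions (added here for orientation; it does not occur in the original text), consider the column function
$$v(\zeta)=\frac{1}{\sqrt2}\begin{pmatrix}1\\ \zeta\end{pmatrix}\in H^{\infty}(M_{2,1}).$$
It is inner, since $v^{*}v=\tfrac12(1+\bar\zeta\zeta)=1$ a.e. on $\mathbb{T}$, and it is co-outer, since $v^{\mathrm t}\begin{pmatrix}f\\ g\end{pmatrix}=\tfrac{1}{\sqrt2}(f+zg)$ ranges over all of $H^{2}$ as $f,g$ range over $H^{2}$, so that $v^{\mathrm t}$ is outer. The same computation shows that $\theta(\zeta)=\tfrac{1}{\sqrt2}\begin{pmatrix}-\zeta\\ 1\end{pmatrix}$ is inner and co-outer, and
$$V(\zeta)=\bigl(v(\zeta)\;\;\overline{\theta(\zeta)}\bigr)=\frac{1}{\sqrt2}\begin{pmatrix}1&-\bar\zeta\\ \zeta&1\end{pmatrix}$$
is unitary-valued on $\mathbb{T}$; matrix functions built in exactly this way appear below as the thematic matrix functions of (1.3.1).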
In what follows, we shall make use of the following fact concerning co—outer matrix functions (see Chapter 14 of [Pel] for a proof). Fact 1.3.1. Suppose O is‘a co—outer matrix: function in HOO(Mm,n). If 77 E L2(Cn) is such that On E H2057”), then r) E H2((C”). Let n 2 2 and 0 < r < n. For an n x r inner and co-outer matrix function T, it is known that there is an n x (n — r) inner and co—outer matrix function 8 such that V=(TO) (1-3-1) 9 is a unitary-valued matrix function on T. Functions of the form (1.3.1) are called r-balanced. We refer the reader to Chapter 14 in [Pel] for a detailed presentation of many interesting properties of r-balanced matrix functions. Our main interest lies with 1-balanced matrix functions, which are also referred to as thematic. Definition 1.3.2. A partial thematic factorization of an m x n matrix function is a factorization of the form { touo O ... O O \ O tlul . . . O O W6-...-W:_1 E E E 3 V:_1-...-VO* (1.3.2) O O .. . tr—lur—l O K o o a x11 ) where the numbers t0, t1, . . . ,tr_1 satisfy t0 2 t1 221,4, > 0; (1.3.3) the function uj is unimodular and such that the Toeplitz operator Tu]. is Fredholm with positive index, for 0 g j S r — 1; the n x n matrix function V]- and m x m matrix function I/Vj have the form I,- o I,- o g, v,- = . and Wj = - , (1.3.4) e V]- o W,- for some thematic matrix functions V} and IF}, respectively, for 1 g j < r — 1; V0 and l/Vg are thematic matrix functions; and the matrix function ‘11 satisfies ll‘llllLOC(Mm_rgn-T) g tr—l and “HQ” < tr—l- (13-5) 10 The positive integers k0, . . . , kr_1 defined by kj = indTuj, for 0 _<_j5 r— 1, are called the thematic indices associated with the factorization in (1.3.2). As usual, if r = m or r = n, we use the convention that the corresponding row or column does not exist. Definition 1.3.3. A thematic factorization of an m x n matrix function is a partial thematic factorization of the form (1.3.2) in which ‘11 is identically zero. It can be shown that any admissible very badly approximable matrix function admits a thematic factorization. Conversely, any matrix function of the form (1.3.2) with Q = O is a very badly approximable matrix function whose jth—superoptimal singular value equals tj for 0 g j g r — 1. Actually, to deduce the latter, the assumption in (1.3.3) is essential as the following example illustrates. Example 1.3.4. Consider the matrix function l O z t 1 2: —1 G = IV“ V*, where V = 14’ = — o 32 V5 1 2 Obviously, “C(OH = 3 for a.e. C E T and so ”Hg“ _<_ [[GHLoo = 3. We claim G is not badly approximable. Assume, on the contrary, that G is badly approximable. In this case, the continuity of G guarantees that the Hankel operator HG has a (non-zero) maximizing vector f E H2(C2). In this case, “HG” = 3 (e.g. see (1.1.1) above), G'f E HEM?) and so ||WGf||2 = lle||2 = ”110sz = 3|lf||2 = 3||V*f||2 (1-3-6) 11 because the matrix functions V and W are unitary-valued. We see from (1.3.6) that Hv“f|l§ + gnetnli = 9|lV*f|l§ = gnrrui + Quaint and so '0" f = O, where v and O denote the first and second column functions of V. Thus, the fact that O 1 —3EOtf G f = W* = _— 3:59t f f2 3et f belongs to H_2_( H 2(C71) with symbol Q E LOO(Mm,n) is defined by Tibf ___ P+q)f, for f E H2001): where 1P+ denotes the orthogonal projection of L2( 0, then the Toeplitz operator TZG has dense range in H2(cm). To prove Fact 1.3.6, Peller and Young used the first assertion in Theorem 1.3.5 (applied to the function zG instead). 
That assertion allowed them to reduced the verification of density of Range TzG to a similar verification for a matrix function of smaller size and so the result followed by an induction argument. 13 The converse to Fact 1.3.6 also holds for unitary-valued matrix functions U on T such that [lHuHe < 1. This was proved in [AP1] (see also Chapter 14 in [Pe1]). Fact 1.3.7. Let U be an n X n unitary-valued matrix function such that HHUHe < 1. Then U is very badly approximable if and only if the T oeplitz operator T zU has dense range in H 2( Y is a bounded linear operator, we say that a non-zero vector at E X is a maximizing vector of T whenever [[TrHy 2 HT“ ' llwllx; H e [C denotes the orthogonal complement of a subspace [C of a Hilbert space H; ind T denOtes the index of Fredholm operator T, i.e. ind T = dim ker T-dim ker T *; if H and K are Hilbert spaces and T : H —-> [C is a bounded linear Operator, the singular values sn(T), n 2 0, of T are defined by sn(T) = inf{|[T — RH : R a bounded operator from H to lC,rank R g n} and the essential norm of T is defined by IlTlle = infillT - K : K is a compact operator }. 15 Chapter 2 Monotone thematic factorizations 2.1 Introduction In [PYl], it was observed that thematic indices (see Definition 1.3.2) depend on the choice of the thematic factorization. However, it was conjectured there that the sum of the thematic indices associated with any thematic factorization of a given very badly approximable matrix function Q depends only on Q (and is therefore independent of the choice of a thematic factorization) whenever Q belongs to (H 00 + C )(Mm,n)- This conjecture was settled in the affirmative shortly after in [PY2]. l\=loreover, it was shown in [PTI] that this conjecture remains valid for matrix functions Q which are merely admissible. The result concerning the sum of thematic indices of Q leads to the question: Can one arbitrarily distribute this sum among thematic indices of Q by choosing an appropriate thematic factorization? A partial answer was given in [AP2] in terms of monotone partial thematic factorizations. Definition 2.1.1. A partial thematic factorization of the form (1.3.2) is called mono- tone non-increasing (or non-decreasing) if for any superoptimal singular value t, such that t _>_ tr_1, the thematic indices kj, kj+1, . . . , ks that correspond to all of the super- 16 optimal singular values that are equal to t form a monotone non—increasing sequence (or non-decreasing sequence). Remark 2.1.2. Note that only monotone non-increasing partial thematic factoriza- tions were considered in [AP2]. The following result was established in [AP2]. Fact 2.1.3. If Q E L°°(Mm,n) is an admissible very badly approximable matrix func- tion, then Q possesses a monotone non-increasing thematic factorization. Moreover, the indices of any monotone non-increasing thematic factorization are uniquely de- termined by Q. Hence, one cannot arbitrarily distribute the sum of thematic indices of an ad- missible very badly approximable matrix function among thematic indices in non- increasing order. Indeed, thematic indices are uniquely determined when arranged in this way. We refer the reader to [Pe1] for more information and proofs of all previously mentioned facts concerning thematic factorizations. Before explaining what is done in this chapter, let us consider the following ex— ample. 
Let G be the 2 x 2-matrix function defined by Clearly, G is a very badly approximable continuous (and so admissible) function in its non-increasing monotone thematic factorization with thematic indices 2 and 1. we now ask the question: Does G admit a monotone non-decreasing thematic factorization? 17 It is easy to verify that G can also be factored as —1o sol—521 o1 o22fi1z2 demonstrating that G does admit a monotone non-decreasing thematic factorization with thematic indices 1 and 2. Thus, the natural question arises: Does every admis- sible very badly approximable matrix function admit a monotone non-decreasing the- matic factorization? If so, are the thematic indices in any such factorization uniquely determined by the matrix function itself? We succeed in providing answers to these questions. We begin Section 2.2 introducing sufficient conditions under which the Toeplitz operator induced by a unimodular function is invertible. For the reader’s convenience, we also state some well—known theorems on the factorization of certain unimodular functions. In Section 2.3, we establish new results on badly approximable matrix functions. We prove that given a (partial) thematic factorization of a badly approximable ma- trix function G whose “second” thematic index equals k and an integer j satisfying 1 _<_ j S h, it is possible to find a new (partial) thematic factorization of G in which the “first” new thematic index equals j . We then give further analysis of the “lower block” obtained in this new factorization of G. It is shown that, under rather nat- ural assumptions, the first thematic index of the new lower block is indeed the first thematic index of G in the originally given thematic factorization. Once these results are available, we argue in Section 2.4 that there is an abundant number of thematic factorizations of an arbitrary (admissible) very badly approx- imable matrix function. We begin by proving the existence of a monotone non- decreasing thematic factorization for such matrix functions. In contrast to monotone non-increasing thematic factorizations, it is shown that the thematic indices appear- 18 ing in a monotone non-decreasing thematic factorization are not uniquely determined by the matrix function itself. Moreover, we obtain every possible sequence of the— matic indices in the case of 2 x 2 unitary-valued matrix functions. Vile further prove that one can obtain various thematic factorizations from a monotone non-increasing thematic factorization while preserving “some structure” of the thematic indices in the case of m x n matrix functions with min{m,n} 2 2. We close the section by illustrating this with a simple example. In Section 2.5, we provide an algorithm and demonstrate with an example that the algorithm yields a thematic factorization for any specified sequence of thematic indices of an arbitrary admissible very badly approximable unitary—valued 2 x 2 matrix function. 2.2 Invertibility of Toeplitz operators and factor- ization of certain unimodular functions In this section, we include some useful and perhaps well-known (to those who work with Toeplitz and Hankel operators on the Hardy space H 2) results regarding scalar functions that are needed throughout the paper. We begin by introducing sufficient conditions for which a Toeplitz operator Tm, where w is a unimodular function on T (i.e. to has modulus equal to 1 a.e. on T), is invertible on H 2. Although a. complete description of unimodular functions is for which Ta. 
is invertible is given by the well— known theorem of Devinatz and Widom, the sufficient conditiongiven in Theorem 2.2.2 below is easier to verify. Lemma 2.2.1. Let 0 < p S 00. If h E H39 and 1/h E H2, then the Toeplitz operator T ,3 ,, has trivial kernel. ./ 1, Proof. Suppose that p 2 2. Let f E ker Th/h' Since H3 = L2 9 H2 2 EH? then 19 f / h E (MEET-17. It follows that f / h E H1 D {II—1 and therefore ker Th/h. must be trivial, because H1 H Eff—1— is trivial. Suppose now that h. E Hp \ H2 with 0 < p < 2. Assume, for the sake of contradiction, that ker T 71 / h is non—trivial. In this case, a simple argument of Hayashi (see the proof of Lemma 5 in [Ha]) shows that there is an outer function I: E H 2 such that h/ h = lit/k, and so there is a c E IR such that h = ck, a contradiction to the assumption that h E H 2. Thus T h / h must have trivial kernel. [I Theorem 2.2.2. Suppose that h e H2 and 1/h 6 H2. Then the Toeplitz operator Th/h has trivial kernel and dense range. In particular, if Th/h is Fredholm, then T 71/ h is invertible. Proof. By Lemma 2.2.1, we know that. Th/h has trivial kernel. Now, h. E H 2 and 1/h E H 2 imply that h is an outer function, and so the fact that T}; / h has dense range follows from Theorem 4.4.10 in [Pe1]. The rest is obvious. [:1 We now state a useful converse to Theorem 2.2.2. Fact 2.2.3. If u: is a unimodular function on T such that T w is invertible on H2, then to admits a factorization of the form w = h/ h for some outer function h such that both h and 1/h belong to Hp for some 2 < p S 00. This result can be deduced from the theorem of Devinatz and Widom mentioned earlier. A proof can be found in Chapter 3 of [Pe1]. we now state two useful, albeit immediate, implications of Fact 2.2.3. Corollary 2.2.4. Suppose that h and 1 / h ”belong to H 2. If the Toeplitz operator T 71/ h is Fredholm, then h and 1/h belong to H” for some 2 < p _<_ co. Corollary 2.2.5. Let u be a unimodular function on T. If the Toeplitz operator Tu is Fredholm with index k, then there is an outer function h such that f. u = 5k: ( to to [u—l V 20 and both h and 1/h belong to H10 for some 2 < p S 00. Remark 2.2.6. Even though representation (2.2.1) is very useful (e.g. in the proof of Theorem 2.3.3), it may be difficult to find the function h explicitly, if needed. This is however a very easy task for unimodular functions in the space ’R, of rational functions with poles outside of T. After all, if u E R, then there are finite Blaschke products B1 and B2 such that u = B1B2, by the Maximum Modulus Principle. Thus, it admits a representation of the form (2.2.1) with k = deg B1 — deg B2 for some function h invertible in H 90 (which is, up to a multiplicative constant, a product of quotients of reproducing kernels of H 2). We also find the classification of admissible scalar badly approximable functions mentioned in Chapter 1 and Remark 2.2.6 useful in proving the next theorem which is part of the lore of our subject. Theorem 2.2.7. Suppose that u E R is a unimodular function on T. Then u is badly approximable if and only if there are finite Blaschke products B1 and B2 such that deg 81 > deg B2 and 'u. = B1B2 on T. In particular, it admits the representation Q ll NI 2’49" with k = ind Tu 2 deg B1 — deg B2 for some function h invertible in H 00. 2.3 Badly approximable matrix functions Recall that for T : X —> Y, a bounded linear operator between normed spaces X and Y, a vector x E X is called a maximizing vector of T if x is non—zero and HTTHY = ||T|| ' H-TIIX- Definition 2.3.1. 
For a matrix function Q E L°C(Mlm,n) such that [IHQHC < [[Hq)”, 21 we define the space Mg) of maximizing vectors of Hg, by def 2 n Mo = {f E H (fl|2 = “Hell ' Hfllzi- It is easy to show that M q) is a closed subspace which consists of the zero vector and all maximizing vectors of the Hankel operator Hg). Moreover, Mg; always contains a maximizing vector of Hq) because [[HQHe < [[Hq,[[; a consequence of the spectral theorem for bounded self-adjoint operators. We now review results concerning badly approximable matrix functions that are used in this section. Let G E L°°(Mm,n) be a badly approximable function such that [[Hglle < 1 and ”HG“ = 1. In this case, it is not difficult to show that if f is a non-zero function in Mg, then Gf E H_2_((Cm), ”C(OHMmm = 1 for a.e. C E T, and f(() is a maximizing vector of G(C) for a.e. C E T (see Theorem 3.2.3 in [Pe1] for a. proof). These results can be used to deduce that G admits a factorization of the form a W* v*, (2.3.1) o \1: where u 2 Edit / h, h is an outer function in H 2, 6’ is an inner function, V = (v O) and Wt = (w 15..) are thematic, and \II E L°°(Mm_1,n_1) satisfies IIQIILOO(Mm_1 ”_1) S 1. Conversely, it is easy to verify that any matrix function which admits a factorization of this form is badly approximable. For the same matrix function G, it can also be shown that the Toeplitz operator Tu is Fredholm with positive index, “lele 3 [[Hglle, and the matrix functions 9 and E are left-invertible in H 0°, i.e. there are matrix functions A and B in H 00 such that AC") = In_1 and B3 = Im_1 hOld. We refer the reader to Chapter 2 and Chapter 14 of [Pe1] for proofs of the previ- 22 ously mentioned results. Lemma 2.3.2. Suppose that G E L°°(Mm,n) is a matrix function of the form u O G=VV* V*, O ‘11 where u is a unimodular function such that the Toeplitz operator Tu is Fredholm with indTu 2 0, Q E L°°(Mm__1,n_1) satisfies HQllLoo( S 1, the matrix Mm—lm—l) functions V = (v O) and Wt = ( w B) are thematic, and the bounded analytic matrix functions O and E are left-invertible in H 00. Let A and B be left-inverses for O and E in H 30, respectively, and g E ker Tip. 1. Ifé is co-outer, then A‘s + av is co-outer for any a E H2. 2. For a E H2, Até + av belongs to ker TG if and only ifa satisfies Tue 2 P+(th*\II§ — uv*At£). (2.3.2) Moreover, if §# d=ef Atg + av with a. E H2 satisfying (2.3.2), then 3. n# diff ZG€# is co-outer whenever STE is co-outer, and 4. E# E MG wheneverfi E Mt}; andHHq,” =1. Proof. Notice that for any a. E H 2, 9%,, = e‘(.4t§ + av) = g, (2.3.3) because A is a left-inverse for O and V is unitary-valued. In particular, if the entries ofg do not have a common inner divisor, then the entries of {if do not have a common inner divisor either. This establishes assertion 1. 23 Although assertion 2 is contained in [PYI] and [PY2], we provide a proof for future reference. Let {7% 2 Air + av. It follows from (2.3.3) that * v*§ v*€ v a, = t # = # , (2.3.4) 9 5# 6 and so G{# = wuv“§# + EQfi. Since W is unitary-valued, then Im = out -l- EE* holds and so 8* = ImB* = waitB* + 333* = ath‘“ + E. In particular, E = (In — if'wt)B* and so G§# = wu(v*At§ + a.) + EQ§ = B*\Il.§ + ’lIl(U(U*/lt€ + a) — uitB*\II€). (2.3.5) It follows now, from Fact 1.3.1 and (2.3.5), that G§# belongs to H3(Cm) if and only if P+(u(v*At§+a) -— uitB*\IJ§) = O because if E H_2_(Cm—1) and w is co—outer. Thus, G€# E H3(Cm) if and only if Tua = P+(th*\Il§ — uv*At§). This completes the proof of 2. Henceforth, we fix a function 0.0 E H 2 that satisfies (2.3.2). 
The existence of a0 follows from the fact that Tu is surjective. To prove 3, observe that (2.3.5) can be rewritten as G§# = B*\Il§ + 11580 for some b0 E H2 because P+(u(v*At£ + a0) — th*\II£) = O. Let n d—Elf 5%. Then r)# = EGEV‘f = Btn + bow and so am 2 3:31,, + (.02in = 7,, 24 because B is a left-inverse of E and IV is unitary-valued. Hence, n# is co—outer whenever i) is co-outer. Finally, we prove 4. Since 5 is a maximizing vector of H (I, and belongs to ker Tw, then [IQEHQ = [[qugllg = [[5]]; as “Hg,“ 2 1. Moreover, since H0§# = G§#, W is unitary—valued, and uv* VVG€# = €# ‘I’E we may conclude that. “Heielli = ||WG€#H§ = llu1’*€#lli+ll‘l’élli = ||v*€#||i + ”Elli = |l€#lli because (2.3.4) holds and V is unitary-valued. Thus {# E MO. C] We are now ready to state and prove the main result of this section. Theorem 2.3.3. Let m, n 2 2 and G E L°°(Mm,n) be a matrix function of the form O G = “If “’0 V0”: 2 O Q0 where no is a unimodular function such that the Toeplitz operator Tu0 is Fredholm with indTuO > 0, Q0 E L°O(Mm_.17n-1), the matrix functions V0 = (vo O) and W6 = (100 E) are thematic, and the bounded analytic matrix functions O and E are left-invertible in H 00. Suppose that .. “1 ‘0 . O Q1 for some unimodular function v.1 such that the Toeplitz operator Tu1 is Fredholm with ind Tu1 > 0, Q1 E L°°(M,,,_.2,,,_2) such that ll‘I’lllLOO(M ) g 1, and thematic m-2,n—2 25 matrix functions V1 and W]. Then G admits a factorization of the form u O G=W* V" O A for some unimodular function u such that T u is Fredholm with index equal to 1, a badly approximable matrix function A such that ”AHLOOUMI = 1, and thematic m—l,n—1) matrix functions V and Wt. Proof. Let A and B be left-inverses of O and E in H 00, respectively, and kj (13f ind Tu]. for j = 0, 1. By Corollary 2.2.5, there is an outer function hj such that ’3' NI 3" la. ' Uj= and both hj and l/hj belong to Hp for some 2 < p S 00, forj = 0,1. Let v1 denote the first column of V1 and g dzef zk1_1h1vl. It follows at once from (2.3.6) that Q05 2 Zhl 2711. Thus, 5 is a maximizing vector of quo and belongs to ker T (1,0. In particular, the column function r) dzef EQOE = hlwl is co—outer. Consider the equation Tan. = m(wgewog — uovaAtfi), a 6 H2. (2.3.7) It follows from the surjectivity of the Toeplitz operator Tu0 that there is an a0 E H 2 that satisfies (2.3.7). Furthermore, we may assume without loss of generality that z is not an inner divisor a0; otherwise, we consider a0 + ho instead of a0. By Lemma 2.3.2, the column function E# d__§f Até + aovo is a maximizing vector of the Hankel operator HG and belongs to ker Tc, as 5 is a 26 maximizing vector of the Hankel operator HQO and [[HQOH = 1. Since Ot§# = g and h1v1 is co—outer, then the greatest common inner divisor of the entries of 57% must be an inner divisor of zkl’1 by Fact 1.3.1. Therefore, {,y is co—outer whenever z is not an inner divisor of 5,3,. On the other hand, 2 is an inner divisor of the entries of 5# if and only if z is an inner divisor of do. Since 2 is not an inner divisor of no, it follows that 5# is co—outer. From (2.3.5) and (2.3.7), G{# = B*\Il0€ + 1170560, for some b0 E H 2. Thus the function 77# d—E-f 56%;]; = BtT] + b01110 is co—outer as well, by Lemma 2.3.2. From the remarks following Definition 2.3.1, we deduce that ||77#(C)|lcm = HGE#(C)Hcm = IIG(C)||Mm,nI|€#(<)ch = ||€#(C)||cn for a.e. C E T because €# is a maximizing vector of the Hankel operator HG and belongs to ker TC. Let h E H2 be an outer function such that |h(()[ = [|§#(()|[Cn for a.e. C E T. 
.VVe obtain that ||n#(.C)||cm = ||€#(C)ch = [hi-(0| (2-3-8) for a.e. C E T and so the column functions def 1 def 1 u 2 Egg and w = En# 27 are both inner and co—outer. Consider the unimodular function u dzef thu. It is easy to verify that , (2.3.9) D‘ID‘I 1 1 _ _ u = partway) = 32-:th = 2: by (2.3.8), and IIHUIIe = distLoo (u, [‘100 + C) S diSt’LOO(Mm7n)(G7 (H00 + C)(Mmgn)) < 1, because u and w are inner and [[Hglle < 1. Since it satisfies (2.3.9) and ”Hulle < 1, it follows that u is an admissible badly approximable scalar function, and so the Toeplitz operator Tu is Fredholm with positive index (see Chapter 1) and therefore T5 / h is Fredholm. Since Vb is unitary-valued and * 2156 6 then in = llVb*€#(<)Ilfgn = learner + IIEIOIIén—1 2 (hirer we? = new holds for a.e. ( E T and so l/h E Hp. By Theorem 2.2.2, TIE/h is invertible and so Ind Tu = 1. Let V and Wt be thematic matrix functions whose first columns are 1/ and a2, respectively. (The existence of such matrix functions was mentioned in Section 1.3.) Since thu = u is unimodular, it follows that WGV = 28 for some bounded matrix function A E L°°(Mm_1,n_1) with LOO-norm equal to 1, which is necessarily badly approximable. This completes the proof. D Corollary 2.3.4. Suppose that G satisfies the hypothesis of Theorem 2. 3.3. If k is an integer satisfying 1 S k g ind Tul, then G admits a factorization of the form. G=W* V* O A for some unimodular function it such that T u is Fredholm with index equal to k, a badly approximable matrix function A such that IIAIILOOUMI ) = 1, and thematic m-I,n—1 matrix functions V and Wt. Proof. Let k be a fixed positive integer satisfying k S ind T ”1' By Theorem 2.3.3, the matrix function 2’“ 71G admits a factorization of the form u O zk—lo = w* 12*, O A where ind Tu = 1, and so :h—lu O G = W* V* <0) 2k“1A is the desired factorization. CI At this point, we are unsatisfied with the conclusion of Corollary 2.3.4. After all, it does not give any information concerning the matrix function A. Therefore. we ask, under some reasonable assumptions, whether the “largest” possible thematic index appearing as the first thematic index in a thematic factorization of A should equal ind TuO. An affirmative answer is given in Theorem 2.3.5. Prior to stating and proving Theorem 2.3.5, we introduce notation and recall some needed facts. 29 Suppose that G is a badly approximable matrix function in L°C(Mm,n) such that [[Hglle < 1 and “Hg“ 2 1. As mentioned in the remarks following Definition 2.3.1, G admits a representation of the form (2.3.1) for some unimodular function it such that the Toeplitz operator T u is Fredholm with ind T u > 0. It turns out that there is an upper bound on the possible values of the index of Tu given by d f . . .(HG) :6 mine > o; ”H.220“ < ||HG||}. (2.3.10) Note that L(HG) is a well-defined non-zero positive integer and depends only on the Hankel operator HG (and not on the choice of its symbol). Moreover, there exists a (possibly distinct) factorization of G of the form (2.3.1) such that ind Tu = L(HG) and L(H\p) S L(H(;). See [AP2] or Section 10 in Chapter 14 of [Pe1] for proofs of these facts. Theorem 2.3.5. Let m,n 2 2. Suppose that G E L°O(Mm,n), IIHGIIe < 1, and G admits the factorizations uO * * 21. O * OQ OA c=wg where no and u are unimodular functions such that the Toeplitz operators TuO and Tu are Fredholm with positive indices, \II and A are badly approximable functions with LOO-norm equal to 1, and the matrix function V0, W6, V, and Wt are thematic. 
If ind TuO = L(Hg), 1.(H,1,) < L(HG) and ind Tu S L(Hq,), then L(HA) 2 ((110). (2.3.12) In addition, if ind Tu = L(H\I;), then equality holds in (2.3.12). 30 Proof. Let t d=ef L(HG). If L(HA) < L, then IIHzL-IAII < IIHAII = 1 and indet—lu g 5(HQl — (L " 1) S 0' It follows that the matrix function 2' z"—1G = W* V* satisfies ”sz‘lG” < 1 = “Hg”, by Lemma 14.107 in [Pe1], and so L(HG) S 1. —— 1, a contradiction. This establishes (2.3.12). Suppose that ind Tu = L(H (1,). Let j dzef t(Hq,) and consider the factorizations zjuo O zj u O zio=wg . v0*=w* , 12*. O zJQ O 23A It is easy to see that the sum of the thematic indices of zj G corresponding to the superoptimal singular value 1 equals ind T 2.7210 = L(HG) — L(H\p), because [[szwll < IIHinII = 1- In order to proceed, we need the following lemma. Lemma 2.3.6. Let G E L°°(Mm,n) be such that IIHGIIe < 1 and IIHGII = 1. Suppose that G is a badly approximable matrix function that admits a representation of the form (2.3.1), in which V and VI" are thematic matrix functions, u is a unimodular function, and \II is a bounded matrix function. Let V = (v O). 1. [ff E MG satisfies Off 2 O, then f = {v for some 5 E kerTu. 2. If \I/ is a badly approximable matrix function with L°°-norm equal to 1 and the 31 Toeplitz operator Tu is Fredholm with ind Tu S 0, then dim MG S dim Mg). (2.3.13) Moreover, if ind Ta, = 0, then equality holds in (2.3.13). we finish the proof of Theorem 2.3.5 before proving Lemma 2.3.6. As already seen, L(pr) < L(HG) S («(HA) and so 23A is a badly approximable matrix function of LOO-norm equal to 1, since at uHZjAH = HHAH = 1. Lett =9 as ,in Then IIHzrzj-All < “Hy-All = 1 implies that f + j Z L(HA), and therefore dimleG = dim szA Z L(szA) 2 L(HA) — j = L(HA) —— L(H\1,), by Lemma 2.3.6. Hence 1/(HG) _>_ AHA), because the sum of the thematic indices of zj G corresponding to the superoptimal singular value 1, namely L(Hg) — L(H\p), equals dimMfl'G (e.g. see Theorem 14.7.4 of [Pe1]). This completes the proof. [3 Remark 2.3.7. Note that if the inequality ind Tu, _<_ L(H (1,) in Theorem 2.3.5 is strict, then equality in (2.3.12) may not hold. For instance, consider a monotone non— increasing thematic factorization (e.g. see Section 2.4) of any admissible unitary- valued very badly approximable matrix function G E L°°(Ml2) with thematic indices 3 and 2, and any other thematic factorization of G whose first thematic index equals 1. Proof of Lemma 2. 3. 6. Let Wt = (w E). To prove assertion 1, we may assume that 32 f E Mg is non-zero. Since _ v*f f=VV*f=(vO) =v(v* ), O Fact 1.3.1 implies g (E v* f E H 2, as v is co—outer. It remains to show that u§ E H 3 Since use = G f E H3017”), it follows again from Fact 1.3.1 that ué E H3 because w is co—outer. Thus, f = {v with g E ker Tu. Suppose now that the functions \II and u satisfy the assumptions of assertion 2. Let (mfg, be a basis for MC and define g,- = etfj for 1 g j g N. Since ind Tu g 0, then ker T u is trivial, and each gj is a non-zero function in H 2((7171) by assertion 1. Furthermore, {91'}le is a linearly independent set in H 2(Of—1); after all, if there are scalars c1, . . . ,cn such that then 2?; cJ-fj = O by assertion 1, and so cj = 0 for 1 S j S N because {fj 9:1 is a linearly independent set. In order to prove (2.3.13), it suffices to show that g]- belongs to Mg, for 1 S j S N. To this end, fix jg such that 1 S jO S N. Since G is badly approximable and admits a factorization of the form (2.3.1), then ”more = Harmonie. = (more? + (memorial for a.e. C E T. 
On the other hand, Ira-(lot + Hevmciném = vajotonin = llfj0(C)ll?cn holds for a.e. C E T because V is unitary-valued. Thus, the function 910 : Olfj0 33 satisfies IIngj0(<)IICTn—l = IIgj0(C)IIcn—1 for a-e- (6 T- Since W is unitary—valued, * . uv f]0 WG - = he @9th '0 and so \IJO’fJ-O = E*ij0 E H3( dim MG because ‘11 is unimodular. 2.4 Sequences of thematic indices We proceed by proving the existence of a monotone non-decreasing thematic factor- ization and show that other thematic factorizations are induced by a given monotone non—increasing thematic factorization. Definition 2.4.1. Let G E L°°(Mm,n) be a badly approximable matrix function whose superoptimal singular values tj = t j(G), j 2 0, satisfy ”Hang < t7.-1, t0 = .. . = t7._1, and tr_1 > tr. (2.4.1) \Ve say that (k03k13k2iu'akT—I) is a sequence of thematic indices for G if G admits a partial thematic factorization 35 of the form f tong o o o) o toul . o o u"’6°...°l/V:_1 g g 5 5 ;.*_1-...- ’0'“, (2.4.2) o o toe...1 e K o o o \r ) such that ind T U]. = kj and the matrix functions V]- and W j are of the form (1.3.4) for 0 Sj S r —- 1, and \II satisfies (1.3.5). Theorem 2.4.2. Suppose that G E L°C(Mm,n) is a badly approximable matrix func- tion satisfying (2.4.1). If 1/ equals the sum of the thematic indices corresponding to the superoptimal singular value t0(G), then (1,1,...,1,1/-r+1) (2.4.3) r-l is a sequence of thematic indices for G. In particular, G admits a monotone non- decreasing thematic factorization. Proof. Consider any thematic factorization of A0 d—Ef to— 1G. It follows from Theorem 2.3.3 that A0 admits a factorization. of the form O A0=W3 “’0 v5 OA1 Where ind THO = 1. Similarly, Theorem 2.3.3 implies that A1 also admits a factoriza- tion of the form u1 O * A1 = W; V1 o A2 36 where ind Tu1 = 1. Continuing in this manner, we obtain matrix functions A0, A1, . , AT_2,A,._1 with factorizations of the form ‘0) Aj+1 * Vj, where ind Tu]. = 1, for 0 S j S r -— 2. It is easy to see that these matrix functions induce a partial thematic factorization of G in which the first r -— 1 thematic indices equal 1. Since the sum of the thematic indices 11 of G is independent of the partial thematic factorization, it must be that the rth thematic index in this induced partial thematic factorization equals 11 — (r — 1). CI The following corollaries are immediate. Corollary 2.4.3. If G E L°O(Mm,n) is an admissible very badly approximable matrix function, then G admits a monotone non-decreasing thematic factorization. Corollary 2.4.4. If G E (H00 + G)(Mm,n) is a very badly approximable matrix function, then G admits a monotone non-decreasing thematic factorization. We go on to show that the thematic indices obtained in a monotone non-decreasing thematic factorization are not uniquely determined. Moreover, we determine all pos- sible sequences of thematic indices for an admissible very badly approximable unitary- valued 2 x 2 matrix function. Theorem 2.4.5. Let U E L°°(M2) be an admissible very badly approximable unitary- valued matrix function. Suppose that (ho, k1) is the monotone non-increasing sequence of thematic indices for U. Then the collection of sequences of thematic indices for U coincides with the set d.f . . . 0U =e{(k1-Jik0 +3) : 0 S J < k1} U {(k0,k1)}. 37 Proof. Let 0 S j < k1. By Corollary 2.3.4, U admits a factorization of the form uO O U = W* V“ Oul with ind THO = k1 -— j. Since the sum of the thematic indices of U is independent of the thematic factorization, it must be that ind Tu,1 = kg + j. 
Thus (7U consists of sequences of thematic indices for U. Suppose now that (a, b) is a sequence of thematic indices for U that does not belong to 00. In this case, U admits a factorization of the form . uo O U = W’“ V* O ill for some thematic matrix functions V and Wt, and unimodular functions no and 21.1 such that ind TuO = a. and ind Tu1 = b. Since (a, b) E 0U» it follows that b > a and a > k1. Thus, k 7* zkluo (U) a: 2 1U = W V O Zkl ”(L1 is a very badly approximable unitary—valued matrix function. In particular, zklU admits a monotone non-increasing thematic sequence, say (a, B). Hence, (a + k1, B + k1) is a monotone non-increasing sequence of thematic indices for U and so, by the uniqueness of a monotone non-increasing sequence, k1 = B + k1 for some ,3 2 1 a contradiction. This completes the proof. [I We now recall how monotone non-increasing thematic factorizations were obtained in [AP2]. Let G E L°O(Mm,n) be a badly approximable matrix function such that (2.4.1) holds. In this case, it is known that G admits a monotone non-increasing partial 38 thematic factorization and that the thematic indices appearing in any monotone non- increasing partial thematic factorization of G are uniquely determined by G. In fact, as discussed in Section 2.3, G0 = t6 1G admits a factorization of the form . “0 9 . co = W0 v0 a G1 with indT“0 = L(HGO) and L(HGO) 2 L(HGI) (see (23.10)). Similarly, for each 1 S j S r — 1, we obtain a matrix function G j with a factorization of the form (41100): 141101)» - - - ~ l-(HG,_1)) is the monotone non-increasing sequence of thematic indices for G. (See [AP2] or Section 10 in Chapter 14 of [Pe1].) Note that, in the general setting of m X n matrix functions, at least two sequences of thematic indices for G exist; the monotone non—increasing sequence and the sequence in (2.4.3). The question remains: Are there any others? Theorem 2.4.6. Suppose G E L00(Mlm,n) is a badly approximable matrix function satisfying (2.4.1). If (kOakleriH'akT—I) is the monotone non-increasing sequence of thematic indices for G, then (klak0:k2a-~akr—1) 39 is also sequence of thematic indices for G. Proof. Without loss of generality, we may assume that to = 1 and k0 > k1. By Theorems 2.3.3 and 2.3.5, G admits a thematic factorization of the form u O G: W* V*, O A where ind Tu = k1 and L-(HA) = k0. Let. (R1,52,. . .,k,._1) be the monotone non- increasing sequence of thematic indices for A. In particular, K1 = kg and (k1,k0,K2,...,rtr_1) (2.4.4) is a sequence of thematic indices for G. We claim that rtJ-zkj for2Ser—1. By considering the monotone non-increasing sequence for G, it is easy to see that the sum of the thematic indices corresponding to the superoptimal singular value 1 of zk2G equals (180 - k2) + (k1 - k2). On the other hand, this sum is also equal to (k1 - k2) + (k0 — k2) + (Ky — I”2), because the sequence in (2.4.4) is a sequence of thematic indices for G. This implies that kg S k2. Now, by considing the matrix function z”2G, the same argument reveals that kg S K2. Therefore H2 2 k2. Let 2 S f < r — 1. Suppose we have already shown that kj = kj for 2 S j S t. In 40 the same manner, the sum of the thematic indices corresponding to the superoptimal singular value 1 of zk5+1G equals £1 I.’ 20c,- — ke+1l and 2a, — kl+1l + Z (4.- — k1+1l~ This implies that. Kg+1 S k)“, and a similar argument shows k1+1 S kl+1. Hence we must have that rt]- 2 kj for 2 S j S r -— 1. Cl Theorem 2.4.6 provides a stronger conclusion than one might think. 
Loosely speaking, it says that we can always interchange the highest two adjacent thematic indices in any monotone non-increasing sequence of thematic indices and still obtain another sequence of thematic indices for the same matrix function. Let us illustrate this with the following example. Example 2.4.7. For simplicity, consider the very badly approximable function z3oo G= @220) one: Clearly, (3,2,1) is the monotone non-increasing sequence of thematic indices for G. Our results imply that there are many other sequences of thematic indices for G. Indeed, by considering the subsequence (2,1), Theorem 2.4.6 implies that ( 3. 1. 2 ) is also a sequence of thematic indices for G. Similarly, it is easy to see that ( 2, 3, 1 ) and ( 2,1,3 ) are also sequences of thematic indices for G. On the other hand, it follows from Theorem 2.4.2 that ( 1, 1, 4 ) is a sequence of thematic indices for G. This leads us to ask: Are there other sequences of thematic indices in which the first index is equal to 1? 41 It can be verified that G admits the following thematic factorizations: 11o zoo z2o1 1 1 G—— ,3 _ _z2 float/2 o-o\/§1o 1—1o one2 ofio 1—zO z o 221(1) -—1—-1o o-4o—1— 12o fire 4. fl—2 oat/2 ooz OO\/2 Thus, ( 1,3, 2 ) and ( 1, 4,1 ) are sequences of thematic indices for G as well. These sequences induce two others by considering the subsequences ( 3,2 ) and ( 4,1 ); namely ( 1,2,3 ) and ( 1,1,4 ). Thus, the matrix function G admits at least 8 different sequences of thematic indices, namely (3,2,1), (3,1,2), (2,3,1),(2,1,3), (1,3,2), (1,2,3), (1,4,1), and(1,1,4). It is easy to verify that these are all possible sequences of thematic indices for G. 2.5 Unitary-valued very badly approximable 2 x 2 matrix functions The problem of finding all possible sequences of thematic indices for an. arbitrary admissible very badly approximable matrix function seems rather difficult for m x n matrix functions with min{m, n} > 2. However, in the case of unitary-valued 2 X 2 matrix functions, the problem has a straightforward solution provided by Theorem 2.4.5. In this section, we introduce a simple algorithm that yields thematic factoriza- tions with desired thematic indices for such matrix functions. 42 Algorithm Let U be an admissible very badly approximable unitary—valued 2 x 2 matrix function on T and (k0, k1) denote the monotone non—increasing sequence of thematic indices for U. Suppose U admits a monotone non-increasing thematic factorization of the form _ 2": L10. * 2. 0 ho O v r , (2.5.1) <0) 24417;] 8‘ where ho, hl, and their respective inverses belong to H p for some 2 < p S 00. For each integer j satisfying 1 S j S k1, a thematic factorization of U with thematic indices (j, k0 + k1 — j) can be obtained as follows. 1. Find left-inverses A and B in H 00 for O and E. respectively. 2. Set u0 = EkOfiTlEQ and \II = Ekl’jJFIEL. h0 h1 3. Let f = zkl_jh1. Find a solution a0 E H 2 to the equation Tuao = P+(th*\Il — uv*At)§. If 3' < k1, we require, in addition, that z is not an inner divisor of a0. (Note that if z is an inner divisor of do, then it suffices to replace a0 with a0 + ho.) 4. Let §# = Alf + am: and r}# = 2G§#. Choose an outer function h E H 2 such that |h(C)| = II€#(C)I|C2 for a.e. 4e "11‘. ‘ 5. Let 1/ = h—1§# and a; = h‘1n#. Find thematic completions V=(1/T) and Wt=(wO) to 1/ and w, respectively. 43 6. The desired thematic factorization for G is given by u O G = W* V* (2.5.2) O A where 112213 and A=QGT. 
End of algorithm The validity of this algorithm is justified by the proof of Theorem 2.3.3 and Corol- lary 2.3.4. For matrix functions G E R(M2), the badly approximable scalar functions ap- pearing in the diagonal factor of (2.5.1) also belong to R. This is a consequence of the results in [PYl] (see also Sections 5 and 12 of Chapter 14 in [Pe1]). As mentioned in Remark 2.2.6, the outer functions ho and h1 are (up to a multiplicative constant.) products of quotients of reproducing kernels of H 2. Therefore, steps 1 through 6 of the algorithm are more easily implemented if G E R(M2). Example 2.5.1. Consider the matrix function G = _ ’ _ z _ . (2.5.3) Let We find thematic factorizations with sequences of indices (2,3) and (1, 4). 44 1. A thematic factorization for G with sequence of indices (2,3): Let u0=22,\ll=f,§=1, and a0=—z2. In this case, it is easy to verify that —z2 d 1 —2 an 7']# = _- x/2 (g) ffi 41: Since ||€#(C)|[C2 = 2 on T, we may take h(() = \/2 for C E T. Then have thematic completions V=(1/ T) and Wt = (tuft), where T d O (D = — an : ‘5 z2 1 Thus, G admits the factorization —1 (U) 22 (U) 1 —;~‘2 1 with sequence of thematic indices (2,3) as desired. 2. A thematic factorization for G with sequence of indices (1, 4): 45 Let E3 Q252,§=z, and a0=1—z3. 1 23—2 l—z and7]#=*fi 2 Since II€#(C)H<2C2 = 3 — E3 — Z3 on T, we may choose _1 r h = a2 —— 723, where a 2 ——i——\/§. a 2 Let 1 1—z3 d 1 z3—2 V=— an to:— h 2 h.\/§ 22 are thematic. Since 24 ,, _ z QGT=7L§ 46 G admits the factorization NI 212‘” © G=W* O l-tl D‘ID‘ with sequence of thematic indices (1, 4) as desired. 47 vii! Chapter 3 On the sum of superoptimal singular values 3.1 An extremal problem In this chapter, we study the following extremal problem and its relevance to the sum of the superoptimal singular values of a matrix function: Extremal Problem 3.1.1. Let m,n > 1 and 1 S k S min{m,n}. Given a matrix function Q E L°°(Mm,n), when is there a matrix function ‘11,, in the set AZ’m such that ff tracet<1>(<)\1'*(<))dm(o = as)? The set AZ”? is defined by , df A2,?” =9 {\II E H6(Mln,m) : ”MILHMnm) S 1, rank \IJ(C) S k a.e. C E T} and ok(Q) is defined by def sup [Atrace(Q(()\IJ(())dm(()[. (3.1.1) n,m QeAk 014(1)) 48 Whenever n = m, we use the notation A” d=ef AZ’m instead. The importance of this problem arose from the following observation due to Peller [Pe3]. Theorem 3.1.1. Let 1 S k S min{m. n}. IfQ E L°O(Mm,n) is admissible, then ok(Q) gto(<1))+...+tk_1(<1>). (3.1.2) Proof. Let \I/ E AZ’m. We may assume, without loss of generality, that Q is very badly approximable. Indeed, ftraceetowmdmto = / tracecci — Q)(C)‘P(C))dm(C) T holds for any Q E H m(an). and so we may replace Q with Q — Q if necessary, where Q is the superoptimal approximation in H DO(Mmm) to Q. It follows from the well-known inequality [trace(A)| S [[Allsrln that the inequali— ties k—I [trace(Q(_C)\ll(C))I S II‘I’IC)‘I’(C)IIS’1” 5 (Z HUNG) II‘I/(OIIMnm j=0 hold for a.e. C E T. Thus, we» II‘1’(C)IIMn,mdm(C) f, trace(<1>(C)\P(C))dm(C) Err-Ks. I tj(‘1’) II‘1’(C)IIMn,mdm(C) Ad .3 .1 0 k—l S tj(q)) II‘IIIIL1(Mn3m) u. II _<_ tj((I)), (3.1.3) because the singular values of Q satisfy sj(Q(()) = tj(Q) for a.e. C E T since Q is very badly approximable. CI Before proceeding, let us observe that equality holds in (3.1.2) for some simple cases. Let r be a positive integer and t0, t1, . . . ,tr._1 be positive numbers satisfying Suppose Q is an n x n matrix function of the form from o o o ) o 1111.1 o o adzef , (3.1.4) o o t,._1u,._1 a K o o o 3,1,.) 
where [IQ#[[Loo S tr_1 and uj is a unimodular function of the form uj = Zdjh/ h with 63- an inner function for 0 S j S r — 1 and h an outer function in H 2. Without loss of generality, we may assume that “It” L2 = 1. It can be seen that if ( 260h2 o o o) O zt91h.2 O O xii—9f , (3.15) o o 2.0,.-152 o Ko o... o of then ‘11 E H6(Mn), rank \I’(() = r a.e. on T, IIWIILHMn = 1, and ) [Etrace(Q(()\It(C))dm(() = to + . . . + t7._1. 5O Thus we obtain that e..(<1>) = 10(5) + . . . + t7._1(Q). On the other hand, one cannot expect the inequality (3.1.2) to become an equality in general. After all, by the Hahn-Banach Theorem, and there are admissible very badly approximable 2 x 2 matrix functions Q for which the strict inequality diSt'Loo ((1), HOO(M2)) < t0((I)) + t1((1)) (5%) holds. For instance, consider the matrix function Clearly, Q has superoptimal singular values t0(Q) = t1(Q) = 1. Let 1 O O F:— \/§ —1 o ,It is not difficult to verify that 1 1 80((4’ — F)(C)) = 5% + «5 and s1((- PM» = 5 3 - «5 for all C E T. Therefore distLoc(Sg)((I), HOC(M2)) S [[Q — FHLOO(S¥) < 2 = t0((l)) + t1((l)). (3.1.7) 51 By virtue of Theorem 3.1.1 and the remarks following it, one may ask whether it is possible to characterize the matrix functions Q for which (3.1.2) becomes an equality. So, let Q be an admissible n x 71 matrix function with a superoptimal approximant Q in H 00(Mn) for which equality in Theorem 3.1.1 holds with k = n. In this case, it must be that n—1 n—1 diStLOO(sy)(q’a HOO(Mn)) = Z tjM’) = 510(1) — Ql(C)) = “‘1’ - QIIL°0(S[") j=0 j=0 by (3.1.6) and thus the superoptimal approximant Q must be a best approximant to Q under the L°°(S]") norm as well. Hence, we are led to investigate the following problems: 1. For which matrix functions Q does Extremal problem 3.1.1 have a solution? 2. If Q3; is a best approximant to Q under the L°O(S"f)-norm, when does it follow that Q$ is the superoptimal approximant to Q in L°°(Mn)? 3. Can we find necessary and sufficient conditions on Q to obtain equality in (3.1.2) of Theorem 3.1. 1? Before addressing these problems, we recall certain standard principles of func- tional analysis in Section 3.2 that are used throughout this chapter. In particular, we give their explicit formulation for the spaces Lp(S;n’n). In Section 3.3, we introduce the Hankel-type operators H g} on spaces of matrix functions and k-extremal functions, and prove that the number ok(Q) equals the operator norm of H g}. We also Show that Extremal problem 3.1.1 has a solution if and only if the Hankel-type operator H g} has a maximizing vector, and thus answer question 1 in terms Hankel-type operators. In Section 3.4, we establish the main results of this chapter concerning best ap— proximation under the L°°(S[n’n) norm (Theorem 3.4.7) and the sum of superoptimal 52 singular values (Theorem 3.4.13). The latter result characterizes the smallest number k for which Atracetvtcrlltcndmto equals the sum of all non-zero superoptimal singular values for some function ‘1! E AZ’m. These results serve as partial solutions to problems 2 and 3. Lastly, in Section 3.5, we restrict our attention to unitary-valued very badly ap- proximable matrix functions. For any such matrix function U, we provide a repre- sentation of any function \I' for which the formula f, trace>dm Ag), where Q E [fl/(Spin) and Aq)(‘l/) = Atrace(Q(()Q(C))dm(C) for \I! E Lq(Sgl’n). 
In particular, it follows that the annihilator of Hq(Sm’n) in Lq(Sm’n) is given by p P I nm Hg (Sp; ), and so diStLq(S;n’n)(q)’ H‘1(S;”.n)) = “‘1’” Ilnaim S1[frtrace(Q(C)\IJ(C))d'm(C) . Hg (Sp; > by our remarks at the beginning of this section. Moreover, if 1. < q < 00, then Q E Lq(S;,n’n) has a best approximant Q in H “33"") (as Lq(S[,”’n) is reflexive); that is, [[(I) — QIILq(S;)n.n) = diStLq(S;)n,n)((I), Hq(S;nrn)). The situation is similar in the case of L°°(S;,nn). Indeed, L°°(S;n’n) is a dual 54 space, and so there is a Q E HOO(S,TDn’n) such that [[(l) — QIILOO(S;nvn) = dlStLOO(S;)n.n)((I3,1710005311”) Again, it also follows from our remarks at the beginning of this section that dist (<1>, H°O(S;,”‘”)) = sup f1F tracer<1>dm<<> . Lousy") However, an extremal function may fail to exist in this case even if Q is a scalar-valued function. An example can be deduced from Section 1 of Chapter 1 in [Pe1]. 3.3 ok(Q) as the norm of a Hankel-type operator and k-extremal functions We now introduce the Hankel-type operators Hg} which act on spaces of matrix functions. We prove that the number ak(Q) equals the operator norm of H g} and characterize when H g} has a maximizing vector. We begin by establishing the following lemma. Lemma 3.3.1. Let 1 S k S min{rn,n}. If\Il E H1(Mn,m) is such that rank \IJ(C) = k for a.e. C E T, then there are functions R E H 2(Mn,k) and Q E H 2(Mlk‘m) such that R(C) has rank equal to k for almost every C E T, w = so and Ilerontn, = Ilotont,_m = “nonmmm for c e 11. Proof. Consider the set 32/ = closLl<<>R<<>odm31 '11 = SUP diSt m.k (@R1H2(Mm,k)) IIRIIH2(Mn kl£1 L2(Sl A ) _ {k} ’ ”H‘I’ ”Hyman—+1.2(sT’k>/H2L2(S[n’k)/H2(S[n’k) Suppose \II is a k-ext.remal function for Q. Let j E N be such that j S k and rank \II(C) = j for a.e. CE T. By Lemma 3.3.1, there is an R E H2(M ) and a Q E Halt/Him) such that 72,] v = so and “3(4)”me = IIQ(<)llfvi,-,m = ”more... for 46 it As before, adding zeros if necessary, we obtain it x k and k x m matrix functions R#=(R O) and Q#= Q , O 58 respectively, so that \I! = R#Q# and nonontm = Hotelltjm =11\II(C>IIM,,,.,. for a.e. (6 ii Let us show that R# is a maximizing vector for H g} . Since Q# belongs to H8 (Mhm), we have that for any F E H2(S[n’k) meet <1>=dm<< <)Q#(<))dm(<)- and so and») = [A trace((¢R# — F>dmR# — F)(C)Q#(Cllls§ndm(0 s [T ”(we — F1<<>115Ti11o#11M,,mdmR. H (51” )) = “H; l” By the remarks in Section 3.2, there is a function G E H8(Mk,m) such that ||G||L2(Mk m) S 1 and Atrace((1<>dm H3(Cm) is defined by Hq) f = P- (I) f for f E H2013”). The following is an immediate consequence of the previous theorem when k = 1. Corollary 3.3.6. Let (I) E L°°(M.,,W). The Hankel operator Hg) has a mazz'mz'zz'ng vector if and only if (I) has a 1—extremal function. Proof. By Theorem 3.3.5, (I) has a l-extremal function if and only if the Hankel- type operator Hg} : H2013”) —> L2((Cm) / H 2(Cm) has a maximizing vector. The conclusion now follows by considering the “natural” isometric isomorphism between the spaces H3 (cm) = L210") :3. H2(cm) and L2(cm)/H2(cm). 1:1 Remark 3.3.7. It is worth mentioning that if a matrix function (I) is such that the Hankel operator H¢, has a maximizing vector (e.g. (I) E (H00 + C)(Mn)), then any 60 l-extremal function \I' of ‘1’ satisfies [Ttrace1mc1w101dm1o = 111111.11 =to1<1>1 This is a consequence of Corollary 3.3.6 and Theorem 3.3.3. Remark 3.3.8. There are other characterizations of the class of bounded matrix func— tions (I) such that the Hankel operator H4, has a maximizing vector. 
These involve “dual” extremal functions and “thematic” factorizations. We refer the interested reader to [Pe2] for details. Corollary 3.3.9. Letl S k S t g n and@ E L°°(Mn). Suppose that ok() = og(). Recall that 0n((1)) = dlStLoo(S’il)((p, HOO(Mn)) and the distance on the right-hand side is in fact always attained, i.e. a best ap- proximant Q to (I) under the L°°(S’f) norm always exists as explained in Section 3.2. 61 Theorem 3.4.1. Let (I) E LOO(M and 1 g k g n. Suppose Q is a best approximant n) to (I) in H°0(Mn) under the L°0(S?)- -norm. If the Hankel-type operator Hg} has a maximizing vector .7: in H2(Mn.k) and ok() = on() holds for a.e. C E T, and 4. sj(( — Q)(C)) = 0 holds for a.e. C E T wheneverj Z k. Proof. By our assumptions, 1111.l.”11211n|i«.»M , =11”. Hi”? $211 11,1; L2(s’1"‘*k)/H2(s?1k) = 1111111» — 121171112 __ 2 z _ 2 51111» Q)f|IL2(S11.,1.) [11 1111 Q)(<)F(C)IIS?,kdm(C) = WWW“2 s [11 111<1> — 121101;?11r _<_ 111 — 1211300155.,11f11ig = 01.111 111E11L2M (Malt) nkl It follows from‘Theorem 3.3.3 that all inequalities are equalities. In particular, we obtain that Q? is a best approximant to CDQ under the L2(S?’k)—norm since the first inequality is actually an equality. For almost every C E T, 11<<1 — Q)(C)f(C)|ls111 = 111<1 — Q11<111s11111f1<111M,,, and (3.4.11 “(CF - Q)(C)l|svf = ”‘9 - QHLOO(S111) = 014(9), 62 because the second and third inequalities are equalities as well. It follows from (3.4.1) that for each 3' 2 0, 5.1111 — o11<1r1<11 = 3.111 -— 121111111f1<111MM for <6 11. We claim that ifj Z k, then say-((4) — Q)(()) = O for a.e. C E T. By Theorem 3.3.5, we can choose a k-extremal function, say ‘11, for (I). Since ‘1! belongs to Hall/ll”), 01.1111— / 1ra<=e11>1<111<11dm1<1 = [11 1race11<1 — Q11<111c11dm1c1 — T s /T111<1—Q11<1111<1115111dm1<1s A1111—1211111151111111(111M..dm1c1 S “‘1’ - Q||L<>0(S?f)|l‘1’HL1(Mn) S ”‘1’ - QHL°°(S?) = 01(4)). and so all inequalities are equalities. It follows that 11race11<1> — 1211111111111 = 1111» — 12110115111 11111111111. for 4e 11. 13.4.21 In order to complete the proof, we need the following lemma. Lemma 3.4.2. Let A E Mn and B E Mn. Suppose that A and B satisfy ltl‘aCGMB)! = llAlanllBllsf- If rank/l g k, then rankB _<_ k as well. We first finish the proof of Theorem 3.4.1 before proving Lemma 3.4.2. It follows from (3.4.2) and Lemma 3.4.2 that rank(( — Q)(()) S k for a.e. C E T. 63 In particular, if j Z k, then sj(( — Q)(C)) = O for a.e. C E T, and so k—l Z) s111<1> — Q)(<)) = 11111 — 12111111531: 01.1111 for c e '1. 1:0 This completes the proof. El Remark 3.4.3. Lemma 3.4.2 is a slight modification of Lemma 4.6 in [BN P]. Although the proof of Lemma 3.4.2 given below is almost the same as that given in [BNP] for Lemma 4.6, we include it for the convenience of the reader. Proof of Lemma 3.4.2. Let B have polar decomposition B = UP and set C 2 AU, where P = (B*B)1/2. Let 61,. . . ,en be an orthonormal basis of eigenvectors for P and Pej— — Aj -je .It IS easy to see that the following inequalities hold: 71 n ltrace(AB)| = |trace(CP)| = 2(Pej,C*eJ-) = ZAJ(e n n n = 21,105,,123-1 3:1,- [(Cej,e-1|_<_ZA,-||cej|1 i=1 j= = j=1 On the other hand, TL 11A11M..11B1151,1 = 11011Mm11P11s1 = 11011111.. 211 1:1 64 and so, by the assumption ltrace(AB)| = llAHMnllBHSTfa it follows that Tl- n 2111111111 = 11011111.. 211-- j:1 j=1 Therefore Aj||Cej|| = “CHMnAj for each j. However, if rankA g k, then rankC g k. 
Thus there are at most k vectors ej such that “Ce,” 2 llClan- In particular, there are at least n — k vectors ej such that HCejH < ”CHMn- Thus, Aj = 0 for those n — k vectors ej, rankP g k, and so rankB g k. C] Remark 3.4.4. Note that the distance function dq) defined on T by c1111) d=ef111<1> — 1211111151 equals ok(Q) for almost every C E T and is therefore independent of the choice of the best approximant Q. This is an immediate consequence of Theorem 3.4.1. A similar phenomenon occurs in the case of matrix functions Q E Lp(Mn) for 2 < p < 00. We refer the reader to [BN P] for details. Corollary 3.4.5. Let Q E L°O(Mn) be an admissible matrix function and 1 g k S n. If the Hankel-type operator Hg} has a maximizing vector and ok(Q) = on(Q), then k—l 11—1 ZSM‘I’ - Q)(C)) S ZM‘N 3'20 3:0 for any best approximation Q of Q in H 00 (Mn) under the LO°(Sf)—norm. Proof. This is an immediate consequence of Theorems 3.1.1 and 3.4.1. [:1 Definition 3.4.6. A matrix function Q E L°°(Mn) is said to have order t if t is the smallest number such that H g} has a maximizing vector and OAT) = dlStLC)O(S’iI)(q), HOO(Mn)). 65 If no such number 5 exists, we say that Q is inaccessible. The interested reader should compare this definition of “order” with the one made in [BNP] for matrix functions in LP(Mn) for 2 < p < 00. Also, due to Corollary 3.3.9, it is clear that if Q E L°°(Mn) has order A, then the Hankel-type operator H gc} has a maximizing vector and O’k((1))= dlStLoo(S711)(q), HOO(Mn)) holds for each k _>_ 6. Theorem 3.4.7. Let Q E L°°(Mn) be an admissible matrix function of order k. The following statements are equivalent. 1. Q E H 0° is a best approximant to Q under the L°°(Sf)-norm and the functions (HSj((‘1’-Q)(C))1 051's k-l. are constant almost everywhere on T. 2. Q is the superoptimal approximant to Q, t j(Q) = 0 for j _>_ k, and 0k(q)) = t0(q3) + . . . + tk__1(q)). Proof. we first prove that 1 implies 2. By Corollary 3.4.5, we have that, for almost every C E T, k—l k—l k—l 11—1 Zsj11<1> — Q)(<)) s 211-111): Zesssusz-(a — Q)(C)) = 211111 — Q)(C))- j=0 j=0 1:0 1:0 £611" 66 This implies that 1ND) = 6882161? 81% — 62116)) = 3,1111 - (2)10) for 0 S j S k — 1. Q E Qk_1(Q), and k-l k-l 211-11»): 2 sj11<1> — Q1111) = 01.111. 1:0 1:0 Moreover, Theorem 3.4.1 gives that sj((Q - Q)(C)) = 0 a.e. on T for j Z k, and so tj(Q) = 0 for j 2 k, as Q E Qk_1(Q). Hence, Q is the superoptimal approximant to Q. Let us show that 2 implies 1. Clearly, it suffices to Show that if 2 holds, then Q is a best approximant to Q under the L°°(S?)—norm. Suppose 2 holds. In this case, we must have that k—1‘ k—l 01.111: 211-11): 2 sj11<1> — Q11111=11<1> - 1211100131111- ,'=0 ,'=0 Since Q has order k, it follows that 0111(1)) = ”‘1’ - Q||L00(s?1 and so the proof is complete. [3 For the rest of this section, we. restrict ourselves to admissible matrix functions Q which are also very badly approximable. Recall that, in this case, the function C 1——> sj(Q(C)) equals tj(Q) a.e. on T for O _<_j S n — 1, as mentioned in Section 1.1. The next result follows at once from Theorem 3.4.7. Corollary 3.4.8. Let Q be an admissible very badly approximable n x n matrix func- tion of order k. The zero matrix function is a best approximant to Q under the 67 L°O(S'f)-norm if and only if tJ-(Q) = 0 forj 2 k and ok(Q) = t0(Q) + . . . + tk_1(Q). It is natural to question at this point whether or not the collection of admissible very badly approximable matrix functions of order k is non-empty. 
It turns out that one can easily construct examples of admissible very badly approximable matrix functions of order k (see Examples 3.4.14 and 3.4.15). Theorem 3.4.10 below gives a simple sufficient condition for determining when a very badly approximable matrix function has order k. We first need the following lemma. Lemma 3.4.9. Let Q E L°O(Mn). Suppose there is \I! E A? such that Atrace1<1>111111111dm111= 1111110015321- Then \11 is a k-extremal function for Q, ok(Q) :2 on(Q), and the zero matrix function is a best approximant to Q under the L°O(Sf')-norm. Proof. By the assumptions on ‘11, we have H‘PllLoqs?) = AtracelquWKdeM) S 012(4))- On the other hand, 011(4)) S diStLOO(Slll)(q)1Hool S llq’HLoqs?) always holds. Since all the previously mentioned inequalities are equalities, the con- clusion follows. E] Theorem 3.4.10. Let Q E Loom/fin) be an admissible very badly approximable matrix 68 function. Suppose there is \11 E A? such that Atrace11>111111111dm111 = 10111 + . . . + 1,411.). Iftk_1(Q) > 0, then Q has order k and the zero matrix function is a best approximant to Q under the L°O(Sf)-norm. Proof. By the remarks preceding Corollary 3.4.8, it is easy to see that It follows from Lemma 3.4.9 that \I/ is a k-extremal function for Q, ok(Q) = on(Q), and the zero matrix function is a best approximant to Q under the L°O(S'f)—norm. Thus ||Q||Loo(svli) = GHQ). Moreover, by Theorem 3.1.1, Uk—1((I’) S 1061’) + - - - +tk—2(‘P) < t001)) +---+ tic—1(4)) S ll‘I’llLoqs’f) Therefore ok_1(Q) < ok(Q). Cl Remark 3.4.11. Notice that under the hypotheses of Theorem 3.4.10, one also obtains that tk_1(Q) is the smallest non-zero superoptimal singular value of Q. This is an immediate consequence of Corollary 3.4.8. We now formulate the corresponding result for admissible very badly approximable unitary-valued matrix functions. These functions are considered in greater detail in Section 3.5. Corollary 3.4.12. Let U E L°O(Mn) be an admissible very badly approximable unitary-valued matrix function. If there is ‘11 E Ag such that h 1race1U1111<11111m111 = n. 69 then U has order n and the zero matrix function is a best approximant to U under the L°°(S?)-norm. Proof. This is a trivial consequence of Theorem 3.4.10 and the fact that tj(U)—-=1forO§an—1. D We are now ready to state the main result of this section. Theorem 3.4.13. Let Q be an admissible very badly approximable n X n matrix function. The following statements are equivalent: 1. k is the smallest number for which there exists \II E A2 such that h 1race11>11111111dm111 = 1011) + . . . +1.._11<1>). 2. Q has order k, tJ-(Q) = 0 forj 2 k and (11,111)) = 101111 +...+t,,_1(_ 0 : there exists a \II E A? such that h trace1<1111<11111dm111 =to1<1>1 +... +1.._11<1>1) Clearly, k(Q) may be infinite for arbitrary Q. Suppose k = k(Q) is finite. Then Lemma 3.4.9 implies that Q has a k—extremal function, 0,;(Q) = on(Q), and the zero matrix function is a best approximant to Q under the L°°(S?)-norm. In particular, Q has order k S k(Q), tj(Q) = 0 for j Z k, and 0161’) =tol‘1’)+m+t1~.-1(¢)e 70 by Corollary 3.4.8. On the other hand, if Q has order k, tJ-(Q) = 0 forj 2 k, and 01.11) = 10111+ ...+11_11<11 then Q has a k-cxtremal function ‘1’ E A}: such that AtracelmCl‘I’lCdeK) = 011(2) = t0(‘1’)+--- +tk—1(<1’)- Since t j(Q) = 0 for j _>_ k, it follows that [11 trace(Q(C)\IJ(C))dm(C) =1011>1 +...+t,,_1(<1>1. Thus m(Q) _<_ k. Hence, if either k(Q) is finite or Q satisfies 2, then k = m(Q). 
D We end this section by illustrating existence of very badly approximable matrix functions of order k by giving two simple examples; a 2 X 2 matrix function of order 2 and a 3 x 3 matrix function of order 2. Example 3.4.14. Let It is easy to see that Q is a continuous (and hence admissible) unitary-valued very badly approximable matrix function with superoptimal singular values t0(Q) = t1 (Q) = 1. We claim that Q has order 2. Indeed, the matrix function satisfies /Ttrace(Q(C)‘Il(())dm(o = 2’ and so Q has order 2 by Corollary 3.4.12. Example 3.4.15. Let t0 and t1 be two positive numbers satisfying to 2 t1. Let tofu G) (ll (D: o 112’) o o o o where a and b are positive integers. It is easy to see that Q is a continuous (and hence admissible) very badly approximable matrix function with superoptimal singular val- ues t0(Q) = t0, t1(Q) = t1, and t2(Q) = 0. Again, we have that Q has order 2. After all, the matrix function 16‘ ll © © 19 © No. © © © © satisfies [T trace1<1>111<111))dm11) =10 +11 = 10111+111<1>1 +1211), and so Q has order 2 by Theorem 3.4.10, since t1(Q) = t1 > 0. 3.5 Unitary-valued very badly approximable ma- trix functions We lastly consider the class an of admissible very badly approximable unitary—valued matrix functions of size n x n and provide a representation of any n-extremal function 72 \I! for a function U E Lin such that [Etrace(U(C)\IJ(C))dm(C) = t0(U) + . . . + tn_1(U) (3.5.1) holds. Note that for any such U we have that tj(U) = 1 for 0 g j S n — 1. Recall that, for a matrix function Q E L°°(Mm,n), the Toeplitz operator Tq, is defined by m = 111,111, for f 6 H2113”). where 11%. denotes the orthogonal projection from L2( O. In particular, the Toeplitz operator T detU is Fredholm and we refer the reader to Chapter 14 in [Pe1] for more information concerning functions in U”. Theorem 3.5.1. Suppose U E Lin has an n-extremal function \II such that (3.5.1) holds. Then \I/ admits a representation of the form where h E H 2 is an outer function such that ||h||L2 = 1 and O is a finite Blaschke- Potapov product. Moreover, the scalar functions det(U€)) and trace(UO) are admis- sible badly approximable functions that admit the factorizations Tl det(UO) = Z"% and trace(U€-)) = n5 Bil 3" 73 Proof. It follows from (3.5.1) that all inequalities in (3.1.3) are equalities and so tvratc<=r(U(C)‘1’(C ))— Ill/'(Cl‘1’(C)llsn =nll‘1’(C)||Mn (3-5-2) holds for a.e. C E T. Since U is unitary-valued, then HU(C)‘1’(C)||5711=||‘1’(C)||s?. and so 1111111113111 = 111111111111, must hold for a.e. C E T. Therefore 81(‘1’(C))=||‘I’(C)I|Mn for 21-6 C 6 T.0 S j S n — 1. By the Singular Value Decomposition Theorem for matrices (or, more generally, the I Schmidt Decomposition Theorem), it follows that 111) = 11111111MnV11) for a.e. 1 e 11. 13.5.31 for some unitary-valued matrix function V. Let h E H 2 be an outer function such that 1 2 1111111= 1111111 on '1‘ Consider also the matrix function - d=ef2q1 It follows from (3. 5. 3) that 1 2“,,“ )_—T4(\I’* ‘1’)(0 = In for a.e. C E T, 74 and so E is an inner function. Thus \II admits the factorization \II = zh2O for some n x n unitary-valued inner function 9 and an outer function h E H 2 such that ”My = 1. Note that the first equality in (3.5.2) indicates that the scalar function 1;: (1294f trace(UO) satisfies 2h2p = n|h|2 on T, or equivalently DID” 95:77.2 Moreover, HHUelle g HHUlle < 1, hence “Hgolle < n = ||H¢|| implying that 1,: is an admissible badly approximable scalar function on T. 
We conclude that the Toeplitz operator TCP is Fredholm and ind T90 > 0 (of. Theorem 7.5.5 in [Pe1]). Returning to (3.5.2), it also follows that each eigenvalue of U(C)\II(C) equals ||\1/(C)|[Mn = |h(C)|2 for a.e. CE T . In particular, 11111112" = det 1111111111 = 1znh2")111-det 1111111111911) 1 . holds a.e. C E T. By setting 19 dzef det G and u dz?! det U, we have that u admits the factorization de where 1.12 —- f Eh/h = 90/ n. Since the Toeplitz operator Tw is Fredholm with positive 75 index, Tux" is Fredholm as well. Since ker T9 = {O} and no?” = 6—, then dim(H2 @ 6H2) = dim ker T5 = dim ker Tg = ind T3 < 00 and so 6 is a finite Blaschke product. The conclusion follows from the well-known lemma stated below. [3 Lemma 3.5.2. If G is a unitary-valued inner function such that detO is a finite Blaschke product, then 9 is a Blaschke-Potapov product. Proof. Let 19 = det 9. It. is easy to see that 8*6 is an inner function. Since B dzef 61,1 is a finite Blaschke-Potapov product and BH2(C") C 9H2(Cn), then 9H2(C”) has finite codimension, and so 9 must be a finite Blaschke—Potapov product. D Corollary 3.5.3. Suppose U E U2 has a 2-extremal function \I! such that (3.5.1) holds. If U is a rational matrix function such that ind TU = 2, then G is a unitary constant on T. Proof. Due to the results of [PYl], U admits a (thematic) factorization of the form U71 -’wg uO (U) 271 ’52 1112 wl O 111 —vg v1 where v1, vg, wl and 11.12 are scalar rational functions such that. lvll2 + I112]2 = lwll2 +|u12|2 = 1 a.e. on T, v1 and ’02 have no common zeros in the unit disk ll), wl and 2.02 have no common zeros in ID), and no and 111 are scalar badly approximable rational unimodular functions on T. These results may also be found in Sections 5 and 12 from Chapter 14- of [Pe1]. Suppose Q = thO is an n-extremal function for U such that (35.1) holds as in 76 the conclusion of Theorem 3.5.1. Assume, for the sake of contradiction, that O is not a unitary constant. Since uj is a scalar badly approximable rational unimodular function on T, it admits a factorization of the form uj = cjf where cj is a unimodular constant, the function hj is H 00-invertible, and kj = ind Tuj, for j = O, 1. In particular, we have as k0 + k1 = ind TU = 2, where 6 (13f detO and u d=ef det U. On the other hand, by Theorem 3.5.1, and so the function h2h6 lhfl and its conjugate 71.2 h? 110111 6001 11011.1 belong to H1. Therefore h2h61h1- 1 equals a constant and so 6 equals a constant as well. Thus, the conclusion follows from the fact that 69* is an inner function. [3 We end this section with an example to illustrate some of our main results. Example 3.5.4. Consider the matrix function © N NI © S ,_. [Q l 1—1 z 1—1 M 3| r N l 1...: NI Clearly, U belongs to big and it has superoptimal singular values t0(U) = t1( U) = 1. We ask the question, is there a 2—extremal function \11 for U such that (3.5.1) holds with n = 2? Let us assume for the moment that such a function \1/ exists. In this case, Corollary 3.5.3 implies that. \I! must be of the form ‘11 = zh29. where is a unitary constant and h is an outer function in H 2 such that “h“ L2 = 1. Since -2712 -2 z h§ = det(UO) = z (ad -~ be), it is easy to see that h2 and its conjugate belong to H1, and so h2 is a constant of modulus 1. Relabeling the scalars a, b, c, and d, we may assume that h2 equals 1 a.e. on T. Thus, 25 = trace(U(C)9(C)) = % (aC + 0C2 — b + d1) holds for a.e. C E T, and so b = c = 0 and a + d = 21/2. 
However, Θ is unitary-valued, so it must be the case that |a| = |d| = 1, and therefore

2√2 = a + d = |a + d| ≤ |a| + |d| = 2,

which is a contradiction. Thus no such Ψ exists. In particular, by Theorem 3.4.13, either U does not have order 2 or σ₂(U) < t₀(U) + t₁(U) = 2.

Actually, we have already shown that the zero matrix function is not a best approximant to U under the L^∞(S₁²)-norm, i.e. σ₂(U) < 2. Indeed, we have

dist_{L^∞(S₁²)}(U, H^∞(M₂)) < t₀(U) + t₁(U) = ||U||_{L^∞(S₁²)} = 2

by (3.1.7). We now ask: does U have order 1, does it have order 2, or is U inaccessible?

It is clear that U has a 1-extremal function by Remark 3.3.7. In fact, it is easy to check that the matrix function

Ψ₁ = (1/√2) ( z  0 ; z²  0 )

defines a 1-extremal function for U and

σ₁(U) = ∫_T trace(U(ζ)Ψ₁(ζ)) dm(ζ) = ||H_U|| = t₀(U) = 1.

However, U does not have order 1. Indeed, one can see that the matrix function

Ψ* = ( z  0 ; 0  z )

belongs to H₀¹(M₂), satisfies ||Ψ*||_{L¹(M₂)} ≤ 1, and

1 < √2 = ∫_T trace(U(ζ)Ψ*(ζ)) dm(ζ) ≤ σ₂(U).

Therefore, either U has order 2 or U is inaccessible. This matter requires further investigation.

Chapter 4

An index formula in connection with meromorphic approximation

Let φ be an (essentially) bounded measurable function defined on the unit circle T. Recall that, for k ≥ 0, H^∞_{(k)} denotes the collection of meromorphic functions in D which are bounded near T and have at most k poles in D (counting multiplicities). The Nehari-Takagi problem is to find a q ∈ H^∞_{(k)} which is closest to φ with respect to the L^∞-norm, i.e. to find q ∈ H^∞_{(k)} such that

||φ - q||_∞ = dist_{L^∞}(φ, H^∞_{(k)}) = inf{ ||φ - f||_∞ : f ∈ H^∞_{(k)} }.

Any such function q is called a best approximation in H^∞_{(k)} to φ. Although uniqueness of a best approximation in H^∞_{(k)} need not hold in general, if φ satisfies ||H_φ||_e < s_k(H_φ), then uniqueness does hold. Here H_φ denotes the Hankel operator with symbol φ, s_k(H_φ) is the kth singular value of H_φ, and ||H_φ||_e denotes the essential norm of H_φ (precise definitions will be given below). Moreover, under these assumptions, it can be shown that the function defined by u = s_k^{-1}(φ - q) has modulus 1 a.e. on T, the Toeplitz operator T_u is Fredholm, and

ind T_u = dim ker T_u = 2k + μ,    (4.0.1)

where μ denotes the multiplicity of the singular value s_k := s_k(H_φ) of the Hankel operator H_φ (e.g. see Chapter 4 in [Pe1]). In view of these results, it seems natural to ask whether analogous results hold in the case of matrix-valued functions.

Suppose Φ is a k-admissible n × n matrix function with superoptimal approximation Q in H^∞_{(k)}(M_n) and t^{(k)}_{n-1}(Φ) > 0. In this case, the matrix function G = Φ - Q is very badly approximable, the Toeplitz operator T_G is Fredholm, and ind T_G = dim ker T_G. Therefore, we are led to ask: is it true that

dim ker T_{Φ-Q} = 2k + μ?    (4.0.2)

Notice that the validity of (4.0.2) is well known when k = 0. Indeed, the left-hand side equals the sum of all thematic indices that correspond to the superoptimal singular value t₀(G) = ||H_G|| of the matrix function G, and the right-hand side equals the multiplicity of the singular value s₀(H_G) = ||H_G||.

Actually, for arbitrary k ≥ 0, it is easy to see that the dimension of ker T_{Φ-Q} is at most 2k + μ. After all, if this conclusion fails, then the singular value s₀(H_{Φ-Q}) = s_k(H_Φ) of the Hankel operator H_{Φ-Q} must have multiplicity strictly greater than 2k + μ, and so

s_k(H_Φ) = s₀(H_{Φ-Q}) = s_{2k+μ}(H_{Φ-Q}) ≤ s_{k+μ}(H_Φ) + s_k(H_Q) = s_{k+μ}(H_Φ) < s_{k+μ-1}(H_Φ) = s_k(H_Φ)

holds, because Q ∈ H^∞_{(k)}(M_n), a contradiction. Therefore, dim ker T_{Φ-Q} ≤ 2k + μ.
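To see the k = 0 case of this index count on a concrete symbol, one can run a small numerical check on the diagonal example G = diag(z̄³, z̄², z̄) of Example 2.4.7; the check below is an illustration added for the reader, not part of the original argument, and it assumes only NumPy. For a continuous matrix function that is invertible-valued on T, the Toeplitz operator T_G is Fredholm and ind T_G = -wind(det G); for this G the winding number of det G is -6, and since ker T_{G*} is trivial we get dim ker T_G = ind T_G = 6, which is also the common sum of every sequence of thematic indices listed in Example 2.4.7.

import numpy as np

# A small sanity check, assuming G is the diagonal example of Example 2.4.7,
# i.e. G(zeta) = diag(zeta^{-3}, zeta^{-2}, zeta^{-1}) on the unit circle T.
theta = np.linspace(0.0, 2.0 * np.pi, 4097)        # closed parametrization of T
zeta = np.exp(1j * theta)

def G(w):
    # unitary-valued, very badly approximable diagonal symbol
    return np.diag([w ** (-3), w ** (-2), w ** (-1)])

det_vals = np.array([np.linalg.det(G(w)) for w in zeta])

# Winding number of det G about 0: net increase of its argument divided by 2*pi.
arg = np.unwrap(np.angle(det_vals))
wind = int(round((arg[-1] - arg[0]) / (2.0 * np.pi)))

print(wind)     # -6
print(-wind)    # ind T_G = -wind(det G) = 6 = dim ker T_G = 3 + 2 + 1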
For k ≥ 1, however, the following example shows that equality in (4.0.2) may fail in general.

Example 4.0.5. Consider the 2 × 2 matrix function Φ given by

[2 × 2 matrix display not legible in the source]

It is not difficult to verify that the Hankel operator H_Φ has exactly six nonzero singular values s₀(H_Φ), s₁(H_Φ), ..., s₅(H_Φ), and that s₁(H_Φ) = s₂(H_Φ) = s₃(H_Φ) = 1, while s₀(H_Φ) > 1 > s₄(H_Φ) ≥ s₅(H_Φ) > 0. In particular, if μ denotes the multiplicity of the singular value s₁(H_Φ) = 1 of the Hankel operator H_Φ, then μ = 3, and with k = 1 we have 2k + μ = 5.

We now proceed to find the superoptimal approximation to Φ in H^∞_{(1)}(M₂) by using an algorithm due to Peller and Young (see [PY3]) and following the notation used in Section 17 of Chapter 14 in [Pe1]. Consider the vector functions f, g ∈ H²(C²) defined by

[displays of f and g not legible in the source]

It is easy to check that f is a Schmidt vector of H_Φ corresponding to the singular value s₁(H_Φ) = 1 and that g = z̄ \overline{H_Φ f}. Moreover, f and g admit inner-outer factorizations f = zv and g = z²w, where

[displays of v and w not legible in the source]

so that v and w admit thematic completions V and Wᵗ. Let

[display of Q# not legible in the source]

It is easy to check that

Wᵗ(Φ - Q#)V = diag( z̄⁴, ψ₂₂ ).    (4.0.3)

Set Ψ# = ψ₂₂. Now, it can be verified that one can choose the following functions in the algorithm:

[displays not legible in the source]

In particular, any matrix function Q ∈ Ω₀(Φ) must satisfy

Wᵗ(Φ - Q)V = diag( z̄⁴, q ),

where q = Ψ# + [expression not legible in the source, involving a function L ∈ H^∞] has L^∞-norm at most 1. Thus, the superoptimal approximation Q to Φ in H^∞_{(1)}(M₂) is determined by finding the best approximant in H^∞ to a single scalar function, and therefore the superoptimal approximation in H^∞_{(1)}(M₂) to Φ is given by

[display not legible in the source]

By (4.0.3), we can see that dim ker T_{Φ-Q} = 4 even though 2k + μ = 5. Hence, the equality in (4.0.2) may fail in general.
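Singular-value lists of the kind quoted in Example 4.0.5 can be verified numerically. With respect to the standard orthonormal bases, H_Φ is represented by the block Hankel matrix (Φ̂(-j-l-1))_{j,l≥0} built from the matrix Fourier coefficients of Φ; when the antianalytic part of Φ is a polynomial in z̄, this matrix has only finitely many nonzero entries, so a finite truncation already carries all of its nonzero singular values. The NumPy sketch below illustrates the computation; since the matrix function of Example 4.0.5 is not legible in this copy, the coefficients A1 and A2 are hypothetical stand-ins chosen only for illustration.

import numpy as np

# Hypothetical stand-in symbol (NOT the matrix function of Example 4.0.5):
# Phi(zeta) = A1 * conj(zeta) + A2 * conj(zeta)**2.
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])
A2 = np.array([[0.0, 0.5], [0.5, 0.0]])
coeff = {1: A1, 2: A2}          # coeff[j] = matrix coefficient of conj(zeta)**j
Z = np.zeros((2, 2))

# Block Hankel matrix (Phi-hat(-j-l-1))_{j,l} representing H_Phi.  The
# antianalytic part of Phi has degree 2, so N = 4 block rows and columns
# already capture every nonzero entry, hence every nonzero singular value.
N = 4
Gamma = np.block([[coeff.get(j + l + 1, Z) for l in range(N)] for j in range(N)])

s = np.linalg.svd(Gamma, compute_uv=False)
print(np.round(s[s > 1e-12], 6))      # the nonzero singular values of H_Phi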
BIBLIOGRAPHY

[AAK1] V.M. Adamyan, D.Z. Arov, and M.G. Krein. On infinite Hankel matrices and generalized problems of Carathéodory-Fejér and F. Riesz. Funktsional. Anal. i Prilozhen. 2:1 (1968), 1–19; English transl.: Functional Anal. Appl. 2:1 (1968).

[AAK2] V.M. Adamyan, D.Z. Arov, and M.G. Krein. On infinite Hankel matrices and generalized problems of Carathéodory-Fejér and I. Schur. Funktsional. Anal. i Prilozhen. 2:2 (1968), 1–17; English transl.: Functional Anal. Appl. 2:2 (1968).

[AP1] R.B. Alexeev and V.V. Peller. Badly approximable matrix functions and canonical factorizations. Indiana Univ. Math. J. 49 (2000), 1247–1285.

[AP2] R.B. Alexeev and V.V. Peller. Invariance properties of thematic factorizations of matrix functions. J. Funct. Anal. 179 (2001), 309–332.

[BNP] L. Baratchart, F.L. Nazarov, and V.V. Peller. Analytic approximation of matrix functions in L^p. To appear in J. Approx. Theory.

[F] P.A. Fuhrmann. Linear Systems and Operators in Hilbert Space. McGraw-Hill, New York, 1981.

[Ha] E. Hayashi. The solution sets of extremal problems in H^1. Proc. Amer. Math. Soc. 93, No. 4 (1985), 690–696.

[Kh] S. Khavinson. On some extremal problems of the theory of analytic functions. Uchen. Zap. Mosk. Univ. Matem. 144, No. 4 (1951), 133–143; English transl.: Amer. Math. Soc. Transl. (2) 32 (1963), 139–154.

[Ne] Z. Nehari. On bounded bilinear forms. Ann. Math. 65 (1957), 153–162.

[Pe1] V.V. Peller. Hankel Operators and Their Applications. Springer Monographs in Mathematics. Springer, New York, 2003.

[Pe2] V.V. Peller. Analytic approximation of matrix functions and dual extremal functions. To appear in Proc. Amer. Math. Soc.

[Pe3] V.V. Peller. Personal communication.

[PK] V.V. Peller and S.V. Khruschév. Hankel operators, best approximation and stationary Gaussian processes. Uspekhi Mat. Nauk 37:1 (1982), 53–124; English transl.: Russian Math. Surveys 37 (1982), 53–124.

[Po] S.J. Poreda. A characterization of badly approximable functions. Trans. Amer. Math. Soc. 169 (1972), 249–256.

[PY1] V.V. Peller and N.J. Young. Superoptimal analytic approximation of matrix functions. J. Funct. Anal. 120 (1994), 300–343.

[PY2] V.V. Peller and N.J. Young. Superoptimal singular values and indices of matrix functions. Int. Eq. Op. Theory 20 (1994), 350–363.

[PY3] V.V. Peller and N.J. Young. Superoptimal approximation by meromorphic functions. Math. Proc. Camb. Phil. Soc. 119 (1996), 497–511.

[PT1] V.V. Peller and S.R. Treil. Approximation by analytic matrix functions. The four block problem. J. Funct. Anal. 148 (1997), 191–228.

[PT2] V.V. Peller and S.R. Treil. Very badly approximable matrix functions. Sel. Math., New Ser. 11 (2005), 127–154.

[Tr1] S.R. Treil. The Adamyan-Arov-Krein theorem: a vector version. Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 141 (1985), 56–71 (Russian).

[Tr2] S.R. Treil. On superoptimal approximation by analytic and meromorphic matrix-valued functions. J. Funct. Anal. 131 (1995), 386–414.

[Y] N.J. Young. The Nevanlinna-Pick problem for matrix-valued functions. J. Operator Theory 15 (1986), 239–265.