TYPED CACHING AND SERVER PROBLEMS

By

Martin Douglas Porcelli

A THESIS

Submitted to Michigan State University
in partial fulfillment of the requirements
for the degree of

MASTER OF SCIENCE

Department of Computer Science

2007

ABSTRACT

TYPED CACHING AND SERVER PROBLEMS

By

Martin Douglas Porcelli

The k-server problem is a resource allocation problem in which k servers living on some metric space must move in order to satisfy a sequence of requests. Each request specifies some point in the metric space and is satisfied by moving one of the servers to that point. An algorithm controls the movement of the servers, and its goal is to minimize the total distance they travel. The caching problem is a special case of the k-server problem. In the typed k-server problem a request also specifies a set of servers called a type; this set contains the servers that are allowed to satisfy the request. Until recently no one had studied the typed k-server problem.

In this paper we study both the typed k-server problem and a typed version of the caching problem. We introduce a set of algorithms for the typed caching problem, called EP algorithms, that utilize reorganization. We give a lower bound on the competitive ratio of these algorithms and an upper bound on the competitive ratio of marking EP algorithms. We also give a lower bound on the competitive ratio of any deterministic algorithm for the typed k-server problem, and discuss a work function approach for solving it. Finally we show that parts of the proof that the work function algorithm is (2k − 1)-competitive [KP95] do not apply in the typed k-server problem.

TABLE OF CONTENTS

LIST OF FIGURES

1 Typed Caching
  1.1 Introduction
  1.2 Fully-Associative Caching
  1.3 Typed Caching
  1.4 Definitions
  1.5 Our Work
  1.6 Past Work
  1.7 States
  1.8 Cache Graph
  1.9 EP Algorithms
  1.10 A Lower Bound for EP Algorithms
  1.11 Marking EP Algorithms
  1.12 An Upper Bound for Marking EP Algorithms
  1.13 Conclusion

2 Server Problems
  2.1 Introduction
  2.2 Work Functions and the k-Server Problem
  2.3 Our Work
  2.4 The Typed k-Server Problem
  2.5 Definitions
  2.6 A Deterministic Lower Bound for the Typed k-Server Problem
  2.7 Work Functions and the Typed k-Server Problem
  2.8 Minimizers
  2.9 Conclusion

BIBLIOGRAPHY

LIST OF FIGURES

2.1 Minimizer values with respect to (2, {1})
2.2 Minimizer values with respect to (2, {1})

Chapter 1

Typed Caching

1.1 Introduction

Caches are an important part of modern computers. A cache is a small block of fast memory that sits between the processor and main memory. When a memory page is requested by the processor, the system first checks whether the page is currently in the cache. If the page is not present, a copy is put into the cache. Since the processor is much faster than main memory, it must normally wait a (relatively) long time for a requested page; with a cache, this wait is reduced for many requests.

1.2 Fully-Associative Caching

Most studies of caching have focused on fully-associative caches. In a fully-associative cache every page may be placed into every slot. This simplifies management of the cache: page location is unimportant, and all that matters is whether or not a page is in the cache.

Set-associative caches are the simplest generalization of fully-associative caches. In an m-way set-associative cache the memory pages and slots are divided into m equal-sized sets, call them P_1, P_2, ..., P_m and S_1, S_2, ..., S_m respectively. A memory page from P_i can only be placed into a slot from S_i. In this way the cache is actually a collection of m independent fully-associative caches. A k-way set-associative cache, where k is the number of slots, is called a direct-mapped cache.

1.3 Typed Caching

We focus on more complex caches called typed or restricted caches. A typed cache is any cache that cannot be reduced to a collection of independent fully-associative caches. Previous work on the typed caching problem has focused on so-called skew-associative caches and companion caches [Sez93, BETW03]. In an m-way skew-associative cache there are overlapping sets of size k/m. Every memory page is associated with exactly one of these sets, and a memory page can only be placed into a slot in its associated set. An (m, n)-companion cache pairs a direct-mapped main cache with m slots and a fully-associative companion cache with n slots. Victim caches are an example of companion caches.

1.4 Definitions

We now give a formal definition of a cache that applies to both fully-associative and typed caches.

Definition 1 (Cache). A cache is a triple (S, P, M) where S is a set of slots, P is a set of memory pages, and M is an association matrix. Here S has cardinality k and is equal to {s_1, s_2, ..., s_k}, P has cardinality l and is equal to {p_1, p_2, ..., p_l}, and M is a k × l matrix over {0, 1} with M_{i,j} = 1 whenever p_j can be placed into s_i.

We will use σ = σ_1, σ_2, ..., σ_n to denote a sequence of requests to memory pages. Here σ_i is the i-th memory page requested.

We can now formally define the typed caching problem. Given a cache (S, P, M) and a request sequence σ, an algorithm A must satisfy the requests while incurring the minimum possible cost. To satisfy the i-th request, A must ensure that σ_i is in the cache. Since the cache is finite, this may involve evicting some memory page from the cache. It may also involve moving memory pages between slots, a process dubbed reorganization.
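To make Definition 1 concrete, the following minimal sketch models a cache as a slot set, a page set, and a 0/1 association matrix. The class name, the data layout, and the example matrix are illustrative assumptions made for this sketch, not part of the thesis; the same matrix is reused in a later sketch.

    # A minimal sketch of Definition 1: a cache (S, P, M) with k slots and
    # l pages, where M[i][j] == 1 means page j may be placed into slot i.
    # Slots and pages are indexed from 0 here; all names are illustrative.

    class Cache:
        def __init__(self, num_slots, num_pages, assoc):
            self.k = num_slots        # |S|
            self.l = num_pages        # |P|
            self.M = assoc            # k x l matrix over {0, 1}
            self.slot_of = {}         # page -> slot it currently occupies

        def may_hold(self, slot, page):
            """True exactly when M[slot][page] = 1."""
            return self.M[slot][page] == 1

        def contains(self, page):
            """True when the page is somewhere in the cache."""
            return page in self.slot_of

    # A typed example: 4 slots, 6 pages, each page allowed in exactly 2 slots.
    M = [[1, 0, 1, 0, 1, 0],
         [0, 1, 0, 1, 0, 1],
         [1, 1, 0, 0, 1, 1],
         [0, 0, 1, 1, 0, 0]]
    cache = Cache(4, 6, M)
    print(cache.may_hold(0, 2))   # True: page 2 may occupy slot 0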
We define the cost incurred by an algorithm A when servicing a request to a memory page p as

    cost(A, p) = 1 + b·a   if p is not in the cache
    cost(A, p) = 0         otherwise

Here 0 ≤ b ≤ 1 is the cost of moving a single memory page from one slot to another, and a is the number of such moves made. The total cost incurred by A over a request sequence σ is

    cost(A, σ) = Σ_{i=1}^{n} cost(A, σ_i)

The algorithms we study are online algorithms; they possess no knowledge of future requests. The performance of such algorithms is often measured using the competitive ratio [ST85]. An algorithm A is said to be c-competitive if

    cost(A, σ) ≤ c · cost(A', σ) + c_0

for every algorithm A' and every sequence σ. Here c_0 is a constant that depends on the initial conditions of the problem. For randomized algorithms the competitive ratio is defined in terms of expected cost: a randomized algorithm A is said to be c-competitive if

    E[cost(A, σ)] ≤ c · cost(A', σ) + c_0

for every algorithm A' and sequence σ, where E[cost(A, σ)] is the expected cost of A on σ.
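As a small worked illustration of this cost model (and not an algorithm from the thesis), the helper below totals 1 + b·a over a request sequence, given for each request whether it faulted and how many reorganization moves were made. The event encoding is an assumption made purely for the example.

    # Total cost under the model of Section 1.4: a fault costs 1 plus b times
    # the number of slot-to-slot moves made to service it; a hit costs 0.
    # The (was_fault, num_moves) encoding is illustrative.

    def total_cost(events, b):
        """events: one (was_fault, num_moves) pair per request."""
        cost = 0.0
        for was_fault, num_moves in events:
            if was_fault:
                cost += 1 + b * num_moves
        return cost

    # A fault with 2 reorganization moves, a hit, and a plain fault.
    print(total_cost([(True, 2), (False, 0), (True, 0)], b=0.5))   # 3.0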
1.5 Our Work

We first define a class of algorithms for the typed caching problem called EP algorithms and provide a lower bound on the competitive ratio of these algorithms. We also define a subset of these algorithms called marking EP algorithms and give an upper bound on their competitive ratio. Both bounds depend on two parameters of the cache, k and d, which we define in Section 1.10.

1.6 Past Work

The case of a fully-associative cache with k slots has been well studied. Sleator and Tarjan [ST85] showed that every deterministic algorithm is at best k-competitive, and they proved that the algorithms LRU and FIFO achieve this bound. Fiat et al. [FKL+91] showed that every randomized algorithm is at least H_k-competitive. They also introduced the algorithm MARK, which is 2H_k-competitive in general and H_k-competitive when the number of memory pages is k + 1. The partitioning algorithm of Manasse et al. [MMS90] achieves the H_k bound in general.

Brehob et al. [BETW03, BETW01] were the first to study (m, n)-companion caches. They considered cases with or without bypassing (when a cache is not forced to add the last requested memory page) and with or without free reorganization. They showed that LRU is not competitive in any case and that FIFO is (m + 3n)-competitive when bypassing is allowed. They introduced the algorithm MCF and proved that it is (2n + 3)-competitive when bypassing is allowed. They also introduced the algorithm MCFB and proved that it is (2n − 2)-competitive when bypassing is not allowed. They showed that every deterministic algorithm is at least (2n + 2)-competitive when m = n + 1 and bypassing is allowed, and at least (n + 1)-competitive when m = n + 1 and bypassing is not allowed. Finally they gave a reorganizing variant of LRU, called RLRU, that is (2n + 3)-competitive when bypassing and free reorganization are allowed.

Fiat et al. [FMS02, MS04] further studied (m, n)-companion caches with free reorganization allowed. They showed that any deterministic algorithm is at least (2n + 1)-competitive and that every randomized algorithm is at least H_{2n+1}-competitive. They proved that any deterministic marking algorithm [BRIS91] is optimal. They also defined a subset of marking algorithms called type preference algorithms and gave two randomized type preference algorithms that are O(log n)-competitive.

Peserico [Pes03] independently studied the case of an arbitrary typed cache with parameters k and d in 2002. He proved a different lower bound of (k + 1)/2 on the competitive ratio of any EP algorithm. He also showed that EP variants of many traditional caching algorithms are k(1 + bd)-competitive, and that an EP variant of the algorithm MARK is 2(H_k + 1)(1 + bd)-competitive. Finally he gave a non-reorganizing algorithm called TOUR that is 2k-competitive.

1.7 States

Our definitions and proofs will require some notion of state.

Definition 2 (State). Let C = (S, P, M) be a cache. Define state(C, t) : P → S ∪ {φ} by

    state(C, t)(p) = s   if p occupies slot s at time t
    state(C, t)(p) = φ   if p is not in the cache

1.8 Cache Graph

Our algorithms will require a second notion of state, the cache graph.

Definition 3 (Cache Graph). Let C = (S, P, M) be a cache. Define G(C, t) as the digraph with P as its nodes and with a directed edge (p, q) whenever state(C, t)(q) ≠ φ, p ≠ q, and p can be placed into state(C, t)(q).

Certain paths in a cache graph define a series of actions that can be taken to add a memory page to the cache. For example, let v_1, v_2, ..., v_n be a path in G(C, t) with state(C, t)(v_1) = φ. Then v_1 can be added to the cache using the following procedure.

    Evict v_n
    for i ← n − 1, n − 2, ..., 1 do
        Move v_i to state(C, t)(v_{i+1})
    end for

We call this procedure evicting along a path from v_1 to v_n.

Definition 4 (Eviction Tree). Let C be a cache. Define D(C, v, t) to be the set of nodes in the directed tree of G(C, t) rooted at v, excluding v itself.

1.9 EP Algorithms

We now define a class of algorithms for the typed caching problem called eviction path, or EP, algorithms, because they evict along paths.

Definition 5 (EP Algorithm). Let C be a cache. On a request at time t to a page v that is not in the cache, an EP algorithm chooses a page w ∈ D(C, v, t) and evicts it along some path from v.

Note that this class includes such algorithms as LRU and FIFO. These algorithms do not take advantage of reorganization; they always evict along paths consisting of a single edge. As a result, there are caches for which LRU is not competitive and for which the performance of FIFO is poor [Pes03, BETW03]. There exist variants of LRU and FIFO that take full advantage of reorganization, and we will show that these variants perform quite well in the general case.

Definition 6 (EPLRU). Let C be a cache. On a request at time t to a page v that is not in the cache, EPLRU chooses the least recently requested page w ∈ D(C, v, t) and evicts it along the shortest path from v.

Definition 7 (EPFIFO). Let C be a cache. On a request at time t to a page v that is not in the cache, EPFIFO chooses the page w ∈ D(C, v, t) that was put into the cache earliest and evicts it along the shortest path from v.
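The following self-contained sketch turns the cache graph, the evicting-along-a-path procedure, and EPLRU into runnable form. The data layout (an association matrix plus a page-to-slot map), the breadth-first search used to find shortest eviction paths, and the demo request sequence are illustrative assumptions rather than the thesis's own implementation.

    # A sketch of Definition 3 (cache graph), Section 1.8 (evicting along a
    # path), and Definition 6 (EPLRU).  M[slot][page] == 1 iff the page may
    # occupy the slot; `slot_of` maps each cached page to its slot.
    # The data layout and names are illustrative.

    from collections import deque

    def cache_graph(M, slot_of, num_pages):
        """Directed edges (p, q): q is cached, p != q, and p fits q's slot."""
        edges = {p: [] for p in range(num_pages)}
        for p in range(num_pages):
            for q, slot in slot_of.items():
                if p != q and M[slot][p] == 1:
                    edges[p].append(q)
        return edges

    def shortest_paths_from(edges, r):
        """BFS from the requested page r.  The keys of the returned parent
        map other than r are exactly D(C, r, t)."""
        parent = {r: None}
        queue = deque([r])
        while queue:
            u = queue.popleft()
            for v in edges[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        return parent

    def evict_along_path(slot_of, parent, requested, victim):
        """Evict `victim`, shift each intermediate page into the slot freed
        before it, and place `requested` in the slot freed last.  Returns the
        number of slot-to-slot moves (the `a` in the fault cost 1 + b*a)."""
        moves = 0
        free_slot = slot_of.pop(victim)          # the eviction itself
        page = parent[victim]
        while page != requested:                 # shift v_{n-1}, ..., v_2
            free_slot, slot_of[page] = slot_of[page], free_slot
            page = parent[page]
            moves += 1
        slot_of[requested] = free_slot           # finally place the request
        return moves

    def eplru_request(M, slot_of, num_pages, last_used, requested, now, b):
        """Serve one request with EPLRU and return the cost it incurs."""
        last_used[requested] = now
        if requested in slot_of:
            return 0.0                           # hit
        edges = cache_graph(M, slot_of, num_pages)
        parent = shortest_paths_from(edges, requested)
        candidates = [p for p in parent if p != requested]   # D(C, v, t)
        victim = min(candidates, key=lambda p: last_used.get(p, -1))
        return 1.0 + b * evict_along_path(slot_of, parent, requested, victim)

    # Demo on the 4-slot, 6-page association matrix of the earlier sketch.
    M = [[1, 0, 1, 0, 1, 0],
         [0, 1, 0, 1, 0, 1],
         [1, 1, 0, 0, 1, 1],
         [0, 0, 1, 1, 0, 0]]
    slot_of, last_used = {0: 0, 1: 1, 2: 3, 4: 2}, {}
    total = sum(eplru_request(M, slot_of, 6, last_used, p, t, b=0.5)
                for t, p in enumerate([5, 2, 3, 1]))
    print(total)   # 3.5: three faults, one of which needs one extra move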
,2)", is also a path in C(C, t) and i is maximal. It follows that e,, 123-, Lia-+1, . . . ,2)", is still a path in C(C,t+1). Thus en, e,,_1, . . . ,ei,vj,vj+1, . . . ,vm is a walk in C(C,t+1). Cl Definition 8 (same). Let C be a cache and A1,A2,...,Ak,Ak+1 algorithms. The property same(t,A1,A2, . . . ,Ak,Ak+1) is true for a time t if there exists a sequence of memory pages v1,u2,. . . ,‘Uk,vk+l where state(C, t)(u,~) = (,6 and u,- + D(C, '11,, t) = {15,132, . . . ,uk,uk+1} for each A,» Theorem 1. Let C be a cache and A an algorithm. Then A is at least lat/(1 + bd)- competitive. Proof. The proof is by induction. We show that for a certain request sequence and adversaries BI, 32, . . . , Bk that same(t, A, BI, 32, . . . , Bk) is true at every time t. The 7 inductive step will detail every action taken by the algorithms to satisfy the requests. We will use this to count the costs they incur. The request sequence a is chosen such that state(C,1)(ol) = (b for A and at, t > 1, is the memory page evicted by A at time t — 1. We assume WLOG that 01+ D(C,01, 1) = {01,221, 22-2, . . . ,uk} for A. The initial state of C for each adversary is such that same(1, A, B1, B2, . . . , Bk) holds for the sequence 01, 'L'1,1.72, . . . , vk. We initialize C for each B,- exactly as we did for A, except that we also evict 12,- along the shortest path from 01. It follows from lemma. 1 that v,- + D(C,v,-, 1) = {01,u1,v2, . . . ,vk} for each B,. The actions of the algorithms assure that same(t + 1,A, Bl,B2, . . .,Bk) holds given that. same(t,A, B], 82, . . . , Bk) holds for some sequence at, 101,102, . . .,u.vk. At time t, A evicts some wi and B,- evicts at along the shortest path from wi. The remaining adversaries do nothing. It follows from lemma 1 that w,- + D(C, wi, t+ 1) = {ot,u21,ur2,. . . ,wk} for A and at + D(C,rn, t + 1) = {on 201, U72, . . .,wk} for B,. We now count the costs incurred by the algorithms. Since A adds a memory page to the cache for every request, it incurs a total cost of at least n. The adversaries as a whole do the same, so they incur a total cost of at most n(1 + bd). Pigeonhole Principle implies that some B,- incurs a total cost at most n(1 + bd) / k. Cl 1.11 Marking EP Algorithms Marking EP algorithms are a subset of EP algorithms that base their decisions on marks they apply to pages. Marking EP algorithms are a generalization of traditional marking algorithms [BRIS91, Tor9-5]. Definition 9 (Marking EP Algorithm). Let C be a cache. On a request to a page 1) at a time t that is not in the cache, a marking EP algorithm chooses a page w E D(C, v, t) that is unmarked and evicts it along some path from u. If there are no unmarked pages in D(C, v, t) then the algorithm unmarks all of them and tries again. After satisfying 8 any request the algorithm marks the requested page. Many algorithms are marking EP algorithms. In the same way that many tradi- tional algorithms are EP algoritlnns, so all traditional marking algorithms are marking EP algorithms. EPLRU is also a marking EP algorithm. 1.12 An Upper Bound for Marking EP Algorithms The rigid behavior of marking EP algorithms allows us to prove an upper bound of k(1 + bd) on the competitive ratio of any one of them. Here k and d are the same constants as in the lower bound proof. Theorem 2. Let C be a cache and A a marking EP algorithm. Then A is at most k(1 + bd)-competitiue. Proof. We begin by dividing the request sequence into phases. We then bound the cost incurred by the algorithm in each phase. 
1.12 An Upper Bound for Marking EP Algorithms

The rigid behavior of marking EP algorithms allows us to prove an upper bound of k(1 + bd) on the competitive ratio of any one of them. Here k and d are the same cache parameters as in the lower bound proof.

Theorem 2. Let C be a cache and A a marking EP algorithm. Then A is at most k(1 + bd)-competitive.

Proof. We begin by dividing the request sequence into phases. We then bound the cost incurred by the algorithm in each phase, and finally we bound the cost incurred by the optimal algorithm in terms of the number of evictions it performs. Throughout the proof we call the marking EP algorithm A and the optimal algorithm OPT.

We divide the request sequence into phases as follows. Let t_1, t_2, ..., t_m be the times at which A unmarks pages. For each i, 1 ≤ i ≤ m, we define a phase

    p_i = {u | A marks an unmarked σ_u at time u and unmarks it at time t_i}

The phases do not cover the whole request sequence, but they do cover every request on which A incurs a cost. Therefore an analysis that uses only the phases gives an upper bound on the competitive ratio.

We can bound the cost incurred by A in any phase by k(1 + bd). Consider some phase p_i. Every request during this phase caused A to mark some page and thus to incur a cost of at most 1 + bd. Since the size of p_i equals the number of pages A unmarks at t_i, the number of requests during p_i is bounded above by k.

OPT must perform at least one eviction by the end of every phase. Consider some phase p_i. Since the memory pages of {σ_{t_i}} ∪ D(C, σ_{t_i}, t_i) cannot all fit in C at once, OPT must evict one of the pages of D(C, σ_{t_i}, t_i) by time t_i to make room.

Moreover, OPT must perform at least one eviction per phase. Let p_i and p_j be phases, assume WLOG that t_i < t_j, and let v be the memory page OPT evicts by t_i. If v ∉ D(C, σ_{t_j}, t_j), then its eviction cannot count for p_j. If v ∈ D(C, σ_{t_j}, t_j), then there exists some t ∈ p_j at which A marks an unmarked v. Since v is marked from t until t_j and is not marked at t_i, it follows that t > t_i. Thus OPT must evict again by time t in order to bring v back into the cache.

We can now bound the total costs incurred by the algorithms. Since there are m phases, A incurs a total cost of at most mk(1 + bd). Since the number of evictions equals the number of times a memory page is added to the cache, OPT incurs a total cost of at least the number of evictions, which we have shown to be at least m. The competitive ratio of A is therefore at most k(1 + bd). □

1.13 Conclusion

We have defined the class of EP algorithms for the typed caching problem and given a lower bound on the competitive ratio of these algorithms. We have also defined the marking subset of EP algorithms and given an upper bound on the competitive ratio of such algorithms.

Future work may involve bridging the gap between the upper and lower bounds for marking EP algorithms. It may also involve analyzing randomized EP algorithms and finding algorithms whose performance is independent of cache structure. Some of this work has already been done in [Pes03].

Chapter 2

Server Problems

2.1 Introduction

The k-server problem is one of the most well-studied online problems. In the k-server problem there are k servers that exist in some metric space. These servers are moved around in order to service a sequence of requests. Each request specifies some point in the metric space and is serviced by moving one of the servers to that point. An algorithm controls the servers and strives to minimize the total distance covered by the servers.

The fully-associative caching problem is a special case of the k-server problem. A fully-associative cache with k slots and m memory pages can be modeled using an instance of the k-server problem in which the metric space is the complete graph with m nodes (one for each memory page) and all distances equal to 1.

The k-server conjecture posed in [MMS90] postulates the existence of an algorithm that is k-competitive on any metric space. The conjecture has been proven for metric spaces such as the real line and any space with k + 2 points [BK04] using the Work Function Algorithm [CL92]. This algorithm is also (2k − 1)-competitive on arbitrary metric spaces [KP95], and it is the best performing algorithm known.
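As a hedged illustration of the correspondence just described, the sketch below serves page requests with k servers on the uniform metric over the pages; each server move costs 1, exactly like a fault in a fully-associative cache. The FIFO-like choice of which server to move is an assumption made only to keep the example deterministic.

    # k servers on the complete graph with unit distances: the movement cost
    # of serving a request sequence equals the fault count of the
    # corresponding fully-associative cache with k slots.  The FIFO-like
    # server choice is illustrative only.

    def serve_uniform(servers, request):
        """servers: list of occupied points.  Returns (servers, cost)."""
        if request in servers:
            return servers, 0                  # a server is already there: a hit
        return servers[1:] + [request], 1      # move the oldest-placed server

    positions, total = [1, 2, 3], 0
    for r in [4, 2, 4, 1]:
        positions, cost = serve_uniform(positions, r)
        total += cost
    print(total)   # 2: two faults for this sequence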
2.2 Work Functions and the k-Server Problem

A work function is a real-valued function on a metric space that is Lipschitz continuous with constant 1.

Definition 10 (Work Function). Let (M, d) be a metric space. A function w : M → R⁺ is a work function if

    w(x) ≤ w(y) + d(x, y)   for all x, y ∈ M

Work functions are useful tools for the k-server problem. They can be used to describe the minimum cost of servicing a request sequence [CL92], and they are the key concept used in the Work Function Algorithm.

2.3 Our Work

We first introduce a new variant of the k-server problem and give a deterministic lower bound for it. We then rederive the properties of work functions from [CL92] for this variant. Finally we give a new version of the minimizer concept from [KP95] and show that, under these definitions, a property of minimizers does not hold in this variant.

2.4 The Typed k-Server Problem

The typed k-server problem is a generalization of the traditional k-server problem. In the typed k-server problem a request specifies both a point in a metric space and a set of servers called the type. A request can only be satisfied by servers in its type.

This approach to generalizing the k-server problem differs greatly from the generalized k-server problem [SS06]. In the generalized k-server problem the servers lie in different metric spaces, and each server can still be used to satisfy every request. We know of no way to relate these variants.

2.5 Definitions

In the traditional k-server problem the servers are indistinguishable; previous work has therefore used multisets to track server locations. Multisets are not appropriate when servers are distinguishable, as in the typed k-server problem. We propose the use of tuples as a replacement. Their use leads to a simple definition of a type as a subset of {1, 2, ..., k} and affords us a simple method for measuring the cost of moving servers.

Definition 11 (Location). Let (M, d) be a metric space and (s_k) some ordering of the servers. We say that the servers are located at a tuple (p_k) ∈ M^k if s_i is located at p_i for all 1 ≤ i ≤ k.

Definition 12 (Cost). Let (M, d) be a metric space and (p_k) ∈ M^k the current location of the servers. The cost of moving the servers to some (q_k) ∈ M^k is

    manh((p_k), (q_k)) = Σ_{i=1}^{k} d(p_i, q_i)

The set M^k together with the distance function manh is a metric space whenever (M, d) is one. This allows us to discuss work functions on this space.
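The following small sketch illustrates Definitions 11 and 12: configurations are tuples of points, and the cost of moving between configurations is summed coordinate-wise. The uniform metric, the 0-based server indices, and the greedy single-move serving routine are assumptions made for this example; the routine only shows the cost bookkeeping for a typed request and is not a competitive algorithm.

    # Configurations as tuples (Definition 11) and the manh cost
    # (Definition 12), on the uniform metric for illustration.

    def d(x, y):
        """Uniform metric: distance 1 between distinct points."""
        return 0 if x == y else 1

    def manh(p, q, dist=d):
        """Cost of moving the servers from configuration p to configuration q."""
        return sum(dist(pi, qi) for pi, qi in zip(p, q))

    def serve(config, request, dist=d):
        """Serve a typed request (r, mu) with a single move: pick the allowed
        server closest to r (servers are indexed from 0 here)."""
        r, mu = request
        i = min(mu, key=lambda j: dist(config[j], r))
        new_config = tuple(r if j == i else x for j, x in enumerate(config))
        return new_config, dist(config[i], r)

    config = (1, 2, 3)                          # three servers on points 1, 2, 3
    config, cost = serve(config, (4, {0, 1}))   # request point 4, type {0, 1}
    print(config, cost)                         # e.g. (4, 2, 3) 1
    print(manh((1, 2, 3), config))              # 1: one unit of movement in total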
2.6 A Deterministic Lower Bound for the Typed k-Server Problem

The typed k-server problem can be considered harder than the traditional k-server problem. For example, on a metric space with k + 1 points we can prove a general lower bound of 2k − 1 for the typed problem, whereas in the traditional problem the highest possible bound for an arbitrary metric space with k + 1 points is k.

Theorem 3. Let (M, d) be the complete graph of size k + 1 where the distance between any two vertices is 1. Every deterministic algorithm for the typed k-server problem is at least (2k − 1)-competitive on this space.

Proof. Let A be the algorithm and assume that M = {1, 2, ..., k, k + 1}. A competes against an adversary B. Initially both A's and B's servers are located at (1, 2, ..., k). B is responsible for issuing requests, and its strategy is to force A into moving all of its servers to a single point and then back. It acts in the following manner.

1. Issue a request (k + 1, {1, 2, ..., k}). Assume WLOG that A satisfies the request with server 1. B satisfies the request with server k.

2. For each 2 ≤ i ≤ k, issue a request (k + 1, {i, i + 1, ..., k}). Assume WLOG that A satisfies the request with server i. B need not do anything to satisfy the request.

3. For each 1 ≤ i ≤ k − 1, issue a request (i, {i}). B need not do anything to satisfy the request.

4. A's and B's servers are now both at (1, 2, ..., k − 1, k + 1). Relabel the points in M so that the servers are at (1, 2, ..., k) and go to step 1.

A incurs a cost of 2k − 1 in steps 1 through 3, while B incurs a cost of only 1. □

2.7 Work Functions and the Typed k-Server Problem

There exist work functions on the metric space (M^k, manh) that describe the minimum cost of servicing a sequence of requests. We show this by generating these work functions from an initial work function using the requests themselves. We start by defining the update operator, which is used to generate a new work function from a given one and a request.

Definition 13 (Update Operator). Let (M, d) be a metric space, w a work function on (M^k, manh), and (r, μ) a request. The update operator generates a work function w ∧ (r, μ) on (M^k, manh) such that

    (w ∧ (r, μ))(A) = min_{i ∈ μ} [ w(A'_i) + d(r, a_i) ]

for A = (a_k), where A'_i = (a_1, a_2, ..., a_{i−1}, r, a_{i+1}, ..., a_k).

Lemma 2. The update operator is well-defined; that is, w ∧ (r, μ) is a work function.

Proof. Let A = (a_k) and B = (b_k). There exists i ∈ μ such that

    (w ∧ (r, μ))(B) = w(B'_i) + d(r, b_i)

For this i we have

    (w ∧ (r, μ))(A) ≤ w(A'_i) + d(r, a_i)
                    ≤ w(B'_i) + manh(A'_i, B'_i) + d(r, a_i)
                    ≤ w(B'_i) + d(r, b_i) + manh(A, B)
                    = (w ∧ (r, μ))(B) + manh(A, B)

where the second inequality uses the work function property of w on (M^k, manh), and the third uses manh(A'_i, B'_i) = manh(A, B) − d(a_i, b_i) together with d(r, a_i) ≤ d(r, b_i) + d(b_i, a_i). Hence w ∧ (r, μ) is a work function. □
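To illustrate Definition 13, the sketch below evaluates w ∧ (r, μ) by brute force over every configuration of a small finite metric space. The uniform metric, the dictionary representation of work functions, and the 0-based server indices are assumptions made for the example.

    # The update operator w ∧ (r, mu) on a finite space: for each
    # configuration A, try every allowed server i, charge d(r, a_i) for
    # moving it onto r, and look up the old work function at A with a_i
    # replaced by r.  The metric and representation are illustrative.

    from itertools import product

    def d(x, y):
        return 0 if x == y else 1          # uniform metric, for illustration

    def update(w, points, k, r, mu):
        new_w = {}
        for A in product(points, repeat=k):
            best = float("inf")
            for i in mu:                   # only servers in the type may serve r
                A_prime = A[:i] + (r,) + A[i + 1:]
                best = min(best, w[A_prime] + d(r, A[i]))
            new_w[A] = best
        return new_w

    # Initial work function: distance to the initial configuration (1, 2).
    points, k, init = [1, 2, 3], 2, (1, 2)
    w0 = {A: sum(d(a, b) for a, b in zip(A, init)) for A in product(points, repeat=k)}
    w1 = update(w0, points, k, r=3, mu={1})    # request point 3, type {server 1}
    print(w1[(1, 3)], w1[(2, 3)])              # 1 2

Starting from w0(A) = manh(A, (1, 2)), the printed values are the cheapest costs of reaching the configurations (1, 3) and (2, 3) when only the server with index 1 is allowed to serve the request at point 3.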
[Figure 2.1: Minimizer values with respect to (2, {1})]

[Figure 2.2: Minimizer values with respect to (2, {1})]

BIBLIOGRAPHY

[BETW01] Mark Brehob, Richard Enbody, Eric Torng, and Stephen Wagner. On-line restricted caching. In SODA '01: Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 374-383, Philadelphia, PA, USA, 2001. Society for Industrial and Applied Mathematics.

[BETW03] Mark Brehob, Richard Enbody, Eric Torng, and Stephen Wagner. On-line restricted caching. J. Scheduling, 6(2):149-166, 2003.

[BK04] Yair Bartal and Elias Koutsoupias. On the competitive ratio of the work function algorithm for the k-server problem. Theor. Comput. Sci., 324(2-3):337-345, 2004.

[BRIS91] Allan Borodin, Prabhakar Raghavan, Sandy Irani, and Baruch Schieber. Competitive paging with locality of reference. In STOC '91: Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, pages 249-259, New York, NY, USA, 1991. ACM Press.

[CL92] Marek Chrobak and Lawrence L. Larmore. The server problem and on-line games. In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, volume 7, pages 11-64, 1992.

[FKL+91] Amos Fiat, Richard M. Karp, Michael Luby, Lyle A. McGeoch, Daniel D. Sleator, and Neal E. Young. Competitive paging algorithms. J. Algorithms, 12(4):685-699, 1991.

[FMS02] Amos Fiat, Manor Mendel, and Steven S. Seiden. Online companion caching. In ESA '02: Proceedings of the 10th Annual European Symposium on Algorithms, pages 499-511, London, UK, 2002. Springer-Verlag.

[KP95] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. J. ACM, 42(5):971-983, 1995.

[MMS90] Mark S. Manasse, Lyle A. McGeoch, and Daniel D. Sleator. Competitive algorithms for server problems. J. Algorithms, 11(2):208-230, 1990.

[MS04] Manor Mendel and Steven S. Seiden. Online companion caching. Theor. Comput. Sci., 324(2-3):183-200, 2004.

[Pes03] Enoch Peserico. Online paging with arbitrary associativity. In SODA '03: Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 555-564, Philadelphia, PA, USA, 2003. Society for Industrial and Applied Mathematics.

[Sez93] André Seznec. A case for two-way skewed-associative caches. In ISCA '93: Proceedings of the 20th Annual International Symposium on Computer Architecture, pages 169-178, New York, NY, USA, 1993. ACM Press.

[SS06] René A. Sitters and Leen Stougie. The generalized two-server problem. J. ACM, 53(3):437-458, 2006.

[ST85] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Commun. ACM, 28(2):202-208, 1985.

[Tor95] Eric Torng. A unified analysis of paging and caching. In IEEE Symposium on Foundations of Computer Science, pages 194-203, 1995.