This is to certify that the thesis entitled RESTRICTED K-SERVER PROBLEM presented by Jignesh Patel has been accepted towards fulfillment of the requirements for the M.S. degree in Computer Science.

Major Professor's Signature

Date

RESTRICTED K-SERVER PROBLEM

By

Jignesh Patel

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Computer Science and Engineering

2004

ABSTRACT

RESTRICTED K-SERVER PROBLEM

By Jignesh Patel

The k-server problem is the problem of deciding how to use k servers in a metric space to serve requests that arrive in the metric space in an online manner. Many practical online problems can be modeled using the k-server problem. Although the k-server problem has been studied extensively, little work has been done on its generalizations, and the work that has been done focuses on caching, a special case of the k-server problem. Many practical problems that cannot be modeled using the simple k-server problem can be modeled by some generalization of it. In this thesis, we first discuss the current work that has been done for the simple k-server problem. Then we consider a particular generalization in which the servers are not identical. We have types of requests, and each server can serve only certain types of requests. We consider the simple problem with two server types and two request types: one server type can serve both request types, and the other server type can serve only one request type. We consider the problem in three metric spaces: first the general metric space, then the line metric space, and last the uniform metric space. For the general metric space we present a partial result for the BALANCE2 algorithm. For the line metric space we show that a modified version of the DOUBLE-COVERAGE algorithm is competitive. We give competitive algorithms for the uniform metric space.

ACKNOWLEDGMENTS

I would like to take this opportunity to thank all the people who have helped me make this thesis possible. First of all, I would like to thank my advisor, Dr. Eric Torng, for his constant guidance, support and encouragement throughout the course of the thesis. I would like to thank Dr. Richard Enbody and Dr. Abdol H. Esfahanian for being on my thesis committee. I would like to thank Stephen Wagner and Martin Porcelli for their ideas and cooperation. I would also like to thank the staff of the CSE department for their support. Finally, I would like to thank my friends and family for all their help and encouragement.

Contents

List of Figures

1. Introduction
   1.1 Online Problems
   1.2 Competitive Ratio
   1.3 Examples
2. The k-Server Problem
   2.1 Definition
   2.2 Current results
       2.2.1 Results for specific k
       2.2.2 Results for specific metric spaces
   2.3 Variations of the k-Server Problem
       2.3.1 Our problem
3. Results
   3.1 Any Metric Space
       3.1.1 BAL2
       3.1.2 Work Function Algorithm
   3.2 Line Metric Space
       3.2.1 More Than 2 Servers
   3.3 Uniform Metric Space
       3.3.1 k + l = n
       3.3.2 k + l < n
4. Conclusion
Bibliography

List of Figures

3.1 Quasiconvexity counterexample
3.2 DOUBLE-COVERAGE counterexample
3.3 Simple outside request
3.4 Non-crossing inside request
3.5 Crossing inside request
3.6 Outside request to 0 on B's side
3.7 Outside request
3.8 Inside request
3.9 Only region1 empty, with servers crossing
3.10 Only region2 empty
3.11 Neither region empty

Chapter 1  Introduction

A problem consists of input and output, and is defined by the relation between the two. An algorithm that solves the problem produces the desired output for any given input. Generally we assume that the entire input is available before any output is produced. For example, for the problem of finding the shortest path in a graph, we have the entire graph as the input. For many practical problems this is not true; we may not have the complete input, and output has to be produced from the incomplete input. An important class of such problems is online problems.

1.1 Online Problems

An online problem consists of a sequence of inputs and a sequence of outputs, one output for each input. The important point is that there is temporal (or some other) separation between the inputs in the input sequence. Each input becomes available at a particular time, and the corresponding output has to be produced before the next input becomes available. That is, an online algorithm runs in real time, or "on line".

A very common example of an online problem in computer science is the paging problem. The input to the problem is a sequence of page requests. For each page request, if the page is already in the fast memory, or there is room for the page in the fast memory, then there is no output; otherwise the output is a page to be evicted from the fast memory to make room for the current page. The algorithm for deciding which page to evict, if any, is an online algorithm, since it has to produce the current output before it gets the next input. In paging the goal is to minimize the number of page faults.

Note that the best output for any input depends on future inputs. This is what makes online problems difficult to solve. Since the optimal output depends on future inputs, which are not available, typically we cannot generate an optimal output. If we knew the whole input sequence at the start, then we could generate output for each input such that the output sequence is optimal; this is referred to as the offline version of the online problem. With the online version the goal is to get as close to the optimal as possible. Since we can rarely produce the optimal output, the natural question to ask is how close we are to the optimal. This leads to the metric, called the competitive ratio, that is used to compare the performance of online algorithms.
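To make the online constraint concrete, the following is a minimal sketch of a paging algorithm that must commit to each eviction before seeing the next request. The least-recently-used rule and the function name serve_requests are used here purely for illustration; they are not part of the problem definition or of this thesis.

    # Minimal online paging sketch (illustrative only).
    # Each request is handled before the next one is known.
    def serve_requests(requests, fast_memory_size):
        in_memory = []          # pages in fast memory, least recently used first
        faults = 0
        for page in requests:
            if page in in_memory:
                in_memory.remove(page)
                in_memory.append(page)      # mark as most recently used
                continue
            faults += 1
            if len(in_memory) == fast_memory_size:
                in_memory.pop(0)            # evict the least recently used page
            in_memory.append(page)
        return faults

    # Example use: serve_requests([1, 2, 3, 1, 4, 1], 2) counts the page faults.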
1.2 Competitive Ratio

The notion of competitive ratio was first defined by Manasse, McGeoch, and Sleator [10, 11]. The same idea was introduced earlier by Sleator and Tarjan [12]. Unlike traditional algorithms, where the performance metric is usually the running time, for online algorithms the running time is usually ignored. Though algorithms with arbitrary running times are theoretically interesting, most online algorithms need to be fast since they run in a real-time setting. As mentioned in the last section, what we look for is how close the generated output is to the optimal output. As in running time analysis, we perform worst case analysis. The definition of competitive ratio follows. COST_X(σ) is the cost (or value, the quantity we want to maximize or minimize) of the output sequence generated by online algorithm X on input sequence σ. An online algorithm is generally referred to as ON and the corresponding offline optimal algorithm is referred to as OPT.

Definition 1. An online algorithm ON has a competitive ratio of α if, for all input sequences σ,
COST_ON(σ) ≤ α·COST_OPT(σ) + β,
where β is a constant independent of σ.

An algorithm with competitive ratio α is said to be α-competitive. An algorithm for which no such α exists is said to be non-competitive. If an algorithm is α-competitive, then no matter what the input sequence is, its solution will not be worse than α times the optimal solution (plus a constant). This intuitively seems like a good performance metric, though competitive analysis does have some drawbacks; for example, an algorithm with a bad competitive ratio might perform well on average. Nevertheless, competitive analysis is a very useful theoretical tool.

1.3 Examples

We have already seen one example of an online problem, paging. Online problems occur in many practical situations in computer science and other areas. We give some further examples.

Job scheduling: The general problem is as follows: we have one or more processors and jobs to be processed by them. The jobs arrive over time and the problem is to decide when and where to schedule the jobs. The quantity to be optimized could be something like the total execution time or the total waiting time. It is an online problem because the jobs arrive over time and the algorithm has to produce output (that is, schedule the current jobs) without knowing the future inputs (jobs).

k-server problem: The k-server problem is a general model for online problems. In this problem we have a metric space and k servers located at k points in the metric space. The input is a sequence of requests, which are points in the metric space. If a request arrives at a point, we need to serve it by moving one of our servers to that point, if one is not already present. The quantity to be optimized is the total distance moved by the servers to serve all the requests.

Chapter 2  The k-Server Problem

In the last chapter we mentioned the k-server problem as an example of an online problem. It is a very common example of an online problem and has been studied extensively. In this chapter we present the current results on the k-server problem and discuss various modifications of the problem.

2.1 Definition

The k-server problem is a general model for online problems. Many practical online problems can be modeled using the k-server problem, and hence all the results for the k-server problem apply to those practical problems. We define the k-server problem as follows.

Definition 2. The input is a metric space M, a set of k servers located at points in M, and a sequence of requests σ_1, σ_2, ..., each of which is also a point in M. After each request, the problem is to move each server some distance, possibly zero, with the condition that after moving the servers there is a server at the request point. The cost to serve the request is the total distance moved by all the servers. The goal is to minimize the total cost to serve all the requests in the request sequence.

At any time, the set of locations of the k servers is called the current configuration. The set of configurations is denoted by Q. So for each request, the problem is to decide the next configuration such that the request point is in the configuration. The server moved to the request point is the server that serves the request. The metric space is generally assumed to have a non-negative symmetric distance function satisfying the triangle inequality. The optimal solution to the offline version of the k-server problem can be computed in polynomial time using dynamic programming.
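As an illustration of that offline dynamic program, the following sketch keeps, for every configuration containing the current request, the cheapest cost of reaching it. The sketch is ours, not code from the thesis: it is a brute-force rendering meant for small instances, with hypothetical names offline_kserver_opt and min_matching_cost, and with the metric given as a distance matrix over points 0..n−1.

    from itertools import combinations_with_replacement, permutations

    def min_matching_cost(dist, conf_a, conf_b):
        # Cheapest way to move servers from configuration conf_a to conf_b:
        # try every assignment of old positions to new positions (small k only).
        return min(sum(dist[p][q] for p, q in zip(conf_a, perm))
                   for perm in permutations(conf_b))

    def offline_kserver_opt(dist, start, requests):
        # dist: symmetric distance matrix; start: initial configuration
        # (tuple of k point indices); requests: requested points, in order.
        n, k = len(dist), len(start)
        costs = {tuple(sorted(start)): 0}       # cheapest cost to reach each configuration
        for r in requests:
            new_costs = {}
            # after serving request r, the configuration must contain r
            for rest in combinations_with_replacement(range(n), k - 1):
                conf = tuple(sorted(rest + (r,)))
                new_costs[conf] = min(c + min_matching_cost(dist, prev, conf)
                                      for prev, c in costs.items())
            costs = new_costs
        return min(costs.values())

For a fixed k the number of configurations is polynomial in the number of points, so this runs in polynomial time, although far more efficient formulations exist.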
2.2 Current results

In this section we discuss some of the current results on the k-server problem. The k-server problem was introduced by Manasse, McGeoch, and Sleator [10, 11]. They proved that no algorithm for the k-server problem is better than k-competitive for a metric space with at least k + 1 points, and conjectured that there is an algorithm which is k-competitive for any metric space. The question of whether such a k-competitive algorithm exists is still open. The first competitive algorithm for the k-server problem for all metric spaces was given by Fiat, Rabani, and Ravid [7], but its competitive ratio was exponential in k. The ratio was later improved by others. In Chrobak and Larmore [4] the authors introduced the Work Function Algorithm (WFA), but they could only show that the WFA is 2-competitive for the 2-server problem, and they conjectured that it is actually k-competitive for the k-server problem. In Koutsoupias and Papadimitriou [9], the authors showed that the WFA is (2k − 1)-competitive. It is believed that the WFA is in fact k-competitive.

2.2.1 Results for specific k

For k = 2, a 2-competitive algorithm was given by Manasse, McGeoch, and Sleator [10]. Since then many others have given 2-competitive algorithms. In general these algorithms have time and space complexity on the order of the size of the metric space. A 10-competitive constant time and space algorithm, BALANCE2, was given by Irani and Rubinfeld [8]. In Chrobak and Larmore [5] the authors gave a 4-competitive algorithm with constant time and space complexity. For k = |M| − 1, Manasse, McGeoch, and Sleator [11] gave a k-competitive algorithm. For k = |M| − 2, Bartal and Koutsoupias [1] showed that the WFA is k-competitive.

2.2.2 Results for specific metric spaces

For a line, a memoryless k-competitive algorithm, DOUBLE-COVERAGE, was given by Chrobak et al. [3]. Bartal and Koutsoupias [1] also showed that the WFA is k-competitive for the line. For a tree metric space, Chrobak and Larmore [6] gave a k-competitive memoryless algorithm. If the metric space is a uniform metric space (the distance between any two points is 1), we get the problem of paging. The points in the metric space are virtual pages, and the k servers are main memory pages. If a server is at a point, this means that the corresponding virtual page is in the corresponding main memory page. We can also consider the metric space as a complete graph.
We know there are k-competitive algorithms for paging.

2.3 Variations of the k-Server Problem

In the simple k-server problem, all k servers are equal and the metric space is symmetric. We can consider some generalizations of the simple problem. These generalized problems can model many practical online problems that the simple problem cannot. We can obtain different variations by changing a property of the metric space, by changing the equality of the servers, or both. Very little work has been done in this direction, and the work that exists is mostly on the paging (caching) problem, i.e., the k-server problem with a uniform metric space. In Chrobak et al. [3] the authors give a solution for the weighted-cache problem, where the cost of moving a server to a point depends only on that point; this leads to an asymmetric metric space. The restricted caching problem in Brehob et al. [2] leads to the problem where specific servers can only move to specific points in the uniform metric space. Other variations of the caching problem, where each server has a different serving cost, have also been considered, as in Epstein, Imreh, and van Stee [13].

For this thesis we consider the following generalization: all servers can move to every point in the metric space, but each server can serve only certain types of requests. That is, we have a fixed number of request types, and each request type can only be served by some subset of the k servers. The metric space has symmetric non-negative weights satisfying the triangle inequality. We call it the restricted k-server problem (the servers are restricted as to which requests they can serve). The most general version would have k servers and 2^k types of requests, where each request type can be served by all servers in one of the 2^k subsets of the k servers.

2.3.1 Our problem

In this thesis we consider the problem with two types of servers and two types of requests. The server and request types are as follows:

- There are two types of servers, type A and type B. We call a type A server an A server and a type B server a B server.
- There are two request types, type 0 and type 1.
- A servers can serve requests of both types; B servers can serve requests of type 1 only.

In most cases we consider the problem with just two servers, defined below, on different metric spaces.

Definition 3. The restricted 2-server problem on metric space M is the server problem on M with one A server and one B server.

In some sections we generalize the results to more than two servers. Some general conventions are given below:

- We use uppercase words as names of algorithms. OPT is the optimal algorithm; it solves the offline version of the problem optimally. ADV denotes the adversary against the online algorithm. We assume an adaptive offline adversary, meaning that it generates its next input after seeing the online algorithm's current output, and it produces its own output optimally offline. ON is a general name for any online algorithm.
- We use uppercase A and B for an online algorithm's A servers and B servers, and lowercase a and b for the optimal or adversary's A servers and B servers.
- We assume that all servers start at the same location.

Chapter 3  Results

In this chapter we present our results for the restricted k-server problem. First we consider a general metric space, and then we consider two specific metric spaces.

3.1 Any Metric Space

In this section we consider the restricted 2-server problem on a general metric space.
We look at the BALANCE2 (BAL2) and Work Function (WFA) algorithms.

3.1.1 BAL2

Here we first prove a partial result for any algorithm, and then we prove a stronger partial result for BAL2. BAL2 was introduced by Irani and Rubinfeld [8]. It is 10-competitive for the simple 2-server problem and uses constant time and space. It works as follows: if X and Y are the two servers, when a request comes in it compares C_X + 2x with C_Y + 2y, where C_X is the total cost incurred by server X before serving the current request and x is the distance of server X from the current request (and similarly for Y). If the first quantity is smaller, then it uses X to serve the request; otherwise it uses Y. For our restricted problem we only use the cost incurred by the A server to serve requests of type 1 in making this comparison (a short code sketch of this selection rule is given at the end of this section).

Our basic approach is to split the cost of serving type 0 requests from the cost of serving type 1 requests. C is the total cost, C0 is the cost to serve type 0 requests and C1 is the cost to serve type 1 requests, so C = C0 + C1.

Notation:

- C^y_x,z denotes the cost incurred by algorithm z to serve requests of type x using its type y server, where x can be 0, 1 or empty (if x is absent it is the total cost), y can be A or B or empty, and z can be OPT, ON or the name of some online algorithm. For example, C1_ON is ON's cost on type 1 requests, and C1^A_BAL2 is the cost incurred by BAL2's A server on type 1 requests.
- C[i, j] denotes the cost incurred from the ith request to the jth request; C[j] = C[1, j]. If there is no index, it is the total cost. (The type, server and algorithm labels can be added as described in the previous point.)
- A_z denotes the A server of algorithm z; similarly B_z.
- A phase is a maximal contiguous sequence of requests of one type; we can have a 1-phase or a 0-phase. A phase can be denoted [i, j], which means requests i to j are of one type and requests i − 1 and j + 1 are of the other type.
- A cycle is a 1-phase followed by a 0-phase. Cycle [i, j, k] means [i, j − 1] is a 1-phase and [j, k] is a 0-phase.
- The difference between the costs incurred by BAL2's two servers to serve type 1 requests after the ith request is denoted Γ_i, i.e. Γ_i = |C1^A_BAL2[i] − C1^B_BAL2[i]|. Γ is the value after the last request.

Then we have the following.

Lemma 1. For any online algorithm ON and any cycle [i, j, k],
C0_ON[i, k] ≤ C0_OPT[i, k] + C1^A_ON[i, k].   (3.1)

Proof. Let ΔA be the distance between the two A servers before request j. Since the A servers are aligned before request i, we have
ΔA ≤ C0_OPT[i, j − 1] + C1^A_ON[i, j − 1].
And we have
C0_ON[i, k] = C0_ON[j, k] ≤ C0_OPT[j, k] + ΔA ≤ C0_OPT[j, k] + C0_OPT[i, j − 1] + C1^A_ON[i, j − 1] ≤ C0_OPT[i, k] + C1^A_ON[i, k]. □

Theorem 1. For any online algorithm ON,
C0_ON ≤ C_OPT + C1_ON.

Proof. Summing equation (3.1) over all cycles we get
C0_ON ≤ C0_OPT + C1^A_ON.
Hence
C0_ON ≤ C_OPT + C1_ON.   (3.2) □

Corollary 1. Any online algorithm is (50/ε)-competitive for request sequences for which the cost incurred to serve the type 1 requests is less than (50 − ε)% of the total cost, where ε is any positive real number.

Proof. Given the condition on the request sequence we have
C1_ON ≤ ((50 − ε)/100)·C_ON and C0_ON ≥ ((50 + ε)/100)·C_ON.
Substituting these values in equation (3.2) we get
((50 + ε)/100)·C_ON ≤ C_OPT + ((50 − ε)/100)·C_ON
(2ε/100)·C_ON ≤ C_OPT
C_ON ≤ (50/ε)·C_OPT. □

Next we consider BAL2.

Lemma 2. Let δ_i be the distance between A_BAL2 and B_BAL2 after serving request i, and let Δ_i = max_{j ≤ i} δ_j. Then, after serving the ith request (i ≥ 1),
Γ_i < 2Δ_i.

Proof. We prove this by induction on i. Initially both sides are 0. After the first request Γ_1 ≤ Δ_1, hence Γ_1 < 2Δ_1 (assuming the first request is not at the starting position of the two servers). Now assume the claim is true for all values less than i, and consider the ith request.
If it is a type 0 request, then the left-hand side does not change and the right-hand side can only increase; since Γ_{i−1} < 2Δ_{i−1} we have Γ_i < 2Δ_i.

If it is a type 1 request, then we look at the server that served the request. Let x be the distance (before serving the request) between the request and the server that served it, and let y be the distance between the request and the other server. Call the server with the larger type 1 cost so far the leading server and the other the lagging server. Then we have two cases.

Case 1. The leading server served the request. Then we must have
Γ_{i−1} + 2x ≤ 2y.
After the move the two servers are a distance y apart, so Δ_i ≥ y and
Γ_i = Γ_{i−1} + x ≤ 2y ≤ 2Δ_i,
hence Γ_i < 2Δ_i.

Case 2. The lagging server served the request. Then we must have
Γ_{i−1} + 2y ≥ 2x.   (3.3)
The lagging server moves by x. If x ≤ Γ_{i−1}, then Γ_i ≤ Γ_{i−1} < 2Δ_{i−1} ≤ 2Δ_i. If x > Γ_{i−1}, then
Γ_i = x − Γ_{i−1} < 2x − Γ_{i−1}.
Using equation (3.3) we get Γ_i < 2y, and since the two servers end up a distance y apart, Γ_i < 2Δ_i. □

Since Δ_i is bounded above by C_OPT[i], we have the following corollary.

Corollary 2. After any request i,
Γ_i ≤ 2·C_OPT[i].

Theorem 2.
C0_BAL2 ≤ 2·C_OPT + (1/2)·C1_BAL2.

Proof. Summing equation (3.1) (substituting ON with BAL2) over all cycles, we get
C0_BAL2 ≤ C0_OPT + C1^A_BAL2.
We have
C1^A_BAL2 ≤ (1/2)·C1_BAL2 + (1/2)·Γ.
Using Corollary 2 we have
C1^A_BAL2 ≤ (1/2)·C1_BAL2 + C_OPT.
So we get
C0_BAL2 ≤ 2·C_OPT + (1/2)·C1_BAL2.   (3.4) □

Corollary 3. BAL2 is (400/3ε)-competitive for request sequences for which the cost incurred to serve the type 1 requests is less than or equal to (200/3 − ε)% of the total cost, where ε is any positive real number.

Proof. Given the condition on the request sequence we have
C1_BAL2 ≤ ((200/3 − ε)/100)·C_BAL2 and C0_BAL2 ≥ ((100/3 + ε)/100)·C_BAL2.
Substituting these values in equation (3.4) we get
((100/3 + ε)/100)·C_BAL2 ≤ 2·C_OPT + (1/2)·((200/3 − ε)/100)·C_BAL2
(3ε/200)·C_BAL2 ≤ 2·C_OPT
C_BAL2 ≤ (400/3ε)·C_OPT. □

We conjecture that the cost of serving type 1 requests, C1_BAL2, can be bounded as follows:
C1_BAL2 ≤ m·C_OPT + (2 − δ)·C0_BAL2,
where m is a positive integer and δ is a positive real number. This bound could then be combined with Theorem 2 to get a bound for C_BAL2.

3.1.2 Work Function Algorithm

Here we consider the WFA for the restricted problem. The WFA was shown to be (2k − 1)-competitive in Koutsoupias and Papadimitriou [9]. We show that an important property of the simple k-server problem, quasiconvexity, does not hold for the restricted case.

The central idea in the WFA is the work function w_t(X), which is the optimal cost of serving the first t requests and ending up with the servers in configuration X. The quasiconvexity property is as follows:
∀t, ∀X, Y, ∀x ∈ X, ∃y ∈ Y : w_t(X) + w_t(Y) ≥ w_t(X − x + y) + w_t(Y − y + x).
What the property says is that, given any two configurations, for any point in the first configuration there is some point in the second configuration such that swapping the two points does not increase the total work function.

Quasiconvexity does not hold for the restricted problem, as shown by the example in Figure 3.1. For the counterexample we consider the line metric space. All servers start at the same location, p, and the request sequence consists of a single request of type 1 at s. The two parts of the figure show the configurations X and Y before and after the swap of the A servers. As we can see, the sum of the work functions increases from 3x + 2y + 3z to 3x + 4y + 3z because of the swap.

Figure 3.1: Quasiconvexity counterexample. Here p, q, r and s are locations; x, y and z are the distances between p and q, q and r, and r and s, respectively. All servers start at p and there is a single request of type 1 at s. Configuration X places B at q and A at r; configuration Y places a at p and b at s; the swap exchanges the positions of the two A servers.

Quasiconvexity is critical for the proof in Koutsoupias and Papadimitriou [9] to work. Although that proof does not carry over, we conjecture that the WFA is competitive for the restricted case, but not k-competitive.
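To make the BAL2 selection rule of Section 3.1.1 concrete, the following minimal sketch shows one possible reading of that rule for the restricted 2-server problem: type 0 requests always go to the A server, and for type 1 requests the server with the smaller "type 1 cost so far plus twice the distance" serves. The sketch is ours and purely illustrative; the class name Bal2 is hypothetical and dist is assumed to be a two-argument distance function for the metric space.

    class Bal2:
        # One reading of the restricted BAL2 rule; illustrative only.
        def __init__(self, start, dist):
            self.dist = dist                    # dist(u, v): metric distance
            self.pos = {"A": start, "B": start}
            self.cost1 = {"A": 0.0, "B": 0.0}   # cost spent on type 1 requests

        def serve(self, r, req_type):
            # Returns the distance moved to serve request r of the given type.
            if req_type == 0:
                moved = self.dist(self.pos["A"], r)  # only A can serve type 0
                self.pos["A"] = r
                return moved
            dA = self.dist(self.pos["A"], r)
            dB = self.dist(self.pos["B"], r)
            # compare type 1 cost so far plus twice the distance to the request
            s = "A" if self.cost1["A"] + 2 * dA <= self.cost1["B"] + 2 * dB else "B"
            moved = dA if s == "A" else dB
            self.pos[s] = r
            self.cost1[s] += moved
            return moved

A request sequence would then be served by summing bal2.serve(r, t) over the requests (r, t).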
3.2 Line Metric Space

In this section we consider the restricted 2-server problem on a line metric space. We consider the double coverage approach for it. In Chrobak et al. [3], the authors give the memoryless algorithm DOUBLE-COVERAGE, which is k-competitive for the k-server problem. A memoryless algorithm is one whose move depends only on the current configuration and the current request; it does not look at past requests. A memoryless algorithm can be described by a function f : Q × ℝ → Q, where Q is the set of configurations. Memoryless algorithms are appealing because they are easy to implement. DOUBLE-COVERAGE behaves as follows: it moves the servers immediately to the left and right of the request towards the request, by an amount equal to the distance between the closer server and the request; the closer server then serves the request. If there is no server on one of the sides, then DOUBLE-COVERAGE simply moves the closest server on the other side to the request. The algorithm tries to "cover" the request from both sides, hence the name.

Since we have two types of requests in the restricted case, we need to decide the double covering strategy for each type of request. The simplest approach is to do the usual double coverage for type 1 requests and only move the A server for type 0 requests. We show below that this strategy is not competitive.

Lemma 3. The simple double coverage strategy described above is not c-competitive for the restricted k-server problem for any constant c.

Proof. We show this by example.

Figure 3.2: DOUBLE-COVERAGE counterexample. OPT's servers a and b are at p and r, and ON's servers B and A are at p and r, respectively.

Here p, q and r are locations on the line, x is the distance between p and q and y is the distance between q and r. The initial positions of the servers for OPT and ON are shown in the figure. Since all servers start at the same location (say at p), we can reach this configuration by issuing a type 1 request at r; OPT moves the opposite type of server from the one moved by ON. We first request a 0 at q. Both OPT and ON move their A server to q. Then we request a 1 at r. OPT does not have to move; ON moves its A server to r. Last, we request a 1 at p. OPT moves its A server back to p; ON does not have to move. We end up in the original configuration.

By repeating the above requests a large number of times, the cost of the initial requests needed to reach the above configuration can be neglected. The competitive ratio is then the ratio for one cycle, which is y/x. By choosing x and y appropriately we can asymptotically approach any ratio we want, and hence simple double coverage is not competitive for any constant c. □

Here we considered the problem with just 2 servers, but the above proof easily extends to more than 2 servers with at least one A server. We believe that any strategy which never moves the B server for requests of type 0 will not be constant competitive.

We give an algorithm, called MDC (for Modified Double Coverage), for the restricted 2-server problem and show that it is competitive. The algorithm works as follows. For an outside (not between A and B) request of type 0 on the B server's side, MDC does partial double coverage: it moves the A server to the request point, but moves the B server only to the original position of the A server. For all other outside requests it just moves the nearer server to serve the request. For an inside request MDC does complete double coverage: it does the usual double coverage for type 1 requests, and it crosses the servers for type 0 requests that are nearer to the B server than to the A server. The intuition behind crossing servers is as follows: if MDC has to move the A server from position p to position q to serve a type 0 request, it needs to cover position p with another server, because the adversary might have a server at p and use it to call MDC's A server back. And since the A server can serve type 1 requests, the adversary could use a type 1 request to bring MDC's A server back; thus moving the B server nearer to p may be helpful. We show that MDC is 6-competitive for the restricted 2-server problem on the line.
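The following is one way to render the MDC moves just described in code. The rendering is ours and purely illustrative: server positions are real numbers on the line, ties (a request exactly at a server or at an endpoint) are ignored, and the function name mdc_step is hypothetical.

    def mdc_step(A, B, p, req_type):
        # One MDC move on the line.  A, B: positions of MDC's A and B servers;
        # p: request point; req_type: 0 or 1.  Returns (new_A, new_B, cost).
        lo, hi = min(A, B), max(A, B)
        inside = lo < p < hi
        dA, dB = abs(A - p), abs(B - p)

        if req_type == 1:
            # ordinary double coverage: either server may serve
            if not inside:
                # outside: only the nearer server moves, all the way to p
                return (p, B, dA) if dA <= dB else (A, p, dB)
            # inside: both move toward p by the smaller distance
            d = min(dA, dB)
            newA = A + d if A < p else A - d
            newB = B + d if B < p else B - d
            return newA, newB, 2 * d

        # req_type == 0: only A can serve, so A always ends at p
        if not inside and dA <= dB:
            # outside request on A's side: move A alone
            return p, B, dA
        if not inside:
            # outside request on B's side: partial double coverage,
            # A moves to p and B moves back to A's original position
            return p, A, dA + abs(B - A)
        # inside request: both servers move a distance dA toward each other;
        # if p is nearer to B than to A, the servers cross
        newB = B + dA if B < A else B - dA
        return p, newB, 2 * dA

A request sequence is served by applying mdc_step repeatedly to the current configuration.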
Theorem 3. MDC is 6-competitive for the restricted 2-server problem on the line.

Proof. The proof technique is similar to the one used to show that DOUBLE-COVERAGE is competitive in [3]. We show the result by finding a nonnegative potential function Φ such that for any request:

1. when ADV serves it, Φ does not increase by more than 6 times the distance moved by ADV, and
2. when MDC serves it, Φ decreases by at least the distance moved by MDC.

If COST_MDC and COST_ADV are the total costs of MDC and ADV, then after serving all the requests the maximum increase in Φ is 6·COST_ADV and the minimum decrease in Φ is COST_MDC. Let Φ_0 be the initial value and Φ_n the final value of Φ. Since Φ is nonnegative, we have
0 ≤ Φ_n = Φ_0 + (increase in Φ) − (decrease in Φ) ≤ Φ_0 + 6·COST_ADV − COST_MDC,
so
COST_MDC ≤ 6·COST_ADV + Φ_0.
This shows that MDC is 6-competitive. We give the potential function Φ below and show that it has the above properties.

We define the following distances. LL is the distance between the left servers (the servers on the left side of the line) of MDC and ADV; similarly, RR is the distance between the right servers of MDC and ADV. Aa is the distance between server A (MDC's A server) and server a (ADV's A server); similarly we define Ab, Bb and AB. Let
Φ1 = LL + RR,  Φ2 = Aa + min{Ab, Bb},  Φ3 = AB.
Then we use the following potential function:
Φ = 4Φ1 + 2Φ2 + Φ3.

Let d be the distance ADV moves its server to serve the request (assuming a lazy adversary). Clearly Φ1 increases by at most d. Depending on which server ADV moves, only one of the two terms of Φ2 can change, and it increases by at most d. Φ3 does not change. So Φ can increase by at most 4d + 2d = 6d. Hence Φ has property (1).

Then, when MDC moves to serve, we have the following cases. Without loss of generality we assume that A is the left server and B is the right server.

Case 1. Simple outside request. This includes any request to the left of A and any type 1 request to the right of B. The algorithm moves the closer server to serve the request.

Figure 3.3: One simple outside request. There is a type 1 request at position p and ADV uses a to serve it. MDC moves B to p to serve the request. x is the distance between the original positions of A and B; y is the distance B moves.

Figure 3.3 shows one example, but we consider all simple outside requests. No matter where the other ADV server is, one term in Φ1 does not change and the other decreases by y, so Φ1 decreases by y.
If the request is to the left of A, then Φ2 cannot increase, because, depending on which server ADV uses, one of the terms in Φ2 decreases by y and the other cannot increase by more than y. If the request is to the right of B, then Φ2 increases by at most y, since the first term does not change and the second can increase by at most y. Φ3 always increases by y. So we have
decrease in Φ ≥ 4y − 2y − y = y.

Case 2. Non-crossing inside request. This includes any inside request nearer to A and any type 1 request nearer to B. The algorithm does the double coverage in all cases and there is no crossing of servers.

Figure 3.4: One non-crossing inside request. There is a type 1 request at position p and ADV uses a to serve it. MDC moves A to p to serve the request, and moves B to position q to double cover. x is the distance between the original positions of A and B; y is the distance A and B move.

Again, Figure 3.4 shows one example, but we consider all non-crossing inside requests. While one of the terms in Φ1 may increase by y, the other decreases by y, so Φ1 cannot increase. Similarly, while one of the terms in Φ2 might increase by y, the other decreases by y, so Φ2 cannot increase. Φ3 decreases by 2y. So we have
decrease in Φ ≥ 0 + 0 + 2y = 2y.

Case 3. Crossing inside request. This is an inside request of type 0 nearer to B.

Figure 3.5: Crossing inside request. There is a type 0 request at position p and ADV uses a to serve it. MDC moves A to p to serve the request, and moves B to position q to double cover. x is the distance between the original positions of A and B; y is the distance A and B move.

If b is on the left of a, then LL cannot increase by more than (x − y) and RR decreases by (x − y). If b is on the right of a, then RR cannot increase by more than (x − y) and LL decreases by (x − y). So Φ1 cannot increase. Aa decreases by y. Since the final position of B is (x − y) away from the original position of A, and vice versa, the min term cannot increase by more than (x − y). So Φ2 decreases by at least y − (x − y) = 2y − x. Φ3 decreases by 2(x − y). So we have
decrease in Φ ≥ 0 + 2(2y − x) + 2(x − y) = 2y.

Case 4. Outside request of type 0 on B's side. In this case MDC does double coverage, but stops B at the original position of A.

Figure 3.6: Outside request to 0 on B's side. There is a type 0 request at position p and ADV uses a to serve it. MDC moves A to p to serve the request, and moves B to the original position of A to double cover. x is the distance between the original positions of A and B. A moves a distance of x + y and B moves a distance of x.

Since B's final position is the same as A's original position, LL never changes. RR always decreases by y, no matter where server b is. So Φ1 decreases by y. Aa always decreases by (x + y). The min term cannot increase by more than y, since B takes A's place and A's final position is distance y from B's original position. So Φ2 decreases by at least (x + y − y) = x. Φ3 increases by y. So we have
decrease in Φ ≥ 4(y) + 2(x) − (y) ≥ 2x + y.

In each case the decrease in Φ is at least equal to the distance moved by MDC. Hence Φ also has property (2). □
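The case analysis above can also be spot-checked numerically. The helper below (ours, purely illustrative) evaluates the potential Φ = 4Φ1 + 2Φ2 + Φ3 directly from its definition for given server positions on the line.

    def potential(A, B, a, b):
        # A, B: MDC's A and B server positions; a, b: ADV's a and b positions.
        LL = abs(min(A, B) - min(a, b))   # distance between the left servers
        RR = abs(max(A, B) - max(a, b))   # distance between the right servers
        phi1 = LL + RR
        phi2 = abs(A - a) + min(abs(A - b), abs(B - b))
        phi3 = abs(A - B)
        return 4 * phi1 + 2 * phi2 + phi3

    # Example: potential(0.0, 3.0, 0.0, 3.0) returns 3.0 (only the AB term is nonzero).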
3.2.1 More Than 2 Servers

In this section we consider the restricted problem with more than 2 servers. The types of servers and requests are the same. We consider the problem with one A server and k − 1 B servers, for a total of k servers. We call this the restricted AB^(k−1)-server problem.

We consider the restricted AB^(k−1)-server problem on the line. This is a simple generalization of the restricted 2-server problem, and MDC works for this problem. The only new element in our analysis is the case where there are B servers both between the A server and a type 0 request and on the other side of the type 0 request. MDC works as follows. It does the regular double cover for type 1 requests. For a type 0 request, let p be the position of the request and q the position of the A server. We define region1 as the region between p and q, and region2 as the region on the other side of p. MDC always moves the A server from q to p to serve the request. It moves the other servers as follows (a code sketch of these type 0 moves is given after the proof of Theorem 4):

- If there is no B server in region1 or region2, MDC moves no other server.
- If there is at least one B server in region2 and no B server in region1, then MDC does the usual double cover, with possibly crossing servers.
- If there is at least one B server in region1 and no B server in region2, then MDC moves the B server in region1 that is closest to p to q.
- If there is at least one B server in both regions, then let S_l be the B server in region1 closest to p and S_{l+1} the B server in region2 closest to p. Let x be the distance of S_l from q and y the distance of S_l from p. Then MDC moves S_l to q, and it moves S_{l+1} a distance y towards p. Note that the sum of the distances moved by the two B servers is x + y, which is equal (and in the opposite direction) to the distance moved by the A server.

We show that MDC is (2k + 3)-competitive.

Theorem 4. MDC is (2k + 3)-competitive for the restricted AB^(k−1)-server problem on the line.

Proof. The proof is similar to the last proof; the main change is a slightly different potential function. Here the potential function must have the following properties:

1. when ADV serves a request, Φ does not increase by more than (2k + 3) times the distance moved by ADV, and
2. when MDC serves a request, Φ decreases by at least the distance moved by MDC.

As in the last proof, if such a nonnegative Φ exists then MDC is (2k + 3)-competitive. We give Φ below and show that it has the above properties. Let S_i denote MDC's servers and s_i denote ADV's servers, both numbered from left to right. Let
Φ1 = Σ_i |S_i − s_i|,  Φ2 = Aa,  Φ3 = Σ_{i<j} (S_j − S_i).
We use the potential function
Φ = (2k + 1)Φ1 + 2Φ2 + 2Φ3.
When ADV moves a server a distance d, one term of Φ1 increases by at most d, Φ2 increases by at most d, and Φ3 does not change, so Φ increases by at most (2k + 3)d and Φ has property (1). For property (2) we consider the following cases, first for type 1 requests.

Case 1.1. Outside request. MDC moves the closest server, say a distance x, to the request.

Figure 3.7: Outside request.

Φ1 decreases by x, Φ2 can increase by at most x, and Φ3 increases by (k − 1)x. So we have
decrease in Φ ≥ (2k + 1)x − 2x − 2(k − 1)x = x.

Case 1.2. Inside request (a request between two servers). MDC does the double cover in all cases and there is no crossing of servers.

Figure 3.8: Inside request. There is a type 1 request at position p and ADV uses s_j to serve it. S_i is MDC's server nearest to p and S_{i+1} is the nearest server on the other side of p. MDC moves S_i to p to serve the request, and moves S_{i+1} to position q to double cover. x is the distance between the original positions of S_i and S_{i+1}; y is the distance S_i and S_{i+1} move.

Depending on whether j ≤ i or j > i, one of the terms (either the ith or the (i + 1)st) in Φ1 decreases by y; the other increases by at most y. So Φ1 cannot increase. Φ2 can increase by at most y. Φ3 decreases by 2y. So we have
decrease in Φ ≥ 0 − 2y + 2(2y) = 2y.

Now we consider the case when the request is of type 0. Depending on whether or not there are B servers in region1 and region2, we get the following four cases.

Case 2.1. No B server in either region. MDC just moves the A server to the request point. This case is equivalent to Case 1.1.

Case 2.2. No B server in region1 and at least one B server in region2.
Here MDC does the double cover using the A server and the B server in region2 closest to the request point. Depending on whether or not the servers cross, we get two cases.

Case 2.2.1. The servers do not cross. This case is equivalent to Case 1.2.

Case 2.2.2. The servers cross.

Figure 3.9: Only region1 empty, with servers crossing. There is a type 0 request at position p and ADV uses s_j to serve it. S_i is MDC's A server and S_{i+1} is the B server in region2 nearest to p. MDC moves S_i to p to serve the request, and moves S_{i+1} the same distance to double cover. x is the distance between the original positions of S_i and S_{i+1}; y is the distance S_i and S_{i+1} move.

While one term, out of |s_i − S_i| and |s_{i+1} − S_{i+1}|, in Φ1 might increase by (x − y), the other term decreases by (x − y). Thus Φ1 cannot increase. Φ2 decreases by y. Φ3 decreases by 2(x − y). So we have
decrease in Φ ≥ 0 + 2y + 2(2(x − y)) ≥ 2x ≥ 2y.

Case 2.3. No B server in region2 and at least one B server in region1. In this case, MDC moves the A server to the request point and the B server in region1 closest to the request point to the original position of the A server.

Figure 3.10: Only region2 empty. There is a type 0 request at position p and ADV uses s_j to serve it. S_i is MDC's A server and S_k is MDC's rightmost server. MDC moves S_i to p to serve the request, and moves S_k to the original position of S_i to double cover. x is the distance between the original positions of S_i and S_k. S_i moves a distance x + y and S_k moves a distance x.

Here only the kth term in Φ1 changes, and it decreases by y, so Φ1 decreases by y. Φ2 decreases by (x + y). Φ3 increases by (k − 1)y. So we have
decrease in Φ ≥ (2k + 1)y + 2(x + y) − 2(k − 1)y ≥ 2x + y.

Case 2.4. At least one B server in both regions. This case is shown in Figure 3.11.

Figure 3.11: Neither region empty. There is a type 0 request at position p and ADV uses s_j to serve it. S_i is MDC's A server. S_l and S_{l+1} are MDC's B servers in region1 and region2, respectively, nearest to p. x is the distance between the original positions of S_i and S_l, y is the distance between S_l and p, and z is the distance between p and S_{l+1}. MDC moves S_i to p to serve the request; it moves S_l to the original position of S_i and moves S_{l+1} a distance y towards p. S_i moves a distance x + y, S_l moves a distance x and S_{l+1} moves a distance y.

If z ≥ y, then S_i and S_{l+1} do not cross (S_i is renamed S_l). In this case, depending on whether j ≤ l or j > l, either the lth term or the (l + 1)st term in Φ1 decreases by y, and the other can increase by at most y. So Φ1 does not increase. If z < y, then S_i and S_{l+1} cross (S_i is renamed S_{l+1} and S_{l+1} is renamed S_l). In this case, depending on whether j ≤ l or j > l, either the lth term or the (l + 1)st term in Φ1 decreases by z, and the other can increase by at most z. So Φ1 does not increase. Φ2 decreases by (x + y). If z ≥ y, we can view the movement of MDC's servers as S_l and S_{l+1} moving towards each other by a distance y, so Φ3 decreases by 2y; and if z < y, we can view the movement as S_l and S_{l+1} moving towards each other by a distance z, so Φ3 decreases by 2z. In either case Φ3 does not increase. So we have
decrease in Φ ≥ 0 + 2(x + y) + 0 = 2(x + y).

In each case, the decrease in Φ is at least equal to the distance moved by MDC. Hence Φ also has property (2). □
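As referenced in the description of the move rules above, the following sketch renders MDC's type 0 moves for one A server and k − 1 B servers in code. It is our illustration of those rules, not code from the thesis: positions are real numbers, ties are ignored, type 1 requests (ordinary double coverage) are omitted, and the function name mdc_type0_step is hypothetical.

    def mdc_type0_step(A_pos, B_positions, p):
        # MDC's move for a type 0 request at p.  A_pos: position of the A server;
        # B_positions: list of B server positions.  Returns (new_A, new_Bs, cost).
        q = A_pos
        Bs = list(B_positions)
        direction = 1 if p > q else -1              # direction from q toward p
        # region1: strictly between the A server and the request point
        region1 = [i for i, s in enumerate(Bs) if min(p, q) < s < max(p, q)]
        # region2: strictly beyond the request, on the far side from the A server
        region2 = [i for i, s in enumerate(Bs) if (s - p) * direction > 0]

        cost = abs(p - q)                           # A always moves from q to p
        if not region1 and not region2:
            return p, Bs, cost                      # move no other server
        if region2 and not region1:
            # usual double cover with the region2 B server closest to p,
            # possibly crossing: it moves the same distance as A, back toward q
            idx = min(region2, key=lambda i: abs(Bs[i] - p))
            Bs[idx] -= direction * abs(p - q)
            return p, Bs, cost + abs(p - q)
        if region1 and not region2:
            # move the region1 B server closest to p back to q
            idx = min(region1, key=lambda i: abs(Bs[i] - p))
            cost += abs(Bs[idx] - q)
            Bs[idx] = q
            return p, Bs, cost
        # B servers in both regions
        l = min(region1, key=lambda i: abs(Bs[i] - p))   # closest to p in region1
        m = min(region2, key=lambda i: abs(Bs[i] - p))   # closest to p in region2
        y = abs(Bs[l] - p)
        cost += abs(Bs[l] - q)                           # S_l moves to q
        Bs[l] = q
        Bs[m] -= direction * y                           # S_{l+1} moves y toward p
        return p, Bs, cost + y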
The problem with more than one A server is much more complex than the problem with just one A server. MDC does not easily extend to the case with more than one A server. We conjecture that no memoryless algorithm is constant competitive for this case.

3.3 Uniform Metric Space

In this section we consider the restricted problem on the uniform metric space (or complete graph). The cost to move a server from any point to any other point is 1. We consider the problem with more than two servers. Let n be the number of points in the metric space, k the number of A servers and l the number of B servers. We assume that k < l. We call this problem the restricted A^k B^l-server problem on K_n, where K_n is the complete graph on n vertices.

For this problem we assume that requests are always generated at points where the online algorithm does not have a server that can serve the request; that is, the online algorithm faults on every request. We can easily trim any request sequence to obtain this property, since trimming can only reduce the optimal cost. We consider the following two cases.

3.3.1 k + l = n

For the simple k-server problem this case is trivial; for the restricted problem it is not. First we consider the case with just two servers, one A server and one B server, and two points, that is, the restricted AB-server problem on K_2. We give an algorithm, USEB (for USE the B server whenever possible), which works as follows: it always uses the B server to serve a request of type 1. We prove that USEB is 3-competitive.

Theorem 5. USEB is 3-competitive for the restricted AB-server problem on K_2.

Proof. We cut the input sequence into phases as follows. A new location is a location at which a request of a particular type has not yet been requested in the current phase. A request of type 0 at the 2nd new location ends the current phase; the request ending the current phase is part of the next phase. Because of the way the phases are defined, OPT must have at least one fault corresponding to each phase. A simple case analysis shows that the maximum length of a phase is 3. This gives a competitive ratio of 3. □

Next we consider more than 2 servers, that is, the restricted A^k B^l-server problem on K_{k+l}. We extend the phase definition as follows: since there are k (instead of 1) A servers, a phase ends when a type 0 request arrives at the (k + 1)st new location. Again the ending request is not part of the current phase. USEB still always uses B servers to serve requests of type 1. It uses a marking scheme to select which B server to use, as follows. USEB uses two types of marks, 1-marks and 0-marks. If a server serves a type x request it gets an x-mark. When a request of type 0 comes in, USEB uses any one of the 0-unmarked A servers. When a type 1 request comes in, it uses any one of the 1-unmarked B servers. When USEB moves an A server to a location where there is already a B server, it unmarks that B server. At the end of a phase, USEB converts all 0-marks into 1-marks.

Theorem 6. USEB is 3k-competitive for the restricted A^k B^l-server problem on K_{k+l}.

Proof. As before, because of the way a phase is defined, OPT must have at least one fault corresponding to each phase. USEB faults on every request, so we bound the number of requests in a phase. From the way the algorithm works, we never receive a type 1 request when there is no 1-unmarked B server. A type 0 request when there is no 0-unmarked A server ends the phase, so there are at most k type 0 requests in a phase. At the start of a phase there can be at most k unmarked B servers (except for the first phase), corresponding to the k A servers being co-located with k B servers. During a phase USEB can unmark at most k B servers. So we can have at most k type 0 requests and 2k type 1 requests, which gives a competitive ratio of 3k. □

3.3.2 k + l < n
Thus there are at most k type 0 requests in a phase. At the start of a phase there can be a maximum of k unmarked B servers (except for 32 the first phase) corresponding to the k A servers being co-located with k B servers. During a phase USEB can unmark a maximum of k B servers. So we can have maximum of k type 0 requests and 2k type I requests, which gives a competitive ratio of 3k. [:1 332 k+I