Simpler and Better Algorithms for Minimum-Norm Load Balancing
Recently, Chakrabarty and Swamy (STOC 2019) introduced the minimum-norm load-balancing problem on unrelated machines, wherein we are given a set J of jobs that need to be scheduled on a set of m unrelated machines, and a monotone, symmetric norm. We seek an assignment sigma: J -> [m] that minimizes the norm of the resulting load vector load_{sigma} in R_+^m, where load_{sigma}(i) is the load on machine i under the assignment sigma. Besides capturing all l_p norms, symmetric norms also capture other norms of interest, including top-l norms and ordered norms. Chakrabarty and Swamy give a (38+epsilon)-approximation algorithm for this problem via a general framework they develop for minimum-norm optimization: they first carefully reduce the problem (in a series of steps) to a problem called min-max ordered load balancing, and then devise a so-called deterministic oblivious LP-rounding algorithm for ordered load balancing.
We give a direct, simple (4+epsilon)-approximation algorithm for minimum-norm load balancing based on rounding a (near-optimal) solution to a novel convex-programming relaxation of the problem. Whereas the natural convex program encoding the minimum-norm load-balancing problem has a large, non-constant integrality gap, we show that this issue can be remedied by including a key constraint that bounds the "norm of the job-cost vector." Our techniques also yield an (essentially) 4-approximation for: (a) multi-norm load balancing, wherein we are given multiple monotone symmetric norms and seek an assignment respecting a given budget for each norm; and (b) the best simultaneous approximation factor achievable for all symmetric norms for a given instance.
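For concreteness, the natural convex relaxation suggested by the problem statement can be written as follows; this is a sketch only, assuming the standard notation p_{ij} for the processing time of job j on machine i and writing f for the given monotone symmetric norm. The paper's key additional constraint, bounding the norm of the fractional job-cost vector, is not shown here; its precise form is given in the paper.

    \begin{align*}
    \min\quad & f(\mathrm{load}_1, \dots, \mathrm{load}_m) \\
    \text{s.t.}\quad & \textstyle\sum_{i \in [m]} x_{ij} = 1 && \forall j \in J, \\
    & \mathrm{load}_i = \textstyle\sum_{j \in J} p_{ij}\, x_{ij} && \forall i \in [m], \\
    & x_{ij} \ge 0 && \forall i \in [m],\ j \in J.
    \end{align*}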
Welfare Maximization and Truthfulness in Mechanism Design with Ordinal Preferences
We study mechanism design problems in the {\em ordinal setting} wherein the
preferences of agents are described by orderings over outcomes, as opposed to
specific numerical values associated with them. This setting is relevant when
agents can compare outcomes, but aren't able to evaluate precise utilities for
them. Such a situation arises in diverse contexts including voting and matching
markets.
Our paper addresses two issues that arise in ordinal mechanism design. First, to
design social-welfare-maximizing mechanisms, one needs to be able to
quantitatively measure the welfare of an outcome, which is not clear in the
ordinal setting. Second, since the impossibility results of Gibbard and
Satterthwaite~\cite{Gibbard73,Satterthwaite75} force one to move to randomized
mechanisms, one needs a more nuanced notion of truthfulness.
We propose {\em rank approximation} as a metric for measuring the quality of
an outcome, which allows us to evaluate mechanisms based on worst-case
performance, and {\em lex-truthfulness} as a notion of truthfulness for
randomized ordinal mechanisms. Lex-truthfulness is stronger than notions
studied in the literature, and yet flexible enough to admit a rich class of
mechanisms {\em circumventing classical impossibility results}. We demonstrate
the usefulness of the above notions by devising lex-truthful mechanisms
achieving good rank-approximation factors, both in the general ordinal setting,
as well as structured settings such as {\em (one-sided) matching markets}, and
its generalizations, {\em matroid} and {\em scheduling} markets.
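As a concrete, classical example of a randomized ordinal mechanism for one-sided matching markets, consider random serial dictatorship. The sketch below is illustrative only and is not one of the mechanisms developed in the paper; the agent and item names are hypothetical.

    import random

    def random_serial_dictatorship(preferences, items):
        """Classical ordinal mechanism for one-sided matching:
        agents are ordered uniformly at random and each, in turn,
        takes their most-preferred item that is still available.

        preferences: dict mapping agent -> list of items, best first.
        Returns a dict mapping agent -> assigned item (or None).
        """
        available = set(items)
        order = list(preferences)
        random.shuffle(order)          # uniformly random agent order
        assignment = {}
        for agent in order:
            pick = next((it for it in preferences[agent] if it in available), None)
            assignment[agent] = pick
            if pick is not None:
                available.discard(pick)
        return assignment

    # Example: three agents, three items.
    prefs = {"a1": ["x", "y", "z"], "a2": ["x", "z", "y"], "a3": ["y", "x", "z"]}
    print(random_serial_dictatorship(prefs, ["x", "y", "z"]))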
Approximability of Sparse Integer Programs
The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, if P≠NP this ratio cannot be improved to k−1−ε, and under the unique games conjecture this ratio cannot be improved to k−ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a (2k^2+2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k=2, and for both problems when every A_{ij} is small compared to b_i. Finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column.
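As background for the row-sparse covering result, a classical warm-up (not the paper's algorithm, which handles general nonnegative A and multiplicity bounds d via knapsack-cover inequalities): when A is a 0/1 matrix and the variables are binary, rounding up every LP variable with value at least 1/k yields a feasible solution of cost at most k times the LP optimum. A minimal sketch, assuming SciPy's linprog is available:

    import numpy as np
    from scipy.optimize import linprog

    def kround_set_cover(A, c, k=None):
        """Threshold rounding for the 0/1 covering LP
        min c.x  s.t.  A x >= 1,  0 <= x <= 1,
        where A is a 0/1 matrix with at most k nonzeroes per row
        (element frequency <= k).  Rounding every LP variable with
        value >= 1/k up to 1 is feasible and costs <= k * OPT_LP.
        """
        A = np.asarray(A, dtype=float)
        if k is None:
            k = int(A.sum(axis=1).max())          # max nonzeroes in any row
        m, n = A.shape
        res = linprog(c, A_ub=-A, b_ub=-np.ones(m),
                      bounds=[(0, 1)] * n, method="highs")
        x = np.where(res.x >= 1.0 / k, 1.0, 0.0)  # threshold rounding
        assert np.all(A @ x >= 1 - 1e-9)          # sanity check: still covering
        return x

    # Element frequency k = 2 (a vertex-cover-like instance).
    A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
    print(kround_set_cover(A, c=[1.0, 1.0, 1.0]))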
Better and Simpler Error Analysis of the Sinkhorn-Knopp Algorithm for Matrix Scaling
Given a non-negative real matrix A, the matrix scaling problem is to determine if it is possible to scale the rows and columns so that each row and each column sums to a specified target value for it.
The matrix scaling problem arises in many algorithmic applications, perhaps most notably as a preconditioning step in solving linear systems of equations. One of the most natural, and by now classical, approaches to matrix scaling is the Sinkhorn-Knopp algorithm (also known as the RAS method), where one alternately scales either all rows or all columns to meet the target values. In addition to being extremely simple and natural, another appeal of this procedure is that it easily lends itself to parallelization. A central question is to understand the rate of convergence of the Sinkhorn-Knopp algorithm.
Specifically, given a suitable error metric to measure deviations from the target values and an error bound epsilon, how quickly does the Sinkhorn-Knopp algorithm converge to an error below epsilon? While several non-trivial convergence results are known for the Sinkhorn-Knopp algorithm, perhaps somewhat surprisingly, the rate of convergence is not entirely understood even for natural error metrics such as the ell_1-error or the ell_2-error.
In this paper, we present an elementary convergence analysis for the Sinkhorn-Knopp algorithm that improves upon the previous best bound. In a nutshell, our approach is to show (i) a simple bound on the number of iterations needed so that the KL-divergence between the current row-sums and the target row-sums drops below a specified threshold delta, and then (ii) that for a suitable choice of delta, whenever the KL-divergence is below delta, the ell_1-error or the ell_2-error is below epsilon. The well-known Pinsker's inequality immediately allows us to translate a bound on the KL-divergence into a bound on the ell_1-error. To bound the ell_2-error in terms of the KL-divergence, we establish a new inequality, referred to as the (KL vs ell_1/ell_2) inequality in the paper. This new inequality is a strengthening of Pinsker's inequality that we believe is of independent interest. Our analysis of the ell_2-error significantly improves upon the best previous convergence bound for the ell_2-error.
The idea of studying Sinkhorn-Knopp convergence via the KL-divergence is not new and has indeed been previously explored. Our contribution is an elementary, self-contained presentation of this approach and an interesting new inequality that yields a significantly stronger convergence guarantee for the extensively studied ell_2-error.
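A minimal sketch of the alternate row/column scaling described above, using the ell_1 deviation from the targets as the stopping criterion; the matrix, targets, and tolerance below are illustrative, and a strictly positive matrix is assumed so that every scaling step is well defined.

    import numpy as np

    def sinkhorn_knopp(A, r, c, eps=1e-6, max_iters=10_000):
        """Alternately rescale rows and columns of a nonnegative matrix A
        so that the row sums approach r and the column sums approach c.
        Stops once the ell_1 deviation of the row and column sums from
        their targets drops below eps.  Returns the scaled matrix.
        Assumes strictly positive entries so no row/column sum is zero.
        """
        B = np.array(A, dtype=float)
        for _ in range(max_iters):
            B *= (r / B.sum(axis=1))[:, None]      # scale rows to hit r exactly
            B *= (c / B.sum(axis=0))[None, :]      # scale columns to hit c exactly
            err = np.abs(B.sum(axis=1) - r).sum() + np.abs(B.sum(axis=0) - c).sum()
            if err < eps:                          # ell_1 error below threshold
                break
        return B

    # Doubly stochastic scaling of a positive 3x3 matrix.
    A = np.array([[1.0, 2.0, 3.0], [2.0, 1.0, 1.0], [3.0, 1.0, 2.0]])
    ones = np.ones(3)
    B = sinkhorn_knopp(A, ones, ones)
    print(B.sum(axis=1), B.sum(axis=0))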
Integrality Gap of the Hypergraphic Relaxation of Steiner Trees: a short proof of a 1.55 upper bound
Recently, Byrka, Grandoni, Rothvoss and Sanita (STOC 2010) gave a
1.39-approximation for the Steiner tree problem, using a hypergraph-based
linear programming relaxation. They also upper-bounded its integrality gap by
1.55. We describe a shorter proof of the same integrality gap bound, by
applying some of their techniques to a randomized loss-contracting algorithm.
The Non-Uniform k-Center Problem
In this paper, we introduce and study the Non-Uniform k-Center problem (NUkC).
Given a finite metric space and a collection of balls of radii r_1 >= r_2 >= ... >= r_k,
the NUkC problem is to find a placement of their centers in the metric space and the
minimum dilation alpha such that the union of the balls of radius alpha * r_i around
the i-th center covers all the points of the metric space. This problem naturally
arises as a min-max vehicle routing problem with fleets of different speeds.
The NUkC problem generalizes the classic k-center problem, obtained when all the
radii are the same (which can be assumed to be 1 after scaling). It also generalizes
the k-center with outliers (kCwO) problem, obtained when k of the balls have radius 1
and the remaining balls have radius 0. There are 2-approximation and 3-approximation
algorithms known for these problems, respectively; the former is best possible unless
P=NP, and the latter has remained unimproved for 15 years.
We first observe that no constant-factor approximation to the optimal dilation is
possible unless P=NP, implying that the NUkC problem is more non-trivial than the
above two problems. Our main algorithmic result is a bi-criteria approximation: we
give a constant-factor approximation to the optimal dilation; however, we may open
a constant number of centers of each radius. Our techniques also allow us to prove
a simple (uni-criteria), optimal 2-approximation for the kCwO problem, improving
upon the long-standing factor of 3. Our main technical contribution is a connection
between the NUkC problem and the so-called firefighter problems on trees, which have
been studied recently in the TCS community.
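For reference, the classical uniform k-center baseline mentioned above can be approximated within factor 2 by the farthest-point greedy heuristic. The sketch below shows only that baseline (it is not the paper's NUkC algorithm), for points in Euclidean space; the random instance is illustrative.

    import numpy as np

    def kcenter_greedy(points, k):
        """Farthest-point heuristic for uniform k-center (a 2-approximation):
        repeatedly add the point farthest from the centers chosen so far.
        points: (n, d) array.  Returns the chosen center indices and the
        covering radius achieved (max distance to the nearest center).
        """
        pts = np.asarray(points, dtype=float)
        centers = [0]                                   # arbitrary first center
        dist = np.linalg.norm(pts - pts[0], axis=1)     # distance to nearest center
        for _ in range(1, k):
            nxt = int(np.argmax(dist))                  # farthest point so far
            centers.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
        return centers, float(dist.max())

    rng = np.random.default_rng(0)
    pts = rng.random((100, 2))
    centers, radius = kcenter_greedy(pts, k=5)
    print(centers, radius)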
On Column-restricted and Priority Covering Integer Programs
In a column-restricted covering integer program (CCIP), all the non-zero
entries of any column of the constraint matrix are equal. Such programs capture
capacitated versions of covering problems. In this paper, we study the
approximability of CCIPs, in particular, their relation to the integrality gaps
of the underlying 0,1-CIP.
If the underlying 0,1-CIP has an integrality gap O(gamma), and assuming that
the integrality gap of the priority version of the 0,1-CIP is O(omega), we give
a factor O(gamma + omega) approximation algorithm for the CCIP. Priority
versions of 0,1-CIPs (PCIPs) naturally capture quality of service type
constraints in a covering problem.
We investigate priority versions of the line (PLC) and the (rooted) tree
cover (PTC) problems. Apart from being natural objects to study, these problems
fall in a class of fundamental geometric covering problems. We bound the
integrality gap of certain classes of these PCIPs by a constant. Algorithmically, we
give a polytime exact algorithm for PLC, show that the PTC problem is APX-hard,
and give a factor-2 approximation algorithm for it.
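Schematically, and assuming illustrative notation (s_j for the common nonzero value in column j, d_j for the multiplicity bound on variable j), a column-restricted covering integer program has the form

    \begin{align*}
    \min\quad & \textstyle\sum_{j} c_j x_j \\
    \text{s.t.}\quad & \textstyle\sum_{j:\, A_{ij} \neq 0} s_j\, x_j \ \ge\ b_i && \forall i, \\
    & 0 \le x_j \le d_j,\quad x_j \in \mathbb{Z} && \forall j,
    \end{align*}

so replacing every s_j by 1 yields a covering program with a 0/1 constraint matrix, which is roughly the sense in which a 0,1-CIP underlies the CCIP.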
