Online Makespan Minimization with Parallel Schedules
In online makespan minimization a sequence of jobs
has to be scheduled on identical parallel machines so as to minimize the
maximum completion time of any job. We investigate the problem with an
essentially new model of resource augmentation. Here, an online algorithm is
allowed to build several schedules in parallel while processing a job sequence. At
the end of the scheduling process the best schedule is selected. This model can
be viewed as providing an online algorithm with extra space, which is invested
to maintain multiple solutions. The setting is of particular interest in
parallel processing environments where each processor can maintain a single or
a small set of solutions.
We develop a (4/3+\eps)-competitive algorithm, for any 0<\eps\leq 1, that
uses a number of 1/\eps^{O(\log (1/\eps))} schedules. We also give a
(1+\eps)-competitive algorithm, for any 0<\eps\leq 1, that builds a
polynomial number of (m/\eps)^{O(\log (1/\eps) / \eps)} schedules. This value
depends on m but is independent of the input job sequence. The performance
guarantees are nearly best possible. We show that any algorithm that achieves a
competitiveness smaller than 4/3 must construct \Omega(m) schedules. Our
algorithms make use of novel guessing schemes that (1) predict the optimum
makespan of a job sequence to within a factor of 1+\eps and (2)
guess the job processing times and their frequencies in the sequence. In (2) we
have to sparsify the universe of all guesses so as to reduce the number of
schedules to a constant.
The competitive ratios achieved using parallel schedules are considerably
smaller than those in the standard problem without resource augmentation.
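The parallel-schedules model can be sketched in a few lines (an illustrative toy, not the paper's algorithm: the `threshold_schedule` heuristic and the set of guesses are assumptions made for the sketch). One schedule per guessed optimum is maintained alongside plain greedy, and the best makespan is selected at the end.

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling: each arriving job goes to the least loaded machine."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

def threshold_schedule(jobs, m, guess):
    """Schedule guided by a guess of the optimum makespan: prefer the least
    loaded machine that keeps the load within (4/3) * guess."""
    loads = [0.0] * m
    for p in jobs:
        ok = [i for i in range(m) if loads[i] + p <= 4.0 / 3.0 * guess]
        i = min(ok or range(m), key=lambda j: loads[j])
        loads[i] += p
    return max(loads)

def best_of_parallel_schedules(jobs, m, guesses):
    """Build several schedules in parallel; select the best one at the end."""
    makespans = [list_schedule(jobs, m)]
    makespans += [threshold_schedule(jobs, m, g) for g in guesses]
    return min(makespans)
```

Since plain greedy is one of the maintained schedules, the selected makespan can never be worse than greedy's, which is the point of the extra-space model.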
Balanced Allocations: A Simple Proof for the Heavily Loaded Case
We provide a relatively simple proof that the expected gap between the
maximum load and the average load in the two-choice process is bounded by
O(\log \log n), irrespective of the number of balls thrown. The theorem
was first proven by Berenbrink et al. Their proof uses heavy machinery from
Markov chain theory, and some of the calculations are done using computers. In
this manuscript we provide a significantly simpler proof that is not aided by
computers and is self-contained. The simplification comes at the cost of weaker
bounds on the low-order terms and a weaker tail bound for the probability of
deviating from the expectation.
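The two-choice process itself is easy to simulate (a toy illustration of the statement, not part of the proof): each ball samples two bins uniformly at random and joins the less loaded one, and the gap between maximum and average load stays small even in heavily loaded runs.

```python
import random

def two_choice_gap(n_bins, n_balls, seed=0):
    """Throw n_balls balls into n_bins bins; each ball samples two bins
    uniformly at random and joins the currently less loaded one.
    Returns the gap between the maximum and the average load."""
    rng = random.Random(seed)
    loads = [0] * n_bins
    for _ in range(n_balls):
        i, j = rng.randrange(n_bins), rng.randrange(n_bins)
        loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads) - n_balls / n_bins
```

With one random choice per ball instead of two, the same experiment produces a gap that grows with the number of balls, which is what makes the two-choice result striking.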
Scheduling Packets with Values and Deadlines in Size-bounded Buffers
Motivated by providing quality-of-service differentiated services in the
Internet, we consider buffer management algorithms for network switches. We
study a multi-buffer model. A network switch consists of multiple size-bounded
buffers such that at any time, the number of packets residing in each
individual buffer cannot exceed its capacity. Packets arrive at the network
switch over time; they have values, deadlines, and designated buffers. In each
time step, at most one pending packet is allowed to be sent and this packet can
be from any buffer. The objective is to maximize the total value of the packets
sent by their respective deadlines. A 9.82-competitive online algorithm has
been provided for this model (Azar and Levy, SWAT 2006), but no offline
algorithms were previously known. In this paper, we study the offline setting of
the multi-buffer model. Our contributions include a few optimal offline
algorithms for some variants of the model. Each variant has its own
interesting algorithmic feature. These offline algorithms help us better
understand the model when designing online algorithms. Comment: 7 pages
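For intuition, the single-send constraint can be illustrated with a simple greedy heuristic (hypothetical code, not one of the paper's optimal offline algorithms; buffer capacities are omitted for brevity): in each time step, send the most valuable pending packet whose deadline has not passed.

```python
def greedy_send(packets, horizon):
    """packets: list of (arrival, deadline, value) tuples. In each time step,
    send the highest-value pending packet whose deadline has not passed;
    return the total value sent."""
    pending, total = [], 0
    for t in range(horizon):
        pending += [p for p in packets if p[0] == t]   # arrivals at time t
        pending = [p for p in pending if p[1] >= t]    # drop expired packets
        if pending:
            best = max(pending, key=lambda p: p[2])
            pending.remove(best)
            total += best[2]
    return total
```

For example, with packets (0, 1, 5), (0, 0, 3), and (1, 1, 4), the greedy sends value 5 at step 0 and value 4 at step 1 for a total of 9; in general such greedy choices are not optimal, which is why the offline variants need dedicated algorithms.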
Balanced Allocation on Graphs: A Random Walk Approach
In this paper we propose algorithms for allocating n sequential balls into n
bins that are interconnected as a d-regular n-vertex graph G, where d can
be any integer. Let \ell be a given positive integer. In each round t,
1 \leq t \leq n, ball t picks a node of G uniformly at random and
performs a non-backtracking random walk of length \ell from the chosen node. Then
it allocates itself on one of the visited nodes with minimum load (ties are
broken uniformly at random). Suppose that G has a sufficiently large girth
and d = \omega(\log n). Then we establish an upper bound for the maximum number
of balls at any bin after allocating n balls by the algorithm, called {\it
maximum load}, in terms of \ell with high probability. We also show that the
upper bound is at most an O(\log \log n) factor above the lower bound that is
proved for the algorithm. In particular, we show that for a suitable choice of the
walk length \ell, for every constant \eps > 0, if G has sufficiently large
girth, then the maximum load attained by the algorithm is bounded by a constant
depending only on \eps with high probability. Finally, we slightly modify the
algorithm to obtain similar results for balanced allocation on d-regular graphs
with d = O(\log n) and sufficiently large girth.
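The allocation rule described above can be sketched directly (an illustrative simulation with hypothetical parameter names; the graph is given as an adjacency list):

```python
import random

def nbrw_allocate(adj, n_balls, walk_len, seed=0):
    """Allocate balls into the nodes of a graph given as an adjacency list:
    each ball starts at a uniformly random node, performs a non-backtracking
    random walk of length walk_len, and joins the least loaded visited node
    (ties broken uniformly at random)."""
    rng = random.Random(seed)
    loads = [0] * len(adj)
    for _ in range(n_balls):
        prev, cur = None, rng.randrange(len(adj))
        visited = [cur]
        for _ in range(walk_len):
            # never step back to the node we just came from (non-backtracking)
            steps = [u for u in adj[cur] if u != prev] or adj[cur]
            prev, cur = cur, rng.choice(steps)
            visited.append(cur)
        low = min(loads[v] for v in visited)
        loads[rng.choice([v for v in visited if loads[v] == low])] += 1
    return loads
```

The walk gives each ball a window of walk_len + 1 candidate bins at the cost of only local communication along the graph, which is the appeal of the approach over sampling bins independently.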
Statistical mechanics of budget-constrained auctions
Finding the optimal assignment in budget-constrained auctions is a
combinatorial optimization problem with many important applications, a notable
example being the sale of advertisement space by search engines (in this
context the problem is often referred to as the off-line AdWords problem).
Based on the cavity method of statistical mechanics, we introduce a message
passing algorithm that is capable of solving efficiently random instances of
the problem extracted from a natural distribution, and we derive from its
properties the phase diagram of the problem. As the control parameter (average
value of the budgets) is varied, we find two phase transitions delimiting a
region in which long-range correlations arise. Comment: Minor revisions
The Estimation of Oil Palm Carbon Stock in Sembilang Dangku Landscape, South Sumatra
Oil palm has the ability to sequester carbon dioxide, which is stored as carbon stock. This study aimed to estimate carbon stock in several age classes, to determine the relationship between the Normalized Difference Vegetation Index (NDVI) and carbon stock, and to estimate the distribution of oil palm carbon stock in the Sembilang Dangku Landscape. Carbon stock was estimated for the non-productive plant age phases, namely <2 years and 2-3 years, and for the productive plant age phases, namely 4-10 years and >10 years, using allometric equations. Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) imagery was analyzed to determine NDVI. A map classifying the carbon stock distribution was produced using QGIS Las Palmas 2.18.0. The results showed that the carbon stock was 9.50 ton C/ha in the <2 years age class, 9.62 ton C/ha in the 2-3 years class, 28.23 ton C/ha in the 4-10 years class, and 79.83 ton C/ha in the >10 years class. NDVI and carbon stock were strongly correlated (r = 0.9972), with the regression equation Y = 638.13x - 242.65. The carbon stock distribution by percentage of area was as follows: <15 ton C/ha covering 26.52% of the area, 15-25 ton C/ha covering 5.29%, 26-70 ton C/ha covering 35.41%, and >70 ton C/ha covering 32.78%.
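The reported regression maps NDVI values directly to carbon stock estimates, as in this one-line sketch (valid only within the NDVI range observed in the study):

```python
def carbon_stock_from_ndvi(ndvi):
    """Estimate oil palm carbon stock (ton C/ha) from NDVI using the reported
    linear regression Y = 638.13x - 242.65 (r = 0.9972)."""
    return 638.13 * ndvi - 242.65
```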
Family Efforts to Prevent Transmission While Caring for a Family Member with Pulmonary Tuberculosis
Indonesia has the fourth-highest incidence of pulmonary tuberculosis (TB) cases in the world. This study used a qualitative design with a case study research approach and aimed to describe families' efforts to prevent transmission while caring for a family member with pulmonary TB. Data analysis yielded three themes and seven subthemes: (1) environmental modification, with the subthemes of providing adequate ventilation and maintaining cleanliness; (2) efforts to interrupt disease transmission, with the subthemes of proper sputum disposal, wearing masks, and covering the mouth when coughing; and (3) medication adherence and routine check-ups at the community health center (Puskesmas), with the subthemes of family supervision of medication intake (PMO) and routine visits to the Puskesmas. Based on these results, it is hoped that the Puskesmas can expand and modify its tuberculosis (TB) control program. In addition, periodic supervision or routine home visits are needed to monitor the treatment and the transmission-prevention measures carried out by families at home.
Locally Optimal Load Balancing
This work studies distributed algorithms for locally optimal load-balancing:
We are given a graph of maximum degree \Delta, and each node has up to L
units of load. The task is to distribute the load more evenly so that the loads
of adjacent nodes differ by at most 1.
If the graph is a path (\Delta = 2), it is easy to solve the fractional
version of the problem in O(L) communication rounds, independently of the
number of nodes. We show that this is tight, and we show that it is possible to
solve also the discrete version of the problem in O(L) rounds in paths.
For the general case (\Delta > 2), we show that fractional load balancing
can be solved in poly(L, \Delta) rounds and discrete load
balancing in f(L, \Delta) rounds for some function f, independently of the
number of nodes. Comment: 19 pages, 11 figures
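On a path, the local balancing condition can be illustrated with a sequential simulation of purely local moves (a centralized sketch of the local process, not the distributed algorithm studied in the paper): repeatedly shift one unit of load toward an adjacent node whose load is at least 2 lower.

```python
def balance_path(loads):
    """Locally optimal load balancing on a path: repeatedly move one unit of
    load to an adjacent node whose load is at least 2 lower, until all
    adjacent loads differ by at most 1. Returns the balanced loads."""
    loads = list(loads)
    changed = True
    while changed:
        changed = False
        for i in range(len(loads) - 1):
            if loads[i] - loads[i + 1] >= 2:
                loads[i] -= 1
                loads[i + 1] += 1
                changed = True
            elif loads[i + 1] - loads[i] >= 2:
                loads[i + 1] -= 1
                loads[i] += 1
                changed = True
    return loads
```

Each move strictly decreases the sum of squared loads, so the process terminates; note that the result is only locally balanced, so adjacent loads differ by at most 1, while distant nodes may still differ by more.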
Wear Minimization for Cuckoo Hashing: How Not to Throw a Lot of Eggs into One Basket
We study wear-leveling techniques for cuckoo hashing, showing that it is
possible to achieve a memory wear bound of \log\log n + O(1) after the
insertion of n items into a table of size Cn, for a suitable constant C,
using cuckoo hashing. Moreover, we study our cuckoo hashing method empirically,
showing that it significantly improves on the memory wear performance of
classic cuckoo hashing and linear probing in practice. Comment: 13 pages, 1 table, 7 figures; to appear at the 13th Symposium on
Experimental Algorithms (SEA 2014)
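Plain cuckoo hashing with per-slot write counters makes the wear notion concrete (an illustrative baseline with hypothetical names, not the paper's wear-leveling scheme; it is precisely the write concentration of this classic variant that the paper's method avoids):

```python
import random

class CuckooHash:
    """Cuckoo hashing with two tables and per-slot wear (write) counters."""

    def __init__(self, size, seed=0):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.wear = [[0] * size, [0] * size]
        rng = random.Random(seed)
        self.salts = (rng.randrange(1 << 30), rng.randrange(1 << 30))

    def _slot(self, which, key):
        return hash((self.salts[which], key)) % self.size

    def insert(self, key, max_kicks=100):
        """Place key in its slot, evicting any occupant to the other table;
        repeat until an empty slot is found or max_kicks is exhausted."""
        which = 0
        for _ in range(max_kicks):
            i = self._slot(which, key)
            self.wear[which][i] += 1  # every write to a slot counts as wear
            self.tables[which][i], key = key, self.tables[which][i]
            if key is None:
                return True
            which = 1 - which
        return False  # a full implementation would rehash here

    def contains(self, key):
        return any(self.tables[w][self._slot(w, key)] == key for w in (0, 1))
```

Tracking the maximum entry of `wear` over a workload is what distinguishes wear-leveling analyses from the usual time or space measures.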
Managing Climate Risk
At the heart of the traditional approach to strategy in the climate change dilemma lies the assumption that the global community, by applying a set of powerful analytical tools, can predict the future of climate change accurately enough to choose a clear strategic direction. We claim that this approach tends to underestimate uncertainty in order to lay out a vision of future events precise enough to be captured in a discounted cost flow analysis within integrated assessment models. However, since the future of climate change is truly uncertain, this approach can be at best marginally helpful and at worst downright dangerous: underestimating uncertainty can lead to strategies that do not defend the world against unexpected and sometimes even catastrophic threats. Another danger lies at the other extreme: if the global community cannot find a strategy that works under traditional analysis, or if the uncertainties are so large that clear messages are absent, it may abandon the analytical rigor of its planning process altogether and base its decisions on instinct and on whatever consensus is easiest to agree upon.
In this paper, we outline a systematic approach to deriving strategic decisions under uncertainty for the climate change dilemma. What follows is a framework for determining the level of uncertainty surrounding strategic decisions and for tailoring strategy to that uncertainty.
Our core argument is that a robust strategy towards climate change involves building a technological portfolio of mitigation and adaptation measures that includes sufficient opposite technological positions to the underlying baseline emission scenarios, given the uncertainties of the entire physical and socioeconomic system in place. In the case of mitigation, the opposite technological positions with the highest leverage are particular types of sinks. A robust climate risk management portfolio can only work if the opposite technological positions are readily available when needed, and they therefore have to be prepared in advance. It is precisely the flexibility of these technological options that has to be quantified in light of the uncertain nature of the underlying system and compared to the cost of creating these options, rather than comparing their cost with expected losses in a net-present-value type analysis. We conclude that climate policy, especially under consideration of the precautionary principle, would look much different if uncertainties were taken explicitly into account.
