
    Online Makespan Minimization with Parallel Schedules

    In online makespan minimization a sequence of jobs σ = J_1, ..., J_n has to be scheduled on m identical parallel machines so as to minimize the maximum completion time of any job. We investigate the problem with an essentially new model of resource augmentation. Here, an online algorithm is allowed to build several schedules in parallel while processing σ. At the end of the scheduling process the best schedule is selected. This model can be viewed as providing an online algorithm with extra space, which is invested to maintain multiple solutions. The setting is of particular interest in parallel processing environments where each processor can maintain a single solution or a small set of solutions. We develop a (4/3 + ε)-competitive algorithm, for any 0 < ε ≤ 1, that uses (1/ε)^{O(log(1/ε))} schedules. We also give a (1 + ε)-competitive algorithm, for any 0 < ε ≤ 1, that builds a polynomial number of (m/ε)^{O(log(1/ε)/ε)} schedules; this value depends on m but is independent of the input σ. The performance guarantees are nearly best possible: we show that any algorithm achieving a competitive ratio smaller than 4/3 must construct Ω(m) schedules. Our algorithms make use of novel guessing schemes that (1) predict the optimum makespan of a job sequence σ to within a factor of 1 + ε and (2) guess the job processing times and their frequencies in σ. In (2) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant. The competitive ratios achieved using parallel schedules are considerably smaller than those in the standard problem without resource augmentation.
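
    To make the parallel-schedules model concrete, here is a minimal sketch (not the paper's algorithm): several greedy schedules are built side by side, each guided by a different guess of the optimum makespan drawn from a geometric grid, and the best schedule is kept at the end. The guess grid and the assignment rule are illustrative assumptions.

```python
# Minimal sketch of the parallel-schedules model (illustrative, not the paper's algorithm):
# build one greedy schedule per guess of the optimum makespan and keep the best.

def makespan_with_parallel_schedules(jobs, m, eps=0.5):
    """jobs: processing times in arrival order; m: number of identical machines."""
    lower = max(max(jobs), sum(jobs) / m)          # standard lower bound on OPT
    guesses, g = [], lower
    while g <= 2 * lower + 1e-9:                   # geometric grid of guesses (assumption)
        guesses.append(g)
        g *= 1 + eps

    best = float("inf")
    for guess in guesses:                          # one schedule per guess, built job by job
        loads = [0.0] * m
        for p in jobs:
            # Prefer a machine whose load stays below (1 + eps) * guess, else least loaded.
            ok = [i for i in range(m) if loads[i] + p <= (1 + eps) * guess]
            loads[min(ok or range(m), key=loads.__getitem__)] += p
        best = min(best, max(loads))
    return best
```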

    Balanced Allocations: A Simple Proof for the Heavily Loaded Case

    We provide a relatively simple proof that the expected gap between the maximum load and the average load in the two-choice process is bounded by (1 + o(1)) log log n, irrespective of the number of balls thrown. The theorem was first proven by Berenbrink et al. Their proof uses heavy machinery from Markov chain theory and some of the calculations are done using computers. In this manuscript we provide a significantly simpler proof that is not aided by computers and is self-contained. The simplification comes at the cost of weaker bounds on the low-order terms and a weaker tail bound for the probability of deviating from the expectation.
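
    As a point of reference, the two-choice process the bound concerns is easy to simulate: each ball probes two bins chosen uniformly at random and is placed in the lighter one, and the gap is the maximum load minus the average load. The bin and ball counts below are arbitrary.

```python
import random

def two_choice_gap(n_bins, n_balls, rng=random):
    """Simulate the two-choice process; return max load minus average load."""
    loads = [0] * n_bins
    for _ in range(n_balls):
        i, j = rng.randrange(n_bins), rng.randrange(n_bins)
        loads[i if loads[i] <= loads[j] else j] += 1   # place the ball in the lighter bin
    return max(loads) - n_balls / n_bins

# The gap stays around log log n even when many more balls than bins are thrown.
print(two_choice_gap(n_bins=1_000, n_balls=100_000))
```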

    Scheduling Packets with Values and Deadlines in Size-bounded Buffers

    Motivated by providing quality-of-service differentiated services in the Internet, we consider buffer management algorithms for network switches. We study a multi-buffer model. A network switch consists of multiple size-bounded buffers such that at any time, the number of packets residing in each individual buffer cannot exceed its capacity. Packets arrive at the network switch over time; they have values, deadlines, and designated buffers. In each time step, at most one pending packet is allowed to be sent, and this packet can be from any buffer. The objective is to maximize the total value of the packets sent by their respective deadlines. A 9.82-competitive online algorithm has been given for this model (Azar and Levy, SWAT 2006), but no offline algorithms were previously known. In this paper, we study the offline setting of the multi-buffer model. Our contributions include optimal offline algorithms for several variants of the model, each with its own interesting algorithmic feature. These offline algorithms help us understand the model better when designing online algorithms.
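
    For illustration only, here is a toy simulation of the multi-buffer model with a naive greedy rule (always send the most valuable pending packet). This is a baseline sketch, not one of the paper's optimal offline algorithms nor the 9.82-competitive online algorithm.

```python
def greedy_multi_buffer(arrivals, capacities, horizon):
    """Toy multi-buffer model: arrivals[t] lists (value, deadline, buffer_id) packets.
    Each buffer is size-bounded; at most one packet may be sent per time step."""
    buffers = {b: [] for b in capacities}
    total_value = 0
    for t in range(horizon):
        for value, deadline, b in arrivals.get(t, []):
            if len(buffers[b]) < capacities[b]:            # respect buffer capacity
                buffers[b].append((value, deadline))
        for b in buffers:                                  # drop packets whose deadlines passed
            buffers[b] = [p for p in buffers[b] if p[1] >= t]
        pending = [(v, d, b) for b in buffers for v, d in buffers[b]]
        if pending:                                        # send the most valuable pending packet
            v, d, b = max(pending)
            buffers[b].remove((v, d))
            total_value += v
    return total_value
```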

    Balanced Allocation on Graphs: A Random Walk Approach

    In this paper we propose algorithms for allocating n sequential balls into n bins that are interconnected as a d-regular n-vertex graph G, where d ≥ 3 can be any integer. Let l be a given positive integer. In each round t, 1 ≤ t ≤ n, ball t picks a node of G uniformly at random and performs a non-backtracking random walk of length l from the chosen node. Then it allocates itself on one of the visited nodes with minimum load (ties are broken uniformly at random). Suppose that G has a sufficiently large girth and d = ω(log n). Then we establish an upper bound on the maximum number of balls at any bin after allocating n balls by the algorithm, called the maximum load, in terms of l with high probability. We also show that the upper bound is at most an O(log log n) factor above the lower bound that is proved for the algorithm. In particular, we show that if we set l = ⌊(log n)^{(1+ε)/2}⌋, for every constant ε ∈ (0, 1), and G has girth at least ω(l), then the maximum load attained by the algorithm is bounded by O(1/ε) with high probability. Finally, we slightly modify the algorithm to obtain similar results for balanced allocation on d-regular graphs with d ∈ [3, O(log n)] and sufficiently large girth.
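
    A short sketch of the allocation rule described above, assuming G is given as adjacency lists: each ball starts at a uniformly random node, walks l non-backtracking steps, and settles on a visited node of minimum load, with ties broken uniformly at random.

```python
import random

def non_backtracking_allocation(adj, n_balls, walk_len, rng=random):
    """adj: adjacency lists of a d-regular graph (d >= 3); returns the load vector."""
    loads = [0] * len(adj)
    for _ in range(n_balls):
        u = rng.randrange(len(adj))
        visited, prev = [u], None
        for _ in range(walk_len):
            prev, u = u, rng.choice([w for w in adj[u] if w != prev])  # never step straight back
            visited.append(u)
        least = min(loads[v] for v in visited)
        loads[rng.choice([v for v in set(visited) if loads[v] == least])] += 1
    return loads
```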

    Statistical mechanics of budget-constrained auctions

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of efficiently solving random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (the average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

    The Estimation of Oil Palm Carbon Stock in Sembilang Dangku Landscape, South Sumatra

    Oil palm has the ability to sequester carbon dioxide, which is stored as carbon stock. This study aimed to estimate carbon stock in several age classes, to determine the relationship between the Normalized Difference Vegetation Index (NDVI) and carbon stock, and to estimate the distribution of oil palm carbon stock in the Sembilang Dangku Landscape. Carbon stock estimation was carried out for the non-productive plant age phase, namely <2 years and 2-3 years, and the productive plant age phase, namely 4-10 years and >10 years. The carbon stock estimation used allometric equations. Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) imagery was analyzed to determine NDVI. A map classifying the carbon stock distribution was produced using QGIS Las Palmas 2.18.0. The results showed that the carbon stock in the <2 years age class was 9.50 ton C/ha, in the 2-3 years class 9.62 ton C/ha, in the 4-10 years class 28.23 ton C/ha, and in the >10 years class 79.83 ton C/ha. The relation between NDVI and carbon stock showed a strong correlation (r = 0.9972) with the regression equation Y = 638.13x - 242.65. The carbon stock distribution by percentage of area was as follows: <15 ton C/ha covering 26.52% of the area, 15-25 ton C/ha covering 5.29%, 26-70 ton C/ha covering 35.41%, and >70 ton C/ha covering 32.78%.
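
    The reported regression (Y = 638.13x - 242.65, r = 0.9972) maps NDVI directly to estimated carbon stock; the sample NDVI values in the sketch below are hypothetical.

```python
def carbon_stock_from_ndvi(ndvi):
    """Estimated oil palm carbon stock (ton C/ha) from NDVI via the reported
    regression Y = 638.13 * NDVI - 242.65."""
    return 638.13 * ndvi - 242.65

# Hypothetical NDVI values, for illustration only.
for ndvi in (0.40, 0.45, 0.50):
    print(f"NDVI {ndvi:.2f} -> {carbon_stock_from_ndvi(ndvi):.2f} ton C/ha")
```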

    Family Efforts to Prevent Transmission When Caring for Family Members with Pulmonary TB

    Indonesia has the fourth-highest incidence of pulmonary tuberculosis (TB) in the world. This study used a qualitative design with a case study research approach and aimed to describe family efforts to prevent transmission when caring for family members with pulmonary TB. From the data analysis, three themes and seven subthemes were identified: (1) environmental modification, with the subthemes of providing adequate ventilation and maintaining cleanliness; (2) efforts to interrupt disease transmission, with the subthemes of proper sputum disposal, use of masks, and covering the mouth when coughing; (3) medication adherence and routine check-ups at the Puskesmas (community health center), with the subthemes of family supervision of medication intake (PMO) and routine visits to the Puskesmas. Based on these results, it is expected that the Puskesmas can expand and modify its tuberculosis (TB) control program. In addition, periodic supervision or routine home visits are needed to monitor the treatment and the transmission-prevention measures carried out by families at home.

    Locally Optimal Load Balancing

    This work studies distributed algorithms for locally optimal load balancing: we are given a graph of maximum degree Δ, and each node has up to L units of load. The task is to distribute the load more evenly so that the loads of adjacent nodes differ by at most 1. If the graph is a path (Δ = 2), it is easy to solve the fractional version of the problem in O(L) communication rounds, independently of the number of nodes. We show that this is tight, and that it is also possible to solve the discrete version of the problem in O(L) rounds on paths. For the general case (Δ > 2), we show that fractional load balancing can be solved in poly(L, Δ) rounds and discrete load balancing in f(L, Δ) rounds for some function f, independently of the number of nodes.
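
    A minimal sequential sketch of the discrete problem on a path (Δ = 2): keep moving a single unit of load toward an adjacent node with at least two units less until adjacent loads differ by at most 1. This only illustrates the target condition; it is not the paper's round-efficient distributed algorithm.

```python
def balance_path(loads):
    """Discrete load balancing on a path: stop once adjacent loads differ by at most 1."""
    loads = list(loads)
    moved = True
    while moved:
        moved = False
        for i in range(len(loads) - 1):
            if loads[i] >= loads[i + 1] + 2:       # shift one unit to the right
                loads[i] -= 1; loads[i + 1] += 1; moved = True
            elif loads[i + 1] >= loads[i] + 2:     # shift one unit to the left
                loads[i + 1] -= 1; loads[i] += 1; moved = True
    return loads

print(balance_path([7, 0, 0, 0, 5]))   # adjacent loads in the result differ by at most 1
```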

    Wear Minimization for Cuckoo Hashing: How Not to Throw a Lot of Eggs into One Basket

    We study wear-leveling techniques for cuckoo hashing, showing that it is possible to achieve a memory wear bound of log log n + O(1) after the insertion of n items into a table of size Cn, for a suitable constant C, using cuckoo hashing. Moreover, we study our cuckoo hashing method empirically, showing that it significantly improves on the memory wear performance of classic cuckoo hashing and linear probing in practice. (To appear at the 13th Symposium on Experimental Algorithms, SEA 2014.)
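
    To make "memory wear" concrete, here is a sketch of classic cuckoo hashing instrumented with per-cell write counters (the wear of a cell is the number of times it is written). The hash functions and constants are illustrative, and this is the plain baseline, not the paper's wear-leveling variant.

```python
class CuckooTable:
    """Classic two-table cuckoo hashing with per-cell write counters, so the memory
    wear (maximum number of writes to any cell) can be observed. Plain baseline,
    not the paper's wear-leveling variant; hash functions are illustrative."""

    def __init__(self, size, max_kicks=100):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.wear = [[0] * size, [0] * size]          # write counts per cell
        self.max_kicks = max_kicks

    def _slot(self, which, key):
        return hash((which, key)) % self.size          # illustrative hash function

    def insert(self, key):
        which = 0
        for _ in range(self.max_kicks):
            i = self._slot(which, key)
            self.wear[which][i] += 1                   # this cell is written once more
            self.tables[which][i], key = key, self.tables[which][i]
            if key is None:
                return True                            # empty cell found, done
            which = 1 - which                          # evicted key retries in the other table
        return False                                   # give up (a real table would rehash)

    def max_wear(self):
        return max(max(row) for row in self.wear)
```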

    Global Ultrasound Elastography Using Convolutional Neural Network

    Displacement estimation is very important in ultrasound elastography, and failing to estimate displacement correctly results in failure to generate strain images. As conventional ultrasound elastography techniques suffer from decorrelation noise, they are prone to fail in estimating displacement between echo signals obtained during tissue distortions. This study proposes a novel elastography technique which addresses decorrelation in estimating the displacement field. We call our method GLUENet (GLobal Ultrasound Elastography Network); it uses a deep Convolutional Neural Network (CNN) to obtain a coarse time-delay estimate between two ultrasound images. This displacement is then used to formulate a nonlinear cost function which incorporates the similarity of RF data intensity and prior information about the estimated displacement. By optimizing this cost function, we calculate a finer displacement by exploiting the information of all the samples of the RF data simultaneously. The Contrast to Noise Ratio (CNR) and Signal to Noise Ratio (SNR) of the strain images produced by our technique are very close to those of strain images from GLUE. While most elastography algorithms are sensitive to parameter tuning, our algorithm is substantially less sensitive to parameter tuning.