
    Disease dynamics across political borders : the case of rabies in Israel and the surrounding countries

    An eco-historical analysis facilitated the identification of the socio-political, demographic and environmental changes that have affected the distribution and abundance of vertebrates living in Israeli and Palestinian territories, their pathogens, and the extent of human-animal contacts, all of which contribute to the risk of rabies, which caused three deaths in the late 1990s. There are indications that the implementation of uncoordinated control strategies lacking an ecological perspective on one side of the border, such as the destruction of the main reservoirs, led to the emergence of a more potent reservoir from the other side and to the creation of an additional one yet to be identified. We analyze the lessons of these historical mistakes, aiming at future regional control of the disease.

    Exploiting the Oil-GDP Effect to Support Renewables Deployment

    The empirical evidence from a growing body of academic literature clearly suggests that oil price increases and volatility dampen macroeconomic growth by raising inflation and unemployment and by depressing the value of financial and other assets. Surprisingly, this issue seems to have received little attention from energy policy makers. In percentage terms, the Oil-GDP effect is relatively small, producing losses on the order of 0.5% of GDP for a 10% oil price increase. In absolute terms, however, even a 10% oil price rise (and oil has risen at least 50% in the last year alone) produces GDP losses that, could they have been averted, would significantly offset the cost of increased RE deployment. While we focus on renewables, the GDP offset applies equally to energy efficiency, DSM, nuclear, and other non-fossil technologies. This paper draws on the empirical Oil-GDP literature, which we summarize, to show that by displacing gas and oil, renewable energy investments can help nations avoid costly macroeconomic losses produced by the Oil-GDP effect. We show that a 10% increase in RE share avoids GDP losses in the range of $29-$53 billion in the US and the EU ($49-$90 billion for the OECD). These avoided losses offset one-fifth of the RE investment needs projected by the EREC and half the OECD investment projected by a G-8 Task Force. For the US, the figures further suggest that each additional kW of renewables, on average, avoids $250-$450 in GDP losses, a figure that varies across technologies as a function of annual capacity factors. We approximate that the offset is worth $200/kW for wind and solar and $800/kW for geothermal and biomass (and probably nuclear). The societal valuation of non-fossil alternatives needs to reflect the avoided GDP losses, whose benefit is not fully captured by private investors.
    This said, we fully recognize that wealth created in this manner does not directly form a pool of public funds that is easily earmarked for renewables support. Finally, the Oil-GDP relationship has important implications for correctly estimating direct electricity generating costs for conventional and renewable alternatives and for developing more useful energy security and diversity concepts. We also address these issues.
    Keywords: oil price shocks, oil price volatility, Oil-GDP effects, renewable energy, RES-E targets, financial beta risk, funding renewables
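    The headline elasticity above can be made concrete with a back-of-envelope calculation. A minimal sketch, assuming a hypothetical $12,000B economy (the 0.5%-per-10% elasticity is from the abstract; the GDP figure is illustrative only):

```python
# Back-of-envelope Oil-GDP effect calculation. The elasticity (0.5% GDP
# loss per 10% oil price rise) is taken from the abstract above; the GDP
# figure is a hypothetical round number, not a sourced statistic.

def avoided_gdp_loss(gdp_billion, oil_price_rise_pct, elasticity=0.05):
    """GDP loss (in billions) for a given oil price rise.

    elasticity = 0.05 means a 10% price rise costs 0.5% of GDP.
    """
    return gdp_billion * (oil_price_rise_pct / 100.0) * elasticity

# Hypothetical $12,000B economy, 10% oil price increase:
loss = avoided_gdp_loss(12_000, 10)  # roughly $60 billion avoided if displaced
print(loss)
```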

    A Simple Deterministic Distributed MST Algorithm, with Near-Optimal Time and Message Complexities

    The distributed minimum spanning tree (MST) problem is one of the most central and fundamental problems in distributed graph algorithms. Garay et al. \cite{GKP98,KP98} devised an algorithm with running time $O(D + \sqrt{n} \cdot \log^* n)$, where $D$ is the hop-diameter of the input $n$-vertex, $m$-edge graph, and with message complexity $O(m + n^{3/2})$. Peleg and Rubinovich \cite{PR99} showed that the running time of the algorithm of \cite{KP98} is essentially tight, and asked if one can achieve near-optimal running time **together with near-optimal message complexity**. In a recent breakthrough, Pandurangan et al. \cite{PRS16} answered this question in the affirmative, devising a **randomized** algorithm with time $\tilde{O}(D + \sqrt{n})$ and message complexity $\tilde{O}(m)$. They asked if such simultaneous time- and message-optimality can be achieved by a **deterministic** algorithm. In this paper, building upon the work of \cite{PRS16}, we answer this question in the affirmative, and devise a **deterministic** algorithm that computes the MST in time $O((D + \sqrt{n}) \cdot \log n)$, using $O(m \cdot \log n + n \log n \cdot \log^* n)$ messages. The polylogarithmic factors in the time and message complexities of our algorithm are significantly smaller than the respective factors in the result of \cite{PRS16}. Also, our algorithm and its analysis are very **simple** and self-contained, as opposed to the rather complicated previous sublinear-time algorithms \cite{GKP98,KP98,E04b,PRS16}.
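    The fragment-merging idea underlying GHS-style distributed MST algorithms can be illustrated with a sequential Borůvka sketch. This is not the paper's algorithm, only the round structure such algorithms simulate: in each round, every fragment selects its minimum-weight outgoing edge and fragments merge along those edges.

```python
# Sequential Borůvka sketch of fragment merging. Each round, every
# fragment (tracked via union-find) picks its cheapest outgoing edge;
# fragments then merge along those edges. With distinct edge weights,
# the number of fragments at least halves per round, so O(log n) rounds
# suffice -- the same round structure distributed MST algorithms use.

def boruvka_mst(n, edges):
    """edges: list of (weight, u, v) with distinct weights.
    Returns the MST as a set of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = set()
    merged = True
    while merged:
        merged = False
        best = {}  # fragment root -> cheapest outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue  # edge is internal to a fragment
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, u, v)
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.add((w, u, v))
                merged = True
    return mst
```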

    Notched graphite polyimide composites at room and elevated temperatures

    The fracture behavior of graphite/polyimide (Gr/PI) Celion 6000/PMR-15 composites was characterized. Emphasis was placed on the correlation between the observed failure modes and the deformation characteristics of center-notched Gr/PI laminates. Crack tip damage growth, fracture strength and notch sensitivity, and the associated characterization methods were also examined. Special attention was given to nondestructive evaluation of internal damage and damage growth, using techniques such as acoustic emission, X-ray radiography, and ultrasonic C-scan. Microstructural studies using scanning electron microscopy, photomicrography, and the pulsed nuclear magnetic resonance technique were employed as well. All experimental procedures and techniques are described, and a summary of representative results for Gr/PI laminates is given.

    Efficient electricity generating portfolios for Europe: Maximising energy security and climate change mitigation

    This paper applies portfolio-theory optimisation concepts from the field of finance to produce an expository evaluation of the 2020 projected EU-BAU (business-as-usual) electricity generating mix. We locate optimal generating portfolios that reduce cost and market risk as well as CO2 emissions relative to the BAU mix. Optimal generating portfolios generally include greater shares of wind, nuclear, and other non-fossil technologies that often cost more on a standalone engineering basis, but overall costs and risks are reduced because of the portfolio diversification effect. They also enhance energy security. The benefit streams created by these optimal mixes warrant current investments of about €250-€500 billion. The analysis further suggests that the optimal 2020 generating mix is constrained by shortages of wind, especially offshore, and possibly nuclear power, so that even small incremental additions of these two technologies will provide sizeable cost and risk reductions.
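    The portfolio diversification effect invoked above can be sketched with standard two-asset mean-variance arithmetic. The generating costs, risks, and correlation below are made-up round numbers for illustration, not the paper's EU data:

```python
# Mean-variance view of a two-technology generating mix. Cost risk is
# modeled as a standard deviation; the portfolio variance formula shows
# why mixing weakly correlated technologies cuts risk even when one of
# them costs more on a standalone basis. All numbers are hypothetical.
import math

def portfolio(cost_a, cost_b, risk_a, risk_b, corr, share_a):
    """Expected cost and cost risk (std dev) of a two-technology mix."""
    sb = 1.0 - share_a
    cost = share_a * cost_a + sb * cost_b
    var = (share_a * risk_a) ** 2 + (sb * risk_b) ** 2 \
        + 2 * share_a * sb * risk_a * risk_b * corr
    return cost, math.sqrt(var)

# Hypothetical: gas at 60 €/MWh with high fuel-price risk, wind at
# 70 €/MWh with low risk, weak correlation between the two.
all_gas = portfolio(60, 70, 0.25, 0.05, 0.1, 1.0)
mix = portfolio(60, 70, 0.25, 0.05, 0.1, 0.7)
# The 30%-wind mix costs slightly more but carries much less risk.
```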

    RoBuSt: A Crash-Failure-Resistant Distributed Storage System

    In this work we present the first distributed storage system that is provably robust against crash failures issued by an adaptive adversary, i.e., for each batch of requests the adversary can decide, based on the entire system state, which servers will be unavailable for that batch. Despite up to $\gamma n^{1/\log\log n}$ crashed servers, with $\gamma>0$ constant and $n$ denoting the number of servers, our system can correctly process any batch of lookup and write requests (with at most a polylogarithmic number of requests issued at each non-crashed server) in at most a polylogarithmic number of communication rounds, with at most polylogarithmic time and work at each server and only a logarithmic storage overhead. Our system is based on previous work by Eikel and Scheideler (SPAA 2013), who presented IRIS, a distributed information system that is provably robust against the same kind of crash failures. However, IRIS is only able to serve lookup requests. Handling both lookup and write requests has turned out to require major changes in the design of IRIS.
    Comment: Revised full version.

    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch-parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well-known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves $O(\log^2 n)$ amortized time per edge insertion or deletion, and $O(\log n / \log\log n)$ time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where $\Delta$ is the average batch size of all deletions, our algorithm achieves $O(\log n \log(1 + n/\Delta))$ expected amortized work per edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity.
    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
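    The batched interface the abstract describes can be made concrete with a much weaker sequential baseline that supports insertions only (no deletions), via union-find. This is neither the HDT algorithm nor the paper's parallel algorithm; it only illustrates the batched-update, batched-query shape of the problem:

```python
# Sequential incremental-only connectivity with batched updates and
# queries, via union-find with path halving. Deletions -- the hard part
# that HDT and the batch-parallel algorithm above actually solve -- are
# deliberately unsupported here.

class BatchConnectivity:
    def __init__(self, n):
        self.parent = list(range(n))

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def insert_batch(self, edges):
        """Apply a batch of edge insertions."""
        for u, v in edges:
            ru, rv = self._find(u), self._find(v)
            if ru != rv:
                self.parent[ru] = rv

    def query_batch(self, pairs):
        """Answer a batch of connectivity queries."""
        return [self._find(u) == self._find(v) for u, v in pairs]
```

For example, after `insert_batch([(0, 1), (1, 2)])` on five vertices, `query_batch([(0, 2), (0, 4)])` returns `[True, False]`.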

    Adaptive Regret Minimization in Bounded-Memory Games

    Online learning algorithms that minimize regret provide strong guarantees in situations that involve repeatedly making decisions in an uncertain environment, e.g., a driver deciding what route to drive to work every day. While regret minimization has been extensively studied in repeated games, we study regret minimization for a richer class of games called bounded-memory games. In each round of a two-player bounded-memory-m game, both players simultaneously play an action, observe an outcome, and receive a reward. The reward may depend on the last m outcomes as well as the actions of the players in the current round. The standard notion of regret for repeated games is no longer suitable because actions and rewards can depend on the history of play. To account for this generality, we introduce the notion of k-adaptive regret, which compares the reward obtained by playing actions prescribed by the algorithm against a hypothetical k-adaptive adversary with the reward obtained by the best expert in hindsight against the same adversary. Roughly, a hypothetical k-adaptive adversary adapts her strategy to the defender's actions exactly as the real adversary would within each window of k rounds. Our definition is parametrized by a set of experts, which can include both fixed and adaptive defender strategies. We investigate the inherent complexity of and design algorithms for adaptive regret minimization in bounded-memory games of perfect and imperfect information. We prove a hardness result showing that, with imperfect information, any k-adaptive regret minimizing algorithm (with fixed strategies as experts) must be inefficient unless NP=RP, even when playing against an oblivious adversary. In contrast, for bounded-memory games of perfect and imperfect information we present approximate 0-adaptive regret minimization algorithms against an oblivious adversary running in time n^{O(1)}.
    Comment: Full version. GameSec 2013 (Invited Paper).
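    In the classical repeated-game setting that this paper generalizes, regret minimization over a fixed expert set is achieved by the multiplicative-weights (Hedge) algorithm. A minimal sketch of that standard baseline, not the paper's k-adaptive algorithm:

```python
# Classical Hedge / multiplicative-weights regret minimization against
# an oblivious reward sequence. Each round the learner plays a
# distribution over experts and reweights them exponentially by their
# rewards; expected reward then approaches that of the best expert in
# hindsight. This is the memoryless (m = 0) baseline, not the paper's
# k-adaptive algorithm for bounded-memory games.
import math

def hedge(reward_rows, eta=0.5):
    """reward_rows: per-round reward vectors (one entry per expert),
    rewards in [0, 1]. Returns the total expected reward of Hedge."""
    k = len(reward_rows[0])
    weights = [1.0] * k
    total = 0.0
    for rewards in reward_rows:
        z = sum(weights)
        probs = [w / z for w in weights]          # play this distribution
        total += sum(p * r for p, r in zip(probs, rewards))
        weights = [w * math.exp(eta * r)          # reward good experts
                   for w, r in zip(weights, rewards)]
    return total
```

With two experts where the first always earns reward 1 and the second always 0, Hedge's expected reward over 10 rounds already comes within a small additive regret of the best expert's total of 10.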

    Fast and Compact Distributed Verification and Self-Stabilization of a DFS Tree

    We present algorithms for distributed verification and silent stabilization of a DFS (depth-first search) spanning tree of a connected network. Computing and maintaining such a DFS tree is an important task, e.g., for constructing efficient routing schemes. Our algorithm improves upon previous work in various ways. Comparable previous work has space and time complexities of $O(n\log \Delta)$ bits per node and $O(nD)$ respectively, where $\Delta$ is the highest degree of a node, $n$ is the number of nodes and $D$ is the diameter of the network. In contrast, our algorithm has a space complexity of $O(\log n)$ bits per node, which is optimal for silent-stabilizing spanning trees, and runs in $O(n)$ time. In addition, our solution is modular since it utilizes the distributed verification algorithm as an independent subtask of the overall solution. It is possible to use the verification algorithm as a stand-alone task or as a subtask in another algorithm. To demonstrate the simplicity of constructing efficient DFS algorithms using the modular approach, we also present a (non-silent) self-stabilizing DFS token circulation algorithm for general networks based on our silent-stabilizing DFS tree. The complexities of this token circulation algorithm are comparable to the known ones.
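    The object being verified and stabilized, a DFS spanning tree stored as one parent pointer per node (an O(log n)-bit label per node), can be constructed centrally as follows. The distributed verification and stabilization themselves are not shown:

```python
# Centralized construction of a DFS spanning tree, represented as a
# parent pointer per node -- the compact per-node state that the
# distributed algorithm above verifies and stabilizes. The root points
# to itself.

def dfs_tree(adj, root=0):
    """adj: adjacency lists of a connected graph.
    Returns parent[] with parent[root] == root."""
    n = len(adj)
    parent = [None] * n
    parent[root] = root

    def visit(u):
        for v in adj[u]:
            if parent[v] is None:   # tree edge: first time v is reached
                parent[v] = u
                visit(v)

    visit(root)
    return parent
```

For deep graphs a real implementation would use an explicit stack instead of recursion; the recursive form is kept here for clarity.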

    Distributed Computing in the Asynchronous LOCAL model

    The LOCAL model is among the main models for studying locality in the framework of distributed network computing. This model is, however, subject to pertinent criticisms, including the facts that all nodes wake up simultaneously, perform in lock step, and are failure-free. We show that relaxing these hypotheses to some extent does not hurt local computing. In particular, we show that, for any construction task $T$ associated to a locally checkable labeling (LCL), if $T$ is solvable in $t$ rounds in the LOCAL model, then $T$ remains solvable in $O(t)$ rounds in the asynchronous LOCAL model. This improves the result by Castañeda et al. [SSS 2016], which was restricted to 3-coloring rings. More generally, the main contribution of this paper is to show that, perhaps surprisingly, asynchrony and failures in the computations do not restrict the power of the LOCAL model, as long as the communications remain synchronous and failure-free.