Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class
We develop a framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to the class for which they were designed (a scenario we expect to be common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to edit a given graph via vertex or edge deletions to put it into an algorithmically tractable class, apply known approximation algorithms for that class, and then lift the solution back to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, (l-)Dominating Set, Edge (l-)Dominating Set, and Connected Dominating Set.
To enable this framework, we develop new editing algorithms that find approximately the fewest edits required to bring a given graph into one of several important graph classes (in some cases these are bicriteria algorithms that simultaneously approximate both the number of editing operations and the target parameter of the family). For bounded degeneracy, we obtain an O(r log n)-approximation and a bicriteria (4,4)-approximation, which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w}))-approximation, and for bounded pathwidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w} * log n))-approximation. For treedepth 2 (related to bounded expansion), we obtain a 4-approximation. We also prove complementary hardness-of-approximation results assuming P != NP: in particular, these problems are all log-factor inapproximable, except the last, which cannot be approximated within a factor below 2 (assuming UGC).
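As an illustration of the edit-solve-lift pipeline, the sketch below runs it for Vertex Cover with forests as the target class. The editing step is a naive highest-degree-deletion heuristic, not one of the paper's approximation algorithms, and all names are illustrative; for Vertex Cover, the lifting step can simply add every deleted vertex to the cover.

```python
# Illustrative edit -> solve -> lift pipeline for Vertex Cover.
# Target class: forests, where Vertex Cover is solvable exactly
# via the standard leaf rule. Editing heuristic is naive, not the
# paper's approximation algorithm.

def has_cycle(adj):
    """Detect a cycle in an undirected graph {v: set(neighbors)}."""
    seen = set()
    for s in adj:
        if s in seen:
            continue
        stack = [(s, None)]
        while stack:
            v, parent = stack.pop()
            if v in seen:
                return True          # reached a vertex twice: cycle
            seen.add(v)
            stack.extend((w, v) for w in adj[v] if w != parent)
    return False

def delete_to_forest(adj):
    """Greedily delete highest-degree vertices until acyclic."""
    adj = {v: set(ns) for v, ns in adj.items()}
    deleted = set()
    while has_cycle(adj):
        v = max(adj, key=lambda u: len(adj[u]))
        for w in adj.pop(v):
            adj[w].discard(v)
        deleted.add(v)
    return deleted, adj

def forest_vertex_cover(adj):
    """Optimal Vertex Cover on a forest via the leaf rule."""
    adj = {v: set(ns) for v, ns in adj.items()}
    cover = set()
    while True:
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is None:
            return cover             # only isolated vertices remain
        u = adj[leaf].pop()          # a leaf's neighbor is always safe
        cover.add(u)
        for w in adj.pop(u):
            adj[w].discard(u)

def structural_rounding_vc(adj):
    deleted, forest = delete_to_forest(adj)
    return forest_vertex_cover(forest) | deleted  # lift edits back in

# Triangle a-b-c with pendants d-a and e-b.
g = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c', 'e'},
     'c': {'a', 'b'}, 'd': {'a'}, 'e': {'b'}}
cover = structural_rounding_vc(g)
assert all(u in cover or v in cover for u in g for v in g[u])
```

Adding every deleted vertex back into the solution preserves feasibility for Vertex Cover, which is roughly the kind of stability-under-editing property the general characterization formalizes.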
Fine-grained I/O Complexity via Reductions: New Lower Bounds, Faster Algorithms, and a Time Hierarchy
This paper initiates the study of I/O algorithms (minimizing cache misses) from the perspective of fine-grained complexity (conditional polynomial lower bounds). Specifically, we aim to answer why sparse graph problems are so hard, and why the Longest Common Subsequence problem gets a savings of a factor of the size of the cache times the length of a cache line, but no more. We take reductions and techniques from complexity and fine-grained complexity and apply them to the I/O model to generate new (conditional) lower bounds as well as new faster algorithms. We also prove the existence of a time hierarchy for the I/O model, which motivates the fine-grained reductions.
- Using fine-grained reductions, we give an algorithm for distinguishing diameter (and radius) 2 vs. 3 that incurs O(|E|^2/(MB)) cache misses, which for sparse graphs improves over the previous O(|V|^2/B) bound.
- We give new reductions from radius and diameter to Wiener index and median. These reductions are new in both the RAM and I/O models.
- We show meaningful reductions between problems that have linear-time solutions in the RAM model. The reductions use low I/O complexity (typically O(n/B)), and thus help to finely distinguish between "I/O linear time" O(n/B) and RAM linear time O(n).
- We generate new I/O assumptions based on the difficulty of improving sparse graph problem running times in the I/O model. We create conjectures that the current best known algorithms for Single Source Shortest Paths (SSSP), diameter, and radius are optimal.
- From these I/O-model assumptions, we show that many of the known reductions in the word-RAM model can naturally extend to hold in the I/O model as well (e.g., a lower bound on the I/O complexity of Longest Common Subsequence that matches the best known running time).
- We prove an analog of the Time Hierarchy Theorem in the I/O model, further motivating the study of fine-grained algorithmic differences.
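The gap between RAM linear time O(n) and "I/O linear time" O(n/B) can be made concrete with a small LRU cache simulator. The sketch below is illustrative, not from the paper: it counts one miss per touched cache line of B words under a cache of M words, and contrasts a sequential scan (~n/B misses) with scattered accesses (~n misses).

```python
# Minimal LRU cache-miss counter for the I/O model: memory is split
# into lines of B words, the cache holds M words (M/B lines), and
# touching a word whose line is not cached costs one miss.
from collections import OrderedDict
import random

def count_misses(accesses, B, M):
    cache = OrderedDict()            # line id -> None, in LRU order
    capacity = M // B
    misses = 0
    for addr in accesses:
        line = addr // B
        if line in cache:
            cache.move_to_end(line)  # refresh recency
        else:
            misses += 1
            cache[line] = None
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return misses

n, B, M = 10_000, 64, 1_024
scan_misses = count_misses(range(n), B, M)      # sequential scan
shuffled = list(range(n))
random.Random(0).shuffle(shuffled)              # scattered accesses
scattered_misses = count_misses(shuffled, B, M)
assert scan_misses == (n + B - 1) // B          # "I/O linear time": ~n/B
assert scattered_misses > 10 * scan_misses      # poor locality: ~n misses
```

Both access sequences touch the same n words, so they are indistinguishable under RAM linear time; only the I/O model separates them.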
A Note on Improved Results for One Round Distributed Clique Listing
In this note, we investigate listing cliques of arbitrary sizes in bandwidth-limited, dynamic networks. The problem of detecting and listing triangles and cliques was originally studied in great detail by Bonne and Censor-Hillel (ICALP 2019). We extend this study to dynamic graphs where more than one update may occur, and we resolve an open question posed by Bonne and Censor-Hillel (2019). Our algorithms and results are based on some simple observations about listing triangles in various settings, and we show that larger cliques can be listed using these facts. Specifically, we show that our techniques solve an open problem posed in the original paper: detecting and listing cliques (of any size) can be done using -bandwidth after one round of communication under node insertions and node/edge deletions. We conclude with an extension of our techniques that obtains a small-bandwidth -round algorithm for listing cliques when more than one node insertion/deletion and/or edge deletion update occurs at any time.
Comment: To appear in IP
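The one-round observation for triangles can be illustrated with a toy, non-distributed simulation (function names and the simulation itself are illustrative, not the paper's model): on a node insertion, the new node announces its neighbor list, and each recipient intersects that list with its own adjacency to report the new triangles.

```python
# Toy simulation of one-round triangle listing after a node insertion:
# the inserted node broadcasts its neighbor set, and every neighbor
# intersects the message with its own adjacency. Illustrative only;
# bandwidth limits are not modeled.
def triangles_after_insertion(adj, new_node, new_edges):
    """adj: {v: set(neighbors)} before the update; new_node joins
    with edges to each vertex in new_edges."""
    message = set(new_edges)                 # the single broadcast
    triangles = set()
    for w in new_edges:                      # each neighbor reacts
        for x in adj[w] & message:
            triangles.add(frozenset({new_node, w, x}))
    return triangles

# Graph a-b plus isolated c; node d arrives adjacent to a, b, and c.
adj = {'a': {'b'}, 'b': {'a'}, 'c': set()}
tris = triangles_after_insertion(adj, 'd', {'a', 'b', 'c'})
assert tris == {frozenset({'d', 'a', 'b'})}
```

Each triangle through the new node is witnessed by an existing edge between two announced neighbors, which is why a single round of communication suffices in this toy setting.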
Oscillation-specific nodal alterations in early- to middle-stage Parkinson's disease.
Background: Different oscillations of brain networks may carry different dimensions of brain integration. We aimed to investigate oscillation-specific nodal alterations in patients with Parkinson's disease (PD) from the early to the middle stage using graph theory-based analysis. Methods: Eighty-eight PD patients, comprising 39 in the early stage (EPD) and 49 in the middle stage (MPD), and 36 controls were recruited. Graph theory-based network analyses were performed in three oscillation frequency bands (slow-5: 0.01-0.027 Hz; slow-4: 0.027-0.073 Hz; slow-3: 0.073-0.198 Hz). Nodal metrics (e.g., nodal degree centrality, betweenness centrality, and nodal efficiency) were calculated. Results: (1) Oscillation frequency had a divergent effect on nodal metrics, especially nodal degree centrality and nodal efficiency: the anteroventral neocortex and subcortex showed high nodal metrics in the low-frequency bands, whereas the posterolateral neocortex showed high values in the relatively high-frequency band, indicating that the network was perturbed in PD. (2) EPD patients showed relatively preserved nodal properties, while MPD patients showed widespread abnormalities; this pattern was consistently detected in all three frequency bands. (3) Involvement of the basal ganglia was specifically observed in the slow-5 band in MPD patients. (4) Logistic regression and receiver operating characteristic curve analyses demonstrated that some of these oscillation-specific nodal alterations discriminated well between PD patients and controls, and between MPD and EPD patients, at the individual level. (5) Occipital disruption in the high-frequency band (slow-3) significantly influenced motor impairment dominated by akinesia and rigidity.
Conclusions: Coupling various oscillations could provide potentially useful information about large-scale networks, and progressive oscillation-specific nodal alterations were observed in PD patients from the early to the middle stage.
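For concreteness, two of the nodal metrics named in the Methods can be sketched in a few lines on a toy unweighted graph. This is an illustrative sketch only; the study computes these metrics on frequency-band-specific brain networks.

```python
# Toy computation of nodal degree centrality and nodal efficiency
# (mean inverse shortest-path length to all other nodes) on a small
# unweighted graph. Pure-Python sketch, not the study's pipeline.
from collections import deque

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def degree_centrality(adj, v):
    return len(adj[v]) / (len(adj) - 1)

def nodal_efficiency(adj, v):
    dist = bfs_distances(adj, v)
    others = [u for u in adj if u != v]
    # unreachable nodes contribute 0 (infinite distance)
    return sum(1.0 / dist[u] for u in others if u in dist) / len(others)

# Path graph a - b - c: the middle node is the most central.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert degree_centrality(path, 'b') == 1.0
assert nodal_efficiency(path, 'b') == 1.0    # (1 + 1) / 2
assert nodal_efficiency(path, 'a') == 0.75   # (1 + 1/2) / 2
```

High nodal efficiency marks a node that reaches the rest of the network over short paths, which is why such metrics are sensitive to the regional disruptions the study reports.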
The Predicted-Deletion Dynamic Model: Taking Advantage of ML Predictions, for Free
The main bottleneck in designing efficient dynamic algorithms is the unknown
nature of the update sequence. In particular, there are some problems, like
3-vertex connectivity, planar digraph all pairs shortest paths, and others,
where the separation in runtime between the best partially dynamic solutions
and the best fully dynamic solutions is polynomial, sometimes even exponential.
In this paper, we formulate the predicted-deletion dynamic model, motivated
by a recent line of empirical work about predicting edge updates in dynamic
graphs. In this model, edges are inserted and deleted online, and when an edge
is inserted, it is accompanied by a "prediction" of its deletion time. This
models real world settings where services may have access to historical data or
other information about an input and can subsequently use such information to make
predictions about user behavior. The model is also of theoretical interest, as
it interpolates between the partially dynamic and fully dynamic settings, and
provides a natural extension of the algorithms with predictions paradigm to the
dynamic setting.
We give a novel framework for this model that "lifts" partially dynamic
algorithms into the fully dynamic setting with little overhead. We use our
framework to obtain improved efficiency bounds over the state-of-the-art
dynamic algorithms for a variety of problems. In particular, we design
algorithms that have amortized update time that scales with a partially dynamic
algorithm, with high probability, when the predictions are of high quality. On
the flip side, our algorithms do no worse than existing fully-dynamic
algorithms when the predictions are of low quality. Furthermore, our algorithms
exhibit a graceful trade-off between the two cases. Thus, we are able to take
advantage of ML predictions asymptotically "for free."
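A minimal sketch of the model's interface, under assumed names (the class, its methods, and the max-error measure are illustrative, not the paper's definitions): each insertion carries a predicted deletion time, and an algorithm's guarantees can degrade gracefully with the gap between predicted and actual deletion times.

```python
# Minimal sketch of the predicted-deletion interface: every edge
# insertion carries a predicted deletion time, and we track how far
# actual deletions land from their predictions. Names illustrative.
class PredictedDeletionGraph:
    def __init__(self):
        self.predicted = {}      # edge -> predicted deletion time
        self.errors = []         # |actual - predicted| per deletion

    def insert(self, edge, predicted_deletion_time):
        self.predicted[edge] = predicted_deletion_time

    def delete(self, edge, actual_time):
        self.errors.append(abs(actual_time - self.predicted.pop(edge)))

    def max_prediction_error(self):
        # 0 means perfect predictions: deletions could then be
        # processed as in a partially dynamic (decremental) algorithm
        # with the deletion order known in advance.
        return max(self.errors, default=0)

g = PredictedDeletionGraph()
g.insert(('u', 'v'), predicted_deletion_time=10)
g.insert(('v', 'w'), predicted_deletion_time=4)
g.delete(('u', 'v'), actual_time=10)   # perfect prediction
g.delete(('v', 'w'), actual_time=7)    # off by 3
assert g.max_prediction_error() == 3
```

When the error is zero the update sequence is effectively offline, matching the partially dynamic end of the interpolation; large errors push the instance toward the fully dynamic end.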
