Analytical Cost Metrics: Days of Future Past
As we move toward the exascale era, new architectures must be capable of
running massive computational problems efficiently. Scientists and
researchers continuously invest in tuning the performance of extreme-scale
computational problems. These problems arise in almost all areas of
computing, from big data analytics, artificial intelligence, search,
machine learning, virtual/augmented reality, computer vision, and
image/signal processing to computational science and bioinformatics. With
Moore's law driving the evolution of hardware platforms toward exascale,
the dominant performance metric (time efficiency) has expanded to also
incorporate power/energy efficiency. The major challenge we face in
computing systems research is therefore: "how do we solve massive-scale
computational problems in the most time/power/energy-efficient manner?"
Architectures are constantly evolving, making current performance-optimization
strategies less applicable and requiring new strategies to be invented. The
solution is for new architectures, new programming models, and applications
to advance together. Doing this is, however, extremely hard: there are too
many design choices in too many dimensions. We propose the following strategy
to solve the problem: (i) Models - develop accurate analytical models (e.g.,
execution time, energy, silicon area) to predict the cost of executing a given
program, and (ii) Complete System Design - simultaneously optimize all the cost
models for the programs (computational problems) to obtain the most
time/area/power/energy-efficient solution. Such an optimization problem evokes
the notion of codesign.
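The kind of analytical cost model described in (i), and the codesign-style optimization in (ii), can be illustrated with a minimal sketch. All hardware parameters, energy coefficients, and candidate configurations below are hypothetical, invented for illustration; the models are a roofline-style execution-time estimate and an additive energy estimate, with the "design" step picking the configuration that minimizes energy under a time budget:

```python
# Minimal sketch of analytical cost models plus a codesign-style search.
# All hardware parameters and candidate configurations are hypothetical.

def exec_time(flops, bytes_moved, peak_flops, bandwidth):
    """Roofline-style estimate: the kernel is bound either by compute
    throughput or by memory traffic, whichever takes longer."""
    return max(flops / peak_flops, bytes_moved / bandwidth)

def energy(flops, bytes_moved, joules_per_flop, joules_per_byte):
    """Simple additive energy model: arithmetic cost plus data movement."""
    return flops * joules_per_flop + bytes_moved * joules_per_byte

# Hypothetical kernel: 1e12 floating-point ops moving 4e11 bytes.
FLOPS, BYTES = 1e12, 4e11

# Hypothetical design points: (peak flop/s, bytes/s, J/flop, J/byte).
configs = {
    "wide_simd":   (2e12, 1e11, 1e-11, 2e-10),
    "low_power":   (5e11, 8e10, 4e-12, 1e-10),
    "hbm_variant": (1e12, 4e11, 1e-11, 1.5e-10),
}

TIME_BUDGET = 5.0  # seconds

# Codesign step: among configurations meeting the time budget,
# pick the most energy-efficient one.
feasible = {
    name: energy(FLOPS, BYTES, jf, jb)
    for name, (pf, bw, jf, jb) in configs.items()
    if exec_time(FLOPS, BYTES, pf, bw) <= TIME_BUDGET
}
best = min(feasible, key=feasible.get)
print(best, feasible[best])  # low_power is cheapest within budget
```

In a real codesign setting the search space is vastly larger and the models are calibrated against measurements, but the structure — cost models evaluated jointly over programs and design points — is the same.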
Computing the Girth of a Planar Graph in Linear Time
The girth of a graph is the minimum weight of all simple cycles of the graph.
We study the problem of determining the girth of an n-node unweighted
undirected planar graph. The first non-trivial algorithm for the problem, given
by Djidjev, runs in O(n^{5/4} log n) time. Chalermsook, Fakcharoenphol, and
Nanongkai reduced the running time to O(n log^2 n). Weimann and Yuster further
reduced the running time to O(n log n). In this paper, we solve the problem in
O(n) time.
Comment: 20 pages, 7 figures, accepted to SIAM Journal on Computing
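For context, the classical baseline on general unweighted graphs is the O(nm) algorithm that runs a BFS from every vertex; each non-tree edge closes a cycle through the BFS tree. This is only the textbook baseline, not the paper's O(n) planar-graph algorithm, which is far more involved. A minimal sketch:

```python
# Classical O(nm) girth algorithm for simple unweighted undirected graphs:
# BFS from every vertex; each non-tree edge (u, w) closes a cycle through
# the BFS tree of length at most dist[u] + dist[w] + 1, and the minimum
# over all starts and edges is exactly the girth.
# (Textbook baseline only, not the paper's O(n) planar algorithm.)
from collections import deque

def girth(adj):
    """adj: dict mapping vertex -> list of neighbors. Returns the length
    of a shortest cycle, or None if the graph is acyclic."""
    best = None
    for s in adj:
        dist = {s: 0}
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    q.append(w)
                elif w != parent[u]:
                    # Non-tree edge: the two BFS tree paths plus (u, w)
                    # contain a simple cycle of at most this length.
                    cand = dist[u] + dist[w] + 1
                    if best is None or cand < best:
                        best = cand
    return best

five_cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(girth(five_cycle))  # 5
```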
Faster Separators for Shallow Minor-Free Graphs via Dynamic Approximate Distance Oracles
Plotkin, Rao, and Smith (SODA'94) showed that any graph with m edges and n
vertices that excludes K_h as a depth O(\ell\log n)-minor has a separator of
size O(n/\ell + \ell h^2\log n) and that such a separator can be found in
O(mn/\ell) time. A time bound of O(m + n^{2+\epsilon}/\ell) for any constant
\epsilon > 0 was later given (W., FOCS'11), which is an improvement for
non-sparse graphs. We give three new algorithms. The first has the same
separator size and running time O(\mbox{poly}(h)\ell m^{1+\epsilon}). This is
a significant improvement for small h and \ell. If \ell = \Omega(n^{\epsilon'})
for an arbitrarily small chosen constant \epsilon' > 0, we get a time bound of
O(\mbox{poly}(h)\ell n^{1+\epsilon}). The second algorithm achieves the same
separator size (with a slightly larger polynomial dependency on h) and running
time O(\mbox{poly}(h)(\sqrt\ell n^{1+\epsilon} + n^{2+\epsilon}/\ell^{3/2}))
when \ell = \Omega(n^{\epsilon'}). Our third algorithm has running time
O(\mbox{poly}(h)\sqrt\ell n^{1+\epsilon}) when \ell = \Omega(n^{\epsilon'}).
It finds a separator of size O(n/\ell) + \tilde O(\mbox{poly}(h)\ell\sqrt n),
which is no worse than previous bounds when h is fixed and \ell is small.
A main tool in obtaining our results is a novel application of a decremental
approximate distance oracle of Roditty and Zwick.
Comment: 16 pages. Full version of the paper that appeared at ICALP'14. Minor
fixes regarding the time bounds such that these bounds hold also for
non-sparse graphs
Graph-based linear scaling electronic structure theory
We show how graph theory can be combined with quantum theory to calculate the
electronic structure of large complex systems. The graph formalism is general
and applicable to a broad range of electronic structure methods and materials,
including challenging systems such as biomolecules. The methodology combines
well-controlled accuracy, low computational cost, and natural low-communication
parallelism. This combination addresses substantial shortcomings of linear
scaling electronic structure theory, in particular with respect to
quantum-based molecular dynamics simulations.
Comment: 17 pages, 5 figures
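The graph view of locality that underlies such methods can be illustrated with a toy sketch. This is not the paper's algorithm; the matrix, threshold, and partitioning scheme below are invented for illustration. The idea shown: threshold a sparse interaction matrix into an adjacency structure, then split it into connected components that can be treated as independent, naturally parallel subproblems.

```python
# Toy illustration of graph-based locality (not the paper's method):
# threshold a symmetric interaction matrix into a graph, then find
# connected components, which become independent subproblems.

def build_graph(matrix, threshold):
    """Adjacency list keeping only entries with magnitude above threshold."""
    n = len(matrix)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if abs(matrix[i][j]) > threshold:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def connected_components(adj):
    """Iterative depth-first search over the thresholded graph."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], []
        seen.add(start)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(sorted(comp))
    return comps

# Hypothetical 5x5 interaction matrix with two weakly coupled blocks;
# the 1e-9 couplings fall below the threshold and are dropped.
H = [
    [1.0, 0.5,  0.0, 0.0,  0.0],
    [0.5, 1.0,  0.4, 0.0,  1e-9],
    [0.0, 0.4,  1.0, 0.0,  0.0],
    [0.0, 0.0,  0.0, 1.0,  0.6],
    [0.0, 1e-9, 0.0, 0.6,  1.0],
]

adj = build_graph(H, threshold=1e-6)
print(connected_components(adj))  # [[0, 1, 2], [3, 4]]
```

The accuracy/cost trade-off in real linear-scaling methods is controlled by how aggressively such small couplings are truncated.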
β-Stars or On Extending a Drawing of a Connected Subgraph
We consider the problem of extending the drawing of a subgraph of a given
plane graph to a drawing of the entire graph using straight-line and polyline
edges. We define the notion of star complexity of a polygon and show that a
drawing Γ_H of an induced connected subgraph H can be extended with at most
min(h/2, β + log_2(h) + 1) bends per edge, where β is the largest star
complexity of a face of Γ_H and h is the size of the largest face of H. This
result significantly improves the previously known upper bound [5] for the
case where H is connected. We also show that our bound is worst case optimal
up to a small additive constant. Additionally, we provide an indication of
complexity of the problem of testing whether a star-shaped inner face can be
extended to a straight-line drawing of the graph; this is in contrast to the
fact that the same problem is solvable in linear time for the case of a
star-shaped outer face [9] and a convex inner face [13].
Comment: Appears in the Proceedings of the 26th International Symposium on
Graph Drawing and Network Visualization (GD 2018)
Convex Hull of Points Lying on Lines in o(n log n) Time after Preprocessing
Motivated by the desire to cope with data imprecision, we study methods for
taking advantage of preliminary information about point sets in order to speed
up the computation of certain structures associated with them.
In particular, we study the following problem: given a set L of n lines in
the plane, we wish to preprocess L such that later, upon receiving a set P of n
points, each of which lies on a distinct line of L, we can construct the convex
hull of P efficiently. We show that in quadratic time and space it is possible
to construct a data structure on L that enables us to compute the convex hull
of any such point set P in O(n alpha(n) log* n) expected time. If we further
assume that the points are "oblivious" with respect to the data structure, the
running time improves to O(n alpha(n)). The analysis applies almost verbatim
when L is a set of line-segments, and yields similar asymptotic bounds. We
present several extensions, including a trade-off between space and query time
and an output-sensitive algorithm. We also study the "dual problem" where we
show how to efficiently compute the (<= k)-level of n lines in the plane, each
of which lies on a distinct point (given in advance).
We complement our results by Omega(n log n) lower bounds under the algebraic
computation tree model for several related problems, including sorting a set of
points (according to, say, their x-order), each of which lies on a given line
known in advance. Therefore, the convex hull problem under our setting is
easier than sorting, contrary to the "standard" convex hull and sorting
problems, in which the two problems require Theta(n log n) steps in the worst
case (under the algebraic computation tree model).
Comment: 26 pages, 5 figures, 1 appendix; a preliminary version appeared at
SoCG 201
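For contrast, the Theta(n log n) baseline that the preprocessing-based result circumvents is the standard sorting-based convex hull. A minimal sketch (Andrew's monotone chain, a textbook algorithm, not the paper's method):

```python
# Standard sorting-based convex hull (Andrew's monotone chain), the
# Theta(n log n) baseline; shown for contrast with the preprocessing-based
# approach in the abstract, which beats this bound when each point is
# known in advance to lie on a given line.

def cross(o, a, b):
    """Cross product of vectors OA and OB; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counterclockwise order."""
    pts = sorted(set(points))          # the O(n log n) sorting step
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Endpoints are shared between the two chains, so drop one from each.
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The paper's point is precisely that, with the lines preprocessed, the hull of the actual points can be recovered in O(n alpha(n) log* n) time, i.e., without paying for this sort.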
