Are Fibonacci Heaps Optimal?
In this paper we investigate the inherent complexity of the priority queue abstract data type. We show that, under reasonable assumptions, there exist sequences of n Insert, n Delete, m DecreaseKey and t FindMin operations, where 1 ≤ t ≤ n, which have Ω(n log t + n + m) complexity. Although Fibonacci heaps do not achieve this bound, we present a modified Fibonacci heap which does, and so is optimal under our assumptions.
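The operation mix counted by the lower bound is easy to picture in code. The sketch below is only an illustration of that interface, using Python's heapq with lazy deletion; it is not the paper's modified Fibonacci heap (in particular, DecreaseKey here costs O(log n), not the O(1) amortized of Fibonacci heaps).

```python
import heapq
import itertools

class LazyHeapPQ:
    """Illustrative priority queue supporting the four operations the
    bound counts: Insert, Delete, DecreaseKey, FindMin. Built on heapq
    with lazy deletion; NOT the paper's modified Fibonacci heap."""

    def __init__(self):
        self._heap = []                  # [key, tiebreak, item] entries
        self._entry = {}                 # item -> its live heap entry
        self._counter = itertools.count()

    def insert(self, item, key):
        entry = [key, next(self._counter), item]
        self._entry[item] = entry
        heapq.heappush(self._heap, entry)

    def delete(self, item):
        # Mark the entry dead; it is discarded when it reaches the top.
        self._entry.pop(item)[-1] = None

    def decrease_key(self, item, new_key):
        # Kill the old entry and re-insert with the smaller key.
        self.delete(item)
        self.insert(item, new_key)

    def find_min(self):
        # Pop dead entries until a live minimum surfaces.
        while self._heap and self._heap[0][-1] is None:
            heapq.heappop(self._heap)
        return self._heap[0][2] if self._heap else None
```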
An Efficient Algorithm for the Shortest Path Problem
A hybrid algorithm for the shortest path problem is presented. Its time complexity is O(n + m) when the graph is acyclic and O(n log c + m) in general, where c < n. The main improvement comes from a modification of Fibonacci heaps, in which the amortized time complexity of n Delete and c' FindMin operations is O(n log c).
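The acyclic O(n + m) case the abstract cites is the standard relax-in-topological-order routine, sketched below; the hybrid machinery for general graphs and the modified Fibonacci heap are not reproduced here.

```python
from collections import deque

def dag_shortest_paths(n, adj, source):
    """Single-source shortest paths on a DAG in O(n + m): relax edges
    in topological order, so each edge is examined exactly once and
    dist[u] is final before u's edges are relaxed.
    adj: list of (v, weight) lists, indexed by vertex 0..n-1."""
    indeg = [0] * n
    for u in range(n):
        for v, _ in adj[u]:
            indeg[v] += 1

    dist = [float('inf')] * n
    dist[source] = 0
    queue = deque(u for u in range(n) if indeg[u] == 0)
    while queue:
        u = queue.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w   # relax; dist[u] is already final
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return dist
```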
An Efficient Parallel Heap Compaction Algorithm
We propose a heap compaction algorithm appropriate for modern computing environments. Our algorithm is targeted at SMP platforms. It demonstrates high scalability when running in parallel but is also extremely efficient when running single-threaded on a uniprocessor. Instead of using the standard forwarding pointer mechanism for updating pointers to moved objects, the algorithm saves information for a pack of objects. It then does a small computation to process this information and determine each object's new location. In addition, using a smart parallel moving strategy, the algorithm achieves (almost) perfect compaction in the lower addresses of the heap, whereas previous algorithms achieved parallelism by compacting within several predetermined segments. Next, we investigate a method that trades compaction quality for a further reduction in time and space overhead. Finally, we propose a modern version of the two-finger compaction algorithm. This algorithm fails, thus re-validating traditional wisdom asserting that retaining the order of live objects significantly improves the quality of the compaction. The parallel compaction algorithm was implemented on the IBM production Java Virtual Machine. We provide measurements demonstrating high efficiency and scalability. Subsequently, this algorithm has been incorporated into the IBM production JVM.
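A rough sketch of the pack idea as described: one relocation word per fixed-size pack instead of a forwarding pointer per object, with each object's new address recomputed on demand. This is an assumed reading of the abstract, not IBM's implementation; the function name, pack size, and data layout are illustrative only.

```python
def plan_compaction(heap_size, objects, pack_size=256):
    """Pack-based relocation sketch (assumed interpretation of the
    abstract): rather than storing a forwarding pointer per object,
    keep one offset word per fixed-size pack and derive each object's
    target address from it on demand.
    objects: list of (addr, size) pairs for live objects, sorted by addr."""
    n_packs = (heap_size + pack_size - 1) // pack_size
    live_in_pack = [0] * n_packs
    for addr, size in objects:          # assume objects don't cross packs
        live_in_pack[addr // pack_size] += size

    # One word per pack: total live bytes in all preceding packs.
    pack_offset = [0] * n_packs
    for p in range(1, n_packs):
        pack_offset[p] = pack_offset[p - 1] + live_in_pack[p - 1]

    def new_address(addr):
        # Target = live bytes before this pack + live bytes before the
        # object inside its pack (a real collector would get the latter
        # from a mark-bitmap popcount; a linear scan keeps this short).
        p = addr // pack_size
        within = sum(s for a, s in objects
                     if a // pack_size == p and a < addr)
        return pack_offset[p] + within

    return new_address
```

Because every live object slides to the total live size preceding it, the plan yields perfect compaction at the low end of the heap, matching the behavior the abstract claims for the full parallel algorithm.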
Partial Solution and Entropy
If the given problem instance is partially solved, we want to minimize our effort to solve the problem using that information. In this paper we introduce the measure of entropy H(S) for uncertainty in partially solved input data S(X) = (X_1, ..., X_k), where X is the entire data set, and each X_i is already solved. We use the entropy measure to analyze three example problems: sorting, shortest paths and minimum spanning trees. For sorting, X_i is an ascending run, and for shortest paths, X_i is an acyclic part of the given graph. For minimum spanning trees, X_i is interpreted as a partially obtained minimum spanning tree for a subgraph. The entropy measure H(S) is defined by regarding p_i = |X_i|/|X| as a probability measure, that is, H(S) = −n Σ_{i=1}^{k} p_i log p_i, where n = Σ_{i=1}^{k} |X_i|. Then we show that we can sort the input data S(X) in O(H(S)) time, and solve the shortest path problem in O(m + H(S)) time, where m is the number of edges of the graph. Finally we show that the minimum spanning tree is computed in O(m + H(S)) time.
Keywords: entropy, complexity, adaptive sort, minimal mergesort, ascending runs, shortest paths, nearly acyclic graphs, minimum spanning trees
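As a concrete reading of the sorting result: once the input is split into ascending runs X_1, ..., X_k, always merging the two shortest runs first (Huffman order) moves each element of X_i through roughly log(n/|X_i|) merges, for O(n + H(S)) total work. The sketch below assumes this run-merging interpretation of "minimal mergesort"; it is not taken from the paper.

```python
import heapq
from itertools import count

def minimal_mergesort(data):
    """Adaptive sort sketch: decompose into maximal ascending runs,
    then repeatedly merge the two shortest runs (Huffman order), so
    nearly sorted inputs (few, long runs) cost close to O(n)."""
    if not data:
        return []
    # 1. Decompose the input into maximal ascending runs.
    runs, run = [], [data[0]]
    for x in data[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)

    # 2. Huffman-order merging: always merge the two shortest runs.
    tie = count()                        # tie-breaker; lists don't compare
    heap = [(len(r), next(tie), r) for r in runs]
    heapq.heapify(heap)
    while len(heap) > 1:
        _, _, a = heapq.heappop(heap)
        _, _, b = heapq.heappop(heap)
        merged, i, j = [], 0, 0
        while i < len(a) and j < len(b):     # standard two-way merge
            if a[i] <= b[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(b[j]); j += 1
        merged += a[i:] or b[j:]
        heapq.heappush(heap, (len(merged), next(tie), merged))
    return heap[0][2]
```

For example, minimal_mergesort([1, 2, 5, 3, 4, 0, 6]) finds the runs [1, 2, 5], [3, 4], [0, 6], merges the two shortest into [0, 3, 4, 6], and finishes with one more merge; a fully sorted input is a single run and costs no merges at all.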
