Local, Smooth, and Consistent Jacobi Set Simplification
The relation between two Morse functions defined on a common domain can be
studied in terms of their Jacobi set. The Jacobi set contains points in the
domain where the gradients of the functions are aligned. Both the Jacobi set
itself and the segmentation of the domain it induces have been shown to be
useful in various applications. Unfortunately, in practice functions often
contain noise and discretization artifacts causing their Jacobi set to become
unmanageably large and complex. While there exist techniques to simplify Jacobi
sets, these are unsuitable for most applications as they lack fine-grained
control over the process and heavily restrict the type of simplifications
possible.
In this paper, we introduce a new framework that generalizes critical point
cancellations in scalar functions to Jacobi sets in two dimensions. We focus on
simplifications that can be realized by smooth approximations of the
corresponding functions and show how this implies simultaneously simplifying
contiguous subsets of the Jacobi set. These extended cancellations form the
atomic operations in our framework, and we introduce an algorithm to
successively cancel subsets of the Jacobi set with minimal modifications
according to some user-defined metric. We prove that the algorithm is correct
and terminates only once no more local, smooth, and consistent simplifications
are possible. We disprove a previous claim on the minimal Jacobi set for
manifolds with arbitrary genus and show that for simply connected domains, our
algorithm reduces a given Jacobi set to its simplest configuration.
Comment: 24 pages, 19 figures
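The defining condition above (aligned gradients) can be probed numerically: on a 2-D grid, the Jacobi set is the zero set of the determinant det[∇f, ∇g]. Below is a minimal sketch of that indicator; the grid, example functions, and spacing are illustrative choices, not from the paper.

```python
import numpy as np

def jacobi_set_indicator(f, g, spacing=1.0):
    """Return |det[grad f, grad g]| on a 2-D grid. The Jacobi set is
    (approximately) the zero set of this quantity: where the two
    gradients are parallel, the 2x2 determinant vanishes."""
    fy, fx = np.gradient(f, spacing)
    gy, gx = np.gradient(g, spacing)
    return np.abs(fx * gy - fy * gx)

# Example: f = x and g = x^2 + y^2. Their gradients align exactly along
# the x-axis (y = 0), so the indicator should vanish on the middle row.
ys, xs = np.mgrid[-1:1:81j, -1:1:81j]
ind = jacobi_set_indicator(xs, xs**2 + ys**2, spacing=2 / 80)
```

In a real pipeline the zero set would be extracted from `ind` with a contouring step, which is where the noise and discretization artifacts discussed above enter.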
Scalable scientific data
Poster
Question: How can we present hundreds or thousands of gigabytes of scientific data to a user for analysis and interpretation?
• The Scientific Computing and Imaging Institute is responsible for helping scientists visualize massive amounts of data.
• Sources of large scientific data include medical imaging equipment (CAT, PET, MRI, etc.), fluid dynamics simulations, and genetic sequence mapping.
• Some of these simulations produce hundreds of gigabytes of data per simulation time step.
Hierarchical Z-Order: based on the Lebesgue curve.
• Indexes Z-curve resolution levels in hierarchical order, from coarser to finer.
• Maintains the same geometric locality within each Z-curve resolution level.
• Beneficial for progressive-resolution requests (e.g., an "object search" application may first attempt to perform filtering on a coarser resolution).
Evaluation: loading a set of random samples from an 8 GB 3D image showed that:
• Both Z- and HZ-order significantly outperform the standard row-major representation.
• HZ-order also outperforms Z-order for progressive requests.
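The two orderings above can be sketched compactly. Z-order interleaves coordinate bits (the Lebesgue/Morton curve); HZ-order then reorders those indices so that all samples of a coarser resolution level precede the finer ones. The bit trick below is a common way to express that reindexing; it is a sketch, not the poster's actual implementation.

```python
def morton2d(x, y, bits):
    """Interleave the bits of (x, y) into a Z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return z

def z_to_hz(z, total_bits):
    """Map a Z-order index to its HZ-order position, so that coarser
    resolution levels (indices with more trailing zeros) come first."""
    z |= 1 << total_bits   # sentinel bit marks the full index length
    z //= z & -z           # strip the trailing zeros that encode the level
    return z >> 1          # drop the leading 1 of that level
```

For a 1-D signal of 8 samples (`total_bits=3`), sorting indices 0..7 by their HZ position yields 0, 4, 2, 6, 1, 3, 5, 7: first the coarsest sample, then each successive refinement level, which is exactly what makes progressive requests cheap.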
Hierarchical Large-scale Volume Representation with 3rd-root-of-2 Subdivision and Trivariate B-spline Wavelets
Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call nth-root-of-2 subdivision. The main advantage of nth-root-of-2 subdivision, compared to quadtree (n=2) or octree (n=3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yield better approximations for each level of detail. We develop a lifting scheme for n=2 and n=3 based on the nth-root-of-2 subdivision scheme. We obtain narrow masks that provide a basis for out-of-core techniques as well as view-dependent visualization and adaptive, localized refinement.
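The lifting scheme mentioned above factors a wavelet transform into cheap in-place predict and update passes. As a stand-in for the paper's trivariate B-spline construction, here is the idea in 1-D for the linear B-spline (CDF 5/3) wavelet; the boundary clamping is one illustrative choice among several.

```python
def lift_forward(s):
    """One level of linear B-spline lifting: split samples into even/odd,
    predict each odd sample from its even neighbors, then update the evens
    with a fraction of the resulting details. Returns (coarse, details)."""
    even, odd = s[0::2], s[1::2]
    # Predict: detail = odd sample minus the average of its even neighbors.
    d = [odd[i] - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
         for i in range(len(odd))]
    # Update: evens absorb part of the details to stabilize the coarse signal.
    c = [even[i] + 0.25 * (d[max(i - 1, 0)] + d[min(i, len(d) - 1)])
         for i in range(len(even))]
    return c, d
```

On locally linear data the predictor is exact, so the details vanish; that is the sense in which the wavelets "yield better approximations for each level of detail", and the short predict/update stencils are the narrow masks that make out-of-core traversal practical.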
Mapping applications with collectives over sub-communicators on torus networks
Pre-print
The placement of tasks in a parallel application on specific nodes of a supercomputer can significantly impact performance. Traditionally, this task mapping has focused on reducing the distance between communicating tasks on the physical network. This minimizes the number of hops that point-to-point messages travel and thus reduces link sharing between messages and contention. However, for applications that use collectives over sub-communicators, this heuristic may not be optimal. Many collectives can benefit from an increase in bandwidth even at the cost of an increase in hop count, especially when sending large messages. For example, placing communicating tasks in a cube configuration rather than a plane or a line on a torus network increases the number of possible paths messages might take. This increases the available bandwidth which can lead to significant performance gains. We have developed Rubik, a tool that provides a simple and intuitive interface to create a wide variety of mappings for structured communication patterns. Rubik supports a number of elementary operations such as splits, tilts, or shifts, that can be combined into a large number of unique patterns. Each operation can be applied to disjoint groups of processes involved in collectives to increase the effective bandwidth. We demonstrate the use of Rubik for improving performance of two parallel codes, pF3D and Qbox, which use collectives over sub-communicators.
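The cube-versus-line intuition is easy to quantify: count the torus links whose both endpoints lie inside the placed group, since those are the links a sub-communicator collective can exploit. This toy counter is not Rubik's interface; the torus size and group shapes are illustrative.

```python
from itertools import product

def torus_neighbors(node, dims):
    """Nearest neighbors of a node on a 3-D torus with wraparound."""
    out = []
    for axis in range(3):
        for step in (-1, 1):
            n = list(node)
            n[axis] = (n[axis] + step) % dims[axis]
            out.append(tuple(n))
    return out

def internal_links(nodes, dims):
    """Number of distinct torus links with both endpoints in the group."""
    group, links = set(nodes), set()
    for u in group:
        for v in torus_neighbors(u, dims):
            if v in group:
                links.add(frozenset((u, v)))
    return len(links)

dims = (16, 16, 16)
line = [(i, 0, 0) for i in range(8)]        # 8 tasks along one torus axis
cube = list(product(range(2), repeat=3))    # the same 8 tasks as a 2x2x2 cube
```

The line of 8 tasks spans 7 internal links, while the 2x2x2 cube spans 12, which is the extra path diversity (and hence bandwidth) the abstract describes, bought at the price of slightly longer average hop counts.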
Topology verification for isosurface extraction
Journal article
The broad goals of verifiable visualization rely on correct algorithmic implementations. We extend a framework for verification of isosurfacing implementations to check topological properties. Specifically, we use stratified Morse theory and digital topology to design algorithms which verify topological invariants. Our extended framework reveals unexpected behavior and coding mistakes in popular publicly available isosurface codes.
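One of the simplest topological invariants such a framework can check on an extracted isosurface is the Euler characteristic V − E + F, which for a closed orientable surface must equal 2 − 2·genus. This is only a minimal sanity check in the spirit of the paper, not its stratified-Morse-theory algorithm.

```python
def euler_characteristic(triangles):
    """V - E + F for a triangle mesh given as vertex-index triples.
    For a closed orientable surface this equals 2 - 2*genus, so a
    spherical isosurface component must come out to exactly 2."""
    verts, edges = set(), set()
    for a, b, c in triangles:
        verts.update((a, b, c))
        edges.update(frozenset(e) for e in ((a, b), (b, c), (c, a)))
    return len(verts) - len(edges) + len(triangles)

# Boundary of a tetrahedron: the simplest triangulated sphere (genus 0).
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
```

A verifier would compare this value against the invariant predicted from the scalar field's critical points; a mismatch flags a coding mistake in the isosurface extractor.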
Characterization and modeling of PIDX parallel I/O for performance optimization
Pre-print
Parallel I/O library performance can vary greatly in response to user-tunable parameter values such as aggregator count, file count, and aggregation strategy. Unfortunately, manual selection of these values is time consuming and dependent on characteristics of the target machine, the underlying file system, and the dataset itself. Some characteristics, such as the amount of memory per core, can also impose hard constraints on the range of viable parameter values. In this work we address these problems by using machine learning techniques to model the performance of the PIDX parallel I/O library and select appropriate tunable parameter values. We characterize both the network and I/O phases of PIDX on a Cray XE6 as well as an IBM Blue Gene/P system. We use the results of this study to develop a machine learning model for parameter space exploration and performance prediction.
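The model-then-select workflow can be sketched generically: fit a regression of throughput over the tunables from benchmark samples, then pick the candidate configuration with the best prediction. Everything below is a hedged illustration on synthetic numbers; the model form, feature choice, and the peak at 8 aggregators and 4 files are assumptions, not PIDX's model or measurements.

```python
import numpy as np

def fit_and_select(observed, candidates):
    """Fit a quadratic-in-log2 throughput model over (aggregators, files)
    samples via least squares, then return the candidate pair with the
    best predicted throughput. A generic auto-tuning sketch."""
    feats = lambda a, f: [1, np.log2(a), np.log2(f),
                          np.log2(a) ** 2, np.log2(f) ** 2]
    X = np.array([feats(a, f) for (a, f), _ in observed])
    y = np.array([t for _, t in observed])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # fitted model weights
    return max(candidates, key=lambda c: w @ feats(*c))

# Synthetic benchmark data with a throughput peak at (8 aggregators, 4 files).
observed = [((a, f), 10 - (np.log2(a) - 3) ** 2 - (np.log2(f) - 2) ** 2)
            for a in (1, 4, 16, 64) for f in (1, 4, 16)]
candidates = [(a, f) for a in (1, 2, 4, 8, 16, 32) for f in (1, 2, 4, 8, 16)]
best = fit_and_select(observed, candidates)
```

Hard constraints such as per-core memory would simply be applied by filtering `candidates` before the selection step.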
Conforming Morse-Smale complexes
Pre-print
Morse-Smale (MS) complexes have been gaining popularity as a tool for feature-driven data analysis and visualization. However, the quality of their geometric embedding and the sole dependence on the input scalar field data can limit their applicability when expressing application-dependent features. In this paper we introduce a new combinatorial technique to compute an MS complex that conforms to both an input scalar field and an additional, prior segmentation of the domain. The segmentation constrains the MS complex computation, guaranteeing that boundaries in the segmentation are captured as separatrices of the MS complex. We demonstrate the utility and versatility of our approach with two applications. First, we use streamline integration to determine numerically computed basins/mountains and use the resulting segmentation as an input to our algorithm. This strategy enables the incorporation of prior flow path knowledge, effectively resulting in an MS complex that is as geometrically accurate as the employed numerical integration. Our second use case is motivated by the observation that often the data itself does not explicitly contain features known to be present by a domain expert. We introduce edit operations for MS complexes so that a user can directly modify their features while maintaining all the advantages of a robust topology-based representation.
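The core constraint, that segmentation boundaries survive as separatrices, can be illustrated with a toy descent that is forbidden from crossing the prior segmentation. This grid-based sketch is only an analogy; the paper's method is combinatorial and operates on the full MS complex, not on per-cell descent paths.

```python
import numpy as np

def constrained_basins(field, labels):
    """Steepest-descent basin of each grid cell, where descent may never
    step across a boundary of the prior segmentation `labels`.
    Returns the flat index of each cell's terminating minimum."""
    h, w = field.shape
    def descend(i, j):
        while True:
            best = (i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < h and 0 <= nj < w
                        and labels[ni, nj] == labels[i, j]  # stay in region
                        and field[ni, nj] < field[best]):
                    best = (ni, nj)
            if best == (i, j):          # local minimum within the region
                return i * w + j
            i, j = best
    return np.array([[descend(i, j) for j in range(w)] for i in range(h)])

# Toy example: a single valley that the segmentation splits in two,
# so descent terminates at a separate minimum inside each region.
field = np.array([[3, 2, 1, 0, 1, 2, 3]])
labels = np.array([[0, 0, 0, 0, 1, 1, 1]])
basins = constrained_basins(field, labels)
```

Without the label check every cell would drain to the single global minimum; with it, the segmentation boundary between the two regions necessarily separates two basins, mirroring how the constrained MS complex captures that boundary as a separatrix.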
