
    Analysis of approximate nearest neighbor searching with clustered point sets

    We present an empirical analysis of data structures for approximate nearest neighbor searching. We compare the well-known optimized kd-tree splitting method against two alternative splitting methods. The first, called the sliding-midpoint method, attempts to balance the goals of producing subdivision cells of bounded aspect ratio while not producing any empty cells. The second, called the minimum-ambiguity method, is a query-based approach: in addition to the data points, it is given a training set of query points for preprocessing, and it employs a simple greedy algorithm to select the splitting plane that minimizes the average amount of ambiguity in the choice of the nearest neighbor for the training points. We provide an empirical analysis comparing these two methods against the optimized kd-tree construction for a number of synthetically generated data and query sets. We demonstrate that for clustered data and query sets, these algorithms can provide significant improvements over the standard kd-tree construction for approximate nearest neighbor searching. (Comment: 20 pages, 8 figures. Presented at ALENEX '99, Baltimore, MD, Jan 15-16, 1999.)
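    The sliding-midpoint split described above can be sketched in a few lines. This is an illustrative reconstruction from the abstract's description, not the authors' code; the cell-bounds interface and NumPy usage are assumptions:

```python
import numpy as np

def sliding_midpoint_split(points, cell_lo, cell_hi):
    """One sliding-midpoint split of a kd-tree cell.

    Cut at the midpoint of the cell's longest side; if every point
    falls on one side, "slide" the cut to the nearest point so that
    neither child cell is empty.
    """
    dim = int(np.argmax(cell_hi - cell_lo))    # longest side of the cell
    cut = 0.5 * (cell_lo[dim] + cell_hi[dim])  # midpoint cut
    coords = points[:, dim]
    left = coords < cut
    if left.all():            # right child would be empty: slide to max point
        cut = coords.max()
        left = coords < cut
    elif not left.any():      # left child would be empty: slide to min point
        cut = coords.min()
        left = coords <= cut
    return dim, cut, points[left], points[~left]
```

    For clustered points huddled in one corner of a large cell, a plain midpoint cut would leave one child empty; sliding moves the cut to the boundary point instead.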

    On the Combinatorial Complexity of Approximating Polytopes

    Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body $K$ of diameter $\mathrm{diam}(K)$ is given in Euclidean $d$-dimensional space, where $d$ is a constant. Given an error parameter $\varepsilon > 0$, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from $K$ is at most $\varepsilon \cdot \mathrm{diam}(K)$. By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that $O(1/\varepsilon^{(d-1)/2})$ facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is $\tilde{O}(1/\varepsilon^{(d-1)/2})$, where $\tilde{O}$ conceals a polylogarithmic factor in $1/\varepsilon$. This is a significant improvement upon the best known bound, which is roughly $O(1/\varepsilon^{d-2})$. Our result is based on a novel combination of both old and new ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of Bárány and Larman's economical cap covering. Finally, we use a deterministic adaptation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction. (Comment: In Proceedings of the 32nd International Symposium on Computational Geometry (SoCG 2016); accepted to the SoCG 2016 special issue of Discrete and Computational Geometry.)
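    The bounds mentioned above can be placed side by side; this is only a restatement of the abstract's quantities for an $\varepsilon$-approximation of a convex body in fixed dimension $d$, not a new result:

```latex
\underbrace{O\bigl(1/\varepsilon^{(d-1)/2}\bigr)}_{\text{facets (Dudley)}}
\qquad
\underbrace{O\bigl(1/\varepsilon^{(d-1)/2}\bigr)}_{\text{vertices (Bronshteyn--Ivanov)}}
\qquad
\underbrace{\tilde{O}\bigl(1/\varepsilon^{(d-1)/2}\bigr)}_{\text{all faces (this work)}}
\;\;\text{vs.}\;\;
\underbrace{O\bigl(1/\varepsilon^{d-2}\bigr)}_{\text{all faces, previous best}}
```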

    Correspondence between HBT radii and the emission zone in non-central heavy ion collisions

    In non-central collisions between ultra-relativistic heavy ions, the freeze-out distribution is anisotropic, and its major longitudinal axis may be tilted away from the beam direction. The shape and orientation of this distribution are particularly interesting, as they provide a snapshot of the evolving source and reflect the space-time aspect of anisotropic flow. Experimentally, this information is extracted by measuring pion HBT radii as a function of angle with respect to the reaction plane. Existing formulae relating the oscillations of the radii to the freeze-out anisotropy are in principle only valid for Gaussian sources with no collective flow. With a realistic transport model of the collision, which generates flow and non-Gaussian sources, we find that these formulae nevertheless approximately reflect the anisotropy of the freeze-out distribution. (Comment: 9 pages, 8 figures.)
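    As a toy illustration of the measurement just described (not the paper's transport model): for a static Gaussian source the sideward radius is commonly written as $R_s^2(\Phi) = R_{s,0}^2 + 2R_{s,2}^2\cos 2\Phi$, and $2R_{s,2}^2/R_{s,0}^2$ is a standard anisotropy estimator. Both relations are assumptions imported here, not quotes from the paper:

```python
import numpy as np

def hbt_anisotropy(phi, rside_sq):
    """Estimate the source anisotropy from sideward HBT radii.

    phi: pair emission angles w.r.t. the reaction plane (uniform coverage);
    rside_sq: measured R_side^2 values at those angles.
    """
    rs0_sq = np.mean(rside_sq)                    # 0th Fourier coefficient
    rs2_sq = np.mean(rside_sq * np.cos(2 * phi))  # 2nd Fourier coefficient
    return 2.0 * rs2_sq / rs0_sq                  # Gaussian-source estimator
```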

    AN ECONOMIC ANALYSIS OF THE U.S. GENERIC DAIRY ADVERTISING PROGRAM USING AN INDUSTRY MODEL

    The market impacts of generic dairy advertising are assessed using an industry model that encompasses supply and demand conditions at the retail, wholesale, and farm levels, as well as government intervention under the dairy price support program. The estimated model is used to simulate price and quantity values for four advertising scenarios: (1) no advertising, (2) historical fluid advertising, (3) historical manufactured advertising, and (4) historical fluid and manufactured advertising. Compared to previous studies, the dairy-industry model provides additional insights into the way generic dairy advertising influences prices and quantities at the retail, wholesale, and farm levels. (Subject categories: Livestock Production/Industries; Marketing.)
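    A toy linear sketch of how such a model generates scenario comparisons; this is not the paper's econometric system, and every coefficient here is hypothetical:

```python
def equilibrium(advertising, support_price=0.0):
    """Price and quantity for a toy one-market dairy model.

    Demand q = a - b*p + c*advertising; supply q = s0 + s1*p;
    a government support price acts as a floor on the market price.
    """
    a, b, c = 100.0, 2.0, 5.0                  # hypothetical demand coefficients
    s0, s1 = 10.0, 4.0                         # hypothetical supply coefficients
    p = (a + c * advertising - s0) / (b + s1)  # market-clearing price
    p = max(p, support_price)                  # support program binds from below
    q = a - b * p + c * advertising            # quantity at that price
    return p, q

# Scenario comparison in the spirit of the abstract:
baseline = equilibrium(0.0)    # (1) no advertising
with_ads = equilibrium(2.0)    # a hypothetical historical advertising level
```

    Advertising shifts the demand curve out, raising both price and quantity relative to the no-advertising baseline.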

    Delaunay triangulation and computational fluid dynamics meshes

    In aerospace computational fluid dynamics (CFD) calculations, the Delaunay triangulation of suitable quadrilateral meshes can lead to unsuitable triangulated meshes. Here, we present case studies that illustrate the limitations of using structured grid generation methods, which produce points in a curvilinear coordinate system, for subsequent triangulations for CFD applications. We discuss conditions under which meshes of quadrilateral elements may not produce a Delaunay triangulation suitable for CFD calculations, particularly with regard to high-aspect-ratio, skewed quadrilateral elements.
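    The effect can be seen directly with the Delaunay empty-circumcircle predicate. This is an illustrative toy; the specific quadrilateral is made up and not taken from the paper's case studies:

```python
def in_circle(a, b, c, d):
    """> 0 iff d lies inside the circumcircle of the ccw triangle (a, b, c)."""
    m = [[p[0] - d[0], p[1] - d[1], (p[0] - d[0])**2 + (p[1] - d[1])**2]
         for p in (a, b, c)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# A high-aspect-ratio, skewed quad a-b-c-d (listed counterclockwise):
a, b, c, d = (0.0, 0.0), (10.0, 0.5), (10.5, 1.0), (0.5, 0.5)
# Diagonal a-c is Delaunay only if d lies outside the circumcircle of
# (a, b, c); here it lies inside, so the Delaunay criterion forces the
# other diagonal regardless of what the structured grid intended.
print("a-c is Delaunay:", in_circle(a, b, c, d) <= 0)   # -> False
```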

    Space Exploration via Proximity Search

    We investigate what computational tasks can be performed on a point set in $\Re^d$ if we are only given black-box access to it via nearest-neighbor search. This is a reasonable assumption if the underlying point set is either provided implicitly or stored in a data structure that can answer such queries. In particular, we show the following: (A) one can compute an approximate bi-criteria $k$-center clustering of the point set, and more generally compute a greedy permutation of the point set; (B) one can decide if a query point is (approximately) inside the convex hull of the point set. We also investigate the problem of clustering the given point set such that meaningful proximity queries can be carried out on the centers of the clusters instead of the whole point set.
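    For reference, the classical greedy (farthest-point) permutation with explicit access to the points looks like the sketch below; the abstract's contribution is that an approximate version is possible with nearest-neighbor queries alone, which this sketch does not attempt:

```python
import numpy as np

def greedy_permutation(points):
    """Gonzalez-style farthest-point ordering of a point set.

    For every k, the first k points of the ordering form a
    2-approximate k-center clustering.
    """
    n = len(points)
    perm = [0]                                    # arbitrary starting point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(dist))                # farthest from chosen set
        perm.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return perm
```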

    Performance, emissions, and physical characteristics of a rotating combustion aircraft engine

    The RC2-75, a liquid-cooled two-chamber rotary combustion engine (Wankel type) designed for aircraft use, was tested, and representative baseline (212 kW, 285 BHP) performance and emissions characteristics were established. The testing included running fuel/air mixture control curves and varying ignition timing to permit selection of desirable and practical settings for running wide-open-throttle curves, propeller load curves, variable manifold pressure curves covering cruise conditions, and EPA cycle operating points. Performance and emissions data were recorded for all of the points run. In addition to the test data, information required to characterize the engine and evaluate its performance in aircraft use is provided over a range from one half to twice its present power. The exhaust emissions results are compared to the 1980 EPA requirements. Standard-day take-off brake specific fuel consumption is 356 g/kW-hr (0.585 lb/BHP-hr) for the configuration tested.
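    The quoted fuel-consumption figure converts consistently between unit systems; this is a simple arithmetic check using only standard conversion constants, not additional data from the report:

```python
# Check that 356 g/kW-hr equals the quoted 0.585 lb/BHP-hr.
G_PER_LB = 453.59237      # grams per pound (exact definition)
KW_PER_HP = 0.745699872   # kilowatts per mechanical horsepower

bsfc_metric = 356.0                                 # g/kW-hr
bsfc_imperial = bsfc_metric * KW_PER_HP / G_PER_LB  # lb/BHP-hr
print(round(bsfc_imperial, 3))                      # -> 0.585
```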

    Yanagi: Transcript Segment Library Construction for RNA-Seq Quantification

    Analysis of differential alternative splicing from RNA-seq data is complicated by the fact that many RNA-seq reads map to multiple transcripts and that annotated transcripts from a given gene are often a small subset of the many possible complete transcripts for that gene. Here we describe Yanagi, a tool that segments a transcriptome into disjoint regions, creating a segment library from a complete transcriptome annotation that preserves all of its consecutive regions of a given length L while distinguishing annotated alternative splicing events in the transcriptome. In this paper, we formalize this concept of transcriptome segmentation and propose an efficient algorithm for generating segment libraries based on a length parameter dependent on the specific RNA-seq library construction. The resulting segment sequences can be used with pseudo-alignment tools to quantify expression at the segment level. We characterize the segment libraries for the reference transcriptomes of Drosophila melanogaster and Homo sapiens. Finally, we demonstrate the utility of segment-level quantification with an analysis of differential exon skipping in Drosophila melanogaster and Homo sapiens. The notion of transcript segmentation as introduced here and implemented in Yanagi opens the door to the application of lightweight, ultra-fast pseudo-alignment algorithms in a wide variety of analyses of transcription variation.
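    A toy sketch of the segmentation idea; this is an illustrative simplification, not Yanagi's actual algorithm (it ignores the length parameter L entirely): consecutive exons that occur in exactly the same set of transcripts are collapsed into one disjoint segment, so segment boundaries fall at annotated splicing events.

```python
def toy_segments(transcripts, exon_order):
    """transcripts: dict name -> set of exon ids; exon_order: exons in genomic order."""
    membership = {e: frozenset(t for t, exons in transcripts.items() if e in exons)
                  for e in exon_order}
    segments, current = [], [exon_order[0]]
    for e in exon_order[1:]:
        if membership[e] == membership[current[-1]]:
            current.append(e)            # same transcript set: extend the segment
        else:
            segments.append(current)     # splicing boundary: start a new segment
            current = [e]
    segments.append(current)
    return segments

# Exon-skipping example: T2 skips E2, so E2 becomes its own segment.
gene = {"T1": {"E1", "E2", "E3"}, "T2": {"E1", "E3"}}
print(toy_segments(gene, ["E1", "E2", "E3"]))   # -> [['E1'], ['E2'], ['E3']]
```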