
    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum-mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    Analysis of approximate nearest neighbor searching with clustered point sets

    We present an empirical analysis of data structures for approximate nearest neighbor searching. We compare the well-known optimized kd-tree splitting method against two alternative splitting methods. The first, called the sliding-midpoint method, attempts to balance the goals of producing subdivision cells of bounded aspect ratio while not producing any empty cells. The second, called the minimum-ambiguity method, is a query-based approach. In addition to the data points, it is also given a training set of query points for preprocessing. It employs a simple greedy algorithm to select the splitting plane that minimizes the average amount of ambiguity in the choice of the nearest neighbor for the training points. We provide an empirical analysis comparing these two methods against the optimized kd-tree construction for a number of synthetically generated data and query sets. We demonstrate that for clustered data and query sets, these algorithms can provide significant improvements over the standard kd-tree construction for approximate nearest neighbor searching.
    Comment: 20 pages, 8 figures. Presented at ALENEX '99, Baltimore, MD, Jan 15-16, 1999
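    A minimal sketch of the sliding-midpoint splitting rule described in this abstract, included to make the idea concrete. It is an illustrative reconstruction rather than the authors' implementation; the Point and Node types, the leaf_size cutoff, and the choice of the cell's longest side as the cutting dimension are assumptions made for the example.

# Sliding-midpoint split (sketch): cut the cell's longest side at its
# midpoint; if that would leave one side empty, slide the cut to the
# nearest data point so that no empty cells are produced.
from dataclasses import dataclass
from typing import List, Optional

Point = List[float]

@dataclass
class Node:
    points: List[Point]                  # points stored at a leaf
    cut_dim: Optional[int] = None        # splitting dimension (internal nodes)
    cut_val: Optional[float] = None      # splitting coordinate
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build_sliding_midpoint(points: List[Point], lo: Point, hi: Point,
                           leaf_size: int = 4) -> Node:
    """Recursively subdivide the cell [lo, hi] containing `points`."""
    if len(points) <= leaf_size:
        return Node(points=points)
    # Cut the longest side of the cell at its midpoint (keeps cells "fat").
    dim = max(range(len(lo)), key=lambda d: hi[d] - lo[d])
    cut = 0.5 * (lo[dim] + hi[dim])
    left_pts = [p for p in points if p[dim] <= cut]
    right_pts = [p for p in points if p[dim] > cut]
    # Slide the cut toward the data if one side would otherwise be empty.
    if not left_pts:
        cut = min(p[dim] for p in points)
        left_pts = [p for p in points if p[dim] == cut]
        right_pts = [p for p in points if p[dim] > cut]
    elif not right_pts:
        cut = max(p[dim] for p in points)
        right_pts = [p for p in points if p[dim] == cut]
        left_pts = [p for p in points if p[dim] < cut]
    if not left_pts or not right_pts:    # degenerate: all points share this coordinate
        return Node(points=points)
    hi_left, lo_right = list(hi), list(lo)
    hi_left[dim] = cut
    lo_right[dim] = cut
    return Node(points=[], cut_dim=dim, cut_val=cut,
                left=build_sliding_midpoint(left_pts, lo, hi_left, leaf_size),
                right=build_sliding_midpoint(right_pts, lo_right, hi, leaf_size))

    The minimum-ambiguity splitter mentioned above would instead score candidate cutting planes against a training set of query points and greedily pick the plane that minimizes the average ambiguity in the choice of nearest neighbor for those training points; it is omitted here for brevity.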

    On the Combinatorial Complexity of Approximating Polytopes

    Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body $K$ of diameter $\mathrm{diam}(K)$ is given in Euclidean $d$-dimensional space, where $d$ is a constant. Given an error parameter $\varepsilon > 0$, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from $K$ is at most $\varepsilon \cdot \mathrm{diam}(K)$. By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that $O(1/\varepsilon^{(d-1)/2})$ facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is $\tilde{O}(1/\varepsilon^{(d-1)/2})$, where $\tilde{O}$ conceals a polylogarithmic factor in $1/\varepsilon$. This is a significant improvement upon the best known bound, which is roughly $O(1/\varepsilon^{d-2})$. Our result is based on a novel combination of both old and new ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of Bárány and Larman's economical cap covering. Finally, we use a deterministic adaptation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction.
    Comment: In Proceedings of the 32nd International Symposium on Computational Geometry (SoCG 2016) and accepted to the SoCG 2016 special issue of Discrete and Computational Geometry
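    For ease of comparison, the bounds quoted in this abstract can be placed side by side; this is only a restatement of what is stated above, with $\varepsilon$, $d$, and $\tilde{O}$ as defined there.

    \[
      \underbrace{O\!\left(1/\varepsilon^{(d-1)/2}\right)}_{\text{facets (Dudley)}}
      \qquad
      \underbrace{O\!\left(1/\varepsilon^{(d-1)/2}\right)}_{\text{vertices (Bronshteyn--Ivanov)}}
      \qquad
      \underbrace{O\!\left(1/\varepsilon^{d-2}\right)}_{\text{best prior bound on total complexity}}
      \qquad
      \underbrace{\tilde{O}\!\left(1/\varepsilon^{(d-1)/2}\right)}_{\text{total complexity shown here}}
    \]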

    The solar spectral irradiance 1200-3184 A near solar maximum, 15 July 1980

    Full disk solar spectral irradiances near solar maximum were obtained in the spectral range 1200 to 3184 A at a spectral resolution of approximately 1 A from rocket observations above White Sands Missile Range. Comparison with measurements made during solar minimum confirms a large increase at solar maximum in the solar irradiance near 1200 A, with no change within the measurement errors near 2000 A. Irradiances in the range 1900 to 2100 A are in excellent agreement with previous measurements, and those in the 2100 to 2500 A range are lower than separate previous results in this range. Agreement is found with previous values from 2500 to 2900 A, while the present irradiances fall below those values from 2900 to 3184 A.

    From the Editor
