19 research outputs found

    Structure in the 3D Galaxy Distribution: I. Methods and Example Results

    Full text link
    Three methods for detecting and characterizing structure in point data, such as that generated by redshift surveys, are described: classification using self-organizing maps, segmentation using Bayesian blocks, and density estimation using adaptive kernels. The first two methods are new, and allow detection and characterization of structures of arbitrary shape and at a wide range of spatial scales. These methods should elucidate not only clusters, but also the more distributed, wide-ranging filaments and sheets, and further allow the possibility of detecting and characterizing an even broader class of shapes. The methods are demonstrated and compared in application to three data sets: a carefully selected volume-limited sample from the Sloan Digital Sky Survey redshift data, a similarly selected sample from the Millennium Simulation, and a set of points independently drawn from a uniform probability distribution -- a so-called Poisson distribution. We demonstrate a few of the many ways in which these methods elucidate large-scale structure in the distribution of galaxies in the nearby Universe. Comment: Re-posted after referee corrections along with partially re-written introduction. 80 pages, 31 figures, ApJ in Press. For full-sized figures please download from: http://astrophysics.arc.nasa.gov/~mway/lss1.pd
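    Of the three methods named in this abstract, adaptive kernel density estimation is the most compact to illustrate. The sketch below assumes 3D point positions (e.g. galaxy coordinates) and a Silverman-style adaptive bandwidth rule: a fixed-bandwidth pilot estimate sets a local scale for each point, so kernels shrink in dense regions and widen in sparse ones. The function names, bandwidth values, and pilot scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_kde(points, queries, pilot_bandwidth=5.0, alpha=0.5):
    """Estimate density at `queries` from 3D `points`.

    `pilot_bandwidth` is in data units (e.g. Mpc/h); `alpha` controls how
    strongly the local bandwidth responds to the pilot density.
    """
    d = points.shape[1]  # dimensionality, 3 for redshift-survey positions

    def gaussian_kde(x, centers, h):
        # h may be a scalar or one bandwidth per center
        diff = x[:, None, :] - centers[None, :, :]
        r2 = np.sum(diff**2, axis=-1)
        norm = (2.0 * np.pi * h**2) ** (d / 2.0)
        return np.mean(np.exp(-0.5 * r2 / h**2) / norm, axis=1)

    # Pilot density at the data points themselves (fixed bandwidth).
    pilot = gaussian_kde(points, points, pilot_bandwidth)

    # Per-point bandwidths: shrink kernels where the pilot density is high.
    g = np.exp(np.mean(np.log(pilot)))          # geometric mean of pilot values
    local_h = pilot_bandwidth * (pilot / g) ** (-alpha)

    return gaussian_kde(queries, points, local_h)
```

    Shrinking kernels in dense regions keeps clusters sharp while still smoothing the sparse regions between filaments, which is broadly why adaptive kernels are preferred over a single fixed bandwidth for strongly clustered data like galaxy positions.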

    Neural Control of a Virtual Prosthesis

    No full text

    Data-Driven & Goal-Driven Computational Intelligence for Autonomy and Affordability

    No full text

    Adaptive Scaling of Codebook Vectors

    No full text
    In this paper we introduce a vector quantization algorithm in which the codebook vectors are extended with a scale parameter so that they represent Gaussian functions. The means of these functions are determined by a standard vector quantization algorithm, and for their scales we have derived a learning rule. Our algorithm estimates probability densities efficiently; the main application is pattern classification. Pattern classification is trivial if a function is available that describes the probability distribution of the classes in the pattern space. In that case we can use the Bayesian classifier: classify a pattern according to the class with the largest probability at the respective sample position in the pattern space. The obvious problem is then to find a function that describes this probability distribution. A standard method is Parzen window estimation [1]. In this method each pattern is seen as a Gaussian distribution whose mean equals the pattern position and whose standard deviation (or scale) has some arbitrary value. The estimated probability function is the average of all Gaussians. The major advantage of this method is that any probability function will be estimated correctly as the number of samples approaches infinity. The main disadvantages are: (i) a large number of samples is necessary to obtain a reasonable estimate; (ii) the probability function is expressed in terms of a large number of Gaussians, so evaluating it is time- and memory-consuming; and (iii) the choice of the standard deviation is arbitrary. In this paper we propose a new method which reduces these disadvantages while keeping the nice properties of Parzen window estimation.
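    For concreteness, here is a minimal sketch of the Parzen-window baseline the text describes: each training pattern contributes an isotropic Gaussian of fixed scale, the class-conditional density is the average of those Gaussians, and the Bayesian classifier picks the class with the largest prior-weighted density at the query point. The function names, the `sigma` value, and the prior weighting are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def parzen_density(x, samples, sigma):
    """Average of isotropic Gaussians of scale `sigma` centred on the samples."""
    d = samples.shape[1]
    r2 = np.sum((x[:, None, :] - samples[None, :, :]) ** 2, axis=-1)
    norm = (2.0 * np.pi * sigma**2) ** (d / 2.0)
    return np.mean(np.exp(-0.5 * r2 / sigma**2) / norm, axis=1)

def bayes_classify(x, class_samples, sigma):
    """Pick the class with the largest prior-weighted Parzen density at x."""
    n_total = sum(len(s) for s in class_samples)
    scores = np.stack([
        (len(s) / n_total) * parzen_density(x, s, sigma)  # prior * p(x | class)
        for s in class_samples
    ])
    return np.argmax(scores, axis=0)

# Example: two 2-D classes drawn from shifted Gaussians.
rng = np.random.default_rng(0)
classes = [rng.normal(0.0, 1.0, size=(200, 2)),
           rng.normal(3.0, 1.0, size=(200, 2))]
labels = bayes_classify(np.array([[0.1, -0.2], [2.8, 3.1]]), classes, sigma=0.5)
```

    The fixed `sigma` shared by every kernel is exactly disadvantage (iii) above, and the sum over all stored samples is disadvantage (ii); replacing the per-sample kernels with a smaller set of codebook Gaussians whose scales are learned is what the proposed method addresses.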

    COMPENSATION COMPETITIVE LEARNING

    No full text