
    Multi-fidelity classification using Gaussian processes: accelerating the prediction of large-scale computational models

    Machine learning techniques typically rely on large datasets to create accurate classifiers. However, there are situations when data is scarce and expensive to acquire. This is the case for studies that rely on state-of-the-art computational models which typically take days to run, thus hindering the potential of machine learning tools. In this work, we present a novel classifier that takes advantage of lower-fidelity models and inexpensive approximations to predict the binary output of expensive computer simulations. We postulate an autoregressive model between the different levels of fidelity with Gaussian process priors. We adopt a fully Bayesian treatment for the hyper-parameters and use Markov chain Monte Carlo samplers. We take advantage of the probabilistic nature of the classifier to implement active learning strategies. We also introduce a sparse approximation to enhance the ability of the multi-fidelity classifier to handle large datasets. We test these multi-fidelity classifiers against their single-fidelity counterparts with synthetic data, showing a median computational cost reduction of 23% for a target accuracy of 90%. In an application to cardiac electrophysiology, the multi-fidelity classifier achieves an F1 score, the harmonic mean of precision and recall, of 99.6%, compared to 74.1% for a single-fidelity classifier when both are trained with 50 samples. In general, our results show that the multi-fidelity classifiers outperform their single-fidelity counterparts in terms of accuracy in all cases. We envision that this new tool will enable researchers to study classification problems that would otherwise be prohibitively expensive. Source code is available at https://github.com/fsahli/MFclass
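    The sketch below illustrates the multi-fidelity idea in a heavily simplified form: instead of the paper's fully Bayesian autoregressive Gaussian process with MCMC, the low-fidelity posterior probability is simply fed as an extra input feature to a high-fidelity Gaussian process classifier. All data, variable names (X_low, y_low, etc.) and the coupling scheme are illustrative assumptions, not the released code at the GitHub link above.

```python
# Simplified multi-fidelity classification sketch (not the paper's MCMC-based model).
# Assumption: the low-fidelity class probability is appended as a feature for the
# high-fidelity classifier, mimicking the autoregressive coupling between levels.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical data: many cheap low-fidelity labels, few expensive high-fidelity ones.
X_low = rng.uniform(-3, 3, size=(200, 2))
y_low = (np.sin(X_low[:, 0]) + 0.3 * X_low[:, 1] > 0).astype(int)
X_high = rng.uniform(-3, 3, size=(30, 2))
y_high = (np.sin(X_high[:, 0]) + 0.3 * X_high[:, 1]
          + 0.2 * np.cos(3 * X_high[:, 1]) > 0).astype(int)

# Level 1: classifier trained on the abundant low-fidelity data.
gp_low = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X_low, y_low)

# Level 2: high-fidelity classifier that sees the inputs plus the low-fidelity probability.
def augment(X):
    return np.column_stack([X, gp_low.predict_proba(X)[:, 1]])

gp_high = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(augment(X_high), y_high)

X_test = rng.uniform(-3, 3, size=(5, 2))
print(gp_high.predict_proba(augment(X_test))[:, 1])  # high-fidelity class probabilities
```

    The predicted probabilities are what an active learning strategy like the one described above would use to decide where to run the next expensive simulation.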

    A century of psychiatry at Sant Boi


    Relating the Mercalli intensity scale to instrumental information as a pattern classification task

    Despite advances in seismic instrumentation, assessing seismic vulnerability and damage with qualitative indices, such as those provided by the Modified Mercalli Intensity (MMI), remains highly practical and useful. To link qualitative measures of earthquake action to its effects, statistical regression is usually applied. In this article, a different approach is adopted, which consists of expressing the Mercalli intensity as a class rather than a numerical value. A modern statistical classification tool, known as the support vector machine, is used to classify instrumental data in order to estimate the corresponding Mercalli intensity. The method is shown to give satisfactory results with respect to the high uncertainties and the qualitative nature of the seismic damage measure
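    A minimal sketch of the approach under assumed inputs: a support vector machine maps instrumental ground-motion features to discrete Mercalli intensity classes. The feature names (peak ground acceleration, peak ground velocity, duration) and the synthetic labels are illustrative placeholders, not the study's dataset.

```python
# Sketch: support vector machine mapping instrumental features to MMI classes.
# The features and data below are hypothetical stand-ins for real records.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
pga = rng.lognormal(mean=-1.0, sigma=0.8, size=n)   # peak ground acceleration [g]
pgv = rng.lognormal(mean=2.0, sigma=0.7, size=n)    # peak ground velocity [cm/s]
duration = rng.uniform(5, 60, size=n)               # strong-motion duration [s]
X = np.column_stack([pga, pgv, duration])

# Hypothetical MMI classes (V to VIII) loosely tied to shaking amplitude.
mmi = np.clip(5 + np.round(np.log10(pga) + 1.5), 5, 8).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, mmi)

new_record = np.array([[0.25, 12.0, 20.0]])         # one new instrumental record
print("Predicted Mercalli intensity class:", clf.predict(new_record)[0])
```

    Treating the intensity as a class rather than fitting a regression line is the key design choice the abstract describes; the classifier never has to assume a linear relation between instrumental amplitude and intensity.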

    Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques can generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data, combined with the diminishing time available to engineers, motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time
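    A minimal sketch, on assumed data, of the pipeline the abstract names: sequential feature selection wrapped around a k-nearest-neighbor classifier ranks the Monte Carlo dispersion parameters that separate failed from successful runs, and a kernel density estimate highlights where the failures concentrate. Variable names such as X (dispersed parameters) and failed (run outcome) are illustrative.

```python
# Sketch of the analysis pipeline: rank Monte Carlo input parameters that best
# separate failed from successful runs, then inspect the failure density.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier, KernelDensity

rng = np.random.default_rng(2)
n_runs, n_params = 1000, 8
X = rng.normal(size=(n_runs, n_params))              # dispersed design parameters
# Hypothetical failure criterion driven by parameters 0 and 3 acting together.
failed = ((X[:, 0] > 1.0) & (X[:, 3] < -0.5)).astype(int)

# Sequential forward selection with a k-NN classifier scores parameter subsets.
knn = KNeighborsClassifier(n_neighbors=15)
sfs = SequentialFeatureSelector(knn, n_features_to_select=2, direction="forward")
sfs.fit(X, failed)
important = np.flatnonzero(sfs.get_support())
print("Parameters most associated with failure:", important)

# Kernel density estimate of the failed runs in the selected parameter plane.
kde = KernelDensity(bandwidth=0.3).fit(X[failed == 1][:, important])
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))])
density = np.exp(kde.score_samples(grid)).reshape(50, 50)
print("Peak failure density:", density.max())
```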

    Shared memory


    The Simplest Piston Problem II: Inelastic Collisions

    We study the dynamics of three particles in a finite interval, in which two light particles are separated by a heavy "piston", with elastic collisions between particles but inelastic collisions between the light particles and the interval ends. A symmetry breaking occurs in which the piston migrates near one end of the interval and performs small-amplitude periodic oscillations on a logarithmic time scale. The properties of this dissipative limit cycle can be understood simply in terms of an effective restitution coefficient picture. Many dynamical features of the three-particle system closely resemble those of the many-body inelastic piston problem. Comment: 8 pages, 7 figures, 2-column revtex4 format
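    A minimal event-driven sketch of the model described above: two light particles of mass m flank a heavy piston of mass M on the unit interval, particle-particle collisions are elastic, and collisions with the interval ends reverse the velocity with a restitution coefficient r < 1. Parameter values and variable names are illustrative assumptions, not taken from the paper.

```python
# Event-driven sketch of the three-particle piston: elastic particle-particle
# collisions, inelastic (restitution r < 1) collisions at the interval ends.
import numpy as np

m, M, r = 1.0, 10.0, 0.9                  # light mass, piston mass, wall restitution
x = np.array([0.2, 0.5, 0.8])             # positions: light, piston, light (0 < x < 1)
v = np.array([0.3, 0.0, -0.2])            # initial velocities
mass = np.array([m, M, m])

def next_event(x, v):
    """Return (time, event) of the earliest collision; event is ('wall'|'pair', index)."""
    t_best, event = np.inf, None
    if v[0] < 0:                           # left particle heading for the wall at 0
        t = -x[0] / v[0]
        if t < t_best:
            t_best, event = t, ("wall", 0)
    if v[2] > 0:                           # right particle heading for the wall at 1
        t = (1.0 - x[2]) / v[2]
        if t < t_best:
            t_best, event = t, ("wall", 2)
    for i in (0, 1):                       # neighbouring pairs (0,1) and (1,2)
        dv = v[i] - v[i + 1]
        if dv > 0:                         # closing in
            t = (x[i + 1] - x[i]) / dv
            if t < t_best:
                t_best, event = t, ("pair", i)
    return t_best, event

for _ in range(10000):
    t, event = next_event(x, v)
    if not np.isfinite(t):
        break
    x = x + v * t                          # advance to the collision
    kind, i = event
    if kind == "wall":
        v[i] = -r * v[i]                   # inelastic bounce off the interval end
    else:                                  # elastic two-body collision between i and i+1
        m1, m2 = mass[i], mass[i + 1]
        v1, v2 = v[i], v[i + 1]
        v[i] = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        v[i + 1] = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

print("Final piston position:", x[1])      # the symmetry breaking drives it toward one end
```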

    Investigation of vertical cavity surface emitting laser dynamics for neuromorphic photonic systems

    We report an approach based upon vertical cavity surface emitting lasers (VCSELs) to optically reproduce different behaviors exhibited by biological neurons, but on a much faster timescale. The technique proposed is based on the polarization switching and nonlinear dynamics induced in a single VCSEL under polarized optical injection. The particular attributes of VCSELs and the simple experimental configuration used in this work offer prospects of fast, reconfigurable processing elements with excellent fan-out and scaling potential for use in future computational paradigms and artificial neural networks. © 2012 American Institute of Physics

    Shell-like structures in our cosmic neighbourhood

    Signatures of the processes in the early Universe are imprinted in the cosmic web. Some of them may define shell-like structures characterised by typical scales. We search for shell-like structures in the distribution of nearby rich clusters of galaxies drawn from the SDSS DR8. We calculate the distance distributions between rich clusters of galaxies, and groups and clusters of various richness, look for maxima in the distance distributions, and select candidates for shell-like structures. We analyse the spatial distribution of the groups and clusters forming shell walls. We find six possible candidates for shell-like structures, in which galaxy clusters have maxima in the distance distribution to other galaxy groups and clusters at a distance of about 120 Mpc/h. The rich galaxy cluster A1795, the central cluster of the Bootes supercluster, has the highest maximum in the distance distribution of other groups and clusters around it at a distance of about 120 Mpc/h among our rich cluster sample, and another maximum at a distance of about 240 Mpc/h. The structures of galaxy systems causing the maxima at 120 Mpc/h form an almost complete shell of galaxy groups, clusters and superclusters. The richest systems in the nearby universe, the Sloan Great Wall, the Corona Borealis supercluster and the Ursa Major supercluster, are among them. The probability of obtaining maxima like this from random distributions is lower than 0.001. Our results confirm that shell-like structures can be found in the distribution of nearby galaxies and their systems. The radii of the possible shells are larger than expected for a BAO shell (approximately 109 Mpc/h versus approximately 120 Mpc/h), and they are determined by very rich galaxy clusters and superclusters with high density contrast, while BAO shells are barely seen in the galaxy distribution. We discuss possible consequences of these differences. Comment: 9 pages, 10 figures, Astronomy and Astrophysics, in press
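    A minimal sketch of the distance-distribution step under assumed inputs: for one rich cluster, compute comoving distances to all groups and clusters, histogram them, and search for a maximum near the ~120 Mpc/h scale discussed above. The random catalogue and the injected shell are illustrative placeholders for the SDSS DR8 samples.

```python
# Sketch: distance distribution from one rich cluster to all groups/clusters,
# with a peak search around the ~120 Mpc/h scale.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)

# Hypothetical Cartesian comoving coordinates [Mpc/h]: a smooth background plus
# a partial shell of radius ~120 Mpc/h centred on the rich cluster at the origin.
background = rng.uniform(-300, 300, size=(5000, 3))
shell_dirs = rng.normal(size=(400, 3))
shell_dirs /= np.linalg.norm(shell_dirs, axis=1, keepdims=True)
shell = shell_dirs * rng.normal(120.0, 5.0, size=(400, 1))
groups = np.vstack([background, shell])
rich_cluster = np.zeros(3)

# Distance distribution and peak search.
d = np.linalg.norm(groups - rich_cluster, axis=1)
counts, edges = np.histogram(d, bins=np.arange(0, 260, 10))
centres = 0.5 * (edges[:-1] + edges[1:])

peaks, _ = find_peaks(counts, prominence=50)
print("Peaks in the distance distribution [Mpc/h]:", centres[peaks])
```

    In the study itself the significance of such maxima is assessed against random distributions, which is what yields the quoted probability below 0.001.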