
    Searches for TeV-scale particles at the LHC using jet shapes

    New particles at the TeV scale can decay hadronically into strongly collimated jets, so standard reconstruction methods based on invariant masses of well-separated jets can fail. We discuss how to identify such particles in pp collisions at the LHC using jet shapes, which help to reduce the contribution of QCD-induced events. We focus on a rather generic example, X to ttbar to hadrons, with X a heavy particle, but the approach is well suited to the reconstruction of other decay channels characterized by a cascade decay of known states. Comment: 14 pages, 6 figures
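The abstract does not name its specific jet shapes, but a standard example of the kind of observable used to separate collimated signal jets from diffuse QCD jets is the jet girth (linear radial moment). A minimal sketch, with toy constituents:

```python
import math

def girth(constituents):
    """Jet girth: sum_i pT_i * dR_i / sum_i pT_i, with dR measured from the
    pT-weighted jet axis.  Smaller girth means a more collimated jet.
    Each constituent is a (pT, eta, phi) tuple."""
    pt_tot = sum(pt for pt, _, _ in constituents)
    # pT-weighted axis (adequate for a sketch; real analyses use the jet algorithm's axis)
    eta_j = sum(pt * eta for pt, eta, _ in constituents) / pt_tot
    phi_j = sum(pt * phi for pt, _, phi in constituents) / pt_tot
    g = 0.0
    for pt, eta, phi in constituents:
        dphi = (phi - phi_j + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
        g += pt * math.hypot(eta - eta_j, dphi)
    return g / pt_tot

# A collimated (signal-like) jet vs. a wide (QCD-like) jet, toy numbers:
narrow = [(50.0, 0.01, 0.01), (45.0, -0.02, 0.0), (40.0, 0.0, -0.01)]
wide   = [(50.0, 0.3, 0.2), (45.0, -0.25, -0.3), (40.0, 0.1, 0.35)]
```

Cutting on such a shape suppresses QCD-induced events whose radiation pattern is broader than that of a boosted heavy-particle decay.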

    EffiTest: Efficient Delay Test and Statistical Prediction for Configuring Post-silicon Tunable Buffers

    At nanometer manufacturing technology nodes, process variations significantly affect circuit performance. To combat them, post-silicon clock tuning buffers can be deployed to balance the timing budgets of critical paths for each individual chip after manufacturing. The challenge of this method is that path delays must be measured for each chip to configure the tuning buffers properly. Current methods for this delay measurement rely on path-wise frequency stepping, a strategy that requires too much time on expensive testers. In this paper, we propose an efficient delay test framework (EffiTest) to solve the post-silicon testing problem by aligning path delays using the tuning buffers already present in the circuit. In addition, we test only representative paths; the delays of the other paths are estimated by statistical delay prediction. Experimental results demonstrate that the proposed method can reduce the number of frequency stepping iterations by more than 94% with only a slight yield loss. Comment: ACM/IEEE Design Automation Conference (DAC), June 201
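The abstract does not specify the statistical prediction model, but the core idea, estimating unmeasured path delays from a few measured representative paths that share the same process variation, can be sketched with an ordinary least-squares regression on synthetic data (the variation model and all numbers below are illustrative, not EffiTest's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic chips: path delays driven by a shared process-variation factor
# plus independent per-path noise (illustrative model).
n_chips, n_paths = 200, 10
process = rng.normal(0.0, 1.0, size=(n_chips, 1))
delays = 5.0 + 0.8 * process + 0.1 * rng.normal(size=(n_chips, n_paths))

measured = delays[:, :3]   # representative paths, tested on every chip
target = delays[:, 3:]     # remaining paths, to be predicted statistically

# Fit on the first half of the chips, predict on the second half.
train, test = slice(0, 100), slice(100, None)
X_train = np.hstack([measured[train], np.ones((100, 1))])  # add intercept
coef, *_ = np.linalg.lstsq(X_train, target[train], rcond=None)

X_test = np.hstack([measured[test], np.ones((100, 1))])
pred = X_test @ coef
rmse = np.sqrt(np.mean((pred - target[test]) ** 2))
```

Because the representative paths capture most of the chip-to-chip variation, the regression predicts the remaining delays far better than the population average would, which is what lets most frequency-stepping iterations be skipped.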

    Principal component analysis - an efficient tool for variable stars diagnostics

    We present two diagnostic methods based on ideas of Principal Component Analysis and demonstrate their efficiency for sophisticated processing of multicolour photometric observations of variable objects. Comment: 8 pages, 4 figures. Published already

    High-Dimensional Inference with the generalized Hopfield Model: Principal Component Analysis and Corrections

    We consider the problem of inferring the interactions between a set of N binary variables from the knowledge of their frequencies and pairwise correlations. The inference framework is based on the Hopfield model, a special case of the Ising model where the interaction matrix is defined through a set of patterns in the variable space and is of rank much smaller than N. We show that Maximum Likelihood inference is deeply related to Principal Component Analysis when the amplitude of the pattern components, xi, is negligible compared to N^1/2. Using techniques from statistical mechanics, we calculate the corrections to the patterns to first order in xi/N^1/2. We stress that it is important to generalize the Hopfield model and include both attractive and repulsive patterns in order to correctly infer networks with sparse and strong interactions. We present a simple geometrical criterion to decide how many attractive and repulsive patterns should be considered as a function of the sampling noise. We moreover discuss how many sampled configurations are required for a good inference, as a function of the system size, N, and of the amplitude, xi. The inference approach is illustrated on synthetic and biological data. Comment: Physical Review E: Statistical, Nonlinear, and Soft Matter Physics (2011), to appear
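The leading-order connection the abstract describes, that the patterns are estimated by the principal components of the correlation matrix, can be illustrated on synthetic data with a single planted pattern (the sampling model and noise level below are assumptions for the sketch, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 50, 4000
xi_true = rng.choice([-1.0, 1.0], size=N)   # planted pattern (illustrative)

# Samples: each configuration is the pattern times a random global sign,
# with spins flipped independently with probability 0.2.
sign = rng.choice([-1.0, 1.0], size=(M, 1))
flips = np.where(rng.random((M, N)) < 0.2, -1.0, 1.0)
S = sign * xi_true * flips                  # M x N matrix of +-1 samples

# Leading-order inference: the top principal component of the connected
# correlation matrix points along the pattern.
C = np.cov(S, rowvar=False)
evals, evecs = np.linalg.eigh(C)            # eigenvalues in ascending order
xi_hat = evecs[:, -1] * np.sqrt(N)          # conventional N^1/2 scaling

overlap = abs(xi_hat @ xi_true) / (np.linalg.norm(xi_hat) * np.linalg.norm(xi_true))
```

The paper's contribution goes beyond this zeroth-order picture: it computes first-order corrections in xi/N^1/2 and shows that repulsive patterns (those associated with the smallest eigenvalues) must also be kept.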

    How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility

    Recommendation systems are ubiquitous and impact many domains; they have the potential to influence product consumption, individuals' perceptions of the world, and life-altering decisions. These systems are often evaluated or trained with data from users already exposed to algorithmic recommendations; this creates a pernicious feedback loop. Using simulations, we demonstrate how using data confounded in this way homogenizes user behavior without increasing utility.
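A toy version of this feedback loop can be simulated directly: a popularity-based recommender trained on confounded consumption data collapses everyone onto the same items, while independent choice does not. Everything below (the utility model, the overlap measure) is a minimal sketch, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

n_users, n_items, rounds = 100, 20, 30
utility = rng.random((n_users, n_items))     # each user's true preferences

def simulate(follow_recs):
    counts = np.ones(n_items)                # "training data": consumption counts
    consumed = np.zeros((n_users, n_items), dtype=bool)
    for _ in range(rounds):
        rec = int(np.argmax(counts))         # recommend the most-consumed item
        for u in range(n_users):
            if follow_recs:                  # confounded: behavior follows the recommender
                choice = rec
            else:                            # organic: users pick by their own utility
                choice = int(np.argmax(utility[u] + rng.normal(0, 0.1, n_items)))
            consumed[u, choice] = True
            counts[choice] += 1
    # Homogeneity: average pairwise overlap of users' consumed-item sets.
    sets = consumed.astype(float)
    inter = sets @ sets.T
    sizes = sets.sum(1)
    overlap = inter / np.minimum(sizes[:, None], sizes[None, :])
    return float(overlap[np.triu_indices(n_users, 1)].mean())

homog_confounded = simulate(True)
homog_organic = simulate(False)
```

In the confounded run the popularity signal is self-reinforcing, so user behavior homogenizes even though no user's utility went up, which is the paper's central point in miniature.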

    Significance analysis and statistical mechanics: an application to clustering

    This paper addresses the statistical significance of structures in random data: given a set of vectors and a measure of mutual similarity, how likely is it that a subset of these vectors forms a cluster with enhanced similarity among its elements? The computation of this cluster p-value for randomly distributed vectors is mapped onto a well-defined problem of statistical mechanics. We solve this problem analytically, establishing a connection between the physics of quenched disorder and multiple-testing statistics in clustering and related problems. In an application to gene expression data, we find a remarkable link between the statistical significance of a cluster and the functional relationships between its genes. Comment: to appear in Phys. Rev. Lett.
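The paper computes this cluster p-value analytically; the quantity itself can be illustrated by brute-force Monte Carlo, comparing a candidate subset's internal similarity against random subsets of the same size (cosine similarity and all parameters below are choices made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_similarity(X):
    """Average pairwise cosine similarity among the rows of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    G = Xn @ Xn.T
    return float(G[np.triu_indices(len(X), 1)].mean())

def cluster_pvalue(data, subset, n_perm=2000):
    """Monte Carlo p-value: how often does a random subset of the same size
    reach similarity at least as high as the candidate cluster?"""
    obs = mean_similarity(data[subset])
    hits = 0
    for _ in range(n_perm):
        idx = rng.choice(len(data), size=len(subset), replace=False)
        if mean_similarity(data[idx]) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one to avoid a p-value of zero

# 50 random vectors with a planted tight cluster in the first 6 rows:
data = rng.normal(size=(50, 10))
center = rng.normal(size=10)
data[:6] = center + 0.1 * rng.normal(size=(6, 10))

p_planted = cluster_pvalue(data, np.arange(6))
p_random = cluster_pvalue(data, np.arange(20, 26))
```

The analytical treatment in the paper replaces this sampling with a quenched-disorder calculation, which is what makes the approach usable for the very small p-values that arise in multiple-testing settings.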

    Principal Component Analysis with Noisy and/or Missing Data

    We present a method for performing Principal Component Analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data so that, compared to classic PCA, the resulting eigenvectors are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data is simply the limiting case of weight = 0. The underlying algorithm is a noise-weighted Expectation Maximization (EM) PCA, which has the additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution. We present applications of this method to simulated data and QSO spectra from the Sloan Digital Sky Survey. Comment: Accepted for publication in PASP; v2 with minor updates, mostly to the bibliography
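A single-component version of noise-weighted EM PCA is short enough to sketch: alternate between the best per-row coefficient and the best eigenvector, each a weighted least-squares solve. This is a sketch of the idea with inverse-variance weights, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def weighted_empca_1(X, W, n_iter=100):
    """One-component weighted EM PCA: find c (length n) and unit v (length d)
    minimizing sum_ij W_ij * (X_ij - c_i * v_j)**2.  Missing entries get W = 0."""
    n, d = X.shape
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        c = (W * X) @ v / np.maximum(W @ (v ** 2), 1e-12)        # E-step
        v = (W * X).T @ c / np.maximum(W.T @ (c ** 2), 1e-12)    # M-step
        v /= np.linalg.norm(v)
    return c, v

# Rank-1 signal with heteroskedastic noise and 10% missing entries:
n, d = 200, 30
v_true = rng.normal(size=d); v_true /= np.linalg.norm(v_true)
c_true = 3.0 * rng.normal(size=n)
sigma = rng.uniform(0.05, 0.5, size=(n, d))
X = np.outer(c_true, v_true) + sigma * rng.normal(size=(n, d))
W = 1.0 / sigma ** 2
missing = rng.random((n, d)) < 0.1
X[missing] = 0.0          # the value is irrelevant where the weight is zero
W[missing] = 0.0

c_hat, v_hat = weighted_empca_1(X, W)
align = abs(v_hat @ v_true)   # sign of the eigenvector is arbitrary
```

Because noisy entries are down-weighted rather than averaged in equally, the recovered eigenvector tracks the underlying signal instead of the heteroskedastic noise, and missing data needs no special casing at all.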

    Mesoscopic Model for Free Energy Landscape Analysis of DNA sequences

    A mesoscopic model that allows us to identify and quantify the strength of binding sites in DNA sequences is proposed. The model is based on the Peyrard-Bishop-Dauxois model of the DNA chain, coupled to a Brownian particle that explores the sequence and interacts more strongly with open base pairs of the chain. We apply the model to promoter sequences of different organisms. The free energy landscape obtained for these promoters shows a complex structure that is strongly connected to their biological behavior. The analysis method is able to quantify free energy differences between sites within genome sequences. Comment: 7 pages, 5 figures, 1 table
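The Peyrard-Bishop-Dauxois energy underlying the model is a Morse potential per base pair plus an anharmonic stacking term between neighbors. A minimal sketch with homogeneous, illustrative parameter values (real fits use different constants for AT and GC pairs):

```python
import math

# Illustrative PBD parameters (not the paper's fitted values):
D, a = 0.05, 4.2                 # Morse depth and inverse width
k, rho, alpha = 0.025, 2.0, 0.35 # stacking constants

def morse(y):
    """On-site pairing energy of a base pair stretched by y."""
    return D * (math.exp(-a * y) - 1.0) ** 2

def stacking(y1, y2):
    """Anharmonic stacking between neighboring base pairs."""
    return 0.5 * k * (1.0 + rho * math.exp(-alpha * (y1 + y2))) * (y1 - y2) ** 2

def pbd_energy(ys):
    """Total potential energy of a configuration of stretchings ys."""
    e = sum(morse(y) for y in ys)
    e += sum(stacking(ys[i], ys[i + 1]) for i in range(len(ys) - 1))
    return e

closed = [0.0] * 10                               # fully closed chain
bubble = [0.0] * 4 + [1.0, 1.2, 1.0] + [0.0] * 3  # local opening ("bubble")
```

A locally opened region costs energy relative to the closed chain; in the paper this landscape, explored together with the coupled Brownian particle, is what singles out binding sites in promoter sequences.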

    Coarse-grained dynamics of an activity bump in a neural field model

    We study a stochastic nonlocal PDE, arising in the context of modelling spatially distributed neural activity, which is capable of sustaining stationary and moving spatially localized "activity bumps". This system is known to undergo a pitchfork bifurcation in bump speed as a parameter (the strength of adaptation) is changed; increasing the noise intensity, moreover, effectively slows the motion of the bump. Here we revisit the system from the point of view of describing its high-dimensional stochastic dynamics in terms of the effective dynamics of a single scalar "coarse" variable. We show that such a reduced description, in the form of an effective Langevin equation characterized by a double-well potential, is quantitatively successful. The effective potential can be extracted using short, appropriately initialized bursts of direct simulation. We demonstrate this approach with (a) an experience-based "intelligent" choice of the coarse observable and (b) an observable obtained by data-mining direct simulation results using a diffusion map approach. Comment: Corrected acknowledgements
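The burst-based extraction of an effective potential can be sketched on a toy problem: estimate the drift of a scalar SDE from many short, appropriately initialized bursts, then integrate minus the drift. Here a known double-well SDE stands in for the full neural field simulator (an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "fine-scale simulator": dx = (x - x**3) dt + sigma dW, whose true
# effective potential V(x) = -x**2/2 + x**4/4 is a double well.
sigma, dt, burst_steps, n_bursts = 0.3, 1e-3, 10, 400

def burst_drift(x0):
    """Mean displacement rate over many short bursts started at x0."""
    x = np.full(n_bursts, float(x0))
    for _ in range(burst_steps):
        x += (x - x ** 3) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_bursts)
    return float(np.mean((x - x0) / (burst_steps * dt)))

# Estimate the drift on a grid of coarse-variable values, then integrate
# -drift (trapezoid rule) to obtain the effective potential.
grid = np.linspace(-1.5, 1.5, 31)
drift = np.array([burst_drift(x) for x in grid])
V = -np.concatenate(([0.0], np.cumsum(0.5 * (drift[1:] + drift[:-1]) * np.diff(grid))))
V -= V.min()
```

No long trajectory is ever run: each grid point only needs bursts long enough for the fast degrees of freedom to heal, which is exactly the economy the coarse-graining approach buys in the full model.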

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons (neuronal assemblies) is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction remain to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network, and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
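The Kuramoto model that inspires the network can be sketched in a few lines: oscillators with random natural frequencies pull toward the mean phase, and the order parameter r measures how synchronized (assembly-like) the population is. Parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(K, n=100, steps=2000, dt=0.01):
    """Kuramoto network: d(theta_i)/dt = omega_i + (K/n) sum_j sin(theta_j - theta_i),
    integrated with Euler steps via the mean-field form K*r*sin(psi - theta_i).
    Returns the final order parameter r = |mean(exp(i*theta))|."""
    omega = rng.normal(0.0, 0.5, size=n)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, size=n)   # random initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))           # complex order parameter r*exp(i*psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

r_weak = simulate(K=0.1)    # below the synchronization transition
r_strong = simulate(K=4.0)  # well above it: oscillators phase-lock
```

Sweeping K reproduces the synchronization transition; in the paper's framework the analogous tuning controls how readily assemblies form and how strongly stimuli can modulate them.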