
    Compact representation of wall-bounded turbulence using compressive sampling

    Compressive sampling is a well-known tool for resolving the energetic content of signals that admit a sparse representation. The broadband temporal spectrum acquired from point measurements in wall-bounded turbulence has precluded the prior use of compressive sampling in this kind of flow. It is shown here, however, that the frequency content of flow fields that have been Fourier transformed in the homogeneous (wall-parallel) spatial directions is approximately sparse, giving rise to a compact representation of the velocity field. As such, compressive sampling is an ideal tool for reducing the amount of information required to approximate the velocity field. Further, the success of the compressive sampling approach provides strong evidence that this representation is both physically meaningful and indicative of special properties of wall turbulence. Another advantage of compressive sampling over periodic sampling becomes evident at high Reynolds numbers: the number of samples required to resolve a given bandwidth scales as the logarithm of the dynamically significant bandwidth for compressive sampling, rather than linearly as for periodic sampling. The combination of the Fourier decomposition in the wall-parallel directions, the approximate sparsity in frequency, and empirical bounds on the convection velocity leads to a compact representation of an otherwise broadband distribution of energy in the space defined by streamwise and spanwise wavenumber, frequency, and wall-normal location. The data storage requirements for reconstruction of the full field using compressive sampling are shown to be significantly less than for periodic sampling, in which the Nyquist criterion limits the maximum frequency that can be resolved. Conversely, compressive sampling maximizes the frequency range that can be recovered if the number of samples is limited, resolving frequencies up to several times higher than the mean sampling rate. It is proposed that the approximate sparsity in frequency and the corresponding structure in the spatial domain can be exploited to design simulation schemes for canonical wall turbulence with significantly reduced computational expense compared with current techniques.
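The sparse-recovery idea underlying this abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' code: the signal size, sample count, regularization weight, and the plain iterative soft-thresholding solver are all illustrative choices. A signal that is sparse in frequency is recovered from a small set of randomly timed samples by ℓ¹-regularized least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-N signal that is sparse in frequency: K active Fourier modes.
N, K, M = 256, 5, 60                      # M << N random time samples
support = rng.choice(N, K, replace=False)
x_true = np.zeros(N, dtype=complex)
x_true[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# Unitary inverse-DFT synthesis matrix: time samples = F @ (freq. coefficients).
F = np.fft.ifft(np.eye(N), axis=0) * np.sqrt(N)
rows = rng.choice(N, M, replace=False)    # random (non-periodic) sample times
A = F[rows]
y = A @ x_true                            # the compressive measurements

# Iterative soft-thresholding for  min_x 0.5*||y - A x||^2 + lam*||x||_1.
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size <= 1 / Lipschitz constant
x = np.zeros(N, dtype=complex)
for _ in range(1000):
    g = x + step * (A.conj().T @ (y - A @ x))                       # gradient step
    mag = np.abs(g)
    x = g * np.maximum(1 - step * lam / np.maximum(mag, 1e-12), 0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the sparse spectrum is recovered from M << N samples
```

With M = 60 samples of a length-256 signal, the five active modes are recovered accurately, whereas periodic sampling at the same mean rate would alias frequencies above its Nyquist limit.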

    Ethnic In-Group Favoritism Among Minority and Majority Groups: Testing the Self-Esteem Hypothesis Among Preadolescents

    The self-esteem hypothesis in intergroup relations, as proposed by social identity theory (SIT), states that successful intergroup discrimination enhances momentary collective self-esteem. This hypothesis is a source of continuing controversy. Furthermore, although SIT is increasingly used to account for children’s group attitudes, few studies have examined the hypothesis among children. In addition, the hypothesis’s generality makes it important to study among children from different ethnic groups. The present study, conducted among Dutch and Turkish preadolescents, examined momentary collective self-feelings as a consequence of ethnic group evaluations. The results tended to support the self-esteem hypothesis. In-group favoritism was found to have a self-enhancing effect among participants high in ethnic identification. This result was found for ethnic majority (Dutch) and minority (Turkish) participants.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problems.
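As a concrete instance of forward-backward proximal splitting applied to a low-rank prior, consider nuclear-norm-regularized matrix completion. This is a minimal sketch, not code from the chapter; the matrix size, rank, sampling rate, and regularization weight are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Recover a low-rank matrix from half of its entries by forward-backward
# splitting on  min_X 0.5*||P(X - X0)||_F^2 + lam*||X||_*  (nuclear norm),
# where P keeps only the observed entries.
n, r = 40, 3
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-3 target
mask = rng.random((n, n)) < 0.5                                 # observed entries

lam, step = 0.5, 1.0       # the data term has gradient Lipschitz constant 1
X = np.zeros((n, n))
for _ in range(300):
    G = X - step * np.where(mask, X - X0, 0.0)       # forward (gradient) step
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    X = (U * np.maximum(s - step * lam, 0.0)) @ Vt   # backward (prox) step:
                                                     # singular-value shrinkage

print(np.linalg.norm(X - X0) / np.linalg.norm(X0))   # small relative error
```

The backward step is the proximal operator of the nuclear norm, i.e. soft-thresholding of the singular values, which is exactly where the low-rank model identification discussed in the chapter becomes visible: after convergence, the small singular values are set to zero.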

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications.
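A standard convex relaxation of the general group sparse model is the group ℓ₁/ℓ₂ norm, whose proximal operator is block soft-thresholding: each group of coefficients is either zeroed out or shrunk as a unit. A minimal sketch for disjoint groups (the vector and groups are made-up examples, not from the chapter):

```python
import numpy as np

def prox_group_l2(v, groups, t):
    """Prox of t * sum_g ||v_g||_2 over disjoint groups: block soft-thresholding.
    A group whose norm is below t is removed whole; otherwise it is shrunk."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        nrm = np.linalg.norm(v[g])
        if nrm > t:
            out[g] = (1.0 - t / nrm) * v[g]
    return out

v = np.array([3.0, 4.0, 0.1, -0.1, 0.0, 2.0])
groups = [[0, 1], [2, 3], [4, 5]]
p = prox_group_l2(v, groups, 1.0)
print(p)  # [2.4, 3.2, 0.0, 0.0, 0.0, 1.0]
```

Note the contrast with plain (unstructured) soft-thresholding, which would keep or kill each coefficient independently: here the weak group [2, 3] is eliminated as a block, which is precisely the interdependency between nonzero components that structured sparsity encodes.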

    Precision Tests of the Standard Model

    30 pages, 11 figures, 11 tables. Contribution presented at the 25th Winter Meeting on Fundamental Physics, held 3-8 March 1997 in Formigal (Spain). Precision measurements of electroweak observables provide stringent tests of the Standard Model structure and an accurate determination of its parameters. An overview of the present experimental status is presented. This work has been supported in part by CICYT (Spain) under grant No. AEN-96-1718. Peer reviewed.

    Balance in single-limb stance after surgically treated ankle fractures: a 14-month follow-up

    BACKGROUND: The maintenance of postural control is fundamental for different types of physical activity. This can be measured by having subjects stand on one leg on a force plate. Many studies assessing standing balance have previously been carried out in patients with ankle ligament injuries but not in patients with ankle fractures. The aim of this study was to evaluate whether patients operated on because of an ankle fracture had impaired postural control compared to an uninjured age- and gender-matched control group. METHODS: Fifty-four individuals (patients) operated on because of an ankle fracture were examined 14 months postoperatively. Muscle strength, ankle mobility, and single-limb stance on a force-platform were measured. Average speed of centre of pressure movements and number of movements exceeding 10 mm from the mean value of centre of pressure were registered in the frontal and sagittal planes on a force-platform. Fifty-four age- and gender-matched uninjured individuals (controls) were examined in the single-limb stance test only. The paired Student t-test was used for comparisons between patients' injured and uninjured legs and between side-matched legs within the controls. The independent Student t-test was used for comparisons between patients and controls. The Chi-square test, and when applicable, Fisher's exact test were used for comparisons between groups. Multiple logistic regression was performed to identify factors associated with belonging to the group unable to complete the single-limb stance test on the force-platform. RESULTS: Fourteen of the 54 patients (26%) did not manage to complete the single-limb stance test on the force-platform, whereas all controls managed this (p < 0.001). Age over 45 years was the only factor significantly associated with not managing the test. When not adjusted for age, decreased strength in the ankle plantar flexors and dorsiflexors was significantly associated with not managing the test. 
In the 40 patients who managed to complete the single-limb stance test, no differences were found between the results of the patients' injured leg and the side-matched leg of the controls regarding average speed and the number of centre of pressure movements. CONCLUSION: One in four patients operated on because of an ankle fracture had impaired postural control compared with an age- and gender-matched control group. Age over 45 years and decreased strength in the ankle plantar flexors and dorsiflexors were found to be associated with decreased balance performance. Further longitudinal studies are required to evaluate whether muscle and balance training in the rehabilitation phase may improve postural control.

    Minimizing Acquisition Maximizing Inference -- A demonstration on print error detection

    Is it possible to detect a feature in an image without ever looking at it? Images are known to have sparser representations in wavelets and other similar transforms. Compressed sensing is a technique that proposes simultaneous acquisition and compression of any signal by taking very few random linear measurements (M). The quality of reconstruction relates directly to M, which should be above a certain threshold for reliable recovery. Since these measurements can non-adaptively reconstruct the signal to a faithful extent using purely analytical methods such as basis pursuit, matching pursuit, iterative thresholding, etc., we can be assured that these compressed samples contain enough information about any relevant macro-level feature contained in the (image) signal. Thus, if we deliberately acquire an even lower number of measurements - low enough to thwart the possibility of a comprehensible reconstruction, but high enough to infer whether a relevant feature exists in the image - we can achieve accurate image classification while preserving its privacy. Through the print error detection problem, it is demonstrated that such a novel system can be implemented in practice.
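The inference-without-reconstruction idea can be sketched as follows. This is not the system from the paper: the templates, the nearest-template classifier, and all sizes are hypothetical stand-ins, chosen only to show that a handful of random projections can separate a "clean" print from a "defective" one without the image ever being reconstructed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classify directly from M random linear measurements of an N-pixel signal,
# with M far too small for any faithful reconstruction.
N, M = 1024, 12
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix

# Two hypothetical 1-D "image" templates: a clean print and one with a
# localized print defect.
clean = np.zeros(N)
clean[100:200] = 1.0
defect = clean.copy()
defect[500:520] = 5.0

def classify(y):
    # Nearest template in measurement space; random projections approximately
    # preserve distances (Johnson-Lindenstrauss), so no reconstruction is needed.
    d_clean = np.linalg.norm(y - Phi @ clean)
    d_defect = np.linalg.norm(y - Phi @ defect)
    return "defect" if d_defect < d_clean else "clean"

sample = defect + 0.05 * rng.standard_normal(N)  # noisy defective print
print(classify(Phi @ sample))                    # "defect", from only 12 numbers
```

Because the classifier only ever sees the 12 measurements, the acquired data are insufficient to reconstruct the underlying image, which is the privacy argument made in the abstract.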