
    Experimental entanglement-assisted quantum delayed-choice experiment

    The puzzling properties of quantum mechanics, wave-particle duality, entanglement and superposition, have been dissected experimentally over the past decades. However, hidden-variable (HV) models, based on the three classical assumptions of wave-particle objectivity, determinism and independence, strive to explain or even defeat them. The development of quantum technologies has enabled us to experimentally test the predictions of quantum mechanics and HV theories. Here, we report an experimental demonstration of an entanglement-assisted quantum delayed-choice scheme using a liquid-state nuclear magnetic resonance quantum information processor. The scheme we realized is based on a recently proposed protocol [Nat. Commun. 5:4997 (2014)] that gives different predictions for quantum mechanics and HV theories. In our experiments, the intensities and visibilities of the interference are consistent with the theoretical predictions of quantum mechanics. The results imply that a contradiction arises when all three assumptions of HV models are combined, although any two of them are compatible with quantum mechanics.
    Comment: 8 pages, 1 table and 6 figures
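    The quantum-mechanical prediction behind such delayed-choice experiments can be reproduced numerically. Below is a minimal NumPy sketch of the standard controlled-beamsplitter version of the scheme (an ancilla qubit in cos(α)|0⟩ + sin(α)|1⟩ decides whether the second beamsplitter is applied); it is an illustrative toy model, not the NMR implementation reported in the paper, and all names are ours:

```python
import numpy as np

def delayed_choice_prob(phi, alpha):
    """P(photon exits at detector D0) for interferometer phase phi,
    with the second beamsplitter controlled by an ancilla prepared
    in cos(alpha)|0> + sin(alpha)|1>."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # 50/50 beamsplitter
    I = np.eye(2)
    # state ordering: path qubit (first) tensor ancilla qubit (second)
    psi = np.kron(np.array([1.0, 0.0]),
                  np.array([np.cos(alpha), np.sin(alpha)]))
    psi = np.kron(H, I) @ psi                        # first beamsplitter
    P = np.diag([1.0, np.exp(1j * phi)])             # phase between the two arms
    psi = np.kron(P, I) @ psi
    # second beamsplitter applied to the path qubit only when ancilla = |1>;
    # in the |path, ancilla> basis, the ancilla=1 amplitudes sit at indices 1 and 3
    CH = np.eye(4, dtype=complex)
    CH[np.ix_([1, 3], [1, 3])] = H
    psi = CH @ psi
    # probability of finding the path qubit in |0>, ancilla traced out
    return abs(psi[0]) ** 2 + abs(psi[1]) ** 2

alpha = np.pi / 3
phis = np.linspace(0, 2 * np.pi, 201)
probs = np.array([delayed_choice_prob(p, alpha) for p in phis])
visibility = (probs.max() - probs.min()) / (probs.max() + probs.min())
```

    Sweeping the phase and measuring the interference visibility gives V = sin²(α), morphing continuously between particle-like (α = 0, flat fringes) and wave-like (α = π/2, full-visibility fringes) behavior; it is this kind of prediction that the HV assumptions cannot jointly reproduce.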

    FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation

    Facial expression analysis based on machine learning requires a large amount of well-annotated data to reflect different changes in facial motion. Publicly available datasets help accelerate research in this area by providing benchmark resources, but all of these datasets, to the best of our knowledge, are limited to rough annotations of action units, recording only their absence, presence, or a five-level intensity according to the Facial Action Coding System (FACS). To meet the need for videos labeled in greater detail, we present a well-annotated dataset named FEAFA for Facial Expression Analysis and 3D Facial Animation. One hundred and twenty-two participants, including children, young adults and elderly people, were recorded in real-world conditions. In addition, 99,356 frames were manually labeled using an Expression Quantitative Tool we developed to quantify 9 symmetrical FACS action units, 10 asymmetrical (unilateral) FACS action units, 2 symmetrical FACS action descriptors and 2 asymmetrical FACS action descriptors; each action unit or action descriptor is annotated with a floating-point number between 0 and 1. To provide a baseline for future research, a benchmark for the regression of action unit values based on convolutional neural networks is presented. We also demonstrate the potential of the FEAFA dataset for 3D facial animation: almost all state-of-the-art facial animation algorithms rely on 3D face reconstruction, whereas we propose a novel method that drives virtual characters based only on action unit values regressed from the 2D video frames of source actors.
    Comment: 9 pages, 7 figures
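    The regression setup the abstract describes, continuous action-unit values in [0, 1], is typically obtained by putting a sigmoid-squashed linear head on top of a CNN feature extractor and training with a mean-squared-error loss. The sketch below illustrates only that output stage (the convolutional backbone is omitted, and the layer sizes and random data are our own illustrative choices, not FEAFA's benchmark architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for the CNN backbone: we assume pre-extracted image
# features and attach a single linear regression head on top.
n_features = 128
n_targets = 23          # 9 + 10 + 2 + 2 AUs/ADs listed in the abstract
W = rng.normal(0.0, 0.01, (n_targets, n_features))
b = np.zeros(n_targets)

def predict_aus(features):
    """Map a feature vector to action-unit intensities, each squashed
    into [0, 1] to match FEAFA's floating-point labels."""
    return sigmoid(W @ features + b)

def mse_loss(pred, target):
    """Mean-squared-error regression loss against the [0, 1] labels."""
    return float(np.mean((pred - target) ** 2))

x = rng.normal(size=n_features)            # fake feature vector
y = rng.uniform(0.0, 1.0, n_targets)       # fake label vector in [0, 1]
pred = predict_aus(x)
```

    The sigmoid guarantees every predicted intensity lies in the same [0, 1] range as the annotations, so predictions and labels are directly comparable under the MSE loss.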

    GeoSay: A Geometric Saliency for Extracting Buildings in Remote Sensing Images

    Automatic extraction of buildings in remote sensing images is an important but challenging task with many applications in fields such as urban planning and navigation. This paper addresses the problem of building extraction in very high-spatial-resolution (VHSR) remote sensing (RS) images, whose spatial resolution is often up to half a meter and which provide rich information about buildings. Based on the observation that buildings in VHSR-RS images are always more distinguishable in geometry than in the texture or spectral domain, this paper proposes a geometric building index (GBI) for accurate building extraction, computed from the geometric saliency of VHSR-RS images. More precisely, given an image, the geometric saliency is derived from a mid-level geometric representation based on meaningful junctions that locally describe the geometrical structures of the image. The resulting GBI is finally measured by integrating the derived geometric saliency of buildings. Experiments on three public and commonly used datasets demonstrate that the proposed GBI achieves state-of-the-art performance and shows impressive generalization capability. Additionally, GBI preserves both the exact position and the accurate shape of single buildings compared to existing methods.
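    The junction-to-saliency step can be illustrated with a deliberately simplified sketch: each detected junction contributes a weighted local bump to a saliency map, which is then normalized and thresholded into building proposals. This is only a toy approximation of the idea, the actual junction detection and the paper's saliency and integration formulas are not reproduced here, and the junction coordinates below are made up:

```python
import numpy as np

def geometric_saliency(shape, junctions, sigma=3.0):
    """Toy saliency map: each junction (x, y, strength) contributes a
    Gaussian bump of scale sigma. Junction detection itself is assumed
    to have been done upstream and is not implemented here."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape)
    for x, y, s in junctions:
        sal += s * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()               # normalize to [0, 1]
    return sal

# Two hypothetical junctions on a 32x32 image tile
sal = geometric_saliency((32, 32), [(8, 8, 1.0), (24, 20, 0.5)])
mask = sal > 0.5                       # crude building proposal by thresholding
```

    Even in this crude form, the map peaks where corner-like structures cluster, which is the intuition behind favoring geometry over texture or spectral cues for buildings.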