
    Aeroelastic modeling for the FIT team F/A-18 simulation

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

    Integrated control/structure optimization by multilevel decomposition

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

    Aeroelastic modeling for the FIT (Functional Integration Technology) team F/A-18 simulation

    As part of Langley Research Center's commitment to developing multidisciplinary integration methods to improve aerospace systems, the Functional Integration Technology (FIT) team was established to perform dynamics integration research using an existing aircraft configuration, the F/A-18. An essential part of this effort has been the development of a comprehensive simulation modeling capability that includes structural, control, and propulsion dynamics as well as steady and unsteady aerodynamics. The structural and unsteady aerodynamics contributions come from an aeroelastic model. Some details of the aeroelastic modeling done for the Functional Integration Technology (FIT) team research are presented. Particular attention is given to work done in the area of correction factors to unsteady aerodynamics data.

    On the relationship between matched filter theory as applied to gust loads and phased design loads analysis

    A theoretical basis and example calculations are given that demonstrate the relationship between the Matched Filter Theory approach to the calculation of time-correlated gust loads and the Phased Design Loads Analysis in common use in the aerospace industry. The relationship depends upon the duality between Matched Filter Theory and Random Process Theory and upon the fact that Random Process Theory is used in Phased Design Loads Analysis in determining an equiprobable loads design ellipse. Extensive background information describing the relevant points of Phased Design Loads Analysis, calculating time-correlated gust loads with Matched Filter Theory, and the duality between Matched Filter Theory and Random Process Theory is given. It is then shown that the time histories of two time-correlated gust load responses, determined using the Matched Filter Theory approach, can be plotted as parametric functions of time and that the resulting plot, when superposed upon the design ellipse corresponding to the two loads, is tangent to the ellipse. The question is raised whether it is possible for a parametric load plot to extend outside the associated design ellipse. If it is possible, then the use of the equiprobable loads design ellipse will not be a conservative design practice in some circumstances.
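    The equiprobable loads design ellipse discussed above has a compact standard form. As a hedged sketch (the notation is illustrative, not taken from this paper): for two zero-mean, jointly Gaussian load responses $y_1$ and $y_2$ with design values $\bar{y}_1$, $\bar{y}_2$ and correlation coefficient $\rho_{12}$, the usual Random Process Theory ellipse is

```latex
% Equiprobable design ellipse for two correlated Gaussian load responses
% (standard Random Process Theory form; symbols are assumed, not the paper's)
\left(\frac{y_1}{\bar{y}_1}\right)^2
  - 2\,\rho_{12}\left(\frac{y_1}{\bar{y}_1}\right)\left(\frac{y_2}{\bar{y}_2}\right)
  + \left(\frac{y_2}{\bar{y}_2}\right)^2
  = 1 - \rho_{12}^2
```

    The tangency result described in the abstract then says that the parametric curve $(y_1(t), y_2(t))$ obtained from the Matched Filter Theory excitation touches, but under conservative conditions does not cross, this curve.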

    Learning the Roots of Visual Domain Shift

    In this paper we focus on the spatial nature of visual domain shift, attempting to learn where domain adaptation originates in each given image of the source and target set. We borrow concepts and techniques from the CNN visualization literature, and learn domainness maps able to localize the degree of domain specificity in images. We derive from these maps features related to different domainness levels, and we show that by considering them as a preprocessing step for a domain adaptation algorithm, the final classification performance is strongly improved. Combined with the whole image representation, these features provide state-of-the-art results on the Office dataset. Comment: Extended Abstract.

    Right for the Right Reason: Training Agnostic Networks

    We consider the problem of a neural network being requested to classify images (or other inputs) without making implicit use of a "protected concept", that is, a concept that should not play any role in the decision of the network. Typically these concepts include information such as gender or race, or other contextual information such as image backgrounds that might be implicitly reflected in unknown correlations with other variables, making it insufficient to simply remove them from the input features. In other words, making accurate predictions is not good enough if those predictions rely on information that should not be used: predictive performance is not the only important metric for learning systems. We apply a method developed in the context of domain adaptation to address this problem of "being right for the right reason", where we request a classifier to make a decision in a way that is entirely 'agnostic' to a given protected concept (e.g. gender, race, background etc.), even if this could be implicitly reflected in other attributes via unknown correlations. After defining the concept of an 'agnostic model', we demonstrate how the Domain-Adversarial Neural Network can remove unwanted information from a model using a gradient reversal layer. Comment: Author's original version.
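    The gradient reversal layer mentioned in the abstract has a very small core idea: act as the identity in the forward pass, but flip (and scale) gradients in the backward pass, so the feature extractor is trained to *maximize* the protected-concept classifier's loss. A minimal sketch in a toy scalar setting, with illustrative names that are not the paper's actual implementation:

```python
# Minimal sketch of a gradient reversal layer (the core of DANN).
# Scalar toy setting; class and parameter names are assumptions.

class GradientReversal:
    """Identity forward; backward multiplies incoming gradients by
    -lambda, reversing the training signal from the downstream
    protected-concept classifier."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient


grl = GradientReversal(lam=0.5)
print(grl.forward(3.0))   # 3.0 (identity)
print(grl.backward(2.0))  # -1.0 (reversed and scaled)
```

    In a real framework this would be implemented as a custom autograd operation placed between the feature extractor and the protected-concept head.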

    Midline Shift is Unrelated to Subjective Pupillary Reactivity Assessment on Admission in Moderate and Severe Traumatic Brain Injury.

    BACKGROUND: This study aims to determine the relationship between pupillary reactivity, midline shift and basal cistern effacement on brain computed tomography (CT) in moderate-to-severe traumatic brain injury (TBI). All are important diagnostic and prognostic measures, but their relationship is unclear. METHODS: A total of 204 patients with moderate-to-severe TBI, documented pupillary reactivity, and archived neuroimaging were included. Extent of midline shift and basal cistern effacement were extracted from admission brain CT. Mean midline shift was calculated for each ordinal category of pupillary reactivity and basal cistern effacement. Sequential Chi-square analysis was used to calculate a threshold midline shift for pupillary abnormalities and basal cistern effacement. Univariable and multiple logistic regression analyses were performed. RESULTS: Pupils were bilaterally reactive in 163 patients, unilaterally reactive in 24, and bilaterally unreactive in 17, with mean midline shift (mm) of 1.96, 3.75, and 2.56, respectively (p = 0.14). Basal cisterns were normal in 118 patients, compressed in 45, and absent in 41, with mean midline shift (mm) of 0.64, 2.97, and 5.93, respectively (p < 0.001). Sequential Chi-square analysis identified a threshold for abnormal pupils at a midline shift of 7-7.25 mm (p = 0.032), compressed basal cisterns at 2 mm (p < 0.001), and completely effaced basal cisterns at 7.5 mm (p < 0.001). Logistic regression revealed no association between midline shift and pupillary reactivity. With effaced basal cisterns, the odds ratio for normal pupils was 0.22 (95% CI 0.08-0.56; p = 0.0016) and for at least one unreactive pupil was 0.061 (95% CI 0.012-0.24; p < 0.001). Basal cistern effacement strongly predicted midline shift (OR 1.27; 95% CI 1.17-1.40; p < 0.001). CONCLUSIONS: Basal cistern effacement alone is associated with pupillary reactivity and is closely associated with midline shift. It may represent a uniquely useful neuroimaging marker to guide intervention in traumatic brain injury.
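    The sequential chi-square threshold search used in the methods above can be sketched as a scan over candidate cutoffs, dichotomizing midline shift at each cutoff and testing the resulting 2x2 table. This is a hypothetical re-creation with synthetic data, not the study's code:

```python
# Sketch of a sequential chi-square threshold scan (synthetic data,
# illustrative function names; not the study's actual analysis code).

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    margins = (a + b, c + d, a + c, b + d)
    if 0 in margins:
        return 0.0  # degenerate table carries no association
    row1, row2, col1, col2 = margins
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n
        stat += (obs - exp) ** 2 / exp
    return stat

def threshold_scan(shifts_mm, abnormal, cutoffs):
    """Return (cutoff, chi2) for dichotomizing midline shift at each cutoff."""
    results = []
    for cut in cutoffs:
        a = sum(1 for s, ab in zip(shifts_mm, abnormal) if s >= cut and ab)
        b = sum(1 for s, ab in zip(shifts_mm, abnormal) if s >= cut and not ab)
        c = sum(1 for s, ab in zip(shifts_mm, abnormal) if s < cut and ab)
        d = sum(1 for s, ab in zip(shifts_mm, abnormal) if s < cut and not ab)
        results.append((cut, chi_square_2x2(a, b, c, d)))
    return results

# Synthetic example: abnormality concentrated above ~7 mm of shift.
shifts = [1, 2, 3, 6, 7, 8, 9, 10]
abn    = [False, False, False, False, True, True, True, True]
for cut, chi2 in threshold_scan(shifts, abn, [2.0, 5.0, 7.0]):
    print(cut, round(chi2, 2))
```

    The cutoff maximizing (or first reaching significance in) the chi-square statistic is then reported as the threshold; a real analysis would add p-values and continuity corrections.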

    Single muscle fiber proteomics reveals unexpected mitochondrial specialization

    Mammalian skeletal muscles are composed of multinucleated cells termed slow or fast fibers according to their contractile and metabolic properties. Here, we developed a high-sensitivity workflow to characterize the proteome of single fibers. Analysis of segments of the same fiber by traditional and unbiased proteomics methods yielded the same subtype assignment. We discovered novel subtype-specific features, most prominently mitochondrial specialization of fiber types in substrate utilization. The fiber type-resolved proteomes can be applied to a variety of physiological and pathological conditions and illustrate the utility of single cell type analysis for dissecting proteomic heterogeneity.

    'Part'ly first among equals: Semantic part-based benchmarking for state-of-the-art object recognition systems

    An examination of object recognition challenge leaderboards (ILSVRC, PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small differences amongst themselves in terms of error rate/mAP. To better differentiate the top performers, additional criteria are required. Moreover, the (test) images, on which the performance scores are based, predominantly contain fully visible objects. Therefore, 'harder' test images, mimicking the challenging conditions (e.g. occlusion) in which humans routinely recognize objects, need to be utilized for benchmarking. To address the concerns mentioned above, we make two contributions. First, we systematically vary the level of local object-part content, global detail and spatial context in images from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12. Second, we propose an object-part based benchmarking procedure which quantifies classifiers' robustness to a range of visibility and contextual settings. The benchmarking procedure relies on a semantic similarity measure that naturally addresses potential semantic granularity differences between the category labels in training and test datasets, thus eliminating manual mapping. We use our procedure on the PPSS-12 dataset to benchmark top-performing classifiers trained on the ILSVRC-2012 dataset. Our results show that the proposed benchmarking procedure enables additional differentiation among state-of-the-art object classifiers in terms of their ability to handle missing content and insufficient object detail. Given this capability for additional differentiation, our approach can potentially supplement existing benchmarking procedures used in object recognition challenge leaderboards. Comment: Extended version of our ACCV-2016 paper. Author formatting modified.

    Part Detector Discovery in Deep Convolutional Neural Networks

    Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach, called "part detector discovery" (PDD), is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB200-2011 dataset, but, in contrast to previous approaches, also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at http://www.inf-cv.uni-jena.de/part_discovery and https://github.com/cvjena/PartDetectorDisovery. Comment: Accepted for publication at the Asian Conference on Computer Vision (ACCV) 201
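    The "activation center" idea in the abstract can be illustrated with a minimal sketch: given a 2D gradient map, take the intensity-weighted centroid as the candidate part location. The map and helper below are assumptions for illustration, not the paper's code:

```python
# Illustrative sketch of locating an activation center in a gradient
# map, as in the PDD idea: the |gradient|-weighted centroid of a 2D map.
# Function name and toy data are assumptions, not the paper's code.

def activation_center(grad_map):
    """Return (row, col) centroid of absolute-gradient mass in a 2D list."""
    total = weighted_r = weighted_c = 0.0
    for r, row in enumerate(grad_map):
        for c, value in enumerate(row):
            w = abs(value)
            total += w
            weighted_r += w * r
            weighted_c += w * c
    return (weighted_r / total, weighted_c / total)

# A single strong response in the middle of a 3x3 map.
gmap = [[0.0, 0.0, 0.0],
        [0.0, 4.0, 0.0],
        [0.0, 0.0, 0.0]]
print(activation_center(gmap))  # (1.0, 1.0)
```

    In the actual method the maps come from backpropagated network outputs, and the discovered centers are matched against annotated semantic parts; this sketch only shows the localization step in isolation.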