506 research outputs found

    Relationships of peripheral IGF-1, VEGF and BDNF levels to exercise-related changes in memory, hippocampal perfusion and volumes in older adults

    Animal models point towards a key role of brain-derived neurotrophic factor (BDNF), insulin-like growth factor-I (IGF-I) and vascular endothelial growth factor (VEGF) in mediating exercise-induced structural and functional changes in the hippocampus. Recently, platelet-derived growth factor-C (PDGF-C) has also been shown to promote blood vessel growth and neuronal survival. Moreover, reductions of these neurotrophic and angiogenic factors in old age have been related to hippocampal atrophy, decreased vascularization and cognitive decline. In a 3-month aerobic exercise study, forty healthy older adults (60 to 77 years) were pseudo-randomly assigned to either an aerobic exercise group (indoor treadmill, n=21) or a control group (indoor progressive muscle relaxation/stretching, n=19). As reported recently, we found evidence for fitness-related perfusion changes in the aged human hippocampus that were closely linked to changes in episodic memory function. Here, we test whether peripheral levels of BDNF, IGF-I, VEGF or PDGF-C are related to changes in hippocampal blood flow, volume and memory performance. Growth factor levels were not significantly affected by exercise, and their changes were not related to changes in fitness or perfusion. However, changes in IGF-I levels were positively correlated with changes in hippocampal volume (derived by manual volumetry and voxel-based morphometry) and late verbal recall performance, a relationship that appeared to be independent of fitness, perfusion and their changes over time. These preliminary findings link IGF-I levels to hippocampal volume changes and putatively hippocampus-dependent memory changes that seem to occur over time independently of exercise. We discuss methodological shortcomings of our study and potential differences in the temporal dynamics of how IGF-I, VEGF and BDNF may be affected by exercise, and to what extent these differences may have led to the negative findings reported here.
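    A minimal sketch of the kind of analysis described above: correlating per-participant IGF-I change scores with hippocampal volume change while partialling out fitness change. The data and column names are hypothetical placeholders, not the study's actual variables or pipeline.

```python
# Hedged sketch: partial correlation of IGF-I change with hippocampal volume
# change, controlling for fitness change. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

def partial_corr(x, y, covar):
    """Correlate the residuals of x and y after regressing each on the covariate."""
    X = np.column_stack([np.ones_like(covar), covar])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Hypothetical per-participant change scores (post minus pre) for 40 participants
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "igf1_change": rng.standard_normal(40),
    "hippocampal_volume_change": rng.standard_normal(40),
    "fitness_change": rng.standard_normal(40),
})

r, p = partial_corr(df["igf1_change"].values,
                    df["hippocampal_volume_change"].values,
                    df["fitness_change"].values)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```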

    Software defect prediction: do different classifiers find the same defects?

    During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix, and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
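    A hedged sketch of the comparison the abstract describes: training several classifiers on the same defect dataset and checking which defective modules each one finds. Synthetic data stands in for the NASA, open source and commercial datasets, and a decision tree stands in for RPart; the study's actual pipeline is not reproduced here.

```python
# Sketch: do different classifiers flag the same defective modules?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier   # stand-in for RPart
from sklearn.svm import SVC

# Imbalanced synthetic data: ~15% of modules are defective
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85, 0.15],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "Tree(RPart-like)": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}

# Indices of true defects that each classifier correctly predicts
defect_idx = np.where(y_te == 1)[0]
found = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    found[name] = set(defect_idx[pred[defect_idx] == 1])

# Overlap: defects found by every classifier vs. found by only one of them
common = set.intersection(*found.values())
print("defects found by every classifier:", len(common))
for name, s in found.items():
    others = set.union(*(v for k, v in found.items() if k != name))
    print(f"{name}: found {len(s)}, uniquely found {len(s - others)}")
```

    Inspecting the uniquely found sets, rather than only aggregate recall, is what motivates ensembles whose decision strategy is not simple majority voting.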

    Modeling irregular time series with continuous recurrent units

    Recurrent neural networks (RNNs) are a popular choice for modeling sequential data. Modern RNN architectures assume constant time intervals between observations. However, in many datasets (e.g. medical records) observation times are irregular and can carry important information. To address this challenge, we propose continuous recurrent units (CRUs), a neural architecture that can naturally handle irregular intervals between observations. The CRU assumes a hidden state which evolves according to a linear stochastic differential equation and is integrated into an encoder-decoder framework. The recursive computations of the CRU can be derived using the continuous-discrete Kalman filter and are in closed form. The resulting recurrent architecture has temporal continuity between hidden states and a gating mechanism that can optimally integrate noisy observations. We derive an efficient parameterization scheme for the CRU that leads to a fast implementation, f-CRU. We empirically study the CRU on a number of challenging datasets and find that it can interpolate irregular time series better than methods based on neural ordinary differential equations.
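    The core mechanism the abstract refers to can be sketched as a continuous-discrete Kalman filter: a latent state follows a linear SDE, its mean and covariance are propagated in closed form over each irregular gap, and a standard Kalman update integrates the next noisy observation. The matrices below are illustrative hand-picked choices, not the paper's learned parameters.

```python
# Sketch of the continuous-discrete Kalman filter idea underlying the CRU.
# A, H, Q, R are illustrative; the CRU learns its parameters end-to-end.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # drift of the latent linear SDE
Q = 0.1 * np.eye(2)                        # process noise intensity
H = np.array([[1.0, 0.0]])                 # observation model
R = np.array([[0.05]])                     # observation noise

def predict(mean, cov, dt):
    """Propagate mean/covariance over an irregular gap dt (Euler approx. for Q)."""
    F = expm(A * dt)
    return F @ mean, F @ cov @ F.T + Q * dt

def update(mean, cov, obs):
    """Standard Kalman update with a noisy scalar observation."""
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)
    return mean + K @ (obs - H @ mean), (np.eye(2) - K @ H) @ cov

# Irregularly spaced observation times and values (synthetic)
times = np.array([0.0, 0.3, 1.1, 1.2, 2.7])
obs = np.sin(times)
mean, cov, t_prev = np.zeros(2), np.eye(2), times[0]
for t, y in zip(times, obs):
    mean, cov = predict(mean, cov, t - t_prev)
    mean, cov = update(mean, cov, np.array([y]))
    t_prev = t
print("final state estimate:", mean)
```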

    Theory and Applications of X-ray Standing Waves in Real Crystals

    Theoretical aspects of the x-ray standing wave method for investigating the real structure of crystals are considered in this review paper. Starting from the general approach to the secondary radiation yield from deformed crystals, this theory is applied to various concrete cases. Various models of deformed crystals, such as the bicrystal model, the multilayer model and crystals with extended deformation fields, are considered in detail. Peculiarities of x-ray standing wave behavior in different scattering geometries (Bragg, Laue) are analysed in detail. New possibilities for solving the phase problem with the x-ray standing wave method are discussed in the review. The general theoretical approaches are illustrated with a large number of experimental results.
    Comment: 101 pages, 43 figures, 3 tables
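    For orientation, the textbook form of the secondary-radiation yield in the XSW method for the perfect-crystal Bragg case is shown below; the review's more general expressions for deformed crystals are not reproduced here and reduce to this in the ideal limit.

```latex
% Y(\theta): secondary-radiation yield as a function of incidence angle,
% R(\theta): reflectivity, \alpha(\theta): phase of the reflected-to-incident
% amplitude ratio, f_c: coherent fraction, P_c: coherent position of the
% emitting atoms relative to the diffraction planes.
\[
  Y(\theta) \;=\; 1 \,+\, R(\theta) \,+\,
  2\sqrt{R(\theta)}\, f_c \cos\!\bigl(\alpha(\theta) - 2\pi P_c\bigr)
\]
```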

    Post-2020 climate agreements in the major economies assessed in the light of global models

    Integrated assessment models can help in quantifying the implications of international climate agreements and regional climate action. This paper reviews scenario results from model intercomparison projects to explore different possible outcomes of post-2020 climate negotiations, recently announced pledges, and their relation to the 2 °C target. We provide key information for all the major economies, such as the year of emission peaking, regional carbon budgets and emissions allowances. We highlight the distributional consequences of climate policies, and discuss the role of carbon markets for financing clean energy investments and achieving efficiency and equity.

    Validating anthropogenic threat maps as a tool for assessing river ecological integrity in Andean-Amazon basins

    Anthropogenic threat maps are commonly used as a surrogate for the ecological integrity of rivers in freshwater conservation, but a clearer understanding of their relationship is required to develop proper management plans at large scales. Here, we developed and validated empirical models that link the ecological integrity of rivers to threat maps in a large, heterogeneous and biodiverse Andean-Amazon watershed. Through fieldwork, we recorded data on aquatic invertebrate community composition, habitat quality and physical-chemical parameters to calculate the ecological integrity of 140 streams/rivers across the basin. Simultaneously, we generated maps that describe the location, extent and magnitude of impact of nine anthropogenic threats to freshwater systems in the basin. Through a seven-fold cross-validation procedure, we found that regression models based on anthropogenic threats alone have limited power for predicting the ecological integrity of rivers. However, prediction accuracy improved when environmental predictors (slope and elevation) were included, and more so when the predictions were carried out at a coarser scale, such as microbasins. Moreover, anthropogenic threats that amplify the incidence of other pressures (roads, human settlements and oil activities) are the most relevant predictors of ecological integrity. We conclude that threat maps can offer an overall picture of the ecological integrity pattern of the basin, making them a useful tool for broad-scale conservation planning for freshwater ecosystems. While it is always advisable to have finer-scale in situ measurements of ecological integrity, our study shows that threat maps provide fast and cost-effective results, which are so often needed for pressing management and conservation actions.
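    A hedged sketch of the validation strategy described above: regressing an ecological integrity index on threat-map predictors, with and without the environmental covariates, under seven-fold cross-validation. The synthetic data and column names are hypothetical, not the study's dataset.

```python
# Sketch: seven-fold cross-validated regression of an ecological integrity
# index on anthropogenic-threat and environmental predictors (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 140  # sites sampled across the basin
df = pd.DataFrame({
    "roads": rng.random(n),
    "settlements": rng.random(n),
    "oil_activities": rng.random(n),
    "slope": rng.random(n),
    "elevation": rng.random(n),
})
# Hypothetical integrity index driven mainly by two threats plus noise
df["integrity"] = 1 - 0.5 * df["roads"] - 0.3 * df["settlements"] \
                  + 0.1 * rng.standard_normal(n)

threat_only = ["roads", "settlements", "oil_activities"]
threat_plus_env = threat_only + ["slope", "elevation"]

cv = KFold(n_splits=7, shuffle=True, random_state=0)
for label, cols in [("threats only", threat_only),
                    ("threats + environment", threat_plus_env)]:
    r2 = cross_val_score(LinearRegression(), df[cols], df["integrity"],
                         cv=cv, scoring="r2")
    print(f"{label}: mean cross-validated R^2 = {r2.mean():.2f}")
```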

    Can deep learning predict risky retail investors? A case study in financial risk behavior forecasting

    The paper examines the potential of deep learning to support decisions in financial risk management. We develop a deep learning model for predicting whether individual spread traders secure profits from future trades. This task embodies typical modeling challenges faced in risk and behavior forecasting. Conventional machine learning requires data that is representative of the feature-target relationship and relies on the often costly development, maintenance and revision of handcrafted features. Consequently, modeling highly variable, heterogeneous patterns such as trader behavior is challenging. Deep learning promises a remedy. By learning hierarchical distributed representations of the data in an automatic manner (e.g. of risk-taking behavior), it uncovers generative features that determine the target (e.g. a trader's profitability), avoids manual feature engineering, and is more robust toward change (e.g. dynamic market conditions). The results of employing a deep network for operational risk forecasting confirm the feature learning capability of deep learning, provide guidance on designing a suitable network architecture, and demonstrate the superiority of deep learning over conventional machine learning and rule-based benchmarks.
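    A minimal stand-in for the kind of deep feed-forward classifier the abstract describes: several hidden layers learn representations from raw trading-history features and predict whether a trader's future trades are profitable. The synthetic features, labels and architecture below are placeholders, not the paper's network or data.

```python
# Hedged sketch: a multi-layer network predicting a binary "profitable trader"
# label from synthetic features standing in for trading-history attributes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=50, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Several hidden layers learn hierarchical representations automatically,
# replacing handcrafted features.
net = MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=300,
                    random_state=0)
net.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]))
```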

    A Metric Framework for quantifying Data Concentration

    The poor performance of artificial neural nets when applied to credit-related classification problems is investigated and contrasted with logistic regression classification. We propose that artificial neural nets are less successful because of the inherent structure of credit data rather than any particular aspect of the neural net structure. Three metrics are developed to rationalise this result for such data. The metrics exploit the distributional properties of the data to rationalise neural net results. They are used in conjunction with a variant of an established concentration measure that differentiates between class characteristics. The results are contrasted with those obtained using random data, and are compared with results obtained using logistic regression. We find, in general agreement with previous studies, that logistic regressions outperform neural nets in the majority of cases. An approximate decision criterion is developed in order to explain the adverse results.
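    For illustration only, here is one simple way to quantify how concentrated a feature's values are within each class, in the spirit of the concentration measures discussed above; the paper's own metric definitions are not reproduced, and the function and data below are hypothetical.

```python
# Illustrative sketch: a Herfindahl-style concentration index per class.
import numpy as np

def herfindahl_concentration(values, n_bins=10):
    """Sum of squared bin shares of a feature's distribution:
    1/n_bins for a uniform spread, 1.0 if all mass falls in one bin."""
    counts, _ = np.histogram(values, bins=n_bins)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

# Hypothetical credit feature for two classes (good vs. bad loans)
rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=1000)   # widely spread values
bad = rng.normal(2.0, 0.2, size=200)     # tightly concentrated values

print("concentration (good):", round(herfindahl_concentration(good), 3))
print("concentration (bad): ", round(herfindahl_concentration(bad), 3))
```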