66 research outputs found
Pattern Matching and Neural Networks based Hybrid Forecasting System
Copyright © 2001 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com.
Book title: Advances in Pattern Recognition — ICAPR 2001. Second International Conference on Advances in Pattern Recognition (ICAPR 2001), Rio de Janeiro, Brazil, March 11–14, 2001.
In this paper we propose a Neural Net-PMRS hybrid for forecasting time-series data. The neural network model uses the traditional MLP architecture and the backpropagation method of training. Rather than using the last n lags for prediction, the input to the network is determined by the output of the PMRS (Pattern Modelling and Recognition System). PMRS matches current patterns in the time-series with historic data and generates input for the neural network that consists of both current and historic information. The results of the hybrid model are compared with those of neural networks and PMRS on their own. In general, there is no outright winner on all performance measures; however, the hybrid model is a better choice for certain types of data or on certain error measures.
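The following minimal Python sketch makes the hybrid's data flow concrete: a nearest-pattern match over the history supplies a historic pattern and the value that followed it, which are concatenated with the current window to form the network input. The function names, the window size k, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Neural Net-PMRS idea, under assumed names/parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

def pmrs_match(series, k):
    """Find the start of the historic window most similar to the last k values."""
    current = series[-k:]
    best_i, best_d = 0, np.inf
    # Slide over history, excluding the current window itself.
    for i in range(len(series) - 2 * k):
        d = np.linalg.norm(series[i:i + k] - current)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def build_hybrid_input(series, k):
    """Concatenate the current pattern with the matched historic pattern
    plus its successor value, forming the hybrid network input."""
    i = pmrs_match(series, k)
    return np.concatenate([series[-k:], series[i:i + k + 1]])

# Toy usage: train an MLP on hybrid inputs built from a sliding forecast origin.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
k = 5
X, y = [], []
for t in range(4 * k, len(series) - 1):
    X.append(build_hybrid_input(series[:t + 1], k))
    y.append(series[t + 1])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(np.array(X), np.array(y))
print("one-step forecast:", model.predict([build_hybrid_input(series, k)])[0])
```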
Building robust prediction models for defective sensor data using Artificial Neural Networks
Predicting the health of components in complex dynamic systems such as an automobile poses numerous challenges. The primary aim of such predictive systems is to use the high-dimensional data acquired from different sensors to predict the state of health of a particular component, e.g., a brake pad. The classical approach involves selecting a smaller set of relevant sensor signals via feature selection and using them to train a machine learning algorithm. However, this fails to address two prominent problems: (1) sensors are susceptible to failure when exposed to extreme conditions over long periods of time; (2) sensors are electrical devices that can be affected by noise or electrical interference. Using failed and noisy sensor signals as inputs largely reduces prediction accuracy. To tackle this problem, it is advantageous to use the information from all sensor signals, so that the failure of one sensor can be compensated for by another. In this work, we first propose an Artificial Neural Network (ANN) based framework to exploit the information from a large number of signals. Second, our framework introduces a data augmentation approach to perform accurate predictions in spite of noisy signals. The plausibility of our framework is validated on a real-life industrial application from Robert Bosch GmbH.
Comment: 16 pages, 7 figures. Currently under review. This research has obtained funding from the Electronic Components and Systems for European Leadership (ECSEL) Joint Undertaking, the framework programme for research and innovation Horizon 2020 (2014-2020), under grant agreement number 662189-MANTIS-2014-
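A hedged sketch of what such a data augmentation step could look like: training copies are corrupted with simulated sensor failures (zeroed channels) and additive noise, so that the network learns to compensate through the remaining signals. The function name, dropout rate, and noise level are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative data augmentation for defective/noisy sensor inputs.
import numpy as np

def augment(X, rng, dropout_rate=0.1, noise_std=0.05):
    """Return a corrupted copy of the sensor matrix X (samples x sensors)."""
    X_aug = X.copy()
    # Simulate failed sensors: zero out a random subset of sensor channels.
    failed = rng.random(X.shape[1]) < dropout_rate
    X_aug[:, failed] = 0.0
    # Simulate electrical noise/interference on all channels.
    X_aug += noise_std * rng.standard_normal(X.shape)
    return X_aug

rng = np.random.default_rng(42)
X_train = rng.standard_normal((1000, 64))   # e.g., 64 sensor channels
X_augmented = np.vstack([X_train] + [augment(X_train, rng) for _ in range(3)])
print(X_augmented.shape)                     # (4000, 64)
```

Training the ANN on the clean and corrupted copies together is what lets predictions remain accurate when individual sensors fail or drift at test time.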
Tau association with synaptic vesicles causes presynaptic dysfunction
Tau is implicated in more than 20 neurodegenerative diseases, including Alzheimer's disease. Under pathological conditions, Tau dissociates from axonal microtubules and missorts to pre- and postsynaptic terminals. Patients suffer from early synaptic dysfunction prior to Tau aggregate formation, but the underlying mechanism is unclear. Here we show that pathogenic Tau binds to synaptic vesicles via its N-terminal domain and interferes with presynaptic functions, including synaptic vesicle mobility and release rate, lowering neurotransmission in fly and rat neurons. Pathological Tau mutants lacking the vesicle binding domain still localize to the presynaptic compartment but do not impair synaptic function in fly neurons. Moreover, an exogenously applied membrane-permeable peptide that competes for Tau-vesicle binding suppresses Tau-induced synaptic toxicity in rat neurons. Our work uncovers a presynaptic role of Tau that may be part of the early pathology in various Tauopathies and could be exploited therapeutically.
Are Realized Volatility Models Good Candidates for Alternative Value at Risk Prediction Strategies?
The role of high frequency intra-daily data, daily range and implied volatility in multi-period Value-at-Risk forecasting
In this paper, we assess the informational content of daily range, realized variance, realized bipower variation, two time scale realized variance, realized range and implied volatility in daily, weekly, biweekly and monthly out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model, combined with the skewed Student's t distribution for the innovations process, and a Monte Carlo simulation approach to produce the multi-period VaR estimates. The VaR forecasts are evaluated in terms of statistical and regulatory accuracy as well as capital efficiency. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce VaR forecasts that are precise in both statistical and regulatory terms across forecasting horizons, with implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, such as the adjusted realized range and the realized bipower variation, which are immune to microstructure noise bias and price jumps respectively, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Our results highlight the importance of robust volatility estimators based on high-frequency intra-daily data in a multi-step VaR forecasting context, as they balance statistical and regulatory accuracy against capital efficiency.
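To illustrate the Monte Carlo multi-period VaR mechanics, the sketch below simulates return paths and takes the lower quantile of the cumulative return per path. A plain GARCH(1,1) with symmetric Student-t innovations stands in for the paper's Realized GARCH with skewed Student's t, and all parameter values are illustrative, not estimated.

```python
# Simplified multi-period VaR by simulation (stand-in model, assumed parameters).
import numpy as np

def mc_var(h0, omega, alpha, beta, horizon, n_paths=100_000,
           df=6.0, level=0.01, seed=0):
    """h-step VaR: simulate variance recursions, cumulate returns per path,
    and return the negated level-quantile of cumulative returns."""
    rng = np.random.default_rng(seed)
    h = np.full(n_paths, h0)            # conditional variances per path
    cum_r = np.zeros(n_paths)           # cumulative log-returns per path
    scale = np.sqrt((df - 2.0) / df)    # rescale t draws to unit variance
    for _ in range(horizon):
        z = scale * rng.standard_t(df, n_paths)
        r = np.sqrt(h) * z
        cum_r += r
        h = omega + alpha * r**2 + beta * h   # GARCH(1,1) recursion
    return -np.quantile(cum_r, level)         # VaR as a positive loss figure

# Daily (h=1), weekly (h=5) and monthly (h=22) 1% VaR from the same model.
for horizon in (1, 5, 22):
    print(horizon, mc_var(h0=1e-4, omega=1e-6, alpha=0.08, beta=0.9,
                          horizon=horizon))
```

Statistical backtests, regulatory (Basel II) checks and capital calculations would then be applied to these simulated quantile estimates.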
Parallelism in knowledge-based machines
The application area of knowledge-based expert systems is currently providing the main stimulus for developing powerful, parallel computer architectures. Languages for programming knowledge-based applications divide into four broad classes: functional languages (e.g. LISP), logic languages (e.g. PROLOG), rule-based languages (e.g. OPS5), and what we refer to as self-organizing networks (e.g. Boltzmann machines). Despite their many differences, a common problem for all language classes and their supporting machine architectures is parallelism: how to decompose a single computation into a number of parallel tasks that can be distributed across an ensemble of processors. The aim of this paper is to review the four types of language for programming knowledge-based expert systems and their supporting parallel machine architectures. In doing so we analyze the concepts and relationships that exist between the programming languages and their parallel machine architectures in terms of their strengths and limitations for exploiting parallelism.
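As a toy illustration of the decomposition problem the paper surveys (not code from the paper), the sketch below splits the match step of a rule-based system across a pool of workers, each matching its own partition of the rule set against a shared working memory; rule names and conditions are invented for the example.

```python
# Illustrative parallel decomposition of a rule-based match step.
from concurrent.futures import ThreadPoolExecutor

RULES = [  # (name, condition) pairs; each condition tests working memory
    ("hot",  lambda wm: wm["temp"] > 30),
    ("cold", lambda wm: wm["temp"] < 5),
    ("wet",  lambda wm: wm["humidity"] > 0.8),
    ("dry",  lambda wm: wm["humidity"] < 0.2),
]

def match_partition(rules, wm):
    """One processor's task: find which of its rules fire on wm."""
    return [name for name, cond in rules if cond(wm)]

wm = {"temp": 35, "humidity": 0.1}
with ThreadPoolExecutor(max_workers=2) as pool:
    halves = [RULES[:2], RULES[2:]]   # distribute the rule set across workers
    fired = list(pool.map(lambda part: match_partition(part, wm), halves))
print([name for part in fired for name in part])   # ['hot', 'dry']
```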
PARLE: A language for expressing parallelism and integrating symbolic and numeric computations
Message passing via singly-buffered channels: an efficient & flexible communications control mechanism
