Hierarchical Temporal Representation in Linear Reservoir Computing
Recently, studies on deep Reservoir Computing (RC) have highlighted the role of layering in deep recurrent neural networks (RNNs). In this paper, the use of linear recurrent units allows us to provide further evidence of the intrinsic hierarchical temporal representation in deep RNNs through frequency analysis applied to the state signals. The potential of our approach is assessed on the class of Multiple Superimposed Oscillator tasks. Furthermore, our investigation provides useful insights that open a discussion on the main aspects that characterize the deep learning framework in the temporal domain.
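As a rough illustration of the setup described in this abstract, the following Python/NumPy sketch (not the authors' code; the layer count, reservoir size, spectral radius and input frequencies are illustrative assumptions) stacks a few linear reservoir layers, drives them with a sum of sinusoids in the spirit of the Multiple Superimposed Oscillator tasks, and inspects each layer's state spectrum via an FFT.

# Minimal sketch: a stack of linear reservoir layers driven by a
# multiple-superimposed-oscillator-style input, with an FFT of each
# layer's states to inspect the frequencies represented at each depth.
import numpy as np

rng = np.random.default_rng(0)
T, n_units, n_layers, rho = 2000, 100, 3, 0.9

# Input: a sum of sinusoids at incommensurate frequencies (illustrative).
t = np.arange(T)
u = sum(np.sin(0.2 * k * t) for k in (1.0, 1.7, 2.9))

def linear_reservoir(inp, n_units, rho, rng):
    """Run a linear reservoir x[t] = rho-scaled W x[t-1] + W_in inp[t-1]."""
    W = rng.standard_normal((n_units, n_units))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius
    W_in = rng.uniform(-0.1, 0.1, size=(n_units, inp.shape[1]))
    X = np.zeros((len(inp), n_units))
    for k in range(1, len(inp)):
        X[k] = W @ X[k - 1] + W_in @ inp[k - 1]
    return X

layer_input = u[:, None]
for layer in range(n_layers):
    states = linear_reservoir(layer_input, n_units, rho, rng)
    spectrum = np.abs(np.fft.rfft(states - states.mean(0), axis=0)).mean(axis=1)
    peak = np.fft.rfftfreq(T)[np.argmax(spectrum[1:]) + 1]
    print(f"layer {layer + 1}: dominant state frequency ~ {peak:.4f} cycles/step")
    layer_input = states                               # feed states to the next layer

Comparing the state spectra across depths is the kind of frequency analysis the abstract refers to for exposing a hierarchical temporal representation.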
Dynamic clustering of time series with Echo State Networks
In this paper we introduce a novel methodology for unsupervised analysis of time series, based upon the iterative implementation of a clustering algorithm embedded into the evolution of a recurrent Echo State Network. The main features of the temporal data are captured by the dynamical evolution of the network states, which are then subject to a clustering procedure. We apply the proposed algorithm to time series coming from records of eye movements, called saccades, which are recorded for diagnosis of a neurodegenerative form of ataxia. This is a hard classification problem, since saccades from patients at an early stage of the disease are practically indistinguishable from those coming from healthy subjects. The unsupervised clustering algorithm embedded within the recurrent network produces more compact clusters, compared to conventional clustering of static data, and provides a source of information that could aid diagnosis and assessment of the disease.
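A much-simplified, non-iterative variant of this idea can be sketched as follows (Python/NumPy with scikit-learn; the reservoir size, the toy signals standing in for saccade recordings, and the use of the time-averaged state as a fixed-size representation are assumptions, not the paper's algorithm): each series is passed through an Echo State Network and the resulting state representations, rather than the raw static samples, are clustered.

# Simplified sketch: drive an Echo State Network with each time series and
# cluster the resulting reservoir-state representations with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_units, rho = 200, 0.95
W = rng.standard_normal((n_units, n_units))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))       # rescale spectral radius
W_in = rng.uniform(-0.5, 0.5, size=(n_units, 1))

def esn_representation(series):
    """Mean reservoir state over time as a fixed-size representation of a series."""
    x = np.zeros(n_units)
    states = []
    for u in series:
        x = np.tanh(W @ x + W_in[:, 0] * u)
        states.append(x)
    return np.mean(states, axis=0)

# Toy data: two families of noisy signals (illustrative stand-in for saccades).
series_a = [np.sin(0.3 * np.arange(300)) + 0.1 * rng.standard_normal(300) for _ in range(20)]
series_b = [np.sign(np.sin(0.3 * np.arange(300))) + 0.1 * rng.standard_normal(300) for _ in range(20)]
X = np.array([esn_representation(s) for s in series_a + series_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)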
Optoelectronic Reservoir Computing
Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an opto-electronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations.
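The single-nonlinear-node-plus-delay architecture is simple enough to emulate in software. The sketch below (Python/NumPy; the sine nonlinearity, the number of virtual nodes, the scaling constants and the toy prediction task are assumptions, and all opto-electronic hardware details are ignored) time-multiplexes the input over the virtual nodes of a delay line and trains a linear readout by ridge regression.

# Software emulation sketch of a delay-line reservoir: one nonlinear node,
# with the delay interval split into N "virtual nodes" and the input
# time-multiplexed through a fixed random mask.
import numpy as np

rng = np.random.default_rng(2)
N_virtual, eta, gamma = 50, 0.5, 0.05            # virtual nodes, feedback and input scaling
mask = rng.choice([-0.1, 0.1], size=N_virtual)   # fixed random input mask

def delay_reservoir(u):
    """Return a (T, N_virtual) state matrix for a scalar input sequence u."""
    states = np.zeros((len(u), N_virtual))
    prev = np.zeros(N_virtual)                   # state one delay interval ago
    for t, ut in enumerate(u):
        x = np.empty(N_virtual)
        for i in range(N_virtual):
            # Each virtual node mixes its delayed value with the masked input
            # through the (here: sine) nonlinearity of the single node.
            x[i] = np.sin(eta * prev[i] + gamma * mask[i] * ut)
        states[t] = x
        prev = x
    return states

# Linear readout trained by ridge regression, e.g. to predict u[t+1] from the states.
u = np.sin(0.2 * np.arange(1000)) * np.cos(0.031 * np.arange(1000))
X, y = delay_reservoir(u)[:-1], u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_virtual), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))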
Reservoir Computing Approach to Robust Computation using Unreliable Nanoscale Networks
As we approach the physical limits of CMOS technology, advances in materials
science and nanotechnology are making available a variety of unconventional
computing substrates that can potentially replace top-down-designed
silicon-based computing devices. Inherent stochasticity in the fabrication
process and nanometer scale of these substrates inevitably lead to design
variations, defects, faults, and noise in the resulting devices. A key
challenge is how to harness such devices to perform robust computation. We
propose reservoir computing as a solution. In reservoir computing, computation
takes place by translating the dynamics of an excited medium, called a
reservoir, into a desired output. This approach eliminates the need for
external control and redundancy, and the programming is done using a
closed-form regression problem on the output, which also allows concurrent
programming using a single device. Using a theoretical model, we show that both
regular and irregular reservoirs are intrinsically robust to structural noise
as they perform computation.
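The programming model described here, in which only a linear readout is fitted by closed-form regression on top of a fixed (and possibly imperfect) dynamical substrate, can be illustrated with a small software experiment (Python/NumPy; the tanh reservoir model, the delay-recall task and the noise level are illustrative assumptions rather than the paper's theoretical model).

# Illustrative sketch: a random reservoir whose only trained part is a linear
# readout obtained by closed-form ridge regression, so a "device" with weight
# variations can be (re)programmed without tuning its internal structure.
import numpy as np

rng = np.random.default_rng(3)
n, rho, T = 300, 0.9, 3000
u = rng.uniform(-0.5, 0.5, T)
target = np.roll(u, 5)                           # toy task: recall the input 5 steps back

def run_reservoir(W, W_in, u):
    x = np.zeros(n)
    X = np.zeros((len(u), n))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)
        X[t] = x
    return X

def readout_error(W, W_in):
    X = run_reservoir(W, W_in, u)[100:]          # drop the initial transient
    y = target[100:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)   # closed-form regression
    return np.sqrt(np.mean((X @ W_out - y) ** 2))

W = rng.standard_normal((n, n))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius
W_in = rng.uniform(-1, 1, n)

print("nominal reservoir RMSE:  ", readout_error(W, W_in))
# "Structural noise": perturb the coupling weights as fabrication variation might,
# then simply refit the readout on the perturbed device.
print("perturbed reservoir RMSE:", readout_error(W + 0.005 * rng.standard_normal((n, n)), W_in))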
Climate, people, fire and vegetation: new insights into vegetation dynamics in the Eastern Mediterranean since the 1st century AD
Anatolia forms a bridge between Europe, Africa and Asia and is influenced by all three continents in terms of climate, vegetation and human civilisation. Unfortunately, well-dated palynological records focussing on the period from the end of the classical Roman period until subrecent times are rare for Anatolia and completely absent for southwest Turkey, resulting in a lacuna in knowledge concerning the interactions of climatic change, human impact, and environmental change in this important region. Two well-dated palaeoecological records from the Western Taurus Mountains, Turkey, provide a first relatively detailed record of vegetation dynamics from late Roman times until the present in SW Turkey. Combining pollen, non-pollen palynomorph, charcoal, sedimentological and archaeological data with newly developed multivariate numerical analyses allows the disentangling of climatic and anthropogenic influences on vegetation change. Results show that changes in both the regional pollen signal and local soil sediment characteristics match shifts in regional climatic conditions. Both climatic and anthropogenic change had a strong influence on vegetation dynamics and land use. A moist environmental trend during the late 3rd century caused an increase in marshes and wetlands in the moister valley floors, limiting possibilities for intensive crop cultivation at such locations. A mid-7th century shift to pastoralism coincided with a climatic deterioration as well as the start of Arab incursions into the region, the former driving the way in which the vegetation developed afterwards. A resurgence in agriculture was observed during the mid-10th century AD, coinciding with the Medieval Climate Anomaly. An abrupt mid-12th century decrease in agriculture is linked to socio-political change, rather than the onset of the Little Ice Age. Similarly, gradual deforestation occurring from the 16th century onwards has been linked to changes in land use during Ottoman times. The pollen data reveal that a fast rise in Pinus pollen after the end of the Beyşehir Occupation Phase need not always occur. The notion that high Pinus pollen percentages indicate an open landscape incapable of countering the influx of pine pollen is also deemed unrealistic. While multiple fires occurred in the region through time, extended fire periods, such as those of the Bronze Age and the Beyşehir Occupation Phase, did not recur, and no signs of local fire activity were observed. Fires were never a major influence on vegetation dynamics. While no complete overview of post-Beyşehir Occupation Phase fire events can be presented, the available data indicate that fires in the vicinity of Gravgaz may have been linked to anthropogenic activity in the wider surroundings of the marsh. Fires in the vicinity of Bereket appeared to be linked to an increased abundance of pine forests. There was no link with specifically wet or dry environmental conditions at either site. While this study reveals much new information concerning the impact of climate change and human occupation on the environment, more studies from SW Turkey are required in order to properly quantify the range of the observed phenomena and the magnitude of their impacts.
Development of a quality assurance process for the SoLid experiment
The SoLid experiment has been designed to search for an oscillation pattern induced by a light sterile neutrino state, utilising the BR2 reactor of SCK•CEN in Belgium.
The detector leverages a new hybrid technology, utilising two distinct scintillators in a cubic array, creating a highly segmented detector volume. A combination of 5 cm cubic polyvinyltoluene cells, with 6LiF:ZnS(Ag) sheets on two faces of each cube, facilitates reconstruction of the neutrino signals. Whilst the high granularity provides a powerful toolset to discriminate backgrounds, the segmentation also represents a challenge in terms of homogeneity and calibration for a consistent detector response. The search for this light sterile neutrino implies a sensitivity to distortions of around O(10)% in the energy spectrum of reactor antineutrinos. Hence, very good neutron detection efficiency, light yield and a homogeneous detector response are critical for data validation. The minimal requirements for the SoLid physics program are a light yield and a neutron detection efficiency larger than 40 PA/MeV/cube and 50%, respectively. In order to guarantee these minimal requirements, the collaboration developed a rigorous quality assurance process for all 12800 cubic cells of the detector. To carry out the quality assurance process, an automated calibration system called CALIPSO was designed and constructed. CALIPSO provides precise, automatic placement of radioactive sources in front of each cube of a given detector plane (16 x 16 cubes). A combination of Na-22, Cf-252 and AmBe gamma and neutron sources was used by CALIPSO during the quality assurance process. Firstly, the scanning identified defective components, allowing for repairs during the initial construction of the SoLid detector. Secondly, a full analysis of the calibration data revealed initial estimates of a light yield of over 60 PA/MeV and a neutron reconstruction efficiency of 68%, validating the SoLid physics requirements.
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an over-simplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g. of speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e. recognition) of human kinematics.
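A heavily simplified sketch of the recognition idea is given below (Python/NumPy). It is not the paper's Bayesian derivation: the generative RNN, the observation model and in particular the fixed correction gain are assumptions standing in for the derived update equations, but it shows the prediction and prediction-error message structure of an rRNN-style loop.

# Simplified sketch: a generative RNN produces noisy observations, and a
# recognition loop inverts it by correcting its state prediction with a
# weighted prediction error, in the spirit of predictive coding.
import numpy as np

rng = np.random.default_rng(4)
n_hidden, n_obs, T = 20, 3, 400
W = 1.5 * rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
C = rng.standard_normal((n_obs, n_hidden)) / np.sqrt(n_hidden)

# Generative RNN: latent dynamics plus noisy observations (e.g. kinematics).
x = rng.standard_normal(n_hidden)
observations = []
for _ in range(T):
    x = np.tanh(W @ x)
    observations.append(C @ x + 0.05 * rng.standard_normal(n_obs))

# Recognition loop ("rRNN"-like): predict, compare with the observation,
# and correct the latent estimate with the prediction error.
K = 0.3 * np.linalg.pinv(C)                      # crude fixed gain (assumption)
x_hat = np.zeros(n_hidden)
errs = []
for y in observations:
    x_pred = np.tanh(W @ x_hat)                  # prediction message
    e = y - C @ x_pred                           # prediction-error message
    x_hat = x_pred + K @ e                       # corrected latent estimate
    errs.append(np.linalg.norm(e))

print("mean observation prediction error:", np.mean(errs))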
Robotic ubiquitous cognitive ecology for smart homes
Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON in each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.
Soil and water bioengineering: practice and research needs for reconciling natural hazard control and ecological restoration
Soil and water bioengineering is a technology that encourages scientists and practitioners to combine their knowledge and skills in the management of ecosystems with a common goal to maximize benefits to both man and the natural environment. It involves techniques that use plants as living building materials for: (i) natural hazard control (e.g., soil erosion, torrential floods and landslides) and (ii) ecological restoration or nature-based re-introduction of species on degraded lands, river embankments, and disturbed environments. For a bioengineering project to be successful, engineers are required to highlight all the potential benefits and ecosystem services by documenting the technical, ecological, economic and social values. The novel approaches used by bioengineers raise questions for researchers and necessitate innovation from practitioners in designing bioengineering concepts and techniques. Our objective in this paper, therefore, is to highlight the practice and research needs in soil and water bioengineering for reconciling natural hazard control and ecological restoration. Firstly, we review the definition and development of bioengineering technology, while stressing issues concerning the design, implementation, and monitoring of bioengineering actions. Secondly, we highlight the need to reconcile natural hazard control and ecological restoration by posing novel practice and research questions.
Optimized parameter search for large datasets of the regularization parameter and feature selection for ridge regression
In this paper we propose mathematical optimizations to select the optimal regularization parameter for ridge regression using cross-validation. The resulting algorithm is suited for large datasets, and its computational cost does not depend on the size of the training set. We extend this algorithm to forward and backward feature selection, in which the optimal regularization parameter is selected for each possible feature set. These feature selection algorithms yield solutions with a sparse weight matrix using a quadratic cost on the norm of the weights. A naive approach to optimizing the ridge regression parameter has a computational complexity that grows with the number of applied regularization parameters, the number of folds in the validation set, the number of input features and the number of data samples in the training set. Our implementation reduces this cost: it is smaller than that of regression without regularization for large datasets, and it is independent of the number of applied regularization parameters and of the size of the training set. Combined with a feature selection algorithm, the cost additionally depends on the number of selected or removed features for forward and backward feature selection respectively, and remains substantially faster than the naive implementation for large datasets. To show the performance and reduction in computational cost, we apply this technique to train recurrent neural networks using the reservoir computing approach, windowed ridge regression, least-squares support vector machines (LS-SVMs) in primal space using the fixed-size LS-SVM approximation, and extreme learning machines.
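The general trick behind such optimizations can be sketched as follows (Python/NumPy; this is an illustrative reconstruction, not the paper's exact algorithm or complexity analysis): per-fold covariance matrices are accumulated in a single pass over the data, after which cross-validating each candidate regularization parameter requires only small matrix solves whose cost is independent of the number of samples.

# Sketch: efficient cross-validated search of the ridge regularization
# parameter using precomputed per-fold covariance matrices.
import numpy as np

rng = np.random.default_rng(5)
N, D, n_folds = 100_000, 50, 5
X = rng.standard_normal((N, D))
y = X @ rng.standard_normal(D) + 0.5 * rng.standard_normal(N)

folds = np.array_split(np.arange(N), n_folds)
XtX_f = [X[idx].T @ X[idx] for idx in folds]     # per-fold covariances, one pass: O(N D^2)
Xty_f = [X[idx].T @ y[idx] for idx in folds]
yy_f = [y[idx] @ y[idx] for idx in folds]
XtX, Xty = sum(XtX_f), sum(Xty_f)

best = None
for lam in np.logspace(-6, 3, 50):               # candidate regularization parameters
    cv_err = 0.0
    for A_f, b_f, c_f in zip(XtX_f, Xty_f, yy_f):
        # Train on all folds except f using only the precomputed D x D matrices.
        w = np.linalg.solve(XtX - A_f + lam * np.eye(D), Xty - b_f)
        # Validation error on fold f, also computed from precomputed quantities:
        # ||X_f w - y_f||^2 = w' A_f w - 2 w' b_f + y_f' y_f.
        cv_err += w @ A_f @ w - 2 * w @ b_f + c_f
    if best is None or cv_err < best[0]:
        best = (cv_err, lam)

print("selected regularization parameter:", best[1])

After the one-off pass over the data, the cost per candidate parameter is a handful of D x D solves, so adding more regularization candidates or more training samples does not change the per-parameter cost, which is the property the abstract emphasises.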