Neural plasma
This paper presents a novel type of artificial neural network, called neural plasma, which is tailored for classification tasks involving few observations with a large number of variables. Neural plasma learns to adapt its classification confidence by generating artificial training data as a function of its confidence in previous decisions. In contrast to multilayer perceptrons and similar techniques, which are inspired by topological and operational aspects of biological neural networks, neural plasma is motivated by aspects of high-level behavior and reasoning in the presence of uncertainty. The basic principles of the proposed model apply to other supervised learning algorithms that provide explicit classification confidence values. The empirical evaluation of this new technique is based on benchmarking experiments involving data sets from biotechnology that are characterized by the small-n-large-p problem.
The study presents a comprehensive methodology and is seen as a first step in exploring its different aspects.
IFIP International Conference on Artificial Intelligence in Theory and Practice - Neural Nets. Red de Universidades con Carreras en Informática (RedUNCI).
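The core idea of confidence-driven generation of artificial training data can be sketched roughly as follows; the function names, thresholds, and noise model below are hypothetical illustrations of the general principle, not the paper's actual update rule:

```python
import random

def augment_low_confidence(train, labels, confidence, n_copies=3,
                           noise=0.05, threshold=0.8, seed=0):
    """Confidence-driven augmentation: for each training sample whose
    classification confidence falls below `threshold`, append jittered
    copies so later training rounds see more data near uncertain
    regions. This illustrates the general idea of adapting to one's
    own confidence, not neural plasma's exact mechanism."""
    rng = random.Random(seed)
    new_x, new_y = list(train), list(labels)
    for x, y in zip(train, labels):
        if confidence(x) < threshold:
            for _ in range(n_copies):
                new_x.append(tuple(v + rng.gauss(0, noise) for v in x))
                new_y.append(y)
    return new_x, new_y

# toy run: a stand-in confidence function that flags the sample
# near the decision midpoint (0.5) as uncertain
conf = lambda x: 0.95 if abs(x[0] - 0.5) > 0.2 else 0.6
X, Y = [(0.1,), (0.5,), (0.9,)], ["a", "a", "b"]
X2, Y2 = augment_low_confidence(X, Y, conf)
print(len(X2))  # 3 originals + 3 jittered copies of the uncertain sample
```

Any supervised learner that exposes explicit confidence values could supply the `confidence` callback, which matches the abstract's remark that the principle is not tied to one model class.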
Instance-based concept learning from multiclass DNA microarray data
BACKGROUND: Various statistical and machine learning methods have been successfully applied to the classification of DNA microarray data. Simple instance-based classifiers such as nearest neighbor (NN) approaches perform remarkably well in comparison to more complex models, and are currently experiencing a renaissance in the analysis of data sets from biology and biotechnology. While binary classification of microarray data has been extensively investigated, studies involving multiclass data are rare. The question remains open whether there exists a significant difference in performance between NN approaches and more complex multiclass methods. Comparative studies in this field commonly assess different models based on their classification accuracy only; however, this approach lacks the rigor needed to draw reliable conclusions and is inadequate for testing the null hypothesis of equal performance. Comparing novel classification models to existing approaches requires focusing on the significance of differences in performance. RESULTS: We investigated the performance of instance-based classifiers, including a NN classifier able to assign a degree of class membership to each sample. This model alleviates a major problem of conventional instance-based learners, namely the lack of confidence values for predictions. The model translates the distances to the nearest neighbors into 'confidence scores'; the higher the confidence score, the closer the considered instance is to a pre-defined class. We applied the models to three real gene expression data sets and compared them with state-of-the-art methods for classifying microarray data of multiple classes, assessing performance using a statistical significance test that took into account the data resampling strategy. Simple NN classifiers performed as well as, or significantly better than, their more intricate competitors.
CONCLUSION: Given its highly intuitive underlying principles (simplicity, ease of use, and robustness), the k-NN classifier complemented by a suitable distance-weighting regime constitutes an excellent alternative to more complex models for multiclass microarray data sets. Instance-based classifiers using weighted distances are not limited to microarray data sets, but are likely to perform competitively in classifications of other high-dimensional biological data sets, such as those generated by high-throughput mass spectrometry.
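The distance-to-confidence translation described above can be sketched in a few lines; the inverse-distance weighting and the toy data are illustrative choices, not the exact scheme from the paper:

```python
import math
from collections import defaultdict

def knn_confidence(train, labels, x, k=3):
    """Distance-weighted k-NN: translate the distances to the k nearest
    neighbors into per-class confidence scores (closer -> higher)."""
    nearest = sorted(
        (math.dist(t, x), y) for t, y in zip(train, labels)
    )[:k]
    scores = defaultdict(float)
    for d, y in nearest:
        scores[y] += 1.0 / (d + 1e-9)   # inverse-distance weighting
    total = sum(scores.values())
    return {y: s / total for y, s in scores.items()}  # normalized per class

# toy 2-D, two-class example
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["a", "a", "b", "b"]
conf = knn_confidence(train, labels, (0.05, 0.1), k=3)
print(max(conf, key=conf.get))  # -> "a"
```

Because the scores are normalized over the classes among the k neighbors, they can be read as degrees of class membership, which is the property that conventional instance-based learners lack.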
Estimating the Replication Probability of Significant Classification Benchmark Experiments
A fundamental question in machine learning is: "What are the chances that a statistically significant result will replicate?" The standard framework of null hypothesis significance testing, however, cannot answer this question directly. In this work, we derive formulas for estimating the replication probability that are applicable in two of the most widely used experimental designs in machine learning: the comparison of two classifiers over multiple benchmark datasets and the comparison of two classifiers in k-fold cross-validation. Using simulation studies, we show that p-values just below the common significance threshold of 0.05 are insufficient to warrant a high confidence in the replicability of significant results, as such p-values are barely more informative than the flip of a coin. If a replication probability of around 0.95 is desired, then the significance threshold should be lowered to at least 0.003. This observation might explain, at least in part, why many published research findings fail to replicate.
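A simulation in the spirit of this study (comparison over multiple benchmark datasets) can be sketched as follows; the effect size, spread, and the normal approximation to the paired test are assumptions for illustration and do not reproduce the paper's derived formulas:

```python
import random
import statistics
from statistics import NormalDist

def p_value(diffs):
    """Two-sided p-value for mean(diffs) != 0, using a normal
    approximation (adequate for the ~30 paired differences here)."""
    n = len(diffs)
    se = statistics.stdev(diffs) / n ** 0.5
    z = statistics.mean(diffs) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def replication_rate(effect=0.01, sd=0.02, n_datasets=30,
                     alpha=0.05, runs=2000, seed=0):
    """Among initial experiments significant at `alpha`, estimate the
    fraction whose independent replication is also significant."""
    rng = random.Random(seed)
    draw = lambda: [rng.gauss(effect, sd) for _ in range(n_datasets)]
    hits = replicated = 0
    for _ in range(runs):
        if p_value(draw()) < alpha:              # initial study significant
            hits += 1
            replicated += p_value(draw()) < alpha  # independent replication
    return replicated / hits if hits else float("nan")

print(round(replication_rate(), 2))  # well below the 0.95 often hoped for
```

Even with a genuine (assumed) accuracy advantage of one percentage point, the estimated replication rate falls noticeably short of 0.95, echoing the abstract's warning about p-values just below 0.05.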
Low Cost IoT System for Solar Panel Power Monitoring
In this work, we present a low-cost system to monitor energy production from a solar panel. Based on simple devices, this solution makes it possible to measure current, voltage, and power, and to visualize them through Node-RED, a free and openly available IoT application. The project serves several essential purposes: use in education, in research, and even in production monitoring of a photovoltaic system.
Self-organizing incremental neural networks for continual learning
Continual learning systems can adapt to new tasks, changes in data distributions, and new information that becomes incrementally available over time. The key challenge for such systems is how to mitigate catastrophic forgetting, i.e., how to prevent the loss of previously learned knowledge when new tasks need to be solved. In our research, we investigate self-organizing incremental neural networks (SOINN) for continual learning from both stationary and non-stationary data. We have developed a new algorithm, SOINN+, that learns to forget irrelevant nodes and edges and is robust to noise.
A Self-Organizing Incremental Neural Network for Continual Supervised Learning
Continual learning algorithms can adapt to changes of data distributions, new classes, and even completely new tasks without catastrophically forgetting previously acquired knowledge. Here, we present a novel self-organizing incremental neural network, GSOINN+, for continual supervised learning. GSOINN+ learns a topological mapping of the input data to an undirected network and uses a weighted nearest-neighbor rule with fractional distance for classification. GSOINN+ learns new classification tasks incrementally; tasks do not need to be specified a priori, and no rehearsal of previously learned tasks with stored training sets is required. In a series of sequential learning experiments, we show that GSOINN+ can mitigate catastrophic forgetting, even when completely new tasks are to be learned.
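The weighted nearest-neighbor rule with fractional distance mentioned above can be sketched as follows; the exponent p = 0.5, the inverse-distance weights, and the toy node set are illustrative assumptions, not GSOINN+'s exact configuration:

```python
def fractional_dist(a, b, p=0.5):
    """Minkowski-style distance with fractional exponent p < 1, a
    family that tends to discriminate better than Euclidean distance
    in high-dimensional spaces."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def weighted_nn_predict(nodes, labels, x, k=3, p=0.5):
    """Weighted nearest-neighbor rule over the network's nodes:
    each of the k nearest nodes votes with inverse-distance weight."""
    nearest = sorted(
        (fractional_dist(n, x, p), y) for n, y in zip(nodes, labels)
    )[:k]
    votes = {}
    for d, y in nearest:
        votes[y] = votes.get(y, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

# toy network: two nodes per previously learned task
nodes = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.1, 0.9)]
labels = ["old task", "old task", "new task", "new task"]
print(weighted_nn_predict(nodes, labels, (0.1, 0.0)))  # -> "old task"
```

Classifying against stored network nodes rather than raw training sets is what lets such a model drop the rehearsal of earlier tasks, as the abstract notes.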
Can machines make us think? In memory of Alan Turing (1912-1954)
Alan Turing's question "Can machines think?" motivated his famous imitation game, which became widely known as the Turing test. Constructing a machine that can pass the test was seen by many as the "holy grail" of artificial intelligence (AI) research because such a machine must be assumed to have intelligence. The test had a tremendous impact on computer science and stirred many philosophical debates over the last decades. Today, however, the test has nearly vanished from research agendas in AI. Here, we argue that the Turing test is still inspirational. Modern computing machinery is now an integral part of myriads of problem-solving processes and has revolutionized how science is done. Machines can make us think, that is, help us refine or develop new theories about the natural world.
Quo Vadis, Artificial Intelligence?
Since its conception in the mid-1950s, artificial intelligence, with its great ambition to understand and emulate intelligence in natural and artificial environments alike, has grown into a truly multidisciplinary field that reaches out to, and is inspired by, a great diversity of other fields. Rapid advances in research and technology in various fields have created environments into which artificial intelligence could embed itself naturally and comfortably. Neuroscience, with its desire to understand the nervous systems of biological organisms, and systems biology, with its longing to comprehend, holistically, the multitude of complex interactions in biological systems, are two such fields. They target ideals artificial intelligence has dreamt about for a long time, including the computer simulation of an entire biological brain and the creation of new life forms from manipulations of cellular and genetic information in the laboratory. The scope for artificial intelligence in neuroscience and systems biology is extremely wide. This article investigates the standing of artificial intelligence in relation to neuroscience and systems biology and provides an outlook on new and exciting challenges for artificial intelligence in these fields. These challenges include, but are not necessarily limited to, the ability to learn from other projects and to be inventive, to understand the potential of and exploit novel computing paradigms and environments, to specify and adhere to stringent standards and robust statistical frameworks, to be integrative, and to embrace openness principles.
