Pain-related Somatosensory Evoked Potentials: a potential new tool to improve the prognostic prediction of coma after cardiac arrest
Auditory evoked potential classification by unsupervised ART 2-A and supervised fuzzy ARTMAP networks
Classification Of Auditory Brainstem Responses By Human Experts And Backpropagation Neural Networks
Determining hearing threshold from brain stem evoked potentials. Optimizing a neural network to improve classification performance
Feed-forward neural networks in conjunction with back-propagation are an effective tool to automate the classification of biomedical signals. Most of the neural network research to date has been done with a view to accelerating learning speed. In the medical context, however, generalisation may be more important than learning speed. With the brain stem auditory evoked potential classification task described in this study, the authors found that parameter values that gave fastest learning could result in poor generalisation. In order to achieve maximum generalisation, it was necessary to fine-tune the neural net for gain, momentum, batch size, and hidden layer size. Although this maximisation could be time-consuming, especially with larger training sets, the authors' results suggest that fine-tuning parameters can have important clinical consequences, which justifies the time involved. In the authors' case, fine-tuning parameters for high generalisation had the additional effect of reducing false negative classifications, with only a small sacrifice in learning speed.
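The tuning loop the abstract describes can be sketched as a small grid search over gain (learning rate), momentum, batch size, and hidden-layer size, selecting the combination with the best held-out validation accuracy as a proxy for generalisation. This is a minimal illustration, not the study's method: the data below are a hypothetical stand-in (two noisy 2-D clusters), and the parameter grids are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: two noisy 2-D clusters, NOT the study's
# evoked-potential recordings.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
X_tr, y_tr = X[:140], y[:140]   # training set
X_va, y_va = X[140:], y[140:]   # held-out validation set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_and_score(gain, momentum, batch, hidden, epochs=200):
    """Plain back-propagation with momentum on a one-hidden-layer sigmoid
    net; returns validation accuracy for this parameter combination."""
    W1 = rng.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
    vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
    for _ in range(epochs):
        for s in range(0, len(X_tr), batch):
            xb, yb = X_tr[s:s + batch], y_tr[s:s + batch, None]
            h = sigmoid(xb @ W1 + b1)        # forward pass, hidden layer
            o = sigmoid(h @ W2 + b2)         # forward pass, output
            d2 = (o - yb) * o * (1 - o)      # output-layer error signal
            d1 = (d2 @ W2.T) * h * (1 - h)   # back-propagated hidden error
            n = len(xb)
            vW2 = momentum * vW2 - gain * (h.T @ d2) / n
            vb2 = momentum * vb2 - gain * d2.mean(axis=0)
            vW1 = momentum * vW1 - gain * (xb.T @ d1) / n
            vb1 = momentum * vb1 - gain * d1.mean(axis=0)
            W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1
    preds = sigmoid(sigmoid(X_va @ W1 + b1) @ W2 + b2).ravel() > 0.5
    return float((preds == (y_va == 1)).mean())

grid = [(g, m, b, h)
        for g in (0.1, 1.0)    # gain
        for m in (0.0, 0.9)    # momentum
        for b in (10, 140)     # batch size
        for h in (2, 8)]       # hidden-layer size
scores = {p: train_and_score(*p) for p in grid}
best = max(scores, key=scores.get)
print("best (gain, momentum, batch, hidden):", best, "val acc:", scores[best])
```

Note that model selection here uses a single validation split; with the small training sets typical of clinical data, cross-validation would give a less noisy estimate of generalisation.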
Are modified back-propagation algorithms worth the effort?
A wide range of modifications and extensions to the backpropagation (BP) algorithm have been tested on a real-world medical problem. Our results show that: 1) proper tuning of learning parameters of standard BP not only increases the speed of learning but also has a significant effect on generalisation; 2) parameter combinations and training options which lead to fast learning do not usually yield good generalisation and vice versa; 3) standard BP may be fast enough when its parameters are finely tuned; 4) modifications developed on artificial problems for faster learning do not necessarily give faster learning on real-world problems, and when they do, it may be at the expense of generalisation; and 5) even when modified BP algorithms perform well, they may require extensive fine-tuning to achieve this performance. For our problem, none of the modifications could justify the effort to implement them.
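The kind of standard-versus-modified comparison the abstract reports can be sketched generically. As an illustrative example of a BP modification (the specific variants tested in the paper are not named here), the "bold driver" adaptive learning rate is compared against fixed-rate gradient descent on a toy least-squares problem, counting iterations to reach a loss criterion. The data, tolerances, and rate factors are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares problem standing in for the medical task.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

def loss_grad(w):
    """Mean squared error and its gradient for a linear model."""
    r = X @ w - y
    return float(r @ r) / len(y), 2.0 * (X.T @ r) / len(y)

def fixed_rate(lr=0.1, tol=0.02, max_it=5000):
    """Standard gradient descent: constant learning rate throughout."""
    w = np.zeros(5)
    for it in range(max_it):
        L, g = loss_grad(w)
        if L < tol:
            return it          # iterations needed to reach the criterion
        w -= lr * g
    return max_it

def bold_driver(lr=0.1, grow=1.1, shrink=0.5, tol=0.02, max_it=5000):
    """Modified rule: grow the rate after an improving step, shrink it
    after a worsening one (a classic BP speed-up heuristic)."""
    w = np.zeros(5)
    prev = float("inf")
    for it in range(max_it):
        L, g = loss_grad(w)
        if L < tol:
            return it
        lr = lr * grow if L < prev else lr * shrink
        prev = L
        w -= lr * g
    return max_it

print("iterations, fixed rate :", fixed_rate())
print("iterations, bold driver:", bold_driver())
```

A fair version of this experiment would also track held-out error, since the abstract's central point is that raw learning speed and generalisation can trade off against each other.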
