Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition
This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed for a joint multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is then used to predict each individual subband, and the predicted subband signals are summed to obtain the overall prediction. The ARMA model works better for stationary signals; with the multiband representation, each subband becomes a band-limited (narrow-band) signal, and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and a full-band ARMA model in terms of signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method outperforms the other methods.
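The subband pipeline above (decompose, predict each band, sum the band predictions) can be sketched in a few lines. This is a deliberately simplified single-channel illustration, not the paper's MEMD-ARMA implementation: a moving-average split stands in for EMD/MEMD, and a least-squares AR fit stands in for the full ARMA model.

```python
import numpy as np

def band_split(x, window=8):
    """Split a signal into a smooth (low-band) and residual (high-band)
    component with a moving average -- a crude stand-in for EMD/MEMD."""
    kernel = np.ones(window) / window
    low = np.convolve(x, kernel, mode="same")
    return low, x - low

def fit_ar(x, order=4):
    """Least-squares fit of an AR(order) model; returns the coefficients."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(x, coeffs):
    """One-step-ahead prediction from the last `order` samples."""
    return float(x[-len(coeffs):] @ coeffs)

rng = np.random.default_rng(0)
t = np.arange(512)
x = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(512)

# Subband approach: predict each narrow band separately, then sum.
low, high = band_split(x)
pred = predict_next(low, fit_ar(low)) + predict_next(high, fit_ar(high))
```

Each band is closer to stationary than the full signal, which is why the per-band predictors tend to do better than a single full-band model, as the abstract argues.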
Emotion Recognition from EEG Signal Focusing on Deep Learning and Shallow Learning Techniques
Recently, electroencephalogram-based emotion recognition has become crucial in making Human-Computer Interaction (HCI) systems more intelligent. Owing to its outstanding applications, e.g., person-based decision making, mind-machine interfacing, cognitive interaction, and affect and feeling detection, emotion recognition has attracted much of the recent wave of AI-empowered research. Numerous studies have therefore been conducted using a range of approaches, which calls for a systematic review of the methodologies, feature sets, and techniques used for this task. Such a review can guide beginners toward composing an effective emotion recognition system. In this article, we conduct a rigorous review of state-of-the-art emotion recognition systems published in the recent literature and summarize the common emotion recognition steps with relevant definitions, theories, and analyses to provide the key knowledge needed to develop a proper framework. Moreover, the included studies are dichotomized into two categories: i) deep learning-based and ii) shallow machine learning-based emotion recognition systems. The reviewed systems are compared on methods, classifiers, the number of classified emotions, accuracy, and datasets used. An informative comparison, recent research trends, and recommendations for future research directions are also provided.
Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images
The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key technique to diagnose patients besides the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms due to their success in chest radiography image classification, cost efficiency, the lack of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% in the case of CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images.
Finally, we showed that the correct-class probability of a test image, which should ideally be 1, drops for both considered models as the perturbation increases; it can fall to 0.24 and 0.17 for the VGG16 model in the cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires more robustness.
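The FGSM attack described above perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. The following is a minimal numpy sketch on a toy logistic classifier with illustrative random weights, not the paper's VGG16/Inception-v3 pipeline; the mechanics (sign of the input gradient, bounded perturbation, probability drop) are the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(d loss / d x).
    For the logistic loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w."""
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy "image" (64 features) and a fixed linear classifier.
rng = np.random.default_rng(1)
w = rng.standard_normal(64)
b = 0.0
x = 0.1 * rng.standard_normal(64) + 0.05 * np.sign(w)  # weakly class-1
y = 1.0  # true label

p_clean = sigmoid(w @ x + b)          # correct-class probability before attack
x_adv = fgsm(x, y, w, b, eps=0.1)
p_adv = sigmoid(w @ x_adv + b)        # probability after the attack
```

Each pixel moves by at most `eps`, so the perturbation stays visually negligible, yet the correct-class probability drops sharply, which is exactly the failure mode the abstract reports for the COVID-19 classifiers.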
Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images (Preprint)
BACKGROUND
The COVID-19 pandemic requires quick isolation of infected patients; thus, high-sensitivity radiology images could be a key diagnostic technique alongside the PCR approach. Pre-trained deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms, owing to their success in radiology image classification, cost efficiency, the lack of expert radiologists, and the need for faster processing during the pandemic. Such open-source models and parameters, data sharing to build large repositories for rare diseases, and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the Fast Gradient Sign Method (FGSM) attack.
OBJECTIVE
This study aims to explore the potential vulnerability of state-of-the-art deep transfer learning models for COVID-19 classification from chest radiography images to the Fast Gradient Sign Method (FGSM) based adversarial attack.
METHODS
Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification from the frequently used VGG16 and InceptionV3 convolutional neural network architectures and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, we crafted the FGSM attack against these prediction models and visualized how varying the adversarial perturbation affects the visual perceptibility of the radiography images. Thirdly, we computed the decrease in overall accuracy, the correct classification probability score, and the total misclassified samples to quantify the performance drop of these models. The experiments were validated using publicly available COVID-19 patient data.
RESULTS
We collected 268 publicly available, labeled X-ray images and 746 CT images. Before the attack, the developed transfer learning models reached above 95% accuracy, with F1 and AUC scores close to 1, for both X-ray and CT image-based COVID-19 classification. Our study then illustrates that misclassification can occur with very minor perturbations of 0.009 and 0.003 for the FGSM attack on these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the images. In addition, we demonstrated that a successful FGSM attack can decrease the accuracy by 16.67% and 55% for X-ray images and by 70% and 40% for CT images when classifying with VGG16 and InceptionV3, respectively. Finally, the correct class probability of a test image is found to drop from 1 to 0.24 and 0.17 for the VGG16 model for X-ray and CT images, respectively.
CONCLUSIONS
Frequently used chest radiology-based COVID-19 detection models such as VGG16 and InceptionV3 can suffer significantly from the FGSM attack. Extensive analysis of the probability scores, misclassifications, and the perturbation's effect on visual perception clearly illustrates this vulnerability. The InceptionV3 model is found to be more robust than VGG16, although FGSM can compromise both. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires more robustness.
