
    Enhancing Heart Disease Prediction With Reinforcement Learning and Data Augmentation

    The study presents a novel method to improve the prediction accuracy of cardiac disease by combining data augmentation techniques with reinforcement learning. The complex nature of cardiac data frequently challenges traditional machine learning models, resulting in subpar performance. In response, our fusion methodology improves predictive capability by augmenting the data and exploiting reinforcement learning's strength in sequential decision-making. Our method predicts cardiac disease with a 94% accuracy rate, a result that outperforms existing techniques and reflects a deeper comprehension of intricate data relationships. The amalgamation of reinforcement learning and data augmentation not only yields superior predictive accuracy but also has noteworthy consequences for patient care and accurate cardiac diagnosis. Through the efficient combination of these approaches, our method provides a powerful response to the difficulties posed by complex cardiac data. The success of this integration demonstrates its potential to transform disease prediction and prevention and ultimately improve patient outcomes.
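The abstract above does not include code, but the data-augmentation half of the pipeline can be illustrated generically. The sketch below is only an assumption about how tabular augmentation might look (the name `augment_records` is hypothetical, and Gaussian jitter is just one common strategy): it expands a small set of numeric patient records with noisy copies.

```python
import random

def augment_records(records, noise_scale=0.05, copies=1, seed=0):
    """Return the original records plus jittered synthetic copies.

    Each synthetic record perturbs every numeric feature with small
    Gaussian noise, a common tabular augmentation strategy.
    """
    rng = random.Random(seed)
    augmented = list(records)
    for _ in range(copies):
        for rec in records:
            augmented.append([x + rng.gauss(0.0, noise_scale) for x in rec])
    return augmented

# Example: two 3-feature records become four after one augmentation pass.
base = [[0.5, 1.2, 0.9], [0.7, 0.8, 1.1]]
expanded = augment_records(base, copies=1)
```

The originals are kept in place, so a downstream classifier sees both real and synthetic samples.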

    Corn leaf disease diagnosis: enhancing accuracy with resnet152 and grad-cam for explainable AI

    Abstract Objective The agricultural sector is important to the global food supply and to economic growth, especially in developing countries where it forms the backbone of the economy. Corn is among the crops most important to the world’s food system. Unfortunately, the corn crop is vulnerable to numerous diseases, which can result in heavy losses and disruption of the food supply. Hence, it is important to detect and classify these diseases accurately and promptly to limit losses and achieve the highest possible productivity. This study addresses these problems by constructing a trustworthy and interpretable deep learning model focused on accurate identification of corn leaf disease. Materials The cumulative dataset comprises 4188 images divided into four classes of corn leaf condition: 1146 images of blight, 1306 images of common rust, 574 images of gray spot, and 1162 images of healthy leaves. To train and validate the model, 70% of the data was used for training and 30% for testing. This division leaves enough data for model training while reserving enough for evaluation on new data. Methods The research employs ResNet152, a well-known deep learning architecture for image classification, because it uses residual connections that ease the training of deep networks. Furthermore, Grad-CAM (Gradient-weighted Class Activation Mapping) is employed to improve the explainability of the model. Grad-CAM produces human-interpretable heatmaps that indicate the areas of the corn leaf with the greatest influence on the model’s prediction, which is useful for understanding the model. The model classifies corn leaves into four classes: healthy (H), blight (B), gray spot (GS), and common rust (CR), with precision and explainability.
Results Training the ResNet152 model produced remarkable results: 99.95% accuracy during training and 98.34% during testing. Applying Grad-CAM for interpretability also proved useful, as it created heatmaps indicating the parts of the leaf images most important to the model’s predictions. This improved understanding of the model and its predictions, which is especially important for users such as farmers who require accurate disease diagnoses. Conclusion This study demonstrates the effectiveness of the ResNet152 model, enhanced with Grad-CAM for explainability, in classifying corn leaf diseases. Achieving high training and testing accuracy, the model provides transparent, human-readable explanations, fostering trust and reliability in automated disease diagnosis and aiding farmers in making better-informed decisions to improve crop yields.
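The Grad-CAM step described above has a simple core that can be sketched independently of any particular network. The following minimal NumPy version is an illustration, not the paper's implementation (the function name and toy inputs are assumptions): it pools the class-score gradients into per-channel weights, forms a weighted sum of the feature maps, and applies a ReLU before normalizing.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Core Grad-CAM computation.

    feature_maps: (C, H, W) activations of the last conv layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: one informative channel, one silent channel.
maps = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
grads = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
heatmap = grad_cam(maps, grads)
```

In practice the heatmap is upsampled to the input image size and overlaid on the leaf photo.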

    Multimodal Biomedical Image Segmentation using Multi-Dimensional U-Convolutional Neural Network

    Abstract Deep learning has recently advanced the segmentation of medical images. In this regard, U-Net is the predominant deep neural network, and its architecture is the most prevalent in the medical imaging community. Experiments conducted on difficult datasets led us to conclude that the traditional U-Net framework is deficient in certain respects, despite its overall excellence in segmenting multimodal medical images. We therefore propose several modifications to the existing state-of-the-art U-Net model. The technical approach applies a Multi-Dimensional U-Convolutional Neural Network to achieve accurate segmentation of multimodal biomedical images, enhancing precision and comprehensiveness in identifying and analyzing structures across diverse imaging modalities. As a result of these enhancements, we propose a novel framework called the Multi-Dimensional U-Convolutional Neural Network (MDU-CNN) as a potential successor to U-Net. On a large set of multimodal medical images, we compared the proposed MDU-CNN framework to the classical U-Net. Improvements are small on clean images and substantial on difficult ones. We tested our model on five distinct datasets, each presenting unique challenges, and obtained performance improvements of 1.32%, 5.19%, 4.50%, 10.23%, and 0.87%, respectively.
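Segmentation improvements like those reported above are usually measured with an overlap score. As a hedged illustration (the abstract does not state its exact evaluation metric), the widely used Dice similarity coefficient for binary masks can be computed as follows:

```python
def dice_score(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1 lists.

    Dice = 2 * |pred AND truth| / (|pred| + |truth|); 1.0 means perfect overlap.
    """
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Example: prediction covers two pixels, ground truth one; they share one pixel.
score = dice_score([1, 1, 0, 0], [1, 0, 0, 0])
```

Percentage-point gains between two models can then be read off as differences in such scores across datasets.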

    Detection and classification of brain tumor using hybrid deep learning models

    Abstract Accurately classifying brain tumor types is critical for timely diagnosis and potentially saving lives. Magnetic Resonance Imaging (MRI) is a widely used non-invasive method for obtaining high-contrast grayscale brain images, primarily for tumor diagnosis. The application of Convolutional Neural Networks (CNNs) in deep learning has revolutionized diagnostic systems, leading to significant advancements in medical imaging interpretation. In this study, we employ a transfer learning-based fine-tuning approach using EfficientNets to classify brain tumors into three categories: glioma, meningioma, and pituitary tumors. We utilize the publicly accessible CE-MRI Figshare dataset to fine-tune five pre-trained models from the EfficientNets family, ranging from EfficientNetB0 to EfficientNetB4. Our approach involves a two-step process to refine the pre-trained EfficientNet model. First, we initialize the model with weights from the ImageNet dataset. Then, we add additional layers, including top layers and a fully connected layer, to enable tumor classification. We conduct various tests to assess the robustness of our fine-tuned EfficientNets in comparison to other pre-trained models. Additionally, we analyze the impact of data augmentation on the model's test accuracy. To gain insights into the model's decision-making, we employ Grad-CAM visualization to examine the attention maps generated by the best-performing model, effectively highlighting tumor locations within brain images. Our results reveal that using EfficientNetB2 as the underlying framework yields significant performance improvements. Specifically, the overall test accuracy, precision, recall, and F1-score were found to be 99.06%, 98.73%, 99.13%, and 98.79%, respectively.
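The precision, recall, and F1 figures quoted above follow directly from a confusion matrix. As a small illustration (the matrix values below are invented, not the paper's results), per-class metrics for a three-class tumor problem can be derived like this:

```python
def per_class_metrics(conf):
    """conf[i][j] = count of samples with true class i predicted as class j.

    Returns a list of (precision, recall, f1) tuples, one per class.
    """
    n = len(conf)
    metrics = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(n)) - tp   # predicted c, but wrong
        fn = sum(conf[c]) - tp                        # true c, but missed
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        metrics.append((prec, rec, f1))
    return metrics

# Hypothetical counts for glioma / meningioma / pituitary predictions.
conf = [[9, 1, 0],
        [0, 10, 0],
        [0, 0, 10]]
metrics = per_class_metrics(conf)
```

Averaging the per-class values (macro or weighted) yields overall scores like those reported in the abstract.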

    Local-Ternary-Pattern-Based Associated Histogram Equalization Technique for Cervical Cancer Detection

    Every year, cervical cancer is a leading cause of mortality among women worldwide. This cancer can be cured if it is detected early and patients are treated promptly. This study proposes a new strategy for the detection of cervical cancer using cervigram images. The associated histogram equalization (AHE) technique is used to enhance the edges of the cervical image, and the finite ridgelet transform is then used to generate a multi-resolution image. From this transformed multi-resolution cervical image, features such as ridgelets, gray-level run-length matrices, moment invariants, and the enhanced local ternary pattern are extracted. A feed-forward backpropagation neural network is used to train and test these extracted features in order to classify the cervical images as normal or abnormal. To detect and segment cancer regions, morphological procedures are applied to the abnormal cervical images. The cervical cancer detection system’s performance metrics include 98.11% sensitivity, 98.97% specificity, 99.19% accuracy, a PPV of 98.88%, an NPV of 91.91%, an LPR of 141.02%, an LNR of 0.0836, 98.13% precision, 97.15% FPs, and 90.89% FNs. The simulation outcomes show that the proposed method detects and segments cervical cancer better than traditional methods.
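The enhanced local ternary pattern feature mentioned above builds on the basic LTP encoding. A minimal sketch of that basic encoding on a single 3x3 patch follows; the function name and threshold are illustrative assumptions, and the paper's enhanced variant may differ.

```python
def ltp_codes(patch, t=5):
    """Local ternary pattern of a 3x3 patch (list of 9 ints, center at index 4).

    Neighbors brighter than center+t map to +1, darker than center-t to -1,
    otherwise 0. Returns the 'upper' and 'lower' binary patterns that LTP
    is conventionally split into for histogramming.
    """
    c = patch[4]
    order = [0, 1, 2, 5, 8, 7, 6, 3]   # clockwise neighbor order
    tern = []
    for i in order:
        if patch[i] > c + t:
            tern.append(1)
        elif patch[i] < c - t:
            tern.append(-1)
        else:
            tern.append(0)
    upper = [1 if v == 1 else 0 for v in tern]
    lower = [1 if v == -1 else 0 for v in tern]
    return upper, lower

# One bright corner, one dark corner, center 50, threshold 5.
upper, lower = ltp_codes([60, 50, 50, 50, 50, 50, 50, 50, 40], t=5)
```

Sliding this over the whole image and histogramming the two binary codes yields the texture feature vector.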

    Detection and Grade Classification of Diabetic Retinopathy and Adult Vitelliform Macular Dystrophy Based on Ophthalmoscopy Images

    Diabetic retinopathy (DR) and adult vitelliform macular dystrophy (AVMD) may cause significant vision impairment or blindness. Prompt diagnosis is essential for patient health. Photographic ophthalmoscopy is a common eye test that checks retinal health quickly, painlessly, and easily. Ophthalmoscopy images of these two illnesses are challenging to analyse, since early indications are typically absent. We propose a deep learning strategy called ActiveLearn to address these concerns. The approach relies on the ActiveLearn Transformer as its central structure. Owing to the peculiarities of medical images, such as their limited quantity and generally rigid structure, transfer learning strategies that strengthen the model's low-level features and data augmentation strategies that balance the data are also incorporated. On the benchmark dataset, the proposed technique is shown to outperform state-of-the-art methods in both binary and multiclass classification tasks, with accuracy scores of 97.9% and 97.1%, respectively.
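One simple way to realize the class-balancing augmentation mentioned above is random oversampling of minority classes. The sketch below is a generic illustration only (the dictionary layout and function name are assumptions, not the paper's method):

```python
import random

def oversample_balance(samples_by_class, seed=0):
    """Duplicate random samples from minority classes until every class
    count matches the largest class (simple random oversampling)."""
    rng = random.Random(seed)
    target = max(len(items) for items in samples_by_class.values())
    balanced = {}
    for label, items in samples_by_class.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        balanced[label] = list(items) + extra
    return balanced

# Toy dataset: four DR samples but only two AVMD samples.
data = {"DR": [1, 2, 3, 4], "AVMD": [5, 6]}
balanced = oversample_balance(data)
```

In image pipelines the duplicated samples would additionally be transformed (flips, rotations) so the copies are not identical.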

    Hybrid Whale and Gray Wolf Deep Learning Optimization Algorithm for Prediction of Alzheimer’s Disease

    In recent years, finding the optimal solution for image segmentation has become more important in many applications. The whale optimization algorithm (WOA) is a metaheuristic optimization technique that can reach the global optimum while being simple to implement and applicable to many real-time problems. As problem complexity increases, however, the WOA may become stuck in local optima rather than reaching the global optimum, which hinders obtaining a better solution. For this reason, this paper recommends a hybrid algorithm based on a mixture of the WOA and gray wolf optimization (GWO) for segmenting brain subregions such as the gray matter (GM), white matter (WM), ventricle, corpus callosum (CC), and hippocampus (HC). The hybrid proceeds in two steps, i.e., the WOA followed by the GWO. The proposed method helps in diagnosing Alzheimer’s disease (AD) by segmenting the brain subregions (SRs) using a hybrid of the WOA and GWO (H-WOA-GWO, denoted HWGO). The segmented regions were validated with different measures and show a better accuracy of 92%. Following segmentation, a deep learning classifier was utilized to categorize normal and AD images; the combination of the WOA and GWO yields a classification accuracy of 90%. As a result, the suggested method proves to be a highly successful technique for identifying the ideal solution, paired with a deep learning algorithm for classification.
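The GWO half of the hybrid described above can be sketched compactly. The following is a minimal textbook-style GWO on a toy sphere function, not the paper's H-WOA-GWO (all names and parameters are illustrative assumptions): each wolf moves toward the three current best solutions (alpha, beta, delta) while a step coefficient decays from 2 to 0.

```python
import random

def gwo_minimize(f, dim=2, wolves=10, iters=50, seed=0):
    """Minimal grey wolf optimizer sketch for unconstrained minimization.

    The three best wolves pull every other wolf toward them; the step
    coefficient 'a' decays linearly over the iterations.
    """
    rng = random.Random(seed)
    pack = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(wolves)]
    for it in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1 - it / iters)
        for i in range(wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)   # exploration/exploitation step
                    C = 2 * rng.random()
                    pulls.append(leader[d] - A * abs(C * leader[d] - pack[i][d]))
                new.append(sum(pulls) / 3.0)
            pack[i] = new
    return min(pack, key=f)

# Minimize the sphere function f(x) = sum(x_i^2); optimum is the origin.
best = gwo_minimize(lambda x: sum(v * v for v in x))
```

In the paper's setting the objective would instead score candidate segmentation thresholds for the brain subregions, with a WOA stage run beforehand.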