3 research outputs found
Data Augmentation and Lightweight Convolutional Neural Networks in Classification of Thoracic Diseases
The medical field has undergone a tremendous transformation driven by technological breakthroughs, with advanced medical imaging techniques becoming indispensable for diagnosis and treatment. Convolutional neural network (CNN) models have demonstrated remarkable accuracy in analyzing and classifying medical images, often surpassing human performance. In this study, we contrast two important methods for classifying medical images: lightweight CNN models, which are tailored for devices with limited resources, and data augmentation, which improves model generalization. The main goal is to evaluate these models' effectiveness and performance in identifying thoracic illnesses such as breast cancer, COVID-19 effects, and pneumonia. To enhance model performance, the study used preprocessed and augmented publicly available chest X-ray and computed tomography images. The specific CNN models used in the experiments were MobileNet, EfficientNet-B0, ResNet50, and DenseNet121. The results show that data augmentation greatly increases model accuracy while lowering the risk of overfitting. Lightweight models provided the best compromise between accuracy and resource efficiency, performing on par with more complex models. Their suitability for portable medical equipment was validated by testing on resource-constrained devices, enabling quick and precise pre-diagnosis. In addition to highlighting the potential of lightweight CNNs to increase diagnostic accessibility, these findings emphasize the importance of balancing performance and efficiency in medical applications, particularly in resource-limited settings.
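The data-augmentation step can be illustrated with a minimal NumPy sketch: random horizontal flips and small translations are common geometric augmentations for chest X-rays. The transforms and their parameters here are illustrative assumptions, not the exact pipeline used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng):
    """Apply simple stochastic augmentations of the kind used to reduce
    overfitting on X-ray data: a random horizontal flip and a small
    random translation (parameters are hypothetical)."""
    out = image
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    # shift by up to 2 pixels per axis; np.roll returns a copy,
    # so the original image is never mutated
    dy, dx = rng.integers(-2, 3, size=2)
    out = np.roll(out, (int(dy), int(dx)), axis=(0, 1))
    # zero out the pixels that wrapped around
    if dy > 0:
        out[:dy, :] = 0
    elif dy < 0:
        out[dy:, :] = 0
    if dx > 0:
        out[:, :dx] = 0
    elif dx < 0:
        out[:, dx:] = 0
    return out

# expand a toy 8x8 "scan" into several augmented variants
image = np.arange(64, dtype=float).reshape(8, 8)
batch = np.stack([augment(image, rng) for _ in range(4)])
print(batch.shape)  # (4, 8, 8)
```

Each call produces a slightly different variant of the same image, effectively enlarging the training set without new labeled data.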
Activity-Aware Vital Sign Monitoring Based on a Multi-Agent Architecture
Vital sign monitoring outside the clinical environment based on wearable sensors ensures better support in assessing a patient’s health condition, and in case of health deterioration, automatic alerts can be sent to the care providers. In everyday life, the users can perform different physical activities, and considering that vital sign measurements depend on the intensity of the activity, we proposed an architecture based on the multi-agent paradigm to handle this issue dynamically. Different types of agents were proposed that processed different sensor signals and recognized simple activities of daily living. The system was validated using a real-life dataset where subjects wore accelerometer sensors on the chest, wrist, and ankle. The system relied on ontology-based models to address the data heterogeneity and combined different wearable sensor sources in order to achieve better performance. The results showed an accuracy of 95.25% on intersubject activity classification. Moreover, the proposed method, which automatically extracted vital sign threshold ranges for each physical activity recognized by the system, showed promising results for remote health status evaluation.
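The per-activity threshold extraction can be sketched with a toy example: group vital-sign samples by recognized activity and derive a normal range per activity. Here heart rate is used as the vital sign and a mean ± k·σ rule stands in for the paper's method; both the rule and the parameter k are assumptions for illustration.

```python
import statistics
from collections import defaultdict

def activity_thresholds(samples, k=2.0):
    """Group (activity, heart_rate) samples by activity and derive a
    per-activity normal range as mean +/- k standard deviations
    (a simple stand-in for the threshold-extraction step)."""
    by_activity = defaultdict(list)
    for activity, hr in samples:
        by_activity[activity].append(hr)
    ranges = {}
    for activity, values in by_activity.items():
        mu = statistics.mean(values)
        sd = statistics.stdev(values) if len(values) > 1 else 0.0
        ranges[activity] = (mu - k * sd, mu + k * sd)
    return ranges

# toy labeled samples: the same heart rate can be normal while
# walking but abnormal while resting
samples = [("resting", 62), ("resting", 66), ("resting", 64),
           ("walking", 95), ("walking", 101), ("walking", 98)]
thresholds = activity_thresholds(samples)
print(thresholds["resting"])  # (60.0, 68.0)
print(thresholds["walking"])  # (92.0, 104.0)
```

A monitoring agent would then compare an incoming measurement against the range for the currently recognized activity, raising an alert only when the value falls outside that activity-specific band.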
