Distinguishing features in the presentations of childhood inflammatory brain diseases at a tertiary-care centre
Can Machine Learning Be Better than Biased Readers?
Background: Training machine learning (ML) models in medical imaging requires large amounts of labeled data. To minimize the labeling workload, training data are commonly divided among multiple readers who annotate separately without consensus, and the labeled data are then combined to train an ML model. This can produce a biased training dataset and degrade the prediction performance of the ML algorithm. The purpose of this study is to determine whether ML algorithms can overcome biases caused by multiple readers labeling without consensus. Methods: This study used a publicly available chest X-ray dataset of pediatric pneumonia. As an analogy to a practical dataset lacking labeling consensus among multiple readers, random and systematic errors were artificially added to the dataset to generate biased data for a binary classification task. A ResNet18-based convolutional neural network (CNN) served as the baseline model, and a ResNet18 model with a regularization term added to the loss function was used to test for improvement over the baseline. Results: False-positive labels, false-negative labels, and random errors (5–25%) caused a loss of AUC (0–14%) when training the binary CNN classifier. The model with the regularized loss function improved the AUC (75–84%) over that of the baseline model (65–79%). Conclusion: This study indicates that ML algorithms can overcome individual readers' biases when consensus is not available. Regularized loss functions are recommended when annotation tasks are allocated to multiple readers, as they are easy to implement and effective in mitigating biased labels.
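As a concrete illustration of the approach described above, here is a minimal PyTorch sketch of a binary classifier trained with a regularized loss. The abstract does not specify the exact regularization term, so the confidence-penalty (negative-entropy) form, the lambda_reg weight, and the dummy batch below are illustrative assumptions rather than the study's implementation.

```python
# Hypothetical sketch: cross-entropy plus a confidence-penalty regularizer,
# one common way to make a classifier more robust to noisy or biased labels.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class RegularizedCELoss(nn.Module):
    def __init__(self, lambda_reg: float = 0.1):
        super().__init__()
        self.lambda_reg = lambda_reg  # illustrative weight, not from the study

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets)
        # Penalize over-confident predictions: negative entropy of the softmax output.
        probs = F.softmax(logits, dim=1)
        neg_entropy = (probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1).mean()
        return ce + self.lambda_reg * neg_entropy

model = resnet18(num_classes=2)              # binary pneumonia vs. normal classifier
criterion = RegularizedCELoss(lambda_reg=0.1)
logits = model(torch.randn(4, 3, 224, 224))  # dummy batch standing in for chest X-rays
loss = criterion(logits, torch.tensor([0, 1, 1, 0]))
loss.backward()
```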
Diffusion-Based Image Synthesis or Traditional Augmentation for Enriching Musculoskeletal Ultrasound Datasets
Background: Machine learning models can provide quick and reliable assessments in place of medical practitioners. With over 50 million adults in the United States suffering from osteoarthritis, there is a need for models capable of interpreting musculoskeletal ultrasound images. However, machine learning requires large amounts of data, which poses a significant challenge in medical imaging. We therefore explore two strategies for enriching a musculoskeletal ultrasound dataset that work around this limitation: traditional augmentation and diffusion-based image synthesis. Methods: First, we generate augmented and synthetic images to enrich our dataset. We then compare the images qualitatively and quantitatively and evaluate their effectiveness in training a deep learning model to detect thickened synovium and knee joint recess distension. Results: Our results suggest that synthetic images exhibit a degree of anatomical fidelity and diversity and help a model learn representations consistent with human opinion, whereas augmented images may impede model generalizability. Finally, a model trained on synthetically enriched data outperforms models trained on un-enriched and augmented datasets. Conclusions: We demonstrate that diffusion-based image synthesis is preferable to traditional augmentation. Our study underscores the importance of dataset enrichment strategies for addressing data scarcity in medical imaging and paves the way for the development of more advanced diagnostic tools.
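To make the two enrichment strategies concrete, the following is a minimal sketch of a traditional-augmentation pipeline using torchvision, with the diffusion-based alternative indicated in comments. The transform choices, parameters, and the fine-tuned diffusion checkpoint are hypothetical and not the study's exact configuration.

```python
# Sketch of "traditional augmentation" for a grayscale ultrasound frame; in practice
# real knee-ultrasound images would be loaded instead of the dummy image below.
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

img = Image.new("L", (480, 480), color=64)          # stand-in for one ultrasound frame
augmented_variants = [augment(img) for _ in range(5)]  # enrich the dataset with 5 variants

# Diffusion-based alternative (sketch only): sample new images from a diffusion model
# fine-tuned on the same ultrasound dataset, e.g. with the Hugging Face `diffusers` library.
# from diffusers import DDPMPipeline
# pipe = DDPMPipeline.from_pretrained("path/to/finetuned-ultrasound-ddpm")  # hypothetical checkpoint
# synthetic_images = pipe(batch_size=5).images
```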
Using Cluster Analysis to Assess the Impact of Dataset Heterogeneity on Deep Convolutional Network Accuracy: A First Glance
Countermeasures against methotrexate intolerance in juvenile idiopathic arthritis instituted by parents show no effect
Will Obesity Increase the Proportion of Children and Adolescents Recommended for a Statin?
NeRF-US: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild
Current methods for performing 3D reconstruction and novel view synthesis (NVS) in ultrasound imaging data often face severe artifacts when training NeRF-based approaches. The artifacts produced by current approaches differ from NeRF floaters in general scenes because of the unique nature of ultrasound capture. Furthermore, existing models fail to produce reasonable 3D reconstructions when ultrasound data is captured or obtained casually in uncontrolled environments, which is common in clinical settings. Consequently, existing reconstruction and NVS methods struggle to handle ultrasound motion, fail to capture intricate details, and cannot model transparent and reflective surfaces. In this work, we introduced NeRF-US, which incorporates 3D-geometry guidance for border probability and scattering density into NeRF training, while also utilizing ultrasound-specific rendering over traditional volume rendering. These 3D priors are learned through a diffusion model. Through experiments conducted on our new "Ultrasound in the Wild" dataset, we observed accurate, clinically plausible, artifact-free reconstructions.
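To illustrate what "ultrasound-specific rendering over traditional volume rendering" might look like, here is a conceptual NumPy sketch of echo formation along a single ray from per-sample border probabilities and scattering densities. It is not the authors' implementation; the attenuation model, parameter names, and toy values are assumptions, and the diffusion-model prior is not reproduced.

```python
# Conceptual sketch: unlike standard NeRF volume rendering, each sample's echo
# depends on the beam energy that survives reflection and attenuation at
# shallower samples, so a strong boundary shadows the tissue behind it.
import numpy as np

def render_ultrasound_ray(border_prob, scatter_density, attenuation=0.98):
    """border_prob, scatter_density: per-sample arrays ordered from the transducer outward."""
    remaining_energy = 1.0
    echo = np.zeros_like(border_prob)
    for i in range(len(border_prob)):
        # Echo at this depth: specular reflection at tissue borders plus diffuse backscatter.
        echo[i] = remaining_energy * (border_prob[i] + scatter_density[i])
        # Energy lost to reflection and attenuation before reaching deeper samples.
        remaining_energy *= (1.0 - border_prob[i]) * attenuation
    return echo

# Toy ray: a strong tissue boundary at sample 10 shadows the samples behind it.
bp = np.zeros(32)
bp[10] = 0.8
sd = np.full(32, 0.05)
print(render_ultrasound_ray(bp, sd))
```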
- …
