Unsupervised Body Part Regression via Spatially Self-ordering Convolutional Neural Networks
Automatic body part recognition for CT slices can benefit various medical
image applications. Recent deep learning methods demonstrate promising
performance but require large amounts of labeled images for training. The
intrinsic structural, superior-inferior slice ordering
information in CT volumes is not fully exploited. In this paper, we propose a
convolutional neural network (CNN) based Unsupervised Body part Regression
(UBR) algorithm to address this problem. A novel unsupervised learning method
and two inter-sample CNN loss functions are presented. Distinct from previous
work, UBR builds a coordinate system for the human body and outputs a
continuous score for each axial slice, representing the normalized position of
the body part in the slice. The training process of UBR resembles a
self-organization process: slice scores are learned from inter-slice
relationships. The training samples are unlabeled CT volumes that are abundant,
thus no extra annotation effort is needed. UBR is simple, fast, and accurate.
Quantitative and qualitative experiments validate its effectiveness. In
addition, we show two applications of UBR in network initialization and anomaly
detection.
Comment: Oral presentation in ISBI
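The self-ordering idea above — learning a continuous score per axial slice purely from inter-slice relationships in unlabeled volumes — can be sketched as an ordering loss over the scores of slices sampled in physical order from one CT volume. The formulation below (a monotonicity term plus an equal-spacing term) is an illustrative stand-in, not necessarily the paper's exact loss functions:

```python
import numpy as np

def order_loss(scores):
    """Illustrative inter-slice ordering loss. `scores` are the network
    outputs for equidistant axial slices sampled from one CT volume, listed
    in superior-inferior order. Low loss means the scores increase
    monotonically and at a roughly constant rate."""
    scores = np.asarray(scores, dtype=float)
    diffs = np.diff(scores)                     # consecutive score gaps
    # Monotonicity term: each gap should be positive (log-sigmoid surrogate).
    mono = np.log1p(np.exp(-diffs)).sum()
    # Equal-spacing term: gaps of equidistant slices should be equal.
    spacing = np.square(diffs - diffs.mean()).sum()
    return mono + spacing

# A correctly ordered, evenly spaced score sequence scores lower than a
# scrambled one, which is the only supervision signal the volumes provide.
loss_ordered = order_loss([0.1, 0.2, 0.3, 0.4])
loss_scrambled = order_loss([0.4, 0.1, 0.3, 0.2])
```

Because the loss depends only on relative slice positions, no per-slice body-part annotations are needed, matching the abstract's claim that unlabeled CT volumes suffice for training.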
ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases
The chest X-ray is one of the most commonly accessible radiological
examinations for screening and diagnosis of many lung diseases. A tremendous
number of X-ray imaging studies accompanied by radiological reports are
accumulated and stored in many modern hospitals' Picture Archiving and
Communication Systems (PACS). On the other hand, it remains an open question
how this type of hospital-size knowledge database containing invaluable imaging
informatics (i.e., loosely labeled) can be used to facilitate the data-hungry
deep learning paradigms in building truly large-scale high precision
computer-aided diagnosis (CAD) systems.
In this paper, we present a new chest X-ray database, namely "ChestX-ray8",
which comprises 108,948 frontal-view X-ray images of 32,717 unique patients
with eight disease image labels (each image can carry multiple labels)
text-mined from the associated radiological reports using natural language
processing. Importantly, we demonstrate that these commonly occurring thoracic
diseases can be detected and even spatially-located via a unified
weakly-supervised multi-label image classification and disease localization
framework, which is validated using our proposed dataset. Although the initial
quantitative results are promising as reported, deep convolutional neural
network based "reading chest X-rays" (i.e., recognizing and locating the common
disease patterns trained with only image-level labels) remains a strenuous task
for fully-automated high precision CAD systems. Data download link:
https://nihcc.app.box.com/v/ChestXray-NIHCC
Comment: CVPR 2017 spotlight; V1: CVPR submission + supplementary; V2:
statistics and benchmark results on the published ChestX-ray14 dataset
updated in Appendix B; V3: minor correction; V4: new data download link
updated: https://nihcc.app.box.com/v/ChestXray-NIHCC; V5: benchmark results
on the published data split updated in the appendix
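A standard building block for the kind of weakly-supervised localization described above — spatially locating disease patterns when only image-level labels are available — is a class activation map computed from the final convolutional features and the weights of a global-pooling classifier. The sketch below is an illustrative version of that general technique, not the paper's exact framework; the function name and shapes are assumptions:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a class activation heatmap for one image.
    features:   (C, H, W) final conv feature maps.
    fc_weights: (num_classes, C) weights of the classifier that follows
                global average pooling.
    Returns an (H, W) map in [0, 1] showing where spatial evidence for
    class `class_idx` concentrates, despite training on image-level labels."""
    # Weighted sum of feature channels for the requested class.
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()            # normalize to [0, 1] for visualization
    return cam
```

Thresholding such a heatmap yields approximate bounding boxes, which is how image-level multi-label training can still produce spatial disease localizations.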
Deep convolutional networks for automated detection of posterior-element fractures on spine CT
Injuries of the spine, and its posterior elements in particular, are a common
occurrence in trauma patients, with potentially devastating consequences.
Computer-aided detection (CADe) could assist in the detection and
classification of spine fractures. Furthermore, CADe could help assess the
stability and chronicity of fractures, as well as facilitate research into
optimization of treatment paradigms.
In this work, we apply deep convolutional networks (ConvNets) for the
automated detection of posterior element fractures of the spine. First, the
vertebral bodies of the spine together with their posterior elements are segmented in
CT using multi-atlas label fusion. Then, edge maps of the posterior elements
are computed. These edge maps serve as candidate regions for predicting a set
of probabilities for fractures along the image edges using ConvNets in a 2.5D
fashion (three orthogonal patches in axial, coronal and sagittal planes). We
explore three different methods for training the ConvNet using 2.5D patches
along the edge maps of 'positive' (i.e., fractured) and 'negative' (i.e.,
non-fractured) posterior elements.
An experienced radiologist retrospectively marked the location of 55
displaced posterior-element fractures in 18 trauma patients. We randomly split
the data into training and testing cases. In testing, we achieve an
area-under-the-curve of 0.857. This corresponds to 71% or 81% sensitivities at
5 or 10 false-positives per patient, respectively. Analysis of our set of
trauma patients demonstrates the feasibility of detecting posterior-element
fractures in spine CT images using computer vision techniques such as deep
convolutional networks.
Comment: To be presented at SPIE Medical Imaging, 2016, San Diego
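The 2.5D sampling described above — three orthogonal patches in the axial, coronal, and sagittal planes around a candidate location — can be sketched as a simple slicing operation on the CT volume. The helper below is a hypothetical illustration (function name, patch size, and the assumption that candidates lie away from the volume border are all mine):

```python
import numpy as np

def patches_25d(volume, z, y, x, half=16):
    """Extract the three orthogonal 2.5D patches, each (2*half, 2*half),
    centered on voxel (z, y, x) of a (Z, Y, X) CT array. Illustrative
    sketch: the center is assumed far enough from the borders that no
    padding is required."""
    axial    = volume[z, y-half:y+half, x-half:x+half]   # fixed z-plane
    coronal  = volume[z-half:z+half, y, x-half:x+half]   # fixed y-plane
    sagittal = volume[z-half:z+half, y-half:y+half, x]   # fixed x-plane
    # Stack as a 3-channel input for a ConvNet classifier.
    return np.stack([axial, coronal, sagittal])
```

Feeding the three views as channels gives the classifier 3D context at roughly the cost of a 2D network, which is the usual motivation for the 2.5D scheme.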
DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
Automatic organ segmentation is an important yet challenging problem for
medical image analysis. The pancreas is an abdominal organ with very high
anatomical variability. This inhibits previous segmentation methods from
achieving high accuracies, especially compared to other organs such as the
liver, heart or kidneys. In this paper, we present a probabilistic bottom-up
approach for pancreas segmentation in abdominal computed tomography (CT) scans,
using multi-level deep convolutional networks (ConvNets). We propose and
evaluate several variations of deep ConvNets in the context of hierarchical,
coarse-to-fine classification on image patches and regions, i.e. superpixels.
We first present a dense labeling of local image patches via P-ConvNet and
nearest neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet)
that samples a set of bounding boxes around each image superpixel at
different scales of contexts in a "zoom-out" fashion. Our ConvNets learn to
assign class probabilities for each superpixel region of being pancreas.
Last, we study a stacked R2-ConvNet leveraging the joint space of CT
intensities and the P-ConvNet dense
probability maps. Both 3D Gaussian smoothing and 2D conditional random fields
are exploited as structured predictions for post-processing. We evaluate on CT
images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity
Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing.
Comment: To be presented at MICCAI 2015 - 18th International Conference on
Medical Image Computing and Computer Assisted Intervention, Munich, Germany
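The Dice Similarity Coefficient reported above is the standard overlap measure for segmentation, DSC = 2|A ∩ B| / (|A| + |B|) for predicted mask A and ground-truth mask B. A minimal sketch (the small epsilon guarding against two empty masks is my addition):

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |pred AND truth| / (|pred| + |truth|).
    Returns 1.0 for identical non-empty masks, 0.0 for disjoint masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    # eps avoids division by zero when both masks are empty.
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

Because DSC normalizes overlap by the sizes of both masks, it penalizes over- and under-segmentation symmetrically, which is why it is preferred over raw voxel accuracy for small organs like the pancreas.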
