Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they are basically unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.

Comment: Accepted by CVPR 2018. DeepLesion URL added.
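The triplet objective the abstract relies on can be sketched as follows; this is a minimal illustration of a margin-based triplet loss on toy embeddings, not the paper's network, sequential sampling strategy, or margin value, all of which are assumptions here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss: pull similar lesions together in the
    embedding space and push dissimilar ones at least `margin` apart.
    (Illustrative sketch; margin=0.2 is an assumed value.)"""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to the similar lesion
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to the dissimilar lesion
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the positive sits close to the anchor, the negative far away.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([1.0, 0.0])
print(triplet_loss(a, p, n))  # prints 0.0: the margin constraint is already satisfied
```

Minimizing this loss over many sampled triplets is what shapes the embedding so that retrieval and clustering by lesion type, location, and size become meaningful.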
End-to-End Adversarial Shape Learning for Abdomen Organ Deep Segmentation
Automatic segmentation of abdomen organs using medical imaging has many
potential applications in clinical workflows. Recently, the state-of-the-art
performance for organ segmentation has been achieved by deep learning models,
i.e., convolutional neural network (CNN). However, it is challenging to train
the conventional CNN-based segmentation models that aware of the shape and
topology of organs. In this work, we tackle this problem by introducing a novel
end-to-end shape learning architecture -- organ point-network. It takes deep
learning features as inputs and generates organ shape representations as points
that located on organ surface. We later present a novel adversarial shape
learning objective function to optimize the point-network to capture shape
information better. We train the point-network together with a CNN-based
segmentation model in a multi-task fashion so that the shared network
parameters can benefit from both shape learning and segmentation tasks. We
demonstrate our method with three challenging abdomen organs including liver,
spleen, and pancreas. The point-network generates surface points with
fine-grained details, which proves critical for improving organ segmentation.
Consequently, the deep segmentation model is improved by the introduced shape
learning as significantly better Dice scores are observed for spleen and
pancreas segmentation.

Comment: Accepted to International Workshop on Machine Learning in Medical
Imaging (MLMI 2019).
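A multi-task objective of the kind the abstract describes, combining a segmentation loss with a point-based shape loss, might be sketched like this. The Chamfer-distance shape term and the weight `lam` are illustrative stand-ins; the paper's actual shape supervision is an adversarial objective, which is not reproduced here.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def chamfer_distance(points_a, points_b):
    """Symmetric Chamfer distance between two (N, 3) point sets: each point
    is matched to its nearest neighbor in the other set."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def multitask_loss(pred_mask, gt_mask, pred_points, gt_points, lam=0.5):
    # lam is a hypothetical weighting; the paper instead optimizes the
    # point-network with an adversarial shape objective.
    return dice_loss(pred_mask, gt_mask) + lam * chamfer_distance(pred_points, gt_points)

mask = np.array([[1.0, 0.0], [0.0, 1.0]])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(multitask_loss(mask, mask, pts, pts))  # near 0.0 for a perfect prediction
```

Sharing the backbone between the segmentation head and the point-network is what lets the shape term regularize the segmentation, which is the mechanism behind the reported Dice improvements on spleen and pancreas.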
