k-strip: A novel segmentation algorithm in k-space for the application of skull stripping
Objectives: To present a novel deep learning-based skull-stripping algorithm for
magnetic resonance imaging (MRI) that works directly in the information-rich
k-space.
Materials and Methods: Using two datasets from different institutions with a
total of 36,900 MRI slices, we trained a deep learning-based model to work
directly with the complex raw k-space data. Skull-stripping masks generated by
HD-BET (Brain Extraction Tool) in the image domain served as the ground truth.
Results: On both datasets, the results closely matched the ground truth (Dice
scores of 92%-98% and Hausdorff distances under 5.5 mm). Slices above the eye
region reach Dice scores of up to 99%, while accuracy drops in the regions
around and below the eyes, where the output is partially blurred. The k-strip
output often shows smoothed edges at the boundary to the skull; binary masks
are created with an appropriate threshold.
Conclusion: With this proof-of-concept study, we were able to show the
feasibility of working in the k-space frequency domain, preserving phase
information, with consistent results. Future research should be dedicated to
discovering additional ways k-space can be used for innovative image analysis
and further workflows.
Comment: 11 pages, 6 figures, 2 tables
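To make the k-space workflow concrete, here is a minimal sketch of the round trip the abstract describes; kspace_model is a placeholder for the trained network, and the function names and thresholding constant are assumptions, not the published implementation:

import numpy as np

def image_to_kspace(img):
    # Centered 2D FFT; the result is complex-valued, preserving phase.
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

def kspace_to_image(k):
    # Inverse transform; the magnitude is used for mask creation.
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k))))

def skull_strip_slice(img, kspace_model, threshold=0.5):
    k_in = image_to_kspace(img)        # complex raw k-space input
    k_out = kspace_model(k_in)         # model operates directly in k-space
    stripped = kspace_to_image(k_out)  # back to the image domain
    # The output has smoothed edges at the skull boundary, so an
    # appropriate threshold turns it into a binary brain mask.
    return stripped > threshold * stripped.max()

The centered FFT pair keeps the complex phase available to the model, which is what distinguishes k-space processing from working on magnitude images.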
Haematological, clinical–chemical and immunological consequences of feeding Fusarium toxin contaminated diets to early lactating dairy cows
MedShapeNet – a large-scale dataset of 3D medical shapes for computer vision
Objectives: Shape is commonly used to describe objects. State-of-the-art
algorithms in medical imaging are predominantly diverging from computer vision,
where voxel grids, meshes, point clouds, and implicit surface models are used.
This is seen from the growing popularity of ShapeNet (51,300 models) and
Princeton ModelNet (127,915 models). However, a large collection of anatomical
shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is
missing.
Methods: We present MedShapeNet to translate data-driven vision algorithms to
medical applications and to adapt state-of-the-art vision algorithms to medical
problems. As a unique feature, we directly model the majority of shapes on the
imaging data of real patients. We present use cases in classifying brain
tumors, skull reconstructions, multi-class anatomy completion, education, and
3D printing.
Results: By now, MedShapeNet includes 23 datasets with more than 100,000 shapes
that are paired with annotations (ground truth). Our data is freely accessible
via a web interface and a Python application programming interface and can be
used for discriminative, reconstructive, and variational benchmarks as well as
various applications in virtual, augmented, or mixed reality, and 3D printing.
Conclusions: MedShapeNet contains medical shapes from anatomy and surgical
instruments and will continue to collect data for benchmarks and applications.
The project page is: https://medshapenet.ikim.nrw/
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Prior to the deep learning era, shape was commonly used to describe objects.
Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are
predominantly diverging from computer vision, where voxel grids, meshes, point
clouds, and implicit surface models are used. This is seen from numerous
shape-related publications in premier vision conferences as well as the growing
popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915
models). For the medical domain, we present a large collection of anatomical
shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments,
called MedShapeNet, created to facilitate the translation of data-driven vision
algorithms to medical applications and to adapt SOTA vision algorithms to
medical problems. As a unique feature, we directly model the majority of shapes
on the imaging data of real patients. As of today, MedShapeNet includes 23
datasets with more than 100,000 shapes that are paired with annotations (ground
truth). Our data is freely accessible via a web interface and a Python
application programming interface (API) and can be used for discriminative,
reconstructive, and variational benchmarks as well as various applications in
virtual, augmented, or mixed reality, and 3D printing. As examples, we present
use cases in the fields of classification of brain tumors, facial and skull
reconstructions, multi-class anatomy completion, education, and 3D printing. In
the future, we will extend the data and improve the interfaces. The project
pages are: https://medshapenet.ikim.nrw/ and
https://github.com/Jianningli/medshapenet-feedback
Comment: 16 pages
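As a usage illustration only (not the project's official Python API): once a shape has been downloaded from the project page, a generic mesh library such as trimesh can convert it between the representations the abstract names; the filename below is a hypothetical example:

import trimesh

# Hypothetical file downloaded from https://medshapenet.ikim.nrw/
mesh = trimesh.load("example_shape.stl")   # triangle mesh
points = mesh.sample(2048)                 # (2048, 3) point cloud
voxels = mesh.voxelized(pitch=1.0)         # occupancy voxel grid
print(points.shape, voxels.matrix.shape)

Consult the project page and the GitHub repository above for the actual download and API workflow.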
CellViT: Vision Transformers for Precise Cell Segmentation and Classification
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E)
tissue images are important clinical tasks and crucial for a wide range of
applications. However, it is a challenging task due to variability in nuclei
staining and size, overlapping boundaries, and nuclei clustering. While
convolutional neural networks have been extensively used for this task, we
explore the potential of Transformer-based networks in this domain. Therefore,
we introduce a new method for automated instance segmentation of cell nuclei in
digitized tissue samples using a deep learning architecture based on Vision
Transformer called CellViT. CellViT is trained and evaluated on the PanNuke
dataset, one of the most challenging nuclei instance segmentation datasets,
consisting of nearly 200,000 nuclei annotated into 5 clinically important
classes across 19 tissue types. We demonstrate the superiority of
large-scale in-domain and out-of-domain pre-trained Vision Transformers by
leveraging the recently published Segment Anything Model and a ViT-encoder
pre-trained on 104 million histological image patches - achieving
state-of-the-art nuclei detection and instance segmentation performance on the
PanNuke dataset with a mean panoptic quality of 0.51 and an F1-detection score
of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT
Comment: 13 pages, 5 figures, appendix included
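For readers unfamiliar with the F1-detection score reported above, here is a minimal sketch of one common centroid-matching formulation; the matching radius and the greedy scheme are assumptions, and the PanNuke evaluation protocol may differ in detail:

import numpy as np
from scipy.spatial import cKDTree

def detection_f1(pred_centroids, gt_centroids, radius=6.0):
    # One-to-one matching of predicted and ground-truth nuclei centroids
    # within `radius` pixels, closest pairs first; returns the F1 score.
    if len(pred_centroids) == 0 or len(gt_centroids) == 0:
        return 0.0
    dists, idx = cKDTree(gt_centroids).query(pred_centroids, k=1)
    matched_gt, tp = set(), 0
    for d, j in sorted(zip(dists, idx)):
        if d <= radius and j not in matched_gt:
            matched_gt.add(j)
            tp += 1
    precision = tp / len(pred_centroids)   # matched predictions
    recall = tp / len(gt_centroids)        # matched ground-truth nuclei
    return 2 * precision * recall / (precision + recall) if tp else 0.0

A true positive is a prediction matched to an unclaimed ground-truth nucleus within the radius; unmatched predictions and unmatched ground-truth nuclei count as false positives and false negatives, respectively.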
