14 research outputs found
AbdomenAtlas: A Large-Scale, Detailed-Annotated, & Multi-Center Dataset for Efficient Transfer Learning and Open Algorithmic Benchmarking
We introduce the largest abdominal CT dataset (termed AbdomenAtlas) of 20,460
three-dimensional CT volumes sourced from 112 hospitals across diverse
populations, geographies, and facilities. AbdomenAtlas provides 673K
high-quality masks of anatomical structures in the abdominal region annotated
by a team of 10 radiologists with the help of AI algorithms. We start by having
expert radiologists manually annotate 22 anatomical structures in 5,246 CT
volumes. Following this, a semi-automatic annotation procedure is performed for
the remaining CT volumes, where radiologists revise the annotations predicted
by AI, and in turn, AI improves its predictions by learning from revised
annotations. Such a large-scale, detailed-annotated, and multi-center dataset
is needed for two reasons. Firstly, AbdomenAtlas provides important resources
for AI development at scale, in the form of large pre-trained models, which can
alleviate the annotation workload of expert radiologists when transferring to
broader clinical applications. Secondly, AbdomenAtlas establishes a large-scale
benchmark for evaluating AI algorithms -- the more data we use to test the
algorithms, the better we can guarantee reliable performance in complex
clinical scenarios. An ISBI & MICCAI challenge named BodyMaps: Towards 3D Atlas
of Human Body was launched using a subset of our AbdomenAtlas, aiming to
stimulate AI innovation and to benchmark segmentation accuracy, inference
efficiency, and domain generalizability. We hope our AbdomenAtlas can set the
stage for larger-scale clinical trials and offer exceptional opportunities to
practitioners in the medical imaging community. Codes, models, and datasets are
available at https://www.zongweiz.com/dataset (Published in Medical Image Analysis)
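The semi-automatic procedure described above alternates between AI prediction, radiologist revision, and retraining on the revised labels. The sketch below is an illustrative outline of such a human-in-the-loop annotation loop, not the authors' implementation; `model_predict`, `radiologist_revise`, and `retrain` are hypothetical callables standing in for the segmentation model, the expert revision step, and the training step.

```python
def annotation_loop(model_predict, radiologist_revise, retrain, volumes,
                    batch_size=2, rounds=2):
    """Human-in-the-loop annotation: AI predicts, experts revise, AI retrains.

    model_predict(volume) -> predicted annotation
    radiologist_revise(volume, prediction) -> revised (final) annotation
    retrain(labeled_pairs) -> new model_predict callable
    """
    labeled = []
    for _ in range(rounds):
        batch, volumes = volumes[:batch_size], volumes[batch_size:]
        if not batch:
            break
        preds = [model_predict(v) for v in batch]
        revised = [radiologist_revise(v, p) for v, p in zip(batch, preds)]
        labeled.extend(zip(batch, revised))
        # The model improves by learning from the revised annotations,
        # so later batches need less manual correction.
        model_predict = retrain(labeled)
    return labeled
```

With toy stubs (a model that predicts 0, a "radiologist" that returns the true label, and a retrained model that becomes perfect), the loop accumulates revised labels for every volume.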
MedShapeNet – a large-scale dataset of 3D medical shapes for computer vision
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging are predominantly diverging from computer vision, where voxel
grids, meshes, point clouds, and implicit surface models are used. This is seen from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of
shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results: By now, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications
in virtual, augmented, or mixed reality, and 3D printing. Conclusions: MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/
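The abstract mentions a web interface and a Python API but does not describe their interfaces, so the following is not the MedShapeNet API. It is a generic, standard-library-only sketch of parsing a binary STL buffer, a common distribution format for 3D shapes of this kind, once a shape file has been downloaded.

```python
import struct

def read_binary_stl(data: bytes):
    """Parse a binary STL buffer into a list of triangles.

    Binary STL layout: an 80-byte header, a uint32 triangle count, then per
    triangle 12 little-endian float32 values (normal + 3 vertices) followed
    by a uint16 attribute byte count. Returns a list of
    (normal, v0, v1, v2) tuples, each entry a 3-tuple of floats.
    """
    n_tri = struct.unpack_from("<I", data, 80)[0]
    tris = []
    offset = 84
    for _ in range(n_tri):
        vals = struct.unpack_from("<12f", data, offset)
        tris.append((vals[0:3], vals[3:6], vals[6:9], vals[9:12]))
        offset += 50  # 48 bytes of float32 data + 2-byte attribute count
    return tris
```

In practice a mesh library (e.g., trimesh) would handle this and more formats, but the layout above is the entire binary STL specification.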
Electrochemical assembling of aligned porous Nd(OH)3 nanobelts with high performance in water treatment
Annotating 8,000 Abdominal CT Volumes for Multi-Organ Segmentation in Three Weeks
Annotating medical images, particularly for organ segmentation, is laborious
and time-consuming. For example, annotating an abdominal organ requires an
estimated rate of 30-60 minutes per CT volume based on the expertise of an
annotator and the size, visibility, and complexity of the organ. Therefore,
publicly available datasets for multi-organ segmentation are often limited in
data size and organ diversity. This paper proposes a systematic and efficient
method to expedite the annotation process for organ segmentation. We have
created the largest multi-organ dataset (by far) with the spleen, liver,
kidneys, stomach, gallbladder, pancreas, aorta, and inferior vena cava (IVC) annotated in 8,448 CT
volumes, equating to 3.2 million slices. The conventional annotation methods
would take an experienced annotator up to 1,600 weeks (or roughly 30.8 years)
to complete this task. In contrast, our annotation method has accomplished this
task in three weeks (based on an 8-hour workday, five days a week) while
maintaining a similar or even better annotation quality. This achievement is
attributed to three unique properties of our method: (1) label bias reduction
using multiple pre-trained segmentation models, (2) effective error detection
in the model predictions, and (3) attention guidance for annotators to make
corrections on the most salient errors. Furthermore, we summarize the taxonomy
of common errors made by AI algorithms and annotators. This allows for
continuous refinement of both AI and annotations and significantly reduces the
annotation costs required to create large-scale datasets for a wider variety of
medical imaging tasks.
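Properties (1) and (2) above combine multiple pre-trained models with error detection. One common way to realize this, sketched below as an illustration rather than the paper's exact method, is a voxel-wise disagreement map over an ensemble of predicted masks: unanimous voxels need no review, while maximally split voxels are the salient errors to surface to annotators first.

```python
def disagreement_map(predictions):
    """Per-voxel disagreement across an ensemble of binary segmentation masks.

    predictions: list of equally sized sequences of 0/1 labels
    (flattened masks, one per model). Returns per-voxel scores in
    [0, 0.5]: 0 means the ensemble is unanimous, 0.5 means a maximal
    split, i.e. the best candidates for annotator attention.
    """
    n = len(predictions)
    scores = []
    for voxel_preds in zip(*predictions):
        p = sum(voxel_preds) / n      # ensemble foreground probability
        scores.append(min(p, 1 - p))  # distance from a unanimous vote
    return scores
```

For two models predicting [1, 1, 0, 0] and [1, 0, 0, 1], voxels 2 and 4 are flagged with score 0.5 while the unanimous voxels score 0.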
Generation and characterization of neutralizing antibodies against M1R and B6R proteins of monkeypox virus
The global outbreak of monkeypox virus (MPXV), combined with the termination of smallpox vaccination and the lack of specific antiviral treatments, raises increasing concerns. The surface proteins M1R and B6R of MPXV are crucial for virus transmission and serve as key targets for vaccine development. In this study, a panel of human antibodies targeting M1R and B6R is isolated from a human antibody library using phage display technology. Among these antibodies, A138 against M1R and B026 against B6R show the most potent broad-spectrum neutralizing activities against MPXV and Vaccinia virus (VACV). When used in combination, A138 and B026 exhibit complementary neutralizing activity against both viruses in vitro. X-ray crystallography reveals that A138 binds to the loop regions of M1R, similar to the vulnerable epitope of 7D11 on VACV L1R. By contrast, A129 targets a more cryptic epitope, primarily comprising the β-strands of M1R. Moreover, prophylactic and therapeutic administration of A138 or B026 alone provides partial protection, while combining these two antibodies results in enhanced protection against VACV in male C57BL/6 mice. This study demonstrates the potential of a dual-targeting strategy using two different components of the virion for the prevention and treatment of MPXV infection.
