103 research outputs found
Conv2Warp: An unsupervised deformable image registration with continuous convolution and warping
Recent successes in deep learning-based deformable image registration (DIR) methods have demonstrated that complex deformations can be learnt directly from data while reducing computation time compared to traditional methods. However, the reliance on fully linear convolutional layers imposes a uniform sampling of pixel/voxel locations, which ultimately limits their performance. To address this problem, we propose a novel approach of learning a continuous warp of the source image. Here, the required deformation vector fields are obtained from concatenated linear and non-linear convolution layers and a learnable bicubic Catmull-Rom spline resampler. This allows computing a smooth deformation field and achieving more accurate alignment compared to using only linear convolutions and linear resampling. In addition, the continuous warping technique penalizes disagreements that are due to topological changes. Our experiments demonstrate that this approach manages to capture large non-linear deformations and minimizes the propagation of interpolation errors. While improving accuracy, the method is computationally efficient. We present comparative results on a range of public 4D CT lung (POPI) and brain datasets (CUMC12, MGH10).
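As a rough illustration of the warping step described above, the sketch below upsamples a coarse displacement field with bicubic interpolation and resamples the source image with it. The tensor names and the use of PyTorch's built-in bicubic interpolation are illustrative assumptions and stand in for the paper's learnable Catmull-Rom spline resampler; this is not the authors' Conv2Warp implementation.

```python
# Minimal sketch of spline-smoothed warping for deformable registration (PyTorch).
# Bicubic upsampling and resampling stand in for the learnable Catmull-Rom
# resampler described in the abstract; names and sizes are illustrative only.
import torch
import torch.nn.functional as F

def warp(source, flow):
    """Warp a source image (N, C, H, W) with a dense flow field (N, 2, H, W)."""
    n, _, h, w = source.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Add the predicted displacement (assumed already in normalized units).
    grid = grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(source, grid, mode="bicubic", align_corners=True)

source = torch.rand(1, 1, 128, 128)
coarse_flow = torch.zeros(1, 2, 16, 16)                 # coarse displacement field
flow = F.interpolate(coarse_flow, size=(128, 128),
                     mode="bicubic", align_corners=True)  # smooth dense field
warped = warp(source, flow)
print(warped.shape)  # torch.Size([1, 1, 128, 128])
```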
Automated motion analysis of bony joint structures from dynamic computer tomography images: A multi-atlas approach
Dynamic computed tomography (CT) is an emerging modality to analyze in-vivo joint kinematics at the bone level, but it requires manual bone segmentation and, in some instances, landmark identification. The objective of this study is to present an automated workflow for the assessment of three-dimensional in vivo joint kinematics from dynamic musculoskeletal CT images. The proposed method relies on a multi-atlas, multi-label segmentation and landmark propagation framework to extract bony structures and detect anatomical landmarks on the CT dataset. The segmented structures serve as regions of interest for the subsequent motion estimation across the dynamic sequence. The landmarks are propagated across the dynamic sequence for the construction of bone-embedded reference frames from which kinematic parameters are estimated. We applied our workflow to dynamic CT images obtained from 15 healthy subjects on two different joints: thumb base (n = 5) and knee (n = 10). The proposed method resulted in segmentation accuracies of 0.90 ± 0.01 for the thumb dataset and 0.94 ± 0.02 for the knee, as measured by the Dice similarity coefficient. In terms of motion estimation, mean differences in Cardan angles between the automated algorithm and expert manual segmentation and landmark identification were below 1°. Intraclass correlation (ICC) between Cardan angles from the algorithm and results from expert manual landmarks ranged from 0.72 to 0.99 for all joints across all axes. The proposed automated method resulted in reproducible and reliable measurements, enabling the assessment of joint kinematics using 4D CT in clinical routine.
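To make the kinematic output concrete, the following sketch builds bone-embedded reference frames from hypothetical landmark-derived axes and extracts Cardan angles from the relative rotation between two bones. The x-y'-z'' axis sequence and the landmark values are assumptions for illustration, not the study's joint coordinate system definition.

```python
# Minimal sketch of Cardan angle extraction from bone-embedded reference frames,
# assuming an x-y'-z'' rotation sequence; axes and landmarks are placeholders.
import numpy as np
from scipy.spatial.transform import Rotation

def reference_frame(ax_x, ax_z):
    """Build an orthonormal bone frame from two landmark-derived axes."""
    x = ax_x / np.linalg.norm(ax_x)
    y = np.cross(ax_z, x)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return np.column_stack((x, y, z))

# Hypothetical frames for a proximal and a distal bone at one time point.
R_prox = reference_frame(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
R_dist = reference_frame(np.array([0.94, 0.34, 0.0]), np.array([0.0, 0.0, 1.0]))

# Relative rotation of the distal bone expressed in the proximal frame.
R_rel = R_prox.T @ R_dist
angles_deg = Rotation.from_matrix(R_rel).as_euler("xyz", degrees=True)
print(angles_deg)  # Cardan angles (deg) about the x, y', z'' axes
```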
The effect of augmented reality on the accuracy and learning curve of external ventricular drain placement
Registration of magnetic resonance and computed tomography images in patients with oral squamous cell carcinoma for three-dimensional virtual planning
The aim of this study was to evaluate and present an automated method for registration of magnetic resonance imaging (MRI) and computed tomography (CT) or cone beam CT (CBCT) images of the mandibular region for patients with oral squamous cell carcinoma (OSCC). Registered MRI and (CB)CT could facilitate the three-dimensional virtual planning of surgical guides employed for resection and reconstruction in patients with OSCC with mandibular invasion. MRI and (CB)CT images were collected retrospectively from 19 patients. MRI images were aligned with (CB)CT images employing a rigid registration approach (stage 1), a rigid registration approach using a mandibular mask (stage 2), and two non-rigid registration approaches (stage 3). Registration accuracy was quantified by the mean target registration error (mTRE), calculated over a set of landmarks annotated by two observers. Stage 2 achieved the best registration result, with an mTRE of 2.5 ± 0.7 mm, which was comparable to the inter- and intra-observer variabilities of landmark placement in MRI. Stage 2 was aligned significantly better than all approaches in stage 3. In conclusion, this study demonstrated that rigid registration with the use of a mask is an appropriate image registration method for aligning MRI and (CB)CT images of the mandibular region in patients with OSCC.
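The mTRE used above is the mean Euclidean distance between corresponding landmarks after applying the estimated transform. A minimal sketch with placeholder landmarks and a placeholder rigid transform is shown below; it is not the registration pipeline from the study.

```python
# Minimal sketch of the mean target registration error (mTRE): the mean
# Euclidean distance between corresponding landmarks after mapping the moving
# landmarks with the estimated transform. All inputs are placeholders.
import numpy as np

def mean_tre(moving_landmarks, fixed_landmarks, transform):
    """moving/fixed_landmarks: (N, 3) arrays in mm; transform maps moving -> fixed."""
    mapped = np.asarray([transform(p) for p in moving_landmarks])
    return np.linalg.norm(mapped - fixed_landmarks, axis=1).mean()

# Example with an illustrative rigid transform (rotation R, translation t in mm).
R = np.eye(3)
t = np.array([1.0, -0.5, 0.2])
rigid = lambda p: R @ p + t

mri_landmarks = np.random.rand(10, 3) * 100                       # annotated in MRI
cbct_landmarks = np.array([rigid(p) for p in mri_landmarks]) \
                 + np.random.randn(10, 3)                          # annotated in (CB)CT
print(f"mTRE = {mean_tre(mri_landmarks, cbct_landmarks, rigid):.2f} mm")
```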
Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge
Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, only a few studies have provided a comparative performance evaluation of different systems on a common database. We therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of two tracks: 1) the complete nodule detection track, where a complete CAD system should be developed, or 2) the false positive reduction track, where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. The impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve detection performance. Our observer study with four expert readers showed that the best system detects nodules that were missed by the expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.
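The operating point quoted above (sensitivity at fewer than 1.0 false positives per scan) can be read off a ranked candidate list as sketched below. The inputs are synthetic, and the official LUNA16 evaluation performs a full FROC analysis with bootstrapping, so this is only a simplified approximation of that metric.

```python
# Minimal sketch: detection sensitivity at a fixed false-positive-per-scan budget,
# computed from ranked candidate predictions. Synthetic inputs for illustration.
import numpy as np

def sensitivity_at_fp_rate(scores, is_true_positive, n_nodules, n_scans,
                           max_fp_per_scan=1.0):
    """scores: candidate confidences; is_true_positive: bool per candidate."""
    order = np.argsort(scores)[::-1]            # sort candidates by confidence
    tp = np.cumsum(is_true_positive[order])     # true positives at each threshold
    fp = np.cumsum(~is_true_positive[order])    # false positives at each threshold
    ok = fp / n_scans <= max_fp_per_scan        # thresholds within the FP budget
    return tp[ok].max() / n_nodules if ok.any() else 0.0

rng = np.random.default_rng(0)
scores = rng.random(5000)
labels = rng.random(5000) < 0.02                # ~100 true-positive candidates
print(sensitivity_at_fp_rate(scores, labels, n_nodules=100, n_scans=888))
```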
Low-field magnetic resonance imaging offers potential for measuring tibial component migration
MedShapeNet – a large-scale dataset of 3D medical shapes for computer vision
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results: By now, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. Conclusions: MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/
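As a hypothetical example of consuming a shape downloaded from such a collection, the sketch below loads a mesh with trimesh and samples it into a normalized point cloud for a shape-analysis pipeline. The file name is a placeholder, and this does not use the MedShapeNet Python API itself; see the project page for the official access interface.

```python
# Minimal sketch: prepare a downloaded medical shape for a vision pipeline by
# loading the mesh and sampling a point cloud. File name is a placeholder.
import numpy as np
import trimesh

mesh = trimesh.load("left_femur.stl")                    # placeholder shape file
points, _ = trimesh.sample.sample_surface(mesh, 2048)    # 2048-point surface sample

# Center and scale to a unit sphere, a common preprocessing step for shape networks.
points = points - points.mean(axis=0)
points = points / np.linalg.norm(points, axis=1).max()
print(points.shape)  # (2048, 3)
```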
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
Hyperglycaemic clamp test for diabetes risk assessment in IA-2-antibody-positive relatives of type 1 diabetic patients
AIMS/HYPOTHESIS: The aim of the study was to investigate the use of hyperglycaemic clamp tests to identify individuals who will develop diabetes among insulinoma-associated protein-2 antibody (IA-2A)-positive first-degree relatives (IA-2A(+) FDRs) of type 1 diabetic patients.
METHODS: Hyperglycaemic clamps were performed in 17 non-diabetic IA-2A(+) FDRs aged 14 to 33 years and in 21 matched healthy volunteers (HVs). Insulin and C-peptide responses were measured during the first (5-10 min) and second (120-150 min) release phase, and after glucagon injection (150-160 min). Clamp-induced C-peptide release was compared with C-peptide release during OGTT.
RESULTS: Seven (41%) FDRs developed diabetes 3-63 months after their initial clamp test. In all phases they had lower C-peptide responses than non-progressors (p < 0.05) and HVs (p < 0.002). All five FDRs with low first-phase release also had low second-phase release and developed diabetes 3-21 months later. Two of seven FDRs with normal first-phase but low second-phase release developed diabetes after 34 and 63 months, respectively. None of the five FDRs with normal C-peptide responses in all test phases has developed diabetes so far (follow-up 56 to 99 months). OGTT-induced C-peptide release also tended to be lower in progressors than in non-progressors or HVs, but there was less overlap in results between progressors and the other groups using the clamp.
CONCLUSIONS/INTERPRETATION: Clamp-derived functional variables stratify risk of diabetes in IA-2A(+) FDRs and may more consistently identify progressors than OGTT-derived variables. A low first-phase C-peptide response specifically predicts impending diabetes, while a low second-phase response may reflect an earlier disease stage.
In-Situ High-Temperature Gas and Vacuum 3D Electron Diffraction for Studying Structural Transformations upon Redox Reactions
