Bio-inspired Attentive Segmentation of Retinal OCT Imaging
Although optical coherence tomography (OCT) is widely used to assess ophthalmic pathologies, localization of intra-retinal boundaries suffers from erroneous segmentations due to image artifacts or topological abnormalities. While deep learning-based methods have been applied effectively in OCT imaging, accurate automated layer segmentation remains a challenging task, with the flexibility and precision of most methods being highly constrained. In this paper, we propose a novel method to segment all retinal layers, tailored to the bio-topological OCT geometry. In addition to traditional learning of shift-invariant features, our method learns from selected pixels along the horizontal and vertical directions, exploiting the orientation of the extracted features. In this way, the most discriminative retinal features are generated in a robust manner, while long-range pixel dependencies across spatial locations are efficiently captured. To validate the effectiveness and generalisation of our method, we implement three sets of networks based on different backbone models. Results on three independent studies show that our methodology consistently produces more accurate segmentations than state-of-the-art networks, with better precision and agreement with ground truth. Thus, our method not only improves segmentation, but also enhances the statistical power of clinical trials with layer thickness change outcomes.
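The horizontal-and-vertical feature learning described above is reminiscent of axial attention, where self-attention is applied along image rows and then columns to capture long-range dependencies cheaply. A minimal NumPy sketch of that general idea follows; it is not the authors' architecture, and it omits learned query/key/value projections and multiple heads:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(feat, axis):
    """Single-head self-attention applied independently along one
    spatial axis (rows or columns) of an H x W x C feature map.
    Queries, keys, and values are the raw features themselves
    (no learned projections in this sketch)."""
    x = np.moveaxis(feat, axis, 1)                    # -> (batch_axis, L, C)
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)    # (N, L, L) similarity
    return np.moveaxis(softmax(scores) @ x, 1, axis)  # weighted mix, restored

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))   # toy H x W x C feature map
horiz = axial_attention(feat, axis=1)   # mix features along each row
vert = axial_attention(horiz, axis=0)   # then along each column
print(vert.shape)                       # (8, 8, 4)
```

Chaining a row pass and a column pass lets every output pixel attend (indirectly) to every other pixel at O(HW(H+W)) cost instead of the O((HW)^2) of full 2-D attention.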
Retinal OCT Denoising with Pseudo-Multimodal Fusion Network
Optical coherence tomography (OCT) is a prevalent imaging technique for the retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise ratio (SNR), this requires longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan but can over-smooth fine features such as small vessels. By using a fusion network, desired features from each modality can be combined, and the weight of their contribution is adjustable. Evaluated by intensity-based and structural metrics, the results show that our method can effectively suppress speckle noise and enhance the contrast between retinal layers while the overall structure and small blood vessels are preserved. Compared to the single-modality network, our method improves the structural similarity with the low-noise B-scan from 0.559 ± 0.033 to 0.576 ± 0.031.
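The structural-similarity figures quoted above come from the SSIM metric. The sketch below implements a simplified single-window SSIM, computed globally over the whole image rather than over sliding windows as standard implementations (e.g. scikit-image's `structural_similarity`) do; the arrays and the toy multiplicative-speckle model are illustrative, not the paper's data:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM: the standard luminance/contrast/structure
    formula evaluated over the whole image in one window."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                  # stand-in for a low-noise B-scan
speckle = rng.gamma(4.0, 0.25, clean.shape)   # multiplicative noise, mean ~1
noisy = np.clip(clean * speckle, 0.0, 1.0)
denoised = 0.5 * clean + 0.5 * noisy          # toy "denoiser" for illustration

print(global_ssim(clean, clean))              # ≈ 1.0 for identical images
print(global_ssim(denoised, clean) > global_ssim(noisy, clean))
```

A denoised image should score closer to the low-noise reference than the raw speckled input, which is exactly how the 0.559 → 0.576 improvement above is read.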
Disentanglement Network for Unsupervised Speckle Reduction of Optical Coherence Tomography Images
Self-supervised Denoising via Diffeomorphic Template Estimation: Application to Optical Coherence Tomography
OCT Segmentation via Deep Learning: A Review of Recent Work
Optical coherence tomography (OCT) is an important retinal imaging method since it is a non-invasive, high-resolution imaging technique able to reveal the fine structure within the human retina. It has applications in the characterization and diagnosis of retinal as well as neurological diseases. The use of machine learning techniques for analyzing the retinal layers and lesions seen in OCT can greatly facilitate such diagnostic tasks. The use of deep learning (DL) methods, principally fully convolutional networks, has recently resulted in significant progress in automated segmentation of optical coherence tomography images. Recent work in this area is reviewed herein.
