    Image reconstruction in fluorescence molecular tomography with sparsity-initialized maximum-likelihood expectation maximization

    We present a reconstruction method involving maximum-likelihood expectation maximization (MLEM) to model Poisson noise in fluorescence molecular tomography (FMT). MLEM is initialized with the output of a sparse reconstruction-based approach, which performs truncated singular value decomposition (TSVD)-based preconditioning followed by the fast iterative shrinkage-thresholding algorithm (FISTA) to enforce sparsity. The motivation for this approach is that sparsity information is accounted for in the initialization, while MLEM accurately models the Poisson noise in the FMT system. Simulation experiments show that the proposed method significantly improves images both qualitatively and quantitatively. The method converges more than 20 times faster than uniformly initialized MLEM and is more robust to noise than pure sparse reconstruction. We also theoretically justify the ability of the proposed approach to reduce noise in the background region compared to pure sparse reconstruction. Overall, these results provide strong evidence for modeling Poisson noise in FMT reconstruction and for applying the proposed reconstruction framework to FMT imaging.
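
    The sketch below illustrates, under simplifying assumptions, the two-stage idea described above: a non-negative FISTA sparse solve (omitting the TSVD preconditioning step) whose output seeds multiplicative MLEM updates for a linear Poisson model y ~ Poisson(Ax). All names (A, y, lam, iteration counts) are placeholders, not the paper's code.

```python
# Hedged sketch of sparsity-initialized MLEM; illustrative only, not the authors' implementation.
import numpy as np

def fista_nonneg(A, y, lam, n_iters=200):
    """Non-negative FISTA with soft-thresholding, used here as the sparse initializer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ z - y)                         # gradient of 0.5 * ||A z - y||^2
        x_new = np.maximum(z - step * grad - step * lam, 0.0)   # soft-threshold + non-negativity
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def mlem(A, y, x0, n_iters=50, eps=1e-12):
    """Multiplicative MLEM updates for the Poisson model, started from the sparse estimate x0.
    Zeros in x0 are clamped to eps so those voxels are not frozen by the multiplicative update."""
    x = np.maximum(x0, eps)
    sens = A.T @ np.ones_like(y)                         # sensitivity image A^T 1
    for _ in range(n_iters):
        x = x / np.maximum(sens, eps) * (A.T @ (y / np.maximum(A @ x, eps)))
    return x

# x_hat = mlem(A, y, x0=fista_nonneg(A, y, lam=0.01))
```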

    Incorporating reflection boundary conditions in the Neumann series radiative transport equation: Application to photon propagation and reconstruction in diffuse optical imaging

    We propose a formalism to incorporate boundary conditions in a Neumann-series-based radiative transport equation. The formalism accurately models the reflection of photons at the tissue-external-medium interface using Fresnel's equations, and it was used to develop a gradient-descent-based image reconstruction technique. The proposed methods were implemented for 3D diffuse optical imaging. In computational studies, the average root-mean-square error (RMSE) of the output images and of the estimated absorption coefficients decreased by 38% and 84%, respectively, when the reflection boundary conditions were incorporated. These results demonstrate the importance of incorporating boundary conditions that model the reflection of photons at the tissue-external-medium interface.
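
    As a concrete illustration of the boundary physics involved, the snippet below computes the unpolarized Fresnel reflectance for a photon striking the tissue-external-medium interface from inside the tissue. The refractive-index values and the scalar single-angle treatment are illustrative assumptions, not the paper's Neumann-series implementation.

```python
# Minimal sketch of the Fresnel reflectance used to model photon reflection at the boundary.
import numpy as np

def fresnel_reflectance(cos_theta_i, n_tissue=1.4, n_outside=1.0):
    """Unpolarized reflectance for a photon hitting the interface from inside the tissue."""
    sin_theta_i = np.sqrt(max(0.0, 1.0 - cos_theta_i ** 2))
    sin_theta_t = (n_tissue / n_outside) * sin_theta_i       # Snell's law
    if sin_theta_t >= 1.0:                                    # beyond the critical angle
        return 1.0                                            # total internal reflection
    cos_theta_t = np.sqrt(1.0 - sin_theta_t ** 2)
    r_s = (n_tissue * cos_theta_i - n_outside * cos_theta_t) / (n_tissue * cos_theta_i + n_outside * cos_theta_t)
    r_p = (n_outside * cos_theta_i - n_tissue * cos_theta_t) / (n_outside * cos_theta_i + n_tissue * cos_theta_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)                        # average of s- and p-polarizations
```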

    Implementation of absolute quantification in small-animal SPECT imaging: Phantom and animal studies

    Purpose: The presence of photon attenuation severely challenges quantitative accuracy in single-photon emission computed tomography (SPECT) imaging, and various attenuation correction methods have been developed to compensate for this degradation. The present study aims to implement an attenuation correction method and to evaluate the quantification accuracy of attenuation correction in small-animal SPECT imaging. Methods: Images were reconstructed using an iterative reconstruction method based on the maximum-likelihood expectation maximization (MLEM) algorithm including resolution recovery, implemented on our dedicated small-animal SPECT (HiReSPECT) system. For accurate quantification, voxel values were converted to activity concentration via a calculated calibration factor. An attenuation correction algorithm was developed based on first-order Chang's method. Both a phantom study and experimental measurements with four rats were used to validate the proposed method. Results: The phantom experiments showed that the error of −15.5% in the estimated activity concentration in a uniform region was reduced to +5.1% when attenuation correction was applied. For the in vivo studies, the average quantitative error of −22.8 ± 6.3% (ranging from −31.2% to −14.8%) in the uncorrected images was reduced to +3.5 ± 6.7% (ranging from −6.7% to +9.8%) after applying attenuation correction. Conclusion: The results indicate that the proposed attenuation correction algorithm based on first-order Chang's method, as implemented on our dedicated small-animal SPECT system, significantly improves the accuracy of quantitative analysis and absolute quantification.
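
    A rough sketch of first-order Chang attenuation correction on a single 2D slice is given below. The attenuation map mu_map (in cm^-1), voxel size, number of angles, and calibration factor are hypothetical inputs, and the ray integration is a simplified stand-in for the system's actual implementation.

```python
# Illustrative first-order Chang correction: per-voxel reciprocal of the mean attenuation factor.
import numpy as np
from scipy.ndimage import rotate

def chang_correction_map(mu_map, voxel_size_cm, n_angles=64):
    """Average exp(-path integral of mu to the boundary) over angles, then invert per voxel."""
    atten_sum = np.zeros_like(mu_map, dtype=float)
    for ang in np.linspace(0.0, 360.0, n_angles, endpoint=False):
        mu_rot = rotate(mu_map, ang, reshape=False, order=1, mode='constant', cval=0.0)
        # cumulative attenuation along one axis of the rotated grid (voxel to edge)
        path = np.cumsum(mu_rot[::-1, :], axis=0)[::-1, :] * voxel_size_cm
        atten_sum += rotate(np.exp(-path), -ang, reshape=False, order=1, mode='constant', cval=0.0)
    mean_atten = np.clip(atten_sum / n_angles, 1e-6, None)
    return 1.0 / mean_atten                                   # multiplicative correction per voxel

# corrected = mlem_slice * chang_correction_map(mu_map, voxel_size_cm=0.1)
# activity_conc = corrected * calibration_factor              # counts/voxel -> activity concentration
```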

    Generalized Dice Focal Loss trained 3D Residual UNet for Automated Lesion Segmentation in Whole-Body FDG PET/CT Images

    Automated segmentation of cancerous lesions in PET/CT images is a vital initial task for quantitative analysis. However, it is often challenging to train deep learning-based segmentation methods to a high degree of accuracy due to the diversity of lesions in terms of their shapes, sizes, and radiotracer uptake levels. These lesions can be found in various parts of the body, often close to healthy organs that also show significant uptake. Consequently, developing a comprehensive PET/CT lesion segmentation model is a demanding endeavor for routine quantitative image analysis. In this work, we train a 3D Residual UNet using the Generalized Dice Focal Loss function on the AutoPET challenge 2023 training dataset. We develop our models in a 5-fold cross-validation setting and ensemble the five models via average and weighted-average ensembling. In the preliminary test phase, the average ensemble achieved a Dice similarity coefficient (DSC), false-positive volume (FPV), and false-negative volume (FNV) of 0.5417, 0.8261 ml, and 0.2538 ml, respectively, while the weighted-average ensemble achieved 0.5417, 0.8186 ml, and 0.2538 ml, respectively. Our algorithm can be accessed at https://github.com/ahxmeds/autosegnet. Comment: AutoPET-II challenge (2023).
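
    As a rough sketch of the loss family named above, the function below combines a generalized Dice term (inverse-squared-volume class weights) with a focal binary cross-entropy term for a single foreground class. The relative weighting, gamma, and reduction are assumptions rather than the challenge entry's exact configuration.

```python
# Hedged PyTorch sketch of a Generalized Dice + Focal loss for binary lesion masks.
import torch
import torch.nn.functional as F

def generalized_dice_focal_loss(logits, target, lambda_focal=1.0, gamma=2.0, eps=1e-6):
    """logits, target: (B, 1, D, H, W); target is a binary lesion mask."""
    target = target.float()
    prob = torch.sigmoid(logits)
    dims = (2, 3, 4)
    # Generalized Dice: weight each class by the inverse of its squared volume
    w_fg = 1.0 / (target.sum(dims) ** 2 + eps)
    w_bg = 1.0 / ((1 - target).sum(dims) ** 2 + eps)
    inter = w_fg * (prob * target).sum(dims) + w_bg * ((1 - prob) * (1 - target)).sum(dims)
    denom = w_fg * (prob + target).sum(dims) + w_bg * (2 - prob - target).sum(dims)
    gd_loss = 1.0 - 2.0 * inter / (denom + eps)
    # Focal term: down-weight easy voxels so the gradient focuses on hard lesion voxels
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    p_t = prob * target + (1 - prob) * (1 - target)
    focal = ((1 - p_t) ** gamma * bce).mean(dims)
    return (gd_loss + lambda_focal * focal).mean()
```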

    IgCONDA-PET: Implicitly-Guided Counterfactual Diffusion for Detecting Anomalies in PET Images

    Minimizing the need for pixel-level annotated data to train PET anomaly segmentation networks is crucial, particularly given the time and cost constraints of expert annotations. Current un-/weakly-supervised anomaly detection methods rely on autoencoders or generative adversarial networks trained only on healthy data, although such models are more challenging to train. In this work, we present a weakly supervised and Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images, branded as IgCONDA-PET. Training is conditioned on image class labels (healthy vs. unhealthy) along with implicit guidance to generate counterfactuals for an unhealthy image with anomalies. The counterfactual generation process synthesizes the healthy counterpart of a given unhealthy image, and the difference between the two facilitates the identification of anomaly locations. The code is available at https://github.com/igcondapet/IgCONDA-PET.git. Comment: 12 pages, 6 figures, 1 table.
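
    The snippet below sketches the implicit (classifier-free-style) guidance step alluded to above: the conditional and unconditional noise predictions of a class-conditional denoiser are blended to push the reverse diffusion toward the "healthy" class, and the anomaly map is the difference between the input and its counterfactual. The model call signature, null label, and guidance scale w are placeholders, not the authors' code.

```python
# Minimal sketch of implicit guidance for counterfactual generation; illustrative only.
import torch

@torch.no_grad()
def guided_noise_estimate(model, x_t, t, healthy_label, null_label, w=3.0):
    """Blend conditional and unconditional noise predictions at one reverse-diffusion step."""
    eps_cond = model(x_t, t, class_label=healthy_label)     # noise prediction given "healthy"
    eps_uncond = model(x_t, t, class_label=null_label)      # noise prediction with label dropped
    return (1 + w) * eps_cond - w * eps_uncond              # implicit-guidance combination

# After running the full reverse process to obtain the healthy counterfactual:
# anomaly_map = (unhealthy_img - counterfactual_img).abs()
```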

    Beyond Conventional Parametric Modeling: Data-Driven Framework for Estimation and Prediction of Time Activity Curves in Dynamic PET Imaging

    Dynamic positron emission tomography (dPET) imaging and time-activity curve (TAC) analyses are essential for understanding and quantifying the biodistribution of radiopharmaceuticals over time and space. Traditional compartmental modeling, while foundational, commonly struggles to fully capture the complexities of biological systems, including non-linear dynamics and variability. This study introduces an innovative data-driven, neural-network-based framework, inspired by reaction-diffusion systems, designed to address these limitations. Our approach, which adaptively fits TACs from dPET, enables the direct calibration of diffusion coefficients and reaction terms from observed data, offering significant improvements in predictive accuracy and robustness over traditional methods, especially in complex biological scenarios. By more accurately modeling the spatio-temporal dynamics of radiopharmaceuticals, our method advances the modeling of pharmacokinetic and pharmacodynamic processes, enabling new possibilities in quantitative nuclear medicine.
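
    A toy 1D finite-difference illustration of the reaction-diffusion view referred to above is given below: activity C(x, t) evolves as dC/dt = D * d2C/dx2 + R(C), and each spatial position traces out a TAC. The grid, time step, and clearance-style reaction term are illustrative assumptions, not the authors' neural framework.

```python
# Toy reaction-diffusion simulation whose columns are time-activity curves (TACs).
import numpy as np

def simulate_reaction_diffusion(C0, D, reaction, dx=1.0, dt=0.01, n_steps=1000):
    """Explicit finite-difference integration of dC/dt = D * d2C/dx2 + R(C) on a periodic 1D grid."""
    C = C0.astype(float).copy()
    history = [C.copy()]
    for _ in range(n_steps):
        lap = (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx ** 2   # discrete Laplacian
        C = C + dt * (D * lap + reaction(C))
        history.append(C.copy())
    return np.array(history)        # shape (time, space); column j is the TAC of position j

# Example with a simple first-order clearance term R(C) = -k * C (hypothetical rate k):
# tacs = simulate_reaction_diffusion(np.exp(-np.linspace(-3, 3, 64) ** 2), D=0.5, reaction=lambda c: -0.05 * c)
```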

    Novel Method to Estimate Kinetic Microparameters from Dynamic Whole-Body Imaging in Regular-Axial Field-of-View PET Scanners

    For whole-body (WB) kinetic modeling on a typical PET scanner, a multi-pass, multi-bed scanning protocol is necessary given the limited axial field-of-view. Such a protocol loses the early dynamics of time-activity curves (TACs) and yields sparse TAC measurements, inducing uncertainty in parameter estimation with least-squares estimation (LSE, the common standard), especially for kinetic microparameters. We present a method to reliably estimate microparameters, enabling accurate parametric imaging on regular-axial field-of-view PET scanners. Our method, denoted parameter-combination-driven estimation (PCDE), generates a reference database of ground-truth TACs and then selects the best parameter combination as the one whose TAC attains the highest total similarity score (TSS), focusing on general image quality, overall visibility, and tumor detectability metrics. Our technique has two distinctive characteristics: 1) an improved probability of a one-to-one mapping between early and late dynamics in TACs (the former missing from typical protocols), and 2) use of multiple aspects of the TACs when selecting the best fits. To evaluate our method against conventional LSE, we plotted noise-bias tradeoff curves. In addition, the overall signal-to-noise ratio (SNR) and spatial noise were calculated and compared, as were the contrast-to-noise ratio (CNR) and tumor-to-background ratio (TBR). We also tested the proposed method on patient data (18F-DCFPyL PET scans) to further verify clinical applicability. Significantly improved general image quality was verified in the microparametric images (e.g., noise-bias performance), and overall visibility and tumor detectability were also improved. Finally, for the patient studies, improved overall visibility and tumor detectability were demonstrated in the microparametric images compared to conventional parameter estimation.
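
    The sketch below illustrates, under strong simplifications, the two ingredients the PCDE idea rests on: generating candidate TACs from kinetic microparameter combinations (here via a standard two-tissue-compartment impulse response) and selecting the combination whose TAC best matches the sparse measured frames. The negative-MSE score stands in for the paper's total similarity score (TSS), and all names are placeholders.

```python
# Hedged sketch of database generation and best-combination selection for microparameters.
import numpy as np

def two_tissue_tac(t, cp, K1, k2, k3, k4):
    """Tissue TAC from a 2-tissue-compartment model: impulse response convolved with the plasma input cp(t)."""
    dt = t[1] - t[0]
    d = np.sqrt((k2 + k3 + k4) ** 2 - 4 * k2 * k4)
    a1, a2 = (k2 + k3 + k4 - d) / 2, (k2 + k3 + k4 + d) / 2
    irf = (K1 / d) * ((k3 + k4 - a1) * np.exp(-a1 * t) + (a2 - k3 - k4) * np.exp(-a2 * t))
    return np.convolve(cp, irf)[: len(t)] * dt

def pick_best_combination(t_meas, tac_meas, t_full, cp, param_grid):
    """Return the (K1, k2, k3, k4) whose model TAC, sampled at the measured frame times,
    maximizes a simple negative-MSE similarity score (a stand-in for the TSS)."""
    best, best_score = None, -np.inf
    for params in param_grid:
        tac_model = np.interp(t_meas, t_full, two_tissue_tac(t_full, cp, *params))
        score = -np.mean((tac_model - tac_meas) ** 2)
        if score > best_score:
            best, best_score = params, score
    return best
```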

    On Ontological Openness: Who Is Open? Resonating Thoughts From Continental Philosophers And Muslim Mystics

    Being “open-minded” is considered a definite virtue in today’s world. But what does it mean to be open-minded? What we refer to as ‘openness’ in this work moves beyond the ability to see and entertain other views; it cuts deep into both the intentionality and the content of what one contemplates. This work focuses on ontological openness, reflecting parallel and resonating thoughts by the prominent continental philosophers Martin Heidegger and Hans-Georg Gadamer. Though Gadamer comes after Heidegger, we find it fruitful to read Gadamer as leading to Heidegger. We compare their thoughts with those of Muslim mystics, focusing on the highly influential and ground-breaking thinker Ibn Arabi and thinkers in his tradition.

    Entanglement of Being and Beings: Heidegger and Ibn Arabi on Sameness and Difference

    Martin Heidegger was reported to have considered his work Identity and Difference (based on two seminars delivered in 1957) to be “the most important thing he … published since [his magnum opus] Being and Time” (Heidegger, 1969, 7). While Being and Time begins with the human being (Da-sein; being-there), aiming to proceed to an understanding of the Being of beings, in Identity and Difference the focus is on the very “relation” between the human being and Being (ibid., 8). The present work highlights the intertwined and entangled sameness/difference between beings and Being. This entanglement and belonging, as we shall see, is also one of the most foundational concepts and prominent themes in the work of the renowned and highly influential Muslim mystic Ibn Arabi (1165-1240). We particularly focus on his important compendium of mystical teachings, Fusus al-Hikam (Bezels of Wisdom). We also touch upon the sameness/difference of thought between these two thinkers.