26 research outputs found
Notice of Retraction: Atmospheric Particle Deposition and Element Distribution in Xi'an, China
Notice of Retraction: Depth Profiles of Particulate Matter and Elements in Particulate Matter in Xi'an, China
Electrogenerated Chemiluminescence Biosensor for Quantization of Matrix Metalloproteinase-3 in Serum via Target-Induced Cleavage of Oligopeptide
A highly sensitive and selective electrogenerated chemiluminescence (ECL) biosensor was developed for the determination of matrix metalloproteinase-3 (MMP-3) in serum via the target-induced cleavage of an oligopeptide. An ECL probe (named Ir-peptide) was synthesized by covalently linking a new cyclometalated iridium(III) complex, [(3-pba)2Ir(bpy-COOH)](PF6) (3-pba = 3-(2-pyridyl)benzaldehyde, bpy-COOH = 4′-methyl-2,2′-bipyridine-4-carboxylic acid), with an oligopeptide (CGVPLSLTMGKGGK). The biosensor was fabricated by first casting Nafion and gold nanoparticles (AuNPs) on a glassy carbon electrode and then self-assembling the Ir-peptide probe, 6-mercapto-1-hexanol, and a zwitterionic peptide on the electrode surface; the AuNPs amplified the ECL signal, and the Ir-peptide served as the ECL probe for MMP-3 detection. Because MMP-3-induced cleavage of the oligopeptide decreases the ECL intensity and the AuNPs amplify the ECL signal, the biosensor selectively and sensitively quantified MMP-3 over the concentration range of 10–150 ng·mL⁻¹, with a limit of quantification of 26.7 ng·mL⁻¹ and a limit of detection of 8.0 ng·mL⁻¹, via one-step recognition. In addition, the developed ECL biosensor performed well in the quantification of MMP-3 in serum samples, with recoveries of 92.6% ± 2.8% to 105.6% ± 5.0%. An increased level of MMP-3 was found in the serum of rheumatoid arthritis patients compared with that of healthy people. This work provides a sensitive and selective biosensing method for the detection of MMP-3 in human serum, which is promising for identifying patients with rheumatoid arthritis.
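The abstract reports a linear working range together with a limit of detection and a limit of quantification. As an illustration only (not the authors' data-processing code), the sketch below shows one common way such figures of merit are estimated from a linear calibration, using the 3.3σ/slope and 10σ/slope convention; the concentrations and ECL intensities are hypothetical placeholders, not data from the paper.

```python
# Illustrative sketch only: estimating LOD/LOQ from a linear calibration of
# ECL intensity vs. MMP-3 concentration. All numbers below are hypothetical
# placeholders, not the paper's data.
import numpy as np

conc = np.array([10, 25, 50, 75, 100, 125, 150], dtype=float)      # ng/mL (working range from the abstract)
ecl = np.array([980, 905, 790, 680, 575, 470, 360], dtype=float)   # hypothetical ECL intensities (a.u.)

# Least-squares linear fit; ECL intensity falls as MMP-3 cleaves the probe.
slope, intercept = np.polyfit(conc, ecl, 1)
residuals = ecl - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # standard error of the regression

# Common convention: LOD = 3.3*sigma/|slope|, LOQ = 10*sigma/|slope|
lod = 3.3 * sigma / abs(slope)
loq = 10.0 * sigma / abs(slope)
print(f"slope={slope:.2f}, LOD≈{lod:.1f} ng/mL, LOQ≈{loq:.1f} ng/mL")
```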
Time-course Study on Left Ventricular Function in a Rabbit Model of Ischemia–reperfusion Injury With Morphine Preconditioning
CGC-Net: A Context-Guided Constrained Network for Remote-Sensing Image Super Resolution
In remote-sensing image processing, higher-resolution images generally yield better performance on downstream tasks such as scene classification and object segmentation. However, objects in remote-sensing images often have low resolution and complex textures due to the imaging environment, so effectively reconstructing high-resolution remote-sensing images remains challenging. To address this, we investigate embedding context information and object priors from remote-sensing images into current deep-learning super-resolution models. This paper proposes a novel remote-sensing image super-resolution method called the Context-Guided Constrained Network (CGC-Net). In CGC-Net, we first design a simple but effective method to generate inverse distance maps from remote-sensing image segmentation maps as prior information. Combined with this prior information, we propose a Global Context-Constrained Layer (GCCL) to extract high-quality features under global context constraints. Furthermore, we introduce a Guided Local Feature Enhancement Block (GLFE) to enhance the local texture context via a learnable guided filter. Additionally, we design a High-Frequency Consistency Loss (HFC Loss) to ensure gradient consistency between the reconstructed image (HR) and the original high-quality image (HQ). Unlike existing remote-sensing image super-resolution methods, the proposed CGC-Net achieves superior visual results and new state-of-the-art (SOTA) performance on three popular remote-sensing image datasets, demonstrating its effectiveness in remote-sensing image super-resolution (RSI-SR) tasks.
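The prior-generation step described above turns a segmentation map into an inverse distance map. As a rough illustration only (the exact formulation used by CGC-Net may differ), a minimal sketch of one common way to compute such a map, assuming SciPy is available:

```python
# Minimal sketch: derive an inverse distance map from a binary segmentation
# mask to use as a spatial prior. Illustrative only; CGC-Net's exact
# formulation may differ.
import numpy as np
from scipy.ndimage import distance_transform_edt

def inverse_distance_map(seg_mask: np.ndarray) -> np.ndarray:
    """seg_mask: binary (H, W) array, 1 = object pixels, 0 = background."""
    # Euclidean distance of every background pixel to the nearest object pixel.
    dist = distance_transform_edt(seg_mask == 0)
    # Invert so pixels near objects get large values, then normalise to [0, 1].
    inv = dist.max() - dist if dist.max() > 0 else np.ones_like(dist)
    return inv / (inv.max() + 1e-8)

# Hypothetical usage with a toy 64x64 mask containing one square object.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1
prior = inverse_distance_map(mask)
print(prior.shape, float(prior.min()), float(prior.max()))
```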
PLA—A Privacy-Embedded Lightweight and Efficient Automated Breast Cancer Accurate Diagnosis Framework for the Internet of Medical Things
The Internet of Medical Things (IoMT) can automate breast tumor detection and classification with the potential of artificial intelligence. However, the leakage of sensitive data can cause harm to patients. To address this issue, this study proposed a breast cancer diagnosis method for the IoMT, namely the Privacy-Embedded Lightweight and Efficient Automated (PLA) framework, an approach that combines privacy preservation, efficiency, and automation. Firstly, our model achieves lightweight classification prediction and global information processing of breast cancer images by utilizing an advanced IoMT-friendly ViT backbone. Secondly, PLA protects patients' privacy through federated learning, training the model with breast cancer classification as the main task and texture analysis of breast cancer images as an auxiliary task. For our PLA framework, the classification accuracy is 0.953, the best recall rate is 0.998, the F1 value is 0.969, the precision is 0.988, and the classification time is 61.9 ms. The experimental results show that the PLA model outperforms all of the comparison methods in terms of accuracy, with an improvement of more than 0.5%. Furthermore, our proposed model demonstrates significant advantages over the comparison methods in terms of time and memory.
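The abstract states that PLA preserves privacy through federated learning, so that only model updates rather than patient images leave each site. As a minimal, hedged illustration of the general federated-averaging idea (not the authors' training code; the layer names are hypothetical):

```python
# Minimal sketch of federated averaging: each client (e.g. a hospital) trains
# locally and shares only model weights, never raw images. Illustrative only.
from typing import Dict, List
import numpy as np

def fed_avg(client_weights: List[Dict[str, np.ndarray]],
            client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Weighted average of per-client parameter dictionaries."""
    total = float(sum(client_sizes))
    merged = {}
    for name in client_weights[0]:
        merged[name] = sum(w[name] * (n / total)
                           for w, n in zip(client_weights, client_sizes))
    return merged

# Hypothetical round with two clients holding 120 and 80 samples.
w1 = {"layer.weight": np.ones((2, 2)), "layer.bias": np.zeros(2)}
w2 = {"layer.weight": np.full((2, 2), 3.0), "layer.bias": np.ones(2)}
global_weights = fed_avg([w1, w2], [120, 80])
print(global_weights["layer.weight"])  # 0.6*1 + 0.4*3 = 1.8 everywhere
```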
Breast cancer stem cells, heterogeneity, targeting therapies and therapeutic implications
Differences in responses of soil microbial properties and trifoliate orange seedling to biochar derived from three feedstocks
An artificial intelligence model for the pathological diagnosis of invasion depth and histologic grade in bladder cancer
Abstract
Background
Accurate pathological diagnosis of invasion depth and histologic grade is key for clinical management in patients with bladder cancer (BCa), but it is labour-intensive, experience-dependent and subject to interobserver variability. Here, we aimed to develop a pathological artificial intelligence diagnostic model (PAIDM) for BCa diagnosis.
Methods
A total of 854 whole slide images (WSIs) from 692 patients were included and divided into training and validation sets. The PAIDM was developed using the training set based on the deep learning algorithm ScanNet, and the performance was verified at the patch level in validation set 1 and at the WSI level in validation set 2. An independent validation cohort (validation set 3) was employed to compare the PAIDM and pathologists. Model performance was evaluated using the area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value and negative predictive value.
Results
The AUCs of the PAIDM were 0.878 (95% CI 0.875–0.881) at the patch level in validation set 1 and 0.870 (95% CI 0.805–0.923) at the WSI level in validation set 2. In comparing the PAIDM and pathologists, the PAIDM achieved an AUC of 0.847 (95% CI 0.779–0.905), which was non-inferior to the average diagnostic level of pathologists. There was high consistency between the model-predicted and manually annotated areas, improving the PAIDM’s interpretability.
Conclusions
We reported an artificial intelligence-based diagnostic model for BCa that performed well in identifying invasion depth and histologic grade. Importantly, the PAIDM performed admirably in patch-level recognition, with a promising application for transurethral resection specimens.
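The Methods above list the evaluation metrics used for the PAIDM (AUC, accuracy, sensitivity, specificity, positive predictive value, and negative predictive value). As a hedged illustration of how such binary-classification metrics are commonly computed, with hypothetical labels and scores rather than the study's data:

```python
# Illustrative sketch of the listed evaluation metrics on hypothetical
# predictions; not the study's data or code.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                      # hypothetical labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])     # hypothetical model scores
y_pred = (y_score >= 0.5).astype(int)                            # threshold at 0.5

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # recall on the positive class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
print(f"AUC={auc:.3f} acc={accuracy:.3f} sens={sensitivity:.3f} "
      f"spec={specificity:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
```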
