
    Spatial variation of perceived equity and its determinants in a gateway community of Giant Panda National Park, China

    Social equity is essential in the governance of protected areas (PAs), as ignoring it can lead to resistance and jeopardize conservation objectives. However, more research is required to understand the spatial heterogeneity of perceived social equity and its underlying spatial factors. Using a survey of 361 respondents, we mapped spatial distribution patterns of perceived equity by kernel density estimation (KDE) in Giant Panda National Park, China. The regression analysis showed that local residents who live closer to the PA boundary are more likely to develop negative responses, while those with easy access to tourism spots have more positive procedural and distributional perceptions. Notably, proximity to the PA authority decreases locals' perceptions of fairness in all aspects, potentially due to the opaque participative channels provided by the PA authority. We argue that these spatial differentials in fairness perceptions are driven by the intrinsic discrepancy of biodiversity protection requirements and the unevenly distributed consequences of management policies. Key steps to advance social equity considerations include multi-industry guidance, extending participative channels, and co-producing better compensation plans. This study therefore calls for a greater focus on the spatial aspect of social equity issues in PAs.
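    The KDE step described above can be sketched in plain Python. The distances and bandwidth below are hypothetical illustrations, not the paper's data:

```python
import math

def gaussian_kde_1d(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimator over the sample points."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        # Sum a Gaussian kernel centered at every sample point.
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Hypothetical distances (km) from respondents to the PA boundary.
distances = [0.5, 0.8, 1.0, 1.2, 3.5, 4.0, 4.2]
kde = gaussian_kde_1d(distances, bandwidth=0.5)

# The estimated density peaks near the larger cluster around 1 km.
print(kde(1.0) > kde(4.0))  # True
```

    In the paper the estimation is 2-D over respondent locations, but the principle is the same: each survey point contributes a smooth kernel, and overlapping kernels reveal spatial clusters of perceived (in)equity.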

    TGFβ1 Promotes Gemcitabine Resistance Through Regulating the LncRNA-LET/NF90/miR-145 Signaling Axis in Bladder Cancer

    High tumor recurrence is frequently observed in patients with urinary bladder cancers (UBCs), creating a need for biomarkers of prognosis and drug response. Chemoresistance and subsequent recurrence of cancers are driven by a subpopulation of tumor-initiating cells, namely cancer stem-like cells (CSCs). However, the underlying molecular mechanism of chemotherapy-induced CSC enrichment remains largely unclear. In this study, we found that during gemcitabine treatment, lncRNA-Low Expression in Tumor (lncRNA-LET) was downregulated in chemoresistant UBC, accompanied by enrichment of the CSC population. Knockdown of lncRNA-LET increased UBC cell stemness, whereas forced expression of lncRNA-LET delayed gemcitabine-induced tumor recurrence. Furthermore, lncRNA-LET was directly repressed by gemcitabine treatment-induced overactivation of TGFβ/SMAD signaling through the SMAD binding element (SBE) in the lncRNA-LET promoter. Consequently, reduced lncRNA-LET increased NF90 protein stability, which in turn repressed biogenesis of miR-145 and subsequently resulted in accumulation of CSCs, as evidenced by elevated levels of the stemness markers HMGA2 and KLF4. Treatment of gemcitabine-resistant xenografts with LY2157299, a clinically relevant specific inhibitor of TGFβRI, sensitized them to gemcitabine and significantly reduced tumorigenicity in vivo. Notably, overexpression of TGFβ1 combined with decreased levels of lncRNA-LET and miR-145 predicted poor prognosis in UBC patients. Collectively, we demonstrated that dysregulation of the lncRNA-LET/NF90/miR-145 axis by gemcitabine-induced TGFβ1 promotes UBC chemoresistance by enhancing cancer cell stemness. The combined changes in TGFβ1/lncRNA-LET/miR-145 provide novel molecular prognostic markers of UBC outcome. Therefore, targeting this axis could be a promising therapeutic approach for treating UBC patients.

    Robust median reversion strategy for on-line portfolio selection

    Ministry of Education, Singapore under its Academic Research Funding Tier

    TinyLLaVA Factory: A Modularized Codebase for Small-scale Large Multimodal Models

    We present TinyLLaVA Factory, an open-source modular codebase for small-scale large multimodal models (LMMs) with a focus on simplicity of code implementation, extensibility to new features, and reproducibility of training results. Following the design philosophy of the factory pattern in software engineering, TinyLLaVA Factory modularizes the entire system into interchangeable components, with each component integrating a suite of cutting-edge models and methods, while leaving room for extensions to more features. In addition to allowing users to customize their own LMMs, TinyLLaVA Factory provides popular training recipes that let users pretrain and finetune their models with less coding effort. Empirical experiments validate the effectiveness of our codebase. The goal of TinyLLaVA Factory is to assist researchers and practitioners in exploring the wide landscape of designing and training small-scale LMMs with affordable computational resources. Our codebase is made public at https://github.com/TinyLLaVA/TinyLLaVA_Factory, with documentation available at https://tinyllava-factory.readthedocs.io/en/latest.
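    The factory pattern the abstract refers to is typically realized as a registry that maps component names to classes, so that configs can select implementations by name. The sketch below illustrates the idea; the class and component names are hypothetical, not the codebase's actual API:

```python
# Minimal factory-pattern registry: components register under a string name
# and are constructed by a single build() lookup, making them interchangeable.
_REGISTRY = {}

def register(name):
    """Decorator that records a component class under the given name."""
    def wrap(cls):
        _REGISTRY[name] = cls
        return cls
    return wrap

def build(name, **kwargs):
    """Construct the component registered under `name`."""
    return _REGISTRY[name](**kwargs)

@register("vision_tower_stub")  # hypothetical component name
class VisionTowerStub:
    def __init__(self, hidden_size=256):
        self.hidden_size = hidden_size

# A config file could now select "vision_tower_stub" by name.
tower = build("vision_tower_stub", hidden_size=512)
print(tower.hidden_size)  # 512
```

    Swapping one vision tower, connector, or language model for another then amounts to changing a registered name in a config, which is what makes the components interchangeable.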

    PL-Net: progressive learning network for medical image segmentation

    In recent years, deep convolutional neural network-based segmentation methods have achieved state-of-the-art performance on many medical image analysis tasks. However, most of these approaches rely on optimizing the U-Net structure or adding new functional modules, overlooking the complementation and fusion of coarse-grained and fine-grained semantic information. To address these issues, we propose a 2D medical image segmentation framework called Progressive Learning Network (PL-Net), which comprises Internal Progressive Learning (IPL) and External Progressive Learning (EPL). PL-Net offers the following advantages: 1) IPL divides feature extraction into two steps, allowing for the mixing of receptive fields of different sizes and capturing semantic information from coarse to fine granularity without introducing additional parameters; 2) EPL divides the training process into two stages, fusing coarse-grained information in the first stage and fine-grained information in the second stage. We conducted comprehensive evaluations of the proposed method on five medical image segmentation datasets, and the experimental results demonstrate that PL-Net achieves competitive segmentation performance. Notably, PL-Net does not introduce any additional learnable parameters compared with other U-Net variants.
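    The two-stage EPL idea can be sketched as a scheduler that shifts supervision from coarse-grained to fine-grained targets over training. The stage split and weights below are illustrative assumptions, not values from the paper:

```python
def epl_loss_weights(epoch, total_epochs, stage_split=0.5):
    """Return (coarse_weight, fine_weight) for the current training epoch.

    Stage 1 fuses coarse-grained information; stage 2 emphasizes
    fine-grained information while retaining a small coarse term.
    """
    if epoch < int(total_epochs * stage_split):
        return 1.0, 0.0   # stage 1: coarse-grained supervision only
    return 0.2, 1.0       # stage 2: mostly fine-grained supervision

print(epl_loss_weights(10, 100))  # (1.0, 0.0)  -> stage 1
print(epl_loss_weights(80, 100))  # (0.2, 1.0)  -> stage 2
```

    In a real training loop these weights would multiply coarse and fine segmentation losses for the same network, so no extra learnable parameters are introduced, consistent with the abstract's claim.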

    A-Eval: A Benchmark for Cross-Dataset Evaluation of Abdominal Multi-Organ Segmentation

    Although deep learning has revolutionized abdominal multi-organ segmentation, models often struggle with generalization because they are trained on small, specific datasets. With the recent emergence of large-scale datasets, some important questions arise: can models trained on these datasets generalize well to different ones, and in either case, how can their generalizability be further improved? To address these questions, we introduce A-Eval, a benchmark for the cross-dataset Evaluation ('Eval') of Abdominal ('A') multi-organ segmentation. We employ training sets from four large-scale public datasets: FLARE22, AMOS, WORD, and TotalSegmentator, each providing extensive labels for abdominal multi-organ segmentation. For evaluation, we incorporate the validation sets from these datasets along with the training set from the BTCV dataset, forming a robust benchmark comprising five distinct datasets. We evaluate the generalizability of various models using the A-Eval benchmark, with a focus on diverse data usage scenarios: training on individual datasets independently, utilizing unlabeled data via pseudo-labeling, mixing different modalities, and joint training across all available datasets. Additionally, we explore the impact of model size on cross-dataset generalizability. Through these analyses, we underline the importance of effective data usage in enhancing models' generalization capabilities, offering valuable insights for assembling large-scale datasets and improving training strategies. The code and pre-trained models are available at https://github.com/uni-medical/A-Eval.
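    The core of a cross-dataset benchmark like this is a train-on-one, evaluate-on-all loop that fills a generalization matrix. The sketch below uses toy stand-ins for the training and evaluation functions; the scores are illustrative, not A-Eval results:

```python
def cross_dataset_matrix(datasets, train, evaluate):
    """Train on each source dataset and evaluate on every target dataset."""
    matrix = {}
    for src in datasets:
        model = train(src)
        matrix[src] = {tgt: evaluate(model, tgt) for tgt in datasets}
    return matrix

datasets = ["FLARE22", "AMOS", "WORD", "TotalSegmentator", "BTCV"]

# Toy stand-ins: "training" just remembers its source dataset, and
# in-distribution evaluation scores higher than cross-dataset evaluation.
train = lambda src: src
evaluate = lambda model, tgt: 0.9 if model == tgt else 0.7

scores = cross_dataset_matrix(datasets, train, evaluate)
print(scores["AMOS"]["AMOS"], scores["AMOS"]["BTCV"])  # 0.9 0.7
```

    Off-diagonal entries of the matrix are what reveal generalization gaps; the benchmark's data-usage scenarios (pseudo-labeling, modality mixing, joint training) aim to shrink the gap between diagonal and off-diagonal scores.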

    SAM-Med3D

    Although the Segment Anything Model (SAM) has demonstrated impressive performance in 2D natural image segmentation, its application to 3D volumetric medical images reveals significant shortcomings: suboptimal performance and unstable predictions that require an excessive number of prompt points to attain the desired outcomes. These issues can hardly be addressed by fine-tuning SAM on medical data, because SAM's original 2D structure neglects 3D spatial information. In this paper, we introduce SAM-Med3D, the most comprehensive study to date on adapting SAM to 3D medical images. Our approach is comprehensive in two primary aspects: first, we reformulate SAM into a thorough 3D architecture trained on a carefully processed, large-scale volumetric medical dataset; second, we provide a thorough evaluation of its performance. Specifically, we train SAM-Med3D with over 131K 3D masks spanning 247 categories. SAM-Med3D excels at capturing 3D spatial information, exhibiting competitive performance with significantly fewer prompt points than the top-performing fine-tuned SAM in the medical domain. We then evaluate its capabilities across 15 datasets and analyze it from multiple perspectives, including anatomical structures, modalities, targets, and generalization abilities. Compared with SAM, our approach shows pronounced gains in efficiency and broad segmentation capability for 3D volumetric medical images. Our code is released at https://github.com/uni-medical/SAM-Med3D.