
    Epigenetic repression of PDZ-LIM domain-containing protein 2 promotes ovarian cancer via NOS2-derived nitric oxide signaling.

    Ovarian cancer is one of the most lethal gynaecological malignancies worldwide, and no satisfactory therapeutic approaches have yet been established. Elucidating the molecular mechanisms underlying the disease is therefore crucial for developing targeted therapies. PDLIM2 promotes the ubiquitination of nuclear p65, and its role in inflammation has recently been highlighted. We demonstrate that PDLIM2 is decreased in both ovarian high-grade serous carcinoma and various human ovarian cancer cell lines compared with normal ovarian tissue and human ovarian surface epithelial cells (HOSE). Further functional analysis revealed that PDLIM2 is epigenetically repressed during ovarian cancer development, and that inhibition of PDLIM2 promoted ovarian cancer growth both in vivo and in vitro via NOS2-derived nitric oxide signaling, leading to recruitment of M2-type macrophages. These results suggest that PDLIM2 is involved in ovarian cancer pathogenesis and could serve as a promising therapeutic target for ovarian cancer patients.

    LncRNAs: the bridge linking RNA and colorectal cancer.

    Long noncoding RNAs (lncRNAs) are transcripts exceeding 200 nucleotides in length that are transcribed from genomic regions that do not encode proteins. While the tight regulation of lncRNA transcription can itself signal malignant transformation, lncRNAs control pleiotropic cancer phenotypes through interactions with other cellular molecules, including DNA, protein, and RNA. Recent studies have demonstrated that dysregulation of lncRNAs influences proliferation, angiogenesis, metastasis, invasion, apoptosis, stemness, and genome instability in colorectal cancer (CRC), with consequent clinical implications. In this review, we explicate the roles of different lncRNAs in CRC and the potential implications for their clinical application.

    DQ-Det: Learning Dynamic Query Combinations for Transformer-based Object Detection and Segmentation

    Transformer-based detection and segmentation methods use a list of learned detection queries to retrieve information from the transformer network, and learn to predict the location and category of one specific object from each query. We empirically find that random convex combinations of the learned queries still work well for the corresponding models. We therefore propose to learn a convex combination with dynamic coefficients based on the high-level semantics of the image. The generated dynamic queries, named modulated queries, better capture the prior over object locations and categories in different images. Equipped with our modulated queries, a wide range of DETR-based models achieve consistent and superior performance across multiple tasks, including object detection, instance segmentation, panoptic segmentation, and video instance segmentation.
    Comment: 12 pages, 4 figures, ICML 202
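    For intuition, here is a minimal PyTorch sketch of the core idea, not the paper's implementation: a bank of learned queries is mixed with convex weights predicted from a pooled image feature. The class name `ModulatedQueries`, the dimensions, and the single-linear coefficient head are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedQueries(nn.Module):
    """Sketch: image-conditioned convex combinations of learned detection queries."""

    def __init__(self, num_queries=100, embed_dim=256):
        super().__init__()
        # Standard DETR-style learned query embeddings (the "bank" being mixed).
        self.base_queries = nn.Parameter(torch.randn(num_queries, embed_dim))
        # Head mapping a pooled image feature to mixing logits, one row per output query.
        self.coeff_head = nn.Linear(embed_dim, num_queries * num_queries)
        self.num_queries = num_queries

    def forward(self, image_feat):
        # image_feat: (batch, embed_dim) global image semantics, e.g. pooled backbone features.
        logits = self.coeff_head(image_feat).view(-1, self.num_queries, self.num_queries)
        coeffs = F.softmax(logits, dim=-1)  # convex weights over the query bank
        # (batch, num_out, num_bank) @ (num_bank, embed_dim) -> (batch, num_out, embed_dim)
        return coeffs @ self.base_queries
```

    The returned modulated queries would then replace the static query embeddings fed to the transformer decoder of a DETR-style model.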

    Relabeling Minimal Training Subset to Flip a Prediction

    When a machine learning model produces an unsatisfactory prediction, it is crucial to investigate the underlying reasons and to explore whether the outcome can be reversed. We ask: can we flip the prediction on a test point $x_t$ by relabeling the smallest possible subset $\mathcal{S}_t$ of the training data before the model is trained? We propose an efficient procedure to identify and relabel such a subset via an extended influence function. We find that relabeling fewer than 1% of the training points can often flip the model's prediction. This mechanism can serve multiple purposes: (1) providing an approach to challenge a model prediction by recovering influential training subsets; (2) evaluating model robustness via the cardinality of the subset, $|\mathcal{S}_t|$; we show that $|\mathcal{S}_t|$ is highly related to the noise ratio in the training set and is correlated with, but complementary to, predicted probabilities; (3) revealing training points that lead to group attribution bias. To the best of our knowledge, we are the first to investigate identifying and relabeling the minimal training subset required to flip a given prediction.
    Comment: Under review
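    As a rough illustration of the setup, and not the paper's extended-influence-function procedure, the sketch below greedily relabels training points ranked by a simple gradient-alignment proxy for influence and retrains until the test prediction flips. The helper name `minimal_relabel_to_flip` and the binary logistic-regression setting are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimal_relabel_to_flip(X_train, y_train, x_test, max_relabels=50):
    """Greedy sketch: relabel training points, most influential first, until the
    prediction on x_test flips. Uses a crude gradient-alignment proxy for influence."""
    model = LogisticRegression().fit(X_train, y_train)
    original_pred = model.predict(x_test.reshape(1, -1))[0]

    # Proxy influence score: per-example loss-gradient direction (residual) times
    # its alignment with the test point's features (linear model, binary labels).
    p_train = model.predict_proba(X_train)[:, 1]
    residual = p_train - y_train
    scores = residual * (X_train @ x_test)
    order = np.argsort(-np.abs(scores))  # most influential first

    y_relabel = y_train.copy()
    for k, idx in enumerate(order[:max_relabels], start=1):
        y_relabel[idx] = 1 - y_relabel[idx]  # flip this training label
        new_pred = LogisticRegression().fit(X_train, y_relabel).predict(
            x_test.reshape(1, -1))[0]
        if new_pred != original_pred:
            return order[:k]  # indices of the relabeled subset S_t
    return None  # no flip found within the relabeling budget
```

    The returned index set plays the role of $\mathcal{S}_t$ in the abstract; its size can be read as a crude robustness measure for that particular test prediction.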

    The Devil is in the Details: A Deep Dive into the Rabbit Hole of Data Filtering

    The quality of pre-training data plays a critical role in the performance of foundation models. Popular foundation models often design their own recipes for data filtering, which makes it hard to analyze and compare different filtering approaches. DataComp is a new benchmark dedicated to evaluating data filtering methods. This paper describes the lessons we learned and the solution we built while participating in the DataComp challenge. Our filtering strategy comprises three stages: single-modality filtering, cross-modality filtering, and data distribution alignment. We integrate existing methods and propose new solutions, such as computing the CLIP score on horizontally flipped images to mitigate interference from scene text, using vision-language models to retrieve training samples for target downstream tasks, and rebalancing the data distribution to improve the efficiency of allocating the computational budget. We slice and dice our design choices, provide in-depth analysis, and discuss open questions. Our approach outperforms the best method from the DataComp paper by over 4% on the average performance across 38 tasks and by over 2% on ImageNet.
    Comment: 12 pages, 10 figures
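    To illustrate the flipped-image CLIP score idea, here is a small sketch using the Hugging Face CLIP interface; the specific checkpoint and the helper `flipped_clip_score` are assumptions for the example, not the challenge submission's code.

```python
import torch
from PIL import Image, ImageOps
from transformers import CLIPModel, CLIPProcessor

# Example checkpoint; any CLIP model with paired image/text towers works similarly.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def flipped_clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between the caption and the horizontally flipped image.
    Flipping garbles rendered scene text, so captions that merely transcribe text
    appearing in the image tend to score lower than genuinely descriptive ones."""
    flipped = ImageOps.mirror(image)
    inputs = processor(text=[caption], images=flipped, return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return float((img_emb * txt_emb).sum())
```

    In a filtering pipeline of this kind, samples whose flipped-image score falls below a chosen threshold would be dropped or down-weighted before the remaining stages.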