306 research outputs found

    The Emerging Roles of Circular RNAs in Colorectal Cancer

    Get PDF
    Colorectal cancer (CRC) is one of the most common malignant diseases and the fourth leading cause of death worldwide. Circular RNAs (circRNAs) are a group of non-coding RNAs (ncRNAs) that form a covalently closed loop without 5’ and 3’ ends. Studies indicate that many circRNAs are differentially expressed in CRC cells and tissues, and their expression levels correlate significantly with the clinicopathological features and overall survival of CRC patients. Additionally, circRNAs regulate CRC cell proliferation, apoptosis, invasion, and migration, mainly by acting as competing endogenous RNAs (ceRNAs). In this review, we summarize CRC-associated circRNAs, describe their functions and mechanisms, and discuss their potential as diagnostic and prognostic biomarkers and therapeutic targets in CRC.

    Entailment as Robust Self-Learner

    Full text link
    Entailment has been recognized as an important metric for evaluating natural language understanding (NLU) models, and recent studies have found that entailment pretraining benefits weakly supervised fine-tuning. In this work, we first design a prompting strategy that formulates a number of different NLU tasks as contextual entailment, which improves the zero-shot adaptation of pretrained entailment models. Second, we observe that self-training entailment-based models on unlabeled data can significantly improve adaptation performance on downstream tasks. To achieve more stable improvement, we propose the Simple Pseudo-Label Editing (SimPLE) algorithm for better pseudo-labeling quality in self-training. We also find that both pretrained entailment-based models and the self-trained models are robust against adversarial evaluation data. Experiments on binary and multi-class classification tasks show that SimPLE leads to more robust self-training results, indicating that self-trained entailment models are more efficient and trustworthy than large language models on language understanding tasks. Comment: Accepted by ACL 2023 main conference.
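
    The core idea of treating classification as entailment can be sketched with an off-the-shelf NLI model: each candidate label becomes a hypothesis scored against the input, and confident predictions on unlabeled data become pseudo-labels for self-training. The model choice and confidence threshold below are illustrative assumptions; SimPLE's actual pseudo-label editing is more involved than this filter.

    # Minimal sketch (Python, Hugging Face transformers): zero-shot
    # classification via an entailment model, plus naive confidence-filtered
    # pseudo-labeling for a self-training loop.
    from transformers import pipeline

    clf = pipeline("zero-shot-classification", model="roberta-large-mnli")

    unlabeled = [
        "The movie was a complete waste of time.",
        "An instant classic that I will rewatch for years.",
    ]
    labels = ["positive", "negative"]

    pseudo_labeled = []
    for text in unlabeled:
        out = clf(text, candidate_labels=labels)  # each label scored as an entailed hypothesis
        if out["scores"][0] > 0.9:                # keep only confident predictions (threshold assumed)
            pseudo_labeled.append((text, out["labels"][0]))

    print(pseudo_labeled)  # these pairs would then be used to fine-tune the model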

    Chain of Thought Prompt Tuning in Vision Language Models

    Full text link
    Language-image pre-training has demonstrated promising results on zero-shot and few-shot downstream tasks by prompting visual models with natural language prompts. However, most recent studies use only a single prompt for tuning, neglecting the inherent step-by-step cognitive reasoning process that humans conduct in complex task settings, for example, when processing images from unfamiliar domains. Chain of thought is a simple and effective approximation of the human reasoning process and has proven useful for natural language processing (NLP) tasks. Based on this cognitive intuition, we believe that conducting effective reasoning is also an important problem in visual tasks, and that a chain of thought could be a solution. In this work, we propose a novel chain-of-thought prompt tuning method for vision-language modeling. Extensive experiments show that our method not only generalizes better on image classification tasks, transfers better beyond a single dataset, and exhibits stronger domain generalization performance, but also performs much better on image-text retrieval and visual question answering, which require more reasoning capability. We are the first to successfully adapt chain-of-thought prompting to combine visual and textual embeddings. We will release our code.
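
    As a rough illustration of the idea, learnable prompt vectors (in the style of prompt tuning for CLIP-like models) can be chained so that each reasoning step conditions the next. The combination rule below is an assumption made for illustration, not the paper's actual architecture.

    # Toy sketch (PyTorch): a chain of learnable prompts where each step's
    # pooled summary biases the next step's prompt.
    import torch
    import torch.nn as nn

    class ChainedPrompts(nn.Module):
        def __init__(self, num_steps=3, prompt_len=4, dim=512):
            super().__init__()
            # one learnable prompt per reasoning step
            self.prompts = nn.ParameterList(
                [nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
                 for _ in range(num_steps)])
            # maps a step's pooled prompt into a bias for the next step
            self.step_proj = nn.Linear(dim, dim)

        def forward(self):
            carry = torch.zeros(self.step_proj.in_features)
            chained = []
            for prompt in self.prompts:
                prompt = prompt + self.step_proj(carry)  # condition on the previous step
                carry = prompt.mean(dim=0)               # pooled summary passed forward
                chained.append(prompt)
            # the concatenated prompts would be prepended to the text-encoder tokens
            return torch.cat(chained, dim=0)

    print(ChainedPrompts()().shape)  # torch.Size([12, 512])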

    Gut microbiota: a potential new regulator of hypertension

    Get PDF
    Hypertension is a significant risk factor for cardiovascular and cerebrovascular diseases and has become a global public health concern. Although hypertension results from a combination of factors, the specific mechanisms remain unclear. However, increasing evidence suggests that the gut microbiota is closely associated with the development of hypertension. We provide a summary of the composition and physiological role of the gut microbiota, then delve into the mechanisms by which the gut microbiota and its metabolites are involved in the occurrence and development of hypertension. Finally, we review various regimens for better controlling hypertension from the perspectives of diet, exercise, drugs, antibiotics, probiotics, and fecal transplantation.

    Training Task Experts through Retrieval Based Distillation

    Full text link
    One of the most reliable ways to create deployable models for specialized tasks is to obtain an adequate amount of high-quality task-specific data. However, for specialized tasks, such datasets often do not exist. Existing methods address this by generating data from large language models (LLMs) and then distilling the knowledge into smaller models. However, these methods are limited by the quality of the LLM's output and tend to generate repetitive or incorrect data. In this work, we present Retrieval Based Distillation (ReBase), a method that first retrieves data from rich online sources and then transforms it into domain-specific data. This method greatly enhances data diversity. Moreover, ReBase generates Chain-of-Thought reasoning and distills the reasoning capacity of LLMs. We test our method on 4 benchmarks, and results show that it significantly improves performance, by up to 7.8% on SQuAD, 1.37% on MNLI, and 1.94% on BigBench-Hard.
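
    The retrieve-then-transform pipeline can be sketched as follows; search_corpus and llm are hypothetical stand-ins for a real retriever over online sources and an LLM API, since the abstract does not specify concrete interfaces.

    # Sketch of the retrieve-then-transform idea. `search_corpus` and `llm`
    # are hypothetical placeholders, not ReBase's actual interfaces.

    def search_corpus(query: str, k: int = 5) -> list[str]:
        """Hypothetical retriever: return k passages relevant to the query."""
        raise NotImplementedError

    def llm(prompt: str) -> str:
        """Hypothetical LLM call used to transform passages into task data."""
        raise NotImplementedError

    def build_task_data(task_description: str, query: str) -> list[dict]:
        examples = []
        for passage in search_corpus(query):
            # Ask the LLM to rewrite the retrieved passage into a labeled
            # training example with chain-of-thought reasoning, so the
            # reasoning capacity of the LLM is distilled into the data.
            example = llm(
                f"Task: {task_description}\n"
                f"Passage: {passage}\n"
                "Write one input/output training example for this task, "
                "with step-by-step reasoning before the final answer.")
            examples.append({"source": passage, "example": example})
        return examples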

    Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning

    Full text link
    How can we perform computations over natural language representations to solve tasks that require symbolic and numeric reasoning? We propose natural language embedded programs (NLEP) as a unifying framework for addressing math/symbolic reasoning, natural language understanding, and instruction-following tasks. Our approach prompts a language model to generate full Python programs that define functions over data structures containing natural language representations of structured knowledge. A Python interpreter then executes the generated code and prints the output. Despite using a task-general prompt, we find that this approach can improve upon strong baselines across a range of different tasks, including math and symbolic reasoning, text classification, question answering, and instruction following. We further find that the generated programs are often interpretable and enable post-hoc verification of the intermediate reasoning steps.
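
    To make the framework concrete, here is a toy example of the kind of program an NLEP prompt might elicit: structured knowledge lives in ordinary Python data structures, the reasoning is plain Python, and the interpreter's printed output is the answer. The facts and question are illustrative, not drawn from the paper.

    # Toy NLEP-style program: natural language knowledge in a data structure,
    # reasoning as executable Python, answer via print().

    knowledge = {
        "Paris":  {"country": "France",  "population_millions": 2.1},
        "Berlin": {"country": "Germany", "population_millions": 3.6},
        "Madrid": {"country": "Spain",   "population_millions": 3.3},
    }

    def largest_city(cities):
        # intermediate reasoning step: compare populations explicitly
        return max(cities, key=lambda c: knowledge[c]["population_millions"])

    # Question: "Which of Paris, Berlin, and Madrid has the largest population?"
    city = largest_city(["Paris", "Berlin", "Madrid"])
    print(f"{city} ({knowledge[city]['population_millions']} million)")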

    Self-Corrected Multimodal Large Language Model for End-to-End Robot Manipulation

    Full text link
    Robot manipulation policies have shown unsatisfactory performance when confronted with novel task or object instances. Hence, the capability to automatically detect and self-correct failed actions is essential for a practical robotic system. Recently, Multimodal Large Language Models (MLLMs) have shown promise in visual instruction following and demonstrated strong reasoning abilities across various tasks. To unleash general MLLMs as end-to-end robotic agents, we introduce Self-Corrected (SC)-MLLM, equipping our model not only to predict end-effector poses but also to autonomously recognize and correct failed actions. Specifically, we first conduct parameter-efficient fine-tuning to empower the MLLM with pose prediction ability, reframed as a language modeling problem. When facing execution failures, our model learns to identify the causes of low-level action errors (i.e., position and rotation errors) and adaptively seeks prompt feedback from experts. Based on the feedback, SC-MLLM rethinks the current failure scene and generates corrected actions. Furthermore, we design a continuous policy learning method for successfully corrected samples, enhancing the model's adaptability to the current scene configuration and reducing the frequency of expert intervention. To evaluate SC-MLLM, we conduct extensive experiments in both simulation and real-world settings. The SC-MLLM agent significantly improves manipulation accuracy compared to the previous state-of-the-art robotic MLLM (ManipLLM), increasing from 57% to 79% on seen object categories and from 47% to 69% on unseen novel categories.
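
    The detect-and-correct loop can be summarized in sketch form; every helper below (predict_pose, execute, diagnose_error, expert_feedback) is a hypothetical placeholder for interfaces the abstract only describes informally.

    # Sketch of the self-correction loop. All helpers are hypothetical
    # placeholders, not SC-MLLM's actual interfaces.

    def execute(pose):
        """Hypothetical robot interface: returns (success, observation)."""
        raise NotImplementedError

    def expert_feedback(cause, observation):
        """Hypothetical expert (policy or human) queried about the failure."""
        raise NotImplementedError

    def self_corrected_manipulation(mllm, scene, instruction, max_retries=3):
        for _ in range(max_retries + 1):
            # pose prediction reframed as a language modeling problem
            pose = mllm.predict_pose(scene, instruction)
            success, observation = execute(pose)
            if success:
                return pose
            # identify the low-level cause, e.g. a position or rotation error
            cause = mllm.diagnose_error(scene, pose, observation)
            # fold the expert's hint back into the prompt and retry
            hint = expert_feedback(cause, observation)
            instruction = f"{instruction}\nPrevious attempt failed ({cause}). Hint: {hint}"
        return None  # give up after max_retries corrections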

    Gut microbiome-derived hydrolases—an underrated target of natural product metabolism

    Get PDF
    In recent years, there has been increasing interest in studying gut microbiome-derived hydrolases in relation to oral drug metabolism, particularly for natural product drugs. Despite the significance of natural product drugs among oral medications, there is a lack of research on the regulatory interplay between gut microbiome-derived hydrolases and these drugs. This review delves into the interaction between intestinal microbiome-derived hydrolases and natural product drug metabolism from three key perspectives. Firstly, it examines the impact of glycoside hydrolases, amide hydrolases, carboxylesterases, bile salt hydrolases, and epoxide hydrolases on the structure of natural products. Secondly, it explores how natural product drugs influence microbiome-derived hydrolases. Lastly, it analyzes the impact of interactions between hydrolases and natural products on disease development, and the challenges in developing microbial-derived enzymes. The overarching goal of this review is to lay a solid theoretical foundation for the research and development of new natural product drugs and personalized treatment.

    Epigenetic dynamics shaping melanophore and iridophore cell fate in zebrafish

    Get PDF
    BACKGROUND: Zebrafish pigment cell differentiation provides an attractive model for studying cell fate progression, as a neural crest progenitor engenders diverse cell types, including two morphologically distinct pigment cells: black melanophores and reflective iridophores. Nontrivial classical genetic and transcriptomic approaches have revealed essential molecular mechanisms and gene regulatory circuits that drive neural crest-derived cell fate decisions. However, how the epigenetic landscape contributes to pigment cell differentiation, especially in the context of iridophore cell fate, is poorly understood. RESULTS: We chart the global changes in the epigenetic landscape, including DNA methylation and chromatin accessibility, during neural crest differentiation into melanophores and iridophores to identify epigenetic determinants shaping cell type-specific gene expression. Motif enrichment in the epigenetically dynamic regions reveals putative transcription factors that might be responsible for driving pigment cell identity. Through this effort, in the relatively uncharacterized iridophores, we validate alx4a as a necessary and sufficient transcription factor for iridophore differentiation and present evidence for alx4a's potential regulatory role in the guanine synthesis pathway. CONCLUSIONS: Pigment cell fate is marked by substantial DNA demethylation events coupled with dynamic chromatin accessibility to potentiate gene regulation through cis-regulatory control. Here, we provide a multi-omic resource for neural crest differentiation into melanophores and iridophores. This work led to the discovery and validation of the iridophore-specific transcription factor alx4a.