
    Experiences of Family Caregivers of Chinese Cancer Patients: a Qualitative Meta-synthesis

    Background: Care needs of Chinese cancer patients have increased significantly due to massive population ageing and a rising cancer incidence rate. Family caregiving, the most important means of meeting those care needs, imposes many kinds of burden on caregivers. It is therefore vital to systematically examine the caregiving experiences of family caregivers of cancer patients. Objective: To systematically synthesize the caregiving experiences of family caregivers of Chinese cancer patients. Methods: All qualitative studies on the caregiving experiences of family caregivers of Chinese cancer patients were systematically retrieved from Web of Science, PubMed, EmBase, Medline, Cochrane Library, grey literature in the health sciences, CNKI, and Wanfang Data, covering records from inception to May 23, 2021; the searches were conducted between January and May 2021. The 2016 JBI Critical Appraisal Checklist for Qualitative Research was used for quality evaluation. Meta-synthesis of the included studies was performed. Results: Nineteen studies (6 in Chinese and 13 in English) were finally included, involving 295 family caregivers in total. Nine were rated A (very low risk of bias) and 10 were rated B (relatively low risk of bias). Three overarching themes containing 15 subthemes emerged: patient-centered care needs, care burden, and care gains. Conclusion: This qualitative meta-synthesis provides a deep and comprehensive analysis of the care experiences of family caregivers of Chinese cancer patients. It may help improve the construction of a medical system that meets patient-centered care needs, strengthen the positive factors affecting the care experience at the micro, meso, and macro levels, and support interventions such as death and life education to reduce the negative impact of cultural factors on the care experience.

    Honey bee maternal effects improve worker performance and reproductive ability in offspring

    Maternal effects are an evolutionary strategy used to improve offspring quality. In an example of maternal effects in honey bees (Apis mellifera), mother queens produce larger eggs in queen cells than in worker cells in order to breed better daughter queens. In our current study, the morphological indexes, reproductive tissues, and egg-laying ability of new queens reared from eggs laid in queen cells (QE), eggs laid in worker cells (WE), and 2-day-old larvae in worker cells (2L) were evaluated. In addition, the morphological indexes of offspring queens and the working performance of offspring workers were examined. The thorax weight, number of ovarioles, egg length, and numbers of laid eggs and capped broods of QE were significantly higher than those of WE and 2L, indicating that the reproductive capacity of the QE group was better than that of the other groups. Furthermore, offspring queens from QE had larger thorax weights and sizes than those from the other two groups. Offspring worker bees from QE also had larger body sizes and greater pollen-collecting and royal jelly-producing abilities than those of the other two groups. These results demonstrate that honey bees display profound maternal effects on queen quality that can be transmitted across generations. These findings provide a basis for improving queen quality, with implications for apicultural and agricultural production.

    Caregiving Experiences of Family Caregivers for Children with Tumors: a Qualitative Systematic Review

    Background: As the most direct caregivers, family caregivers play a crucial role in caring for children with cancer. Qualitative studies on their emotions and experiences have reported that they face great challenges and pressures while caring for children with cancer. Objective: To perform an integrative synthesis of the caregiving experiences of family caregivers of children with cancer, providing practice-derived evidence for improving the care of such children and their family caregivers' physical and mental health. Methods: Qualitative studies on the caregiving experiences of family caregivers of children with cancer were retrieved from Web of Science, PubMed, EmBase, Medline, CNKI, and Wanfang Data from inception to June 1, 2021. Literature screening and data extraction were performed by two researchers independently. Methodological quality was assessed using the JBI Critical Appraisal Checklist for Systematic Reviews and Research Syntheses, and the results were synthesized using an integrative review approach. Results: Twelve studies were finally included. Thirty-eight complete findings of 11 types arose from the synthesis and were summarized into two themes: (1) care challenges and burdens; (2) care resources. Each theme encompasses multiple sub-themes. Conclusion: We found that family caregivers face a variety of burdens and challenges, which they attempt to solve actively using their own strengths; support from their personal networks, other people, and external sources (non-governmental, public, and supportive policy resources); culture and belief; and knowledge about hospice care. To relieve their care burden and improve the quality of life of these children, medical workers should provide caregivers with targeted guidance and support tailored to the child's specific treatment phase, taking the caregivers' caregiving experiences and culture into consideration.

    KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning

    Knowledge-enhanced pre-trained language models (KEPLMs) leverage relation triples from knowledge graphs (KGs) and integrate these external data sources into language models via self-supervised learning. Previous works treat knowledge enhancement as two independent operations, i.e., knowledge injection and knowledge integration. In this paper, we propose to learn Knowledge-Enhanced language representations with Hierarchical Reinforcement Learning (KEHRL), which jointly addresses the problems of detecting positions for knowledge injection and integrating external knowledge into the model in order to avoid injecting inaccurate or irrelevant knowledge. Specifically, a high-level reinforcement learning (RL) agent utilizes both internal and prior knowledge to iteratively detect essential positions in texts for knowledge injection, which filters out less meaningful entities to avoid diverting the knowledge learning direction. Once the entity positions are selected, a relevant triple filtration module is triggered to perform low-level RL to dynamically refine the triples associated with polysemic entities through binary-valued actions. Experiments validate KEHRL's effectiveness in probing factual knowledge and enhancing the model's performance on various natural language understanding tasks
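    The two-level selection described above can be illustrated with a toy sketch. The scoring functions below are invented placeholders, not KEHRL's learned RL policies; only the keep/drop structure follows the abstract.

```python
# Toy sketch of KEHRL's two-level decision structure: a high-level policy
# picks entity positions worth injecting knowledge at, and a low-level
# policy keeps or drops each candidate triple with a binary action.
# Both "policies" here are plain score thresholds, not learned RL agents.

def high_level_select(entities, salience, threshold=0.5):
    """Keep only entity positions whose salience clears the bar,
    filtering out less meaningful entities."""
    return [pos for pos, ent in entities.items() if salience(ent) >= threshold]

def low_level_filter(triples, relevance, threshold=0.5):
    """Binary keep/drop decision per triple, refining the knowledge
    attached to a (possibly polysemic) entity."""
    return [t for t in triples if relevance(t) >= threshold]

entities = {3: "Apple", 7: "fruit"}             # token position -> mention
salience_scores = {"Apple": 0.9, "fruit": 0.2}  # toy salience values
positions = high_level_select(entities, lambda e: salience_scores[e])

triples = [("Apple", "founded_by", "Steve Jobs"),
           ("Apple", "is_a", "fruit")]
relevance_scores = {triples[0]: 0.8, triples[1]: 0.1}
kept = low_level_filter(triples, lambda t: relevance_scores[t])
```

    In the real model both thresholds are replaced by policies trained with reinforcement learning, with rewards tied to factual-probing performance.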

    UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding

    Cross-lingual representation learning transfers knowledge from resource-rich data to resource-scarce ones to improve the semantic understanding abilities of different languages. However, previous works rely on shallow unsupervised data generated by token surface matching, regardless of the global context-aware semantics of the surrounding text tokens. In this paper, we propose an Unsupervised Pseudo Semantic Data Augmentation (UniPSDA) mechanism for cross-lingual natural language understanding to enrich the training data without human interventions. Specifically, to retrieve the tokens with similar meanings for the semantic data augmentation across different languages, we propose a sequential clustering process in 3 stages: within a single language, across multiple languages of a language family, and across languages from multiple language families. Meanwhile, considering the multi-lingual knowledge infusion with context-aware semantics while alleviating computation burden, we directly replace the key constituents of the sentences with the above-learned multi-lingual family knowledge, viewed as pseudo-semantic. The infusion process is further optimized via three de-biasing techniques without introducing any neural parameters. Extensive experiments demonstrate that our model consistently improves the performance on general zero-shot cross-lingual natural language understanding tasks, including sequence classification, information extraction, and question answering
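    The three-stage clustering can be sketched as a repeated collapse of token-embedding groups into centroids. The 2-D embeddings, language groupings, and centroid-based "clustering" below are toy assumptions; the paper's actual clustering algorithm is not specified in the abstract.

```python
# Toy sketch of the three-stage grouping: tokens clustered within one
# language, then across languages of a family, then across families.
# The tiny vectors and centroid collapse stand in for a real clustering
# step over multilingual token representations.

def centroid(vectors):
    """Average a list of equal-length vectors component-wise."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def cluster_stage(groups):
    """Collapse each named group of vectors to its centroid."""
    return {key: centroid(vecs) for key, vecs in groups.items()}

# Stage 1: within single languages.
lang_level = {}
lang_level.update(cluster_stage({"city": [(1.0, 0.0), (0.9, 0.1)]}))   # English
lang_level.update(cluster_stage({"stadt": [(0.8, 0.2)]}))              # German

# Stage 2: across languages of one family (here, Germanic).
family_level = cluster_stage(
    {"CITY_germanic": [lang_level["city"], lang_level["stadt"]]}
)

# Stage 3 would merge family-level centroids across families the same way;
# the resulting "pseudo-semantic" vector replaces key sentence constituents.
```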

    DAFNet: Dynamic Auxiliary Fusion for Sequential Model Editing in Large Language Models

    Recently, while large language models (LLMs) have demonstrated impressive results, they still suffer from hallucination, i.e., the generation of false information. Model editing is the task of fixing factual mistakes in LLMs; yet, most previous works treat it as a one-time task, paying little attention to the ever-emerging mistakes generated by LLMs. We address the task of sequential model editing (SME), which aims to rectify mistakes continuously. A Dynamic Auxiliary Fusion Network (DAFNet) is designed to enhance the semantic interaction among the factual knowledge within the entire sequence, preventing catastrophic forgetting during the editing of multiple knowledge triples. Specifically, (1) for semantic fusion within a relation triple, we aggregate the intra-editing attention flow into auto-regressive self-attention with token-level granularity in LLMs, and further leverage multi-layer diagonal inter-editing attention flow to update the weighted representations at sequence-level granularity. (2) Considering that auxiliary parameters are required to store the knowledge for sequential editing, we construct a new dataset named DAFSet, fulfilling recent, popular, long-tail, and robust properties to enhance the generality of sequential editing. Experiments show that DAFNet significantly outperforms strong baselines in single-turn and sequential editing. The usage of DAFSet also consistently improves the performance of other auxiliary-network-based methods in various scenarios. Comment: ACL 2024 Findings.

    On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models

    Retrieval augmented generation (RAG) exhibits outstanding performance in promoting the knowledge capabilities of large language models (LLMs) with retrieved documents related to user queries. However, RAG only focuses on improving the response quality of LLMs by enhancing queries indiscriminately with retrieved information, paying little attention to what type of knowledge LLMs really need to answer original queries more accurately. In this paper, we suggest that long-tail knowledge is crucial for RAG, as LLMs have already memorized common world knowledge during large-scale pre-training. Based on this observation, we propose a simple but effective long-tail knowledge detection method for LLMs. Specifically, a novel Generative Expected Calibration Error (GECE) metric is derived to measure the "long-tailness" of knowledge based on both statistics and semantics. We then retrieve relevant documents and infuse them into the model to patch knowledge loopholes only when the input query relates to long-tail knowledge. Experiments show that, compared to existing RAG pipelines, our method achieves over a 4x speedup in average inference time and consistent performance improvement in downstream tasks.
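    The selective-retrieval idea above can be sketched as a gate on a long-tailness score. The real GECE metric combines statistics and semantics in ways the abstract does not detail, so the inverse-frequency score below is purely a placeholder assumption.

```python
# Toy sketch of selective retrieval: augment the prompt with documents
# only when a long-tailness score clears a threshold. The inverse term
# frequency used here is a placeholder for the paper's GECE metric.

from collections import Counter

def long_tail_score(query, term_counts):
    """Average inverse frequency of the query terms; rarer terms
    (more 'long-tail') yield a higher score."""
    terms = query.lower().split()
    if not terms:
        return 0.0
    return sum(1.0 / (term_counts.get(t, 0) + 1) for t in terms) / len(terms)

def answer(query, retriever, llm, term_counts, threshold=0.5):
    """Retrieve only for long-tail queries; otherwise rely on the LLM's
    parametric knowledge, saving retrieval and long-context cost."""
    if long_tail_score(query, term_counts) >= threshold:
        prompt = "\n".join(retriever(query)) + "\n\nQ: " + query
    else:
        prompt = "Q: " + query
    return llm(prompt)

# Toy corpus statistics standing in for pre-training term frequencies.
corpus = "the capital of france is paris " * 50 + "obscure enzyme kinetics"
term_counts = Counter(corpus.split())
```

    Skipping retrieval for common-knowledge queries is what yields the reported inference-time savings: the expensive retrieve-and-concatenate path runs only on the long-tail fraction of traffic.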

    ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data

    Radiology report generation, as a key step in medical image analysis, is critical to quantitative, clinically informed decision-making. However, complex and diverse radiology reports with cross-source heterogeneity pose a huge generalizability challenge to current methods under massive data volumes, mainly because the style and normativity of radiology reports differ markedly among institutions, inspected body regions, and radiologists. Recently, the advent of large language models (LLMs) offers great potential for recognizing signs of health conditions. To resolve the above problem, we collaborate with the Second Xiangya Hospital in China and propose ChatRadio-Valuer, an LLM-based model tailored for automatic radiology report generation that learns generalizable representations and provides a basis pattern for model adaptation in sophisticated analysts' cases. Specifically, ChatRadio-Valuer is trained on radiology reports from a single institution by means of supervised fine-tuning, and then adapted to disease diagnosis tasks for human multi-system evaluation (i.e., chest, abdomen, musculoskeletal, head, and maxillofacial & neck) from six different institutions in clinical-level events. The clinical dataset utilized in this study encompasses a total of 332,673 observations. Comprehensive results on engineering indicators, clinical efficacy, and deployment cost metrics show that ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4, in disease diagnosis from radiology reports. ChatRadio-Valuer provides an effective avenue to boost model generalization performance and alleviate the annotation workload of experts, enabling the promotion of clinical AI applications in radiology reports.

    Research priorities to reduce the impact of COVID-19 in low- and middle-income countries

    Background: The COVID-19 pandemic has caused disruptions to the functioning of societies and their health systems. Prior to the pandemic, health systems in low- and middle-income countries (LMIC) were particularly stretched and vulnerable. The International Society of Global Health (ISoGH) sought to systematically identify priorities for health research that would have the potential to reduce the impact of the COVID-19 pandemic in LMICs. Methods: The Child Health and Nutrition Research Initiative (CHNRI) method was used to identify COVID-19-related research priorities. All ISoGH members were invited to participate. Seventy-nine experts in clinical, translational, and population research contributed 192 research questions for consideration. Fifty-two experts then scored those questions based on five pre-defined criteria that were selected for this exercise: 1) feasibility and answerability; 2) potential for burden reduction; 3) potential for a paradigm shift; 4) potential for translation and implementation; and 5) impact on equity. Results: Among the top 10 research priorities, research questions related to vaccination were prominent: health care system access barriers to equitable uptake of COVID-19 vaccination (ranked 1st), determinants of vaccine hesitancy (4th), development and evaluation of effective interventions to decrease vaccine hesitancy (5th), and vaccination impacts on vulnerable population/s (6th). Health care delivery questions also ranked highly, including: effective strategies to manage COVID-19 globally and in LMICs (2nd) and integrating health care for COVID-19 with other essential health services in LMICs (3rd). Additionally, the assessment of COVID-19 patients’ needs in rural areas of LMICs was ranked 7th, and studying the leading socioeconomic determinants and consequences of the COVID-19 pandemic in LMICs using multi-faceted approaches was ranked 8th. 
The remaining questions in the top 10 were: clarifying paediatric case-fatality rates (CFR) in LMICs and identifying effective strategies for community engagement against COVID-19 in different LMIC contexts. Interpretation: Health policy and systems research to inform COVID-19 vaccine uptake and equitable access to care are urgently needed, especially for rural, vulnerable, and/or marginalised populations. This research should occur in parallel with studies that will identify approaches to minimise vaccine hesitancy and effectively integrate care for COVID-19 with other essential health services in LMICs. ISoGH calls on the funders of health research in LMICs to consider the urgency and priority of this research during the COVID-19 pandemic and support studies that could make a positive difference for the populations of LMICs
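    As a rough illustration of the CHNRI-style prioritization described above, each question can be scored by averaging expert scores per criterion and then averaging across the five criteria. The 1 / 0.5 / 0 scoring scale and equal criterion weights are assumptions of this sketch, not details taken from this abstract.

```python
# Toy sketch of CHNRI-style aggregation: each expert scores a research
# question on each criterion (assumed 1 = yes, 0.5 = unsure, 0 = no);
# the question's priority score is the mean of its per-criterion means.

def research_priority_score(expert_scores):
    """expert_scores: one list per expert, one value per criterion."""
    n_criteria = len(expert_scores[0])
    criterion_means = [
        sum(expert[c] for expert in expert_scores) / len(expert_scores)
        for c in range(n_criteria)
    ]
    return sum(criterion_means) / n_criteria

scores = [
    [1, 1, 0.5, 1, 0.5],  # expert A across the five criteria
    [1, 0.5, 0.5, 1, 0],  # expert B
]
# Questions are then ranked by this score to produce the top-10 list.
```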