
    PISA: A measure of Preference In Selection of Arguments to model verb argument recoverability

    Our paper offers a computational model of the semantic recoverability of verb arguments, tested in particular on direct objects and Instruments. Our fully distributional model is intended to improve on older taxonomy-based models, which require a lexicon in addition to the training corpus. We computed the selectional preferences of 99 transitive verbs and 173 Instrument verbs as the mean value of the pairwise cosine similarity between their arguments (a weighted mean between all the arguments, or an unweighted mean over the top-most k arguments). Results show that our model can predict the recoverability of objects and Instruments, providing a similar result to that of taxonomy-based models but at a much cheaper computational cost
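
    The selectional-preference measure described above reduces to a mean of pairwise cosine similarities over a verb's argument vectors. Below is a minimal sketch of that computation; the embedding source, the frequency weights and the top-k cut-off are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch of the pairwise-cosine selectional-preference measure.
# Argument vectors, weights and the top-k variant are illustrative assumptions.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def selectional_preference(arg_vectors, weights=None, top_k=None):
    """Mean pairwise cosine similarity between a verb's argument vectors.

    arg_vectors : list of 1-D numpy arrays, assumed sorted by corpus frequency
    weights     : optional per-argument weights (e.g. frequencies); each pair
                  then contributes weight w_i * w_j to the mean
    top_k       : if given, keep only the first k arguments and use an
                  unweighted mean (the "top-most k" variant)
    """
    if top_k is not None:
        arg_vectors, weights = arg_vectors[:top_k], None
    sims, pair_w = [], []
    for i in range(len(arg_vectors)):
        for j in range(i + 1, len(arg_vectors)):
            sims.append(cosine(arg_vectors[i], arg_vectors[j]))
            pair_w.append(1.0 if weights is None else weights[i] * weights[j])
    return float(np.average(sims, weights=pair_w)) if sims else 0.0
```

    A higher score simply means that the verb's observed arguments cluster tightly in distributional space; the abstract reports that this preference score can be used to predict argument recoverability.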

    Molecular Diagnosis of Malaria Infection: A Survey in a Hospital in Central Italy

    Malaria is a dramatic disease caused by protozoan parasites of the genus Plasmodium. Diagnosis is mainly based on microscopy and rapid diagnostic tests (RDTs). Molecular approaches based on PCR techniques may be an alternative tool, particularly favourable in regions with declining prevalence. This work aimed to assess the pros and cons of molecular diagnosis of malaria in a district of Central Italy where several tens of imported malaria cases are diagnosed every year. Thirty-three blood samples were analysed by microscopy, RDT and molecular techniques to compare their relative efficiency in malaria diagnosis. Molecular analysis and microscopy diagnosed 32 out of 33 samples as positive for malaria, while RDT detected only 29. Larger differences concerned the diagnosis of mixed infections. Our findings underline the importance of the molecular approach in supporting and improving malaria diagnosis. In the cases presented here, molecular analysis was particularly useful to reveal the presence of parasites in infections not detectable by blood smear analysis and to resolve real and/or presumed mixed infections

    Evaluation of Natural Language Tools for Italian: EVALITA 2007

    EVALITA 2007, the first edition of the initiative devoted to the evaluation of Natural Language Processing tools for Italian, provided a shared framework where participants' systems could be evaluated on five different tasks, namely Part of Speech Tagging (organised by the University of Bologna), Parsing (organised by the University of Torino), Word Sense Disambiguation (organised by CNR-ILC, Pisa), Temporal Expression Recognition and Normalization (organised by CELCT, Trento), and Named Entity Recognition (organised by FBK, Trento). We believe that the diffusion of shared tasks and shared evaluation practices is a crucial step towards the development of resources and tools for Natural Language Processing. Experiences of this kind, in fact, are a valuable contribution to the validation of existing models and data, allowing for consistent comparisons among approaches and among representation schemes. The good response obtained by EVALITA, both in the number of participants and in the quality of results, showed that pursuing such goals is feasible not only for English, but also for other languages

    A Cloud-Native Web Application for Assisted Metadata Generation and Retrieval: THESPIAN-NER

    Within the context of the Competence Centre for the Conservation of Cultural Heritage (4CH) project, the design and deployment of a platform-as-a-service cloud infrastructure for the first European competence centre for cultural heritage (CH) has begun, and some web services have been integrated into the platform. The first integrated service is THESPIAN-Mask, the INFN-CHNet web application for FAIR storage of scientific analyses on CH. It is based on a CIDOC-CRM-compatible ontology, CRMhs, which describes the scientific metadata. To ease the process of metadata generation and data injection, another web service has been developed: THESPIAN-NER. It is a tool based on a deep neural network for named entity recognition (NER), enabling users to upload report files written in Italian and obtain labelled entities. Those entities are used as keywords, either to build (semi)automatic custom queries against the database or to fill in (part of) the metadata form as descriptors for the file to be uploaded. The services have been made freely available in the 4CH PaaS cloud platform
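
    As a rough illustration of how NER output can drive metadata generation of this kind, the sketch below runs a generic token-classification pipeline over an Italian report and keeps high-confidence entities as keywords. The model identifier is a placeholder and the thresholds are assumptions; THESPIAN-NER's own network, label set and API are not described in this abstract.

```python
# Illustrative sketch only: a generic Hugging Face token-classification
# pipeline stands in for the THESPIAN-NER service described above.
from transformers import pipeline

MODEL_ID = "path-or-hub-id-of-an-italian-ner-model"  # placeholder, not the real service model

ner = pipeline("ner", model=MODEL_ID, aggregation_strategy="simple")

def extract_keywords(report_text: str, min_score: float = 0.8) -> list[str]:
    """Return unique labelled entities to use as query keywords or metadata descriptors."""
    entities = ner(report_text)
    return sorted({e["word"] for e in entities if e["score"] >= min_score})

# The resulting keywords could then pre-fill (part of) a metadata form or be
# assembled into a custom database query, as the abstract describes.
```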

    Head Nurse Leadership: Facilitators and Barriers to Adherence to Infection Prevention and Control Programs—A Qualitative Study Protocol

    Background: The effective management of Healthcare-Associated Infections (HAIs) relies on the implementation of good practice across the entire multidisciplinary team. The organizational context and the role of head nurses influence the team's performance and behavior. Understanding how decision-making processes influence healthcare professionals' behavior in the management of HAIs could help identify alternative interventions for reducing the risk of infection in healthcare organizations. This study aims to explore how the behaviors promoted and actions implemented by the head nurse can influence healthcare professionals' adherence to Infection Prevention and Control (IPC) programs. Methods: A multi-center qualitative study will be conducted using a Grounded Theory approach. Observations will be conducted, followed by individual interviews and/or focus groups. A constructive and representative sample of healthcare professionals who care directly for patients will be enrolled in the study. The COnsolidated criteria for REporting Qualitative research (COREQ) checklist will be followed to ensure the quality of this study protocol. A multistep inductive process will be used to analyze the data. Conclusions: The study results will provide an understanding of how nurses perceive the influence of leadership and how they modify their behaviors and activities toward patients according to IPC programs. The study will identify barriers and facilitators to IPC compliance and suggest strategies to minimize negative patient outcomes, such as the development of an HAI

    Strategies to improve the performances of bakery products made from ancient wheats

    The growing consumers' attention to the inclusion of foods able to provide health benefits in one's diet is currently a theme of fundamental importance. Among these products, ancient wheats and whole wheat flours seem to be the most appealing in the cereal industry thanks to their nutritional content. Nevertheless, ancient wheats show worse rheological and technological performances compared to modern cultivars, in particular when using whole wheat flour. According to Migliorini et al. (2016), the content of starch and protein is strongly influenced by annual variability and agronomic practices. This highlights the need for further investigation to understand the relationship between different agronomic practices and the rheological and technological properties of flours and doughs made from ancient wheats. Furthermore, the greatest challenge for the bakery industry remains the improvement of the technological properties of bakery products made from ancient wheats. In this paper, some strategies aimed at facing this challenge are proposed. Starting from the improvement of the rheological properties of doughs made from ancient wheat, Cappelli et al. (2018) provided a rheological study that allows the optimal water content to be identified, through models represented by level-curve diagrams. Moreover, regarding the improvement of bakery products based on ancient wheat, sourdough fermentation (Saa et al. 2017) and the reduction of free lipids in the dough (Collar & Angioloni, 2014) seem to be the most interesting strategies. Finally, future strategies aimed at improving the technological properties of bakery products made from ancient wheats relate to the assessment of the suitability and bread-making aptitude of ancient wheat flours blended with the most interesting and innovative protein sources, i.e. legume and insect flours

    Years of life that could be saved from prevention of hepatocellular carcinoma

    BACKGROUND: Hepatocellular carcinoma (HCC) causes premature death and loss of life expectancy worldwide. Its primary and secondary prevention can result in a significant number of years of life saved. AIM: To assess how many years of life are lost after HCC diagnosis. METHODS: Data from 5346 patients with a first HCC diagnosis were used to estimate lifespan and the number of years of life lost after tumour onset, using a semi-parametric extrapolation with an age-, sex- and year-of-onset-matched population derived from national life tables as reference. RESULTS: Between 1986 and 2014, HCC led to an average of 11.5 years of life lost for each patient. The youngest age-quartile group (18-61 years) had the highest number of years of life lost, representing approximately 41% of the overall benefit obtainable from prevention. Advancements in HCC management have progressively reduced the number of years of life lost from 12.6 years in 1986-1999, to 10.7 in 2000-2006 and 7.4 years in 2007-2014. Currently, an HCC diagnosis with a single tumour <2 cm results in 3.7 years of life lost, while a diagnosis with a single tumour ≥2 cm, or 2/3 nodules still within the Milan criteria, results in 5.0 years of life lost, representing the loss of only approximately 5.5% and 7.2%, respectively, of the entire lifespan from birth. CONCLUSIONS: Hepatocellular carcinoma occurrence results in the loss of a considerable number of years of life, especially for younger patients. In recent years, the increased possibility of effectively treating this tumour has improved life expectancy, thus reducing years of life lost
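
    Conceptually, the years-of-life-lost figure for each patient is the gap between the residual life expectancy of a matched general-population individual and the patient's expected survival after diagnosis. The sketch below shows only that arithmetic under simplified assumptions; it does not reproduce the study's semi-parametric extrapolation, and the numbers are illustrative.

```python
# Simplified sketch of the years-of-life-lost (YLL) arithmetic, assuming the
# matched life-table expectancy and the extrapolated post-diagnosis survival
# are already available; values are illustrative, not study data.
from dataclasses import dataclass

@dataclass
class Patient:
    residual_life_expectancy: float    # years, from age/sex/year-matched national life table
    expected_survival_after_dx: float  # years, from an extrapolated post-diagnosis survival curve

def years_of_life_lost(p: Patient) -> float:
    """YLL = matched general-population expectancy minus expected survival after diagnosis."""
    return max(p.residual_life_expectancy - p.expected_survival_after_dx, 0.0)

cohort = [Patient(25.0, 13.5), Patient(18.0, 8.0), Patient(10.0, 6.5)]  # illustrative values
mean_yll = sum(years_of_life_lost(p) for p in cohort) / len(cohort)
print(f"Mean years of life lost: {mean_yll:.1f}")
```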

    Non-Alcoholic Fatty Liver Disease and Risk of Macro- and Microvascular Complications in Patients with Type 2 Diabetes

    Non-alcoholic fatty liver disease (NAFLD) is considered the hepatic manifestation of metabolic syndrome. To date, NAFLD is the most frequent chronic liver disease seen in day-to-day clinical practice across most high-income countries, affecting nearly 25-30% of adults in the general population and up to 70% of patients with type 2 diabetes mellitus (T2DM). Over the last few decades, it has clearly emerged that NAFLD is a "multisystemic disease" and that the leading cause of death among patients with NAFLD is cardiovascular disease (CVD). Indeed, several observational studies and some meta-analyses have documented that NAFLD, especially in its advanced forms, is strongly associated with fatal and non-fatal cardiovascular events, as well as with specific cardiac complications, including sub-clinical myocardial alteration and dysfunction, heart valve diseases and cardiac arrhythmias. Importantly, across various studies, these associations remained significant after adjustment for established cardiovascular risk factors and other confounders. Additionally, several observational studies and some meta-analyses have also reported that NAFLD is independently associated with specific microvascular conditions, such as chronic kidney disease and distal or autonomic neuropathy. Conversely, data regarding a potential association between NAFLD and retinopathy are scarce and often conflicting. This narrative review will describe the current evidence on the association between NAFLD and the risk of macro- and microvascular manifestations of CVD, especially in patients with T2DM. We will also briefly discuss the biological mechanisms underpinning the association between NAFLD (and its advanced forms) and macro- and microvascular CVD

    Predicting gene expression levels from DNA sequences and post-transcriptional information with transformers

    Background and objectives: In recent years, the prediction of gene expression levels has become crucial due to its potential clinical applications. In this context, Xpresso and other methods based on Convolutional Neural Networks and Transformers were first proposed for this aim. However, all these methods embed data with a standard one-hot encoding algorithm, resulting in extremely sparse matrices. In addition, post-transcriptional regulation processes, which are of the utmost importance for gene expression, are not considered in the model. Methods: This paper presents Transformer DeepLncLoc, a novel method to predict the abundance of mRNA (i.e., gene expression levels) by processing gene promoter sequences, treating the problem as a regression task. The model exploits a transformer-based architecture, introducing the DeepLncLoc method to perform the data embedding. Since DeepLncLoc is based on the word2vec algorithm, it avoids the sparse-matrix problem. Results: Post-transcriptional information related to mRNA stability and transcription factors is included in the model, leading to significantly improved performance compared to state-of-the-art works. Transformer DeepLncLoc reached an R² of 0.76, compared to 0.74 for Xpresso. Conclusion: The multi-headed attention mechanism that characterizes the transformer is well suited to modelling the interactions between DNA locations, outperforming recurrent models. Finally, the integration of transcription factor data in the pipeline leads to impressive gains in predictive power
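
    A conceptual sketch of such a pipeline is given below: promoter sequences are tokenized into k-mers, embedded with word2vec in place of one-hot vectors (standing in for the DeepLncLoc embedding step), and passed together with post-transcriptional features to a small transformer regressor. The sequences, hyperparameters and feature dimensions are illustrative assumptions, not the authors' configuration.

```python
# Conceptual sketch of the kind of pipeline the abstract describes, not the
# authors' code: k-mer word2vec embeddings of promoter sequences plus
# post-transcriptional features (mRNA stability, transcription factors)
# feeding a small transformer encoder that regresses expression level.
import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

def kmers(seq: str, k: int = 6) -> list[str]:
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy promoter sequences; dense k-mer embeddings avoid sparse one-hot matrices.
promoters = ["ACGTACGTAGCTTGCAACGT", "TTGACGTTAACGGTCATCGA"]
w2v = Word2Vec([kmers(s) for s in promoters], vector_size=64, window=5, min_count=1)

def embed(seq: str, k: int = 6) -> torch.Tensor:
    vecs = np.array([w2v.wv[km] for km in kmers(seq, k)], dtype=np.float32)
    return torch.from_numpy(vecs).unsqueeze(0)           # (1, seq_len, 64)

class ExpressionRegressor(nn.Module):
    def __init__(self, d_model: int = 64, n_extra: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # n_extra: mRNA-stability / transcription-factor features appended after pooling
        self.head = nn.Sequential(nn.Linear(d_model + n_extra, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, kmer_embeddings: torch.Tensor, extra: torch.Tensor) -> torch.Tensor:
        h = self.encoder(kmer_embeddings)                 # multi-headed self-attention over positions
        pooled = h.mean(dim=1)                            # mean-pool over sequence positions
        return self.head(torch.cat([pooled, extra], dim=1)).squeeze(-1)

model = ExpressionRegressor()
prediction = model(embed(promoters[0]), torch.zeros(1, 8))  # predicted expression level (untrained)
```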