
    Standards of lithium monitoring in mental health trusts in the UK

    Background: Lithium is a commonly prescribed drug with a narrow therapeutic index and recognised adverse effects on the kidneys and thyroid. Clinical guidelines for the management of bipolar affective disorder published by the National Institute for Health and Clinical Excellence (NICE) recommend checks of renal and thyroid function before lithium is prescribed. They further recommend that all patients who are prescribed lithium should have their renal and thyroid function checked every six months, and their serum lithium checked every three months. Adherence to these recommendations has not been subject to national UK audit.
    Methods: The Prescribing Observatory for Mental Health (POMH-UK) invited all National Health Service Mental Health Trusts in the UK to participate in a benchmarking audit of lithium monitoring against recommended standards. Data were collected retrospectively from clinical records and submitted electronically.
    Results: 436 clinical teams from 38 Trusts submitted data for 3,373 patients. In patients who had recently started lithium, there was a documented baseline measure of renal or thyroid function in 84% and 82% of cases respectively. For patients prescribed lithium for a year or more, the NICE standards for monitoring serum lithium levels, renal function and thyroid function were met in 30%, 55% and 50% of cases respectively.
    Conclusions: The quality of lithium monitoring in patients who are in contact with mental health services falls short of recognised standards and targets. Findings from this audit, along with reports of harm received by the National Patient Safety Agency, prompted a Patient Safety Alert mandating primary care, mental health and acute Trusts, and laboratory staff to work together to ensure that systems are in place to support recommended lithium monitoring by December 2010.
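    Purely as an illustration of the interval rules the abstract quotes (serum lithium every three months, renal and thyroid function every six months), adherence for a single patient can be expressed as a simple date-gap check. The sketch below is a minimal, hypothetical example; the function, field names and thresholds are our own assumptions and do not describe the audit's actual data extraction.

```python
from datetime import date, timedelta

# Hypothetical maximum intervals corresponding to the NICE recommendations quoted above.
MAX_INTERVALS = {
    "serum_lithium": timedelta(days=91),    # roughly every three months
    "renal_function": timedelta(days=183),  # roughly every six months
    "thyroid_function": timedelta(days=183),
}

def meets_standard(test_dates: list[date], max_interval: timedelta, audit_end: date) -> bool:
    """Return True if no gap between consecutive tests (or before the audit end) exceeds max_interval."""
    if not test_dates:
        return False
    dates = sorted(test_dates)
    gaps = [b - a for a, b in zip(dates, dates[1:])] + [audit_end - dates[-1]]
    return all(gap <= max_interval for gap in gaps)

# Example: a patient whose serum lithium was checked three times over a year.
checks = [date(2009, 1, 10), date(2009, 5, 20), date(2009, 11, 2)]
print(meets_standard(checks, MAX_INTERVALS["serum_lithium"], date(2010, 1, 1)))  # False: gaps exceed 3 months
```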

    Global Value Chains and the Great Recession: Evidence from Italian and German Firms

    During the last two decades, profound changes in the international division of labour among firms have occurred, with impressive growth in outsourcing, off-shoring of some stages of production and the globalization of intermediate goods markets. This new model of the international division of labour has both initiated an increasing variety of relationships among producers and spurred the development of Global Value Chains. According to some recent research, Global Value Chains have been one of the main transmission mechanisms of the Great Trade Collapse that severely and simultaneously hit all OECD countries in 2009. Pervasive as it has been, the impact of the crisis on firms involved in Global Value Chains also appears to have been highly heterogeneous. Our paper contributes to this very recent and ongoing debate by describing the effects of the crisis from a perspective that is both country-comparative, Germany and Italy being the countries taken into consideration, and firm-level, as we pay particular attention to the positioning of firms along Global Value Chains (i.e., whether they are intermediate or final firms) and to their strategies. The main conclusions are threefold: i) intermediate firms were hit by the crisis more than final firms; ii) among intermediate firms, those that had carried out innovation activities in the previous period (before 2008) were somewhat sheltered from the effects of the crisis; iii) firms' positioning in GVCs and their strategies may help to explain the Italy-Germany performance gap.

    Differential entropy and time

    We give a detailed analysis of the Gibbs-type entropy notion and its dynamical behavior in the case of time-dependent continuous probability distributions of varied origins, related to classical and quantum systems. The purpose-dependent usage of the conditional Kullback-Leibler and Gibbs (Shannon) entropies is explained for non-equilibrium Smoluchowski processes. The very different temporal behaviors of the Gibbs and Kullback entropies are contrasted. A specific conceptual niche is addressed, where quantum von Neumann, classical Kullback-Leibler and Gibbs entropies can be consistently introduced as information measures for the same physical system. If the dynamics of probability densities is driven by the Schrödinger-picture wave-packet evolution, Gibbs-type and related Fisher information functionals appear to quantify nontrivial power transfer processes in the mean. This observation is found to extend to classical dissipative processes and supports the view that the Shannon entropy dynamics provides an insight into physically relevant non-equilibrium phenomena that are inaccessible in terms of the Kullback-Leibler entropy and typically ignored in the literature. Comment: Final, unabridged version; http://www.mdpi.org/entropy/. Dedicated to Professor Rafael Sorkin on his 60th birthday.
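    For orientation only, the information measures the abstract contrasts can be written for a time-dependent probability density rho(x,t) as below; the reference density m(x) in the conditional Kullback-Leibler entropy is our own illustrative choice (e.g. an equilibrium density), not something fixed by the abstract.

```latex
% Gibbs (Shannon) differential entropy of a time-dependent density \rho(x,t)
S(t) = -\int \rho(x,t)\,\ln \rho(x,t)\,dx ,
% conditional Kullback-Leibler entropy relative to an assumed reference density m(x)
\mathcal{H}_{c}(t) = -\int \rho(x,t)\,\ln\!\frac{\rho(x,t)}{m(x)}\,dx ,
% Fisher information functional of the same density
\mathcal{F}(t) = \int \frac{\bigl(\partial_{x}\rho(x,t)\bigr)^{2}}{\rho(x,t)}\,dx .
```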

    Machine Learning under the light of Phraseology expertise: use case of presidential speeches, De Gaulle - Hollande (1958-2016)

    Author identification and text genesis have always been a hot topic for the statistical analysis of textual data community. Recent advances in machine learning have seen the emergence of machines competing with state-of-the-art computational linguistic methods on specific natural language processing tasks (part-of-speech tagging, chunking, parsing, etc.). In particular, Deep Linguistic Architectures are based on knowledge of language specificities such as grammar or semantic structure. These models are considered the most competitive thanks to their assumed ability to capture syntax. However, while these methods have proven their efficiency, their underlying mechanisms, from both a theoretical and an empirical point of view, remain hard to make explicit and to keep stable, which restricts their area of application. Our work sheds light on the mechanisms involved in deep architectures when they are applied to Natural Language Processing (NLP) tasks. The Query-By-Dropout-Committee (QBDC) algorithm is an active learning technique we have designed for deep architectures: it iteratively selects the most relevant samples to be added to the training set, so that the model improves the most when rebuilt from the new training set. In this article, however, we do not go into the details of the QBDC algorithm, as it has already been studied in the original QBDC article; instead we confront the relevance of the sentences chosen by our active strategy with state-of-the-art phraseology techniques. We have thus conducted experiments on the presidential discourses of presidents C. De Gaulle, N. Sarkozy and F. Hollande in order to demonstrate the interest of our active deep learning method for discourse author identification, and to analyze the linguistic patterns extracted by our artificial approach compared to standard phraseology techniques.
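    The abstract does not spell out the QBDC selection rule, only that a dropout-based committee picks the most informative samples. As a rough sketch of that general idea under our own assumptions (a committee built from repeated stochastic dropout passes, disagreement measured as the variance of predicted class probabilities), the following minimal example is illustrative and does not reproduce the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def committee_predictions(predict_with_dropout, pool, n_members=10):
    """Build a committee by running the same stochastic (dropout-enabled) model several times.

    predict_with_dropout(pool) is assumed to return class probabilities of shape
    (n_samples, n_classes), with dropout active so that repeated calls differ.
    """
    return np.stack([predict_with_dropout(pool) for _ in range(n_members)])  # (members, samples, classes)

def select_most_disagreed(committee, k=5):
    """Pick the k pool samples on which committee members disagree most
    (here: highest mean variance of the predicted class probabilities)."""
    disagreement = committee.var(axis=0).mean(axis=-1)  # one score per pool sample
    return np.argsort(disagreement)[::-1][:k]

# Toy stand-in for a dropout-enabled sentence classifier over 3 candidate authors:
# fixed base probabilities perturbed by noise on every call.
base = rng.dirichlet(np.ones(3), size=100)  # 100 unlabelled pool sentences
noisy_predict = lambda pool: np.clip(base + rng.normal(0, 0.05, base.shape), 1e-6, None)

committee = committee_predictions(noisy_predict, pool=None, n_members=10)
chosen = select_most_disagreed(committee, k=5)
print("indices of sentences to add to the training set:", chosen)
```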