
    Evaluation of Facebook and Twitter Monitoring to Detect Safety Signals for Medical Products: An Analysis of Recent FDA Safety Alerts

    INTRODUCTION: The rapid expansion of the Internet and computing power in recent years has opened up the possibility of using social media for pharmacovigilance. While this general concept has been proposed by many, central questions remain as to whether social media can provide earlier warnings for rare and serious events than traditional signal detection from spontaneous report data. OBJECTIVE: Our objective was to examine whether specific product–adverse event pairs were reported via social media before being reported to the US FDA Adverse Event Reporting System (FAERS). METHODS: A retrospective analysis of public Facebook and Twitter data was conducted for 10 recent FDA postmarketing safety signals at the drug–event pair level with six negative controls. Social media data corresponding to two years prior to signal detection of each product–event pair were compiled. Automated classifiers were used to identify each ‘post with resemblance to an adverse event’ (Proto-AE) among English language posts. A custom dictionary was used to translate Internet vernacular into Medical Dictionary for Regulatory Activities (MedDRA®) Preferred Terms. Drug safety physicians conducted a manual review to determine causality using World Health Organization–Uppsala Monitoring Centre (WHO-UMC) assessment criteria. Cases were also compared with those reported in FAERS. FINDINGS: A total of 935,246 posts were harvested from Facebook and Twitter from March 2009 through October 2014. The automated classifier identified 98,252 Proto-AEs. Of these, 13 posts were selected for causality assessment of product–event pairs. Clinical assessment revealed that posts had sufficient information to warrant further investigation for two possible product–event associations: dronedarone–vasculitis and Banana Boat Sunscreen–skin burns. No product–event associations were found among the negative controls.
In one of the positive cases, the first report occurred in social media prior to signal detection from FAERS, whereas the other case occurred first in FAERS. CONCLUSIONS: An efficient semi-automated approach to social media monitoring may provide earlier insights into certain adverse events. More work is needed to elaborate additional uses for social media data in pharmacovigilance and to determine how they can be applied by regulatory agencies. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s40264-016-0491-0) contains supplementary material, which is available to authorized users.
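The dictionary-based step described in the methods, translating Internet vernacular into MedDRA® Preferred Terms, can be sketched as a simple phrase lookup. This is a minimal illustration, not the study's actual classifier; the dictionary entries, function name, and example posts below are all hypothetical.

```python
# Hypothetical vernacular-phrase -> MedDRA Preferred Term dictionary.
# Real dictionaries in this kind of pipeline contain thousands of entries.
VERNACULAR_TO_MEDDRA = {
    "threw up": "Vomiting",
    "can't sleep": "Insomnia",
    "skin is on fire": "Skin burning sensation",
}

def map_to_preferred_terms(post: str) -> list[str]:
    """Return MedDRA Preferred Terms whose vernacular phrase appears in the post."""
    text = post.lower()
    return [pt for phrase, pt in VERNACULAR_TO_MEDDRA.items() if phrase in text]

# Posts that yield at least one Preferred Term would be candidate Proto-AEs,
# to be passed on for manual causality review.
posts = [
    "took the new med last night and threw up for hours",
    "great day at the beach",
]
proto_ae_candidates = [p for p in posts if map_to_preferred_terms(p)]
```

In the study, posts flagged this way were only candidates; physicians still applied WHO-UMC criteria before treating any post as a possible adverse event.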

    Reducing the probability of false positive research findings by pre-publication validation – Experience with a large multiple sclerosis database

    Abstract

    Background: Published false positive research findings are a major problem in the process of scientific discovery. Many results in clinical research fail to replicate, and multiple sclerosis research is no exception. Our aim was to develop and implement a policy that reduces the probability of publishing false positive research findings. We assessed the utility of working with a pre-publication validation policy after several years of research in the context of a large multiple sclerosis database.

    Methods: The large database of the Sylvia Lawry Centre for Multiple Sclerosis Research was split in two parts: one for hypothesis generation and a validation part for confirmation of selected results. We present case studies from five completed projects that used the validation policy, along with results from a simulation study.

    Results: In one project, the "relapse and disability" project described in section II (example 3), findings could not be confirmed in the validation part of the database. The simulation study showed that the percentage of false positive findings can exceed 20% depending on variable selection.

    Conclusion: Over the past three years, the validation policy has prevented the publication of at least one research finding that could not be validated in an independent data set (and probably would have been a "true" false-positive finding), and has led to improved data analysis, statistical programming, and selection of hypotheses. These advantages outweigh the statistical power lost in the process.
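The core of the validation policy is a fixed partition of the database into a hypothesis-generation part and a held-out confirmation part. A minimal sketch of such a split follows; the function name, the 50/50 split ratio, and the fixed seed are illustrative assumptions, not details from the paper.

```python
import random

def split_database(record_ids, seed=42, validation_fraction=0.5):
    """Randomly partition record IDs into a hypothesis-generation set
    and a held-out validation set (reproducible via the fixed seed)."""
    rng = random.Random(seed)
    ids = list(record_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * (1 - validation_fraction))
    return ids[:cut], ids[cut:]

# Example: split 1,000 patient record IDs once, up front.
generation_set, validation_set = split_database(range(1000))
# Under this policy, a finding from generation_set is published only if it
# is confirmed on validation_set, which analysts never touch while
# generating hypotheses.
```

The key design point is that the split is made once, before any analysis, so the validation half cannot leak into variable selection, which is exactly the mechanism the simulation study shows can push false-positive rates above 20%.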