Missing data in trial-based cost-effectiveness analysis: An incomplete journey.
Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantial challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of, and the methods used to address, missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some form of sensitivity analysis, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses that explore departures from the missing-at-random assumption.
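As a hedged illustration of the practices the review tabulates (synthetic data; variable names such as arm, cost and qaly are invented), the short Python sketch below computes the proportion of participants with complete cost-effectiveness data and the complete-case incremental net benefit, the analysis most often used as the primary approach in the reviewed studies:
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Synthetic trial data for illustration: randomised arm and partially
    # observed costs and QALYs (variable names and values are invented).
    n = 300
    df = pd.DataFrame({"arm": rng.integers(0, 2, n)})
    df["qaly"] = 0.70 + 0.04 * df["arm"] + rng.normal(0, 0.10, n)
    df["cost"] = 2500 + 400 * df["arm"] + rng.normal(0, 500, n)
    df.loc[rng.random(n) < 0.20, "qaly"] = np.nan
    df.loc[rng.random(n) < 0.20, "cost"] = np.nan

    # Extent of missingness: share of participants with complete CEA data.
    complete = df[["cost", "qaly"]].notna().all(axis=1)
    print(f"Complete cases: {complete.mean():.0%}")

    # Complete-case analysis (the most common primary analysis in the review):
    # incremental net benefit at a willingness-to-pay of 20,000 per QALY.
    wtp = 20_000
    cc = df[complete]
    nb = wtp * cc["qaly"] - cc["cost"]
    inb = nb[cc["arm"] == 1].mean() - nb[cc["arm"] == 0].mean()
    print(f"Complete-case incremental net benefit: {inb:,.0f}")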
Modern approaches for evaluating treatment effect heterogeneity from clinical trials and observational data
In this paper we review recent advances in statistical methods for the evaluation of the heterogeneity of treatment effects (HTE), including subgroup identification and estimation of individualized treatment regimens, from randomized clinical trials and observational studies. We identify several types of approaches using the features introduced in Lipkovich, Dmitrienko and D'Agostino (2017) that distinguish the recommended principled methods from basic methods for HTE evaluation that typically rely on rules of thumb and general guidelines (the methods are often referred to as common practices). We discuss the advantages and disadvantages of various principled methods as well as common measures for evaluating their performance. We use simulated data and a case study based on a historical clinical trial to illustrate several new approaches to HTE evaluation.
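A minimal sketch of one generic principled approach of the kind reviewed (not the authors' specific method): a two-model ("T-learner") estimate of individualized treatment effects, summarised by a shallow tree to suggest candidate subgroups. The data, effect sizes and feature names are synthetic, and scikit-learn is assumed to be available:
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(0)

    # Synthetic randomised trial: treatment benefits only participants with x1 > 0.
    n = 2000
    X = rng.normal(size=(n, 3))
    treat = rng.integers(0, 2, n)
    y = X[:, 0] + treat * (1.0 * (X[:, 1] > 0)) + rng.normal(0, 1, n)

    # T-learner: fit separate outcome models in each arm, then contrast their
    # predictions to obtain individualized treatment-effect estimates.
    m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[treat == 1], y[treat == 1])
    m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[treat == 0], y[treat == 0])
    cate = m1.predict(X) - m0.predict(X)

    # Summarise the estimated effect surface with a shallow tree, which can
    # suggest interpretable candidate subgroups (here it should split on x1).
    tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, cate)
    print(export_text(tree, feature_names=["x0", "x1", "x2"]))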
Using principal stratification in analysis of clinical trials
The ICH E9(R1) addendum (2019) proposed principal stratification (PS) as one of five strategies for dealing with intercurrent events. Therefore, understanding the strengths, limitations, and assumptions of PS is important for the broad community of clinical trialists. Many approaches have been developed under the general framework of PS in different areas of research, including experimental and observational studies. These applications have utilized a diverse set of tools and assumptions, so there is a need to present them in a unifying manner. The goal of this tutorial is threefold. First, we provide a coherent and unifying description of PS. Second, we emphasize that estimation of effects within PS relies on strong assumptions, and we thoroughly examine the consequences of these assumptions to understand in which situations certain assumptions are reasonable. Finally, we provide an overview of a variety of key methods for PS analysis and use a real clinical trial example to illustrate them. Examples of code for implementing some of these approaches are given in the supplemental materials.
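As a small numerical companion (not taken from the paper's supplemental code), the sketch below estimates the effect in the principal stratum of compliers with the instrumental-variable (Wald) estimator, which identifies this quantity under the monotonicity and exclusion-restriction assumptions such tutorials examine; all data and names are synthetic:
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic trial with noncompliance: z = randomised assignment,
    # d = treatment actually received, y = outcome. Under monotonicity
    # (no defiers) and exclusion (assignment affects y only through d),
    # the Wald ratio identifies the complier average causal effect (CACE),
    # i.e. the effect in the principal stratum defined by potential treatment uptake.
    n = 5000
    z = rng.integers(0, 2, n)
    complier = rng.random(n) < 0.6          # 60% would comply with assignment
    d = np.where(complier, z, 0)            # never-takers ignore assignment
    y = 1.0 * d + 0.5 * complier + rng.normal(0, 1, n)

    itt_y = y[z == 1].mean() - y[z == 0].mean()   # ITT effect on the outcome
    itt_d = d[z == 1].mean() - d[z == 0].mean()   # ITT effect on treatment receipt
    cace = itt_y / itt_d
    print(f"ITT = {itt_y:.2f}, uptake difference = {itt_d:.2f}, CACE = {cace:.2f} (truth 1.0)")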
Sensitivity Analysis for Not-at-Random Missing Data in Trial-Based Cost-Effectiveness Analysis: A Tutorial
Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
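The sketch below is a simplified, hedged rendering of the approach the tutorial focuses on: multiply impute, then modify the imputed values to reflect plausible MNAR scenarios. It uses a crude regression imputation on synthetic data rather than the tutorial's software code, and the sensitivity parameter delta is a hypothetical downward shift applied to the imputed utilities of non-responders:
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Synthetic weight-loss-style trial: follow-up utility is missing more often
    # when health is poorer, i.e. an MNAR mechanism (all names/values invented).
    n = 500
    df = pd.DataFrame({"arm": rng.integers(0, 2, n)})
    df["base"] = np.clip(rng.normal(0.70, 0.10, n), 0, 1)
    util = np.clip(df["base"] + 0.05 * df["arm"] + rng.normal(0, 0.08, n), 0, 1)
    df["util"] = np.where(rng.random(n) < 0.20 + 0.50 * (util < 0.60), np.nan, util)

    def mi_effect(df, m=20, delta=0.0):
        """Regression-impute m times; subtract `delta` from imputed utilities (MNAR shift)."""
        obs = df.dropna()
        X = np.column_stack([np.ones(len(obs)), obs["arm"], obs["base"]])
        beta, *_ = np.linalg.lstsq(X, obs["util"].values, rcond=None)
        sd = np.std(obs["util"].values - X @ beta, ddof=3)
        estimates = []
        for _ in range(m):
            imp = df.copy()
            miss = imp["util"].isna().values
            Xm = np.column_stack([np.ones(miss.sum()), imp.loc[miss, "arm"], imp.loc[miss, "base"]])
            draws = Xm @ beta + rng.normal(0, sd, miss.sum())
            imp.loc[miss, "util"] = np.clip(draws - delta, 0, 1)   # modify imputed values only
            estimates.append(imp.loc[imp["arm"] == 1, "util"].mean()
                             - imp.loc[imp["arm"] == 0, "util"].mean())
        return float(np.mean(estimates))

    for delta in [0.00, 0.05, 0.10]:       # delta = 0 corresponds to the MAR analysis
        print(f"delta = {delta:.2f}: estimated effect on utility = {mi_effect(df, delta=delta):.3f}")
A fuller analysis would also pool variances with Rubin's rules and consider arm-specific or expert-elicited values of delta.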
On the use of the not-at-random fully conditional specification (NARFCS) procedure in practice.
The not-at-random fully conditional specification (NARFCS) procedure provides a flexible means for the imputation of multivariable missing data under missing-not-at-random conditions. Recent work has outlined difficulties with eliciting the sensitivity parameters of the procedure from expert opinion due to their conditional nature. Failure to adequately account for this conditioning will generate imputations that are inconsistent with the assumptions of the user. In this paper, we clarify the importance of correct conditioning of NARFCS sensitivity parameters and develop procedures to calibrate these sensitivity parameters by relating them to more easily elicited quantities, in particular, the sensitivity parameters from simpler pattern mixture models. Additionally, we consider how to include the missingness indicators as part of the imputation models of NARFCS, recommending that all of them be included in each model as default practice. Algorithms are developed to perform the calibration procedure and demonstrated on data from the Avon Longitudinal Study of Parents and Children, as well as with simulation studies.
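NARFCS itself is run with dedicated imputation software; purely to convey the two ideas above, the toy loop below sketches a fully conditional specification in which the other variable's missingness indicator enters each imputation model and a conditional sensitivity parameter (delta_y, an assumed value) is added to the imputed values of the variable treated as MNAR. It is a schematic illustration, not the NARFCS implementation or its calibration procedure:
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Toy data: x and y are correlated and both partially observed.
    n = 400
    x = rng.normal(0, 1, n)
    y = 0.8 * x + rng.normal(0, 0.6, n)
    df = pd.DataFrame({"x": x, "y": y})
    df.loc[rng.random(n) < 0.25, "x"] = np.nan
    df.loc[rng.random(n) < 0.25, "y"] = np.nan
    r = df.isna().astype(int)               # missingness indicators for x and y

    delta_y = 0.3                           # sensitivity parameter: y treated as MNAR
    imp = df.fillna(df.mean())              # crude starting values for the chained updates

    for _ in range(10):                     # fully conditional specification cycles
        for col, other, delta in [("x", "y", 0.0), ("y", "x", delta_y)]:
            miss = df[col].isna().values
            # Imputation model includes the other variable AND its missingness indicator.
            X = np.column_stack([np.ones(n), imp[other], r[other]])
            beta, *_ = np.linalg.lstsq(X[~miss], imp.loc[~miss, col].values, rcond=None)
            sd = np.std(imp.loc[~miss, col].values - X[~miss] @ beta, ddof=3)
            draws = X[miss] @ beta + rng.normal(0, sd, miss.sum())
            imp.loc[miss, col] = draws + delta   # offset is zero for x, so only y is shifted

    print(imp[["x", "y"]].mean())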
Clinical Validation of Novel Digital Measures: Statistical Methods for Reliability Evaluation
Background: Assessment of reliability is one of the key components of the validation process designed to demonstrate that a novel clinical measure assessed by a digital health technology tool is fit-for-purpose in clinical research, care, and decision-making. Reliability assessment contributes to characterization of the signal-to-noise ratio and measurement error and is the first indicator of potential usefulness of the proposed clinical measure. Summary: Methodologies for reliability analyses are scattered across the literature on validation of patient-reported outcomes (PROs), wet biomarkers, etc., yet are equally useful for digital clinical measures. We review a general modeling framework and statistical metrics typically used for reliability assessments as part of the clinical validation. We also present methods for the assessment of agreement and measurement error, alongside modified approaches for categorical measures. We illustrate the discussed techniques using physical activity data collected from clinical trial participants with a wearable device containing an accelerometer sensor. Key Messages: This paper provides statisticians and data scientists involved in the development and validation of novel digital clinical measures with an overview of the statistical methodologies and analytical tools for reliability assessment.
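As one concrete example of the metrics discussed (not code from the paper), the sketch below computes a two-way, single-measurement intraclass correlation, ICC(2,1), from ANOVA mean squares for a hypothetical test-retest table of wearable-derived step counts; the data and the assumed numbers of participants and sessions are invented:
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical test-retest data: daily step counts for n participants
    # measured in k repeated wear sessions with a wearable device.
    n, k = 40, 3
    true_activity = rng.normal(8000, 2000, n)                    # stable between-person signal
    data = true_activity[:, None] + rng.normal(0, 800, (n, k))   # within-person noise

    grand = data.mean()
    row_means = data.mean(axis=1)      # per-participant means
    col_means = data.mean(axis=0)      # per-session means

    # Two-way ANOVA mean squares
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)     # between subjects
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)     # between sessions
    sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))

    # ICC(2,1): two-way random effects, absolute agreement, single measurement
    icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    print(f"ICC(2,1) = {icc21:.2f}")
Values close to 1 indicate that between-participant differences dominate session-to-session measurement error.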
Defining Efficacy Estimands in Clinical Trials: Examples Illustrating ICH E9(R1) Guidelines
This paper provides examples of defining estimands in real-world scenarios following ICH E9(R1) guidelines. Detailed discussions on choosing the estimands and estimators can be found in our companion papers. Three scenarios of increasing complexity are illustrated. The first example is a proof-of-concept trial in major depressive disorder where the estimand is chosen to support the sponsor's decision on whether to continue development. The second and third examples are confirmatory trials in severe asthma and rheumatoid arthritis, respectively. We discuss the intercurrent events expected during each trial and how they can be handled so as to be consistent with the study objectives. The estimands discussed in these examples are not the only acceptable choices for their respective scenarios. The intent is to illustrate the key concepts rather than focus on specific choices. Emphasis is placed on following a study development process where estimands link the study objectives with data collection and analysis in a coherent manner, thereby avoiding a disconnect between objectives, estimands, and analyses.
Choosing Estimands in Clinical Trials: Putting the ICH E9(R1) Into Practice
The National Research Council (NRC) Expert Panel Report on Prevention and Treatment of Missing Data in Clinical Trials highlighted the need for clearly defining objectives and estimands. That report sparked considerable discussion and literature on estimands and how to choose them. Importantly, consideration moved beyond missing data to include all postrandomization events that have implications for estimating quantities of interest (intercurrent events, aka ICEs). The ICH E9(R1) draft addendum builds on that research to outline key principles in choosing estimands for clinical trials, primarily with a focus on confirmatory trials. This paper provides additional insights, perspectives, details, and examples to help put ICH E9(R1) into practice. Specific areas of focus include how the perspectives of different stakeholders influence the choice of estimands; the role of randomization and the intention-to-treat principle; defining the causal effects of a clearly defined treatment regimen, along with the implications this has for trial design and the generalizability of conclusions; detailed discussion of strategies for handling ICEs along with their implications and assumptions; estimands for safety objectives, time-to-event endpoints, early-phase and one-arm trials, and quality of life endpoints; and realistic examples of the thought process involved in defining estimands in specific clinical contexts.
A Guide to Handling Missing Data in Cost-Effectiveness Analysis Conducted Within Randomised Controlled Trials
The authors would like to thank Professor Adrian Grant and the team at the University of Aberdeen (Professor Craig Ramsay, Janice Cruden, Charles Boachie, Professor Marion Campbell and Seonaidh Cotton), who kindly allowed the REFLUX dataset to be used for this work, and Eldon Spackman for kindly sharing the Stata® code for calculating the probability that an intervention is cost effective following MI. The authors are grateful to the reviewers for their comments, which greatly improved this paper. M. G. is recipient of a Medical Research Council Early Career Fellowship in Economics of Health (grant number: MR/K02177X/1). I. R. W. was supported by the Medical Research Council [Unit Programme U105260558]. No specific funding was obtained to produce this paper. The authors declare no conflicts of interest.
Missing data are a frequent problem in cost-effectiveness analysis (CEA) within a randomised controlled trial. Inappropriate methods to handle missing data can lead to misleading results and ultimately can affect the decision of whether an intervention is good value for money. This article provides practical guidance on how to handle missing data in within-trial CEAs following a principled approach: (i) the analysis should be based on a plausible assumption for the missing data mechanism, i.e. whether the probability that data are missing is independent of or dependent on the observed and/or unobserved values; (ii) the method chosen for the base-case should fit with the assumed mechanism; and (iii) sensitivity analysis should be conducted to explore to what extent the results change with the assumption made. This approach is implemented in three stages, which are described in detail: (1) descriptive analysis to inform the assumption on the missing data mechanism; (2) how to choose between alternative methods given their underlying assumptions; and (3) methods for sensitivity analysis. The case study illustrates how to apply this approach in practice, including software code. The article concludes with recommendations for practice and suggestions for future research.
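To make stage (1) concrete, the hedged sketch below (synthetic data, hypothetical variable names) tabulates the extent of missing cost data by arm and models the probability of missingness given observed baseline covariates with a logistic regression; an association with observed values supports a MAR-type base-case, whereas dependence on unobserved values cannot be checked from the data and is left to sensitivity analysis:
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Synthetic within-trial CEA data: arm, baseline covariates, and a cost
    # outcome whose missingness depends on observed baseline characteristics.
    n = 400
    df = pd.DataFrame({
        "arm": rng.integers(0, 2, n),
        "age": rng.normal(62, 9, n),
        "base_util": np.clip(rng.normal(0.72, 0.12, n), 0, 1),
    })
    cost = 2800 + 350 * df["arm"] + rng.normal(0, 600, n)
    p_miss = 1 / (1 + np.exp(-(-2.2 + 0.02 * (df["age"] - 62) - 2.0 * (df["base_util"] - 0.72))))
    df["cost"] = np.where(rng.random(n) < p_miss, np.nan, cost)

    # Stage 1a: describe the extent and pattern of missingness by arm.
    df["cost_missing"] = df["cost"].isna().astype(int)
    print(df.groupby("arm")["cost_missing"].mean())

    # Stage 1b: model the probability of missingness given observed baseline data.
    X = sm.add_constant(df[["arm", "age", "base_util"]])
    fit = sm.Logit(df["cost_missing"], X).fit(disp=0)
    print(fit.summary())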
