Bayesian nonparametric hierarchical modeling for multiple membership data in grouped attendance interventions
We develop a dependent Dirichlet process (DDP) model for repeated measures
multiple membership (MM) data. This data structure arises in studies in
which an intervention is delivered to each client through a sequence of
elements that overlap with those of other clients on different occasions. Our
interest centers on study designs in which sequences overlap for clients who
receive an intervention in shared or grouped sessions whose memberships may
change over multiple treatment events. Our motivating
application focuses on evaluation of the effectiveness of a group therapy
intervention with treatment delivered through a sequence of cognitive
behavioral therapy session blocks, called modules. An open-enrollment protocol
permits entry of clients at the beginning of any new module in a manner that
may produce unique MM sequences across clients. We begin with a model that
adds client and multiple membership module random effect terms, which are
assumed independent. Our MM DDP model relaxes the assumption
of conditionally independent client and module random effects by specifying a
collection of random distributions for the client effect parameters that are
indexed by the unique set of module attendances. We demonstrate how this
construction facilitates examining heterogeneity in the relative effectiveness
of group therapy modules over repeated measurement occasions.

Comment: Published at http://dx.doi.org/10.1214/12-AOAS620 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
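A minimal sketch of the additive baseline model (notation ours, not the authors'): for client $i$ on measurement occasion $t$ with attended module set $\mathcal{M}_{it}$,

$$ y_{it} = x_{it}^{\top}\beta + \delta_{i} + \frac{1}{|\mathcal{M}_{it}|} \sum_{m \in \mathcal{M}_{it}} \gamma_{m} + \varepsilon_{it}, $$

where $\delta_i$ is the client random effect, $\gamma_m$ the module random effect, and the MM weights are taken here as equal shares over attended modules. The MM DDP construction then replaces the common prior on $\delta_i$ with a collection of random distributions indexed by the attendance set $\mathcal{M}_{it}$, inducing dependence between client and module effects.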
Loss Function Based Ranking in Two-Stage, Hierarchical Models
Several authors have studied the performance of optimal, squared-error loss (SEL) estimated ranks. Though these are effective, in many applications interest focuses on identifying the relatively good (e.g., in the upper 10%) or relatively poor performers. We construct loss functions that address this goal and evaluate candidate rank estimates, some of which optimize specific loss functions. We study a fully parametric hierarchical model with a Gaussian prior and Gaussian sampling distributions, evaluating performance under several loss functions. Results show that though SEL-optimal ranks and percentiles do not specifically focus on classifying with respect to a percentile cut point, they perform very well over a broad range of loss functions. We compare inferences produced by the candidate estimates using data from the Community Tracking Study.
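For concreteness, a minimal simulation sketch (ours, not the paper's code) of the two estimators discussed above under a Gaussian-Gaussian model with known variances: SEL-optimal ranks are obtained by ranking posterior mean ranks, and percentile classification flags the units most likely to fall above the cutpoint.

import numpy as np

rng = np.random.default_rng(0)
K, S = 50, 4000                            # units, posterior draws
tau2, sigma2 = 1.0, 0.5                    # prior and sampling variances
theta = rng.normal(0.0, np.sqrt(tau2), K)  # true unit effects
y = rng.normal(theta, np.sqrt(sigma2))     # one observation per unit

# Conjugate posterior for each theta_i: N(B*y_i, B*sigma2), B = tau2/(tau2+sigma2)
B = tau2 / (tau2 + sigma2)
draws = rng.normal(B * y, np.sqrt(B * sigma2), size=(S, K))

# Rank units within each posterior draw (1 = smallest)
rank_draws = draws.argsort(axis=1).argsort(axis=1) + 1

# SEL-optimal ranks: rank the posterior mean of each unit's rank
sel_ranks = rank_draws.mean(axis=0).argsort().argsort() + 1

# Percentile classification: flag the 10% of units most likely to lie
# above the 90th percentile cutpoint
p_top = (rank_draws > 0.9 * K).mean(axis=0)
flagged = np.argsort(p_top)[-K // 10:]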
Ranking USRDS Provider-Specific SMRs from 1998-2001
Provider profiling (ranking, league tables) is prevalent in health services research. Similarly, comparing educational institutions and identifying differentially expressed genes depend on ranking. Effective ranking procedures must be structured by a hierarchical (Bayesian) model and guided by a ranking-specific loss function; however, even optimal methods can perform poorly, and estimates must be accompanied by uncertainty assessments. We use the 1998-2001 Standardized Mortality Ratio (SMR) data from the United States Renal Data System (USRDS) as a platform to identify issues and approaches. Our analyses extend Liu et al. (2004) by combining evidence over multiple years via an AR(1) model; by considering estimates that minimize errors in classifying providers above or below a percentile cutpoint in addition to those that minimize rank-based, squared-error loss; by considering ranks based on the posterior probability that a provider's SMR exceeds a threshold; by comparing these ranks to those produced by ranking MLEs and ranking P-values associated with testing whether a provider's SMR = 1; by comparing results for a parametric and a non-parametric prior; and by reporting on a suite of uncertainty measures.
Results show that MLE-based and hypothesis-test-based ranks are far from optimal; that uncertainty measures effectively calibrate performance; that in the USRDS context ranks based on single-year data perform poorly, but that performance improves substantially when using the AR(1) model; and that ranks based on posterior probabilities of exceeding a properly chosen SMR threshold are essentially identical to those produced by minimizing classification loss. These findings highlight areas requiring additional research and the need to educate stakeholders on the uses and abuses of ranks; on their proper role in science and policy; and on the absolute necessity of accompanying estimated ranks with uncertainty assessments and ensuring that these uncertainties influence decisions.
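A minimal sketch (ours, not the paper's code) of the exceedance-probability ranking described above: given posterior draws of each provider's SMR, providers are ranked by P(SMR > c) for a chosen threshold c. The draws below are fabricated for illustration only.

import numpy as np

rng = np.random.default_rng(1)
S, K, c = 4000, 30, 1.25                   # draws, providers, SMR threshold
# Illustrative posterior draws of provider SMRs (stand-in for MCMC output)
smr_draws = rng.lognormal(mean=rng.normal(0.0, 0.2, K), sigma=0.1, size=(S, K))

p_exceed = (smr_draws > c).mean(axis=0)          # P(SMR_i > c | data)
exceed_ranks = p_exceed.argsort().argsort() + 1  # 1 = least likely to exceed c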
The Health Effects of Medicare for the Near-Elderly Uninsured
We study how the trajectory of health for the near-elderly uninsured changes upon enrolling in Medicare at the age of 65. We find that Medicare increases the probability of the previously uninsured having excellent or very good health, decreases their probability of being in good health, and has no discernible effect at lower health levels. Surprisingly, we find that Medicare had a similar effect on health for the previously insured. This suggests that Medicare helps the relatively healthy 65-year-olds, but does little for those who are already in declining health once they reach the age of 65. The improvements in health for the previously uninsured and insured were not statistically different from each other. The stability of insurance coverage afforded by Medicare may be the source of the health benefit, suggesting that universal coverage at other ages may have similar health effects.
Why the DEA STRIDE Data are Still Useful for Understanding Drug Markets
In 2001, use of the STRIDE database for the purposes of analyzing drug prices and the impact of public policies on drug markets came under serious attack by the National Research Council (Manski et al., 2001; Horowitz, 2001). While some of the criticisms raised by the committee were valid, many of the concerns can be addressed through more careful use of the data. In this paper, we first disprove Horowitz's main argument that prices differ across observations collected by different agencies within a city. We then revisit other issues raised by the NRC and discuss how certain limitations can be overcome through the adoption of random coefficient models of drug prices and by paying serious attention to drug form and distribution levels. Although the sample remains a convenience sample, we demonstrate that city-specific price and purity series constructed with careful attention to the data and to existing knowledge of drug markets (e.g., the expected purity hypothesis) are internally consistent and can be externally validated. The findings from this study have important implications regarding the utility of these data and the appropriateness of using them in economic analyses of supply, demand, and harms.
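A minimal sketch (ours, not the paper's specification) of a random coefficient model for drug prices: log price regressed on log purchase size, with a city-specific random intercept and slope. Variable names and the synthetic data are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, cities = 600, list("ABCDEF")
df = pd.DataFrame({
    "city": rng.choice(cities, n),
    "log_weight": rng.normal(1.0, 0.8, n),  # log grams purchased
})
# Synthetic prices with quantity discounts that vary by city
slope = {c: -0.8 + rng.normal(0.0, 0.1) for c in cities}
df["log_price"] = (3.0
                   + df.apply(lambda r: slope[r.city] * r.log_weight, axis=1)
                   + rng.normal(0.0, 0.3, n))

# Random intercept and random log_weight slope by city
model = smf.mixedlm("log_price ~ log_weight", df,
                    groups=df["city"], re_formula="~log_weight")
print(model.fit().summary())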
Towards the clinical implementation of pharmacogenetics in bipolar disorder.
Background: Bipolar disorder (BD) is a psychiatric illness defined by pathological alternations between the mood states of mania and depression, causing disability, imposing healthcare costs, and elevating the risk of suicide. Although effective treatments for BD exist, variability in outcomes leads to a large number of treatment failures, typically followed by a trial-and-error process of medication switches that can take years. Pharmacogenetic testing (PGT), by tailoring drug choice to an individual, may personalize and expedite treatment so as to more rapidly identify medications well suited to individual BD patients. Discussion: A number of associations have been made in BD between medication response phenotypes and specific genetic markers. To date, however, clinical adoption of PGT has been limited, with critics often citing questions that must be answered before it can be widely utilized. These include: What are the requirements of supporting evidence? How large is a clinically relevant effect? What degree of specificity and sensitivity are required? Does a given marker influence decision making and have clinical utility? In many cases, the answers to these questions remain unknown, and ultimately, the question of whether PGT is valid and useful must be determined empirically. Towards this aim, we have reviewed the literature and selected drug-genotype associations with the strongest evidence for utility in BD. Summary: Based upon these findings, we propose a preliminary panel for use in PGT, and a method by which the results of a PGT panel can be integrated for clinical interpretation. Finally, we argue that, based on the sufficiency of accumulated evidence, PGT implementation studies are now warranted. We propose and discuss the design for a randomized clinical trial to test the use of PGT in the treatment of BD.
Continuous quality improvement (CQI) in addiction treatment settings: design and intervention protocol of a group randomized pilot study
BACKGROUND: Few studies have designed and tested the use of continuous quality improvement approaches in community-based substance use treatment settings. Little is known about the feasibility, costs, efficacy, and sustainment of such approaches in these settings. METHODS/DESIGN: The study uses a group-randomized trial with a modified stepped wedge design. In the first phase of the study, eight programs, stratified by modality (residential, outpatient), are being randomly assigned to the intervention or control condition. In the second phase, the initially assigned control programs are receiving the intervention to gain additional information about feasibility, while sustainment is being studied among the programs initially assigned to the intervention. DISCUSSION: By using this design in a pilot study, we help inform the field about the feasibility, costs, efficacy, and sustainment of the intervention. Determining costs and sustainment at the pilot stage provides value for designing future studies and implementation strategies, with the goal of reducing the time between intervention development and translation to real-world practice settings.
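A minimal sketch (ours, not the study's protocol code) of the stratified assignment described above: within each modality stratum, half of the programs are randomized to start the intervention in phase one, and the rest serve as initial controls that cross over in phase two. Program labels are hypothetical.

import random

programs = {
    "residential": ["R1", "R2", "R3", "R4"],
    "outpatient":  ["O1", "O2", "O3", "O4"],
}

random.seed(42)
assignment = {}
for modality, sites in programs.items():
    shuffled = sites[:]               # copy so the original list is untouched
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    for site in shuffled[:half]:
        assignment[site] = "intervention (phase 1)"
    for site in shuffled[half:]:
        assignment[site] = "control (crosses over in phase 2)"

for site, arm in sorted(assignment.items()):
    print(site, "->", arm)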
