151 research outputs found
Parents' Reactions to the Sexual Abuse of Their Children
The present study examined 100 parents' predictions of their reactions to child sexual abuse. Subjects comprised two groups: parents with a self-reported history of sexual abuse and parents with no reported history. Each was assigned to one of three experimental conditions and asked to consider a hypothetical case in which their own child is abused by (1) an adult relative, (2) an adult non-relative, or (3) a spouse or partner (i.e., a case of incest). The study thus examined parents' reactions, how the parents' relationship to the reported perpetrator affects those reactions, and how parents' prior history of sexual abuse might interact with this variable.
Results replicated the rank-order percentages of parents' reactions in the hypothetical condition of Finkelhor's 1981 Boston study but contradicted the findings of Russell's 1986 study regarding parents' supportiveness and perpetrator relationship. Parents in the partner/spouse and stranger conditions endorsed guilt reactions more frequently than parents in the relative condition. Parents who would tell the alleged offender to leave the household had higher mean parenting scores on all four constructs of the Adult-Adolescent Parenting Inventory (AAPI). There were no differences between parents with and without a self-reported history of child sexual abuse in any of the analyses. Finally, parents chose the police most often and the school least often as agencies to which they would report child sexual abuse.
Improving Methods for Propensity Score Analysis with Mis-Measured Variables by Incorporating Background Variables with Moderated Nonlinear Factor Analysis
There has been some research on the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mis-measured covariate using a latent variable model and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study designed to examine the performance of this method in the context of differential measurement error and propose a method based on moderated nonlinear factor analysis (MNLFA) to address known problems with standard methods. Although MNLFA somewhat improves effect estimation in the presence of differential measurement error relative to standard factor analysis methods, the greatest gains come from the nonstandard practice of including the treatment variable as an indicator in the scoring models. More research is required on the effects of model misspecification on the performance of these methods for causal inference applications.
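A minimal sketch of the scoring-then-weighting pipeline this abstract describes, substituting a standard one-factor model (fit with lavaan) for the full MNLFA scoring model; the data frame d, treatment A, outcome Y, and indicators x1-x3 are hypothetical:

library(lavaan)
library(WeightIt)

# Score the mis-measured confounder from its noisy indicators x1-x3
# (a plain CFA here; MNLFA would additionally let loadings and
# intercepts vary with background variables)
fit <- cfa('eta =~ x1 + x2 + x3', data = d)
d$eta_hat <- lavPredict(fit)[, "eta"]

# Use the factor scores in place of the mis-measured covariate when
# estimating propensity score weights, then estimate the treatment effect
w <- weightit(A ~ eta_hat, data = d, method = "glm", estimand = "ATE")
summary(lm(Y ~ A, data = d, weights = w$weights))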
Choosing the Estimand When Matching or Weighting in Observational Studies
Matching and weighting methods for observational studies require the choice of an estimand, the causal effect with reference to a specific target population. Commonly used estimands include the average treatment effect in the treated (ATT), the average treatment effect in the untreated (ATU), the average treatment effect in the population (ATE), and the average treatment effect in the overlap (i.e., equipoise population; ATO). Each estimand has its own assumptions, interpretation, and statistical methods that can be used to estimate it. This article provides guidance on selecting and interpreting an estimand to help medical researchers correctly implement statistical methods used to estimate causal effects in observational studies and to help audiences correctly interpret the results and limitations of these studies. The interpretations of the estimands resulting from regression and instrumental variable analyses are also discussed. Choosing an estimand carefully is essential for making valid inferences from the analysis of observational data and ensuring results are replicable and useful for practitioners.
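As a brief illustration of how this choice enters an analysis, the estimand is typically an explicit argument when estimating balancing weights, for example with the WeightIt package (the treatment, covariates, and data here are hypothetical):

library(WeightIt)

# Same propensity score model, three different target populations
w_att <- weightit(treat ~ age + sex + comorbidity, data = d,
                  method = "glm", estimand = "ATT")  # effect in the treated
w_ate <- weightit(treat ~ age + sex + comorbidity, data = d,
                  method = "glm", estimand = "ATE")  # effect in the population
w_ato <- weightit(treat ~ age + sex + comorbidity, data = d,
                  method = "glm", estimand = "ATO")  # effect in the overlap

# The same outcome model answers a different causal question
# under each set of weights
lm(outcome ~ treat, data = d, weights = w_ato$weights)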
How to Interpret Statistical Models Using marginaleffects for R and Python
The parameters of a statistical model can sometimes be difficult to interpret substantively, especially when that model includes nonlinear components, interactions, or transformations. Analysts who fit such complex models often seek to transform raw parameter estimates into quantities that are easier for domain experts and stakeholders to understand. This article presents a simple conceptual framework to describe a vast array of such quantities of interest, which are reported under imprecise and inconsistent terminology across disciplines: predictions, marginal predictions, marginal means, marginal effects, conditional effects, slopes, contrasts, risk ratios, etc. We introduce marginaleffects, a package for R and Python which offers a simple and powerful interface to compute all of those quantities and to conduct (non-)linear hypothesis and equivalence tests on them. marginaleffects is lightweight and extensible; it works well in combination with other R and Python packages and supports over 100 classes of models, including linear, generalized linear, generalized additive, mixed effects, Bayesian, and several machine learning models.
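A short example of the workflow the article describes, using a built-in R dataset purely for illustration:

library(marginaleffects)

# A model whose coefficients are hard to interpret directly:
# a logistic regression with an interaction
m <- glm(am ~ hp * wt, data = mtcars, family = binomial)

avg_predictions(m)                    # average predicted probability
avg_slopes(m)                         # average marginal effects (slopes)
avg_comparisons(m, variables = "hp")  # average contrast for a change in hp
hypotheses(m, "hp = 0")               # hypothesis test on a coefficient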
lmw: Linear Model Weights for Causal Inference
The linear regression model is widely used in the biomedical and social sciences as well as in policy and business research to adjust for covariates and estimate the average effects of treatments. Behind every causal inference endeavor there is a hypothetical randomized experiment. However, in routine regression analyses in observational studies, it is unclear how well the adjustments made by regression approximate key features of randomized experiments, such as covariate balance, study representativeness, sample boundedness, and unweighted sampling. In this paper, we provide software to empirically address this question. We introduce the lmw package for R to compute the implied linear model weights and perform diagnostics for their evaluation. The weights are obtained as part of the design stage of the study; that is, without using outcome information. The implementation is general and applicable, for instance, in settings with instrumental variables and multi-valued treatments; in essence, in any situation where the linear model is the vehicle for adjustment and estimation of average treatment effects with discrete-valued interventions.
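A minimal sketch of this diagnostic workflow, assuming lmw's documented interface and using MatchIt's lalonde data as a stand-in for any observational study:

library(lmw)
data("lalonde", package = "MatchIt")

# Implied weights of the usual single-equation regression adjustment
# (URI), targeting the ATT; no outcome information is used at this stage
w <- lmw(~ treat + age + educ + race + married + re74, data = lalonde,
         estimand = "ATT", method = "URI", treat = "treat")
summary(w)  # covariate balance, effective sample size, extreme weights

# Only after the design is judged acceptable is the outcome brought in
summary(lmw_est(w, outcome = "re78", data = lalonde))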
MatchThem: Matching and Weighting after Multiple Imputation
Balancing the distributions of the confounders across the exposure levels in an observational study through matching or weighting is an accepted method to control for confounding due to these variables when estimating the association between an exposure and outcome and to reduce the degree of dependence on certain modeling assumptions. Despite their increasing popularity in practice, these procedures cannot be immediately applied to datasets with missing values. Multiple imputation of the missing data is a popular approach to account for missing values while preserving the number of units in the dataset and accounting for the uncertainty in the missing values. However, to the best of our knowledge, there is no comprehensive matching and weighting software that can be easily implemented with multiply imputed datasets. In this paper, we review this problem and suggest a framework to map out the matching and weighting of multiply imputed datasets in 5 actions, as well as the best practices to assess balance in these datasets after matching and weighting. We also illustrate these approaches using a companion package for R, MatchThem.
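A hedged sketch of the intended workflow, following the package's documented example with its bundled osteoarthritis data: impute first, then match within each imputed dataset, analyze, and pool:

library(mice)
library(MatchThem)
data("osteoarthritis", package = "MatchThem")

imp <- mice(osteoarthritis, m = 5, printFlag = FALSE)  # multiple imputation

# Match within each imputed dataset (the "within" approach)
mt <- matchthem(OSP ~ AGE + SEX + BMI + RACE + SMK, datasets = imp,
                approach = "within", method = "nearest")

fits <- with(mt, glm(KOA ~ OSP, family = binomial))  # analyze each dataset
summary(pool(fits))                                  # pool with Rubin's rules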
Estimating Balancing Weights for Continuous Treatments Using Constrained Optimization
In the absence of randomization, common causes of a treatment and an outcome create an association between them that does not correspond to the causal effect of the treatment. When a sufficient set of these confounding variables has been measured, statistical methods such as regression and propensity score weighting can be used to adjust for the common causes and arrive at an unbiased estimate of the causal effect. For continuous treatments, current weighting methods suffer from imprecision, bias, and reliance on correct model specification. Here, I derived the bias of the unadjusted estimate of a linear average dose-response function and developed optweights, a convex optimization-based weight estimation method that targets each component of the bias with constraints. In two simulation studies, I evaluated the performance of optweights, comparing it to regression and other weighting methods. In a common data setting, with many more units than covariates, optweights performed better than the other weighting methods in most scenarios and performed comparably to regression. In scenarios where the number of covariates approached the number of units, optweights could outperform regression in terms of mean squared error when its constraints were relaxed to manage the bias-variance tradeoff. The results indicate that optweights should be considered a strong alternative to regression and other weighting methods for estimating the effects of continuous treatments, though further research is required on how to optimize its performance.
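The CRAN package optweight implements constrained-optimization balancing weights of this general kind; whether it matches the dissertation's optweights exactly is an assumption, and the dataset d, treatment dose, covariates x1-x3, and outcome y below are hypothetical:

library(optweight)

# Continuous treatment `dose`: constrain the weighted treatment-covariate
# correlations to be near zero (tols sets the tolerance per covariate)
ow <- optweight(dose ~ x1 + x2 + x3, data = d, tols = 0.01)
summary(ow)  # effective sample size and distribution of the weights

# Weighted estimate of the linear average dose-response function
lm(y ~ dose, data = d, weights = ow$weights)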
When does criminal victimization undermine generalized trust? A weighted panel analysis of the effects of crime type, frequency, and variety
Scholars from various fields have suggested that criminal victimization can shatter generalized trust. Whereas small average effects in longitudinal studies provide only weak support for this claim, victimization effects may be stronger for specific crime types and multiple victimization. To test this assumption, we estimated various victimization effects by combining energy weighting with lagged-dependent-variable models, using data from two-wave panel surveys conducted in 2014/2015 (cohort 1; N = 3401) and 2020/2021 (cohort 2; N = 2932) in two German cities. We found only weak evidence that the trust-undermining effects of victimization were more pronounced for severe crime types or multiple victimization: effects were stronger only for violent crimes and some forms of multiple victimization in 2014/2015, but not in 2020/2021. Moreover, our weighting procedure implies that our (and probably others') findings for more intense victimization conditions must be viewed with caution, as they suffer from lower internal validity.
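A rough sketch of the two-step logic with hypothetical variable names (energy balancing weights are available in R via WeightIt's method = "energy"; the paper's exact specification is not reproduced here):

library(WeightIt)

# Energy balancing weights for victimization between waves, conditioning
# on wave-1 covariates including prior trust
w <- weightit(victimized ~ age + gender + education + trust_t1,
              data = panel, method = "energy", estimand = "ATT")

# Lagged-dependent-variable model for wave-2 trust in the weighted sample
fit <- lm(trust_t2 ~ victimized + trust_t1, data = panel, weights = w$weights)
summary(fit)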
Adverse childhood experiences (ACEs) and trauma in young children: What we know and what we can do
This archival publication may not reflect current scientific knowledge or recommendations. Current information is available from University of Minnesota Extension: https://www.extension.umn.edu
