Independent assessment of improvements in dementia care and support since 2009
The Department of Health commissioned a team from the London School of Economics and Political Science and the London School of Hygiene and Tropical Medicine to assess progress in dementia care since 2009. We were asked to focus particularly on three areas: improvements in diagnosis and post-diagnostic support, changes in public attitudes, and developments in research. Two major policy documents provide the context: the National Dementia Strategy 2009, which has now concluded, and the Prime Minister’s Challenge on Dementia 2012, which superseded it.
The Microsegmentation of the Autism Spectrum: economic and research implications for Scotland
Economic research on autism and its implications for Scotland, including how the economic cost of autism can inform strategy and planning.
How do applied researchers use the Causal Forest? A methodological review of a method
This paper conducts a methodological review of papers using the causal forest, a machine learning method for flexibly estimating heterogeneous treatment effects. It examines 133 peer-reviewed papers. It shows that the emerging best practice relies heavily on the approach and tools created by the original authors of the causal forest, such as their grf package and the approaches given in their examples. Generally, researchers use the causal forest on a relatively low-dimensional dataset, relying on randomisation or observed controls to identify effects. There are several common ways to then communicate results: by mapping out the univariate distribution of individual-level treatment effect estimates, displaying variable importance results for the forest, and graphing the distribution of treatment effects across covariates that are important either for theoretical reasons or because they have high variable importance. Some deviations from this common practice are interesting and deserve further development and use. Others are unnecessary or even harmful.
Comment: 20 pages, 3 figures
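The reporting workflow the review describes, estimating individual-level effects and then mapping out the distribution of those estimates across covariates, can be sketched in Python. This is a toy illustration on simulated data: a simple within-bin difference in means stands in for the causal forest (the papers reviewed typically use the grf package in R), so all names and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated randomised experiment with a heterogeneous effect:
# tau(x) = 1.0 for x >= 0.5, else 0.0
x = rng.uniform(0, 1, n)           # single covariate
t = rng.integers(0, 2, n)          # randomised binary treatment
tau = np.where(x >= 0.5, 1.0, 0.0)
y = 2.0 + tau * t + rng.normal(0, 1, n)

# Stand-in for a causal forest: difference in means within covariate bins
bins = np.digitize(x, np.linspace(0, 1, 11)[1:-1])  # 10 equal-width bins
cate = np.empty(10)
for b in range(10):
    mask = bins == b
    cate[b] = y[mask & (t == 1)].mean() - y[mask & (t == 0)].mean()

# "Map out" the distribution of effect estimates across the covariate
for b, est in enumerate(cate):
    print(f"x in [{b/10:.1f}, {(b+1)/10:.1f}): CATE estimate {est:+.2f}")
```

The printed profile recovers the step in the true effect at x = 0.5, which is the kind of covariate-by-effect plot the reviewed papers commonly report.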
Policy learning for many outcomes of interest: Combining optimal policy trees with multi-objective Bayesian optimisation
Methods for learning optimal policies use causal machine learning models to
create human-interpretable rules for making choices around the allocation of
different policy interventions. However, in realistic policy-making contexts,
decision-makers often care about trade-offs between outcomes, not just
single-mindedly maximising utility for one outcome. This paper proposes an
approach termed Multi-Objective Policy Learning (MOPoL) which combines optimal
decision trees for policy learning with a multi-objective Bayesian optimisation
approach to explore the trade-off between multiple outcomes. It does this by
building a Pareto frontier of non-dominated models for different hyperparameter
settings. The key here is that a low-cost surrogate function can be an accurate
proxy for the very computationally costly optimal tree in terms of expected
regret. This surrogate can be fit many times with different hyperparameter
values to proxy the performance of the optimal model. The method is applied to
a real-world case-study of conditional cash transfers in Morocco where hybrid
(partially optimal, partially greedy) policy trees provide good performance as
a surrogate for optimal trees while being computationally cheap enough to
feasibly fit a Pareto frontier.Comment: 15 pages, 6 figure
Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making
Causal machine learning methods which flexibly generate heterogeneous
treatment effect estimates could be very useful tools for governments trying to
make and implement policy. However, as the critical artificial intelligence
literature has shown, governments must be very careful of unintended
consequences when using machine learning models. One way to try to protect
against unintended bad outcomes is to use AI Fairness methods, which seek to
create machine learning models where sensitive variables like race or gender do
not influence outcomes. In this paper we argue that standard AI Fairness
approaches developed for predictive machine learning are not suitable for all
causal machine learning applications because causal machine learning generally
(at least so far) uses modelling to inform a human who is the ultimate
decision-maker while AI Fairness approaches assume a model that is making
decisions directly. We define these scenarios as indirect and direct
decision-making respectively and suggest that policy-making is best seen as a
joint decision where the causal machine learning model usually only has
indirect power. We lay out a definition of fairness for this scenario: a model
that provides the information a decision-maker needs to accurately make a value
judgement about just policy outcomes - and argue that the complexity of causal
machine learning models can make this difficult to achieve. The solution here
is not traditional AI Fairness adjustments, but careful modelling and awareness
of some of the decision-making biases that these methods might encourage which
we describe.
Comment: 13 pages, 1 figure
Productivity dispersion and sectoral labour shares in Europe. ESRI Working Paper 659 May 2020.
The stability of the labour share of income is a fundamental feature of macroeconomic models, with broad implications for the shape of the production function, inequality, and macroeconomic dynamics. Empirically, however, this share has been slowly declining in many countries for several decades, and its causes are the subject of much debate. This paper analyses the drivers of labour share developments in Europe at a sectoral level. We begin with a simple shift-share analysis, which demonstrates that the decline across countries has been driven primarily by changes within industries. We then use aggregated microdata from CompNet to analyse drivers of sector-level labour shares and to decompose their effects into shifts in the sector average or reallocation of resources between firms. Our main findings are that the advance of globalisation and the widening productivity gap between “the best and the rest” have negative implications for the labour share. We also find that most of the changes are due to reallocation within sectors, providing support for the “superstar firms” hypothesis. The finding that globalisation has had a negative impact on the labour share is relevant for policy in the context of the current backlash against globalisation, and reinforces the need to ensure that the benefits of globalisation and productivity are passed on to workers.
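The shift-share logic used in the paper decomposes the change in an aggregate labour share into a within-industry component (industry shares changing, holding weights fixed) and a between-industry component (activity reallocating toward low- or high-share industries). A minimal sketch with purely illustrative numbers, not the paper's data:

```python
import numpy as np

# Illustrative two-period data for three industries (hypothetical):
# w = industry weights in total value added, s = industry labour shares
w0 = np.array([0.5, 0.3, 0.2]); s0 = np.array([0.70, 0.60, 0.40])
w1 = np.array([0.4, 0.3, 0.3]); s1 = np.array([0.65, 0.58, 0.40])

agg0 = (w0 * s0).sum()   # aggregate labour share, period 0
agg1 = (w1 * s1).sum()   # aggregate labour share, period 1

# Shift-share decomposition with period-average weights:
# change = within-industry component + between-industry component
w_bar = (w0 + w1) / 2
s_bar = (s0 + s1) / 2
within = (w_bar * (s1 - s0)).sum()
between = (s_bar * (w1 - w0)).sum()

print(f"aggregate change: {agg1 - agg0:+.4f}")
print(f"  within:  {within:+.4f}")
print(f"  between: {between:+.4f}")
```

With period-average weights the two components sum exactly to the aggregate change; a within-dominated decomposition is what the paper reports for the European decline.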
The case for investment in technology to manage the global costs of dementia
Worldwide growth in the number of people living with dementia will continue over the coming decades and is already putting pressure on health and care systems, both formal and informal, and on costs, both public and private. One response could be to make greater use of digital and other technologies to try to improve outcomes and contain costs. We were commissioned to examine the economic case for accelerated investment in technology that could, over time, deliver savings on the overall cost of care for people with dementia. Our short study included a rapid review of international evidence on the effectiveness and cost-effectiveness of technology, consideration of the conditions for its successful adoption, and liaison with people from industry, government, academia, the third sector and other sectors, as well as people with dementia and carers. We used modelling analyses to examine the economic case, using the UK as the context. We then discussed the roles that state investment or action could play, perhaps to accelerate the use of technology so as to deliver both wellbeing and economic benefits.
Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability
Causal machine learning tools are beginning to see use in real-world policy
evaluation tasks to flexibly estimate treatment effects. One issue with these
methods is that the machine learning models used are generally black boxes,
i.e., there is no globally interpretable way to understand how a model makes
estimates. This is a clear problem in policy evaluation applications,
particularly in government, because it is difficult to understand whether such
models are functioning in ways that are fair, based on the correct
interpretation of evidence, and transparent enough to allow for accountability
if things go wrong. However, there has been little discussion of transparency
problems in the causal machine learning literature and how these might be
overcome. This paper explores why transparency issues are a problem for causal
machine learning in public policy evaluation applications and considers ways
these problems might be addressed through explainable AI tools and by
simplifying models in line with interpretable AI principles. It then applies
these ideas to a case-study using a causal forest model to estimate conditional
average treatment effects for a hypothetical change in the school leaving age
in Australia. It shows that existing tools for understanding black-box
predictive models are poorly suited to causal machine learning and that
simplifying the model to make it interpretable leads to an unacceptable
increase in error (in this application). It concludes that new tools are needed
to properly understand causal machine learning models and the algorithms that
fit them.
Comment: 31 pages, 8 figures
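The trade-off the case study illustrates, collapsing heterogeneous effect estimates into a simpler interpretable summary at the cost of accuracy, can be sketched with a toy numpy example. The synthetic "black-box" CATE estimates below are hypothetical, not the paper's Australian schooling data; the sketch compares two interpretable simplifications against the full unit-level estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Synthetic unit-level CATE estimates that vary smoothly with age
age = rng.uniform(15, 20, n)
cate = 0.1 * (age - 15) + rng.normal(0, 0.02, n)  # "black-box" estimates

def mse(pred):
    """Error of a simplified summary relative to the full estimates."""
    return float(np.mean((cate - pred) ** 2))

# Interpretable simplification 1: a single average treatment effect
err_const = mse(cate.mean())

# Interpretable simplification 2: one split (age above/below 17.5)
split = age >= 17.5
pred = np.where(split, cate[split].mean(), cate[~split].mean())
err_split = mse(pred)

print(f"constant-effect error: {err_const:.4f}")
print(f"single-split error:    {err_split:.4f}")
```

The split rule is easier to communicate than the full model but still discards heterogeneity; whether the remaining error is acceptable is exactly the judgement the paper argues decision-makers must make.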
