Moral judgements as organizational accomplishments: insights from a focused ethnography in the English healthcare sector
In this chapter, we aim to deepen our understanding of judgments in organizations. Whilst previous studies have underscored the situated nature of individual judgments exercised by, e.g., leaders or managers, our research focuses on how judgments emerge as organizational responses to recurrently emerging moral dilemmas. Accordingly, we study a setting—decision practices in the English healthcare sector—where moral puzzles (to fund or not to fund healthcare for apparently atypical patients) demand ongoing attention and systemic handling. We conducted (and present findings of) a focused ethnography of the ways expert decision-making panels in three health authorities confronted, engaged with, and coped with morally perplexing situations. The moral perplexity there lay in that panels were called upon to prudently and demonstrably determine whether or not a particular patient deserved exceptional investment, and to do so by taking into consideration the healthcare needs and rights of all patients under the same health system. By adopting a practice perspective (Schatzki, 2002), we develop an analytical account of the effortful accomplishments (sociomaterial activities or intertwined “projects” in practice theory terms) which enabled the recurrent collective exercise of judgments in accordance with publicly recognizable moral expectations—namely notions of fairness. Our main contribution lies in conceptualizing the work that renders moral judgments, as organized pursuits, possible and meaningful, and hence in complementing current “ecological understandings” of individual judgment-making in organizations.
Evidence-based commissioning in the English NHS: who uses which sources of evidence? A survey 2010/2011
Objectives:
To investigate types of evidence used by healthcare commissioners when making decisions and whether decisions were influenced by commissioners’ experience, personal characteristics or role at work.
Design:
Cross-sectional survey of 345 National Health Service (NHS) staff members.
Setting:
The study was conducted across 11 English Primary Care Trusts between 2010 and 2011.
Participants:
A total of 440 staff involved in commissioning decisions and employed at NHS band 7 or above were invited to participate in the study. Of those, 345 (78%) completed all or part of the survey.
Main outcome measures:
Participants were asked to rate how important different sources of evidence (empirical or practical) were in a recent decision that had been made. Backwards stepwise logistic regression analyses were undertaken to assess the contributions of age, gender and professional background, as well as the years of experience in NHS commissioning, pay grade and work role.
Results:
The extent to which empirical evidence was used for commissioning decisions in the NHS varied according to professional background. Only 50% of respondents stated that clinical guidelines and cost-effectiveness evidence were important for healthcare decisions. Respondents were more likely to report use of empirical evidence if they worked in Public Health in comparison to other departments (p<0.0005; commissioning and contracts OR 0.32, 95% CI 0.18 to 0.57; finance OR 0.19, 95% CI 0.05 to 0.78; other departments OR 0.35, 95% CI 0.17 to 0.71) or if they were female (OR 1.8, 95% CI 1.01 to 3.1) rather than male. Respondents were more likely to report use of practical evidence if they were more senior within the organisation (pay grade 8b or higher OR 2.7, 95% CI 1.4 to 5.3, p=0.004 in comparison to lower pay grades).
Conclusions:
Those trained in Public Health appeared more likely to use external empirical evidence, while those at higher pay scales were more likely to use practical evidence when making commissioning decisions. Clearly, National Institute for Clinical Excellence (NICE) guidance and government publications (e.g., National Service Frameworks) are important for decision-making, but practical sources of evidence such as local intelligence, benchmarking data and expert advice are also influential.
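The odds ratios and confidence intervals reported above come from backwards stepwise logistic regression. As a purely illustrative sketch (the survey data are not reproduced here, so the dataset, variable names and effect sizes below are all invented), a statsmodels version of that kind of analysis might look as follows; backwards elimination would then repeatedly refit the model after dropping the least significant predictor.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for the survey: department, gender and
# seniority (pay grade 8b or higher) as candidate predictors of whether a
# respondent reported using empirical evidence.
rng = np.random.default_rng(42)
n = 345
df = pd.DataFrame({
    "dept": rng.choice(["public_health", "commissioning", "finance", "other"], n),
    "female": rng.integers(0, 2, n),
    "senior": rng.integers(0, 2, n),
})
linpred = 0.8 * (df["dept"] == "public_health") + 0.6 * df["female"] - 0.5
df["uses_empirical"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

# Logistic regression with Public Health as the reference department;
# odds ratios are the exponentiated coefficients.
model = smf.logit(
    "uses_empirical ~ C(dept, Treatment('public_health')) + female + senior",
    data=df,
).fit(disp=False)
results = pd.concat(
    [np.exp(model.params).rename("OR"),
     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(results)
# Backwards stepwise selection would now drop the least significant term,
# refit, and repeat until all remaining predictors meet the retention criterion.
```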
Improving the capabilities of NHS organisations to use evidence: a qualitative study of redesign projects in Clinical Commissioning Groups
Background
Innovation driven by authoritative evidence is critical to the survival of England’s NHS. Clinical Commissioning Groups (CCGs) are central in NHS efforts to do more with less. Although decisions should be based on the ‘best available evidence’, this is often problematic, with frequent mismatches between the evidence ‘pushed’ by producers and that used in management work. Our concern, then, is to understand practices and conditions (which we term ‘capabilities’) that enable evidence use in commissioning work. We consider how research gets into CCGs (‘push’), how CCGs use evidence (‘pull’) and how this can be supported (toolkit development). We aim to contribute to evidence-based NHS innovation, and, more generally, to improved health-care service provision.
Method
Supported by the National Institute for Health Research (NIHR), we conducted semistructured ethnographic interviews in eight CCGs. We also conducted observations of redesign meetings in two of the CCGs. We used inductive and deductive coding to identify evidence used and capabilities for use from the qualitative data. We then compared across cases to understand variations in outcomes as a function of capabilities. To help improvements in commissioning, we collated our findings into a toolkit for use by stakeholders. We also conducted a small-scale case study of the production of evidence-based guidance to understand evidence ‘push’.
Results
Fieldwork indicated that different types of evidence inform CCG decision-making, which we categorise as ‘universal’, ‘local’, ‘expertise-based’ and ‘trans-local’. Fieldwork also indicated that certain practices and conditions (‘capabilities’) enable evidence use, including ‘sourcing and evaluating evidence’, ‘engaging experts’, ‘effective framing’, ‘managing roles and expectations’ and ‘managing expert collaboration’. Importantly, cases in which fewer capabilities were recorded tended to report more problems, relative to cases in which needed capabilities were applied. These latter cases were more likely to use evidence effectively, achieve objectives and maintain stakeholder satisfaction. We also found that various understandings of end-users are inscribed into products by evidence producers, which seems to reflect the evolving landscape of the production of authoritative evidence.
Conclusions
This was exploratory research on evidence use capabilities in commissioning decisions. The findings suggest that commissioning stakeholders need support to identify, understand and apply evidence. Support to develop capabilities for evidence use may be one means of ensuring effective, evidence-based innovations in commissioning. Our work with evidence producers also shows variation in their perceptions of end users, which may help explain the ‘push’/’pull’ gap between research and practice. There were also some limitations to our project, including a smaller than expected sample size and a time frame that did not allow us to capture full redesign projects in all CCGs.
Future work
With these findings in mind, future work may look more closely at how information comes to be treated as evidence and at the relationships of capabilities to project outcomes. Going forward, knowledge, especially that related to generalisability, may be built through a longer time frame and the study of redesign projects in different settings.
Fair Algorithms in Organizations: A Performative-Sensemaking Model
The past few years have seen an unprecedented explosion of interest in fair machine learning algorithms. Such algorithms are increasingly being deployed to improve fairness in high-stakes decisions in organizations, such as hiring and risk assessments. Yet, despite early optimism, recent empirical studies suggest that the use of fair algorithms is highly unpredictable and may not necessarily enhance fairness. In this paper, we develop a conceptual model that seeks to unpack the dynamic sensemaking and sensegiving processes associated with the use of fair algorithms in organizations. By adopting a performative-sensemaking lens, we aim to systematically shed light on how the use of fair algorithms can produce new normative realities in organizations, i.e., new ways to perform fairness. The paper contributes to the growing literature on algorithmic fairness and practice-based studies of IS phenomena.
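For orientation only: the paper above is conceptual and does not prescribe any particular fairness metric, but one common operationalisation of what a "fair algorithm" is asked to achieve, demographic parity across a protected group, can be sketched in a few lines (all data and names below are simulated and illustrative).

```python
import numpy as np

# Simulated binary decisions (e.g. shortlisted or not) and a binary
# protected attribute; purely illustrative, not from the paper.
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)

# Demographic parity compares selection rates across groups.
rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
print(f"selection rates {rate_a:.2f} vs {rate_b:.2f}; "
      f"demographic parity gap = {abs(rate_a - rate_b):.2f}")
```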
Coproduction in commissioning decisions: is there an association with decision satisfaction for commissioners working in the NHS? A cross-sectional survey 2010/2011.
Objectives: To undertake an assessment of the association between coproduction and satisfaction with decisions made for local healthcare communities.
Design: A coproduction scale was developed and tested to measure individual National Health Service (NHS) commissioners’ satisfaction with commissioning decisions.
Setting: 11 English Primary Care Trusts in 2010–2011.
Participants: Staff employed at NHS band 7 or above involved in commissioning decisions in the NHS. 345/440 (78%) of participants completed all or part of the survey.
Main outcome measure: Reliability and validity of a coproduction scale were assessed using a correlation-based principal component analysis model with direct oblimin rotation. Multilevel modelling was used to predict decision satisfaction.
Results: The analysis revealed that coproduction consisted of three principal components: productive discussion, information and dealing with uncertainty. Higher decision satisfaction was associated with smaller decisions, more productive discussion, decisions where information was readily available to use and those where decision-making tools were more often used.
Conclusions: The research indicated that coproduction may be an important factor in satisfaction with decision-making in the commissioning of healthcare services.
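The methods described above combine a correlation-based principal component analysis (with direct oblimin rotation) and multilevel modelling. A rough, hypothetical sketch of that pipeline is given below; the item names, the three component labels, the trust grouping and all values are invented, and the oblique rotation step is only noted in a comment rather than implemented.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

# Hypothetical survey items standing in for the coproduction scale.
rng = np.random.default_rng(1)
n = 345
items = pd.DataFrame(rng.normal(size=(n, 9)),
                     columns=[f"item_{i}" for i in range(1, 10)])

# Correlation-based PCA is PCA on z-scored items; the abstract additionally
# applies a direct oblimin (oblique) rotation, which would follow this step
# (e.g. via a dedicated factor-analysis package) and is omitted here.
z = StandardScaler().fit_transform(items)
pca = PCA(n_components=3).fit(z)
scores = pd.DataFrame(
    pca.transform(z),
    columns=["productive_discussion", "information", "uncertainty"],
)

# Multilevel model: decision satisfaction predicted by the component scores,
# with a random intercept for the Primary Care Trust the respondent works in.
df = scores.copy()
df["trust"] = rng.integers(0, 11, n)          # 11 PCTs in the study
df["satisfaction"] = (0.5 * df["productive_discussion"]
                      + 0.3 * df["information"] + rng.normal(0, 1, n))
mlm = smf.mixedlm(
    "satisfaction ~ productive_discussion + information + uncertainty",
    data=df, groups=df["trust"],
).fit()
print(mlm.summary())
```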
Explaining the Distinctiveness of Coordination through an in-depth study of a Major Construction Project
Exploring how ward staff engage with the implementation of a patient safety intervention: a UK-based qualitative process evaluation
Objectives: A patient safety intervention was tested in a 33-ward randomised controlled trial. No statistically significant difference between intervention and control wards was found. We conducted a process evaluation of the trial and our aim in this paper is to understand staff engagement across the 17 intervention wards.
Design: Large qualitative process evaluation of the implementation of a patient safety intervention.
Setting and participants: National Health Service staff based on 17 acute hospital wards located at five hospital sites in the North of England.
Data: We concentrate on three sources here: (1) analysis of taped discussion between ward staff during action planning meetings; (2) facilitators’ field notes and (3) follow-up telephone interviews with staff focusing on whether action plans had been achieved. The analysis involved the use of pen portraits and adaptive theory.
Findings: First, there were palpable differences in the ways that the 17 ward teams engaged with the key components of the intervention. Five main engagement typologies were evident across the life course of the study: consistent, partial, increasing, decreasing and disengaged. Second, the intensity of support for the intervention at the level of the organisation did not predict the strength of engagement at the level of the individual ward team. Third, the standardisation of facilitative processes provided by the research team did not ensure that ward staff implemented the intervention in a standardised way.
Conclusions: A dilution of the intervention occurred during the trial because wards engaged with Patient Reporting and Action for a Safe Environment (PRASE) in divergent ways, despite the standardisation of key components. Facilitative processes were not sufficient to enable intervention wards to engage successfully with PRASE components.
POWER DYNAMICS AS EMBEDDED IN THE ENACTMENT OF TECHNOLOGICAL CHANGE PRACTICES
Intelligent technologies require special attention due to their increasing presence and performative influence in both everyday and organisational life. We aim to develop greater insights into how power relations shape the development and use of intelligent technologies. Combined with a ‘performative’ process view, our research follows a material-discursive view of power, drawing on Michel Foucault’s work to emphasise the positive effects of power. We show how technological change initiatives are shaped through complex mechanisms by which discourses constrain as much as enable what actors can say and do. We highlight the push-pull dynamic which lies in iterative and recursive acts of power and resistance involving a range of actors who collectively, albeit inadvertently, change the meanings technological change outcomes perform. Finally, we show that technological initiatives that might initially seem to fail can ‘take off’ through persistent and positive power effects.
WHAT IS THE BETTER NUMBER? INTERPRETING EVALUATIVE INTELLIGENT TECHNOLOGIES IMPLEMENTATION FROM A POWER DYNAMICS PERSPECTIVE
Intelligent technologies are increasingly used to evaluate and monitor performance through the implementation of specific metrics such as Key Performance Indicators (KPIs). Recently, scholars have called for closer engagement with intelligent technologies’ specific features, which otherwise remain black-boxed. We aim to develop greater insights into how practices enacted by different end-users shape and reshape the production of KPIs in technological change initiatives. Combined with a ‘performative’ process approach, our research draws on Michel Foucault’s view of power to emphasise both the negative and productive effects of power. We highlight the pursuing (push)-withdrawing (pull) dynamics which lie in iterative and recursive acts of power and resistance involving a range of actors who collectively, albeit inadvertently and differently, change the meanings technological outcomes perform. Finally, we show that technologies that might initially seem to fail can ‘take off’ through persistent and positive power effects.
The Patient Feedback Response Framework – understanding why UK hospital staff find it difficult to make improvements based on patient feedback: A qualitative study
Patients are increasingly being asked for feedback about their healthcare experiences. However, healthcare staff often find it difficult to act on this feedback in order to make improvements to services. This paper draws upon notions of legitimacy and readiness to develop a conceptual framework (Patient Feedback Response Framework – PFRF) which outlines why staff may find it problematic to respond to patient feedback. A large qualitative study was conducted with 17 ward-based teams between 2013 and 2014, across three hospital Trusts in the North of England. This was a process evaluation of a wider study where ward staff were encouraged to make action plans based on patient feedback. We focus on three methods here: i) examination of taped discussion between ward staff during action planning meetings; ii) facilitators’ notes of these meetings; and iii) telephone interviews with staff six months later, focusing on whether action plans had been achieved. Analysis employed an abductive approach. Through the development of the PFRF, we found that making changes based on patient feedback is a complex, multi-tiered process and not something that ward staff can simply ‘do’. First, staff must exhibit normative legitimacy – the belief that listening to patients is a worthwhile exercise. Second, structural legitimacy has to be in place – ward teams need adequate autonomy, ownership and resource to enact change. Some ward teams are able to make improvements within their immediate control and environment. Third, for those staff who require interdepartmental co-operation or high-level assistance to achieve change, organisational readiness must exist at the level of the hospital, otherwise improvement will rarely be enacted. Case studies drawn from our empirical data demonstrate the above. It is only when appropriate levels of individual and organisational capacity to change exist that patient feedback is likely to be acted upon to improve services.
