    The fundamental problem of command: plan and compliance in a partially centralised economy

    When a principal gives an order to an agent and advances resources for its implementation, the temptations for the agent to shirk or steal from the principal rather than comply constitute the fundamental problem of command. Historically, partially centralised command economies enforced compliance in various ways, assisted by nesting the fundamental problem of exchange within that of command. The Soviet economy provides some relevant data. The Soviet command system combined several enforcement mechanisms in an equilibrium that shifted as agents learned and each mechanism's comparative costs and benefits changed. When the conditions for an equilibrium disappeared, the system collapsed. Comparative Economic Studies (2005) 47, 296–314. doi:10.1057/palgrave.ces.810011

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb−1 of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered with increasing missing transverse momentum requirements between ETmiss > 150 GeV and ETmiss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.

    Development of a Core Outcome Set for effectiveness trials aimed at optimising prescribing in older adults in care homes

    Background: Prescribing medicines for older adults in care homes is known to be sub-optimal. Whilst trials testing interventions to optimise prescribing in this setting have been published, heterogeneity in outcome reporting has hindered comparison of interventions, thus limiting evidence synthesis. The aim of this study was to develop a core outcome set (COS), a list of outcomes which should be measured and reported, as a minimum, for all effectiveness trials involving optimising prescribing in care homes. The COS was developed as part of the Care Homes Independent Pharmacist Prescribing Study (CHIPPS). Methods: A long-list of outcomes was identified through a review of published literature and stakeholder input. Outcomes were reviewed and refined prior to entering a two-round online Delphi exercise and then distributed via a web link to the CHIPPS Management Team, a multidisciplinary team including pharmacists, doctors and Patient Public Involvement representatives (amongst others), who comprised the Delphi panel. The Delphi panellists (n = 19) rated the importance of outcomes on a 9-point Likert scale from 1 (not important) to 9 (critically important). Consensus for an outcome being included in the COS was defined as ≥70% participants scoring 7–9 and <15% scoring 1–3. Exclusion was defined as ≥70% scoring 1–3 and <15% 7–9. Individual and group scores were fed back to participants alongside the second questionnaire round, which included outcomes for which no consensus had been achieved. Results: A long-list of 63 potential outcomes was identified. Refinement of this long-list of outcomes resulted in 29 outcomes, which were included in the Delphi questionnaire (round 1). 
Following both rounds of the Delphi exercise, 13 outcomes (organised into seven overarching domains: medication appropriateness, adverse drug events, prescribing errors, falls, quality of life, all-cause mortality and admissions to hospital (and associated costs)) met the criteria for inclusion in the final COS. Conclusions: We have developed a COS for effectiveness trials aimed at optimising prescribing in older adults in care homes using robust methodology. Widespread adoption of this COS will facilitate evidence synthesis between trials. Future work should focus on evaluating appropriate tools for these key outcomes to further reduce heterogeneity in outcome measurement in this context.
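    The Delphi consensus rule quoted in this abstract (include if ≥70% of panellists score 7–9 and <15% score 1–3; exclude if the reverse) is mechanical enough to express directly. A minimal sketch, with a hypothetical function name and example scores (the study's per-outcome scores are not published in the abstract):

```python
def classify_outcome(scores):
    """Classify a Delphi outcome from panellist scores on a 1-9 Likert scale.

    Consensus rules as stated in the abstract:
      include:      >= 70% of panellists score 7-9 and < 15% score 1-3
      exclude:      >= 70% of panellists score 1-3 and < 15% score 7-9
      otherwise:    no consensus (carried forward to the next round)
    """
    n = len(scores)
    high = sum(1 for s in scores if 7 <= s <= 9) / n  # fraction scoring 7-9
    low = sum(1 for s in scores if 1 <= s <= 3) / n   # fraction scoring 1-3
    if high >= 0.70 and low < 0.15:
        return "include"
    if low >= 0.70 and high < 0.15:
        return "exclude"
    return "no consensus"

# With the study's panel size of 19, 14 high scores (74%) meets the
# inclusion threshold provided fewer than 3 panellists (15%) scored 1-3.
print(classify_outcome([9] * 14 + [5] * 5))
```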

    Validation of the surgical fear questionnaire in adult patients waiting for elective surgery

    Objectives: Because existing instruments for assessing surgical fear seem either too general or too limited, the Surgical Fear Questionnaire (SFQ) was developed. The aim of this study is to assess the validity and reliability of the SFQ. Methods: Based on existing literature and expert consultation, the ten-item SFQ was composed. Data on the SFQ were obtained from 5 prospective studies (N = 3233) in inpatient or day surgery patients. These data were used for exploratory factor analysis (EFA), confirmatory factor analysis (CFA), reliability analysis and validity analysis. Results: EFA in Studies 1 and 2 revealed a two-factor structure with one factor associated with fear of the short-term consequences of surgery (SFQ-s, items 1–4) and the other factor with fear of the long-term consequences of surgery (SFQ-l, items 5–10). However, in both studies two items of the SFQ-l had low factor loadings. Therefore, in Studies 3 and 4 the two-factor structure was tested and confirmed by CFA in an eight-item version of the SFQ. Across all studies, significant correlations of the SFQ with pain catastrophizing, state anxiety, and preoperative pain intensity indicated good convergent validity. Internal consistency (Cronbach's alpha) was between 0.765–0.920 (SFQ-total), 0.766–0.877 (SFQ-s), and 0.628–0.899 (SFQ-l). The SFQ proved sensitive enough to detect differences based on age, sex, education level, employment status and preoperative pain intensity. Discussion: The SFQ is a valid and reliable eight-item index of surgical fear consisting of two subscales: fear of the short-term consequences of surgery and fear of the long-term consequences. This study was conducted with departmental funding and supported by a grant from The Netherlands Organisation for Scientific Research (Zon-MW, http://www.zonmw.nl/en/), grant no. 110000007. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
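    The internal-consistency figures above are Cronbach's alpha values. For readers unfamiliar with the statistic, a minimal sketch of the standard formula, alpha = k/(k−1) · (1 − Σ item variances / variance of total score), applied to a small made-up item-score matrix (the study's raw data are not in the abstract):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of k columns, one per item, each of length n (one score
    per respondent). Uses population (n-denominator) variances; alpha is
    unchanged as long as the same variance convention is used throughout.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all k items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Two perfectly correlated items give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```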

    Search for supersymmetry in events with four or more leptons in √s = 13 TeV pp collisions with ATLAS

    Results from a search for supersymmetry in events with four or more charged leptons (electrons, muons and taus) are presented. The analysis uses a data sample corresponding to 36.1 fb−1 of proton–proton collisions delivered by the Large Hadron Collider at √s = 13 TeV and recorded by the ATLAS detector. Four-lepton signal regions with up to two hadronically decaying taus are designed to target a range of supersymmetric scenarios that can be either enriched in or depleted of events involving the production and decay of a Z boson. Data yields are consistent with Standard Model expectations and results are used to set upper limits on the event yields from processes beyond the Standard Model. Exclusion limits are set at the 95% confidence level in simplified models of General Gauge Mediated supersymmetry, where higgsino masses are excluded up to 295 GeV. In R-parity-violating simplified models with decays of the lightest supersymmetric particle to charged leptons, lower limits of 1.46 TeV, 1.06 TeV, and 2.25 TeV are placed on wino, slepton and gluino masses, respectively.

    Search for High-Mass Resonances Decaying to τν in pp Collisions at √s=13 TeV with the ATLAS Detector

    A search for high-mass resonances decaying to τν using proton-proton collisions at √s = 13 TeV produced by the Large Hadron Collider is presented. Only τ-lepton decays with hadrons in the final state are considered. The data were recorded with the ATLAS detector and correspond to an integrated luminosity of 36.1 fb−1. No statistically significant excess above the standard model expectation is observed; model-independent upper limits are set on the visible τν production cross section. Heavy W′ bosons with masses less than 3.7 TeV in the sequential standard model and masses less than 2.2–3.8 TeV depending on the coupling in the nonuniversal G(221) model are excluded at the 95% credibility level.

    Search for the direct production of charginos and neutralinos in final states with tau leptons in √s=13 TeV collisions with the ATLAS detector

    A search for the direct production of charginos and neutralinos in final states with at least two hadronically decaying tau leptons is presented. The analysis uses a dataset of pp collisions corresponding to an integrated luminosity of 36.1 fb−1, recorded with the ATLAS detector at the Large Hadron Collider at a centre-of-mass energy of 13 TeV. No significant deviation from the expected Standard Model background is observed. Limits are derived in scenarios of χ̃₁⁺χ̃₁⁻ pair production and of χ̃₁±χ̃₂⁰ and χ̃₁⁺χ̃₁⁻ production in simplified models where the neutralinos and charginos decay solely via intermediate left-handed staus and tau sneutrinos, and the mass of the τ̃L state is set to be halfway between the masses of the χ̃₁± and the χ̃₁⁰. Chargino masses up to 630 GeV are excluded at 95% confidence level in the scenario of direct production of χ̃₁⁺χ̃₁⁻ for a massless χ̃₁⁰. Common χ̃₁± and χ̃₂⁰ masses up to 760 GeV are excluded in the case of production of χ̃₁±χ̃₂⁰ and χ̃₁⁺χ̃₁⁻ assuming a massless χ̃₁⁰. Exclusion limits for additional benchmark scenarios with large and small mass-splitting between the χ̃₁± and the χ̃₁⁰ are also studied by varying the τ̃L mass between the masses of the χ̃₁± and the χ̃₁⁰.

    A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)

    Meeting abstract.

    Performance of missing transverse momentum reconstruction with the ATLAS detector using proton–proton collisions at √s = 13 TeV

    The performance of the missing transverse momentum (ETmiss) reconstruction with the ATLAS detector is evaluated using data collected in proton–proton collisions at the LHC at a centre-of-mass energy of 13 TeV in 2015. To reconstruct ETmiss, fully calibrated electrons, muons, photons, hadronically decaying τ-leptons, and jets reconstructed from calorimeter energy deposits and charged-particle tracks are used. These are combined with the soft hadronic activity measured by reconstructed charged-particle tracks not associated with the hard objects. Possible double counting of contributions from reconstructed charged-particle tracks from the inner detector, energy deposits in the calorimeter, and reconstructed muons from the muon spectrometer is avoided by applying a signal ambiguity resolution procedure which rejects already used signals when combining the various ETmiss contributions. The individual terms as well as the overall reconstructed ETmiss are evaluated with various performance metrics for scale (linearity), resolution, and sensitivity to the data-taking conditions. The method developed to determine the systematic uncertainties of the ETmiss scale and resolution is discussed. Results are shown based on the full 2015 data sample corresponding to an integrated luminosity of 3.2 fb−1.

    Measurement of the cross section for inclusive isolated-photon production in pp collisions at √s = 13 TeV using the ATLAS detector

    Inclusive isolated-photon production in pp collisions at a centre-of-mass energy of 13 TeV is studied with the ATLAS detector at the LHC using a data set with an integrated luminosity of 3.2 fb−1. The cross section is measured as a function of the photon transverse energy above 125 GeV in different regions of photon pseudorapidity. Next-to-leading-order perturbative QCD and Monte Carlo event-generator predictions are compared to the cross-section measurements and provide an adequate description of the data.