
    Methodological criteria for the assessment of moderators in systematic reviews of randomised controlled trials: a consensus study

    Background: Current methodological guidelines provide advice about the assessment of subgroup analyses within RCTs but do not specify explicit criteria for assessment. Our objective was to provide researchers with a set of criteria to facilitate the grading of evidence for moderators in systematic reviews. Method: We developed a set of criteria from methodological manuscripts (n = 18) identified through a snowballing technique and electronic database searches. Criteria were reviewed by an international Delphi panel (n = 21) comprising authors who have published methodological papers in this area and researchers who have been active in the study of subgroup analysis in RCTs. We used the Research ANd Development/University of California Los Angeles (RAND/UCLA) appropriateness method to assess consensus on the quantitative data. Free responses were coded for consensus and disagreement. In a subsequent round, additional criteria were extracted from the Cochrane Reviewers' Handbook and the process was repeated. Results: The recommendations are that meta-analysts report both confirmatory and exploratory findings for subgroup analyses. Confirmatory findings must come only from studies in which a specific theory- or evidence-based a priori statement is made. Exploratory findings may be used to inform future trials. However, for inclusion in a meta-analysis of moderators, the following additional criteria should be applied to each study: baseline factors should be measured prior to randomisation, measurement of baseline factors should be of adequate reliability and validity, and a specific test of the interaction between baseline factors and interventions must be presented. Conclusions: There is consensus among a group of 21 international experts that methodological criteria to assess moderators within systematic reviews of RCTs are both timely and necessary. The consensus resulted in five criteria divided into two groups for synthesising evidence: confirmatory findings to support hypotheses about moderators, and exploratory findings to inform future research. These recommendations are discussed in reference to previous recommendations for evaluating and reporting moderator studies.
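    The interaction test referred to in the criteria above can be illustrated with a minimal sketch: a regression of the outcome on treatment, a baseline factor, and their product term, whose coefficient is the explicit moderator test. The variable names and simulated data below are hypothetical, and statsmodels is assumed to be available.

        # Sketch: testing a treatment-by-baseline-factor interaction in a single RCT.
        # Variable names (outcome, treatment, baseline_severity) are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 400
        df = pd.DataFrame({
            "treatment": rng.integers(0, 2, n),        # randomised arm (0/1)
            "baseline_severity": rng.normal(size=n),   # measured prior to randomisation
        })
        # Simulated outcome with a built-in moderation effect, for illustration only
        df["outcome"] = (0.5 * df["treatment"]
                         + 0.3 * df["baseline_severity"]
                         + 0.4 * df["treatment"] * df["baseline_severity"]
                         + rng.normal(size=n))

        # The treatment:baseline_severity coefficient is the explicit interaction test
        model = smf.ols("outcome ~ treatment * baseline_severity", data=df).fit()
        print(model.summary().tables[1])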

    Stacked Search for Gravitational Waves from the 2006 SGR 1900+14 Storm

    We present the results of a LIGO search for short-duration gravitational waves (GWs) associated with the 2006 March 29 SGR 1900+14 storm. A new search method is used, "stacking" the GW data around the times of individual soft-gamma bursts in the storm to enhance sensitivity for models in which multiple bursts are accompanied by GW emission. We assume that variation in the time difference between burst electromagnetic emission and potential burst GW emission is small relative to the GW signal duration, and we time-align GW excess power time-frequency tilings containing individual burst triggers to their corresponding electromagnetic emissions. We use two GW emission models in our search: a fluence-weighted model and a flat (unweighted) model for the most electromagnetically energetic bursts. We find no evidence of GWs associated with either model. Model-dependent GW strain, isotropic GW emission energy E_GW, and gamma = E_GW / E_EM upper limits are estimated using a variety of assumed waveforms. The stacking method allows us to set the most stringent model-dependent limits on transient GW strain published to date. We find E_GW upper limit estimates (at a nominal distance of 10 kpc) between 2x10^45 erg and 6x10^50 erg, depending on waveform type. These limits are an order of magnitude lower than upper limits published previously for this storm and overlap with the range of electromagnetic energies emitted in SGR giant flares.
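    The stacking idea described above can be sketched schematically: excess-power time-frequency maps, each time-aligned on its burst trigger, are summed either with equal weights (flat model) or weighted by electromagnetic fluence. The sketch below is an illustrative toy under simplified assumptions about array shapes and trigger alignment, not the collaboration's pipeline.

        # Sketch: stack per-burst excess-power time-frequency maps to boost a common GW signature.
        import numpy as np

        def stack_tf_maps(tf_maps, fluences=None):
            """tf_maps: list of 2-D arrays (time x frequency), each already
            time-aligned on its burst trigger; fluences: optional per-burst weights."""
            maps = np.asarray(tf_maps, dtype=float)
            if fluences is None:                        # flat (unweighted) model
                weights = np.ones(len(maps))
            else:                                       # fluence-weighted model
                weights = np.asarray(fluences, dtype=float)
                weights = weights / weights.sum()
            return np.tensordot(weights, maps, axes=1)  # weighted sum over bursts

        # toy example: 3 bursts, 64 time bins x 32 frequency bins of excess power
        rng = np.random.default_rng(1)
        maps = [rng.random((64, 32)) for _ in range(3)]
        stacked = stack_tf_maps(maps, fluences=[5.0, 2.0, 1.0])
        print(stacked.shape)   # (64, 32); a common signal adds coherently, noise averages down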

    First measurement of the Hubble Constant from a Dark Standard Siren using the Dark Energy Survey Galaxies and the LIGO/Virgo Binary–Black-hole Merger GW170814

    We present a multi-messenger measurement of the Hubble constant H_0 using the binary black-hole merger GW170814 as a standard siren, combined with a photometric redshift catalog from the Dark Energy Survey (DES). The luminosity distance is obtained from the gravitational wave signal detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO)/Virgo Collaboration (LVC) on 2017 August 14, and the redshift information is provided by the DES Year 3 data. Black hole mergers such as GW170814 are expected to lack bright electromagnetic emission that would uniquely identify their host galaxies and allow an object-by-object Hubble diagram. However, they are suitable for a statistical measurement, provided that a galaxy catalog of adequate depth and redshift completeness is available. Here we present the first Hubble parameter measurement using a black hole merger. Our analysis yields an H_0 estimate that is consistent with both SN Ia and cosmic microwave background measurements of the Hubble constant. The quoted 68% credible region comprises 60% of the uniform prior range [20, 140] km s^-1 Mpc^-1, and it depends on the assumed prior range. If we take a broader prior of [10, 220] km s^-1 Mpc^-1, the credible region spans 57% of the prior range. Although a weak constraint on the Hubble constant from a single event is expected using the dark siren method, a multifold increase in the LVC event rate is anticipated in the coming years, and combinations of many sirens will lead to improved constraints on H_0.
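    The statistical (dark siren) approach can be illustrated with a toy sketch: for each trial value of H_0, every candidate host galaxy in the localization volume predicts a luminosity distance from its redshift, and the GW distance likelihood is marginalized over those galaxies. The numbers, the Gaussian distance likelihood, and the low-redshift distance relation below are illustrative assumptions, not the DES/LVC analysis.

        # Toy dark-siren H_0 posterior: marginalize a GW distance likelihood over
        # candidate host-galaxy redshifts, using d_L ~ c*z/H_0 at low redshift.
        import numpy as np

        c = 299792.458                                   # km/s
        gw_dL, gw_sigma = 540.0, 130.0                   # Mpc; illustrative GW distance estimate
        galaxy_z = np.random.default_rng(2).uniform(0.05, 0.20, size=200)   # mock catalog

        H0_grid = np.linspace(20, 140, 500)              # km/s/Mpc, flat prior
        posterior = np.zeros_like(H0_grid)
        for i, H0 in enumerate(H0_grid):
            dL_pred = c * galaxy_z / H0                  # predicted distance for each galaxy
            like = np.exp(-0.5 * ((dL_pred - gw_dL) / gw_sigma) ** 2)
            posterior[i] = like.sum()                    # marginalize over possible hosts

        posterior /= posterior.sum()                     # normalize (up to the grid spacing)
        print(H0_grid[np.argmax(posterior)])             # maximum-posterior H_0 (toy value)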

    Small business economics: A perspective from The Netherlands

    In the analysis of economic phenomena, either within or across industries, there is room for integrating the role of small business. This contribution can be made by aggregation or generalization of the findings at the meso level, which in turn are partly based upon analyses at the micro level. The Netherlands has a long history of macro model building. A recent discussion among Dutch macro-economists considered the future of econometric model building at the macro level and how best to improve it. The explicit integration of scale effects, however, was not mentioned. I am convinced that improvements in this respect are possible. In particular, I have in mind the role which small businesses play in areas such as wage structure, employment, or investment. The dissection of macro prognoses into a small business component and a remaining component is a traditional practice in The Netherlands. Finally, there is much concern in The Netherlands with the calculation of regulatory effects, decomposed into effects for small and large businesses. If anywhere in the world there is a solid foundation for studying scale effects in both macro and sectoral models, it is most certainly The Netherlands: there is a strong tradition of macro-econometric model building; groups of econometricians specialized in small business research exist; Dutch policymakers show concern; and the required research apparatus is available.

    GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2

    We describe the observation of GW170104, a gravitational-wave signal produced by the coalescence of a pair of stellar-mass black holes. The signal was measured on January 4, 2017 at 10:11:58.6 UTC by the twin advanced detectors of the Laser Interferometer Gravitational-Wave Observatory during their second observing run, with a network signal-to-noise ratio of 13 and a false alarm rate less than 1 in 70 000 years. The inferred component black hole masses are 31.2^{+8.4}_{-6.0} M_sun and 19.4^{+5.3}_{-5.9} M_sun (at the 90% credible level). The black hole spins are best constrained through measurement of the effective inspiral spin parameter, a mass-weighted combination of the spin components perpendicular to the orbital plane, chi_eff = -0.12^{+0.21}_{-0.30}. This result implies that spin configurations with both component spins positively aligned with the orbital angular momentum are disfavored. The source luminosity distance is 880^{+450}_{-390} Mpc, corresponding to a redshift of z = 0.18^{+0.08}_{-0.07}. We constrain the magnitude of modifications to the gravitational-wave dispersion relation and perform null tests of general relativity. Assuming that gravitons are dispersed in vacuum like massive particles, we bound the graviton mass to m_g <= 7.7 x 10^{-23} eV/c^2. In all cases, we find that GW170104 is consistent with general relativity.
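    For context, a graviton mass bound of this kind corresponds to a lower limit on the graviton Compton wavelength via lambda_g = h / (m_g c). The short sketch below only evaluates that standard relation for the quoted bound; it is not part of the paper's analysis.

        # Convert the quoted graviton mass bound to a Compton-wavelength lower limit.
        h = 6.62607015e-34        # Planck constant, J*s
        c = 2.99792458e8          # speed of light, m/s
        eV = 1.602176634e-19      # joules per electronvolt

        m_g_c2 = 7.7e-23 * eV                   # bound on m_g c^2, in joules
        lambda_g = h * c / m_g_c2               # Compton wavelength lower limit, metres
        print(f"lambda_g >~ {lambda_g:.2e} m")  # of order 10^16 m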

    Evaluation of emergency department performance: A systematic review on recommended performance and quality-in-care measures

    BACKGROUND: Evaluation of emergency department (ED) performance remains a difficult task due to the lack of consensus on performance measures that reflect high quality, efficiency, and sustainability. AIM: To describe, map, and critically evaluate the performance measures that the published literature regards as most relevant for assessing overall ED performance. METHODS: Following the PRISMA guidelines, a systematic literature review of review articles reporting highlighted ED performance measures was conducted in the PubMed, Cochrane Library, and Web of Science databases. Study eligibility criteria were: 1) the main purpose was to discuss, analyse, or promote performance measures best reflecting ED performance, 2) the article was a review article, and 3) the article reported macro-level performance measures, thus reflecting an overall departmental performance level. RESULTS: Fourteen of 46 unique hits addressed this study's objective. Time intervals and patient-related measures dominated the performance measures identified in review articles from the US, UK, Sweden, and Canada. Length of stay (LOS), time from patient arrival to initial clinical assessment, and time from patient arrival to admission were highlighted by the majority of articles. Concurrently, "patients left without being seen" (LWBS), unplanned re-attendance within a maximum of 72 hours, mortality/morbidity, and the number of unintended incidents were the most highlighted performance measures relating directly to the patient. Performance measures related to employees were stated in only two of the 14 included articles. CONCLUSIONS: A total of 55 ED performance measures were identified. ED time intervals were the most recommended performance measures, followed by patient-centredness and safety performance measures. ED employee-related performance measures were rarely mentioned in the investigated literature. The study's results allow for advancement towards improved performance measurement and standardised assessment across EDs.
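    As a minimal illustration, two of the most frequently recommended measures (length of stay and the LWBS rate) can be computed from a visit-level table as in the sketch below; the column names and sample data are hypothetical, not drawn from the reviewed studies.

        # Sketch: computing two commonly recommended ED performance measures.
        import pandas as pd

        visits = pd.DataFrame({
            "arrival":     pd.to_datetime(["2024-01-01 10:00", "2024-01-01 11:30", "2024-01-01 12:15"]),
            "departure":   pd.to_datetime(["2024-01-01 14:00", "2024-01-01 12:10", "2024-01-01 18:00"]),
            "disposition": ["discharged", "left_without_being_seen", "admitted"],
        })

        los_hours = (visits["departure"] - visits["arrival"]).dt.total_seconds() / 3600
        lwbs_rate = (visits["disposition"] == "left_without_being_seen").mean()
        print(f"median LOS: {los_hours.median():.1f} h, LWBS rate: {lwbs_rate:.1%}")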

    Tools and recommendations for commissioning and quality assurance of deformable image registration in radiotherapy

    Multiple tools are available for commissioning and quality assurance of deformable image registration (DIR), each with its own advantages and disadvantages in the context of radiotherapy. The selection of appropriate tools should depend on the DIR application, with its corresponding available input, desired output, and time requirement. Discussions were hosted by the ESTRO Physics Workshop 2021 on Commissioning and Quality Assurance for DIR in Radiotherapy. A consensus was reached on which requirements are needed for commissioning and quality assurance for different applications, and which combination of tools meets them. For commissioning, we recommend the target registration error of manually annotated anatomical landmarks or the distance-to-agreement of manually delineated contours to evaluate alignment. These should be supplemented by the distance to discordance and/or biomechanical criteria to evaluate consistency and plausibility. Digital phantoms can be useful to evaluate DIR for dose accumulation but are currently only available for a limited range of anatomies, image modalities, and types of deformation. For quality assurance of DIR for contour propagation, we recommend at least a visual inspection of the registered image and contour. For quality assurance of DIR for warping quantitative information such as dose, Hounsfield units, or positron emission tomography data, we recommend visual inspection of the registered image together with image similarity to evaluate alignment, supplemented by an inspection of the Jacobian determinant or bending energy to evaluate plausibility, and by the dose (gradient) to evaluate relevance. We acknowledge that some of these metrics are still missing in currently available commercial solutions.
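    Two of the metrics named above, target registration error on annotated landmarks and the Jacobian determinant of a displacement vector field, can be sketched in a few lines of numpy under simplifying assumptions (regular voxel grid, displacement field stored as an (X, Y, Z, 3) array); this is an illustration, not a validated QA tool.

        # Sketch: target registration error and Jacobian determinant for DIR evaluation.
        import numpy as np

        def target_registration_error(fixed_pts, warped_moving_pts):
            """Mean Euclidean distance between corresponding landmarks (N x 3 arrays)."""
            return np.linalg.norm(fixed_pts - warped_moving_pts, axis=1).mean()

        def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
            """Voxel-wise Jacobian determinant of a displacement field shaped (X, Y, Z, 3).
            Values <= 0 indicate folding, i.e. a physically implausible deformation."""
            grads = [np.gradient(dvf[..., k], *spacing, axis=(0, 1, 2)) for k in range(3)]
            J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)   # (..., 3, 3)
            J = J + np.eye(3)       # transform Jacobian = identity + displacement gradient
            return np.linalg.det(J)

        # toy example
        rng = np.random.default_rng(3)
        print(target_registration_error(rng.random((5, 3)), rng.random((5, 3))))
        dvf = 0.01 * rng.standard_normal((8, 8, 8, 3))
        print(jacobian_determinant(dvf).min())   # should stay > 0 for a plausible DIR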

    XML-BSPM: an XML format for storing Body Surface Potential Map recordings

    Background: The Body Surface Potential Map (BSPM) is an electrocardiographic method for recording and displaying the electrical activity of the heart from a spatial perspective. The BSPM has been deemed more accurate than the 12-lead ECG for assessing certain cardiac pathologies. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. Methods: XML was chosen for the implementation, as opposed to a binary encoding, for the sake of human readability. There are currently no standards dictating the number of electrodes and electrode positions for recording a BSPM; in fact, at least 11 different BSPM electrode configurations are in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics, and this diagram can then be used in a 2D coordinate system for retaining electrode positions. Results: The XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording, which can be deemed suitable, for example, for use with a telemonitoring application. Moreover, there is potential for file sizes to be reduced further using basic compression algorithms, e.g. the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web-based BSPM viewer. Conclusions: This format, if widely adopted, could promote BSPM interoperability, knowledge sharing, and data mining. This work could also be used to provide conceptual solutions and inspire existing formats such as DICOM, SCP-ECG, and aECG to support the storage of BSPMs. In summary, this research provides initial groundwork for creating a complete BSPM management system.
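    The published XML-BSPM schema is not reproduced in this abstract, so the element and attribute names in the sketch below are purely illustrative assumptions; it only shows how a BSPM-style record (a torso diagram reference, electrode positions in 2D diagram coordinates, and sampled lead data) could be serialized with Python's standard library.

        # Illustrative sketch of an XML-BSPM-style record; element names are assumptions,
        # not the published XML-BSPM schema.
        import xml.etree.ElementTree as ET

        root = ET.Element("bspmRecording", subject="anonymous-001", leads="117")
        ET.SubElement(root, "torsoDiagram", format="svg", href="torso_front_back.svg")

        electrodes = ET.SubElement(root, "electrodes")
        for name, x, y in [("A1", 120, 340), ("A2", 150, 340)]:    # 2D diagram coordinates
            ET.SubElement(electrodes, "electrode", id=name, x=str(x), y=str(y))

        signals = ET.SubElement(root, "signals", samplingRateHz="500", units="microvolt")
        ET.SubElement(signals, "lead", electrode="A1").text = "12 15 14 9 -3"

        print(ET.tostring(root, encoding="unicode"))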