    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. Coordinate resolutions were measured for all types of chambers and fall in the range 47 to 243 microns. The efficiencies for local charged-track triggers, hit reconstruction, and segment reconstruction were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.
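
    As a rough illustration of the efficiency figures quoted above, the following sketch (not taken from the paper; the counts are invented placeholders) computes an efficiency and a simple binomial uncertainty from pass/total counts, the basic ratio behind statements such as "above 95%".

```python
# Minimal sketch (not from the paper): efficiency and an approximate
# binomial uncertainty from pass/total counts. The counts below are
# invented placeholders, not CMS data.
import math

def efficiency(n_pass: int, n_total: int) -> tuple[float, float]:
    """Return (efficiency, normal-approximation binomial standard error)."""
    if n_total <= 0:
        raise ValueError("n_total must be positive")
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

# Hypothetical counts for illustration only.
eff, err = efficiency(n_pass=9720, n_total=10000)
print(f"efficiency = {eff:.3f} +/- {err:.3f}")
```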

    Identification and Filtering of Uncharacteristic Noise in the CMS Hadron Calorimeter

    Peer reviewed.

    Performance of CMS hadron calorimeter timing and synchronization using test beam, cosmic ray, and LHC beam data

    This paper discusses the design and performance of the time measurement technique and of the synchronization systems of the CMS hadron calorimeter. Time measurement performance results are presented from test beam data taken in the years 2004 and 2006. For hadronic showers of energy greater than 100 GeV, the timing resolution is measured to be about 1.2 ns. Time synchronization and out-of-time background rejection results are presented from the Cosmic Run At Four Tesla and LHC beam runs taken in the autumn of 2008. The inter-channel synchronization is measured to be within ±2 ns.
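
    As a minimal sketch of what an inter-channel synchronization check could look like (an illustration, not the procedure used in the paper; the channel names and hit times are invented), one can compare per-channel mean hit times to the overall mean and flag channels whose offset falls outside the ±2 ns window quoted above.

```python
# Minimal sketch (not the paper's actual procedure): per-channel time
# offsets relative to the overall mean, flagged against a +/- 2 ns window.
# Channel names and hit times are invented placeholders.
from statistics import mean

hit_times_ns = {  # channel id -> measured hit times (ns)
    "HB_ieta1_iphi1": [24.8, 25.1, 25.3],
    "HB_ieta1_iphi2": [28.9, 29.2, 29.0],
    "HE_ieta20_iphi5": [25.4, 25.6, 25.2],
}

global_mean = mean(t for times in hit_times_ns.values() for t in times)

for channel, times in hit_times_ns.items():
    offset = mean(times) - global_mean
    within = abs(offset) <= 2.0  # the +/- 2 ns synchronization window
    print(f"{channel}: offset = {offset:+.2f} ns, within window: {within}")
```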

    Robust Biomarkers: Methodologically Tracking Causal Processes in Alzheimer’s Measurement

    In biomedical measurement, biomarkers are used to achieve reliable prediction of, and useful causal information about, patient outcomes while minimizing complexity of measurement, resources, and invasiveness. A biomarker is an assayable metric that discloses the status of a biological process of interest, be it normative, pathophysiological, or in response to intervention. The greatest utility from biomarkers comes from their ability to help clinicians (and researchers) make and evaluate clinical decisions. In this paper we discuss a specific methodological use of clinical biomarkers in pharmacological measurement: some biomarkers, called ‘surrogate markers’, are used to substitute for a clinically meaningful endpoint corresponding to events and their penultimate risk factors. We confront the reliability of clinical biomarkers that are used to gather information about clinically meaningful endpoints. Our aim is to present a systematic methodology for assessing the reliability of multiple surrogate markers (and biomarkers in general). To do this we draw upon the robustness analysis literature in the philosophy of science and the empirical use of clinical biomarkers. After introducing robustness analysis we present two problems with biomarkers in relation to reliability. Next, we propose an intervention-based robustness methodology for organizing the reliability of biomarkers in general. We propose three relevant conditions for a robust methodology for biomarkers: (R1) Intervention-based demonstration of partial independence of modes: in biomarkers, partial independence can be demonstrated through exogenous interventions that modify a process some number of “steps” removed from each of the markers. (R2) Comparison of diverging and converging results across biomarkers: by systematically comparing partially independent biomarkers we can track under what conditions markers fail to converge in results, and under which conditions they successfully converge. (R3) Information within the context of theory: through a systematic cross-comparison of the markers we can make causal conclusions as well as eliminate competing theories. We apply our robust methodology to currently developing Alzheimer’s research to show its usefulness for making causal conclusions.

    Effect of sitagliptin on cardiovascular outcomes in type 2 diabetes

    BACKGROUND: Data are lacking on the long-term effect on cardiovascular events of adding sitagliptin, a dipeptidyl peptidase 4 inhibitor, to usual care in patients with type 2 diabetes and cardiovascular disease. METHODS: In this randomized, double-blind study, we assigned 14,671 patients to add either sitagliptin or placebo to their existing therapy. Open-label use of antihyperglycemic therapy was encouraged as required, aimed at reaching individually appropriate glycemic targets in all patients. To determine whether sitagliptin was noninferior to placebo, we used a relative risk of 1.3 as the marginal upper boundary. The primary cardiovascular outcome was a composite of cardiovascular death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for unstable angina. RESULTS: During a median follow-up of 3.0 years, there was a small difference in glycated hemoglobin levels (least-squares mean difference for sitagliptin vs. placebo, -0.29 percentage points; 95% confidence interval [CI], -0.32 to -0.27). Overall, the primary outcome occurred in 839 patients in the sitagliptin group (11.4%; 4.06 per 100 person-years) and 851 patients in the placebo group (11.6%; 4.17 per 100 person-years). Sitagliptin was noninferior to placebo for the primary composite cardiovascular outcome (hazard ratio, 0.98; 95% CI, 0.88 to 1.09; P<0.001). Rates of hospitalization for heart failure did not differ between the two groups (hazard ratio, 1.00; 95% CI, 0.83 to 1.20; P = 0.98). There were no significant between-group differences in rates of acute pancreatitis (P = 0.07) or pancreatic cancer (P = 0.32). CONCLUSIONS: Among patients with type 2 diabetes and established cardiovascular disease, adding sitagliptin to usual care did not appear to increase the risk of major adverse cardiovascular events, hospitalization for heart failure, or other adverse events.
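
    As a back-of-the-envelope illustration of the noninferiority criterion described above, the sketch below (an illustration using the numbers quoted in the abstract, not the trial's analysis code) checks that the upper bound of the hazard-ratio confidence interval lies below the prespecified margin of 1.3.

```python
# Minimal sketch of the noninferiority check, using the numbers quoted in
# the abstract; an illustration only, not the trial's analysis code.
def is_noninferior(hr_upper_ci: float, margin: float = 1.3) -> bool:
    """Noninferiority is claimed if the upper confidence bound of the
    hazard ratio stays below the prespecified margin."""
    return hr_upper_ci < margin

hazard_ratio = 0.98            # point estimate from the abstract
ci_low, ci_high = 0.88, 1.09   # 95% confidence interval from the abstract
print(f"HR = {hazard_ratio} (95% CI {ci_low}-{ci_high})")
print("noninferior to placebo:", is_noninferior(ci_high))  # True: 1.09 < 1.3
```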