Experience in Testing the Grid Based Workload Management System of a LHC Experiment
Description of scale testing of the infrastructure used to perform physics analysis in CMS
CRAB: the CMS distributed analysis tool development and design
Starting from 2007 the CMS experiment will produce several petabytes of data each year, to be distributed over many computing centres located in many different countries. The CMS computing model defines how the data are to be distributed so that CMS physicists can access them efficiently in order to perform their physics analyses. CRAB (CMS Remote Analysis Builder) is a specific tool, designed and developed by the CMS collaboration, that facilitates access to the distributed data in a very transparent way. The tool's main feature is the ability to distribute and parallelize the local CMS batch data analysis processes over different Grid environments without any specific knowledge of the underlying computational infrastructures. More specifically, CRAB allows the transparent use of WLCG, gLite and OSG middleware. CRAB interacts with the local user environment, with the CMS Data Management services and with the Grid middleware. It has been in production and in routine use by end-users since Spring 2004. It has been used extensively during studies to prepare the Physics Technical Design Report (PTDR) and in the analysis of reconstructed event samples generated during the Computing Software and Analysis Challenge (CSA06), which involved generating thousands of jobs per day at peak rates. In this poster we discuss the current implementation of CRAB, experience with using it in production, and plans for improvements in the immediate future.
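To illustrate the kind of job splitting and backend-specific wrapping described above, the following is a minimal, purely hypothetical Python sketch of how a CRAB-like tool might divide a local analysis into independent Grid jobs; the names (Job, split_by_events, wrap_for_backend) and the job-description snippets are illustrative assumptions, not CRAB's actual interfaces or file formats.

# Hypothetical sketch: split a dataset into independent jobs and render a
# minimal, backend-specific job description. Illustrative only; not CRAB code.

from dataclasses import dataclass
from typing import List


@dataclass
class Job:
    first_event: int   # index of the first event this job processes
    n_events: int      # number of events the job processes
    backend: str       # e.g. "gLite" or "OSG" (illustrative labels)


def split_by_events(total_events: int, events_per_job: int, backend: str) -> List[Job]:
    """Divide a dataset of total_events into independent, equal-sized jobs."""
    jobs = []
    first = 0
    while first < total_events:
        n = min(events_per_job, total_events - first)
        jobs.append(Job(first_event=first, n_events=n, backend=backend))
        first += n
    return jobs


def wrap_for_backend(job: Job) -> str:
    """Render a toy job-description line for the chosen backend (assumed syntax)."""
    args = f"--first {job.first_event} --events {job.n_events}"
    if job.backend == "gLite":
        return f'JDL: Arguments = "{args}";'
    if job.backend == "OSG":
        return f"condor: arguments = {args}"
    raise ValueError(f"unknown backend {job.backend!r}")


if __name__ == "__main__":
    # Example: a 100k-event analysis split into four 25k-event Grid jobs.
    for job in split_by_events(total_events=100_000, events_per_job=25_000, backend="gLite"):
        print(wrap_for_backend(job))

The point of the sketch is only the division of labour it shows: the user describes one local analysis, and the tool turns it into many independent jobs whose submission details depend on the target middleware.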
CSA06 at the Italian Tiers (CMS Internal Note 2007/007)
The experience of CSA06 at the Italian Tiers is described. The operations at the CNAF Tier-1 are discussed. The skim procedures for the analysis exercises at the Bari, Legnaro, Pisa and Rome Tier-2s are described and the subsequent results are given.
