
    Demand uncertainty In modelling WDS: scaling laws and scenario generation

    Water distribution systems (WDS) are critical infrastructures that should be designed to work properly under different conditions. Their design and management should take into account the uncertain nature of some system parameters affecting the overall reliability of these infrastructures. In this context, water demand is the major source of uncertainty; uncertain demand should therefore be either modelled as a stochastic process or characterized using statistical tools. In this paper, we extend to the 3rd- and 4th-order moments the analytical equations (namely scaling laws) expressing the dependency of the statistical moments of demand signals on the sampling time resolution and on the number of served users. We also describe how the probability density function (pdf) of the demand signal changes as the number of users increases and as the sampling rate varies, using both synthetic data and real indoor water demand data. The scaling laws of the water demand statistics are a powerful tool for incorporating demand uncertainty into optimization models for the sustainable management of WDS. Specifically, in stochastic/robust optimization, solutions close to the optimum under different working conditions should be considered. The results of these optimization models are strongly dependent on the conditions taken into consideration (i.e. the scenarios). Among the approaches for defining demand scenarios and their probability-weight of occurrence, the moment-matching method is based on matching a set of statistical properties, e.g. moments from the 1st (mean) to the 4th (kurtosis) order.
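    The qualitative content of such scaling laws can be illustrated numerically. The sketch below is illustrative only: it assumes independent users with a hypothetical lognormal per-user demand, not the paper's measured indoor data or its analytical equations. For independent users, mean and variance of the aggregated signal grow roughly in proportion to the number of users n, while skewness decays like n^(-1/2) and excess kurtosis like n^(-1), so the aggregated pdf tends toward a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_demand(n_users, n_samples=200_000):
    """Total demand of n_users independent users per time step.
    The lognormal per-user demand is an illustrative choice, not
    the paper's measured indoor-demand distribution."""
    users = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_users))
    return users.sum(axis=1)

def moments(x):
    """Mean, variance, skewness and excess kurtosis (moments 1 to 4)."""
    m, s = x.mean(), x.std()
    z = (x - m) / s
    return m, s**2, (z**3).mean(), (z**4).mean() - 3.0

for n in (1, 10, 100):
    mean, var, sk, ku = moments(aggregate_demand(n))
    print(f"n={n:4d}  mean={mean:8.2f}  var={var:8.2f}  "
          f"skew={sk:6.3f}  ex.kurt={ku:6.3f}")
```

Running this shows mean and variance scaling up with n while the 3rd- and 4th-order standardized moments shrink, which is the central-limit behaviour the scaling laws quantify analytically.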

    Some X... in the Tuscan style (original French title: "Des X... à la manière toscane")


    USING A.R.P. PROXIMAL SURVEY TO MAP CALCIC HORIZON DEPTH IN VINEYARDS

    The investigation of the spatial variability of soil water retention capacity and depth is essential for correct and economical planning of the water supply of a vineyard. The advantage of measuring soil electrical properties with proximal sensors is the ability to operate with mobile, non-destructive tools, faster than a traditional soil survey. A.R.P. (Automatic Resistivity Profiling) is a mobile soil electrical resistivity (ER) mapping system conceived by Geocarta (Paris, France); it consists of a pair of transmitter sprocket-wheels, which inject current into the soil, and three pairs of receiver sprocket-wheels, which measure the voltage drop over three depth ranges, about 0-50, 0-100 and 0-170 cm. Ten vineyards of the "Villa Albius" farm in Sicily (southern Italy) were chosen for the A.R.P. survey, for an overall surface of 45 hectares. The vineyards were located on a wide Plio-Pleistocene marine terrace, characterized by a level of calcarenite a few metres thick, overlying yellow sands partially cemented by calcium carbonate. During the A.R.P. survey, 12 boreholes were described and sampled for laboratory analysis, and a further 6 boreholes were drilled to validate the map. All soils showed a calcic horizon (Bk, BCk or Ck) with its upper limit at variable depth. The depth of the calcic horizon (Dk) at each borehole was significantly correlated with ER, especially with ER0-100 (R2 = 0.83). The Dk map was interpolated using regression kriging and validated against the boreholes (R2 = 0.71) and against an NDVI map of the same vintage (R2 = 0.95).
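    The trend-fitting step of regression kriging can be sketched as follows. The calibration numbers below are hypothetical, not the paper's borehole data; the sketch only shows the first stage (fitting the deterministic trend Dk versus ER and computing R2), after which the residuals would be kriged over the field and added back to the trend surface to produce the Dk map.

```python
import numpy as np

# Hypothetical calibration data: calcic-horizon depth Dk (cm) at the
# boreholes versus A.R.P. resistivity ER0-100 (ohm*m). Illustrative
# values only, not the paper's measurements.
er = np.array([18., 25., 31., 40., 47., 55., 63., 72., 80., 95., 110., 130.])
dk = np.array([35., 42., 55., 60., 72., 78., 90., 95., 108., 120., 135., 150.])

# Step 1 of regression kriging: least-squares trend Dk = a*ER + b.
a, b = np.polyfit(er, dk, 1)
pred = a * er + b

# Goodness of fit of the trend (the paper reports R2 = 0.83 for ER0-100).
ss_res = np.sum((dk - pred) ** 2)
ss_tot = np.sum((dk - dk.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"Dk = {a:.2f}*ER + {b:.1f}, R2 = {r2:.3f}")

# Step 2 (not shown): krige the residuals dk - pred over the survey
# area and add them to the trend surface to obtain the final Dk map.
```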

    USING WRB TO MAP THE SOIL SYSTEMS OF ITALY

    The aim of this work was to test the 2010 version of the WRB soil classification for compiling a map of the soil systems of Italy at 1:500,000 scale. The source of data was the national geodatabase storing information on 1,414 Soil Typological Units (STUs). Although we basically followed WRB criteria to prioritize soil qualifiers, it was necessary to work out an original methodology for the map legend in order to represent the high variability inside each delineation while avoiding any loss of information. Each map unit may represent a combination of at most three codominant STUs. Dominant STUs were assessed by summing up the occurrences of STUs in the Land Components (LCs) of every soil system, where each LC is a specific combination of morphology, lithology and land cover. STUs were classified according to the WRB soil classification system at the third level, that is, reference soil group and first two qualifiers, when possible. Given the large number of delineations, map units had to be grouped to make the map more legible. Legend colours were organized first by soil region groups, then by the highest level of soil classification, resulting in a nested legend. The map comprises 3,357 polygons and 704 map units. The most common STUs were Calcaric Cambisols, followed at a distance by Calcaric Regosols, Eutric Cambisols, Haplic Calcisols, Vertic Cambisols, Cutanic Luvisols, Leptic Phaeozems, Chromic Luvisols, Dystric Cambisols, Fluvic Cambisols, and other STUs belonging to almost all the WRB reference soil groups. Keywords: geodatabase, soil system.
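    The dominant-STU assessment described above is essentially an occurrence count. The sketch below uses a toy soil-system record with made-up Land Components; the STU names are real WRB classes from the abstract, but the data and structure are illustrative assumptions, not the national geodatabase schema.

```python
from collections import Counter

# Toy soil-system record: each Land Component (LC) lists the STUs
# observed in it. The LC keys and their contents are hypothetical.
soil_system = {
    "LC1": ["Calcaric Cambisols", "Calcaric Regosols", "Eutric Cambisols"],
    "LC2": ["Calcaric Cambisols", "Haplic Calcisols"],
    "LC3": ["Calcaric Cambisols", "Calcaric Regosols"],
}

# Sum STU occurrences over all LCs of the soil system, then keep at
# most three codominant STUs per map unit, as the legend rules allow.
counts = Counter(stu for stus in soil_system.values() for stu in stus)
codominant = [stu for stu, _ in counts.most_common(3)]
print(codominant)
```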

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources. Comment: 15 pages, 9 figures

    The commissioning of CMS sites: improving the site reliability

    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use its network to transfer data, the functionality of all the site services relevant for CMS, and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure to rate CMS sites depending on their performance, including the complete automation of the program, a description of the monitoring tools, and its impact on improving the overall reliability of the Grid from the point of view of the CMS computing system.

    Testing and integrating the WLCG/EGEE middleware in the LHC computing

    The main goal of the Experiment Integration and Support (EIS) team in WLCG is to help the LHC experiments use the gLite middleware proficiently as part of their computing frameworks. This contribution gives an overview of the activities of the EIS team and focuses on a few that are particularly important for the experiments. One activity is the evaluation of the gLite workload management system (WMS) to assess its adequacy for the needs of LHC computing in terms of functionality, reliability and scalability. We describe in detail how the experiment requirements can be mapped to validation criteria, and how the WMS performance is accurately measured under realistic load conditions over prolonged periods of time. Another activity is the integration of the Service Availability Monitoring system (SAM) with the experiment monitoring frameworks. The SAM system is widely used in EGEE operations to identify malfunctions in Grid services, but it can be adapted to perform the same function on experiment-specific services. We describe how this has been done for some LHC experiments, which are now using SAM as part of their operations.

    It All Starts With a Song

    The purpose of my project is to forge an identity as a songwriter, composing songs and presenting them in their most stripped and pure version, removing all of the elements that surround a song to deliver it in a very distilled format. An underlying intention is to differentiate myself from other artists by unifying songwriting, singing and production skills. First, I chose three tunes I wrote in the past that had really strong potential as pure compositions; I analysed and deconstructed them, and I reworked their form, lyrics and melody (recreation process). In the second stage, I recorded a new version of the songs using a piano/vocal arrangement. Then, I processed my own vocals to make them sound their best using EQ, compression, reverb, delay, etc. Through this project, I grew as a vocalist, as a songwriter, and as a vocal producer. The design of the process proposes a path that could be pursued by different professional profiles in the music industry who may need to rebuild existing material by reflecting on their own work or work done by others.

    Comparison of five- and six-player basketball for women


    Measurement of the cross-section and charge asymmetry of W bosons produced in proton-proton collisions at √s = 8 TeV with the ATLAS detector

    This paper presents measurements of the W⁺ → μ⁺ν and W⁻ → μ⁻ν cross-sections and the associated charge asymmetry as a function of the absolute pseudorapidity of the decay muon. The data were collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the LHC and correspond to a total integrated luminosity of 20.2 fb⁻¹. The precision of the cross-section measurements varies between 0.8% and 1.5% as a function of the pseudorapidity, excluding the 1.9% uncertainty on the integrated luminosity. The charge asymmetry is measured with an uncertainty between 0.002 and 0.003. The results are compared with predictions based on next-to-next-to-leading-order calculations with various parton distribution functions and are sufficiently sensitive to discriminate between them. Comment: 38 pages in total, author list starting page 22, 5 figures, 4 tables, submitted to EPJC. All figures including auxiliary figures are available at https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/STDM-2017-13
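    The charge asymmetry measured here is, bin by bin in |η|, the normalized difference of the W⁺ and W⁻ differential cross-sections, A = (dσ⁺ − dσ⁻) / (dσ⁺ + dσ⁻). The sketch below computes it from hypothetical per-bin cross-section values; the numbers are illustrative only, not the ATLAS measurements.

```python
def charge_asymmetry(sigma_plus, sigma_minus):
    """A = (ds+ - ds-) / (ds+ + ds-), computed bin by bin in |eta|."""
    return [(p - m) / (p + m) for p, m in zip(sigma_plus, sigma_minus)]

# Hypothetical differential cross-sections (pb per unit |eta|) for
# W+ -> mu+ nu and W- -> mu- nu in five |eta| bins.
sigma_plus  = [610.0, 605.0, 590.0, 570.0, 540.0]
sigma_minus = [460.0, 450.0, 430.0, 400.0, 360.0]

for i, a in enumerate(charge_asymmetry(sigma_plus, sigma_minus)):
    print(f"|eta| bin {i}: A = {a:.3f}")
```

The asymmetry is positive (more W⁺ than W⁻ in pp collisions, reflecting the excess of u over d valence quarks) and, in this illustrative input, grows with |η|, which is the qualitative trend the parton distribution functions must reproduce.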