The use of meteorological analogues to account for LAM QPF uncertainty
Flood predictions based on quantitative precipitation forecasts (QPFs) provided by deterministic models do not account for the uncertainty in the outcomes. A probabilistic approach to QPF, one which accounts for the variability of the phenomena and the uncertainty associated with a hydrological forecast, seems indispensable for obtaining the different future flow scenarios needed for improved flood management. A new approach based on a search for analogues, that is, past situations similar to the current one in terms of different meteorological fields over Western Europe and the East Atlantic, has been developed to produce an ensemble of hourly quantitative precipitation forecasts for the Reno river basin, a medium-sized catchment in northern Italy. A statistical analysis, performed over a hydro-meteorological archive of ECMWF analyses at 12:00 UTC for the autumn seasons from 1990 to 2000 and the corresponding precipitation measurements recorded by the raingauges spread over the catchment of interest, has shown that the combination of geopotential at 500 hPa and vertical velocity at 700 hPa provides the better estimate of precipitation. The analogue-based ensemble prediction should be considered not as an alternative but as complementary to the deterministic QPF provided by a numerical model, particularly in view of a joint use to improve real-time flood forecasting. In the present study, the analogue-based QPFs and the precipitation forecast provided by the Limited Area Model LAMBO have been used as different inputs to the distributed rainfall-runoff model TOPKAPI, thus generating, respectively, an ensemble of discharge forecasts, which provides a confidence interval for the predicted streamflow, and a deterministic discharge forecast taken as an error-affected "measurement" of the future flow, which does not convey any quantification of the forecast uncertainty. To make the hydrological prediction more informative, the ensemble spread could be regarded as a measure of the uncertainty of the deterministic forecast.
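As a rough illustration of how such an analogue search might be implemented, the sketch below (in Python, with hypothetical array names) ranks archived days by the distance between their standardised meteorological fields and the current analysis, and returns the precipitation observed on the closest days as the ensemble members. It is a minimal sketch under assumed data structures, not the procedure actually used in the study.

```python
# Minimal sketch of an analogue-based precipitation ensemble (illustrative only).
# Assumes `archive_fields` holds historical analyses (n_days, n_grid) for the
# chosen predictors (e.g. Z500 and w700 concatenated and standardised),
# `archive_precip` the corresponding hourly basin precipitation, and
# `current_field` today's analysis on the same grid. All names are hypothetical.
import numpy as np

def analogue_qpf_ensemble(current_field, archive_fields, archive_precip, n_analogues=10):
    """Return the precipitation of the n historical days most similar to today."""
    # Euclidean distance between today's field and every archived field
    distances = np.linalg.norm(archive_fields - current_field, axis=1)
    # Indices of the closest past situations (the analogues)
    best = np.argsort(distances)[:n_analogues]
    # Their observed precipitation forms the ensemble of QPF members
    return archive_precip[best]

# Each ensemble member can then be fed to a rainfall-runoff model to obtain an
# ensemble of discharge forecasts and, from its spread, an uncertainty band.
```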
Pressure-driven Demand and Leakage Simulation for Water Distribution Networks
Increasingly, water loss via leakage is acknowledged as one of the main challenges facing water distribution system operations. The consideration of water loss over time, as systems age, physical networks grow, and consumption patterns mature, should form an integral part of effective asset management, rendering a simulation model capable of quantifying pressure-driven leakage indispensable. To this end, a novel steady-state network simulation model that fully integrates pressure-driven demand and leakage at the pipe level into a classical hydraulic representation is developed and presented here. After a brief literature review on leakage modeling, the importance of a more realistic simulation model allowing for leakage analysis is demonstrated. The algorithm is then tested from a numerical standpoint and subjected to a convergence analysis. These analyses are performed on a case study involving two networks derived from real systems. Experimentally observed convergence/error statistics demonstrate the high robustness of the proposed pressure-driven demand and leakage simulation model.
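For illustration, pressure-driven pipe leakage is often modelled in the literature as an orifice-type power law of the average pipe pressure. The sketch below shows that generic form only; it should not be read as the exact formulation of the model presented here, and the function and parameter names are hypothetical.

```python
# Illustrative sketch of a pressure-driven pipe leakage term of the kind used in
# the literature (leakage flow proportional to pipe length and to average
# pressure head raised to an exponent, often around 1.18). Not necessarily the
# exact formulation of the paper; parameter names and values are hypothetical.
def pipe_leakage(p_start, p_end, length, c=1e-5, alpha=1.18):
    """Leakage outflow along one pipe as a function of its average pressure head (m)."""
    p_avg = 0.5 * (p_start + p_end)
    if p_avg <= 0.0:
        return 0.0            # no leakage when the pipe is depressurised
    return c * length * p_avg ** alpha

# In a pressure-driven solver this term is added to the mass balance of the
# pipe's two end nodes (half each) and re-evaluated at every iteration.
```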
Spectrally resolved observations of atmospheric emitted radiance in the H2O rotation band
This paper presents the project Earth Cooling by Water Vapor Radiation, an observational programme which aims at developing a database of spectrally resolved far-infrared observations in dry atmospheric conditions, in order to validate radiative transfer models and test the quality of water vapor continuum and line parameters. The project provides the very first set of far-infrared spectral downwelling radiance measurements in dry atmospheric conditions, complemented with Raman lidar-derived temperature and water vapor profiles.
Supernova dust for the extinction law in a young infrared galaxy at z = 1
We apply supernova (SN) extinction curves to reproduce the observed properties of SST J1604+4304, a young infrared (IR) galaxy at z = 1. The SN extinction curves used in this work were obtained from models of unmixed ejecta of Type II supernovae (SNe II) for the Salpeter initial mass function (IMF) with a mass range from 8 to 30 M_sun or 8 to 40 M_sun. The effect of dust distributions on the attenuation of starlight is investigated by performing chi-square fitting against various dust distributions: the commonly used uniform dust screen, the clumpy dust screen, and the internal dust geometry. We add to these geometries three scattering properties, namely no scattering, isotropic scattering, and forward-only scattering. Judging from the chi-square values, we find that the uniform screen models with any scattering property provide good approximations to the real dust geometry. Internal dust is inefficient at attenuating starlight and thus cannot be the dominant source of the extinction. We show that the SN extinction curves reproduce the data of SST J1604+4304 comparably to or better than the Calzetti extinction curve. The Milky Way extinction curve is not in satisfactory agreement with the data unless several dusty clumps are in the line of sight. This trend may be explained by the abundance of SN-origin dust in these galaxies: SN dust is the most abundant in the young IR galaxy at z = 1, abundant in local starbursts, and less abundant in the Galaxy. If dust in SST J1604+4304 is dominated by SN dust, the dust production rate is about 0.1 M_sun per SN.
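By way of illustration, the attenuation laws for the two simplest geometries mentioned above (a uniform foreground screen, I/I0 = exp(-tau), and internally mixed dust, I/I0 = (1 - exp(-tau))/tau) can be chi-square fitted over a grid of optical depths as sketched below. This is a generic sketch with placeholder data and an assumed normalised extinction curve, not the paper's fitting code.

```python
# Sketch of chi-square fitting simple dust-geometry attenuation laws to
# observed-to-intrinsic flux ratios, given an extinction curve tau(lambda)
# normalised to the V band. Illustrative only; inputs are placeholders.
import numpy as np

def attenuation(tau, geometry="screen"):
    """Fraction of starlight escaping for a given optical depth."""
    if geometry == "screen":          # uniform foreground screen, no scattering
        return np.exp(-tau)
    if geometry == "internal":        # stars and dust uniformly mixed
        tau = np.maximum(tau, 1e-12)  # avoid division by zero at tau = 0
        return -np.expm1(-tau) / tau
    raise ValueError(geometry)

def best_chi_square(obs_ratio, obs_err, ext_curve, geometry, tau_v_grid):
    """Minimum chi-square over a grid of V-band optical depths."""
    chi2 = [np.sum(((obs_ratio - attenuation(tau_v * ext_curve, geometry))
                    / obs_err) ** 2) for tau_v in tau_v_grid]
    return min(chi2)
```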
Pressure-dependent EPANET extension
In water distribution systems (WDSs), the available flow at a demand node depends on the pressure at that node. When a network is lacking in pressure, not all consumer demands will be met in full. In this context, the assumption that all demands are fully satisfied regardless of the pressure in the system becomes unreasonable and represents the main limitation of the conventional demand-driven analysis (DDA) approach to WDS modelling. A realistic depiction of the network performance can only be attained by considering demands to be pressure dependent. This paper presents an extension of the widely used DDA-based hydraulic simulator EPANET 2 that incorporates pressure-dependent demands, termed "EPANET-PDX" (pressure-dependent extension) herein. The use of a continuous nodal pressure-flow function coupled with a line search and backtracking procedure greatly enhances the algorithm's convergence rate and robustness. Simulations of real-life networks consisting of multiple sources, pipes, valves and pumps were successfully executed and the results are presented herein. Excellent modelling performance was achieved for analysing both normal and pressure-deficient conditions of the WDSs. Detailed computational efficiency results of EPANET-PDX with reference to EPANET 2 are included as well.
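As a point of reference, the sketch below shows a widely used nodal pressure-demand relationship of the Wagner type. EPANET-PDX itself employs a continuous pressure-flow function, so this piecewise form is only an illustration of the general idea, with hypothetical parameter values.

```python
# Sketch of a nodal pressure-demand relationship of the kind pressure-driven
# solvers use (Wagner-type here); treat this piecewise form as an illustration,
# not the exact continuous function implemented in EPANET-PDX.
def available_demand(head, required_demand, h_min=0.0, h_des=20.0):
    """Flow actually delivered at a node as a function of its pressure head (m)."""
    if head <= h_min:                       # no pressure, no delivery
        return 0.0
    if head >= h_des:                       # full pressure, full demand met
        return required_demand
    # partial delivery between the minimum and the desired head
    return required_demand * ((head - h_min) / (h_des - h_min)) ** 0.5
```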
Flood Proofing Low-Income Houses in India: an Application of Climate-Sensitive Probabilistic Benefit-Cost Analysis
Poor communities in high-risk areas are disproportionately affected by disasters compared to their wealthier counterparts; yet there are few analyses to guide public decisions on pro-poor investments in disaster risk reduction. This paper illustrates an application of benefit-cost analysis (BCA) for assessing investments in structural flood proofing of low-income, high-risk houses. The analysis takes account of climate change, which is increasingly viewed as an important consideration for assessing long-term investments. Specifically, the study focuses on the Rohini river basin of India and evaluates options for constructing non-permanent and permanent residential structures on a raised plinth to protect them against flooding. The estimates show a benefit-cost ratio greater than one for building new houses on a raised plinth, while the ratio is less than one for demolishing existing houses in order to rebuild on a raised plinth. Climate change is found to significantly affect the BCA results. From a policy perspective, the analysis demonstrates the potential economic returns of raised plinths for ‘building back better’ after disasters, or as a part of good housing design practice
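A minimal sketch of the benefit-cost ratio calculation underlying such an analysis is given below: avoided expected annual flood losses are discounted over the structure's lifetime and divided by the up-front cost of the intervention. All figures are placeholders rather than the study's estimates.

```python
# Minimal sketch of a benefit-cost ratio with discounting. Placeholder values
# only; not the study's data or its climate-sensitive probabilistic setup.
def benefit_cost_ratio(avoided_annual_loss, upfront_cost, lifetime_years=30,
                       discount_rate=0.05):
    """Ratio of discounted avoided losses to the investment cost."""
    discounted_benefit = sum(avoided_annual_loss / (1.0 + discount_rate) ** t
                             for t in range(1, lifetime_years + 1))
    return discounted_benefit / upfront_cost

# A climate-sensitive variant would let avoided_annual_loss change over time as
# flood frequencies shift, and would average the ratio over many scenarios.
```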
The origin of dust in galaxies revisited: the mechanism determining dust content
The origin of cosmic dust is a fundamental issue in planetary science. This paper revisits the origin of dust in galaxies, in particular in the Milky Way, by using a chemical evolution model of a galaxy composed of stars, interstellar medium, metals (elements heavier than helium), and dust. We start from a review of the time-evolution equations of the four components, and then present simple recipes for the stellar remnant mass and the yields of metals and dust based on models of stellar nucleosynthesis and dust formation. After calibrating some model parameters with data from the solar neighborhood, we confirm a shortage of the stellar dust production rate relative to the dust destruction rate by supernovae, if the destruction efficiency suggested by theoretical works is correct. If dust mass growth by material accretion in molecular clouds is active, the observed dust amount in the solar neighborhood is reproduced. We present a clear analytic explanation of the mechanism determining the dust content in galaxies after the activation of accretion growth: a balance between accretion growth and supernova destruction. Thus, the dust content is independent of the uncertainty in the stellar dust yield after the growth activation. The timing of the activation is determined by a critical metal mass fraction which depends on the growth and destruction efficiencies. The solar system formation seems to have occurred well after the activation, and plenty of dust would have existed in the proto-solar nebula.
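To make the mechanism concrete, the toy sketch below advances a dust mass with the three competing terms the abstract describes: stellar production, supernova destruction, and accretion growth that switches on above a critical metallicity and saturates as gas-phase metals are exhausted. Coefficients and timescales are placeholders, not the paper's calibrated values.

```python
# Toy sketch of a dust mass evolution step with the terms described above:
# stellar production minus supernova destruction plus accretion growth in
# molecular clouds. All coefficients are placeholders, chosen only to
# illustrate the competition between the last two terms.
def evolve_dust(m_dust, m_metals, sfr, metallicity, dt,
                dust_yield=0.01, tau_destroy=0.5, tau_accrete=0.1, z_crit=0.002):
    """Advance the dust mass by one Euler step of size dt (e.g. in Gyr)."""
    production = dust_yield * sfr                     # dust from stellar ejecta
    destruction = m_dust / tau_destroy                # destruction by SN shocks
    growth = 0.0
    if metallicity > z_crit:                          # accretion growth activates
        # saturates as the available gas-phase metals are locked into dust
        growth = (m_dust / tau_accrete) * max(0.0, 1.0 - m_dust / m_metals)
    return m_dust + dt * (production - destruction + growth)

# Once accretion growth activates, the dust content is set mainly by the
# interplay of the growth and destruction terms, mirroring the balance the
# abstract describes, rather than by the uncertain stellar dust yield.
```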
On the source of the late-time infrared luminosity of SN 1998S and other type II supernovae
We present late-time near-infrared (NIR) and optical observations of the Type IIn SN 1998S. The NIR photometry spans 333-1242 days after explosion, while the NIR and optical spectra cover 333-1191 days and 305-1093 days respectively. The NIR photometry extends to the M'-band (4.7 μm), making SN 1998S only the second supernova ever detected at such a long IR wavelength. The shape and evolution of the H alpha and He I 1.083 μm line profiles indicate a powerful interaction with a progenitor wind, as well as providing evidence of dust condensation within the ejecta. The latest optical spectrum suggests that the wind had been flowing for at least 430 years. The intensity and rise of the HK continuum towards longer wavelengths, together with the relatively bright L' and M' magnitudes, show that the NIR emission was due to hot dust, either newly formed in the ejecta and/or pre-existing in the progenitor circumstellar medium (CSM). [ABRIDGED] Possible origins for the NIR emission are considered. Significant radioactive heating of ejecta dust is ruled out, as is shock/X-ray-precursor heating of CSM dust. More plausible sources are (a) an IR echo from CSM dust driven by the UV/optical peak luminosity, and (b) emission from newly condensed dust which formed within a cool, dense shell produced by the ejecta shock/CSM interaction. We argue that the evidence favours the condensing-dust hypothesis, although an IR echo is not ruled out. Within the condensing-dust scenario, the IR luminosity indicates the presence of at least 0.001 solar masses of dust in the ejecta, and probably considerably more. Finally, we show that the late-time intrinsic (K-L') evolution of Type II supernovae may provide a useful tool for determining the presence or absence of a massive CSM around their progenitor stars.
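For context, dust masses quoted from an IR luminosity are typically obtained with the standard optically thin relation M_d = F_nu D^2 / (kappa_nu B_nu(T_d)). The sketch below evaluates that relation with placeholder opacity, temperature, and flux values; it is not the paper's actual calculation.

```python
# Rough sketch of the standard optically thin dust mass estimate,
# M_d = F_nu * D^2 / (kappa_nu * B_nu(T_d)), in cgs units.
# Opacity, temperature, and flux values supplied by the caller are placeholders.
import numpy as np

H = 6.626e-27    # Planck constant, erg s
C = 2.998e10     # speed of light, cm/s
K_B = 1.381e-16  # Boltzmann constant, erg/K

def planck_nu(nu, temp):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * temp))

def dust_mass(flux_nu, distance_cm, nu, temp_dust, kappa_nu):
    """Dust mass (g) implied by an optically thin flux F_nu (erg s^-1 cm^-2 Hz^-1)."""
    return flux_nu * distance_cm**2 / (kappa_nu * planck_nu(nu, temp_dust))
```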
Optical and infrared observations of the Type IIP SN2002hh from day 3 to 397
We present optical and infrared (IR) observations of the Type IIP SN2002hh from 3 to 397 days after explosion. The optical spectroscopic (4-397 d) and photometric (3-278 d) data are complemented by spectroscopic (137-381 d) and photometric (137-314 d) data acquired at IR wavelengths. This is the first time L-band spectra have been successfully obtained for a supernova at a distance beyond the Local Group. The VRI light curves in the first 40 days reveal SN2002hh to be a SN IIP (plateau), the most common of all core-collapse supernovae. SN2002hh is one of the most highly extinguished supernovae ever investigated. To provide a good match between its early-time spectrum and a coeval spectrum of the Type IIP SN1999em, as well as to maintain consistency with the K I interstellar absorption, we invoke a two-component extinction model. One component is due to the combined effect of the interstellar medium of our Milky Way Galaxy and of the SN host galaxy, while the other component is due to a "dust pocket" where the grains have a mean size smaller than in the interstellar medium. The early-time optical light curves of SNe 1999em and 2002hh are generally well matched, as are the radioactive tails of these two SNe and of SN1987A. The late-time similarity of the SN2002hh optical light curves to those of SN1987A, together with measurements of the optical/IR luminosity and the [Fe II] 1.257 μm emission, indicates that 0.07 ± 0.02 Msun of 56Ni was ejected by SN2002hh. [... ABRIDGED...] From the [O I] 6300, 6364 A doublet luminosity we infer a 16-18 Msun main-sequence progenitor star. The progenitor of SN2002hh was probably a red supergiant with a substantial, dusty wind.
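As an illustration of how a two-component extinction correction of this kind is applied, the sketch below combines a Milky-Way-like component with a separate "dust pocket" component, each parameterised by its own extinction curve k(lambda) and colour excess E(B-V). The curves and values are placeholders, not those derived in the paper.

```python
# Sketch of a two-component extinction correction: one Milky-Way-like ISM
# component plus a second "dust pocket" component with its own curve.
# k_* are extinction-curve values A_lambda / E(B-V); all inputs are placeholders.
def dereddened_flux(observed_flux, k_ism, ebmv_ism, k_pocket, ebmv_pocket):
    """Remove two multiplicative extinction components from an observed flux."""
    a_total = k_ism * ebmv_ism + k_pocket * ebmv_pocket   # total A_lambda, in mag
    return observed_flux * 10.0 ** (0.4 * a_total)
```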
