18,755 research outputs found
Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile
Satellite-based precipitation estimates (SPEs) are a promising alternative source of precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimates are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009–2014. The historical data (satellite and gauge) for 2009–2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data is used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of the adjusted satellite rainfall show high consistency with the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
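The core quantile-mapping step of such a bias adjustment can be sketched as follows. This is a minimal illustration of nonparametric quantile mapping, not the paper's QM-GW implementation; the function and variable names are invented for the sketch, and the Gaussian weighting interpolation step is omitted.

```python
import numpy as np

def quantile_map(sat_daily, sat_hist, gauge_hist):
    """Nonparametric quantile mapping (illustrative sketch).

    Each satellite value is placed at its empirical quantile within the
    historical satellite record, and the gauge CDF is inverted at that
    same quantile. Applied per calibration region in a QM-GW-like setup.
    """
    # Empirical CDF position of each satellite value in the satellite record
    quantiles = np.searchsorted(np.sort(sat_hist), sat_daily,
                                side="right") / len(sat_hist)
    # Invert the gauge empirical CDF at those quantiles
    return np.quantile(gauge_hist, np.clip(quantiles, 0.0, 1.0))
```

In a full framework, one such mapping would be calibrated per 1°×1° box from the 2009–2013 record and then applied to new satellite fields without further gauge input.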
Merging high-resolution satellite-based precipitation fields and point-scale rain gauge measurements: A case study in Chile
With high spatial-temporal resolution, Satellite-based Precipitation Estimates (SPE) are becoming a valuable alternative source of rainfall data for hydrologic and climatic studies, but are subject to considerable uncertainty. Effective merging of SPE and ground-based gauge measurements may help to improve precipitation estimation in both resolution and accuracy. In this study, a framework for merging satellite and gauge precipitation data is developed in three steps: SPE bias adjustment, gauge observation gridding, and data merging, with the objective of producing high-quality precipitation estimates. An inverse-root-mean-square-error weighting approach is proposed to combine the satellite and gauge estimates, which are first adjusted and gridded, respectively. The model is applied and tested with the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) estimates (daily, 0.04° × 0.04°) over Chile, for the six-year period 2009-2014. Daily observations from about 90% of the collected gauges over the study area are used for model calibration; the rest of the gauge data are regarded as ground “truth” for validation. Evaluation results indicate that the model is highly effective in producing high-resolution, high-accuracy precipitation data. Compared to the reference data, the merged (daily) data show correlation coefficients, probabilities of detection, root-mean-square errors, and absolute mean biases consistently improved over the original PERSIANN-CCS estimates. Cross-validation shows that the framework is effective in providing high-quality estimates even over ungauged satellite pixels. The same method can be applied globally and is expected to produce precipitation products in near real time by integrating gauge observations with satellite estimates.
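The inverse-RMSE weighting idea in the merging step can be illustrated with a minimal sketch. The exact weight form used in the paper may differ (e.g. in how the per-pixel error statistics are estimated); the names here are assumptions for illustration only.

```python
import numpy as np

def merge_fields(sat, gauge_grid, rmse_sat, rmse_gauge):
    """Combine a bias-adjusted satellite field and a gridded gauge field
    with inverse-RMSE weights (illustrative sketch): the estimate with
    the smaller historical error receives the larger weight."""
    w_sat = 1.0 / rmse_sat
    w_gauge = 1.0 / rmse_gauge
    return (w_sat * sat + w_gauge * gauge_grid) / (w_sat + w_gauge)
```

When the two error levels are equal the merge reduces to a simple average; as one RMSE grows, the merged field smoothly approaches the other source.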
Quantitative Simulation of the Superconducting Proximity Effect
A numerical method is developed to calculate the transition temperature of
double or multi-layers consisting of films of super- and normal conductors. The
approach is based on a dynamic interpretation of Gorkov's linear gap equation
and is very flexible. The mean free path of the different metals, transmission
through the interface, ratio of specular reflection to diffusive scattering at
the surfaces, and fraction of diffusive scattering at the interface can be
included. Furthermore it is possible to vary the mean free path and the BCS
interaction NV in the vicinity of the interface. The numerical results show
that the normalized initial slope of an SN double layer is independent of
almost all film parameters except the ratio of the density of states. There are
only very few experimental investigations of this initial slope and they
consist of Pb/Nn double layers (Nn stands for a normal metal). Surprisingly the
coefficient of the initial slope in these experiments is of the order or less
than 2 while the (weak coupling) theory predicts a value of about 4.5. This
discrepancy has not been recognized in the past. The author suggests that it is
due to strong coupling behavior of Pb in the double layers. The strong coupling
gap equation is evaluated in the thin film limit and yields the value of 1.6
for the coefficient. This agrees much better with the few experimental results
that are available.
PACS: 74.45.+r, 74.62.-c, 74.20.F
Continuous-Variable Spatial Entanglement for Bright Optical Beams
A light beam is said to be position squeezed if its position can be
determined to an accuracy beyond the standard quantum limit. We identify the
position and momentum observables for bright optical beams and show that
position and momentum entanglement can be generated by interfering two
position, or momentum, squeezed beams on a beam splitter. The position and
momentum measurements of these beams can be performed using a homodyne detector
with local oscillator of an appropriate transverse beam profile. We compare
this form of spatial entanglement with split detection-based spatial
entanglement.
Comment: 7 pages, 3 figures, submitted to PR
Fisher Renormalization for Logarithmic Corrections
For continuous phase transitions characterized by power-law divergences,
Fisher renormalization prescribes how to obtain the critical exponents for a
system under constraint from their ideal counterparts. In statistical
mechanics, such ideal behaviour at phase transitions is frequently modified by
multiplicative logarithmic corrections. Here, Fisher renormalization for the
exponents of these logarithms is developed in a general manner. As for the
leading exponents, Fisher renormalization at the logarithmic level is seen to
be involutory and the renormalized exponents obey the same scaling relations as
their ideal analogs. The scheme is tested in lattice animals and the Yang-Lee
problem at their upper critical dimensions, where predictions for logarithmic
corrections are made.
Comment: 10 pages, no figures. Version 2 has added references
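For context, the classical Fisher renormalization relations for the leading power-law exponents, which the logarithmic-level scheme above generalizes, take the standard form (valid when the ideal specific-heat exponent satisfies $\alpha > 0$):

```latex
\alpha_F = \frac{-\alpha}{1-\alpha}, \qquad
\beta_F  = \frac{\beta}{1-\alpha}, \qquad
\gamma_F = \frac{\gamma}{1-\alpha}, \qquad
\nu_F    = \frac{\nu}{1-\alpha}.
```

Applying the transformation twice recovers the ideal exponents (e.g. $\alpha_{FF} = -\alpha_F/(1-\alpha_F) = \alpha$), which is the involutory property the abstract reports extending to the logarithmic corrections.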
An experimental study on Γ(2) modular symmetry in the quantum Hall system with a small spin-splitting
Magnetic-field-induced phase transitions were studied with a two-dimensional
electron AlGaAs/GaAs system. The temperature-driven flow diagram shows the
features of the Γ(2) modular symmetry, which include distorted
flow lines and a shifted critical point. The deviation of the critical
conductivities is attributed to a small but resolved spin splitting, which
reduces the symmetry in Landau quantization. [B. P. Dolan, Phys. Rev. B 62,
10278.] Universal scaling is found under the reduction of the modular symmetry.
It is also shown that the Hall conductivity could still be governed by the
scaling law when the semicircle law and the scaling on the longitudinal
conductivity are invalid.
Comment: The revised manuscript has been published in J. Phys.: Condens. Matter
Severe discrepancies between experiment and theory in the superconducting proximity effect
The superconducting proximity effect is investigated for SN double layers in
a regime where the resulting transition temperature T_{c} does not depend on
the mean free paths of the films and, within limits, not on the transparency of
the interface. This regime includes the thin film limit and the normalized
initial slope S_{sn}= (d_{s}/T_{s})|dT_{c}/dd_{n}|. The experimental results
for T_{c} are compared with a numerical simulation which was recently developed
in our group. The results for the SN double layers can be divided into three
groups: (i) when N = Cu, Ag, Au, Mg, a disagreement between experiment and
theory by a factor of the order of three is observed; (ii) when N = Cd, Zn, Al,
the disagreement between experiment and theory is reduced to a factor of about
1.5; (iii) when N = In, Sn, a reasonably good agreement between experiment and
theory is observed.
Excitons in quasi-one dimensional organics: Strong correlation approximation
An exciton theory for quasi-one dimensional organic materials is developed in
the framework of the Su-Schrieffer-Heeger Hamiltonian augmented by short range
extended Hubbard interactions. Within a strong electron-electron correlation
approximation, the exciton properties are extensively studied. Using scattering
theory, we analytically obtain the exciton energy and wavefunction and derive a
criterion for the existence of an exciton. We also systematically
investigate the effect of impurities on the coherent motion of an exciton. The
coherence is measured by a suitably defined electron-hole correlation function.
It is shown that, for impurities with an on-site potential, a crossover
behavior will occur if the impurity strength is comparable to the bandwidth of
the exciton, corresponding to exciton localization. For a charged impurity with
a spatially extended potential, in addition to localization the exciton will
dissociate into an uncorrelated electron-hole pair when the impurity is
sufficiently strong to overcome the Coulomb interaction which binds the
electron-hole pair. Interchain coupling effects are also discussed by
considering two polymer chains coupled through nearest-neighbor interchain
hopping and interchain Coulomb interaction. Within the
matrix scattering formalism, for every center-of-mass momentum, we find two
poles determined only by the interchain couplings, which correspond to the
interchain excitons. Finally, the exciton state is used to study the charge
transfer from a polymer chain to an adjacent dopant molecule.
Comment: 24 pages, 23 eps figures, pdf file of the paper available
The X-ray luminosity function of Active Galactic Nuclei in the redshift interval z=3-5
We combine deep X-ray survey data from the Chandra observatory and the
wide-area/shallow XMM-XXL field to estimate the AGN X-ray luminosity function
in the redshift range z=3-5. The sample consists of nearly 340 sources with
either photometric (212) or spectroscopic (128) redshift in the above range.
The combination of deep and shallow survey fields provides a luminosity
baseline of three orders of magnitude, Lx(2-10keV)~1e43-1e46erg/s at z>3. We
follow a Bayesian approach to determine the binned AGN space density and
explore their evolution in a model-independent way. Our methodology accounts
for Poisson errors in the determination of X-ray fluxes and uncertainties in
photometric redshift estimates. We demonstrate that the latter is essential for
unbiased measurement of space densities. We find that the AGN X-ray luminosity
function evolves strongly between the redshift intervals z=3-4 and z=4-5. There
is also suggestive evidence that the amplitude of this evolution is luminosity
dependent. The space density of AGN with Lx<1e45erg/s drops by a factor of 5
between the redshift intervals above, while the evolution of brighter AGN
appears to be milder. Comparison of our X-ray luminosity function with that of
UV/optical selected QSOs at similar redshifts shows broad agreement at bright
luminosities, Lx>1e45erg/s. The faint-end slope of UV/optical luminosity
functions however, is steeper than for X-ray selected AGN. This implies that
the type-I AGN fraction increases with decreasing luminosity at z>3, opposite
to trends established at lower redshift. We also assess the significance of AGN
in keeping the hydrogen ionised at high redshift. Our X-ray luminosity function
yields ionising photon rate densities that are insufficient to keep the
Universe ionised at redshift z>4. A source of uncertainty in this calculation
is the escape fraction of UV photons for X-ray selected AGN.
Comment: MNRAS accepted
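A simple classical counterpart to the binned space-density estimate described above is the 1/Vmax estimator with Poisson errors. The sketch below is illustrative only: the paper's Bayesian method additionally propagates Poisson flux errors and photometric-redshift uncertainties, which this toy estimator does not.

```python
import numpy as np

def binned_space_density(log_lx, vmax, bin_edges):
    """Classical 1/Vmax binned luminosity-function estimator (sketch).

    log_lx    : log10 luminosities of the sources
    vmax      : maximum comoving volume in which each source is detectable
                (assumed precomputed; units e.g. Mpc^3)
    bin_edges : edges of the log-luminosity bins (dex)

    Returns (phi, err): space density per dex and its Poisson error.
    """
    phi, err = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (log_lx >= lo) & (log_lx < hi)
        n = int(sel.sum())
        contrib = np.sum(1.0 / vmax[sel]) / (hi - lo)  # Mpc^-3 dex^-1
        phi.append(contrib)
        err.append(contrib / np.sqrt(n) if n > 0 else 0.0)
    return np.array(phi), np.array(err)
```

At z > 3, where samples are small and photometric redshifts uncertain, the fractional Poisson error 1/sqrt(N) per bin is large, which is why a Bayesian treatment of the uncertainties matters for unbiased space densities.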
A review of Monte Carlo simulations of polymers with PERM
In this review, we describe applications of the pruned-enriched Rosenbluth
method (PERM), a sequential Monte Carlo algorithm with resampling, to various
problems in polymer physics. PERM produces samples according to any given
prescribed weight distribution, by growing configurations step by step with
controlled bias, and correcting "bad" configurations by "population control".
The latter is implemented, in contrast to other population based algorithms
like e.g. genetic algorithms, by depth-first recursion which avoids storing all
members of the population at the same time in computer memory. The problems we
discuss all concern single polymers (with one exception), but under various
conditions: Homopolymers in good solvents and at the θ point, semi-stiff
polymers, polymers in confining geometries, stretched polymers undergoing a
forced globule-linear transition, star polymers, bottle brushes, lattice
animals as a model for randomly branched polymers, DNA melting, and finally --
as the only system at low temperatures -- lattice heteropolymers as simple models
for protein folding. PERM is for some of these problems the method of choice,
but it can also fail. We discuss how to recognize when a result is reliable,
and we also discuss some types of bias that can be crucial in guiding the
growth in the right directions.
Comment: 29 pages, 26 figures, to be published in J. Stat. Phys. (2011)
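The growth-with-population-control idea can be made concrete with a minimal PERM sketch for 2D self-avoiding walks. Configurations are grown step by step with Rosenbluth weights; high-weight configurations are cloned (enrichment) and low-weight ones are killed stochastically with weight doubling for survivors (pruning), all via depth-first recursion so the population is never stored at once. The thresholds and constants below are illustrative, not the tuned choices of the review.

```python
import random

def perm_saw(max_len, tours, seed=0):
    """Minimal PERM sketch: estimate the number of self-avoiding walks
    of each length on the square lattice."""
    random.seed(seed)
    weight_sum = [0.0] * (max_len + 1)  # accumulated weight per length
    count = [0] * (max_len + 1)         # configurations reaching each length

    def grow(walk, occupied, w, n):
        weight_sum[n] += w
        count[n] += 1
        if n == max_len:
            return
        x, y = walk[-1]
        free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in occupied]
        if not free:
            return                       # trapped walk: attrition
        w_new = w * len(free)            # Rosenbluth weight update
        avg = weight_sum[n + 1] / count[n + 1] if count[n + 1] else w_new
        if w_new > 2.0 * avg:            # enrichment: split into two clones
            for _ in range(2):
                nxt = random.choice(free)
                grow(walk + [nxt], occupied | {nxt}, w_new / 2.0, n + 1)
        elif w_new < 0.5 * avg and random.random() < 0.5:
            return                       # pruned with probability 1/2
        else:
            if w_new < 0.5 * avg:
                w_new *= 2.0             # survivor carries doubled weight
            nxt = random.choice(free)
            grow(walk + [nxt], occupied | {nxt}, w_new, n + 1)

    for _ in range(tours):
        grow([(0, 0)], {(0, 0)}, 1.0, 0)
    # unbiased estimate of the number of SAWs of length n
    return [ws / tours for ws in weight_sum]
```

Both pruning (kill with probability 1/2, double the survivor's weight) and enrichment (split into k clones carrying weight w/k) leave the weighted estimates unbiased, which is the property that makes PERM usable for free-energy-like quantities.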
