Modelling of flood hazard extent in data sparse areas: a case study of the Oti River basin, West Africa
Study region: Terrain and hydrological data are scarce in many African countries. The coarse spatial resolution of freely available Shuttle Radar Topographic Mission elevation data and the absence of flow gauges on flood-prone reaches, such as the Oti River studied here, make flood inundation modelling challenging in West Africa. Study focus: A flood modelling approach is developed here to simulate flood extent in data-scarce regions. The methodology is based on a calibrated, distributed hydrological model for the whole basin, which simulates the input discharges for a hydraulic model used to predict the flood extent for a 140 km reach of the Oti River. New hydrological insight for the region: Good hydrological model calibration (Nash–Sutcliffe coefficient: 0.87) and validation (Nash–Sutcliffe coefficient: 0.94) results demonstrate that even with coarse-scale (5 km) input data, it is possible to simulate the discharge along this region's rivers and, importantly, with a distributed model, to derive model flows at any ungauged location within the basin. With a lack of surveyed channel bathymetry, modelling the flood was only possible with a parametrized sub-grid hydraulic model. Flood model fit results relative to the observed 2007 flood extent and extensive sensitivity testing show that this fit (64%) is likely to be as good as is possible for this region, given the coarseness of the terrain digital elevation model.
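The calibration and validation scores quoted above are Nash–Sutcliffe efficiencies. As a reference for what those numbers measure, a minimal sketch of the statistic in Python (illustrative only; not the authors' code):

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        # NSE = 1 - SSE / variance of the observations:
        # 1.0 is a perfect fit; 0.0 means the model does no better
        # than always predicting the observed mean discharge.
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        sse = np.sum((observed - simulated) ** 2)
        variance = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - sse / variance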
The Invasive Species Forecasting System
The Invasive Species Forecasting System (ISFS) provides computational support for the generic work processes found in many regional-scale ecosystem modeling applications. Decision support tools built using ISFS allow a user to load point occurrence field sample data for a plant species of interest and quickly generate habitat suitability maps for geographic regions of management concern, such as a national park, monument, forest, or refuge. This type of decision product helps resource managers plan invasive species protection, monitoring, and control strategies for the lands they manage. Until now, scientists and resource managers have lacked the data-assembly and computing capabilities to produce these maps quickly and cost-efficiently. ISFS focuses on regional-scale habitat suitability modeling for invasive terrestrial plants. ISFS's component architecture emphasizes simplicity and adaptability. Its core services can be easily adapted to produce model-based decision support tools tailored to particular parks, monuments, forests, refuges, and related management units. ISFS can be used to build standalone run-time tools that require no connection to the Internet, as well as fully Internet-based decision support applications. ISFS provides the core data structures, operating system interfaces, network interfaces, and inter-component constraints comprising the canonical workflow for habitat suitability modeling. The predictors, analysis methods, and geographic extents involved in any particular model run are elements of the user space and arbitrarily configurable by the user. ISFS provides small, lightweight, readily hardened core components of general utility. These components can be adapted to unanticipated uses, are tailorable, and require at most a loosely coupled, nonproprietary connection to the Web. Users can invoke capabilities from a command line; programmers can integrate ISFS's core components into more complex systems and services. Taken together, these features enable a degree of decentralization and distributed ownership that have helped other types of scientific information services succeed in recent years.
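The canonical workflow ISFS wraps (occurrence points in, habitat suitability surface out) can be pictured with a short Python sketch. The data, predictors, and choice of model below are placeholders for illustration, not ISFS components:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Placeholder field samples: predictor values (e.g. elevation,
    # precipitation, soil class) at locations where the plant's
    # presence or absence was recorded.
    rng = np.random.default_rng(0)
    predictors = rng.random((200, 3))
    presence = (predictors[:, 0] > 0.5).astype(int)

    # The analysis method is a user-space choice; any classifier
    # that yields a suitability score would fit this slot.
    model = LogisticRegression().fit(predictors, presence)

    # Score a gridded predictor stack to produce the suitability map.
    grid = rng.random((100 * 100, 3))
    suitability_map = model.predict_proba(grid)[:, 1].reshape(100, 100)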
Monitoring the impacts of trade agreements on food environments
The liberalization of international trade and foreign direct investment through multilateral, regional and bilateral agreements has had profound implications for the structure and nature of food systems, and therefore for the availability, nutritional quality, accessibility, price and promotion of foods in different locations. Public health attention has only relatively recently turned to the links between trade and investment agreements, diets and health, and there is currently no systematic monitoring of this area. This paper reviews the available evidence on the links between trade agreements, food environments and diets from an obesity and non-communicable disease (NCD) perspective. Based on the key issues identified through the review, the paper outlines an approach for monitoring the potential impact of trade agreements on food environments and obesity/NCD risks. The proposed monitoring approach encompasses a set of guiding principles, recommended procedures for data collection and analysis, and quantifiable ‘minimal’, ‘expanded’ and ‘optimal’ measurement indicators to be tailored to national priorities, capacity and resources. Formal risk assessment processes of existing and evolving trade and investment agreements, which focus on their impacts on food environments, will help inform the development of healthy trade policy, strengthen domestic nutrition and health policy space and ultimately protect population nutrition.
The following organizations provided funding support for the travel of participants to Italy for this meeting and the preparation of background research papers: The Rockefeller Foundation, International Obesity Taskforce (IOTF), University of Auckland, Deakin University, The George Institute, University of Sydney, Queensland University of Technology, University of Oxford, University of Pennsylvania Perelman School of Medicine, World Cancer Research Fund International, University of Toronto, and The Australian National University. The Faculty of Health at Deakin University kindly supported the costs for open access availability of this paper, and the Australian National Health and Medical Research Council Centre for Research Excellence in Obesity Policy and Food Systems (APP1041020) supported the coordination and finalizing of INFORMAS manuscripts.
Geometrical Insights for Implicit Generative Modeling
Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion. A careful look at the geometries induced by these distances on the space of probability measures reveals interesting differences. In particular, we can establish surprising approximate global convergence guarantees for the 1-Wasserstein distance, even when the parametric generator has a nonconvex parametrization.
Comment: this version fixes a typo in a definition.
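Of the criteria listed, the Energy distance is the simplest to estimate directly from samples. A minimal sketch of the (biased, V-statistic) estimator, assuming only NumPy; this is not the paper's code:

    import numpy as np

    def pairwise_distances(a, b):
        # Euclidean distance between every row of a and every row of b.
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    def energy_distance(x, y):
        # E(X, Y) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||,
        # estimated by averaging over all sample pairs (V-statistic,
        # so the self-distance terms include the zero diagonal).
        return (2.0 * pairwise_distances(x, y).mean()
                - pairwise_distances(x, x).mean()
                - pairwise_distances(y, y).mean())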
Reliable microsatellite genotyping of the Eurasian badger (Meles meles) using faecal DNA
The potential link between badgers and bovine tuberculosis has made it vital to develop accurate techniques to census badgers. Here we investigate the potential of using genetic profiles obtained from faecal DNA as a basis for population size estimation. After trialling several methods we obtained a high amplification success rate (89%) by storing faeces in 70% ethanol and using the guanidine thiocyanate/silica method for extraction. Using 70% ethanol as a storage agent had the advantage of it being an antiseptic. In order to obtain reliable genotypes with fewer amplification reactions than the standard multiple-tubes approach, we devised a comparative approach in which genetic profiles were compared and replication directed at similar, but not identical, genotypes. This modified method achieved a reduction in polymerase chain reactions comparable with the maximum-likelihood model when just using reliability criteria, and was slightly better when using reliability criteria with the additional proviso that alleles must be observed twice to be considered reliable. Our comparative approach would be best suited for studies that include multiple faeces from each individual. We utilized our approach in a well-studied population of badgers from which individuals had been sampled and reliable genotypes obtained. In a study of 53 faeces sampled from three social groups over 10 days, we found that direct enumeration could not be used to estimate population size, but that the application of mark–recapture models has the potential to provide more accurate results.
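The core of the comparative approach (re-amplify only where two profiles are similar but not identical, since those pairs are the likeliest genotyping errors) can be sketched in Python. The data layout and mismatch threshold here are illustrative assumptions, not the published protocol:

    def locus_mismatches(profile_a, profile_b):
        # Count loci at which two multilocus genotypes differ.
        # Each profile is a list of (allele1, allele2) pairs, one per
        # microsatellite locus; None marks a failed amplification.
        mismatches = 0
        for locus_a, locus_b in zip(profile_a, profile_b):
            if locus_a is None or locus_b is None:
                continue
            if sorted(locus_a) != sorted(locus_b):
                mismatches += 1
        return mismatches

    def pairs_needing_replication(profiles, max_diff=2):
        # Similar-but-not-identical pairs: candidates for directed
        # replication instead of blanket multiple-tubes repeats.
        flagged = []
        for i in range(len(profiles)):
            for j in range(i + 1, len(profiles)):
                d = locus_mismatches(profiles[i], profiles[j])
                if 0 < d <= max_diff:
                    flagged.append((i, j))
        return flagged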
U(2) and Maximal Mixing of nu_{mu}
A U(2) flavor symmetry can successfully describe the charged fermion masses and mixings, and suppress SUSY FCNC processes, making it a viable candidate for a theory of flavor. We show that a direct application of this U(2) flavor symmetry automatically predicts a mixing of 45 degrees for nu_mu to nu_s, where nu_s is a light, right-handed state. The introduction of an additional flavor symmetry acting on the right-handed neutrinos makes the model phenomenologically viable, explaining the solar neutrino deficit as well as the atmospheric neutrino anomaly, while giving a potential hot dark matter candidate and retaining the theory's predictivity in the quark sector.
Comment: 20 pages, 1 figure.
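For concreteness, a 45-degree angle is maximal mixing in standard two-flavour notation (a textbook identity, not an excerpt from the paper):

    % two-flavour mixing of nu_mu with a light sterile state nu_s
    \begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix}
      = \begin{pmatrix} \cos\theta & \sin\theta \\
                        -\sin\theta & \cos\theta \end{pmatrix}
        \begin{pmatrix} \nu_\mu \\ \nu_s \end{pmatrix},
    \qquad
    \theta = 45^\circ \;\Longrightarrow\; \sin^2 2\theta = 1 .

With sin^2 2theta = 1 the nu_mu survival probability P = 1 - sin^2 2theta * sin^2(Delta m^2 L / 4E) can dip all the way to zero, which is why this prediction bears directly on the atmospheric neutrino anomaly.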
A proposed approach to monitor private-sector policies and practices related to food environments, obesity and non-communicable disease prevention
Private-sector organizations play a critical role in shaping the food environments of individuals and populations. However, there is currently very limited independent monitoring of private-sector actions related to food environments. This paper reviews previous efforts to monitor the private sector in this area, and outlines a proposed approach to monitor private-sector policies and practices related to food environments, and their influence on obesity and non-communicable disease (NCD) prevention. A step-wise approach to data collection is recommended, in which the first (‘minimal’) step is the collation of publicly available food and nutrition-related policies of selected private-sector organizations. The second (‘expanded’) step assesses the nutritional composition of each organization’s products, their promotions to children, their labelling practices, and the accessibility, availability and affordability of their products. The third (‘optimal’) step includes data on other commercial activities that may influence food environments, such as political lobbying and corporate philanthropy. The proposed approach will be further developed and piloted in countries of varying size and income levels. There is potential for this approach to enable national and international benchmarking of private-sector policies and practices, and to inform efforts to hold the private sector to account for their role in obesity and NCD prevention.
Higgs friends and counterfeits at hadron colliders
We consider the possibility of "Higgs counterfeits" - scalars that can be produced with cross sections comparable to the SM Higgs, and which decay with identical relative observable branching ratios, but which are nonetheless not responsible for electroweak symmetry breaking. We also consider a related scenario involving "Higgs friends," fields similarly produced through gg fusion processes, which would be discovered through diboson channels WW, ZZ, gamma gamma, or even gamma Z, potentially with larger cross sections times branching ratios than for the Higgs. The discovery of either a Higgs friend or a Higgs counterfeit, rather than directly pointing towards the origin of the weak scale, would indicate the presence of new colored fields necessary for the sizable production cross section (and possibly new colorless but electroweakly charged states as well, in the case of the diboson decays of a Higgs friend). These particles could easily be confused for an ordinary Higgs, perhaps with an additional generation to explain the different cross section, and we emphasize the importance of vector boson fusion as a channel to distinguish a Higgs counterfeit from a true Higgs. Such fields would naturally be expected in scenarios with "effective Z's," where heavy states charged under the SM produce effective charges for SM fields under a new gauge force. We discuss the prospects for discovery of Higgs counterfeits, Higgs friends, and associated charged fields at the LHC.
Comment: 27 pages, 5 figures. References added and typos fixed.
Stochastic partial differential equation based modelling of large space-time data sets
Increasingly larger data sets of processes in space and time ask for statistical models and methods that can cope with such data. We show that the solution of a stochastic advection-diffusion partial differential equation provides a flexible model class for spatio-temporal processes which is computationally feasible also for large data sets. The Gaussian process defined through the stochastic partial differential equation has in general a nonseparable covariance structure. Furthermore, its parameters can be physically interpreted as explicitly modeling phenomena such as transport and diffusion that occur in many natural processes in diverse fields ranging from environmental sciences to ecology. In order to obtain computationally efficient statistical algorithms we use spectral methods to solve the stochastic partial differential equation. This has the advantage that approximation errors do not accumulate over time, and that in the spectral space the computational cost grows linearly with the dimension, the total computational costs of Bayesian or frequentist inference being dominated by the fast Fourier transform. The proposed model is applied to postprocessing of precipitation forecasts from a numerical weather prediction model for northern Switzerland. In contrast to the raw forecasts from the numerical model, the postprocessed forecasts are calibrated and quantify prediction uncertainty. Moreover, they outperform the raw forecasts, in the sense that they have a lower mean absolute error.
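Why errors do not accumulate is easiest to see in a toy version: in Fourier space the advection-diffusion operator is diagonal, so each mode evolves independently under an exact propagator. A minimal 1-D sketch in Python, with illustrative parameters that are not taken from the paper:

    import numpy as np

    # Evolve the Fourier modes of du/dt = -v du/dx + kappa d2u/dx2 + noise.
    n, length = 256, 100.0                # grid points, domain length
    v, kappa, sigma = 1.0, 0.5, 0.1       # advection, diffusion, noise scale
    dt, steps = 0.1, 200

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    lam = -1j * v * k - kappa * k ** 2    # decoupled per-mode dynamics

    u_hat = np.fft.fft(np.random.randn(n))              # arbitrary initial field
    for _ in range(steps):
        # Exact deterministic propagator per mode, so no time-stepping
        # error accumulates; only the stochastic forcing is added.
        forcing = sigma * np.sqrt(dt) * np.fft.fft(np.random.randn(n))
        u_hat = np.exp(lam * dt) * u_hat + forcing

    u = np.fft.ifft(u_hat).real           # back to physical space via FFT

Each of the n modes costs O(1) per step, and the transforms to and from physical space are the FFTs that dominate the overall cost, mirroring the linear-in-dimension scaling claimed above.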
