Polling bias and undecided voter allocations: US Presidential elections, 2004 - 2016
Accounting for undecided and uncertain voters is a challenging issue for
predicting election results from public opinion polls. Undecided voters typify
the uncertainty of swing voters in polls but are often ignored or allocated to
each candidate in a simple, deterministic manner. Historically this may have
been adequate because the undecided were comparatively small enough to assume
that they do not affect the relative proportions of the decided voters.
However, in the presence of high numbers of undecided voters, these static
rules may in fact bias election predictions from election poll authors and
meta-poll analysts. In this paper, we examine the effect of undecided voters in
the 2016 US presidential election compared to the previous three presidential
elections. We show there was a relatively high number of undecided voters over the
campaign and on election day, and that the allocation of undecided voters in
this election was not consistent with two-party proportional (or even)
allocations. We find evidence that static allocation regimes are inadequate for
election prediction models and that probabilistic allocations may be superior.
We also estimate the bias attributable to polling agencies, often referred to
as "house effects".Comment: 32 pages, 9 figures, 6 table
Amplitude and phase effects in Josephson qubits driven by a biharmonic electromagnetic field
We investigate the amplitude and phase effects of qubit dynamics and
excited-state population under the influence of a biharmonic control field. It
is demonstrated that the biharmonic driving field can have a significant effect
on the behavior of quasi-energy level crossing as well as on multiphoton
transitions. Also, the interference pattern for the populations of qubit
excited states is sensitive to the signal parameters. We discuss the
possibility of using these effects for manipulating qubit states and
calibrating nanosecond pulses.
Comment: 10 pages, 8 figures
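For readers unfamiliar with the term, a biharmonic drive is simply a fundamental tone plus its second harmonic with adjustable amplitudes and a relative phase; the sketch below is a generic illustration with placeholder parameters, not the pulse shapes used in the paper.

    import numpy as np

    def biharmonic_field(t, a1, a2, omega, phi):
        """Biharmonic control signal: fundamental plus second harmonic."""
        return a1 * np.cos(omega * t) + a2 * np.cos(2.0 * omega * t + phi)

    # Placeholder parameters: a 1 GHz fundamental sampled over a 5 ns window,
    # the kind of nanosecond pulse whose calibration the paper discusses.
    t = np.linspace(0.0, 5e-9, 1000)
    signal = biharmonic_field(t, a1=1.0, a2=0.5,
                              omega=2.0 * np.pi * 1e9, phi=np.pi / 2)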
Strongly Coupled Quark Gluon Plasma (SCQGP)
We propose that the reason for the non-ideal behavior seen in lattice
simulation of quark gluon plasma (QGP) and relativistic heavy ion collisions
(URHICs) experiments is that the QGP near T_c and above is strongly coupled
plasma (SCP), i.e., strongly coupled quark gluon plasma (SCQGP). It is
remarkable that the widely used equation of state (EoS) of SCP in QED (quantum
electrodynamics) very nicely fits lattice results on all QGP systems, with
proper modifications to include color degrees of freedom and the running coupling
constant. Results on pressure in pure gauge, 2-flavors and 3-flavors QGP can
all be explained by treating QGP as SCQGP, as demonstrated here. Energy
density and speed of sound are also presented for all three systems. We further
extend the model to systems with finite quark mass, and a reasonably good fit to
lattice results is obtained for (2+1)-flavors and 4-flavors QGP. Hence it is
the first unified model, namely SCQGP, to explain the non-ideal QGP seen in
lattice simulations with just two system-dependent parameters.
Comment: Revised with corrections and new results, LaTeX file (11 pages), postscript file of 7 figures
Not all surveillance data are created equal—A multi‐method dynamic occupancy approach to determine rabies elimination from wildlife
1. A necessary component of elimination programmes for wildlife disease is effective surveillance. The ability to distinguish between disease freedom and non‐detection can mean the difference between a successful elimination campaign and new epizootics. Understanding the contribution of different surveillance methods helps to optimize and better allocate effort and develop more effective surveillance programmes.
2. We evaluated the probability of rabies virus elimination (disease freedom) in an enzootic area with active management using dynamic occupancy modelling of 10 years of raccoon rabies virus (RABV) surveillance data (2006–2015) collected from three states in the eastern United States. We estimated detection probability of RABV cases for each surveillance method (e.g. strange acting reports, roadkill, surveillance‐trapped animals, nuisance animals and public health samples) used by the USDA National Rabies Management Program.
3. Strange acting, found dead and public health animals were the most likely to detect RABV when it was present, and generally detectability was higher in fall–winter compared to spring–summer. Found dead animals in fall–winter had the highest detection at 0.33 (95% CI: 0.20, 0.48). Nuisance animals had the lowest detection probabilities (~0.02).
4. Areas with oral rabies vaccination (ORV) management had reduced occurrence probability compared to enzootic areas without ORV management. RABV occurrence was positively associated with deciduous and mixed forests and medium to high developed areas, which are also areas with higher raccoon (Procyon lotor) densities. By combining occupancy and detection estimates we can create a probability of elimination surface that can be updated seasonally to provide guidance on areas managed for wildlife disease.
5. Synthesis and applications. Wildlife disease surveillance often comprises a combination of targeted and convenience‐based methods. Using a multi‐method analytical approach allows us to compare the relative strengths of these methods, providing guidance on resource allocation for surveillance actions. Applying this multi‐method approach in conjunction with dynamic occupancy analyses better informs management decisions through an understanding of the ecological drivers of disease occurrence.
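Point 4 above combines occupancy and detection estimates into a probability of elimination; a minimal Bayesian sketch of that combination is given below, using the detection values quoted in point 3 and a made-up prior occurrence probability, not the paper's full dynamic occupancy model.

    def prob_free(psi, methods):
        """P(area is RABV-free | no detections).

        psi     -- prior probability that RABV occurs in the area (assumed value)
        methods -- list of (detection probability, number of negative samples)
        """
        p_missed = 1.0
        for p, n in methods:
            p_missed *= (1.0 - p) ** n
        # Bayes: compare "truly free" against "present but missed every time".
        return (1.0 - psi) / ((1.0 - psi) + psi * p_missed)

    # Illustrative comparison: ten negative found-dead samples in fall-winter
    # (p ~ 0.33) versus ten negative nuisance-animal samples (p ~ 0.02).
    print(prob_free(0.4, [(0.33, 10)]))   # strong evidence of freedom
    print(prob_free(0.4, [(0.02, 10)]))   # barely moves from the prior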
Evidence for the disintegration of KIC 12557548 b
Context. The Kepler object KIC 12557548 b is peculiar. It exhibits
transit-like features every 15.7 hours that vary in depth between 0.2% and
1.2%. Rappaport et al. (2012) explain the observations in terms of a
disintegrating, rocky planet that has a trailing cloud of dust created and
constantly replenished by thermal surface erosion. The variability of the
transit depth is then a consequence of changes in the cloud optical depth.
Aims. We aim to validate the disintegrating-planet scenario by modeling the
detailed shape of the observed light curve, and thereby constrain the cloud
particle properties to better understand the nature of this intriguing object.
Methods. We analysed the six publicly-available quarters of raw Kepler data,
phase-folded the light curve and fitted it to a model for the trailing dust
cloud. Constraints on the particle properties were investigated with a
light-scattering code. Results. The light curve exhibits clear signatures of
light scattering and absorption by dust, including a brightening in flux just
before ingress correlated with the transit depth and explained by forward
scattering, and an asymmetry in the transit light curve shape, which is easily
reproduced by an exponentially decaying distribution of optically thin dust,
with a typical grain size of 0.1 micron. Conclusions. Our quantitative analysis
supports the hypothesis that the transit signal of KIC 12557548 b is due to a
variable cloud of dust, most likely originating from a disintegrating object.
Comment: 5 pages, 4 figures. Accepted for publication in Astronomy and Astrophysics
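Phase-folding the Kepler photometry on the 15.7-hour period is the central data-reduction step described above; a minimal numpy sketch is shown below (the array names, binning scheme and use of a median are illustrative choices, not the authors' pipeline).

    import numpy as np

    def phase_fold(time_days, flux, period_days, n_bins=200):
        """Fold a light curve on a fixed period and median-bin it in phase."""
        phase = (time_days / period_days) % 1.0
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        idx = np.digitize(phase, edges) - 1
        folded = np.array([np.nanmedian(flux[idx == i]) for i in range(n_bins)])
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, folded

    period_days = 15.7 / 24.0   # the 15.7-hour period of KIC 12557548 b
    # centers, folded = phase_fold(time, flux, period_days)  # time, flux from Kepler data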
An Analysis of the Chemical Composition of the Atmosphere of Venus Aboard the Venera-12 AMS Using a Gas Chromatograph
Eight analyses of the atmosphere of Venus were made, beginning at an altitude of 42 km and continuing down to the surface of the planet. The following were detected in the atmosphere of Venus: nitrogen in concentrations of 2.5 ± 0.5 volumetric %, argon in concentrations of (4 ± 2) × 10^-3 volumetric %, CO at (2.8 ± 1.4) × 10^-3 volumetric %, and SO2 in concentrations of (1.3 ± 0.6) × 10^-2 volumetric %. Upper limits were estimated for the content of oxygen and water, equal to 2 × 10^-3 and 10^-2 volumetric %, respectively.
Gravitational-Wave Astronomy with Inspiral Signals of Spinning Compact-Object Binaries
Inspiral signals from binary compact objects (black holes and neutron stars)
are primary targets of the ongoing searches by ground-based gravitational-wave
interferometers (LIGO, Virgo, GEO-600 and TAMA-300). We present
parameter-estimation simulations for inspirals of black-hole--neutron-star
binaries using Markov-chain Monte-Carlo methods. For the first time, we have
both estimated the parameters of a binary inspiral source with a spinning
component and determined the accuracy of the parameter estimation, for
simulated observations with ground-based gravitational-wave detectors. We
demonstrate that we can obtain the distance, sky position, and binary
orientation at a higher accuracy than previously suggested in the literature.
For an observation of an inspiral with sufficient spin and two or three
detectors we find an accuracy in the determination of the sky position of
typically a few tens of square degrees.
Comment: v2: major conceptual changes, 4 pages, 1 figure, 1 table, submitted to ApJ
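The Markov-chain Monte-Carlo machinery behind these parameter-estimation simulations can be illustrated with a toy one-parameter Metropolis-Hastings sampler; the Gaussian "distance" problem below is a stand-in, not the high-dimensional spinning-inspiral likelihood used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy problem: recover a single "distance" parameter from noisy data.
    true_d, sigma = 100.0, 10.0
    data = rng.normal(true_d, sigma, size=50)

    def log_likelihood(d):
        return -0.5 * np.sum((data - d) ** 2) / sigma**2

    # Metropolis-Hastings random walk over the parameter.
    chain, d = [], 50.0
    for _ in range(20_000):
        proposal = d + rng.normal(0.0, 2.0)
        if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(d):
            d = proposal
        chain.append(d)

    samples = np.array(chain[5_000:])      # discard burn-in
    print(samples.mean(), samples.std())   # posterior mean and width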
THERMAL RADIATION FROM MAGNETIZED NEUTRON STARS: A look at the Surface of a Neutron Star.
Surface thermal emission has been detected by ROSAT from four nearby young
neutron stars. Assuming black body emission, the significant pulsations of the
observed light curves can be interpreted as due to large surface temperature
differences produced by the effect of the crustal magnetic field on the flow of
heat from the hot interior toward the cooler surface. However, the energy
dependence of the modulation observed in Geminga is incompatible with blackbody
emission: this effect will give us a strong constraint on models of the neutron
star surface.
Comment: 10 pages, tar-compressed and uuencoded postscript file. Talk given at the `Jubilee Gamow Seminar', St. Petersburg, Sept. 1994
The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting
The numerous recent breakthroughs in machine learning (ML) make it imperative to
carefully ponder how the scientific community can benefit from a technology
that, although not necessarily new, is today living its golden age. This Grand
Challenge review paper is focused on the present and future role of machine
learning in space weather. The purpose is twofold. On one hand, we will discuss
previous works that use ML for space weather forecasting, focusing in
particular on the few areas that have seen most activity: the forecasting of
geomagnetic indices, of relativistic electrons at geosynchronous orbits, of
solar flare occurrence, of coronal mass ejection propagation time, and of
solar wind speed. On the other hand, this paper serves as a gentle introduction
to the field of machine learning tailored to the space weather community and as
a pointer to a number of open challenges that we believe the community should
undertake in the next decade. The recurring themes throughout the review are
the need to shift our forecasting paradigm to a probabilistic approach focused
on the reliable assessment of uncertainties, and the combination of
physics-based and machine learning approaches, known as gray-box.
Comment: under review
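The probabilistic-forecasting paradigm advocated here amounts to predicting a distribution rather than a point value and scoring the claimed uncertainty; the sketch below uses a Gaussian negative log-likelihood on made-up numbers purely as an illustration of that idea.

    import numpy as np

    def gaussian_nll(y_obs, mu, sigma):
        """Score a forecast that reports both a mean and an uncertainty."""
        return np.mean(0.5 * np.log(2.0 * np.pi * sigma**2)
                       + 0.5 * ((y_obs - mu) / sigma) ** 2)

    # Two forecasts with the same mean but different claimed uncertainty: when
    # the observation lands far from the mean, the over-confident forecast is
    # penalised more heavily than the better-calibrated one.
    y_obs = 5.0
    print(gaussian_nll(y_obs, mu=2.0, sigma=1.0))   # over-confident, worse score
    print(gaussian_nll(y_obs, mu=2.0, sigma=3.0))   # honest spread, better score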
Infinite factorization of multiple non-parametric views
Combined analysis of multiple data sources has increasing application interest, in particular for distinguishing shared and source-specific aspects. We extend this rationale of classical canonical correlation analysis into a flexible, generative and non-parametric clustering setting, by introducing a novel non-parametric hierarchical mixture model. The lower level of the model describes each source with a flexible non-parametric mixture, and the top level combines these to describe commonalities of the sources. The lower-level clusters arise from hierarchical Dirichlet Processes, inducing an infinite-dimensional contingency table between the views. The commonalities between the sources are modeled by an infinite block model of the contingency table, interpretable as non-negative factorization of infinite matrices, or as a prior for infinite contingency tables. With Gaussian mixture components plugged in for continuous measurements, the model is applied to two views of genes, mRNA expression and abundance of the produced proteins, to expose groups of genes that are co-regulated in either or both of the views. Cluster analysis of co-expression is a standard simple way of screening for co-regulation, and the two-view analysis extends the approach to distinguishing between pre- and post-translational regulation.
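The hierarchical Dirichlet Process at the lower level of this model builds on the Dirichlet process's stick-breaking construction; a minimal, truncated stick-breaking sketch is shown below (the concentration parameter and truncation level are arbitrary, and this is only the building block, not the full two-view model).

    import numpy as np

    rng = np.random.default_rng(2)

    def stick_breaking(alpha, truncation=50):
        """Draw mixture weights from a truncated Dirichlet process prior."""
        betas = rng.beta(1.0, alpha, size=truncation)
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
        return betas * remaining

    weights = stick_breaking(alpha=2.0)
    print(weights[:5], weights.sum())   # a few dominant clusters; sum close to 1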
