Implications from ASKAP Fast Radio Burst Statistics
Although there has recently been tremendous progress in studies of fast radio
bursts (FRBs), the nature of their progenitors remains a mystery. We study the
fluence and dispersion measure (DM) distributions of the ASKAP sample to better
understand their energetics and statistics. We first consider a simplified
model of a power-law volumetric rate per unit isotropic energy dN/dE ~
E^{-gamma} with a maximum energy E_max in a uniform Euclidean Universe. This
provides analytic insights into what can be learnt from these
distributions. We find that the observed cumulative DM distribution scales
as N(>DM) ~ DM^{5-2 gamma} (for gamma > 1) until a maximum value DM_max,
above which bursts
near E_max fall below the fluence threshold of a given telescope. Comparing
this model with the observed fluence and DM distributions, we find a reasonable
fit for gamma ~ 1.7 and E_max ~ 10^{33} erg/Hz. We then carry out a full
Bayesian analysis based on a Schechter rate function with a cosmological factor.
We find results roughly consistent with our analytical approach, although with
large errors on the inferred parameters due to the small sample size. The
power-law index and the maximum energy are constrained to be gamma = 1.6 +/-
0.3 and log(E_max) [erg/Hz] = 34.1 +1.1 -0.7 (68% confidence), respectively.
From the survey exposure time, we further infer a cumulative local volumetric
rate of log N(E > 10^{32} erg/Hz) [Gpc^{-3} yr^{-1}] = 2.6 +/- 0.4 (68%
confidence). The methods presented here will be useful for the much larger FRB
samples expected in the near future to study their distributions, energetics,
and rates.
Comment: ApJ accepted. Expanded beyond the scope of the earlier version
into 8 pages, 7 figures. Following the referees' comments, we included a
full Bayesian analysis based on a Schechter rate function with a
cosmological factor. The PDFs of the inferred model parameters are
presented via MCMC sampling in Figure 4 (the most important result). We
also discuss the completeness of the ASKAP sample in Section
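As an illustration of the simplified Euclidean model described in this
abstract, the Python sketch below draws a mock burst population with a
truncated power-law energy function dN/dE ~ E^{-gamma}, applies a fluence
threshold, and checks the cumulative DM distribution of the detected bursts
against the analytic exponent 5 - 2 gamma below DM_max. All parameter
values (gamma, E_min, E_max, the threshold, and the crude assumption that
DM is proportional to distance) are illustrative placeholders, not the
paper's fitted values or analysis pipeline.

    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative placeholders (not the paper's fitted values): power-law
    # index, energy range in erg/Hz, fluence threshold in matching arbitrary units.
    gamma, E_min, E_max = 1.7, 1e28, 1e33
    F_thresh = 1e31
    n_src = 2_000_000

    # Uniform Euclidean population: p(d) ~ d^2 up to d_max = 1 (arbitrary units).
    d = rng.random(n_src) ** (1.0 / 3.0)

    # Inverse-CDF sampling of E from dN/dE ~ E^(-gamma) on [E_min, E_max].
    u = rng.random(n_src)
    E = (E_min ** (1 - gamma)
         + u * (E_max ** (1 - gamma) - E_min ** (1 - gamma))) ** (1 / (1 - gamma))

    fluence = E / d ** 2   # inverse-square law, constants dropped
    DM = d                 # crude assumption: DM roughly proportional to distance

    # Cumulative count of detected bursts up to each DM value.
    DM_det = np.sort(DM[fluence > F_thresh])
    N_cum = np.arange(1, DM_det.size + 1)

    # Fit the log-log slope over an intermediate DM range (well below DM_max)
    # and compare with the analytic exponent 5 - 2*gamma quoted in the abstract.
    lo, hi = int(0.1 * DM_det.size), int(0.8 * DM_det.size)
    slope = np.polyfit(np.log(DM_det[lo:hi]), np.log(N_cum[lo:hi]), 1)[0]
    print(f"fitted slope ~ {slope:.2f} vs analytic 5 - 2*gamma = {5 - 2 * gamma:.2f}")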
On models of nonlinear evolution paths in adiabatic quantum algorithms
In this paper, we study two different nonlinear interpolating paths in
adiabatic evolution algorithms for solving a particular class of quantum search
problems where both the initial and final Hamiltonian are one-dimensional
projector Hamiltonians on the corresponding ground state. If the overlap
between the initial state and the final state of the quantum system is
nonzero, both of these models can provide a constant-time speedup over the
usual adiabatic algorithms at the cost of increasing some other
corresponding "complexity". But when the initial state has zero overlap
with the solution state of the problem, the second model leads to an
infinite time complexity for the algorithm regardless of the interpolating
functions applied, while the first one can still provide a constant running
time. However, inspired by a related reference, a variant of the first
model can be constructed that also fails when the overlap is exactly zero
if we try to remedy the "intrinsic" fault of the second model, namely an
increase in energy. Two concrete theorems are given to explain why neither
of these two models can improve on the usual adiabatic evolution algorithms
in the situation above. They indicate what should be kept in mind when
using certain nonlinear evolution paths in adiabatic quantum algorithms for
this special kind of problem.
Comment: 11 pages
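For context on the setting of this abstract, the sketch below numerically
scans the spectral gap of the standard linear interpolation
H(s) = (1 - s) H0 + s H1 between two one-dimensional projector Hamiltonians
and shows how the minimum gap, which controls the adiabatic running time,
is set by the overlap between the initial and solution states; the gap
closes entirely when the overlap is zero. This is only a baseline
illustration: the two specific nonlinear models analysed in the paper are
not reproduced here.

    import numpy as np

    def min_gap(psi0, m, n_steps=401):
        """Minimum gap between the two lowest eigenvalues of H(s) for s in [0, 1]."""
        N = len(psi0)
        H0 = np.eye(N) - np.outer(psi0, psi0)   # projector Hamiltonian, ground state |psi0>
        H1 = np.eye(N) - np.outer(m, m)         # projector Hamiltonian, ground state |m>
        gaps = []
        for s in np.linspace(0.0, 1.0, n_steps):
            evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
            gaps.append(evals[1] - evals[0])
        return min(gaps)

    N = 64
    m = np.zeros(N); m[0] = 1.0                 # solution (marked) basis state

    # Nonzero overlap: uniform superposition, <psi0|m> = 1/sqrt(N);
    # the minimum gap along the linear path equals this overlap.
    psi_uniform = np.full(N, 1 / np.sqrt(N))
    print("overlap = %.3f, min gap = %.3f" % (abs(psi_uniform @ m), min_gap(psi_uniform, m)))

    # Zero overlap: psi0 orthogonal to |m>; the gap closes at s = 1/2, so the
    # adiabatic runtime diverges -- the situation the abstract refers to.
    psi_orth = np.zeros(N); psi_orth[1:] = 1 / np.sqrt(N - 1)
    print("overlap = %.3f, min gap = %.3f" % (abs(psi_orth @ m), min_gap(psi_orth, m)))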
How to Host a Data Competition: Statistical Advice for Design and Analysis of a Data Competition
Data competitions rely on real-time leaderboards to rank competitor entries
and stimulate algorithm improvement. While such competitions have become quite
popular and prevalent, particularly in supervised learning formats, their
implementations by the host are highly variable. Without careful planning, a
supervised learning competition is vulnerable to overfitting, where the winning
solutions are so closely tuned to the particular set of provided data that they
cannot generalize to the underlying problem of interest to the host. This paper
outlines, based on our experience, some important considerations for
strategically designing relevant and informative data sets to maximize the
learning outcome from hosting a competition. It also describes a
post-competition
analysis that enables robust and efficient assessment of the strengths and
weaknesses of solutions from different competitors, as well as greater
understanding of the regions of the input space that are well-solved. The
post-competition analysis, which complements the leaderboard, uses exploratory
data analysis and generalized linear models (GLMs). The GLMs not only expand
the range of results we can explore, but also provide more detailed
analyses of individual sub-questions, including similarities and
differences between
algorithms across different types of scenarios, universally easy or hard
regions of the input space, and different learning objectives. When coupled
with a strategically planned data generation approach, the methods provide
richer and more informative summaries to enhance the interpretation of results
beyond just the rankings on the leaderboard. The methods are illustrated with a
recently completed competition to evaluate algorithms capable of detecting,
identifying, and locating radioactive materials in an urban environment.
Comment: 36 pages
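As a rough illustration of the GLM-based post-competition analysis
described in this abstract, the sketch below fits a logit-link binomial GLM
to simulated per-scenario outcomes. The data frame and its columns
(algorithm, scenario_type, shielding, correct) are hypothetical stand-ins;
the paper's actual covariates and model specification may differ.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1200
    df = pd.DataFrame({
        "algorithm": rng.choice(["team_A", "team_B", "team_C"], size=n),
        "scenario_type": rng.choice(["street", "intersection", "parking_lot"], size=n),
        "shielding": rng.random(n),   # toy continuous difficulty covariate
    })

    # Simulate per-scenario success with a known dependence on algorithm and
    # difficulty, so the GLM has structure to recover (toy data only).
    logit = (1.0 - 2.0 * df["shielding"]
             + df["algorithm"].map({"team_A": 0.5, "team_B": 0.0, "team_C": -0.3}))
    df["correct"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    # Logit-link binomial GLM: coefficients separate algorithm effects from
    # scenario-type effects and the difficulty covariate, complementing a raw
    # leaderboard ranking.
    fit = smf.glm("correct ~ C(algorithm) + C(scenario_type) + shielding",
                  data=df, family=sm.families.Binomial()).fit()
    print(fit.summary())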
