Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy
The analysis of gravitational wave data involves many model selection
problems. The most important example is the detection problem: deciding
whether the data are consistent with instrument noise alone, or with
instrument noise plus a gravitational wave signal. The analysis of data from
ground based gravitational wave detectors is mostly conducted using classical
statistics, with methods such as the Neyman-Pearson criterion used for model selection.
Future space based detectors, such as the Laser Interferometer Space
Antenna (LISA), are expected to produce rich data streams containing the
signals from many millions of sources. Determining the number of sources that
are resolvable, and the most appropriate description of each source poses a
challenging model selection problem that may best be addressed in a Bayesian
framework. An important class of LISA sources is the millions of low-mass
binary systems within our own galaxy, tens of thousands of which will be
detectable. Not only is the number of sources unknown, but so is the number
of parameters required to model the waveforms. For example, a significant
subset of the resolvable galactic binaries will exhibit orbital frequency
evolution, while a smaller number will have measurable eccentricity. In the
Bayesian approach to model selection one needs to compute the Bayes factor
between competing models. Here we explore various methods for computing Bayes
factors in the context of determining which galactic binaries have measurable
frequency evolution. The methods explored include a Reversible Jump Markov Chain
Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes
Information Criterion (BIC), and the Laplace approximation to the model
evidence. We find good agreement between all of the approaches.
Comment: 11 pages, 6 figures
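A toy sketch can make two of these estimators concrete. The linear frequency-drift model, prior width, and noise level below are illustrative assumptions, not the paper's setup; the nested parameter is the frequency derivative, which the simpler model fixes to zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy data: noisy measurements of a slowly drifting frequency.
n, f0_true, fdot_true, sigma = 200, 2.0e-3, 1.0e-8, 5.0e-4
t = np.linspace(0.0, 1.0e4, n)
y = f0_true + fdot_true * t + sigma * rng.normal(size=n)

def loglike(f0, fdot):
    return stats.norm.logpdf(y, f0 + fdot * t, sigma).sum()

# BIC estimate: BF_10 ~ exp(-(BIC_1 - BIC_0)/2); model 1 adds fdot.
A = np.vstack([np.ones(n), t]).T
f0_hat, fdot_hat = np.linalg.lstsq(A, y, rcond=None)[0]
bic0 = 1 * np.log(n) - 2 * loglike(y.mean(), 0.0)
bic1 = 2 * np.log(n) - 2 * loglike(f0_hat, fdot_hat)
print("BIC Bayes factor (evolving vs static):", np.exp(-0.5 * (bic1 - bic0)))

# Savage-Dickey ratio for nested models: posterior density at fdot = 0
# divided by the prior density there. The Gaussian posterior is analytic
# for this linear model; with MCMC output one would estimate it instead.
prior_halfwidth = 1.0e-6                 # assumed flat prior on fdot
cov = sigma**2 * np.linalg.inv(A.T @ A)
post = stats.norm(fdot_hat, np.sqrt(cov[1, 1]))
print("Savage-Dickey BF (static vs evolving):",
      post.pdf(0.0) / (1.0 / (2.0 * prior_halfwidth)))
```

The Savage-Dickey ratio needs only the marginal posterior of the extra parameter at its nested value, which is why it combines naturally with MCMC output.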
Nonparametric Reconstruction of the Dark Energy Equation of State from Diverse Data Sets
The cause of the accelerated expansion of the Universe poses one of the most
fundamental questions in physics today. In the absence of a compelling theory
to explain the observations, a first task is to develop a robust phenomenology.
If the acceleration is driven by some form of dark energy, then the
phenomenology is determined by the dark energy equation of state w. A major aim
of ongoing and upcoming cosmological surveys is to measure w and its time
dependence at high accuracy. Since w(z) is not directly accessible to
measurement, powerful reconstruction methods are needed to extract it reliably
from observations. We have recently introduced a new reconstruction method for
w(z) based on Gaussian process modeling. This method can capture nontrivial
time-dependences in w(z) and, most importantly, it yields controlled and
unbiased error estimates. In this paper we extend the method to include a
diverse set of measurements: baryon acoustic oscillations, cosmic microwave
background measurements, and supernova data. We analyze currently available
data sets and present the resulting constraints on w(z), finding that current
observations are in very good agreement with a cosmological constant. In
addition we explore how well our method captures nontrivial behavior of w(z) by
analyzing simulated data assuming high-quality observations from future
surveys. We find that the baryon acoustic oscillation measurements by
themselves already lead to remarkably good reconstruction results and that the
combination of different high-quality probes allows us to reconstruct w(z) very
reliably with small error bounds.
Comment: 14 pages, 9 figures, 3 tables
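The way independent probes combine is simple in outline: their log-likelihoods add. The sketch below is schematic, with made-up one-number summaries standing in for the BAO, CMB, and supernova observables and a constant-w model in place of the full reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probe "summaries": each depends on w with a Gaussian error.
def predict_sn(w):  return -0.5 * w        # stand-in supernova summary
def predict_bao(w): return 1.0 + 0.2 * w   # stand-in BAO distance ratio
def predict_cmb(w): return 0.3 - 0.1 * w   # stand-in CMB shift parameter

truth = -1.0
probes = [(predict_sn,  predict_sn(truth)  + 0.020 * rng.normal(), 0.020),
          (predict_bao, predict_bao(truth) + 0.010 * rng.normal(), 0.010),
          (predict_cmb, predict_cmb(truth) + 0.005 * rng.normal(), 0.005)]

def joint_loglike(w):
    # Independent probes: chi^2 terms simply sum.
    return sum(-0.5 * ((f(w) - d) / s) ** 2 for f, d, s in probes)

ws = np.linspace(-1.5, -0.5, 1001)
ll = np.array([joint_loglike(w) for w in ws])
print("combined maximum-likelihood w:", ws[ll.argmax()])
```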
Nonparametric Reconstruction of the Dark Energy Equation of State
A basic aim of ongoing and upcoming cosmological surveys is to unravel the
mystery of dark energy. In the absence of a compelling theory to test, a
natural approach is to better characterize the properties of dark energy in
search of clues that can lead to a more fundamental understanding. One way to
view this characterization is the improved determination of the
redshift-dependence of the dark energy equation of state parameter, w(z). To do
this requires a robust and bias-free method for reconstructing w(z) from data
that does not rely on restrictive expansion schemes or assumed functional forms
for w(z). We present a new nonparametric reconstruction method that solves for
w(z) as a statistical inverse problem, based on a Gaussian Process
representation. This method reliably captures nontrivial behavior of w(z) and
provides controlled error bounds. We demonstrate the power of the method on
different sets of simulated supernova data; the approach can be easily extended
to include diverse cosmological probes.
Comment: 16 pages, 11 figures, accepted for publication in Physical Review
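A minimal Gaussian-process regression sketch conveys the flavor of such a reconstruction. The squared-exponential kernel, hyperparameter values, prior mean of w = -1, and synthetic data below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel(za, zb, amp=0.5, ell=0.7):
    # Squared-exponential covariance with assumed hyperparameters.
    return amp**2 * np.exp(-0.5 * (za[:, None] - zb[None, :])**2 / ell**2)

# Synthetic data: noisy samples of a slowly varying w(z).
z_obs = np.linspace(0.05, 1.5, 30)
w_true = -1.0 + 0.2 * np.tanh(2.0 * (z_obs - 0.5))
sigma = 0.1
w_obs = w_true + sigma * rng.normal(size=z_obs.size)

# GP posterior mean and variance on a fine grid (prior mean w = -1).
z_grid = np.linspace(0.0, 1.6, 200)
K = kernel(z_obs, z_obs) + sigma**2 * np.eye(z_obs.size)
Ks = kernel(z_grid, z_obs)
w_mean = -1.0 + Ks @ np.linalg.solve(K, w_obs + 1.0)
w_var = np.diag(kernel(z_grid, z_grid) - Ks @ np.linalg.solve(K, Ks.T))

i = np.argmin(np.abs(z_grid - 0.5))
print(f"reconstructed w(0.5) = {w_mean[i]:.3f} +/- {np.sqrt(w_var[i]):.3f}")
```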
A Solution to the Galactic Foreground Problem for LISA
Low frequency gravitational wave detectors, such as the Laser Interferometer
Space Antenna (LISA), will have to contend with large foregrounds produced by
millions of compact binaries in our galaxy. While these galactic
signals are interesting in their own right, the unresolved component can
obscure other sources. The science yield for the LISA mission can be improved
if the brighter and more isolated foreground sources can be identified and
regressed from the data. Since the signals overlap with one another we are
faced with a "cocktail party" problem of picking out individual conversations
in a crowded room. Here we present and implement an end-to-end solution to the
galactic foreground problem that is able to resolve tens of thousands of
sources from across the LISA band. Our algorithm employs a variant of the
Markov Chain Monte Carlo (MCMC) method, which we call the Blocked Annealed
Metropolis-Hastings (BAM) algorithm. Following a description of the algorithm
and its implementation, we give several examples ranging from searches for a
single source to searches for hundreds of overlapping sources. Our examples
include data sets from the first round of Mock LISA Data Challenges.
Comment: 19 pages, 27 figures
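The blocked-annealing idea can be outlined generically: flatten the posterior with an inverse temperature that rises toward one, and propose joint updates for parameters that are strongly correlated. The sketch below illustrates that outline under assumed settings; it is not the BAM implementation itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def annealed_block_mh(logpost, x0, blocks, betas, steps_per_beta=500, scale=0.1):
    """blocks: index arrays updated jointly; betas: rising ladder ending at 1."""
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    for beta in betas:                 # annealing: posterior^beta, beta -> 1
        for _ in range(steps_per_beta):
            for idx in blocks:         # joint move for one block of parameters
                prop = x.copy()
                prop[idx] += scale * rng.normal(size=len(idx))
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < beta * (lp_prop - lp):
                    x, lp = prop, lp_prop
    return x

# Toy posterior: two 2-parameter "sources"; each source forms one block,
# mimicking the joint updating of overlapping signals.
logpost = lambda x: -0.5 * np.sum((x - np.array([1.0, 2.0, -1.0, 0.0]))**2)
blocks = [np.array([0, 1]), np.array([2, 3])]
print(annealed_block_mh(logpost, np.zeros(4), blocks, np.linspace(0.1, 1.0, 5)))
```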
A Bayesian Approach to the Detection Problem in Gravitational Wave Astronomy
The analysis of data from gravitational wave detectors can be divided into
three phases: search, characterization, and evaluation. The evaluation of the
detection - determining whether a candidate event is astrophysical in origin or
some artifact created by instrument noise - is a crucial step in the analysis.
The on-going analyses of data from ground based detectors employ a frequentist
approach to the detection problem. A detection statistic is chosen, for which
background levels and detection efficiencies are estimated from Monte Carlo
studies. This approach frames the detection problem in terms of an infinite
collection of trials, with the actual measurement corresponding to some
realization of this hypothetical set. Here we explore an alternative, Bayesian
approach to the detection problem, one that considers prior information and the
actual data in hand. Our particular focus is on the computational techniques
used to implement the Bayesian analysis. We find that the Parallel Tempered
Markov Chain Monte Carlo (PTMCMC) algorithm is able to address all three phases
of the analysis in a coherent framework. The signals are found by locating the
posterior modes, the model parameters are characterized by mapping out the
joint posterior distribution, and finally, the model evidence is computed by
thermodynamic integration. As a demonstration, we consider the detection
problem of selecting between models describing the data as instrument noise
alone, or as instrument noise plus the signal from a single compact galactic binary. The
evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found
to be in close agreement with those computed using a Reversible Jump Markov
Chain Monte Carlo algorithm.
Comment: 19 pages, 12 figures, revised to address referee's comments
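A compact sketch of parallel tempering plus thermodynamic integration on a toy two-parameter problem; the temperature ladder, proposal scale, and likelihood are assumptions for illustration, and burn-in is neglected for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

def loglike(x):  return -0.5 * np.sum(x**2)                  # toy likelihood
def logprior(x): return 0.0 if np.all(np.abs(x) < 5) else -np.inf

betas = np.linspace(0.0, 1.0, 16)**3     # ladder, dense near beta = 1
chains = [rng.uniform(-5, 5, size=2) for _ in betas]
mean_ll = np.zeros(len(betas))
nsteps = 20000

for step in range(nsteps):
    for i, beta in enumerate(betas):     # tempered posterior: prior * L^beta
        prop = chains[i] + 0.5 * rng.normal(size=2)
        dlp = (logprior(prop) - logprior(chains[i])
               + beta * (loglike(prop) - loglike(chains[i])))
        if np.log(rng.uniform()) < dlp:
            chains[i] = prop
        mean_ll[i] += loglike(chains[i])
    if step % 10 == 0:                   # swap neighbouring temperatures
        j = rng.integers(0, len(betas) - 1)
        d = (betas[j + 1] - betas[j]) * (loglike(chains[j]) - loglike(chains[j + 1]))
        if np.log(rng.uniform()) < d:
            chains[j], chains[j + 1] = chains[j + 1], chains[j]

# Thermodynamic integration: ln Z = integral over beta of <ln L>_beta.
print("ln evidence estimate:", np.trapz(mean_ll / nsteps, betas))
```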
A Bayesian approach to analyzing phenotype microarray data enables estimation of microbial growth parameters
Biolog phenotype microarrays enable simultaneous, high throughput analysis of cell cultures in different environments. The output is high-density time-course data showing redox curves (approximating growth) for each experimental condition. The software provided with the Omnilog incubator/reader summarizes each time-course as a single datum, so most of the information is not used. However, the time courses can be extremely varied and often contain detailed qualitative (shape of curve) and quantitative (values of parameters) information. We present a novel, Bayesian approach to estimating parameters from Phenotype Microarray data, fitting growth models using Markov Chain Monte Carlo methods to enable high throughput estimation of important information, including length of lag phase, maximal "growth" rate and maximum output. We find that the Baranyi model for microbial growth is useful for fitting Biolog data. Moreover, we introduce a new growth model that allows for diauxic growth with a lag phase, which is particularly useful where Phenotype Microarrays have been applied to cells grown in complex mixtures of substrates, for example in industrial or biotechnological applications, such as worts in brewing. Our approach provides more useful information from Biolog data than existing, competing methods, and allows for valuable comparisons between data series and across different models.
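A minimal sketch of this kind of fit, using the Baranyi-Roberts curve and a plain random-walk Metropolis sampler on synthetic data; the parameter names (y0, ymax, mu, lam), priors, and noise level are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def baranyi(t, y0, ymax, mu, lam):
    """Baranyi-Roberts curve: lag lam, max rate mu, initial/max outputs y0/ymax."""
    h0 = mu * lam
    A = t + np.log(np.exp(-mu * t) + np.exp(-h0) - np.exp(-mu * t - h0)) / mu
    return y0 + mu * A - np.log(1.0 + (np.exp(mu * A) - 1.0) / np.exp(ymax - y0))

# Synthetic time course standing in for one plate well.
t = np.linspace(0.0, 48.0, 97)
y = baranyi(t, 0.1, 1.2, 0.25, 6.0) + 0.03 * rng.normal(size=t.size)

def logpost(p):
    y0, ymax, mu, lam = p
    if not (0.0 < mu < 2.0 and 0.0 <= lam < 24.0 and ymax > y0):  # flat priors
        return -np.inf
    return -0.5 * np.sum((y - baranyi(t, y0, ymax, mu, lam))**2 / 0.03**2)

x = np.array([0.2, 1.0, 0.3, 4.0])
lp = logpost(x)
samples = []
for _ in range(20000):
    prop = x + rng.normal(size=4) * [0.01, 0.01, 0.01, 0.2]
    lpp = logpost(prop)
    if np.log(rng.uniform()) < lpp - lp:
        x, lp = prop, lpp
    samples.append(x.copy())

post = np.array(samples[5000:])          # discard burn-in
print("posterior means (y0, ymax, mu, lam):", post.mean(axis=0))
```

A diauxic extension would replace baranyi() with a two-phase curve while the sampler stays unchanged, which is one attraction of the MCMC formulation.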
Extracting galactic binary signals from the first round of Mock LISA Data Challenges
We report on the performance of an end-to-end Bayesian analysis pipeline for
detecting and characterizing galactic binary signals in simulated LISA data.
Our principal analysis tool is the Blocked Annealed Metropolis-Hastings (BAM)
algorithm, which has been optimized to search for tens of thousands of
overlapping signals across the LISA band. The BAM algorithm employs Bayesian
model selection to determine the number of resolvable sources, and provides
posterior distribution functions for all the model parameters. The BAM
algorithm performed almost flawlessly on all the Round 1 Mock LISA Data
Challenge data sets, including those with many highly overlapping sources. The
only misses were later traced to a coding error that affected high frequency
sources. In addition to the BAM algorithm we also successfully tested a Genetic
Algorithm (GA), but only on data sets with isolated signals as the GA has yet
to be optimized to handle large numbers of overlapping signals.
Comment: 13 pages, 4 figures, submitted to Proceedings of GWDAW-11 (Berlin, Dec. '06)
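For contrast with the BAM pipeline, a genetic algorithm search can be sketched in a few lines; the fitness surface, population size, and mutation scales below are illustrative stand-ins, not the GA that was tested.

```python
import numpy as np

rng = np.random.default_rng(6)

def fitness(pop):                        # toy log-likelihood over (f, amp)
    f, amp = pop.T
    return -((f - 3.0e-3)**2 / 1.0e-8 + (amp - 2.0)**2)

# Population of candidate source parameters: frequency and amplitude.
pop = np.column_stack([rng.uniform(1e-3, 5e-3, 100), rng.uniform(0.0, 5.0, 100)])
for gen in range(200):
    parents = pop[np.argsort(fitness(pop))[-50:]]      # selection: best half
    pairs = parents[rng.integers(0, 50, (100, 2))]     # pick two parents each
    mask = rng.random((100, 2)) < 0.5                  # crossover, gene by gene
    pop = np.where(mask, pairs[:, 0], pairs[:, 1])
    pop += rng.normal(size=pop.shape) * [1e-5, 0.05]   # mutation
print("best individual (f, amp):", pop[np.argmax(fitness(pop))])
```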
LISA Data Analysis using MCMC methods
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously
detect many thousands of low frequency gravitational wave signals. This
presents a data analysis challenge that is very different to the one
encountered in ground based gravitational wave astronomy. LISA data analysis
requires the identification of individual signals from a data stream containing
an unknown number of overlapping signals. Because of the signal overlaps, a
global fit to all the signals has to be performed in order to avoid biasing the
solution. However, performing such a global fit requires the exploration of an
enormous parameter space with a dimension upwards of 50,000. Markov Chain Monte
Carlo (MCMC) methods offer a very promising solution to the LISA data analysis
problem. MCMC algorithms are able to efficiently explore large parameter
spaces, simultaneously providing parameter estimates, error analyses and even
model selection. Here we present the first application of MCMC methods to
simulated LISA data and demonstrate the great potential of the MCMC approach.
Our implementation uses a generalized F-statistic to evaluate the likelihoods,
and simulated annealing to speed convergence of the Markov chains. As a final
step we super-cool the chains to extract maximum likelihood estimates, and
estimates of the Bayes factors for competing models. We find that the MCMC
approach is able to correctly identify the number of signals present, extract
the source parameters, and return error estimates consistent with Fisher
information matrix predictions.
Comment: 14 pages, 7 figures
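The anneal-then-cool schedule can be illustrated generically: inverse temperatures below one flatten the likelihood during the search phase, while values above one ("super-cooling") freeze the chain onto the maximum-likelihood point. The stand-in likelihood below is an assumption; the generalized F-statistic itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(7)

def loglike(x):                           # stand-in for an F-statistic surface
    return -0.5 * ((x[0] - 1.0)**2 / 0.01 + (x[1] + 2.0)**2 / 0.04)

schedule = np.concatenate([np.linspace(0.01, 1.0, 2000),    # annealed search
                           np.linspace(1.0, 100.0, 2000)])  # super-cooling
x = rng.uniform(-5.0, 5.0, 2)
ll = loglike(x)
for beta in schedule:
    prop = x + rng.normal(size=2) / np.sqrt(beta)  # shrink steps as chain cools
    llp = loglike(prop)
    if np.log(rng.uniform()) < beta * (llp - ll):
        x, ll = prop, llp
print("maximum-likelihood estimate:", x)
```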
On the flexibility of the design of Multiple Try Metropolis schemes
The Multiple Try Metropolis (MTM) method is a generalization of the classical
Metropolis-Hastings algorithm in which the next state of the chain is chosen
among a set of samples, according to normalized weights. In the literature,
several extensions have been proposed. In this work, we highlight the
flexibility available in designing MTM-type methods while still fulfilling the
detailed balance condition, discuss several possibilities, and present various
numerical results.
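One common variant uses a symmetric Gaussian proposal with the weight choice w(y) = pi(y); the sketch below applies it to a toy bimodal target with assumed tuning values.

```python
import numpy as np

rng = np.random.default_rng(8)

def logpi(x):                              # toy bimodal target density
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

def mtm_step(x, k=5, scale=2.0):
    ys = x + scale * rng.normal(size=k)            # k trial points from T(x, .)
    wy = np.exp(logpi(ys))
    y = ys[rng.choice(k, p=wy / wy.sum())]         # select with prob ~ weight
    xs = y + scale * rng.normal(size=k - 1)        # reference set from T(y, .)
    wx = np.exp(logpi(xs)).sum() + np.exp(logpi(x))
    # Generalized acceptance ratio preserving detailed balance.
    return y if rng.uniform() < min(1.0, wy.sum() / wx) else x

x, chain = 0.0, []
for _ in range(20000):
    x = mtm_step(x)
    chain.append(x)
print("sample mean (target is symmetric about 0):", np.mean(chain))
```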
From cosmic deceleration to acceleration: new constraints from SN Ia and BAO/CMB
We use type Ia supernovae (SN Ia) data in combination with recent baryonic
acoustic oscillations (BAO) and cosmic microwave background (CMB) observations
to constrain a kink-like parametrization of the deceleration parameter q(z).
This parametrization can be written in terms of the initial and present values
of the deceleration parameter, the redshift of the cosmic transition from
deceleration to acceleration, and the redshift width of that transition.
Assuming a flat space geometry and adopting a likelihood approach to deal with
the SN Ia data, we obtain constraints on these parameters at the 68% confidence
level (C.L.) when we combine BAO/CMB observations with SN Ia data processed
with the MLCS2k2 light-curve fitter; using the SALT2 fitter in this combination
instead yields a different set of constraints at the same C.L. Our results
indicate, with a quite general and model-independent approach, that MLCS2k2
favors Dvali-Gabadadze-Porrati-like cosmological models, while SALT2 favors
ΛCDM-like ones. Progress in determining the transition redshift and/or the
present value of the deceleration parameter depends crucially on resolving the
discrepancy between these two light-curve fitters.
Comment: 25 pages, 9 figures
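To make the kink construction concrete, here is an illustrative sketch: a tanh-shaped q(z) (an assumed stand-in, not necessarily the paper's exact parametrization) is integrated to give H(z)/H0 and a flat-geometry luminosity distance, using d ln H / dz = (1 + q) / (1 + z).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def q(z, qi=0.5, q0=-0.6, zt=0.7, tau=0.2):
    """Assumed kink: qi at high z, q0 today, transition of width tau at zt."""
    return q0 + (qi - q0) * 0.5 * (1.0 + np.tanh((z - zt) / tau))

z = np.linspace(0.0, 2.0, 2001)

# H(z)/H0 from d ln H / dz = (1 + q) / (1 + z).
lnE = cumulative_trapezoid((1.0 + q(z)) / (1.0 + z), z, initial=0.0)
E = np.exp(lnE)

# Flat geometry: dimensionless comoving distance, then luminosity distance.
Dc = cumulative_trapezoid(1.0 / E, z, initial=0.0)
dL = (1.0 + z) * Dc                       # in units of c/H0

print("deceleration-to-acceleration transition near z =",
      z[np.argmin(np.abs(q(z)))])
```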