Maximizing survey volume for large-area multi-epoch surveys with Voronoi tessellation
The survey volume of a proper motion-limited sample is typically much smaller than that of a magnitude-limited sample, because of the noisy astrometric measurements from detectors that are not dedicated to astrometric missions. In order to apply an empirical completeness correction, existing works limit the survey depth to the shallower parts of the sky, which hampers the maximum potential of a survey. The number of epochs of measurement is a discrete quantity that cannot be interpolated across the projected plane of observation, so the survey properties change in discrete steps across the sky. This work proposes a method to dissect the survey into small parts with Voronoi tessellation, using candidate objects as generating points, such that each part defines a ‘mini-survey’ with its own properties. Coupled with a maximum-volume density estimator, the new method is demonstrated to be unbiased and to recover ∼20 per cent more objects than the existing method in a mock catalogue of a white dwarf-only solar neighbourhood with Pan–STARRS 1-like characteristics. Towards the end of this work, we demonstrate one way to increase the tessellation resolution with artificial generating points, which would be useful for the analysis of rare objects with small number counts.
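A minimal sketch of the idea in Python, under toy assumptions (hypothetical candidate positions on a flat-sky patch, and a per-candidate limiting distance standing in for the locally varying survey depth): each fine pixel of the sky joins the Voronoi cell of its nearest generating point, so each cell acts as a mini-survey, and each object then contributes 1/V_max computed from its own cell's solid angle.

```python
import math

# Hypothetical candidates: (RA, Dec in degrees, limiting distance d_max in pc).
# In the real method d_max would follow from the local epochs/astrometric noise.
candidates = [(10.0, 5.0, 200.0), (12.0, 6.0, 150.0), (11.0, 4.0, 250.0)]

def nearest_candidate(ra, dec):
    """Voronoi cell membership: index of the closest generating point."""
    return min(range(len(candidates)),
               key=lambda i: (candidates[i][0] - ra) ** 2 +
                             (candidates[i][1] - dec) ** 2)

# Tessellate a small patch into pixels; each pixel joins one mini-survey.
pix = 0.5                              # pixel size in degrees (flat-sky toy)
cell_area = [0.0] * len(candidates)    # solid angle per Voronoi cell, deg^2
for i in range(20):
    for j in range(20):
        ra, dec = 9.0 + i * pix, 3.0 + j * pix
        cell_area[nearest_candidate(ra, dec)] += pix * pix

# 1/V_max estimator: each object contributes the inverse of the maximum
# volume in which its own mini-survey could have detected it.
DEG2_TO_SR = (math.pi / 180.0) ** 2
density = 0.0
for k, (_, _, d_max) in enumerate(candidates):
    v_max = cell_area[k] * DEG2_TO_SR * d_max ** 3 / 3.0  # cone volume, pc^3
    density += 1.0 / v_max
print(f"estimated space density: {density:.2e} objects per pc^3")
```

The artificial generating points mentioned at the end of the abstract would simply be extra entries in `candidates` that refine the tessellation without contributing to the density sum.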
The High Redshift Integrated Sachs-Wolfe Effect
In this paper we rely on the quasar (QSO) catalog of the Sloan Digital Sky
Survey Data Release Six (SDSS DR6) of about one million photometrically
selected QSOs to compute the Integrated Sachs-Wolfe (ISW) effect at high
redshift, aiming at constraining the behaviour of the expansion rate and thus
the behaviour of dark energy at those epochs. This unique sample significantly
extends previous catalogs to higher redshifts while retaining high efficiency
in the selection algorithm. We compute the auto-correlation function (ACF) of
QSO number density from which we extract the bias and the stellar
contamination. We then calculate the cross-correlation function (CCF) between
QSO number density and Cosmic Microwave Background (CMB) temperature
fluctuations in different subsamples: at high (z>1.5) and low (z<1.5) redshift,
and for two different QSO selections, corresponding to a conservative and a more
speculative analysis. We find overall evidence for a cross-correlation different from
zero at the 2.7\sigma level, while this evidence drops to 1.5\sigma at z>1.5.
We focus on the capabilities of the ISW to constrain the behaviour of the dark
energy component at high redshift both in the \LambdaCDM and Early Dark Energy
cosmologies, when the dark energy is substantially unconstrained by
observations. At present, the inclusion of the ISW data yields only a modest
improvement over the constraints obtained from other cosmological
datasets. We study the capabilities of a future high-redshift QSO survey and find
that the ISW signal can improve the constraints on the most important
cosmological parameters derived from Planck CMB data, including the high
redshift dark energy abundance, by a factor \sim 1.5.
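A toy pair-count estimator of the QSO–CMB cross-correlation function described above; the amplitudes, the smooth stand-in for the shared ISW-like signal, and the noise levels are all illustrative, not taken from the paper.

```python
import math, random

random.seed(1)

# Toy pixel maps on a 10x10 deg patch: a smooth common "ISW-like" signal
# plus independent noise in the QSO overdensity and the CMB temperature map.
npix = 150
pos = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(npix)]
signal = [math.sin(0.6 * x) * math.cos(0.6 * y) for (x, y) in pos]
delta_q = [s + random.gauss(0, 0.3) for s in signal]   # QSO number density
delta_T = [s + random.gauss(0, 0.3) for s in signal]   # CMB temperature

def ccf(theta_lo, theta_hi):
    """Pair-count estimate of w(theta) = <delta_q(n1) delta_T(n2)> over
    pixel pairs separated by [theta_lo, theta_hi) degrees (flat sky)."""
    total, pairs = 0.0, 0
    for i in range(npix):
        for j in range(npix):
            theta = math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])
            if theta_lo <= theta < theta_hi:
                total += delta_q[i] * delta_T[j]
                pairs += 1
    return total / pairs if pairs else 0.0

# Correlated on small separations, decorrelating on larger ones.
print("w(0-1 deg) =", round(ccf(0.0, 1.0), 3))
print("w(4-5 deg) =", round(ccf(4.0, 5.0), 3))
```

The auto-correlation function used in the paper to extract the bias and stellar contamination is the same estimator with `delta_T` replaced by `delta_q`.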
Large-k Limit of Multi-Point Propagators in the RG Formalism
Renormalized versions of cosmological perturbation theory have been very
successful in recent years in describing the evolution of structure formation
in the weakly non-linear regime. The concept of multi-point propagators has
been introduced as a tool to quantify the relation between the initial matter
distribution and the final one and to push the validity of the approaches to
smaller scales. We generalize the n-point propagators that have been considered
until now to include a new class of multi-point propagators that are relevant
in the framework of the renormalization group formalism. The large-k results
obtained for this general class of multi-point propagators match the results
obtained earlier both in the case of Gaussian and non-Gaussian initial
conditions. We discuss how the large-k results can be used to improve the
accuracy of calculations of the power spectrum and bispectrum in the
presence of initial non-Gaussianities.
Large Synoptic Survey Telescope: Overview
A large wide-field telescope and camera with optical throughput over 200 m^2
deg^2 -- a factor of 50 beyond what we currently have -- would enable the
detection of faint moving or bursting optical objects: from Earth threatening
asteroids to energetic events at the edge of the optical universe. An optimized
design for LSST is an 8.4 m telescope with a 3 degree field of view and an
optical throughput of 260 m^2 deg^2. With its large throughput and dedicated
all-sky monitoring mode, the LSST will reach 24th magnitude in a single 10
second exposure, opening unexplored regions of astronomical parameter space.
The heart of the 2.3 Gpixel camera will be an array of imager modules with 10
micron pixels. Once each month LSST will survey up to 14,000 deg^2 of the sky
with many ~10 second exposures. Over time LSST will survey 30,000 deg^2 deeply
in multiple bandpasses, enabling innovative investigations ranging from
galactic structure to cosmology. This is a shift in paradigm for optical
astronomy: from "survey follow-up" to "survey direct science." The resulting
real-time data products and fifteen petabyte time-tagged imaging database and
photometric catalog will provide a unique resource. A collaboration of ~80
engineers and scientists is gearing up to confront this exciting challenge.
A direct probe of cosmological power spectra of the peculiar velocity field and the gravitational lensing magnification from photometric redshift surveys
The cosmological peculiar velocity field (deviations from the pure Hubble
flow) of matter carries significant information on dark energy, dark matter and
the underlying theory of gravity on large scales. Peculiar motions of galaxies
introduce systematic deviations between the observed galaxy redshifts z and the
corresponding cosmological redshifts z_cos. A novel method for estimating the
angular power spectrum of the peculiar velocity field based on observations of
galaxy redshifts and apparent magnitudes m (or equivalently fluxes) is
presented. This method exploits the fact that a mean relation between z_cos and
m of galaxies can be derived from all galaxies in a redshift-magnitude survey.
Given a galaxy magnitude, it is shown that the z_cos(m) relation yields its
cosmological redshift with a 1-sigma error of sigma_z~0.3 for a survey like
Euclid (~10^9 galaxies at z<~2), and can be used to constrain the angular power
spectrum of z-z_cos(m) with a high signal-to-noise ratio. At large angular
separations corresponding to l<~15, we obtain significant constraints on the
power spectrum of the peculiar velocity field. At 15<~l<~60, magnitude shifts
in the z_cos(m) relation caused by gravitational lensing magnification
dominate, allowing us to probe the line-of-sight integral of the gravitational
potential. Effects related to the environmental dependence in the luminosity
function can easily be computed and their contamination removed from the
estimated power spectra. The amplitude of the combined velocity and lensing
power spectra at z~1 can be measured with <~5% accuracy.
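The core of the method can be sketched as follows; the mean magnitude-redshift relation, its scatter, and the galaxy counts below are illustrative stand-ins, not the paper's survey model. The mean z_cos(m) relation is estimated empirically by averaging redshifts in magnitude bins, and the residual z - z_cos(m) is the observable whose angular power spectrum is then analysed.

```python
import random, statistics

random.seed(2)

# Mock redshift-magnitude survey: each galaxy has a cosmological redshift,
# a magnitude from a toy mean m(z_cos) relation with scatter, and a small
# redshift perturbation standing in for its peculiar velocity.
galaxies = []
for _ in range(5000):
    z_cos = random.uniform(0.2, 1.5)
    m = 20.0 + 2.5 * z_cos + random.gauss(0, 0.4)   # illustrative relation
    z_obs = z_cos + random.gauss(0, 0.002)          # peculiar-velocity shift
    galaxies.append((m, z_obs, z_cos))

def z_cos_of_m(m, width=0.2):
    """Empirical mean z_cos(m): average observed redshift of galaxies
    in a magnitude bin around m, derived from the survey itself."""
    zs = [z for (mi, z, _) in galaxies if abs(mi - m) < width]
    return statistics.mean(zs)

# The residual z_obs - z_cos(m) is the quantity whose angular power
# spectrum probes peculiar velocities (and, at higher l, lensing).
residuals = [z - z_cos_of_m(m) for (m, z, _) in galaxies[:200]]
print("per-galaxy 1-sigma error on z_cos:",
      round(statistics.pstdev(residuals), 3))
```

The per-galaxy error is large, but averaging z - z_cos(m) over the enormous number of galaxies in each angular patch is what yields the high signal-to-noise power spectrum quoted in the abstract.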
Strongly lensed SNe Ia in the era of LSST: observing cadence for lens discoveries and time-delay measurements
The upcoming Large Synoptic Survey Telescope (LSST) will detect many strongly
lensed Type Ia supernovae (LSNe Ia) for time-delay cosmography. This will
provide an independent and direct way of measuring the Hubble constant H_0,
which is necessary to address the current tension in H_0 between
the local distance ladder and the early-Universe measurements. We present a
detailed analysis of different observing strategies for the LSST, and quantify
their impact on time-delay measurement between multiple images of LSNe Ia. For
this, we produced microlensed mock-LSST light curves for which we estimated the
time delay between different images. We find that using only LSST data for
time-delay cosmography is not ideal. Instead, we advocate using LSST as a
discovery machine for LSNe Ia, enabling time delay measurements from follow-up
observations from other instruments in order to increase the number of systems
by a factor of 2 to 16 depending on the observing strategy. Furthermore, we
find that LSST observing strategies that provide a good sampling frequency
(a mean inter-night gap of around two days) and a high cumulative season length
(ten seasons with a season length of around 170 days per season) are favored.
Rolling cadences subdivide the survey and focus on different parts in different
years; these observing strategies trade the number of seasons for better
sampling frequency. In our investigation, this leads to half the number of
systems in comparison to the best observing strategy. Therefore rolling
cadences are disfavored because the gain from the increased sampling frequency
cannot compensate for the shortened cumulative season length. We anticipate
that the sample of lensed SNe Ia from our preferred LSST cadence strategies
with rapid follow-up observations would yield an independent percent-level
constraint on H_0.
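As a rough illustration of the sample-size argument, assuming independent systems and a hypothetical ~7 per cent H_0 precision per lensed SN Ia (a placeholder value, not from the paper), the combined constraint tightens as 1/sqrt(N), which is why a factor of 2 to 16 more systems from follow-up matters.

```python
import math

# Assumed (illustrative) fractional H_0 error per lensed SN Ia system.
per_system_precision = 0.07

def h0_precision(n_systems):
    """Combined fractional H_0 precision from n independent systems."""
    return per_system_precision / math.sqrt(n_systems)

# Follow-up observations boosting the sample by a factor of 2-16
# (as in the abstract) tighten the constraint accordingly.
for n in (10, 40, 160):
    print(f"N = {n:3d}: sigma_H0/H0 ~ {100 * h0_precision(n):.1f}%")
```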
Cosmological parameter constraints from galaxy–galaxy lensing and galaxy clustering with the SDSS DR7
Recent studies have shown that the cross-correlation coefficient between galaxies and dark matter is very close to unity on scales outside a few virial radii of galaxy haloes, independent of the details of how galaxies populate dark matter haloes. This finding makes it possible to determine the dark matter clustering from measurements of galaxy–galaxy weak lensing and galaxy clustering. We present new cosmological parameter constraints based on large-scale measurements of spectroscopic galaxy samples from the Sloan Digital Sky Survey (SDSS) data release 7. We generalize the approach of Baldauf et al. to remove small-scale information (below 2 and 4 h^(−1) Mpc for lensing and clustering measurements, respectively), where the cross-correlation coefficient differs from unity. We derive constraints for three galaxy samples covering 7131 deg^2, containing 69 150, 62 150 and 35 088 galaxies with mean redshifts of 0.11, 0.28 and 0.40. We clearly detect scale-dependent galaxy bias for the more luminous galaxy samples, at a level consistent with theoretical expectations. When we vary both σ_8 and Ω_m (and marginalize over non-linear galaxy bias) in a flat Λ cold dark matter model, the best-constrained quantity is σ_8(Ω_m/0.25)^(0.57) = 0.80 ± 0.05 (1σ, stat. + sys.), where statistical and systematic errors (photometric redshift and shear calibration) have comparable contributions, and we have fixed n_s = 0.96 and h = 0.7. These strong constraints on the matter clustering suggest that this method is competitive with cosmic shear in current data, while having very complementary and in some ways less serious systematics. We therefore expect that this method will play a prominent role in future weak lensing surveys. 
When we combine these data with Wilkinson Microwave Anisotropy Probe 7-year (WMAP7) cosmic microwave background (CMB) data, constraints on σ_8, Ω_m, H_0, w_(de) and ∑m_ν become 30–80 per cent tighter than with CMB data alone, since our data break several parameter degeneracies
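The degeneracy direction quoted above can be read off directly; a one-line sketch (the amplitude and exponent are those stated in the abstract):

```python
# The lensing+clustering data constrain the combination
# sigma_8 * (Omega_m / 0.25)**0.57 = 0.80, so along this degeneracy
# direction sigma_8 follows for any assumed Omega_m.
def sigma8_given_omega_m(omega_m, combo=0.80, alpha=0.57):
    return combo / (omega_m / 0.25) ** alpha

for om in (0.25, 0.30):
    print(f"Omega_m = {om}: sigma_8 = {sigma8_given_omega_m(om):.3f}")
```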
Measuring neutrino masses with a future galaxy survey
We perform a detailed forecast on how well a Euclid-like photometric galaxy
and cosmic shear survey will be able to constrain the absolute neutrino mass
scale. Adopting conservative assumptions about the survey specifications and
assuming complete ignorance of the galaxy bias, we estimate that the minimum
mass sum, sum m_nu ~ 0.06 eV in the normal hierarchy, can be detected at 1.5
sigma to 2.5 sigma significance, depending on the model complexity, using a
combination of galaxy and cosmic shear power spectrum measurements in
conjunction with CMB temperature and polarisation observations from Planck.
With better knowledge of the galaxy bias, the significance of the detection
could potentially reach 5.4 sigma. Interestingly, neither Planck+shear nor
Planck+galaxy alone can achieve this level of sensitivity; it is the combined
effect of galaxy and cosmic shear power spectrum measurements that breaks the
persistent degeneracies between the neutrino mass, the physical matter density,
and the Hubble parameter. Notwithstanding this remarkable sensitivity to sum
m_nu, Euclid-like shear and galaxy data will not be sensitive to the exact mass
spectrum of the neutrino sector; no significant bias (< 1 sigma) in the
parameter estimation is induced by fitting inaccurate models of the neutrino
mass splittings to the mock data, nor does the goodness-of-fit of these models
suffer any significant degradation relative to the true one (Delta chi^2_eff < 1).
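The degeneracy-breaking argument can be illustrated with a toy two-parameter Fisher forecast: each probe alone constrains only one linear combination of parameters (a singular Fisher matrix), while their sum is invertible and gives finite marginalized errors on both. The directions and strengths below are arbitrary illustrations, not the paper's forecast.

```python
import math

def fisher_from_direction(a, b, strength):
    """Rank-1 Fisher matrix constraining only the combination a*x + b*y."""
    return [[strength * a * a, strength * a * b],
            [strength * a * b, strength * b * b]]

# Each probe alone fixes a single direction in a toy (m_nu, omega_m) plane.
F_shear  = fisher_from_direction(1.0,  0.5, 100.0)
F_galaxy = fisher_from_direction(1.0, -0.8, 100.0)

# The combined Fisher matrix is invertible: the degeneracy is broken.
F = [[F_shear[i][j] + F_galaxy[i][j] for j in range(2)] for i in range(2)]
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
cov = [[ F[1][1] / det, -F[0][1] / det],
       [-F[1][0] / det,  F[0][0] / det]]
sigma_mnu = math.sqrt(cov[0][0])
print(f"marginalized sigma on the first parameter: {sigma_mnu:.3f}")
```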
Deep recurrent neural networks for supernovae classification
We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representative SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986 and an SPCC figure of merit F_1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F_1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
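A sketch of how variable-length light curves might be packed into fixed-shape RNN inputs of times and per-filter fluxes, as the abstract describes; the filter set, feature layout, and padding scheme here are illustrative and not the exact pipeline of the released code.

```python
# Hypothetical filter set; the SPCC data use griz-like photometric bands.
FILTERS = ["g", "r", "i", "z"]

def to_sequence(observations, max_len):
    """observations: list of (time, filter, flux) tuples. Returns a
    max_len list of feature vectors [time, flux_g, flux_r, flux_i, flux_z],
    time-ordered and zero-padded so every light curve has the same shape."""
    seq = []
    for t, band, flux in sorted(observations):
        features = [t] + [0.0] * len(FILTERS)
        features[1 + FILTERS.index(band)] = flux
        seq.append(features)
    seq = seq[:max_len]
    seq += [[0.0] * (1 + len(FILTERS)) for _ in range(max_len - len(seq))]
    return seq

# Example: a short mock light curve with three observations.
lc = [(56100.0, "g", 12.3), (56102.5, "r", 20.1), (56107.0, "i", 18.4)]
X = to_sequence(lc, max_len=5)
print(len(X), len(X[0]))   # 5 time steps, 5 features each
```

A recurrent network (e.g. a bidirectional LSTM) would then consume such sequences, with a masking layer ignoring the padded steps.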
