Planck Intermediate Results. IV. The XMM-Newton validation programme for new Planck galaxy clusters
We present the final results from the XMM-Newton validation follow-up of new
Planck galaxy cluster candidates. We observed 15 new candidates, detected with
signal-to-noise ratios between 4.0 and 6.1 in the 15.5-month nominal Planck
survey. The candidates were selected using ancillary data flags derived from
the ROSAT All Sky Survey (RASS) and Digitized Sky Survey all-sky maps, with the
aim of pushing into the low SZ flux, high-z regime and testing RASS flags as
indicators of candidate reliability. 14 new clusters were detected by XMM,
including 2 double systems. Redshifts lie in the range 0.2 to 0.9, with 6
clusters at z>0.5. Estimated M500 values range from 2.5×10^14 to 8×10^14 Msun. We
discuss our results in the context of the full XMM validation programme, in
which 51 new clusters have been detected. This includes 4 double and 2 triple
systems, some of which are chance projections on the sky of clusters at
different z. We find that association with a RASS-BSC source is a robust
indicator of the reliability of a candidate, whereas association with an FSC
source does not guarantee that the SZ candidate is a bona fide cluster.
Nevertheless, most Planck clusters appear in RASS maps, with a significance
greater than 2 sigma being a good indication that the candidate is a real
cluster. The full sample gives a Planck sensitivity threshold of Y500 ~ 4×10^-4
arcmin^2, with an indication of Malmquist bias in the YX-Y500 relation below this
level. The corresponding mass threshold depends on z. Systems with M500 > 5×10^14
Msun at z > 0.5 are easily detectable with Planck. The newly-detected
clusters follow the YX-Y500 relation derived from X-ray selected samples.
Compared to X-ray selected clusters, the new SZ clusters have a lower X-ray
luminosity on average for their mass. There is no indication of departure from
standard self-similar evolution in the X-ray versus SZ scaling properties.
(abridged) Comment: accepted by A&A
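The candidate-vetting heuristics reported above reduce to a simple decision rule. The following is a minimal sketch of that rule; only the BSC/FSC distinction and the 2 sigma RASS significance threshold come from the abstract, while the function name and the three-way labels are illustrative assumptions.

```python
def candidate_reliability(has_bsc_match: bool, has_fsc_match: bool,
                          rass_significance_sigma: float) -> str:
    """Classify a Planck SZ cluster candidate using the heuristics
    reported by the XMM-Newton validation programme:
      - association with a RASS Bright Source Catalogue (BSC) source
        is a robust indicator of candidate reliability;
      - a RASS-map significance greater than 2 sigma is a good
        indication of a real cluster;
      - association with a Faint Source Catalogue (FSC) source alone
        does not guarantee a bona fide cluster.
    The labels returned here are illustrative, not from the paper."""
    if has_bsc_match:
        return "likely reliable"
    if rass_significance_sigma > 2.0:
        return "probably real"
    if has_fsc_match:
        return "unconfirmed (FSC match alone is insufficient)"
    return "low confidence"

print(candidate_reliability(False, True, 2.7))  # -> "probably real"
```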
Short-lived Nuclei in the Early Solar System: Possible AGB Sources
(Abridged) We review abundances of short-lived nuclides in the early solar
system (ESS) and the methods used to determine them. We compare them to the
inventory for a uniform galactic production model. Within a factor of two,
observed abundances of several isotopes are compatible with this model. I-129
is an exception, with an ESS inventory much lower than expected. The isotopes
Pd-107, Fe-60, Ca-41, Cl-36, Al-26, and Be-10 require late addition to the
solar nebula. Be-10 is the product of particle irradiation of the solar system
as probably is Cl-36. Late injection by a supernova (SN) cannot be responsible
for most short-lived nuclei without excessively producing Mn-53; it can be the
source of Mn-53 and maybe Fe-60. If a late SN is responsible for these two
nuclei, it still cannot make Pd-107 and other isotopes. We emphasize an AGB
star as a source of nuclei, including Fe-60, and explore this possibility with
new stellar models. A dilution factor of about 4×10^-3 gives reasonable amounts of
many nuclei. We discuss the role of irradiation for Al-26, Cl-36 and Ca-41.
We emphasize conflicts between scenarios, as well as the absence of a global
interpretation of the existing data. Abundances of actinides indicate a
quiescent interval of about 10^8 years for actinide group production in order to
explain the data on Pu-244 and new bounds on Cm-247. This interval is not
compatible with Hf-182 data, so a separate type of r-process is needed for at
least the actinides, distinct from the two types previously identified. The
apparent coincidence of the I-129 and trans-actinide time scales suggests that
the last actinide contribution was from an r-process that produced actinides
without fission recycling so that the yields at Ba and below were governed by
fission. Comment: 92 pages, 14 figure files, in press at Nuclear Physics
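The bookkeeping behind statements such as "a dilution factor of about 4×10^-3 gives reasonable amounts of many nuclei" is a single free-decay relation: the ESS ratio of a short-lived nuclide to a stable reference equals the source ratio, scaled by the dilution factor f and decayed over the interval Δ between production and incorporation into the solar nebula. Below is a minimal sketch of that relation; the numerical inputs are illustrative placeholders, not values from the paper.

```python
import math

def ess_ratio(source_ratio: float, dilution: float,
              delta_yr: float, mean_life_yr: float) -> float:
    """ESS abundance ratio (N_R / N_stable) after diluting the stellar
    source by a factor `dilution` and letting the radionuclide decay
    freely for `delta_yr` years (mean life tau = half-life / ln 2)."""
    return dilution * source_ratio * math.exp(-delta_yr / mean_life_yr)

# Illustrative only: a source ratio of 1e-3, the paper's ~4e-3 dilution,
# a 5e5 yr delay, and a 1e6 yr mean life for a hypothetical nuclide.
print(ess_ratio(1e-3, 4e-3, 5e5, 1e6))  # ~2.4e-6
```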
A combined long-range phasing and long haplotype imputation method to impute phase for SNP genotypes
Abstract
Background
Knowing the phase of marker genotype data can be useful in genome-wide association studies, because it makes it possible to use analysis frameworks that account for identity by descent or parent of origin of alleles, and it can lead to a large increase in data quantities via genotype or sequence imputation. Long-range phasing and haplotype library imputation constitute a fast and accurate method to impute phase for SNP data.
Methods
A long-range phasing and haplotype library imputation algorithm was developed. It combines information from surrogate parents and long haplotypes to resolve phase in a manner that is not dependent on the family structure of a dataset or on the presence of pedigree information.
Results
The algorithm performed well in both simulated and real livestock and human datasets in terms of both phasing accuracy and computational efficiency. The percentage of alleles that could be phased in both simulated and real datasets of varying size generally exceeded 98%, while the percentage of alleles incorrectly phased in simulated data was generally less than 0.5%. The accuracy of phasing was affected by dataset size, with lower accuracy for dataset sizes below 1000, but was not affected by effective population size, family data structure, presence or absence of pedigree information, or SNP density. The method was computationally fast. In comparison to a commonly used statistical method (fastPHASE), the current method made about 8% fewer phasing mistakes and ran about 26 times faster for a small dataset. For larger datasets, the differences in computational time are expected to be even greater. A computer program implementing these methods has been made available.
Conclusions
The algorithm and software developed in this study make feasible the routine phasing of high-density SNP chips in large datasets.
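The "surrogate parent" idea at the heart of long-range phasing can be shown in one step: wherever an individual that shares a long haplotype with the proband is homozygous, the allele carried on the shared gamete is determined. The sketch below illustrates only that single step, with genotypes coded as 0/1/2 copies of the alternate allele; it is a simplified illustration, not the published algorithm.

```python
def phase_from_surrogates(proband_genotype: int,
                          surrogate_genotypes: list[int]) -> tuple | None:
    """Attempt to phase one heterozygous SNP of a proband using the
    surrogate parents of one gamete. If any surrogate is homozygous
    (genotype 0 or 2), the allele shared through that gamete is known,
    and the other allele goes to the opposite gamete."""
    if proband_genotype != 1:
        return None  # nothing to resolve at a homozygous site
    for g in surrogate_genotypes:
        if g == 0:            # surrogate carries only reference alleles
            return (0, 1)     # (shared gamete, other gamete)
        if g == 2:            # surrogate carries only alternate alleles
            return (1, 0)
    return None               # all surrogates heterozygous: unresolved

print(phase_from_surrogates(1, [1, 2, 1]))  # -> (1, 0)
```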
Planck 2013 results. XX. Cosmology from Sunyaev-Zeldovich cluster counts
We present constraints on cosmological parameters using number counts as a
function of redshift for a sub-sample of 189 galaxy clusters from the Planck SZ
(PSZ) catalogue. The PSZ is selected through the signature of the
Sunyaev--Zeldovich (SZ) effect, and the sub-sample used here has a
signal-to-noise threshold of seven, with each object confirmed as a cluster and
all but one with a redshift estimate. We discuss the completeness of the sample
and our construction of a likelihood analysis. Using a relation between mass,
M, and SZ signal, Y, calibrated to X-ray measurements, we derive constraints
on the power spectrum amplitude σ8 and the matter density parameter Ωm
in a flat ΛCDM model. We test the robustness of our estimates and find that
possible biases in the Y-M relation and the halo mass function are larger
than the statistical uncertainties from the cluster sample. Assuming the
X-ray determined mass to be biased low relative to the true mass by between
zero and 30%, motivated by comparison of the observed mass scaling relations
to those from a set of numerical simulations, we obtain joint constraints on
σ8 and Ωm. The value of σ8 is degenerate with the mass bias; if the latter is
fixed to a value of 20% we obtain a tighter one-dimensional constraint on the
combination σ8(Ωm/0.27)^0.3. We find that the larger values of σ8 and Ωm
preferred by Planck's measurements of the primary CMB anisotropies can be
accommodated by a mass bias of about 40%. Alternatively, consistency with the
primary CMB constraints can be achieved by inclusion of processes that
suppress power on small scales relative to the ΛCDM model, such as a
component of massive neutrinos. (abridged) Comment: 20 pages, accepted for
publication by A&A
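The "construction of a likelihood analysis" mentioned above can be illustrated with the standard choice for number counts: independent Poisson-distributed counts in redshift bins, with expected counts per bin predicted by the cosmological model. The sketch below shows that generic construction, not the paper's exact likelihood; the example numbers are arbitrary.

```python
import math

def ln_poisson_likelihood(observed: list[int], predicted: list[float]) -> float:
    """Log-likelihood of observed cluster counts per redshift bin,
    assuming independent Poisson statistics in each bin:
        ln L = sum_i [ N_i ln(lambda_i) - lambda_i - ln(N_i!) ]
    `predicted` would come from the halo mass function and the survey
    selection function for given cosmological parameters (e.g. sigma_8
    and Omega_m)."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for n, lam in zip(observed, predicted))

# Illustrative: three redshift bins of observed counts vs. model predictions.
print(ln_poisson_likelihood([80, 70, 39], [85.0, 66.0, 41.5]))
```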
Planck Intermediate Results. XI: The gas content of dark matter halos: the Sunyaev-Zeldovich-stellar mass relation for locally brightest galaxies
Planck 2013 results. XXIX. Planck catalogue of Sunyaev-Zeldovich sources
We describe the all-sky Planck catalogue of clusters and cluster candidates derived from Sunyaev-Zeldovich (SZ) effect detections using the first 15.5 months of Planck satellite observations. The catalogue contains 1227 entries, making it over six times the size of the Planck Early SZ (ESZ) sample and the largest SZ-selected catalogue to date. It contains 861 confirmed clusters, of which 178 are new discoveries confirmed mostly through follow-up observations, while a further 683 are previously-known clusters. The remaining 366 entries have the status of cluster candidates, and we divide them into three classes according to the quality of evidence that they are likely to be true clusters. The Planck SZ catalogue is the deepest all-sky cluster catalogue, with redshifts up to about one, and it spans the broadest cluster mass range, from (0.1 to 1.6) × 10^15 M⊙. Confirmation of cluster candidates through comparison with existing surveys or cluster catalogues is extensively described, as is the statistical characterization of the catalogue in terms of completeness and statistical reliability. The outputs of the validation process are provided as additional information. This gives, in particular, an ensemble of 813 cluster redshifts, and for all these Planck clusters we also include a mass estimated from a newly-proposed SZ-mass proxy. A refined measure of the SZ Compton parameter for the clusters with X-ray counterparts is provided, as is an X-ray flux for all the Planck clusters not previously detected in X-ray surveys.
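Completeness, part of the statistical characterization mentioned above, is often approximated in Planck SZ analyses by an error-function model for the probability of detecting a cluster of given true signal in Gaussian noise. The sketch below implements that common analytic form as an illustration, not the catalogue's Monte Carlo characterization; the example numbers are arbitrary, and q = 7 echoes the signal-to-noise threshold of the cosmology sub-sample described earlier.

```python
import math

def erf_completeness(y500: float, sigma_y: float, q: float = 7.0) -> float:
    """Analytic (error-function) completeness: probability that a
    cluster with true integrated Compton signal `y500` exceeds the
    detection threshold q * sigma_y, assuming Gaussian noise sigma_y:
        P(detect) = 0.5 * [1 + erf((Y500 - q*sigma_y) / (sqrt(2)*sigma_y))]
    """
    return 0.5 * (1.0 + math.erf((y500 - q * sigma_y)
                                 / (math.sqrt(2.0) * sigma_y)))

print(erf_completeness(y500=8e-4, sigma_y=1e-4))  # ~0.84 at S/N = 8
```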
A method for the allocation of sequencing resources in genotyped livestock populations
Abstract
Background
This paper describes a method, called AlphaSeqOpt, for the allocation of sequencing resources in livestock populations with existing phased genomic data, so as to maximise the ability to phase and impute sequenced haplotypes into the whole population.
Methods
We present two algorithms. The first selects focal individuals that collectively represent the maximum possible portion of the haplotype diversity in the population. The second allocates a fixed sequencing budget among the families of focal individuals to enable phasing of their haplotypes at the sequence level. We tested the performance of the two algorithms in simulated pedigrees. For each pedigree, we evaluated the proportion of population haplotypes that are carried by the focal individuals and compared our results to a variant of the widely-used key ancestors approach and to two haplotype-based approaches. We calculated the expected phasing accuracy of the haplotypes of a focal individual at the sequence level given the proportion of the fixed sequencing budget allocated to its family.
Results
AlphaSeqOpt maximises the ability to capture and phase the most frequent haplotypes in a population in three ways. First, it selects focal individuals that collectively represent a larger portion of the population haplotype diversity than existing methods. Second, it selects focal individuals from across the pedigree whose haplotypes can be easily phased using family-based phasing and imputation algorithms, thus maximising the ability to impute sequence into the rest of the population. Third, it allocates more of the fixed sequencing budget to focal individuals whose haplotypes are more frequent in the population than to focal individuals whose haplotypes are less frequent. Unlike existing methods, we additionally present an algorithm to allocate part of the sequencing budget to the families (i.e. immediate ancestors) of focal individuals to ensure that their haplotypes can be phased at the sequence level, which is essential for enabling and maximising subsequent sequence imputation.
Conclusions
We present a new method for the allocation of a fixed sequencing budget to focal individuals and their families such that the final sequenced haplotypes, when phased at the sequence level, represent the maximum possible portion of the haplotype diversity in the population that can be sequenced and phased at that budget.
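Selecting focal individuals that collectively represent the maximum possible portion of haplotype diversity has the shape of a weighted maximum-coverage problem, for which a greedy strategy is the natural baseline. The sketch below is such a baseline under simplified assumptions (each candidate is a set of haplotype identifiers with known population frequencies); it is not the published AlphaSeqOpt implementation.

```python
def select_focal_individuals(carriers: dict[str, set[str]],
                             hap_freq: dict[str, float],
                             n_focal: int) -> list[str]:
    """Greedy selection: repeatedly pick the individual whose haplotypes
    add the largest total population frequency not yet covered by the
    focal individuals chosen so far."""
    remaining = dict(carriers)          # don't mutate the caller's dict
    covered: set[str] = set()
    chosen: list[str] = []
    for _ in range(min(n_focal, len(remaining))):
        best = max(remaining, key=lambda ind: sum(
            hap_freq[h] for h in remaining[ind] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen

carriers = {"A": {"h1", "h2"}, "B": {"h2", "h3"}, "C": {"h4"}}
freq = {"h1": 0.4, "h2": 0.3, "h3": 0.2, "h4": 0.1}
print(select_focal_individuals(carriers, freq, 2))  # -> ['A', 'B']
```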
Genomic evaluations with many more genotypes
Abstract
Background
Genomic evaluations in Holstein dairy cattle have quickly become more reliable over the last two years in many countries as more animals have been genotyped for 50,000 markers. Evaluations can also include animals genotyped with more or fewer markers using new tools such as the 777,000 or 2,900 marker chips recently introduced for cattle. Gains from more markers can be predicted using simulation, whereas strategies to use fewer markers have been compared using subsets of actual genotypes. The overall cost of selection is reduced by genotyping most animals at less than the highest density and imputing their missing genotypes using haplotypes. Algorithms to combine different densities need to be efficient because numbers of genotyped animals and markers may continue to grow quickly.
Methods
Genotypes for 500,000 markers were simulated for the 33,414 Holsteins that had 50,000 marker genotypes in the North American database. Another 86,465 non-genotyped ancestors were included in the pedigree file, and linkage disequilibrium was generated directly in the base population. Mixed density datasets were created by keeping 50,000 (every tenth) of the markers for most animals. Missing genotypes were imputed using a combination of population haplotyping and pedigree haplotyping. Reliabilities of genomic evaluations using linear and nonlinear methods were compared.
Results
Differing marker sets for a large population were combined with just a few hours of computation. About 95% of paternal alleles were determined correctly, and > 95% of missing genotypes were called correctly. Reliability of breeding values was already high (84.4%) with 50,000 simulated markers. The gain in reliability from increasing the number of markers to 500,000 was only 1.6%, but more than half of that gain resulted from genotyping just 1,406 young bulls at higher density. Linear genomic evaluations had reliabilities 1.5% lower than the nonlinear evaluations with 50,000 markers and 1.6% lower with 500,000 markers.
Conclusions
Methods to impute genotypes and compute genomic evaluations were affordable with many more markers. Reliabilities for individual animals can be modified to reflect success of imputation. Breeders can improve reliability at lower cost by combining marker densities to increase both the numbers of markers and animals included in genomic evaluation. Larger gains are expected from increasing the number of animals than the number of markers.
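The population haplotyping step described above can be illustrated in miniature: a low-density genotype is matched against a library of high-density haplotypes, and missing positions are filled wherever all consistent library haplotypes agree. This sketch is a simplified illustration of that idea, not the imputation algorithm used in the paper.

```python
def impute_from_library(observed: list[int | None],
                        library: list[list[int]]) -> list[int | None]:
    """Fill missing allele calls (None) on one chromosome by finding
    library haplotypes consistent with every observed position and
    copying alleles where all consistent haplotypes agree."""
    consistent = [h for h in library
                  if all(o is None or o == a for o, a in zip(observed, h))]
    imputed = list(observed)
    for i, o in enumerate(observed):
        if o is None and consistent:
            alleles = {h[i] for h in consistent}
            if len(alleles) == 1:          # unambiguous across matches
                imputed[i] = alleles.pop()
    return imputed

library = [[0, 1, 1, 0], [0, 1, 0, 0], [1, 0, 1, 1]]
print(impute_from_library([0, 1, None, None], library))
# -> [0, 1, None, 0]: position 2 differs between matching haplotypes
#    and stays missing; position 3 agrees and is imputed.
```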
Comparison of linkage disequilibrium levels in Iranian indigenous cattle using whole genome SNPs data
Potential of gene drives with genome editing to increase genetic gain in livestock breeding programs
Abstract
Background
This paper uses simulation to explore how gene drives can increase genetic gain in livestock breeding programs. Gene drives are naturally occurring phenomena that cause a mutation on one chromosome to copy itself onto its homologous chromosome.
Methods
We simulated nine different breeding and editing scenarios with a common overall structure. Each scenario began with 21 generations of selection, followed by 20 generations of selection based on true breeding values where the breeder used selection alone, selection in combination with genome editing, or selection with genome editing and gene drives. In the scenarios that used gene drives, we varied the probability of successfully incorporating the gene drive. For each scenario, we evaluated genetic gain, genetic variance (σ_A^2), rate of change in inbreeding (ΔF), number of distinct quantitative trait nucleotides (QTN) edited, rate of increase in favourable allele frequencies of edited QTN, and the time to fix favourable alleles.
Results
Gene drives enhanced the benefits of genome editing in seven ways: (1) they amplified the increase in genetic gain brought about by genome editing; (2) they amplified the rate of increase in the frequency of favourable alleles and reduced the time it took to fix them; (3) they enabled more rapid targeting of QTN with lesser effect for genome editing; (4) they distributed fixed editing resources across a larger number of distinct QTN across generations; (5) they focussed editing on a smaller number of QTN within a given generation; (6) they reduced the level of inbreeding when editing a subset of the sires; and (7) they increased the efficiency of converting genetic variation into genetic gain.
Conclusions
Genome editing in livestock breeding results in short-, medium- and long-term increases in genetic gain. The increase in genetic gain occurs because editing increases the frequency of favourable alleles in the population. Gene drives accelerate the increase in allele frequency caused by editing, which results in even higher genetic gain over a shorter period of time, with no impact on inbreeding.
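The mechanism underlying these results, a drive allele converting its homologous counterpart so that transmission exceeds the Mendelian 50%, is compact enough to state in code. The sketch below models one parent's transmission at a single locus; the function name and the simple conversion model are illustrative assumptions, not the paper's simulation.

```python
import random

def transmit_allele(parent: tuple[int, int], drive_success: float) -> int:
    """Return the allele (1 = favourable/drive, 0 = other) passed on by
    one parent at a single locus. In a heterozygote, the gene drive
    first converts the homologous chromosome with probability
    `drive_success`, after which one allele is sampled at random."""
    a, b = parent
    if a != b and random.random() < drive_success:
        a = b = 1                      # drive copies itself across
    return random.choice((a, b))

# Mendelian expectation for a heterozygote is 0.5; a drive with an 80%
# incorporation probability pushes transmission of allele 1 toward
# 0.8 + 0.2 * 0.5 = 0.9.
random.seed(1)
draws = [transmit_allele((0, 1), drive_success=0.8) for _ in range(10_000)]
print(sum(draws) / len(draws))  # ~0.9
```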
