Quark Mixing, CKM Unitarity
In the Standard Model of elementary particles, quark mixing is expressed in
terms of a 3 x 3 unitary matrix V, the so-called Cabibbo-Kobayashi-Maskawa
(CKM) matrix. Significant unitarity checks are so far possible only for the
first row of this matrix. This article reviews the experimental and
theoretical information on these matrix elements. On the experimental side, we
find a 2.2 sigma to 2.7 sigma deviation from unitarity, which conflicts with
the Standard Model.
Comment: accepted by EPJ
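For concreteness, the first-row condition being tested is textbook CKM unitarity (standard notation, not specific to this article):

```latex
% First diagonal entry of V V^\dagger = 1 (first-row unitarity):
|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 1
% The quoted 2.2--2.7 sigma deviation is a nonzero
% \Delta = 1 - \bigl(|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2\bigr).
```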
Fast computation by block permanents of cumulative distribution functions of order statistics from several populations
The joint cumulative distribution function of order statistics arising from
several different populations is given in terms of the distribution functions
of the populations. In the case of two populations, the computational cost of
the formula is still exponential in the worst case, but it is a dramatic
improvement over the general formula of Bapat and Beg. When only the joint
distribution function of a fixed-size subset of the order statistics is
needed, the complexity becomes polynomial for two populations.
Comment: 21 pages, 3 figures
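For illustration, here is a minimal Python sketch of the underlying Bapat-Beg permanent formula for the marginal CDF of a single order statistic (our own illustration with our own function names, not the paper's block-permanent algorithm); the brute-force permanent makes the exponential worst case explicit:

```python
# Bapat-Beg formula for P(X_(r) <= x) with independent, non-identically
# distributed X_1, ..., X_n. The brute-force permanent costs O(n! * n),
# which is the exponential behavior the paper improves on.
from itertools import permutations
from math import factorial, prod

def permanent(M):
    """Permanent of a square matrix by brute force (exponential cost)."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def order_stat_cdf(r, cdfs, x):
    """P(X_(r) <= x) via Bapat-Beg: sum over j >= r of permanents of matrices
    whose first j columns repeat F_i(x) and last n-j columns repeat 1-F_i(x)."""
    n = len(cdfs)
    p = [F(x) for F in cdfs]
    q = [1.0 - v for v in p]
    return sum(
        permanent([[p[i]] * j + [q[i]] * (n - j) for i in range(n)])
        / (factorial(j) * factorial(n - j))
        for j in range(r, n + 1)
    )

# Two populations: three Uniform(0,1) variables and two Uniform(0,2) variables.
U1 = lambda x: min(max(x, 0.0), 1.0)
U2 = lambda x: min(max(x / 2.0, 0.0), 1.0)
print(order_stat_cdf(3, [U1, U1, U1, U2, U2], 0.8))  # P(median <= 0.8)
```

In the iid special case the permanents collapse to binomial terms, which is a convenient sanity check on the implementation.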
Studies of parton thermalization at RHIC
We consider the evolution of a parton system formed in the central region just
after a relativistic heavy-ion collision. The partons consist mostly of gluons
(minijets), which are produced by elastic scattering between constituent
partons of the colliding nuclei. We assume the system can be described by a
semi-classical Boltzmann transport equation, which we solve by means of the
test-particle Monte Carlo method, including retardation. The partons
proliferate via secondary radiative processes until thermalization is reached
under certain assumptions. The extended system is thermalized at about [...]
fm/c with a temperature of [...] MeV and stays in equilibrium for about 2 fm/c
with a breaking temperature of [...] MeV in the central rapidity region.
Comment: 14 pages
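To illustrate the test-particle idea in the simplest possible setting, here is a toy Python sketch (our own relaxation-time caricature, not the authors' radiative cascade): each test particle suffers a "collision" with probability dt/tau per step and is then resampled from a thermal distribution.

```python
# Toy test-particle Monte Carlo for a Boltzmann equation in the
# relaxation-time approximation. All parameter values are illustrative.
import random

def thermal_momentum(T):
    """Sample |p| for a massless particle from a Boltzmann distribution
    f(p) ~ p^2 exp(-p/T), i.e. a Gamma(shape=3, scale=T) distribution."""
    return random.gammavariate(3.0, T)

def evolve(momenta, T_eq, tau, dt, steps):
    """March the ensemble forward; each step a test particle 'collides'
    with probability dt/tau and relaxes toward equilibrium."""
    for _ in range(steps):
        for i in range(len(momenta)):
            if random.random() < dt / tau:
                momenta[i] = thermal_momentum(T_eq)
    return momenta

# Start far from equilibrium: every parton at the same minijet momentum (GeV).
parts = evolve([2.0] * 10000, T_eq=0.3, tau=0.5, dt=0.05, steps=200)
print(sum(parts) / len(parts))   # -> ~0.9 GeV, the thermal mean 3 * T_eq
```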
Commissioning of the vacuum system of the KATRIN Main Spectrometer
The KATRIN experiment will probe the neutrino mass by measuring the
beta-electron energy spectrum near the endpoint of tritium beta-decay. An
integral energy analysis will be performed by an electrostatic spectrometer
(Main Spectrometer), an ultra-high vacuum vessel with a length of 23.2 m, a
volume of 1240 m^3, and a complex inner electrode system with about 120000
individual parts. The strong magnetic field that guides the beta-electrons is
provided by superconducting solenoids at both ends of the spectrometer. Its
influence on turbo-molecular pumps and vacuum gauges had to be considered. A
system consisting of 6 turbo-molecular pumps and 3 km of non-evaporable getter
strips has been deployed and was tested during the commissioning of the
spectrometer. In this paper, the configuration, the commissioning with
bake-out at 300 °C, and the performance of this system are presented in detail. The
vacuum system has to maintain a pressure in the 10^{-11} mbar range. It is
demonstrated that the performance of the system is already close to these
stringent functional requirements for the KATRIN experiment, which will start
at the end of 2016.
Comment: submitted for publication in JINST, 39 pages, 15 figures
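As a rough plausibility check of the 10^{-11} mbar goal, the ultimate pressure of a pumped vessel is set by p = Q/S_eff, the total outgassing load divided by the effective pumping speed. The sketch below uses generic assumed numbers for a large baked stainless-steel vessel, not KATRIN specifications:

```python
# Ultimate-pressure estimate p = Q / S_eff. All inputs are assumptions.
area_cm2 = 690.0 * 1.0e4   # inner surface, assumed ~690 m^2 for a 1240 m^3 vessel
q_outgas = 1.0e-12         # mbar*l/(s*cm^2), typical H2 outgassing after bake-out
S_eff    = 1.0e6           # l/s, assumed combined NEG + turbo-pump speed for H2

Q = q_outgas * area_cm2    # total gas load in mbar*l/s
p = Q / S_eff              # ultimate pressure in mbar
print(f"p ~ {p:.1e} mbar") # ~7e-12 mbar, i.e. in the targeted 1e-11 mbar regime
```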
Interaction of a TeV Scale Black Hole with the Quark-Gluon Plasma at LHC
If the fundamental Planck scale is near a TeV, then parton collisions with
high enough center-of-mass energy should produce black holes. The production
rate for such black holes has been extensively studied for the case of a
proton-proton collision at \sqrt s = 14 TeV and for a lead-lead collision at
\sqrt s = 5.5 TeV at the LHC. As the parton energy density is much higher in
lead-lead collisions than in pp collisions, one natural question is whether
the produced black holes will be able to absorb the partons formed in the
lead-lead collisions and eventually 'eat' the quark-gluon plasma formed at the
LHC. In this paper, we make a quantitative analysis of this possibility and
find that, since the energy density of partons formed in lead-lead collisions
at the LHC is about 500 GeV/fm^3, the rate of absorption for one of these
black holes is much smaller than the rate of evaporation. Hence, we argue that
black holes formed in such collisions will decay very quickly and will not
absorb very many nearby partons. More precisely, we show that for the black
hole mass to increase via parton absorption at the LHC, the typical energy
density of quarks and gluons would have to be of the order of 10^{10}
GeV/fm^3. As the LHC will not be able to produce such a high-energy-density
partonic system, the black hole will not be able to absorb a sufficient number
of nearby partons before it decays. The typical lifetime of a black hole
formed at the LHC is found to be a small fraction of a fm/c.
Comment: 7 pages latex (double column), 3 eps figures
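The competition quoted above can be reproduced at the order-of-magnitude level with a 4D Schwarzschild toy estimate (greybody factors and species counting dropped; the paper itself would use higher-dimensional black holes, so this is only a sketch of the scaling):

```python
# Crude comparison of parton absorption vs Hawking evaporation for a black
# hole with horizon radius of order 1/TeV. 4D toy estimate, O(1) factors dropped.
import math

hbarc = 0.1973               # GeV*fm
r_h   = hbarc / 1000.0       # r_H ~ 1/TeV expressed in fm (assumption)
rho   = 500.0                # parton energy density, GeV/fm^3 (from the abstract)

# Absorption: geometric cross section sweeping through the medium at v ~ c.
absorb = math.pi * r_h**2 * rho                      # GeV per fm/c

# Evaporation: Stefan-Boltzmann radiation at the 4D Hawking temperature.
T    = hbarc / (4.0 * math.pi * r_h)                 # GeV
evap = (math.pi**2 / 60.0) * (T / hbarc)**3 * T * 4.0 * math.pi * r_h**2

print(f"absorb ~ {absorb:.1e}, evaporate ~ {evap:.1e} GeV/(fm/c)")
print(f"density to balance ~ {rho * evap / absorb:.1e} GeV/fm^3")
```

The balance density comes out around 10^9-10^10 GeV/fm^3, consistent with the order of magnitude quoted in the abstract.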
Technological Change in Economic Models of Environmental Policy: A Survey
This paper provides an overview of the treatment of technological change in economic models of environmental policy. Numerous economic modeling studies have confirmed the sensitivity of mid- and long-run climate change mitigation cost and benefit projections to assumptions about technology costs. In general, technical progress is considered to be a noneconomic, exogenous variable in global climate change modeling. However, there is overwhelming evidence that technological change is not an exogenous variable but is to an important degree endogenous, induced by needs and pressures. Hence, some environment-economy models treat technological change as endogenous, responding to socio-economic variables. Three main elements in models of technological innovation are: (i) corporate investment in research and development, (ii) spillovers from R&D, and (iii) technology learning, especially learning-by-doing. The incorporation of induced technological change in different types of environmental-economic models tends to reduce the costs of environmental policy, accelerates abatement, and may lead to positive spillovers and negative leakage.
Temporal change in maternal dietary intake during pregnancy and lactation between and within 2 pregnancy cohorts assembled in the United Kingdom
Background: The association between maternal and infant dietary exposures and risk of allergic disease development is an area of considerable scientific uncertainty.
Objective: This study aims to compare dietary habits during pregnancy and lactation in two pre-birth cohorts from the same location approximately 10 years apart, a timeframe characterised by changes in government dietary advice.
Methods: The FAIR cohort is an unselected birth cohort born in 2001-2002; the 3rd generation cohort was born in 2010-2018. Both cohorts were established on the Isle of Wight (UK) to investigate the prevalence of allergic diseases. Nutrition and allergy data were collected prospectively from recruitment and throughout the infants' early life. Here we present dietary data collected in the third trimester of pregnancy and at three months of age. Differences between cohorts were tested using t-tests, Wilcoxon rank-sum tests, chi-squared tests, and Fisher's exact tests.
Results: Data were available for 1331 participants (969 FAIR and 362 3rd generation). The proportion of mothers who reported excluding peanuts during pregnancy was significantly lower in the 3rd generation cohort than in the FAIR cohort (16.0% vs. 55.6%, p < 0.01). Cohort membership, primiparity, and maternal education were significantly associated with excluding peanuts during pregnancy (p < 0.01). The proportion of mothers who reported excluding any foods during breastfeeding was also significantly lower in the 3rd generation cohort than in the FAIR cohort (22.8% vs. 43.4%, p < 0.01).
Conclusion: Maternal exclusion of peanut during pregnancy was lower among mothers giving birth in 2012-2018 than among mothers giving birth in 2001-2002.
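For illustration, the headline chi-squared comparison can be reconstructed approximately from the quoted percentages and cohort sizes (an assumed reconstruction, not the authors' analysis code):

```python
# Approximate chi-squared test of peanut exclusion during pregnancy,
# FAIR cohort vs 3rd generation cohort, from the reported summary numbers.
from scipy.stats import chi2_contingency

n_fair, n_3g = 969, 362
excl_fair = round(0.556 * n_fair)   # ~539 FAIR mothers excluded peanuts
excl_3g   = round(0.160 * n_3g)     # ~58 3rd-generation mothers

table = [[excl_fair, n_fair - excl_fair],
         [excl_3g,   n_3g - excl_3g]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p << 0.01, as reported
```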
Measurement of the Proton Asymmetry Parameter C in Neutron Beta Decay
The proton asymmetry parameter C in neutron decay describes the correlation
between neutron spin and proton momentum. In this Letter, the first measurement
of this quantity is presented. The result C=-0.2377(26) agrees with the
Standard Model expectation. The coefficient C provides an additional parameter
for new and improved Standard Model tests. From a differential analysis of the
same data (assuming the Standard Model), we obtain lambda = -1.275(16) as the
ratio of the axial-vector and vector coupling constants.
Comment: 4 pages, 2 figures
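As a consistency check, in the Standard Model C follows from lambda via the Jackson-Treiman-Wyld correlation coefficients, C = -x_C (A + B); the kinematic factor x_C ~ 0.27484 is taken from the neutron-decay literature (an assumption here, not stated in the abstract):

```python
# Standard Model relation between the proton asymmetry C and lambda = g_A/g_V.
lam = -1.275                                      # from the same data set
x_C = 0.27484                                     # kinematic factor (literature value)

A = -2.0 * (lam**2 + lam) / (1.0 + 3.0 * lam**2)  # beta asymmetry
B =  2.0 * (lam**2 - lam) / (1.0 + 3.0 * lam**2)  # neutrino asymmetry
C = -x_C * (A + B)                                # equals 4*x_C*lam/(1+3*lam^2)
print(f"A = {A:.3f}, B = {B:.3f}, C = {C:.4f}")   # C ~ -0.239 vs measured -0.2377(26)
```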
Two-Modality Mammography May Confer an Advantage Over Either Full-Field Digital Mammography or Screen-Film Mammography
To compare the cancer detection rate and area under the ROC curve of full-field digital mammography, screen-film mammography, and a combined technique that allowed diagnosis if a finding was suspicious on film, on digital, or both.
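A toy sketch of why the combined reading can raise sensitivity: a case is called positive if it is suspicious on either modality, so sensitivities combine by inclusion-exclusion (all numbers below are invented for illustration, not the study's data):

```python
# OR-combination of two screening modalities (illustrative numbers only).
sens_film    = 0.70   # assumed P(suspicious on film | cancer)
sens_digital = 0.75   # assumed P(suspicious on digital | cancer)
both         = 0.60   # assumed P(suspicious on both | cancer)

# Inclusion-exclusion: P(film OR digital) = s_f + s_d - P(both)
sens_combined = sens_film + sens_digital - both
print(f"combined sensitivity ~ {sens_combined:.2f}")  # 0.85, above either alone
```

The usual trade-off is lower specificity (more recalls), which is why such comparisons also track the ROC area under the curve rather than detection rate alone.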
