Strong thermal leptogenesis and the absolute neutrino mass scale
We show that successful strong thermal leptogenesis, where the final
asymmetry is independent of the initial conditions and in particular a large
pre-existing asymmetry is efficiently washed out, favours values of the lightest neutrino mass for normal ordering (NO) and for inverted ordering (IO) for models with orthogonal matrix entries respecting . We show
analytically why lower values of require a high level of fine tuning in the seesaw formula and/or in the flavoured decay parameters (the electronic one for NO, the muonic one for IO). We also show how this constraint exists thanks to the measured values of the neutrino mixing angles and can be tightened by a future determination of the Dirac phase. Our analysis also allows us to place more stringent constraints for a specific model or class of models, such as
-inspired models, and shows that some models cannot realise strong
thermal leptogenesis for any value of . A scatter plot analysis fully
supports the analytical results. We also briefly discuss the interplay with absolute neutrino mass scale experiments, concluding that in the coming years they will be able either to corner strong thermal leptogenesis or to find positive signals pointing to a non-vanishing . Since the constraint is much stronger for NO than for IO, it is very important that new data from planned neutrino oscillation experiments be able to resolve the ambiguity. Comment: 22 pages; 7 figures; v2: matches JCAP version
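The abstract refers to "the seesaw formula" and to "orthogonal matrix entries"; as a reminder of the standard notation only (the paper's own conventions may differ), the type-I seesaw relation and the orthogonal Casas–Ibarra parametrisation on which such analyses are usually based read

$$ m_\nu = -\, m_D\, M_R^{-1}\, m_D^{T}, \qquad m_D = U\,\sqrt{D_m}\;\Omega\;\sqrt{D_M}, \qquad \Omega\,\Omega^{T} = \mathbb{1}, $$

where $U$ is the PMNS mixing matrix, $D_m = \mathrm{diag}(m_1, m_2, m_3)$ contains the light-neutrino masses (the lightest being the quantity constrained above) and $D_M = \mathrm{diag}(M_1, M_2, M_3)$ the heavy right-handed neutrino masses; the flavoured decay parameters mentioned in the text are built from the entries of $m_D$.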
A micro-macro homogenization for modeling the masonry out-of-plane response
This study introduces a finite element model based on a two-scale beam-to-beam homogenization procedure for the analysis of masonry structural members undergoing prevailing axial and bending stress states. The model is developed considering the periodic repetition of bricks and mortar joints in a regular stack bond arrangement, assuming a linear elastic behavior for the former and a nonlinear response for the latter. At the microscopic heterogeneous scale, the behavior of a Unit Cell (UC) made of a single brick and mortar layer is described through an equivalent Timoshenko beam representation, where a nonlocal damage formulation with friction plasticity governs the mortar nonlinear constitutive relationship. Based on a semi-analytical approach, the microscopic quantities are then homogenized to define an equivalent beam model at the macroscopic scale. The proposed finite element model is implemented in standard numerical codes to investigate the response of typical one-dimensional (1D) masonry elements. The study presents numerical simulations of two experimental tests: a rectangular wallette under out-of-plane bending and a circular arch under vertical forces. The results obtained with the proposed model are compared with those of micromechanical approaches and with the experimental outcomes.
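The abstract does not spell out the macroscopic constitutive law; purely as orientation (notation mine, not the paper's), the equivalent Timoshenko beam at the macroscale relates the generalized stresses to the axial strain, curvature and shear strain through stiffnesses homogenized from the brick–mortar Unit Cell,

$$ N = \overline{EA}\,\varepsilon_0, \qquad M = \overline{EI}\,\chi, \qquad T = \overline{GA}_s\,\gamma, $$

where the overlined quantities follow from the semi-analytical homogenization of the UC response and degrade as the damage–friction model of the mortar joints evolves.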
Super-crystals in composite ferroelectrics
As atoms and molecules condense to form solids, a crystalline state can emerge with its highly ordered geometry and subnanometric lattice constant. In some physical systems, such as ferroelectric perovskites, a perfect crystalline structure forms even when the condensing substances are non-stoichiometric. The resulting solids have compositional disorder and complex macroscopic properties, such as giant susceptibilities and non-ergodicity. Here, we observe the spontaneous formation of a cubic structure in composite ferroelectric potassium–lithium–tantalate–niobate with micrometric lattice constant, 10^4 times larger than that of the underlying perovskite lattice. The 3D effect is observed in specifically designed samples in which the substitutional mixture varies periodically along one specific crystal axis. Laser propagation indicates a coherent polarization super-crystal that produces an optical analogue of X-ray diffractometry, an ordered mesoscopic state of matter with important implications for critical phenomena and applications in miniaturized 3D optical technologies.
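The "optical analogue of X-ray diffractometry" follows directly from the Bragg condition: diffraction requires a wavelength comparable to the lattice spacing, so a micrometric super-lattice does for visible light what the subnanometric perovskite lattice does for X-rays (order-of-magnitude sketch, not taken from the paper):

$$ 2 d \sin\theta = n\lambda \;\Rightarrow\; \lambda \lesssim 2d, \qquad d \sim 1\ \mu\mathrm{m} \rightarrow \lambda \sim 0.5\ \mu\mathrm{m}\ (\text{visible light}), \qquad d \sim 0.4\ \mathrm{nm} \rightarrow \lambda \sim 0.1\ \mathrm{nm}\ (\text{X-rays}). $$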
Effective Edge-Fault-Tolerant Single-Source Spanners via Best (or Good) Swap Edges
Computing \emph{all best swap edges} (ABSE) of a spanning tree $T$ of a given $n$-vertex and $m$-edge undirected and weighted graph $G$ means to select, for each edge $e$ of $T$, a corresponding non-tree edge $f$, in such a way that the tree obtained by replacing $e$ with $f$ enjoys some optimality criterion (which is naturally defined according to some objective function originally addressed by $T$). Solving an ABSE problem efficiently is by now a classic algorithmic issue, since it conveys a very successful way of coping with a (transient) \emph{edge failure} in tree-based communication networks: just replace the failing edge with its respective swap edge, so that connectivity is promptly reestablished while minimizing the rerouting and set-up costs. In this paper, we solve the ABSE problem for the case in which $T$ is a \emph{single-source shortest-path tree} of $G$, and our two selected swap
criteria aim to minimize either the \emph{maximum} or the \emph{average
stretch} in the swap tree of all the paths emanating from the source. Having
these criteria in mind, the obtained structures can then be viewed as
\emph{edge-fault-tolerant single-source spanners}. For them, we propose two
efficient algorithms running in and time, respectively, and we show that the guaranteed (either
maximum or average, respectively) stretch factor is equal to 3, and this is
tight. Moreover, for the maximum stretch, we also propose an almost linear time algorithm computing a set of \emph{good} swap edges,
each of which will guarantee a relative approximation factor on the maximum
stretch of (tight) as opposed to that provided by the corresponding BSE.
Surprisingly, no previous results were known for these two very natural swap problems. Comment: 15 pages, 4 figures, SIROCCO 201
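To make the ABSE setting concrete, the following is a minimal brute-force sketch of the max-stretch criterion (names and structure are mine, not the paper's; this naive enumeration re-runs Dijkstra per candidate and is nowhere near the time bounds claimed above, which is precisely what the paper improves on):

```python
# Brute-force best swap edge for a failed edge of a single-source
# shortest-path tree, minimizing the maximum stretch of source-to-vertex
# paths. Hypothetical helper code, assuming positive weights and hashable,
# comparable vertex labels.
import heapq

def dijkstra(adj, src):
    """adj: {u: [(v, w), ...]}; returns shortest-path distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_swap_edge(adj, tree_edges, failed, source):
    d0 = dijkstra(adj, source)            # = SPT distances, since T is an SPT
    kept = set(map(frozenset, tree_edges)) - {frozenset(failed)}
    candidates = [(u, v) for u in adj for v, _ in adj[u] if u < v
                  and frozenset((u, v)) not in kept
                  and frozenset((u, v)) != frozenset(failed)]
    best = None
    for u, v in candidates:
        swap_adj = {x: [] for x in adj}   # adjacency of the candidate swap tree
        for e in kept | {frozenset((u, v))}:
            a, b = tuple(e)
            w = next(wt for y, wt in adj[a] if y == b)
            swap_adj[a].append((b, w))
            swap_adj[b].append((a, w))
        d1 = dijkstra(swap_adj, source)
        if len(d1) < len(adj):            # this swap does not reconnect the tree
            continue
        stretch = max(d1[x] / d0[x] for x in adj if x != source)
        if best is None or stretch < best[0]:
            best = (stretch, (u, v))
    return best                           # (max stretch, swap edge) or None
```

Repeating this for every tree edge yields all best swap edges; the guaranteed factor of 3 cited in the abstract bounds how large the best such stretch can get.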
Simultaneous Extraction of the Fermi constant and PMNS matrix elements in the presence of a fourth generation
Several recent studies performed on constraints of a fourth generation of
quarks and leptons suffer from the ad-hoc assumption that 3 x 3 unitarity holds
for the first three generations in the neutrino sector. Only under this
assumption is one able to determine the Fermi constant G_F from the muon
lifetime measurement with the claimed precision of G_F = 1.16637 (1) x 10^-5
GeV^-2. We study how well G_F can be extracted within the framework of four
generations from leptonic and radiative mu and tau decays, as well as from K_l3
decays and leptonic decays of charged pions, and we discuss the role of lepton
universality tests in this context. We emphasize that constraints on a fourth
generation from quark and lepton flavour observables and from electroweak
precision observables can only be obtained in a consistent way if these three
sectors are considered simultaneously. In the combined fit to leptonic and
radiative mu and tau decays, K_l3 decays and leptonic decays of charged pions
we find a p-value of 2.6% for the fourth generation matrix element |U_{e4}| = 0 of the neutrino mixing matrix. Comment: 19 pages, 3 figures with 16 subfigures, references and text added referring to earlier related work, figures and text in the discussion section added, results and conclusions unchanged
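The abstract's point about 3 x 3 unitarity can be made explicit with a textbook-level sketch (not the paper's own formulae; it assumes the fourth-generation neutrino is too heavy to be emitted in muon decay): the muon lifetime measures

$$ \frac{1}{\tau_\mu} = \frac{G_\mu^{2}\, m_\mu^{5}}{192\,\pi^{3}}\,(1 + \Delta q), \qquad G_\mu = G_F\,\sqrt{\big(1 - |U_{e4}|^{2}\big)\big(1 - |U_{\mu 4}|^{2}\big)}, $$

so with a fourth generation the quantity extracted from the muon lifetime is $G_\mu$ rather than $G_F$ itself, and $G_F$ can only be determined simultaneously with the mixing elements $|U_{e4}|$ and $|U_{\mu 4}|$.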
Trophy hunting certification
Adaptive certification is the best remaining option for the trophy hunting industry in Africa to demonstrate sustainable and ethical hunting practices that benefit local communities and wildlife conservation.
Survival of dental implants in patients with oral cancer treated by surgery and radiotherapy: a retrospective study
BACKGROUND:
The aim of this retrospective study was to evaluate the survival of dental implants placed after ablative surgery in patients affected by oral cancer, treated with or without radiotherapy.
METHODS:
We collected data for 34 subjects (22 females, 12 males; mean age: 51 ± 19) with malignant oral tumors who had been treated with ablative surgery and received dental implant rehabilitation between 2007 and 2012. Postoperative radiation therapy (less than 50 Gy) was delivered before implant placement in 12 patients. A total of 144 titanium implants were placed, at a minimum interval of 12 months, in irradiated and non-irradiated residual bone.
RESULTS:
Implant loss was dependent on the position and location of the implants (P = 0.05–0.1). Moreover, implant survival was dependent on whether the patient had received radiotherapy; this result was highly statistically significant (P < 0.01). Whether the implant was loaded was another highly significant (P < 0.01) factor determining implant survival.
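As an illustration only (the counts below are invented placeholders, not the study's data), the kind of comparison behind the quoted P-values, survival of implants in irradiated versus non-irradiated bone, can be run as a chi-square test on a 2 x 2 contingency table:

```python
# Hypothetical example of testing implant survival against radiotherapy
# status; the numbers are placeholders, not taken from the study.
from scipy.stats import chi2_contingency

# rows: irradiated bone, non-irradiated bone; columns: surviving, lost implants
table = [[40, 8],
         [92, 4]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```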
Sampling Mechanism for Low Gravity Bodies
In future exploration missions to low-gravity bodies (e.g. a Mars moon or a near-Earth asteroid) it is planned to collect more than 100 grams of soil and return it to Earth. In previous studies several sampling tools have been proposed, but no single sampling technology for low-gravity bodies has been specifically conceived to collect material in every envisaged situation. Low-gravity bodies indeed present peculiar conditions that need to be taken into account during the design and test of sampling and sample handling systems. Primarily, the very reduced gravity limits the thrust reaction capability in support of drilling operations; although reaction forces can be provided by spacecraft anchoring or by thrust reversal, these operative conditions could limit the effectiveness of the sampling action. An alternative solution is to exploit the forces naturally arising from spacecraft momentum inversion, which can be achieved by 'touch and go' techniques (as performed, e.g., in the Hayabusa mission). Although the short duration of the contact with the soil limits the sampling depth and the collectable soil types, a properly designed sampling system is required to conclude the operation with great effectiveness. Over the last three years an ESA-funded study has been carried out, and a fully functional sampling mechanism for "touch and go" sampling on a low-gravity body has been selected, designed and breadboarded. Based on the results of several Proof-of-Principle models tested on different types of specimens, and on the analysis performed with a dynamic simulation model of the sampling action, a device implementing the most promising sampling technique has been designed and manufactured. It has then been tested under ambient conditions using various kinds of asteroid soil simulants. The paper summarizes the key aspects and the main achievements of the study.
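An order-of-magnitude illustration (all numbers below are assumptions, not from the study) of why reduced gravity limits the thrust reaction available for drilling, and why a brief touch-and-go contact can nevertheless transmit a usable force:

```python
# Hypothetical numbers: spacecraft mass and asteroid surface gravity are
# assumed values chosen only to show the orders of magnitude involved.
m_sc = 500.0          # spacecraft mass [kg] (assumed)
g_asteroid = 1e-4     # surface gravity of a small body [m/s^2] (typical order)
g_earth = 9.81        # [m/s^2]

print(f"weight on asteroid: {m_sc * g_asteroid:.3f} N")   # ~0.05 N of free preload
print(f"weight on Earth:    {m_sc * g_earth:.0f} N")      # ~4905 N for comparison

# Reversing a 0.1 m/s descent during a ~1 s touch-and-go contact requires an
# average contact force F = m * dv / dt, far larger than the available weight.
dv, dt = 2 * 0.1, 1.0
print(f"average touch-and-go contact force: {m_sc * dv / dt:.0f} N")
```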
Determinants of postnatal spleen tissue regeneration and organogenesis
The spleen is an organ that filters the blood and is responsible for generating blood-borne immune responses. It is also an organ with a remarkable capacity to regenerate. Techniques for splenic auto-transplantation have emerged to take advantage of this characteristic and rebuild spleen tissue in individuals undergoing splenectomy. While this procedure has been performed for decades, the underlying mechanisms controlling spleen regeneration have remained elusive. Insights into secondary lymphoid organogenesis and the roles of stromal organiser cells and lymphotoxin signalling in lymph node development have helped reveal similar requirements for spleen regeneration. These factors are now considered in the regulation of embryonic and postnatal spleen formation, and in the establishment of mature white pulp and marginal zone compartments which are essential for spleen-mediated immunity. A greater understanding of the cellular and molecular mechanisms which control spleen development will assist in the design of more precise and efficient tissue grafting methods for spleen regeneration on demand. Regeneration of organs which harbour functional white pulp tissue will also offer novel opportunities for effective immunotherapy against cancer as well as infectious diseases.