Measurement of overall insecticidal effects in experimental hut trials
BACKGROUND: The 'overall insecticidal effect' is a key measure used to evaluate public health pesticides for indoor use in experimental hut trials. It depends on the proportion of mosquitoes that are killed out of those that enter the treated hut, intrinsic mortality in the control hut, and the ratio of mosquitoes entering the treatment hut to those entering the control hut. This paper critically examines the way the effect is defined, and discusses how it can be used to infer effectiveness of intervention programmes.
FINDINGS: The overall insecticidal effect, as defined by the World Health Organization in 2006, can be negative when deterrence from entering the treated hut is high, even if all mosquitoes that enter are killed, wrongly suggesting that the insecticide enhances mosquito survival. Also, in the absence of deterrence, even if the insecticide kills all mosquitoes in the treatment hut, the insecticidal effect is less than 100% unless intrinsic mortality is nil. A proposed alternative definition of the overall insecticidal effect has the desirable range of 0 to 1 (100%), provided mortality among non-repelled mosquitoes in the treated hut is at least the corresponding mortality in the control hut. This definition can be built upon to formulate the coverage-dependent insecticidal effectiveness of an intervention programme. Coverage-dependent population protection against feeding can be formulated similarly.
CONCLUSIONS: This paper shows that the 2006 recommended quantity for measuring the overall insecticidal effect is problematic, and proposes an alternative quantity with more desirable properties.
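The pathology described in the findings is easy to reproduce numerically. Below is a minimal sketch, assuming the commonly quoted form of the 2006 WHO quantity, (Kt − Ku)/Tu, where Kt is the number of mosquitoes killed in the treated hut, Ku the number killed in the control hut, and Tu the number entering the control hut; all counts are invented for illustration.

```python
# Illustrative sketch of the 2006 WHO overall insecticidal effect,
# assumed here to be (Kt - Ku) / Tu as described above.

def who_2006_effect(killed_treated, killed_control, entered_control):
    """Overall insecticidal effect as a fraction of control-hut entries."""
    return (killed_treated - killed_control) / entered_control

# High deterrence: only 5 mosquitoes enter the treated hut and ALL are
# killed; the control hut sees 100 entries with 10% intrinsic mortality.
effect = who_2006_effect(killed_treated=5, killed_control=10,
                         entered_control=100)
print(f"{effect:+.0%}")  # -5%: negative despite every entrant being killed

# No deterrence, insecticide kills all 100 entrants: the effect is still
# only 90%, below 100% because intrinsic mortality is non-zero.
print(f"{who_2006_effect(100, 10, 100):+.0%}")  # +90%
```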
Reducing combinatorial uncertainties: A new technique based on MT2 variables
We propose a new method to resolve combinatorial ambiguities in hadron collider events involving two invisible particles in the final state. The method is based on the kinematic variable MT2 and on the MT2-assisted-on-shell reconstruction of invisible momenta, which are reformulated as 'test' variables Ti of the correct combination against the incorrect ones. We show how the efficiency of a single Ti in providing the correct answer can be systematically improved by combining the different Ti and/or by introducing cuts on suitable, combination-insensitive kinematic variables. We illustrate the whole approach in the specific example of top anti-top production, followed by a leptonic decay of the W on both sides. By construction, however, the method is also directly applicable to many topologies of interest for new physics, in particular events producing a pair of undetected particles that are potential dark-matter candidates. Finally, we emphasize that the method admits several generalizations, which we outline in the last sections of the paper.
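As a concrete illustration of the test-variable idea, here is a minimal, self-contained Python sketch. The brute-force grid minimiser, the zero visible masses, and the toy momenta are simplifying assumptions made here for brevity; this is not the authors' implementation.

```python
import numpy as np

def mt_sq(p_vis, m_vis, p_inv):
    """Transverse mass^2 of a visible system against a massless invisible."""
    et_vis = np.sqrt(m_vis**2 + p_vis @ p_vis)
    et_inv = np.linalg.norm(p_inv)
    return m_vis**2 + 2.0 * (et_vis * et_inv - p_vis @ p_inv)

def mt2(p_a, m_a, p_b, m_b, met, steps=150):
    """Brute-force MT2: minimise max(mT_a, mT_b) over all splittings of
    the missing transverse momentum between the two invisibles."""
    best = np.inf
    scale = 2.0 * np.linalg.norm(met) + 100.0  # generous scan range (GeV)
    for r in np.linspace(0.0, scale, steps):
        for phi in np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False):
            q_a = np.array([r * np.cos(phi), r * np.sin(phi)])
            q_b = met - q_a
            best = min(best, max(mt_sq(p_a, m_a, q_a), mt_sq(p_b, m_b, q_b)))
    return np.sqrt(max(best, 0.0))

# Toy dileptonic ttbar event: transverse momenta (GeV) of b-jets and leptons.
b1, b2 = np.array([60.0, 10.0]), np.array([-45.0, 30.0])
l1, l2 = np.array([35.0, -20.0]), np.array([-25.0, -40.0])
met = -(b1 + b2 + l1 + l2)  # toy missing transverse momentum

# Test variable: MT2 of each b-lepton pairing (visible masses dropped for
# brevity).  For the correct assignment MT2 is bounded by the top mass, so
# the smaller-MT2 pairing is preferred and very large values can be vetoed.
for name, (va, vb) in {"(b1,l1)(b2,l2)": (b1 + l1, b2 + l2),
                       "(b1,l2)(b2,l1)": (b1 + l2, b2 + l1)}.items():
    print(name, round(mt2(va, 0.0, vb, 0.0, met), 1), "GeV")
```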
General analysis of signals with two leptons and missing energy at the Large Hadron Collider
A signal of two leptons and missing energy is challenging to analyze at the Large Hadron Collider (LHC), since it offers only a few kinematic handles. This signature generally arises from pair production of heavy charged particles that each decay into a lepton and a weakly interacting stable particle. Here this class of processes is analyzed with minimal model assumptions, by considering all possible combinations of spin 0, 1/2 or 1, and of weak iso-singlets, -doublets or -triplets for the new particles. Adding to existing work on mass and spin measurements, two new variables for spin determination and an asymmetry for determining the couplings of the new particles are introduced. It is shown that these observables allow one to independently determine the spin and the couplings of the new particles, except for a few cases that turn out to be indistinguishable at the LHC. These findings are corroborated by the results of an alternative analysis strategy based on an automated likelihood test.
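The abstract does not spell out the new spin variables or the coupling asymmetry, so the sketch below only illustrates the generic shape of such an estimator: a counting asymmetry A = (N₊ − N₋)/(N₊ + N₋) over some combination-insensitive observable, together with its binomial uncertainty. The toy distribution is an assumption for illustration.

```python
import numpy as np

# Toy sample standing in for whatever kinematic observable is used.
rng = np.random.default_rng(0)
observable = rng.normal(loc=0.1, scale=1.0, size=10_000)

n_plus = int(np.sum(observable > 0.0))
n_minus = observable.size - n_plus
asym = (n_plus - n_minus) / observable.size
# Binomial uncertainty on a counting asymmetry: sqrt((1 - A^2) / N).
err = np.sqrt((1.0 - asym**2) / observable.size)
print(f"A = {asym:.3f} +/- {err:.3f}")
```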
Ghrelin causes hyperphagia and obesity in rats.
Ghrelin, a circulating growth hormone–releasing peptide derived from the stomach, stimulates food intake. The lowest systemically effective orexigenic dose of ghrelin was investigated and the resulting plasma ghrelin concentration was compared with that during fasting. The lowest dose of ghrelin that produced a significant stimulation of feeding after intraperitoneal injection was 1 nmol. The plasma ghrelin concentration after intraperitoneal injection of 1 nmol of ghrelin (2.83 ± 0.13 pmol/ml at 60 min postinjection) was not significantly different from that occurring after a 24-h fast (2.79 ± 0.32 pmol/ml). After microinjection into defined hypothalamic sites, ghrelin (30 pmol) stimulated food intake most markedly in the arcuate nucleus (Arc) (0–1 h food intake, 427 ± 43% of control; P <
Combination of searches for Higgs boson pairs in pp collisions at √s = 13 TeV with the ATLAS detector
This letter presents a combination of searches for Higgs boson pair production using up to 36.1 fb⁻¹ of proton–proton collision data at a centre-of-mass energy √s = 13 TeV recorded with the ATLAS detector at the LHC. The combination is performed using six analyses searching for Higgs boson pairs decaying into the bb̄bb̄, bb̄W⁺W⁻, bb̄τ⁺τ⁻, W⁺W⁻W⁺W⁻, bb̄γγ and W⁺W⁻γγ final states. Results are presented for non-resonant and resonant Higgs boson pair production modes. No statistically significant excess in data above the Standard Model predictions is found. The combined observed (expected) limit at 95% confidence level on the non-resonant Higgs boson pair production cross-section is 6.9 (10) times the predicted Standard Model cross-section. Limits are also set on the ratio (κλ) of the Higgs boson self-coupling to its Standard Model value. This ratio is constrained at 95% confidence level in observation (expectation) to −5.0 < κλ < 12.0 (−5.8 < κλ < 12.0). In addition, limits are set on the production of narrow scalar resonances and spin-2 Kaluza–Klein Randall–Sundrum gravitons. Exclusion regions are also provided in the parameter space of the habemus Minimal Supersymmetric Standard Model (hMSSM) and the Electroweak Singlet Model.
Measurement of W± boson production in Pb+Pb collisions at √sNN = 5.02 TeV with the ATLAS detector
A measurement of W± boson production in Pb+Pb collisions at √sNN = 5.02 TeV is reported, using data recorded by the ATLAS experiment at the LHC in 2015 and corresponding to a total integrated luminosity of 0.49 nb⁻¹. The W± bosons are reconstructed in the electron or muon leptonic decay channels. Production yields of leptonically decaying W± bosons, normalised by the total number of minimum-bias events and the nuclear thickness function, are measured within a fiducial region defined by the detector acceptance and the main kinematic requirements. These normalised yields are measured separately for W⁺ and W⁻ bosons, and are presented as a function of the absolute value of the pseudorapidity of the charged lepton and of the collision centrality. The lepton charge asymmetry is also measured as a function of the absolute value of the lepton pseudorapidity. In addition, nuclear modification factors are calculated using the W± boson production cross-sections measured in pp collisions. The results are compared with predictions based on next-to-leading-order calculations with CT14 parton distribution functions, as well as with predictions obtained with the EPPS16 and nCTEQ15 nuclear parton distribution functions. No dependence of the normalised production yields on centrality, and good agreement with predictions, are observed for mid-central and central collisions. For peripheral collisions, the data agree with predictions within 1.7 (0.9) standard deviations for W⁻ (W⁺) bosons.
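For readers unfamiliar with the normalisation, the sketch below spells out the two quantities named above: the yield per minimum-bias event divided by the mean nuclear thickness ⟨T_AA⟩, and the nuclear modification factor obtained by comparing it with the pp cross-section. All input numbers are placeholders, not ATLAS measurements.

```python
def normalised_yield(n_w, n_minbias, t_aa):
    """W yield per minimum-bias event, normalised by <T_AA> (in mb^-1);
    the result carries units of a cross-section (mb)."""
    return n_w / (n_minbias * t_aa)

def nuclear_modification(norm_yield, sigma_pp):
    """R_AA-style ratio to the pp cross-section; unity means no
    nuclear modification."""
    return norm_yield / sigma_pp

# Placeholder inputs: 1.2e4 W candidates, 2e8 minimum-bias events,
# <T_AA> = 6.0 mb^-1 for some centrality class, sigma_pp = 1.0e-5 mb.
y = normalised_yield(1.2e4, 2.0e8, 6.0)
print(nuclear_modification(y, 1.0e-5))  # -> 1.0, i.e. no modification
```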
Search for flavour-changing neutral currents in processes with one top quark and a photon using 81 fb⁻¹ of pp collisions at √s = 13 TeV with the ATLAS experiment
A search for flavour-changing neutral current (FCNC) events via the coupling of a top quark, a photon, and an up or charm quark is presented using 81 fb⁻¹ of proton–proton collision data taken at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. Events with a photon, an electron or muon, a b-tagged jet, and missing transverse momentum are selected. A neural network based on kinematic variables differentiates between events from signal and background processes. The data are consistent with the background-only hypothesis, and limits are set on the strength of the tqγ coupling in an effective field theory. These are also interpreted as 95% CL upper limits on the cross section for FCNC tγ production via a left-handed (right-handed) tuγ coupling of 36 fb (78 fb) and on the branching ratio for t → γu of 2.8 × 10⁻⁵ (6.1 × 10⁻⁵). In addition, they are interpreted as 95% CL upper limits on the cross section for FCNC tγ production via a left-handed (right-handed) tcγ coupling of 40 fb (33 fb) and on the branching ratio for t → γc of 22 × 10⁻⁵ (18 × 10⁻⁵).
Measurement of the Z(→ℓ⁺ℓ⁻)γ production cross-section in pp collisions at √s = 13 TeV with the ATLAS detector
The production of a prompt photon in association with a Z boson is studied in proton-proton collisions at a centre-of-mass energy √s = 13 TeV. The analysis uses a data sample with an integrated luminosity of 139 fb⁻¹ collected by the ATLAS detector at the LHC from 2015 to 2018. The production cross-section for the process pp → ℓ⁺ℓ⁻γ + X (ℓ = e, μ) is measured within a fiducial phase-space region defined by kinematic requirements on the photon and the leptons, and by isolation requirements on the photon. An experimental precision of 2.9% is achieved for the fiducial cross-section. Differential cross-sections are measured as a function of each of six kinematic variables characterising the ℓ⁺ℓ⁻γ system. The data are compared with theoretical predictions based on next-to-leading-order and next-to-next-to-leading-order perturbative QCD calculations. The impact of next-to-leading-order electroweak corrections is also considered.
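To make the quoted fiducial measurement concrete, here is the textbook form of a fiducial cross-section estimate, σ_fid = (N_obs − N_bkg)/(C · L), where C corrects for detector efficiency and resolution inside the fiducial region. This is a generic sketch, not necessarily the exact procedure of this analysis, and all inputs are placeholders.

```python
def fiducial_xsec(n_obs, n_bkg, correction, lumi_fb):
    """Fiducial cross-section in fb, given a detector correction factor C
    and an integrated luminosity in fb^-1."""
    return (n_obs - n_bkg) / (correction * lumi_fb)

# Placeholder counts and correction factor, for illustration only.
print(fiducial_xsec(n_obs=70_000, n_bkg=10_000, correction=0.8,
                    lumi_fb=139.0))  # -> cross-section in fb
```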
Does ‘bigger’ mean ‘better’? Pitfalls and shortcuts associated with big data for social research
‘Big data is here to stay.’ This key statement has a double value: it is an assumption as well as the reason why a theoretical reflection is needed. Furthermore, Big data is gaining visibility and success even in the social sciences, overcoming the division between the humanities and computer science. This contribution outlines some considerations on the presence, and the certain persistence, of Big data as a socio-technical assemblage, and then develops the intriguing opportunities for social research linked to this interaction between practices and technological development. However, despite a promissory rhetoric fostered by several scholars since the birth of Big data as a labelled concept, some risks are just around the corner. The claims for the methodological power of ever bigger datasets, as well as of increasing speed in analysis and data collection, are creating a real hype in social research, and particular attention is needed in order to avoid some pitfalls. These risks are analysed with respect to the validity of research results obtained through Big data. After this pars destruens, the contribution concludes with a pars construens: taking the previous critiques into account, a mixed methods research design is described as a general proposal, with the objective of stimulating a debate on the integration of Big data into complex research projects.
