    Online Deniability for Multiparty Protocols with Applications to Externally Anonymous Authentication

    In the problem of anonymous authentication (Boneh et al. CCS 1999), a sender wishes to authenticate a message to a given recipient in a way that preserves anonymity: the recipient does not know the identity of the sender and is only assured that the sender belongs to some authorized set. Although solutions to the problem exist (for example, using ring signatures, e.g. Naor, Crypto 2002), they provide no security when the anonymity set is a singleton. This work is motivated by the question of whether any type of anonymity is possible in this scenario. It turns out that we can still protect the identity of all senders (authorized or not) if we shift our concern from preventing the identity information from being revealed to the recipient to preventing it from being revealed to an external entity other than the recipient. We define a natural functionality which provides such guarantees, and we denote it F_{eaa}, for externally anonymous authenticated channel. We argue that any realization of F_{eaa} must be deniable in the sense of Dodis et al. TCC 2009. To prove the deniability of similar primitives, previous work defined ad hoc notions of deniability for each task, and each notion was then shown equivalent to realizing the primitive in the Generalized Universal Composability framework (GUC, Canetti et al. TCC 2007). Instead, we put forward the question of whether deniability can be defined independently of any particular task. We answer this question in the affirmative, providing a natural extension of the definition of Dodis et al. to arbitrary multiparty protocols. Furthermore, we show that a protocol satisfies this definition if and only if it realizes the ideal functionality F_{eaa} in the GUC framework. This result enables us to prove that most GUC functionalities we are aware of (and their realizations) are deniable. We conclude by applying our results to the construction of a deniable protocol that realizes F_{eaa}.
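
    To make these guarantees concrete, the following is a minimal sketch in Python of the interface F_{eaa} promises each party. The class name, method, and "views" are illustrative assumptions, not the paper's formal UC-style definition.

        # Illustrative sketch only: what each party learns from F_eaa.
        class Feaa:
            """Externally anonymous authenticated channel (hypothetical sketch)."""

            def __init__(self, authorized):
                self.authorized = set(authorized)  # identities allowed to authenticate

            def send(self, sender, message):
                # External observers learn at most that *a* message was sent;
                # they never learn any sender's identity, authorized or not.
                observer_view = "message-sent"
                if sender in self.authorized:
                    # The recipient learns the message and that some authorized
                    # party sent it -- but not which one.
                    recipient_view = (message, "sender-is-authorized")
                else:
                    # Unauthorized senders fail to authenticate, yet their
                    # identity stays hidden from the recipient as well.
                    recipient_view = None
                return recipient_view, observer_view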

    Long-term follow-up of IPEX syndrome patients after different therapeutic strategies: an international multicenter retrospective study

    Background: Immunodysregulation polyendocrinopathy enteropathy X-linked (IPEX) syndrome is a monogenic autoimmune disease caused by FOXP3 mutations. Because it is a rare disease, the natural history and response to treatments, including allogeneic hematopoietic stem cell transplantation (HSCT) and immunosuppression (IS), have not been thoroughly examined. Objective: This analysis sought to evaluate disease onset, progression, and long-term outcome of the 2 main treatments in long-term IPEX survivors. Methods: Clinical histories of 96 patients with genetically proven IPEX syndrome were collected from 38 institutions worldwide and retrospectively analyzed. To investigate possible factors suitable to predict the outcome, an organ involvement (OI) scoring system was developed. Results: We confirm neonatal onset with enteropathy, type 1 diabetes, and eczema. In addition, we found less common manifestations in patients with delayed onset or during disease evolution. There is no correlation between the site of mutation and the disease course or outcome, and the same genotype can present with variable phenotypes. HSCT patients (n = 58) had a median follow-up of 2.7 years (range, 1 week to 15 years). Patients receiving chronic IS (n = 34) had a median follow-up of 4 years (range, 2 months to 25 years). The overall survival after HSCT was 73.2% (95% CI, 59.4-83.0) and after IS was 65.1% (95% CI, 62.8-95.8). The pretreatment OI score was the only significant predictor of overall survival after transplant (P = .035) but not under IS. Conclusions: Patients receiving chronic IS were hampered by disease recurrence or complications, impacting long-term disease-free survival. When performed in patients with a low OI score, HSCT resulted in disease resolution with better quality of life, independent of age, donor source, or conditioning regimen.

    Improving topological cluster reconstruction using calorimeter cell timing in ATLAS

    Clusters of topologically connected calorimeter cells around cells with large absolute signal-to-noise ratio (topo-clusters) are the basis for calorimeter signal reconstruction in the ATLAS experiment. Topological cell clustering has proven performant in LHC Runs 1 and 2. It is, however, susceptible to out-of-time pile-up of signals from soft collisions outside the 25 ns proton-bunch-crossing window associated with the event's hard collision. To reduce this effect, a calorimeter-cell timing criterion was added to the signal-to-noise ratio requirement in the clustering algorithm. Multiple versions of this criterion were tested by reconstructing hadronic signals in simulated events and Run 2 ATLAS data. The preferred version is found to reduce the out-of-time pile-up jet multiplicity by ∼50% for jet pT ∼ 20 GeV and by ∼80% for jet pT ∼ 50 GeV, while not disrupting the reconstruction of hadronic signals of interest, and to improve the jet energy resolution by up to 5% for 20 GeV < pT < 30 GeV. Pile-up is also suppressed for other physics objects based on topo-clusters (electrons, photons, τ-leptons), reducing the overall event size on disk by about 6% in early Run 3 pile-up conditions. Offline reconstruction for Run 3 includes the timing requirement.
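
    As a rough illustration of the change, the Python sketch below adds a timing requirement to the signal-to-noise seeding cut. The threshold values and Cell fields are illustrative assumptions, not the calibrated ATLAS settings.

        from dataclasses import dataclass

        @dataclass
        class Cell:
            energy: float  # calorimeter cell signal [GeV]
            noise: float   # expected noise for this cell [GeV]
            time: float    # reconstructed cell time w.r.t. the bunch crossing [ns]

        def can_seed_cluster(cell, snr_cut=4.0, time_cut_ns=12.5):
            """Seed a topo-cluster only for cells with a large absolute
            signal-to-noise ratio that are also in time with the event's
            hard collision, suppressing out-of-time pile-up."""
            return abs(cell.energy) / cell.noise > snr_cut and abs(cell.time) < time_cut_ns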

    Software Performance of the ATLAS Track Reconstruction for LHC Run 3

    Charged particle reconstruction in the presence of many simultaneous proton–proton (pp) collisions at the LHC is a challenging task for the ATLAS experiment's reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to promptly reconstruct high-activity collisions, with an average of 50 or more simultaneous pp interactions per bunch crossing (pile-up), using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 pp collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, with no significant reduction in reconstruction efficiency and with the rate of combinatorial fake tracks reduced by more than a factor of two.

    Measurement and interpretation of same-sign W boson pair production in association with two jets in pp collisions at √s = 13 TeV with the ATLAS detector

    This paper presents the measurement of fiducial and differential cross sections for both the inclusive and electroweak production of a same-sign W-boson pair in association with two jets (W±W±jj) using 139 fb−1 of proton-proton collision data recorded at a centre-of-mass energy of √s = 13 TeV by the ATLAS detector at the Large Hadron Collider. The analysis is performed by selecting two same-charge leptons (electrons or muons) and at least two jets with large invariant mass and a large rapidity difference. The measured fiducial cross sections for electroweak and inclusive W±W±jj production are 2.92 ± 0.22 (stat.) ± 0.19 (syst.) fb and 3.38 ± 0.22 (stat.) ± 0.19 (syst.) fb, respectively, in agreement with Standard Model predictions. The measurements are used to constrain anomalous quartic gauge couplings by extracting 95% confidence level intervals on dimension-8 operators. A search for doubly charged Higgs bosons H±± that are produced in vector-boson fusion processes and decay into a same-sign W-boson pair is performed. The largest deviation from the Standard Model occurs for an H±± mass near 450 GeV, with a global significance of 2.5 standard deviations.
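
    For orientation, if the statistical and systematic components of the electroweak measurement are treated as uncorrelated and combined in quadrature (a common convention, though not stated in the abstract), the total uncertainty works out to:

        \Delta\sigma_{\mathrm{tot}} = \sqrt{0.22^2 + 0.19^2}\,\mathrm{fb} \approx 0.29\,\mathrm{fb},
        \qquad \sigma_{\mathrm{EW}} \approx (2.92 \pm 0.29)\,\mathrm{fb}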

    Performance and calibration of quark/gluon-jet taggers using 140 fb⁻¹ of pp collisions at √s=13 TeV with the ATLAS detector

    The identification of jets originating from quarks and gluons, often referred to as quark/gluon tagging, plays an important role in various analyses performed at the Large Hadron Collider, as Standard Model measurements and searches for new particles decaying to quarks often rely on suppressing a large gluon-induced background. This paper describes the measurement of the efficiencies of quark/gluon taggers developed within the ATLAS Collaboration, using √s = 13 TeV proton–proton collision data with an integrated luminosity of 140 fb⁻¹ collected by the ATLAS experiment. Two taggers that perform well at rejecting gluon-initiated jets while retaining quark-initiated jets are studied: one is based on requirements on the number of inner-detector tracks associated with the jet, and the other combines several jet substructure observables using a boosted decision tree. A method is established to determine the quark/gluon fractions in data using quark/gluon-enriched subsamples defined by jet pseudorapidity. Differences in tagging efficiency between data and simulation are provided for jets with transverse momentum between 500 GeV and 2 TeV and for multiple tagger working points.
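
    As a rough illustration of the first tagger, the Python sketch below tags a jet as quark-like when it has few associated inner-detector tracks (gluon-initiated jets radiate more and tend to contain more charged particles). The pT-dependent threshold is an illustrative assumption, not the calibrated ATLAS working point.

        def is_quark_like(n_tracks, jet_pt_gev):
            """Tag a jet as quark-initiated if its associated inner-detector
            track multiplicity falls below a jet-pT-dependent threshold."""
            max_tracks = 10.0 + 8.0 * (jet_pt_gev / 1000.0)  # loose illustrative scaling
            return n_tracks < max_tracks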

    Search for non-resonant Higgs boson pair production in the 2b+2l+ETmiss final state in pp collisions at √s = 13 TeV with the ATLAS detector

    A search for non-resonant Higgs boson pair (HH) production is presented, in which one of the Higgs bosons decays to a b-quark pair (bb̄) and the other decays to WW*, ZZ*, or τ+τ−, yielding in each case a final state with l+l− plus neutrinos (l = e, μ). The analysis separately targets the gluon–gluon fusion and vector-boson fusion production modes. Data recorded by the ATLAS detector in proton-proton collisions at a centre-of-mass energy of 13 TeV at the Large Hadron Collider, corresponding to an integrated luminosity of 140 fb−1, are used in this analysis. Events are selected to have exactly two b-tagged jets, two leptons with opposite electric charge, and missing transverse momentum in the final state. These events are classified using multivariate analysis algorithms to separate HH events from other Standard Model processes. No evidence of the signal is found. The observed (expected) upper limit on the cross-section for non-resonant Higgs boson pair production is determined to be 9.7 (16.2) times the Standard Model prediction at 95% confidence level. The Higgs boson self-interaction coupling parameter κλ and the quadrilinear coupling parameter κ2V are each separately constrained by this analysis to be within the ranges [−6.2, 13.3] and [−0.17, 2.4], respectively, at 95% confidence level, when all other parameters are fixed.
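
    To illustrate the selection just described, the Python sketch below encodes the preselection: exactly two b-tagged jets, two opposite-charge light leptons, and missing transverse momentum. The types, fields, and MET threshold are illustrative assumptions rather than the analysis' actual cuts.

        from dataclasses import dataclass

        @dataclass
        class Lepton:
            flavour: str  # "e" or "mu"
            charge: int   # +1 or -1

        @dataclass
        class Event:
            n_btagged_jets: int
            leptons: list   # list of Lepton
            met_gev: float  # missing transverse momentum [GeV]

        def passes_preselection(ev):
            return (
                ev.n_btagged_jets == 2
                and len(ev.leptons) == 2
                and all(l.flavour in ("e", "mu") for l in ev.leptons)
                and ev.leptons[0].charge * ev.leptons[1].charge < 0  # opposite charge
                and ev.met_gev > 20.0  # illustrative MET requirement
            )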

    Azimuthal Angle Correlations of Muons Produced via Heavy-Flavor Decays in 5.02 TeV Pb + Pb and pp Collisions with the ATLAS Detector

    Combination of searches for heavy spin-1 resonances using 139 fb−1 of proton-proton collision data at √s = 13 TeV with the ATLAS detector

    A combination of searches for new heavy spin-1 resonances decaying into different pairings of W, Z, or Higgs bosons, as well as directly into leptons or quarks, is presented. The data sample used corresponds to 139 fb−1 of proton-proton collisions at √s = 13 TeV collected during 2015–2018 with the ATLAS detector at the CERN Large Hadron Collider. Analyses selecting quark pairs (qq, bb, tt̄, and tb) or third-generation leptons (τν and ττ) are included in such a combination for the first time. A simplified model predicting a spin-1 heavy vector-boson triplet is used. Cross-section limits are set at the 95% confidence level and are compared with predictions for the benchmark model. These limits are also expressed in terms of constraints on couplings of the heavy vector-boson triplet to quarks, leptons, and the Higgs boson. The complementarity of the various analyses increases the sensitivity to new physics, and the resulting constraints are stronger than those from any individual analysis considered. The data exclude a heavy vector-boson triplet with mass below 5.8 TeV in a weakly coupled scenario, below 4.4 TeV in a strongly coupled scenario, and up to 1.5 TeV in the case of production via vector-boson fusion.