321 research outputs found

    Nonparametric relevance-shifted multiple testing procedures for the analysis of high-dimensional multivariate data with small sample sizes

    Background: In many research areas it is necessary to find differences between treatment groups with respect to several variables. For example, studies of microarray data seek, for each variable, a significant difference in location parameters from zero, or from one in terms of ratios. However, in some studies a significant deviation of the difference in locations from zero (or of the ratio from one) is biologically meaningless; a relevant difference or ratio is sought in such cases. Results: This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered, so the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas shifting the null hypothesis alone would give a straightforward solution, the difficulties motivating the empirical considerations discussed here arise because the shift is applied in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis. Conclusion: The first procedure uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes; in that case the second procedure may be more appropriate, where multiplicity is corrected according to a concept of data-driven ordering of hypotheses.
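
    The shift-and-permute idea in this abstract can be illustrated with a small sketch. The code below is a generic permutation max-statistic procedure for relevance-shifted ratio hypotheses on the log scale, not the authors' exact algorithms; the relevance threshold rho, the Welch-type statistic, and the use of the unshifted permutation distribution are assumptions made for this illustration (the abstract itself notes that handling the whole interval null is the nontrivial part).

```python
# Illustrative sketch only, not the procedures proposed in the paper.
import numpy as np

def relevance_shifted_maxT(x, y, rho=1.5, n_perm=5000, alpha=0.05, seed=None):
    """x, y: (samples x variables) arrays of positive measurements.
    Per-variable null hypothesis: 1/rho <= mean(x)/mean(y) <= rho (tested on the log scale)."""
    rng = np.random.default_rng(seed)
    lx, ly = np.log(x), np.log(y)
    shift = np.log(rho)

    def welch_t(a, b, delta):
        # Welch-type statistic for mean(a) - mean(b) - delta, computed per variable
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
        return (a.mean(axis=0) - b.mean(axis=0) - delta) / se

    # Reject when the log-ratio lies outside [-log(rho), +log(rho)]:
    # take the larger of the two one-sided relevance-shifted statistics.
    t_obs = np.maximum(welch_t(lx, ly, shift), -welch_t(lx, ly, -shift))

    pooled = np.vstack([lx, ly])
    n1 = len(lx)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))
        # max over variables of the unshifted statistic, used here as a
        # (conservative) stand-in for the least favourable null configuration
        max_null[i] = np.abs(welch_t(pooled[idx[:n1]], pooled[idx[n1:]], 0.0)).max()

    # Westfall-Young-style adjusted p-values from the permutation max distribution
    p_adj = (1 + (max_null[:, None] >= t_obs).sum(axis=0)) / (n_perm + 1)
    return t_obs, p_adj, p_adj <= alpha
```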

    The effects of timing of fine needle aspiration biopsies on gene expression profiles in breast cancers

    Background: DNA microarray analysis has great potential to become an important clinical tool for individualizing prognostication and treatment for breast cancer patients. However, as with any emerging technology, there are many variables to consider before bringing it to the bedside. There are already concerted efforts to standardize protocols and to improve the reproducibility of DNA microarrays. Our study examines one variable that is often overlooked, the timing of tissue acquisition, which may have a significant impact on the outcomes of DNA microarray analyses, especially in studies that compare microarray data based on biospecimens taken in vivo and ex vivo. Methods: From 16 patients, we obtained paired fine needle aspiration biopsies (FNABs) of breast cancers taken before (PRE) and after (POST) surgery and compared the microarray data to determine the genes that were differentially expressed between the FNABs taken at the two time points. qRT-PCR was used to validate our findings. To examine the effects of longer exposure to hypoxia on gene expression, we also compared the gene expression profiles of 10 breast cancers from a clinical tissue bank. Results: Using hierarchical clustering analysis, 12 genes were found to be differentially expressed between the FNABs taken before and after surgical removal. Remarkably, most of these genes were linked to FOS in an early hypoxia pathway. FOS expression also increased with longer exposure to hypoxia. Conclusion: Our study demonstrates that the timing of fine needle aspiration biopsies can be a confounding factor in microarray data analyses in breast cancer. We have shown that FOS-related genes, which have been implicated in early hypoxia as well as in the development of breast cancers, were differentially expressed before and after surgery. It is therefore important that future studies take the timing of tissue acquisition into account.
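
    As a hedged illustration of a paired PRE/POST differential-expression screen of the kind described above (the study itself relied on hierarchical clustering with qRT-PCR validation), the sketch below applies a per-gene paired t-test with Benjamini-Hochberg false-discovery-rate control; the matrix layout and the threshold are assumptions made for this example.

```python
# Illustrative paired PRE/POST screen; not the study's actual pipeline.
import numpy as np
from scipy import stats

def paired_de(pre, post, alpha=0.05):
    """pre, post: (patients x genes) log2-expression matrices, rows paired by patient."""
    _, p = stats.ttest_rel(post, pre, axis=0)        # paired t-test per gene
    m = len(p)
    order = np.argsort(p)
    # Benjamini-Hochberg step-up: largest k with p_(k) <= alpha * k / m
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True
    log2_fold_change = post.mean(axis=0) - pre.mean(axis=0)
    return log2_fold_change, p, significant
```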

    Functional Comparison of Innate Immune Signaling Pathways in Primates

    Humans respond differently than other primates to a large number of infections. Differences in susceptibility to infectious agents between humans and other primates are probably due to inter-species differences in immune response to infection. Consistent with that notion, genes involved in immunity-related processes are strongly enriched among recent targets of positive selection in primates, suggesting that immune responses evolve rapidly, yet providing only indirect evidence for possible inter-species functional differences. To directly compare immune responses among primates, we stimulated primary monocytes from humans, chimpanzees, and rhesus macaques with lipopolysaccharide (LPS) and studied the ensuing time-course regulatory responses. We find that, while the universal Toll-like receptor response is mostly conserved across primates, the regulatory response associated with viral infections is often lineage-specific, probably reflecting rapid host–virus mutual adaptation cycles. Additionally, human-specific immune responses are enriched for genes involved in apoptosis, as well as for genes associated with cancer and with susceptibility to infectious diseases or immune-related disorders. Finally, we find that chimpanzee-specific immune signaling pathways are enriched for HIV–interacting genes. Put together, our observations lend strong support to the notion that lineage-specific immune responses may help explain known inter-species differences in susceptibility to infectious diseases
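
    The enrichment statements in this abstract (e.g. human-specific responses enriched for apoptosis genes, chimpanzee-specific pathways enriched for HIV-interacting genes) are typically assessed with a hypergeometric test; the sketch below shows that generic calculation with placeholder gene lists, and is not the study's actual analysis code.

```python
# Generic gene-set enrichment calculation; gene lists are placeholders.
from scipy.stats import hypergeom

def enrichment_p(background, response_set, category_set):
    """P(overlap >= observed) when drawing the response set at random from the background."""
    background = set(background)
    response = set(response_set) & background
    category = set(category_set) & background
    k = len(response & category)          # observed overlap
    M, n, N = len(background), len(category), len(response)
    return hypergeom.sf(k - 1, M, n, N)   # survival function at k-1 gives P(X >= k)

# Hypothetical usage with placeholder identifiers:
# p = enrichment_p(all_tested_genes, human_specific_genes, apoptosis_genes)
```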

    Measurements of differential cross-sections in top-quark pair events with a high transverse momentum top quark and limits on beyond the Standard Model contributions to top-quark pair production with the ATLAS detector at √s = 13 TeV

    Cross-section measurements of top-quark pair production where the hadronically decaying top quark has transverse momentum greater than 355 GeV and the other top quark decays into ℓνb are presented using 139 fb−1 of data collected by the ATLAS experiment during proton-proton collisions at the LHC. The fiducial cross-section at √s = 13 TeV is measured to be σ = 1.267 ± 0.005 ± 0.053 pb, where the uncertainties reflect the limited number of data events and the systematic uncertainties, giving a total uncertainty of 4.2%. The cross-section is measured differentially as a function of variables characterising the tt̄ system and additional radiation in the events. The results are compared with various Monte Carlo generators, including comparisons where the generators are reweighted to match a parton-level calculation at next-to-next-to-leading order. The reweighting improves the agreement between data and theory. The measured distribution of the top-quark transverse momentum is used to search for new physics in the context of the effective field theory framework. No significant deviation from the Standard Model is observed and limits are set on the Wilson coefficients of the dimension-six operators OtG and Otq(8), where the limits on the latter are the most stringent to date
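
    As a quick cross-check of the quoted numbers, the statistical and systematic components of the fiducial cross-section combine in quadrature to the stated 4.2% total uncertainty:

```python
import math

sigma, stat, syst = 1.267, 0.005, 0.053          # pb, values from the abstract
total = math.hypot(stat, syst)                   # quadrature sum of the two components
print(f"{total:.3f} pb -> {100 * total / sigma:.1f}% of the measured cross-section")
# prints "0.053 pb -> 4.2% ...", i.e. the total is dominated by the systematic component
```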

    Corrigendum to "Search for flavour-changing neutral-current couplings between the top quark and the photon with the ATLAS detector at √s=13 TeV" (Physics Letters B, 842 (2023), 137379)


    Improving topological cluster reconstruction using calorimeter cell timing in ATLAS

    Clusters of topologically connected calorimeter cells around cells with large absolute signal-to-noise ratio (topo-clusters) are the basis for calorimeter signal reconstruction in the ATLAS experiment. Topological cell clustering has proven performant in LHC Runs 1 and 2. It is, however, susceptible to out-of-time pile-up of signals from soft collisions outside the 25 ns proton-bunch-crossing window associated with the event’s hard collision. To reduce this effect, a calorimeter-cell timing criterion was added to the signal-to-noise ratio requirement in the clustering algorithm. Multiple versions of this criterion were tested by reconstructing hadronic signals in simulated events and Run 2 ATLAS data. The preferred version is found to reduce the out-of-time pile-up jet multiplicity by ∼50% for jet pT ∼ 20 GeV and by ∼80% for jet pT ∼ 50 GeV, while not disrupting the reconstruction of hadronic signals of interest, and improving the jet energy resolution by up to 5% for 20 < pT < 30 GeV. Pile-up is also suppressed for other physics objects based on topo-clusters (electrons, photons, τ-leptons), reducing the overall event size on disk by about 6% in early Run 3 pile-up conditions. Offline reconstruction for Run 3 includes the timing requirement
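
    A minimal sketch of the idea, assuming a simple seed selection in which cells must pass a signal-to-noise threshold and, unless the significance is very large, an out-of-time veto; the specific thresholds and the high-significance exemption are placeholders for illustration, not the tuned ATLAS values:

```python
# Illustrative seed selection with an assumed timing cut; not the ATLAS implementation.
from dataclasses import dataclass

@dataclass
class Cell:
    energy: float      # cell signal [GeV]
    noise: float       # expected noise [GeV]
    time: float        # reconstructed cell time [ns]

def is_seed(cell, snr_seed=4.0, time_window_ns=12.5, high_snr_exempt=50.0):
    """Seed candidate: large |S/N|; the timing window vetoes out-of-time pile-up,
    but very high-significance cells are kept regardless of their time (assumed)."""
    snr = abs(cell.energy) / cell.noise
    if snr < snr_seed:
        return False
    if snr >= high_snr_exempt:
        return True
    return abs(cell.time) < time_window_ns

# Example: an in-time cell just above the seed threshold is accepted,
# an out-of-time one with the same significance is rejected.
print(is_seed(Cell(energy=0.5, noise=0.1, time=3.0)),
      is_seed(Cell(energy=0.5, noise=0.1, time=40.0)))
```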

    Software Performance of the ATLAS Track Reconstruction for LHC Run 3

    Charged particle reconstruction in the presence of many simultaneous proton–proton (pp) collisions at the LHC is a challenging task for the ATLAS experiment’s reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to reconstruct high-activity collisions with an average of 50 or more simultaneous pp interactions per bunch crossing (pile-up) promptly using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 pp collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, without significant reduction in reconstruction efficiency and while reducing the rate of combinatorial fake tracks by more than a factor of two

    Measurement and interpretation of same-sign W boson pair production in association with two jets in pp collisions at √s = 13 TeV with the ATLAS detector

    This paper presents the measurement of fiducial and differential cross sections for both the inclusive and electroweak production of a same-sign W-boson pair in association with two jets (W±W±jj) using 139 fb−1 of proton-proton collision data recorded at a centre-of-mass energy of √s = 13 TeV by the ATLAS detector at the Large Hadron Collider. The analysis is performed by selecting two same-charge leptons (electrons or muons) and at least two jets with large invariant mass and a large rapidity difference. The measured fiducial cross sections for electroweak and inclusive W±W±jj production are 2.92 ± 0.22 (stat.) ± 0.19 (syst.) fb and 3.38 ± 0.22 (stat.) ± 0.19 (syst.) fb, respectively, in agreement with Standard Model predictions. The measurements are used to constrain anomalous quartic gauge couplings by extracting 95% confidence level intervals on dimension-8 operators. A search for doubly charged Higgs bosons H±± that are produced in vector-boson fusion processes and decay into a same-sign W-boson pair is performed. The largest deviation from the Standard Model occurs for an H±± mass near 450 GeV, with a global significance of 2.5 standard deviations

    Performance and calibration of quark/gluon-jet taggers using 140 fb⁻¹ of pp collisions at √s=13 TeV with the ATLAS detector

    The identification of jets originating from quarks and gluons, often referred to as quark/gluon tagging, plays an important role in various analyses performed at the Large Hadron Collider, as Standard Model measurements and searches for new particles decaying to quarks often rely on suppressing a large gluon-induced background. This paper describes the measurement of the efficiencies of quark/gluon taggers developed within the ATLAS Collaboration, using √s=13 TeV proton–proton collision data with an integrated luminosity of 140 fb⁻¹ collected by the ATLAS experiment. Two taggers that perform well at rejecting gluon-initiated jets while retaining quark-initiated jets are studied: one is based on requirements on the number of inner-detector tracks associated with the jet, and the other combines several jet substructure observables using a boosted decision tree. A method is established to determine the quark/gluon fraction in data, using quark/gluon-enriched subsamples defined by the jet pseudorapidity. Differences in tagging efficiency between data and simulation are provided for jets with transverse momentum between 500 GeV and 2 TeV and for multiple tagger working points
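
    For illustration, a track-multiplicity tagger of the kind described above can be sketched as a pT-dependent cut on the number of associated inner-detector tracks; the parameterisation below is a placeholder, not the calibrated ATLAS working point:

```python
# Sketch of a track-multiplicity quark/gluon tagger; thresholds are assumed.
import math

def ntrk_threshold(jet_pt_gev, a=6.0, b=3.0):
    # assumed logarithmic growth of the cut with jet pT
    return a + b * math.log(jet_pt_gev / 500.0 + 1.0)

def is_quark_like(n_tracks, jet_pt_gev):
    # quark-initiated jets tend to have fewer associated tracks than gluon jets
    return n_tracks < ntrk_threshold(jet_pt_gev)

# Example: a 1 TeV jet with 8 associated tracks is labelled quark-like here
print(is_quark_like(8, 1000.0))
```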

    Combination of searches for heavy spin-1 resonances using 139 fb−1 of proton-proton collision data at √s = 13 TeV with the ATLAS detector

    A combination of searches for new heavy spin-1 resonances decaying into different pairings of W, Z, or Higgs bosons, as well as directly into leptons or quarks, is presented. The data sample used corresponds to 139 fb−1 of proton-proton collisions at √s = 13 TeV collected during 2015–2018 with the ATLAS detector at the CERN Large Hadron Collider. Analyses selecting quark pairs (qq, bb, tt̄, and tb) or third-generation leptons (τν and ττ) are included in this kind of combination for the first time. A simplified model predicting a spin-1 heavy vector-boson triplet is used. Cross-section limits are set at the 95% confidence level and are compared with predictions for the benchmark model. These limits are also expressed in terms of constraints on couplings of the heavy vector-boson triplet to quarks, leptons, and the Higgs boson. The complementarity of the various analyses increases the sensitivity to new physics, and the resulting constraints are stronger than those from any individual analysis considered. The data exclude a heavy vector-boson triplet with mass below 5.8 TeV in a weakly coupled scenario, below 4.4 TeV in a strongly coupled scenario, and up to 1.5 TeV in the case of production via vector-boson fusion