7,937 research outputs found

    Quantum Circuits for the Unitary Permutation Problem

    We consider the Unitary Permutation problem: given n unitary gates U_1, ..., U_n and a permutation σ of {1, ..., n}, apply the unitary gates in the order specified by σ, i.e. perform U_{σ(n)} ... U_{σ(1)}. This problem was introduced and investigated by Colnaghi et al., who consider two models of computation. The first is the (standard) model of query complexity: the complexity measure is the number of calls to any of the unitary gates U_i in a quantum circuit which solves the problem. The second model provides quantum switches and treats unitary transformations as second-order inputs; in that case the complexity measure is the number of quantum switches. Colnaghi et al. showed that the problem can be solved with n^2 calls in the query model and n(n-1)/2 quantum switches in the new model. We refine these results by proving that n log_2(n) + Θ(n) quantum switches are necessary and sufficient to solve this problem, whereas n^2 - 2n + 4 calls suffice in the standard quantum circuit model. We prove, under an additional assumption on the family of gates used in the circuits, that n^2 - o(n^{7/4+ε}) queries are required, for any ε > 0. The upper and lower bounds for the standard quantum circuit model are established by pointing out connections with the permutation-as-substring problem introduced by Karp. Comment: 8 pages, 5 figures
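    For concreteness, the task whose cost is being counted can be sketched directly; the function name and the 0-indexed convention below are ours, not the paper's:

```python
import numpy as np

def apply_in_order(gates, sigma):
    """Product U_{sigma(n)} ... U_{sigma(1)}: gates[sigma[0]] acts first.

    `gates` is a list of n unitary matrices; `sigma` is a permutation
    of 0..n-1 (0-indexed here for convenience).
    """
    d = gates[0].shape[0]
    result = np.eye(d, dtype=complex)
    for k in sigma:                 # later factors multiply on the left
        result = gates[k] @ result
    return result

# Two single-qubit gates, applied in swapped order: H first, then X.
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = apply_in_order([X, H], [1, 0])  # equals X @ H
```

    Note that a circuit hard-wiring this product uses each gate once; the difficulty in the paper is that σ is an input, so a single fixed circuit must realize all n! orderings, which is where the n^2 and n log_2(n) counts arise.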

    Different DNA End Configurations Dictate Which NHEJ Components Are Most Important for Joining Efficiency

    The nonhomologous DNA end-joining (NHEJ) pathway is a key mechanism for repairing dsDNA breaks, which occur often in eukaryotic cells. In the simplest model, these breaks are first recognized by Ku, which then interacts with other NHEJ proteins to improve their affinity at DNA ends. These include DNA-PKcs and Artemis for trimming the DNA ends; DNA polymerases μ and λ to add nucleotides; and the DNA ligase IV complex to ligate the ends, together with the additional factors XRCC4 (X-ray repair cross-complementing protein 4), XLF (XRCC4-like factor/Cernunnos), and PAXX (paralog of XRCC4 and XLF). In vivo studies have demonstrated the degrees of importance of these NHEJ proteins in the repair of dsDNA breaks, but interpretations can be confounded by other cellular processes. In vitro studies with NHEJ proteins have evaluated the nucleolytic resection, polymerization, and ligation steps, but a complete system has been elusive. Here we have developed an NHEJ reconstitution system that includes the nuclease, polymerase, and ligase components to evaluate relative NHEJ efficiency and to analyze ligated junctional sequences for various types of DNA ends, including blunt, 5' overhangs, and 3' overhangs. We find that different dsDNA end structures depend differentially on these enzymatic components. The dependence of some end joining on only Ku and XRCC4·DNA ligase IV allows us to formulate a physical model that incorporates nuclease and polymerase components as needed. Funding: National Institutes of Health; Cancer Research UK (Programme Grant IDs: C6/A11224, C6946/A14492); Wellcome Trust (Grant IDs: WT092096, WT093167).

    A Novel Approach for Ellipsoidal Outer-Approximation of the Intersection Region of Ellipses in the Plane

    In this paper, a novel technique for tight outer-approximation of the intersection region of a finite number of ellipses in 2-dimensional (2D) space is proposed. First, the vertices of a tight polygon that contains the convex intersection of the ellipses are found in an efficient manner. To do so, the intersection points of the ellipses that fall on the boundary of the intersection region are determined, and a set of points is generated on the elliptic arcs connecting every two neighbouring intersection points. By finding the tangent lines to the ellipses at the extended set of points, a set of half-planes is obtained whose intersection forms a polygon. To find the polygon more efficiently, the points are given an order and the intersection of the half-planes corresponding to every two neighbouring points is calculated. If the polygon is convex and bounded, these calculated points, together with the initially obtained intersection points, form its vertices. If the polygon is non-convex or unbounded, this situation is detected, additional discrete points are generated only on the elliptical arc segment causing the issue, and the algorithm is restarted to obtain a bounded and convex polygon. Finally, the smallest-area ellipse that contains the vertices of the polygon is obtained by solving a convex optimization problem. Numerical experiments illustrate that the proposed technique returns a tighter outer-approximation of the intersection of multiple ellipses than conventional techniques, at only slightly higher computational cost.
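    The tangent-line step can be illustrated in the simplest case of an axis-aligned ellipse centred at the origin (general ellipses additionally require a rotation and translation); the function below is a hypothetical sketch of that step, not the authors' code:

```python
import numpy as np

def tangent_halfplane(a, b, p):
    """Half-plane bounded by the tangent to x^2/a^2 + y^2/b^2 = 1 at the
    boundary point p = (x0, y0).  The tangent line is
    x*x0/a^2 + y*y0/b^2 = 1, and the ellipse lies in normal @ q <= 1."""
    x0, y0 = p
    normal = np.array([x0 / a**2, y0 / b**2])
    return normal, 1.0

# Tangent to the unit circle at (1, 0) is the vertical line x = 1;
# intersecting such half-planes over many boundary points yields the polygon.
n_vec, c = tangent_halfplane(1.0, 1.0, (1.0, 0.0))
```

    Intersecting the half-planes returned for an ordered set of boundary points gives the bounding polygon described above.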

    Higgs production in CP-violating supersymmetric cascade decays: probing the `open hole' at the Large Hadron Collider

    A benchmark CP-violating supersymmetric scenario (known as the 'CPX scenario' in the literature) is studied in the context of the Large Hadron Collider (LHC). It is shown that the LHC, with low to moderate accumulated luminosity, will be able to probe the existing 'hole' in the m_{h_1}-tan β plane, which cannot be ruled out by the LEP data. We explore this parameter space via cascade decays of third-generation squarks and gluinos with CP-violating decay branching fractions. We propose a multi-channel analysis to probe this parameter space, some channels of which are background free at an integrated luminosity of 5-10 fb^{-1}. In particular, multi-lepton final states (3ℓ, 4ℓ and like-sign di-lepton) are almost background free and have a 5σ reach for the corresponding signals with very early LHC data at both 14 TeV and 7 TeV centre-of-mass energy. Comment: 24 pages, 9 figures, references added as in the journal version

    Risk factors for race-day fatality in flat racing Thoroughbreds in Great Britain (2000 to 2013)

    A key focus of the racing industry is to reduce the number of race-day events where horses die suddenly or are euthanased due to catastrophic injury. The objective of this study was therefore to determine risk factors for race-day fatalities in Thoroughbred racehorses, using a cohort of all horses participating in flat racing in Great Britain between 2000 and 2013. Horse-, race- and course-level data were collected and combined with all race-day fatalities recorded by racecourse veterinarians in a central database. Associations between exposure variables and fatality were assessed using logistic regression analyses for (1) all starts in the dataset and (2) starts made on turf surfaces only. There were 806,764 starts in total, of which 548,571 were on turf. A total of 610 fatalities were recorded, 377 (61.8%) of them on turf. In both regression models, increased firmness of the going, increasing race distance, increasing average horse performance, first year of racing and wearing eye cover for the first time all increased the odds of fatality. Generally, the odds of fatality also increased with increasing horse age, whereas an increasing number of previous starts reduced the odds. In the 'all starts' model, horses racing in an auction race had 1.46 (95% confidence interval (CI) 1.06–2.01) times the odds of fatality compared with horses not racing in this race type. In the turf starts model, horses racing in Group 1 races had 3.19 (95% CI 1.71–5.93) times the odds of fatality compared with horses not racing in this race type. Identification of novel risk factors, including wearing eye cover and race type, will help to inform strategies to further reduce the rate of fatality in flat racing, enhancing horse and jockey welfare and safety.
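    The odds ratios and confidence intervals quoted above are, as is standard, exponentiated logistic-regression coefficients. A minimal sketch of that arithmetic follows; the coefficient and standard error are illustrative inputs, not values reported by the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% confidence interval from a logistic-regression
    coefficient `beta` with standard error `se`."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

odds, lo, hi = odds_ratio_ci(0.378, 0.163)  # illustrative values only
```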

    Computer simulation of syringomyelia in dogs

    Syringomyelia is a pathological condition in which fluid-filled cavities (syringes) form and expand in the spinal cord. Syringomyelia is often linked with obstruction of the craniocervical junction and a Chiari malformation, which is similar in both humans and animals. Some brachycephalic toy breeds, such as the Cavalier King Charles Spaniel (CKCS), are particularly predisposed. The exact mechanism of syrinx formation is undetermined and, in the absence of a clinical explanation, engineers and mathematicians have resorted to computer models to identify possible physical mechanisms that can lead to syringes. We developed a computer model of the spinal cavity of a CKCS suffering from a large syrinx. The model was excited at the cranial end to simulate the movement of the cerebrospinal fluid (CSF) and the spinal cord caused by the shift of blood volume in the cranium over the cardiac cycle. To simulate the normal condition, the movement was prescribed to the CSF; to simulate the pathological condition, the movement of the CSF was blocked.

    Naturalness bounds in extensions of the MSSM without a light Higgs boson

    Adopting a bottom-up point of view, we make a comparative study of the simplest extensions of the MSSM with extra tree-level contributions to the lightest Higgs boson mass. We show to what extent a relatively heavy Higgs boson, up to 200-350 GeV, can be compatible with data and naturalness. The price to pay is that the theory undergoes some change of regime at a relatively low scale. Bounds on these models come from electroweak precision tests and naturalness, which often requires the scale at which the soft terms are generated to be relatively low. Comment: 18 pages, 5 figures. v2: minor revision, added references. v3, v4: some numerical corrections

    Solving the mu problem with a heavy Higgs boson

    We discuss the generation of the mu-term in a class of supersymmetric models characterized by a low-energy effective superpotential containing a term lambda S H_1 H_2 with a large coupling lambda ~ 2. These models generically predict a lightest Higgs boson well above the LEP limit of 114 GeV and have been shown to be compatible with the unification of gauge couplings. Here we discuss a specific example where the superpotential has no dimensionful parameters, and we point out the relation between the generated mu-term and the mass of the lightest Higgs boson. We discuss the fine-tuning of the model and find that the generation of a phenomenologically viable mu-term fits very well with a heavy lightest Higgs boson and a low degree of fine-tuning. We discuss experimental constraints from direct collider searches, precision data, the thermal relic dark matter abundance, and WIMP searches, finding that the most natural region of the parameter space is still allowed by current experiments. We analyse bounds on the masses of the superpartners coming from naturalness arguments and discuss the main signatures of the model for the LHC and future WIMP searches. Comment: Extended discussion of the LHC phenomenology, as published in JHEP, plus an addendum on the existence of further extremal points of the potential. 47 pages, 16 figures

    Fine Tuning in General Gauge Mediation

    We study the fine-tuning problem in the context of general gauge mediation. Numerical analyses aimed at relaxing fine-tuning are presented. We analyse the problem for three typical messenger scales: the GUT scale (2×10^{16} GeV), an intermediate scale (10^{10} GeV), and a relatively low scale (10^6 GeV). For each messenger scale, a region of parameter space reducing the degree of tuning to around 10% is found. Certain ratios among the gluino mass, wino mass and soft scalar masses are favorable. It is shown that the favorable region narrows as the messenger scale is lowered, and that tachyonic initial conditions for the stop masses at the messenger scale are favored to relax the fine-tuning problem at the relatively low messenger scale. Our spectra would also be important from the viewpoint of the μ-B problem. Comment: 22 pages, 16 figures, comments added

    Minimal Gaugomaly Mediation

    Mixed anomaly and gauge mediation ("gaugomaly" mediation) gives a natural solution to the SUSY flavor problem with a conventional LSP dark matter candidate. We present a minimal version of gaugomaly mediation where the messenger masses arise directly from anomaly mediation, automatically generating a messenger scale of order 50 TeV. We also describe a simple relaxation mechanism that gives rise to realistic mu and B mu terms. B is naturally dominated by the anomaly-mediated contribution from top loops, so the mu/B mu sector depends on only a single new parameter. In the minimal version of this scenario the full SUSY spectrum is determined by two continuous parameters (the anomaly- and gauge-mediated SUSY breaking masses) and one discrete parameter (the number of messengers). We show that these simple models can give realistic spectra with viable dark matter. Comment: 18 pages, 4 figures; v2: corrected example generating non-holomorphic Kähler term