
    Looking for Design in Materials Design

    Despite great advances in computation, materials design is still science fiction. The construction of structure-property relations on the quantum scale will turn computational empiricism into true design.

    Comment: 3 pages, 1 figure

    Electrically-controllable RKKY interaction in semiconductor quantum wires

    We demonstrate theoretically that the RKKY interaction can be manipulated all-electrically in a quasi-one-dimensional electron gas embedded in a semiconductor heterostructure, in the presence of Rashba and Dresselhaus spin-orbit interactions. In an undoped semiconductor quantum wire, where intermediate excitations are gapped, the interaction becomes the short-ranged Bloembergen-Rowland super-exchange interaction. Owing to the interplay of the different types of spin-orbit interaction, the coupling can be controlled to realize various spin models, e.g., isotropic and anisotropic Heisenberg-like models and Ising-like models with additional Dzyaloshinsky-Moriya terms, by tuning the external electric field and choosing the crystallographic directions. Such a controllable interaction forms a basis for quantum computing with localized spins and for quantum matter in spin lattices.

    Comment: 5 pages, 1 figure
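    The spin models listed in the abstract can all be viewed as special cases of a generic effective two-spin Hamiltonian combining anisotropic exchange with a Dzyaloshinsky-Moriya term; this generic form is standard, while the dependence of the couplings on the Rashba/Dresselhaus strengths and the electric field is given in the paper itself and not reproduced here:

```latex
H_{\mathrm{eff}}
  = \sum_{\alpha = x,y,z} J_\alpha \, S_1^{\alpha} S_2^{\alpha}
  + \mathbf{D} \cdot \left( \mathbf{S}_1 \times \mathbf{S}_2 \right)
```

    Equal couplings $J_x = J_y = J_z$ give the isotropic Heisenberg model, unequal couplings give the anisotropic or Ising-like limits, and a nonzero vector $\mathbf{D}$ supplies the Dzyaloshinsky-Moriya term.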

    Likelihood Ratio Test process for Quantitative Trait Locus detection

    We consider the likelihood ratio test (LRT) process related to the test of the absence of a QTL (quantitative trait locus, i.e. a gene with a quantitative effect on a trait) on the interval [0,T] representing a chromosome. The observations are the trait and the composition of the genome at some locations called ``markers''. We give the asymptotic distribution of this LRT process under the null hypothesis that there is no QTL on [0,T] and under local alternatives with a QTL at t* on [0,T]. We show that the LRT is asymptotically the square of a Gaussian process, which we describe as a ``non-linear interpolated and normalized process''. We propose a simple method to calculate the maximum of the LRT process using only the statistics on markers and their ratios. This yields a new method to calculate thresholds for QTL detection.
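    The asymptotic result above (the LRT process is the square of a Gaussian process) suggests a Monte Carlo route to detection thresholds: simulate correlated Gaussians at the markers, square them, and take the maximum. The sketch below is illustrative only; the marker positions and the exponential correlation decay are assumptions for this example, not the paper's exact interpolated covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical marker positions (cM) on a chromosome [0, T] -- an assumption.
markers = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])

# Illustrative exponential decay of correlation with genetic distance
# (an assumption for this sketch, not the paper's exact covariance).
rho = np.exp(-0.02 * np.abs(markers[:, None] - markers[None, :]))

# Under H0 the LRT process at the markers is asymptotically the square
# of a centered Gaussian vector with this correlation matrix.
n_sim = 200_000
L = np.linalg.cholesky(rho)
z = rng.standard_normal((n_sim, len(markers))) @ L.T
max_lrt = (z ** 2).max(axis=1)

# 5% chromosome-wide threshold for the supremum of the squared process:
# it lies between the pointwise chi2(1) cutoff and the Bonferroni cutoff.
threshold = np.quantile(max_lrt, 0.95)
print(f"5% threshold: {threshold:.2f}")
```

    The resulting threshold exceeds the pointwise chi-square(1) cutoff of 3.84 but stays below the Bonferroni bound, which is the usual motivation for process-level thresholds in QTL detection.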

    The effects of a Variable IMF on the Chemical Evolution of the Galaxy

    In this work we explore the effects of adopting a time-varying initial mass function (IMF) on the chemical evolution of the Galaxy. To do so, we adopt a chemical evolution model which assumes two main infall episodes for the formation of the Galaxy, and study the effects of different IMFs on such a model. First, we use a theoretical IMF based on the statistical description of the density field arising from random motions in the gas. This IMF is a function of time, as it depends on the physical conditions at the site of star formation. We also investigate the behaviour of the model predictions using other variable IMFs, parameterized as functions of metallicity. Our results show that the theoretical IMF, when applied to our model, depends on time, but this time variation is important only in the early phases of Galactic evolution, when the IMF is biased towards massive stars. We also show that an IMF which is a stronger function of time does not lead to good agreement with the observational constraints, suggesting that if the IMF varied, the variation must have been small. Our main conclusion is that the G-dwarf metallicity distribution is best explained by infall with a long timescale and a constant IMF: although variable IMFs of the kind studied here can reproduce the G-dwarf metallicity distribution, they worsen the agreement with the other observational constraints.

    Comment: 7 pages, to appear in "The Chemical Evolution of the Milky Way: Stars vs Clusters", Vulcano, September 1999, F. Giovannelli and F. Matteucci eds. (Kluwer, Dordrecht), in press

    ASCORE: an up-to-date cardiovascular risk score for hypertensive patients reflecting contemporary clinical practice developed using the (ASCOT-BPLA) trial data.

    A number of risk scores already exist to predict cardiovascular (CV) events. However, scores developed from data collected some time ago may not accurately predict the CV risk of contemporary hypertensive patients, who benefit from more modern treatments and management. Using data from the randomised Anglo-Scandinavian Cardiac Outcomes Trial-BPLA (ASCOT-BPLA), with 15,955 hypertensive patients without previous CV disease receiving contemporary preventive CV management, we developed a new risk score predicting the 5-year risk of a first CV event (CV death, myocardial infarction or stroke). Cox proportional hazards models were used to develop a risk equation from baseline predictors. The final risk model (ASCORE) included age, sex, smoking, diabetes, previous blood pressure (BP) treatment, systolic BP, total cholesterol, high-density lipoprotein cholesterol, fasting glucose and creatinine as baseline variables. A simplified model (ASCORE-S) excluding laboratory variables was also derived. Both models showed very good internal validity. User-friendly integer score tables are reported for both models. Applying the latest Framingham risk score to our data significantly overpredicted the observed 5-year risk of the composite CV outcome. We conclude that risk scores derived from older databases (such as Framingham) may overestimate the CV risk of patients receiving current BP treatments; 'updated' risk scores are therefore needed for current patients.
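    A Cox-model risk score of the kind described turns a patient's baseline variables into a 5-year risk via the baseline survival and the linear predictor. The sketch below shows the standard mechanics only; every coefficient, the assumed baseline survival, and the mean linear predictor are invented for illustration and are NOT the published ASCORE values.

```python
import math

# Hypothetical coefficients -- NOT the published ASCORE values.
coef = {
    "age": 0.06,    # per year
    "male": 0.35,
    "smoker": 0.55,
    "diabetes": 0.60,
    "sbp": 0.012,   # per mmHg of systolic BP
}
baseline_mean_lp = 5.2   # mean linear predictor in derivation data (assumed)
s0_5yr = 0.97            # baseline 5-year survival (assumed)

def five_year_risk(age, male, smoker, diabetes, sbp):
    """Cox-model 5-year event risk: 1 - S0(5) ** exp(lp - mean_lp)."""
    lp = (coef["age"] * age + coef["male"] * male
          + coef["smoker"] * smoker + coef["diabetes"] * diabetes
          + coef["sbp"] * sbp)
    return 1.0 - s0_5yr ** math.exp(lp - baseline_mean_lp)

risk = five_year_risk(age=65, male=1, smoker=1, diabetes=0, sbp=150)
print(f"5-year CV risk: {risk:.1%}")
```

    An integer score table such as ASCORE's is essentially a rounded, tabulated version of this linear predictor, which is what makes the score usable without a calculator.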

    Risk factors for race-day fatality in flat racing Thoroughbreds in Great Britain (2000 to 2013)

    A key focus of the racing industry is to reduce the number of race-day events where horses die suddenly or are euthanased due to catastrophic injury. The objective of this study was therefore to determine risk factors for race-day fatalities in Thoroughbred racehorses, using a cohort of all horses participating in flat racing in Great Britain between 2000 and 2013. Horse-, race- and course-level data were collected and combined with all race-day fatalities, recorded by racecourse veterinarians in a central database. Associations between exposure variables and fatality were assessed using logistic regression analyses for (1) all starts in the dataset and (2) starts made on turf surfaces only. There were 806,764 starts in total, of which 548,571 were on turf surfaces. A total of 610 fatalities were recorded; 377 (61.8%) on turf. In both regression models, increased firmness of the going, increasing racing distance, increasing average horse performance, first year of racing and wearing eye cover for the first time all increased the odds of fatality. Generally, the odds of fatality also increased with increasing horse age, whereas an increasing number of previous starts reduced fatality odds. In the ‘all starts’ model, horses racing in an auction race were at 1.46 (95% confidence interval (CI) 1.06–2.01) times the odds of fatality compared with horses not racing in this race type. In the turf starts model, horses racing in Group 1 races were at 3.19 (95% CI 1.71–5.93) times the odds of fatality compared with horses not racing in this race type. Identification of novel risk factors, including wearing eye cover and race type, will help to inform strategies to further reduce the rate of fatality in flat racing horses, enhancing horse and jockey welfare and safety.
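    The odds ratios quoted above come from log-odds coefficients in the logistic regression, and a 95% CI on the odds-ratio scale maps back to a coefficient and standard error in the usual way. A small sketch of that standard back-calculation, applied to the reported auction-race figures:

```python
import math

def or_ci(odds_ratio, lo, hi):
    """Recover the log-odds coefficient and its SE from an odds ratio
    and its 95% CI, then reconstruct the CI as a consistency check."""
    beta = math.log(odds_ratio)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
    return beta, se, ci

# Auction-race odds ratio from the 'all starts' model: 1.46 (1.06-2.01)
beta, se, ci = or_ci(1.46, 1.06, 2.01)
print(f"beta = {beta:.3f}, SE = {se:.3f}, CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

    The reconstructed interval matches the published one to rounding, confirming the CI is symmetric on the log-odds scale, as expected for a Wald interval.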

    Spectral weight transfer in a disorder-broadened Landau level

    In the absence of disorder, the degeneracy of a Landau level (LL) is $N = BA/\phi_0$, where $B$ is the magnetic field, $A$ is the area of the sample and $\phi_0 = h/e$ is the magnetic flux quantum. With disorder, localized states appear at the top and bottom of the broadened LL, while states in the center of the LL (the critical region) remain delocalized. This well-known phenomenology is sufficient to explain most aspects of the Integer Quantum Hall Effect (IQHE) [1]. One unnoticed issue is where the new states appear as the magnetic field is increased. Here we demonstrate that they appear predominantly inside the critical region. This leads to a certain ``spectral ordering'' of the localized states that explains the stripes observed in measurements of the local inverse compressibility [2-3], of two-terminal conductance [4], and of Hall and longitudinal resistances [5], without invoking interactions as done in previous work [6-8].

    Comment: 5 pages, 3 figures
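    The degeneracy formula $N = BA/\phi_0$ is easy to evaluate numerically; the field and sample area below are illustrative choices, while the two constants are the exact SI 2019 values.

```python
# Degeneracy of a single Landau level, N = B*A/phi_0, with phi_0 = h/e.
h = 6.62607015e-34   # Planck constant, J*s (exact, SI 2019)
e = 1.602176634e-19  # elementary charge, C (exact, SI 2019)
phi_0 = h / e        # magnetic flux quantum, ~4.136e-15 Wb

B = 10.0      # magnetic field in tesla (illustrative)
A = 1.0e-6    # sample area in m^2, i.e. 1 mm^2 (illustrative)

N = B * A / phi_0
print(f"phi_0 = {phi_0:.4e} Wb, N = {N:.3e} states")
```

    At 10 T, a 1 mm^2 sample threads roughly 2.4 billion flux quanta, so each Landau level holds that many single-particle states; raising $B$ adds states, and the abstract's point is where in the broadened level those new states land.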

    Melting of a 2D Quantum Electron Solid in High Magnetic Field

    The melting temperature ($T_m$) of a solid is generally determined by the pressure applied to it, or indirectly by its density ($n$) through the equation of state. This remains true even for helium solids\cite{wilk:67}, where quantum effects often lead to unusual properties\cite{ekim:04}. In this letter we present experimental evidence showing that for a two-dimensional (2D) solid formed by electrons in a semiconductor sample under a strong perpendicular magnetic field\cite{shay:97} ($B$), $T_m$ is not controlled by $n$ but effectively by the \textit{quantum correlation} between the electrons through the Landau level filling factor $\nu = nh/eB$. Such melting behavior, different from that of all other known solids (including a classical 2D electron solid at zero magnetic field\cite{grim:79}), attests to the quantum nature of the magnetic-field-induced electron solid. Moreover, we find that $T_m$ increases with the strength of the sample-dependent disorder that pins the electron solid.

    Comment: Some typos corrected and 2 references added. Final version with minor editorial revisions published in Nature Physics
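    The control parameter identified in the abstract, the filling factor $\nu = nh/eB$, is a dimensionless ratio of electron density to flux-quantum density. A quick evaluation, with a density and field chosen purely for illustration (not values from the paper):

```python
# Landau level filling factor nu = n*h/(e*B): the dimensionless ratio
# that, per the abstract, controls the melting temperature of the solid.
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

def filling_factor(n, B):
    """n: 2D electron density in m^-2; B: perpendicular field in tesla."""
    return n * h / (e * B)

# Illustrative values, not taken from the paper.
n = 5.0e14   # electrons per m^2
B = 12.0     # tesla
nu = filling_factor(n, B)
print(f"nu = {nu:.3f}")
```

    Because $\nu$ depends only on the ratio $n/B$, different combinations of density and field with the same ratio sit at the same filling factor, which is what makes it a natural melting parameter distinct from the density alone.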

    Study of psi(2S) decays to X J/psi

    Using J/psi -> mu^+ mu^- decays from a sample of approximately 4 million psi(2S) events collected with the BESI detector, the branching fractions of psi(2S) -> eta J/psi, pi^0 pi^0 J/psi, and anything J/psi, normalized to that of psi(2S) -> pi^+ pi^- J/psi, are measured. The results are B(psi(2S) -> eta J/psi)/B(psi(2S) -> pi^+ pi^- J/psi) = 0.098 \pm 0.005 \pm 0.010, B(psi(2S) -> pi^0 pi^0 J/psi)/B(psi(2S) -> pi^+ pi^- J/psi) = 0.570 \pm 0.009 \pm 0.026, and B(psi(2S) -> anything J/psi)/B(psi(2S) -> pi^+ pi^- J/psi) = 1.867 \pm 0.026 \pm 0.055.

    Comment: 13 pages, 8 figures
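    A useful sanity check on these numbers: the inclusive "anything J/psi" ratio should exceed the sum of the listed exclusive ratios, with the excess attributable to other transitions (e.g. the radiative gamma chi_cJ chains). The sketch below combines the quoted statistical and systematic errors in quadrature and, as a simplifying assumption, ignores correlations between the measurements.

```python
import math

# Measured ratios B(psi(2S) -> X J/psi) / B(psi(2S) -> pi+ pi- J/psi),
# with statistical and systematic errors combined in quadrature.
def combine(val, stat, syst):
    return val, math.hypot(stat, syst)

eta,   d_eta   = combine(0.098, 0.005, 0.010)
pi0,   d_pi0   = combine(0.570, 0.009, 0.026)
anyth, d_anyth = combine(1.867, 0.026, 0.055)

# The normalizing pi+ pi- channel contributes exactly 1 by construction.
# The inclusive ratio minus the listed exclusive channels leaves the
# residual from other transitions (correlations ignored in this sketch).
other = anyth - (1.0 + eta + pi0)
d_other = math.hypot(d_anyth, math.hypot(d_eta, d_pi0))
print(f"other channels: {other:.3f} +/- {d_other:.3f}")
```

    The residual is positive and several standard deviations from zero, consistent with the known non-pionic transitions to J/psi.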

    Solving the mu problem with a heavy Higgs boson

    We discuss the generation of the mu-term in a class of supersymmetric models characterized by a low-energy effective superpotential containing a term lambda S H_1 H_2 with a large coupling lambda ~ 2. These models generically predict a lightest Higgs boson well above the LEP limit of 114 GeV and have been shown to be compatible with the unification of gauge couplings. Here we discuss a specific example where the superpotential has no dimensionful parameters, and we point out the relation between the generated mu-term and the mass of the lightest Higgs boson. We examine the fine-tuning of the model and find that the generation of a phenomenologically viable mu-term fits very well with a heavy lightest Higgs boson and a low degree of fine-tuning. We discuss experimental constraints from direct collider searches, precision data, the thermal relic dark matter abundance, and WIMP searches, finding that the most natural region of the parameter space is still allowed by current experiments. We analyse bounds on the masses of the superpartners coming from naturalness arguments and discuss the main signatures of the model for the LHC and future WIMP searches.

    Comment: Extended discussion of the LHC phenomenology, as published in JHEP, plus an addendum on the existence of further extremal points of the potential. 47 pages, 16 figures
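    For orientation, the generic mechanism in lambda S H_1 H_2 (NMSSM-like) models works as follows; the relations below are the standard textbook forms, not the specific expressions of this paper. When the singlet S acquires a vacuum expectation value, an effective mu-term is generated, and the same coupling lambda lifts the tree-level bound on the lightest Higgs mass:

```latex
W \;\supset\; \lambda\, S\, H_1 H_2
\quad\Longrightarrow\quad
\mu_{\mathrm{eff}} \;=\; \lambda\, \langle S \rangle ,
\qquad
m_h^2 \;\le\; m_Z^2 \cos^2 2\beta \;+\; \lambda^2 v^2 \sin^2 2\beta .
```

    Here $v$ is the electroweak vev (conventions vary between $v \approx 174$ GeV and $246$ GeV). The second term is why a large coupling lambda ~ 2 naturally pushes the lightest Higgs well above the LEP limit, as the abstract states.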