    Review of Karen Eliot, Albion's Dance: British Ballet During the Second World War

    No abstract available

    Measurement of Transverse Spin Effects at COMPASS

    By measuring transverse single-spin asymmetries one gains access to the transversity distribution function $\Delta_T q(x)$ and to the transverse-momentum-dependent Sivers function $q_0^T(x,\vec{k}_T)$. New measurements of asymmetries for identified hadrons and hadron pairs, produced in deep inelastic scattering off a transversely polarized $^6$LiD target, are presented. The data were taken in 2003 and 2004 by the COMPASS collaboration using the 160 GeV/c muon beam of the CERN SPS; all measured asymmetries are small. Comment: 4 pages, 7 figures; in the proceedings of 'Rencontres de Moriond 2007, QCD and Hadronic Interactions'
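    For orientation, the raw single-spin asymmetry underlying such measurements is conventionally built from event counts taken with opposite target-spin orientations (standard notation, not spelled out in the abstract):

    $$A_{raw}(\phi) = \frac{N^{\uparrow}(\phi) - N^{\downarrow}(\phi)}{N^{\uparrow}(\phi) + N^{\downarrow}(\phi)}$$

    where $\phi$ is an azimuthal angle of the produced hadron; the transversity-related (Collins) and Sivers contributions are then extracted from distinct azimuthal modulations of $A_{raw}$.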

    Support Vector Machines in High Energy Physics

    This lecture will introduce the Support Vector algorithms for classification and regression. They are an application of the so-called kernel trick, which allows the extension of a certain class of linear algorithms to the nonlinear case. The kernel trick will be introduced, and, in the context of structural risk minimization, large-margin algorithms for classification and regression will be presented. Current applications in high energy physics will be discussed. Comment: 11 pages, 12 figures. Part of the proceedings of the track 'Computational Intelligence for HEP Data Analysis' at iCSC 200
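    As a concrete illustration of the kernel trick described above, the sketch below (Python with scikit-learn; the toy dataset and hyperparameter values are illustrative assumptions, not taken from the lecture) trains an RBF-kernel Support Vector classifier on two concentric rings, a problem no linear boundary in the input space can solve:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def ring(n, r):
    """n points scattered around a circle of radius r."""
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    radii = r + rng.normal(0.0, 0.1, n)
    return np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

# Two concentric rings: not linearly separable in the 2-D input space.
X = np.vstack([ring(200, 1.0), ring(200, 2.0)])
y = np.array([0] * 200 + [1] * 200)

# C sets the margin/error trade-off (the structural-risk-minimization knob);
# gamma is the RBF kernel width. The kernel trick lets the linear
# large-margin machinery operate implicitly in a nonlinear feature space.
clf = SVC(kernel="rbf", C=1.0, gamma=1.0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```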

    Noiseless compression using non-Markov models

    Adaptive data compression techniques can be viewed as consisting of a model specified by a database common to the encoder and decoder, an encoding rule, and a rule for updating the model to ensure that the encoder and decoder always agree on the interpretation of the next transmission. The techniques which fit this framework range from run-length coding, to adaptive Huffman and arithmetic coding, to the string-matching techniques of Lempel and Ziv. The compression obtained by arithmetic coding depends on the generality of the source model. For many sources, an independent-letter model is clearly insufficient. Unfortunately, a straightforward implementation of a Markov model requires an amount of space exponential in the number of letters remembered. The Directed Acyclic Word Graph (DAWG) can be constructed in time and space proportional to the length of the text encoded, and can be used to estimate the probabilities required for arithmetic coding using an amount of memory that varies naturally with the encoded text. The tail of the text encoded so far is its longest suffix that has occurred previously; the frequencies of the letters following those earlier occurrences can be used to estimate the probability distribution of the next letter. Experimental results indicate that the compression obtained is often far better than that of independent-letter models, and sometimes significantly better than that of other non-independent techniques.
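    The suffix-based estimate described above can be sketched in a few lines of Python. The brute-force version below searches for the longest previously occurring suffix directly, instead of maintaining a DAWG, so it runs in quadratic rather than linear time; it is a sketch of the estimation idea only, not the paper's implementation:

```python
from collections import Counter

def next_letter_distribution(text):
    """Estimate P(next letter) from the longest suffix of `text` that has
    occurred earlier, by tallying the letters that followed those earlier
    occurrences (brute-force stand-in for the DAWG)."""
    for k in range(len(text) - 1, 0, -1):
        suffix = text[-k:]
        # Earlier occurrences that are followed by at least one letter.
        positions = [i for i in range(len(text) - k) if text[i:i + k] == suffix]
        if positions:
            counts = Counter(text[i + k] for i in positions)
            total = sum(counts.values())
            return {c: n / total for c, n in counts.items()}
    # No suffix recurs: fall back to overall letter frequencies.
    counts = Counter(text)
    return {c: n / len(text) for c, n in counts.items()}

# The context "abra" occurred earlier and was followed by 'c'.
print(next_letter_distribution("abracadabra"))  # {'c': 1.0}
```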

    Can Electro-Weak θ-Term be Observable?

    We rederive and discuss the result of the previous paper that in the standard model the $\theta$-term related to the $W$-boson field cannot be induced by weak instantons. This follows from the existence of a fermion zero mode in the instanton field even when Yukawa couplings are switched on and there are no massless particles. We consider a new index theorem connecting the topological charge of the weak gauge field with the number of fermion zero modes of a certain differential operator which depends not only on the gauge field but also on the Higgs field. Possible generalizations of the standard model which lead to a nonvanishing weak $\theta$-term are discussed. In the $SU(2)_L \times SU(2)_R$ model the $\theta$-dependence of the vacuum energy is computed. Comment: 21 pages, Preprint TPI-MINN-93/24-
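    For reference, a weak $\theta$-term has the conventional topological form (standard notation, not given in the abstract):

    $$\mathcal{L}_\theta = \theta\,\frac{g^2}{32\pi^2}\,F^a_{\mu\nu}\tilde{F}^{a\,\mu\nu}, \qquad \tilde{F}^{a\,\mu\nu} = \tfrac{1}{2}\epsilon^{\mu\nu\rho\sigma}F^a_{\rho\sigma}$$

    whose spacetime integral is proportional to the topological charge of the gauge field, the quantity the index theorem above ties to the number of fermion zero modes.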

    The Rényi Redundancy of Generalized Huffman Codes

    Huffman's algorithm gives optimal codes, as measured by average codeword length, and the redundancy can be measured as the difference between the average codeword length and Shannon's entropy. If the objective function is replaced by an exponentially weighted average, then a simple modification of Huffman's algorithm gives optimal codes. The redundancy can now be measured as the difference between this new average and Rényi's (1961) generalization of Shannon's entropy. By decreasing some of the codeword lengths in a Shannon code, the upper bound on the redundancy given in the standard proof of the noiseless source coding theorem is improved. The lower bound is improved by randomizing between codeword lengths, which allows linear programming techniques to be applied to an integer programming problem. These bounds are shown to be asymptotically equal. The results are generalized to the Rényi case and related to Gallager's (1978) bound on the redundancy of Huffman codes.
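    The modification mentioned above is small enough to show in code. In the Python sketch below, ordinary Huffman coding is recovered at t = 0; for t > 0 the two smallest weights $w_1, w_2$ are merged into $2^t(w_1 + w_2)$, one common formulation of the exponential-cost variant (an assumption for illustration; the paper should be consulted for the exact rule):

```python
import heapq
from itertools import count

def huffman_lengths(probs, t=0.0):
    """Codeword lengths from Huffman's algorithm; t = 0 gives the ordinary
    (average-length) case, t > 0 the exponentially weighted variant."""
    tie = count()  # tie-breaker so heapq never compares the symbol lists
    heap = [(p, next(tie), [sym]) for sym, p in probs.items()]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in probs}
    while len(heap) > 1:
        w1, _, syms1 = heapq.heappop(heap)
        w2, _, syms2 = heapq.heappop(heap)
        for sym in syms1 + syms2:
            lengths[sym] += 1      # each merged symbol moves one level deeper
        heapq.heappush(heap, (2.0**t * (w1 + w2), next(tie), syms1 + syms2))
    return lengths

print(huffman_lengths({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': 1, 'b': 2, 'c': 3, 'd': 3}
```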