10,981 research outputs found

    Jet Fragmentation in Medium and Vacuum with the PHENIX Detector

    Full text link
    One of the most active areas of investigation in relativistic heavy-ion collisions is the study of the jet quenching phenomenon, whereby hard partons lose energy as they traverse the hot, dense matter created in such collisions. Strong parton energy loss has been observed in central nucleus-nucleus collisions, as evidenced by a large suppression of the yield of high-pT hadrons compared to the expected yield based on measurements in p+p collisions. Moreover, measurements of back-to-back correlations of charged hadrons suggest that jet shapes are strongly modified by the medium. The quantitative interpretation of single- and di-hadron measurements is, however, complicated by the fact that the initial parton energy is unknown. A more informative measurement would be one in which the initial parton energy is known, allowing the determination of the fragmentation function, which may be effectively modified from its vacuum form by the presence of the medium. Two measurements in which the initial parton energy may be estimated are discussed in these proceedings: jet reconstruction and two-particle correlations using direct photons. Jet reconstruction in nuclear collisions is challenging due to the large background of soft particles, fluctuations of which give rise to fake jets. Direct photons can be used to estimate the initial parton energy of the recoil jet without recourse to jet reconstruction algorithms; however, such studies suffer from a smaller rate, and the direct photon signal must be disentangled from a large background of decay photons. We present jet reconstruction results that use an algorithm suitable for a high-multiplicity environment, as well as results of two-particle correlations using direct photons. These results are discussed in the context of medium modification of the fragmentation function. Comment: Talk presented at DIS 2010, Florence, Italy
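    The suppression described in this abstract is conventionally quantified by the nuclear modification factor R_AA: the yield in A+A divided by the p+p yield scaled by the average number of binary nucleon-nucleon collisions. A minimal Python sketch of that ratio follows; the binned yields and the n_coll value are illustrative placeholders, not PHENIX data.

```python
import numpy as np

def nuclear_modification_factor(yield_AA, yield_pp, n_coll):
    """R_AA: per-binary-collision yield in A+A divided by the p+p yield.

    yield_AA, yield_pp -- binned yields dN/dpT in the same pT bins
    n_coll             -- average number of binary nucleon-nucleon
                          collisions for the chosen centrality class
    R_AA well below 1 at high pT signals strong parton energy loss.
    """
    return np.asarray(yield_AA) / (n_coll * np.asarray(yield_pp))

# Toy numbers for illustration only (not PHENIX data)
pt_bins  = np.array([4.0, 6.0, 8.0, 10.0])         # GeV/c
yield_pp = np.array([1e-3, 2e-4, 5e-5, 1.5e-5])    # p+p reference
yield_AA = np.array([2e-1, 3.5e-2, 8e-3, 2.4e-3])  # central A+A

raa = nuclear_modification_factor(yield_AA, yield_pp, n_coll=1000.0)
for pt, r in zip(pt_bins, raa):
    print(f"pT = {pt:4.1f} GeV/c   R_AA = {r:.2f}")
```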

    Hidden Markov Model Identifiability via Tensors

    Full text link
    The prevalence of hidden Markov models (HMMs) in various applications of statistical signal processing and communications is a testament to the power and flexibility of the model. In this paper, we link the identifiability problem with tensor decomposition, in particular the Canonical Polyadic decomposition. Using recent results on uniqueness conditions for tensor decompositions, we provide a necessary and sufficient condition for the identification of the parameters of discrete-time finite-alphabet HMMs. This result resolves a long-standing open problem regarding the derivation of a necessary and sufficient condition for uniquely identifying an HMM. We then extend recent preliminary work on the identification of HMMs with multiple observers by deriving necessary and sufficient conditions for identifiability in this setting. Comment: Accepted to ISIT 2013. 5 pages, no figures
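    To illustrate the link the abstract draws between HMMs and the Canonical Polyadic (CP) decomposition, the sketch below builds the joint distribution of three consecutive observations of a stationary HMM and checks that it admits a CP factorization indexed by the middle hidden state: given that state, the three observations are conditionally independent. This is a standard construction, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V = 3, 4                       # hidden states, observation alphabet size

# Random discrete HMM: T[i, j] = P(x' = j | x = i), O[i, v] = P(y = v | x = i)
T = rng.dirichlet(np.ones(K), size=K)
O = rng.dirichlet(np.ones(V), size=K)

# Stationary distribution: left eigenvector of T with eigenvalue 1
w, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Third-order moment tensor M3[a, b, c] = P(y1 = a, y2 = b, y3 = c),
# built by summing over all hidden paths (x1, x2, x3)
M3 = np.einsum('i,ij,jk,ia,jb,kc->abc', pi, T, T, O, O, O)

# CP structure: condition on the middle hidden state h.  Given x2 = h,
# the three observations are independent, so M3 is a sum of K rank-one terms.
Tb = (pi[:, None] * T).T / pi[:, None]   # Tb[h, i] = P(x1 = i | x2 = h)
A = Tb @ O                               # A[h, a] = P(y1 = a | x2 = h)
C = T @ O                                # C[h, c] = P(y3 = c | x2 = h)
M3_cp = np.einsum('h,ha,hb,hc->abc', pi, A, O, C)

assert np.allclose(M3, M3_cp)    # the moment tensor is a CP decomposition
```

    Uniqueness conditions for this CP decomposition then translate into identifiability conditions on T and O, which is the program the paper carries out.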

    Intellectual Property and Antitrust Limits on Contract: Comment

    Get PDF
    In their chapter in Dynamic Competition and Public Policy (2001, Cambridge University Press), Burtis and Kobayashi never define their model's discount rate, which makes their simulation results difficult to replicate. Through our own simulations, we were able to verify their results when using a discount rate of 0.10. We also identified two new types of equilibria that the authors overlooked, doubling the number of distinct equilibria in the model.
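    For readers unfamiliar with the role a discount rate plays in such simulations, the fragment below shows one standard interpretation: geometric discounting of a per-period payoff stream at the rate r = 0.10 reported above. The payoff stream is a placeholder, and the original model's exact discounting convention is precisely the detail the comment notes is undefined.

```python
# Minimal sketch of geometric discounting at r = 0.10 (the rate that
# reproduced the published results); the payoff stream is a placeholder.
def present_value(payoffs, r=0.10):
    """Discounted sum of a payoff stream: sum_t payoffs[t] / (1 + r)**t."""
    return sum(p / (1.0 + r) ** t for t, p in enumerate(payoffs))

print(present_value([100.0] * 5))   # five periods of a flat payoff of 100
```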

    Cultural appropriation and the intimacy of groups

    Get PDF
    What could ground normative restrictions concerning cultural appropriation which are not grounded by independent considerations such as property rights or harm? We propose that such restrictions can be grounded by considerations of intimacy. Consider the familiar phenomenon of interpersonal intimacy. Certain aspects of personal life and interpersonal relationships are afforded various protections in virtue of being intimate. We argue that an analogous phenomenon exists at the level of large groups. In many cases, members of a group engage in shared practices that contribute to a sense of common identity, such as wearing certain hair or clothing styles or performing a certain style of music. Participation in such practices can generate relations of group intimacy, which can ground certain prerogatives in much the same way that interpersonal intimacy can. One such prerogative is making what we call an appropriation claim. An appropriation claim is a request from a group member that non-members refrain from appropriating a given element of the group’s culture. Ignoring appropriation claims can constitute a breach of intimacy. But, we argue, just as for the prerogatives of interpersonal intimacy, in many cases there is no prior fact of the matter about whether the appropriation of a given cultural practice constitutes a breach of intimacy. It depends on what the group decides together.

    Entanglement of purification: from spin chains to holography

    Full text link
    Purification is a powerful technique in quantum physics whereby a mixed quantum state is extended to a pure state on a larger system. This process is not unique, and in systems composed of many degrees of freedom, one natural purification is the one with minimal entanglement. Here we study the entropy of the minimally entangled purification, called the entanglement of purification, in three model systems: an Ising spin chain, conformal field theories holographically dual to Einstein gravity, and random stabilizer tensor networks. We conjecture values for the entanglement of purification in all these models, and we support our conjectures with a variety of numerical and analytical results. We find that such minimally entangled purifications have a number of applications, from enhancing entanglement-based tensor network methods for describing mixed states to elucidating novel aspects of the emergence of geometry from entanglement in the AdS/CFT correspondence. Comment: 40 pages, multiple figures. v2: references added, typos corrected
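    As a concrete illustration of the purification step described above, the sketch below constructs the canonical purification of a density matrix from its eigendecomposition and verifies that tracing out the ancilla recovers the original state. Computing the entanglement of purification itself would additionally require minimizing the entanglement entropy over all purifications (equivalently, over isometries acting on the ancilla), which this toy sketch does not attempt.

```python
import numpy as np

def purify(rho):
    """Canonical purification |psi> = sum_i sqrt(p_i) |i>_S |i>_E.
    Tracing out E recovers rho; any other purification is obtained
    from this one by an isometry acting on E alone."""
    p, U = np.linalg.eigh(rho)
    p = np.clip(p, 0.0, None)               # guard against round-off negatives
    d = len(p)
    return sum(np.sqrt(p[i]) * np.kron(U[:, i], np.eye(d)[:, i])
               for i in range(d))           # state vector on S (x) E

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Toy mixed state: a maximally mixed qubit
rho = np.eye(2) / 2
psi = purify(rho)

# Verify that the partial trace over E returns rho
full = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
assert np.allclose(np.einsum('aebe->ab', full), rho)

print(von_neumann_entropy(rho))   # 1.0 bit for the maximally mixed qubit
```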

    Exploring tradeoffs in pleiotropy and redundancy using evolutionary computing

    Full text link
    Evolutionary computation algorithms are increasingly being used to solve optimization problems, as they have many advantages over traditional optimization algorithms. In this paper we use evolutionary computation to study the trade-off between pleiotropy and redundancy in a client-server based network. Pleiotropy describes components that perform multiple tasks, while redundancy refers to multiple components performing the same task. Pleiotropy reduces cost but lacks robustness, while redundancy increases network reliability but is more costly; together, pleiotropy and redundancy build flexibility and robustness into systems. It is therefore desirable to have a network that balances pleiotropy and redundancy. Using genetic algorithms, we explore how factors such as link failure probability, repair rates, and the size of the network influence this design choice. Comment: 10 pages, 6 figures
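    As a rough sketch of this kind of search, the genetic algorithm below evolves a redundancy level for each task (the number of components assigned to it), with a fitness that rewards reliability under independent failures and penalizes component cost. All parameter names and values (P_FAIL, COST_W, and so on) are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

N_TASKS    = 8
P_FAIL     = 0.2     # assumed independent per-component failure probability
COST_W     = 0.05    # assumed cost per deployed component
MAX_COPIES = 5

def fitness(genome):
    """Trade-off: the reliability term rewards redundancy, the cost term
    rewards pleiotropy (covering the tasks with few components).  A task
    fails only if all of its redundant copies fail independently."""
    reliability = np.prod(1.0 - P_FAIL ** genome)
    return reliability - COST_W * genome.sum()

def evolve(pop_size=40, gens=200, mut=0.1):
    pop = rng.integers(1, MAX_COPIES + 1, size=(pop_size, N_TASKS))
    for _ in range(gens):
        scores = np.array([fitness(g) for g in pop])
        # Tournament selection: keep the better of two random individuals
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(scores[i] > scores[j], i, j)]
        # One-point crossover between consecutive parents
        cut = rng.integers(1, N_TASKS, pop_size)
        mask = np.arange(N_TASKS) < cut[:, None]
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Mutation: re-draw a gene with probability `mut`
        flip = rng.random(children.shape) < mut
        children[flip] = rng.integers(1, MAX_COPIES + 1, flip.sum())
        pop = children
    return pop[np.argmax([fitness(g) for g in pop])]

print(evolve())   # copies per task balancing robustness against cost
```

    With these toy numbers the search tends to settle on a small redundancy level (around two copies per task), a compromise between the cheap pleiotropic extreme and the fully redundant one.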