
    Lagrangian perfect fluids and black hole mechanics

    The first law of black hole mechanics, in the form derived by Wald, is expressed in terms of integrals over surfaces, at the horizon and spatial infinity, of a stationary, axisymmetric black hole in a diffeomorphism-invariant Lagrangian theory of gravity. The original statement of the first law given by Bardeen, Carter and Hawking for an Einstein-perfect fluid system contained, in addition, volume integrals of the fluid fields over a spacelike slice stretching between these two surfaces. When applied to the Einstein-perfect fluid system, however, Wald's methods yield restricted results, because the fluid fields in the Lagrangian of a gravitating perfect fluid are typically nonstationary. We therefore first derive a first-law-like relation for an arbitrary Lagrangian metric theory of gravity coupled to arbitrary Lagrangian matter fields, requiring only that the metric field be stationary. This relation includes a volume integral of matter fields over a spacelike slice between the black hole horizon and spatial infinity, and reduces to the first law originally derived by Bardeen, Carter and Hawking when the theory is general relativity coupled to a perfect fluid. We also consider a specific Lagrangian formulation of an isentropic perfect fluid given by Carter and directly apply Wald's analysis. The resulting first law contains only surface integrals at the black hole horizon and spatial infinity, but it is much more restrictive in its allowed fluid configurations and perturbations than the relation given by Bardeen, Carter and Hawking. In the Appendix, we use the symplectic structure of the Einstein-perfect fluid system to derive a conserved current for perturbations of this system; this current reduces to one derived ab initio by Chandrasekhar and Ferrari.
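    For orientation, the first law the abstract contrasts can be written schematically (geometric units, G = c = 1; the exact form of the fluid terms depends on the formulation, so the volume integral below is a standard schematic version, not a quotation from the paper):

```latex
% Vacuum first law for a stationary, axisymmetric black hole:
\delta M = \frac{\kappa}{8\pi}\,\delta A + \Omega_H\,\delta J
% Bardeen, Carter and Hawking's Einstein-perfect fluid version adds
% volume integrals over a spacelike slice \Sigma between the horizon
% and spatial infinity (redshifted chemical potential \bar\mu and
% temperature \bar T; schematic):
\delta M = \frac{\kappa}{8\pi}\,\delta A + \Omega_H\,\delta J
         + \int_\Sigma \bigl( \bar\mu\,\delta(dN) + \bar T\,\delta(dS) + \Omega\,\delta(dJ) \bigr)
```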

    Noether Currents of Charged Spherical Black Holes

    We calculate the Noether currents and charges for Einstein-Maxwell theory using a version of the Wald approach. In spherical symmetry, the choice of time can be taken to be the Kodama vector. For the static case, the resulting combined Einstein-Maxwell charge is just the mass of the black hole. Using either a classically defined entropy or the Iyer-Wald selection rules, the entropy is found to be just a quarter of the area of the trapping horizon. We propose identifying the combined Noether charge as an energy associated with the Kodama time. For the extremal black hole case, we discuss the problem with Wald's rescaling of the surface gravity used to define the entropy.

    A Comparison of Noether Charge and Euclidean Methods for Computing the Entropy of Stationary Black Holes

    The entropy of stationary black holes has recently been calculated by a number of different approaches. Here we compare the Noether charge approach (defined for any diffeomorphism invariant Lagrangian theory) with various Euclidean methods, specifically: (i) the microcanonical ensemble approach of Brown and York; (ii) the closely related approach of Bañados, Teitelboim, and Zanelli, which ultimately expresses black hole entropy in terms of the Hilbert action surface term; (iii) another formula of Bañados, Teitelboim and Zanelli (also used by Susskind and Uglum), which views black hole entropy as conjugate to a conical deficit angle; and (iv) the pair creation approach of Garfinkle, Giddings, and Strominger. All of these approaches have a more restrictive domain of applicability than the Noether charge approach. Specifically, approaches (i) and (ii) appear to be restricted to a class of theories satisfying certain properties listed in section 2; approach (iii) appears to require the Lagrangian density to be linear in the curvature; and approach (iv) requires the existence of suitable instanton solutions. However, we show that within their domains of applicability, all of these approaches yield results in agreement with the Noether charge approach. In the course of our analysis, we generalize the definition of Brown and York's quasilocal energy to a much more general class of diffeomorphism invariant, Lagrangian theories of gravity. In an appendix, we show that in an arbitrary diffeomorphism invariant theory of gravity, the "volume term" in the "off-shell" Hamiltonian associated with a time evolution vector field $t^a$ can always be expressed as the spatial integral of $t^a \mathcal{C}_a$, where $\mathcal{C}_a = 0$ are the constraints associated with diffeomorphism invariance.
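    The Noether charge entropy against which the four Euclidean methods are compared is Wald's formula; as a reminder (sign and normalization conventions vary between references):

```latex
% Wald entropy: integral over a cross-section \mathcal{H} of the horizon,
% with \epsilon_{ab} the binormal and h the induced metric determinant:
S_{\mathrm{Wald}} = -2\pi \oint_{\mathcal{H}}
  \frac{\partial \mathcal{L}}{\partial R_{abcd}}\,
  \epsilon_{ab}\,\epsilon_{cd}\,\sqrt{h}\;d^{\,D-2}x
% For the Einstein-Hilbert Lagrangian \mathcal{L} = R/(16\pi G)
% this reduces to the Bekenstein-Hawking result:
S = \frac{A}{4G}
```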

    Ultraparamagnetic cells formed through intracellular oxidation and chelation of paramagnetic iron

    Making cells magnetic is a long-standing goal of chemical biology, aiming to enable the separation of cells from complex biological samples and their visualization in vivo using magnetic resonance imaging (MRI). Previous efforts towards this goal, focused on engineering cells to biomineralize superparamagnetic or ferromagnetic iron oxides, have been largely unsuccessful due to the stringent chemical conditions required. Here, we introduce an alternative approach to making cells magnetic, focused on biochemically maximizing cellular paramagnetism. We show that a novel genetic construct combining the functions of ferroxidation and iron chelation enables engineered bacterial cells to accumulate iron in "ultraparamagnetic" macromolecular complexes, allowing these cells to be trapped with magnetic fields and imaged with MRI in vitro and in vivo. We characterize the properties of these cells and complexes using magnetometry, nuclear magnetic resonance, biochemical assays, and computational modeling to elucidate the unique mechanisms and capabilities of this paramagnetic concept.
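    As a rough illustration of why maximizing cellular paramagnetism enables magnetic trapping, the magnetophoretic force on a weakly magnetizable particle in a field gradient is commonly estimated as F = Δχ·V·B·(dB/dz)/μ0. The sketch below uses this textbook estimate with purely illustrative numbers; none of the values come from the paper:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def magnetophoretic_force(delta_chi, volume_m3, b_tesla, grad_b):
    """1-D estimate of the force on a weakly paramagnetic particle:
    F = (delta_chi * V / mu_0) * B * dB/dz  (SI units, newtons)."""
    return delta_chi * volume_m3 * b_tesla * grad_b / MU_0

# Illustrative assumptions (not values from the paper): a 1 um radius
# cell with excess volume susceptibility 1e-4, near a permanent magnet
# producing 0.5 T with a 100 T/m gradient.
radius = 1e-6
volume = 4.0 / 3.0 * math.pi * radius**3
force = magnetophoretic_force(1e-4, volume, 0.5, 100.0)
print(f"{force:.2e} N")  # tens of femtonewtons, enough to deflect cells in flow
```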

    A Genome-Wide Association Study for Regulators of Micronucleus Formation in Mice.

    In mammals, the regulation of genomic instability plays a key role in tumor suppression and also controls genome plasticity, which is important for recombination during the processes of immunity and meiosis. Most studies to identify regulators of genomic instability have been performed in cultured cells or in systems that report on gross rearrangements of the genome, yet subtle differences in the level of genomic instability can contribute to whole-organism phenotypes such as tumor predisposition. Here we performed a genome-wide association study in a population of 1379 outbred Crl:CFW(SW)-US_P08 mice to dissect the genetic landscape of micronucleus formation, a biomarker of chromosomal breaks, whole-chromosome loss, and extranuclear DNA. Variation in micronucleus levels is a complex trait with a genome-wide heritability of 53.1%. We identify seven loci influencing micronucleus formation (false discovery rate < 5%) and define candidate genes at each locus. Intriguingly, at several loci we find evidence for sexual dimorphism in micronucleus formation, with a locus on chromosome 11 being specific to males. This work was supported by Cancer Research UK and the Wellcome Trust. This is the final version of the article; it first appeared from the Genetics Society of America via http://dx.doi.org/10.1534/g3.116.03076
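    The "false discovery rate < 5%" cutoff used to call the seven loci is typically implemented with a Benjamini-Hochberg-style step-up procedure over the per-locus association p-values. A minimal, self-contained sketch (the p-values below are made up for illustration, not taken from the study):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of tests significant at FDR level q under the
    Benjamini-Hochberg step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank  # largest rank satisfying the BH condition
    return sorted(order[:k_max])  # original indices of significant tests

# Toy example: four loci, three with small association p-values.
significant = benjamini_hochberg([0.5, 0.01, 0.03, 0.02], q=0.05)
print(significant)  # -> [1, 2, 3]
```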

    VeeAlign: A Supervised Deep Learning Approach to Ontology Alignment

    While deep learning approaches have shown promising results in the Natural Language Processing and Computer Vision domains, they have not yet been able to achieve impressive results in Ontology Alignment, and have typically performed worse than rule-based approaches. Some of the major reasons for this are: a) poor modelling of context, b) overfitting of standard DL models, and c) dataset sparsity, caused by the class imbalance of positive alignment pairs with respect to negative pairs. To mitigate these limitations, we propose a dual-attention based approach that uses a multi-faceted context representation to compute contextualized representations of concepts, which are then used to discover semantically equivalent concepts.

    Towards Effective Disambiguation for Machine Translation with Large Language Models

    Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation. Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate "ambiguous sentences", i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities: a) in-context learning, and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl
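    The in-context learning route can be sketched as assembling a few-shot prompt from sense-annotated translation pairs before asking the model to translate the ambiguous source sentence. The template, example pairs, and translations below are invented for illustration; the paper's actual prompt format may differ:

```python
def build_disambiguation_prompt(examples, source_sentence, tgt_lang="German"):
    """Assemble a few-shot prompt pairing ambiguous sentences with
    reference translations that resolve the intended word sense."""
    lines = [f"Translate the following English sentences into {tgt_lang}, "
             "choosing the correct sense of any ambiguous word."]
    for src, tgt in examples:
        lines.append(f"English: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    lines.append(f"English: {source_sentence}")
    lines.append(f"{tgt_lang}:")  # model completes from here
    return "\n".join(lines)

# Hypothetical sense-annotated examples for the polysemous word "bank".
examples = [
    ("He sat on the bank of the river.", "Er sass am Ufer des Flusses."),
    ("She deposited money at the bank.", "Sie zahlte Geld bei der Bank ein."),
]
prompt = build_disambiguation_prompt(
    examples, "The bank was slippery after the rain.")
print(prompt)
```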

    VeeAlign: Multifaceted Context Representation Using Dual Attention for Ontology Alignment

    Ontology Alignment is an important research problem applied to various fields such as data integration, data transfer, and data preparation. State-of-the-art (SOTA) Ontology Alignment systems typically use naive domain-dependent approaches with handcrafted rules or domain-specific architectures, making them unscalable and inefficient. In this work, we propose VeeAlign, a deep learning based model that uses a novel dual-attention mechanism to compute the contextualized representation of a concept, which in turn is used to discover alignments. By doing this, not only is our approach able to exploit both the syntactic and semantic information encoded in ontologies, it is also, by design, flexible and scalable to different domains with minimal effort. We evaluate our model on four datasets from different domains and languages, and establish its superiority through these results as well as detailed ablation studies. The code and datasets used are available at https://github.com/Remorax/VeeAlign. (Duplicate of arXiv:2010.1172)
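    The core idea of attending over multiple context facets and mixing the results into a contextualized concept embedding can be sketched in a few lines. Everything here is a toy reconstruction: the facet names ("path" and "neighbour" contexts), the mixing rule, and the embeddings are illustrative assumptions, not VeeAlign's actual architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attend(query, contexts):
    """Attention-weighted average of context vectors, scored against a query."""
    weights = softmax([dot(query, c) for c in contexts])
    dim = len(query)
    return [sum(w * c[d] for w, c in zip(weights, contexts)) for d in range(dim)]

def contextualized(concept, path_ctx, neighbor_ctx, alpha=0.5):
    """Toy 'dual attention': attend separately over two context facets,
    then mix the results into the concept's own embedding."""
    p = attend(concept, path_ctx)
    n = attend(concept, neighbor_ctx)
    return [c + alpha * (pi + ni) / 2 for c, pi, ni in zip(concept, p, n)]

def cosine(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

# Two concepts from different ontologies with toy 3-d embeddings; a high
# cosine similarity of their contextualized vectors flags a candidate alignment.
a = contextualized([1.0, 0.0, 0.2], [[0.9, 0.1, 0.0]], [[1.0, 0.0, 0.1]])
b = contextualized([0.9, 0.1, 0.2], [[1.0, 0.0, 0.0]], [[0.8, 0.1, 0.2]])
print(round(cosine(a, b), 3))
```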

    Exploring Enhanced Code-Switched Noising for Pretraining in Neural Machine Translation

    Multilingual pretraining approaches in Neural Machine Translation (NMT) have shown that training models to denoise synthetic code-switched data can yield impressive performance gains, owing to better multilingual semantic representations and transfer learning. However, these approaches generated the synthetic code-switched data using non-contextual, one-to-one word translations obtained from lexicons, which can introduce significant noise in a variety of cases, including poor handling of polysemes and multi-word expressions, violation of linguistic agreement, and an inability to scale to agglutinative languages. To overcome these limitations, we propose an approach called Contextual Code-Switching (CCS), in which contextual, many-to-many word translations are generated using a 'base' NMT model. We conduct experiments on 3 different language families (Romance, Uralic, and Indo-Aryan) and show significant improvements (by up to 5.5 spBLEU points) over the previous lexicon-based SOTA approaches. We also observe that small CCS models can perform comparably to or better than massive models like mBART50 and mRASP2, depending on the size of the data provided. We empirically analyse several key factors responsible for these gains, including context, many-to-many substitutions, and code-switching language count, and show that they all contribute to enhanced pretraining of multilingual NMT models.
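    The noising step can be pictured as replacing spans of the source sentence with translations produced by a callback that sees the whole sentence. The sketch below is a toy reconstruction of that idea: the span-selection rule, phrase table, and translations are invented for illustration, with the callback standing in for the base NMT model the paper actually uses:

```python
def code_switch(tokens, translate_span, stride=3, max_span=2):
    """Toy contextual code-switching: every `stride` tokens, replace a span
    of up to `max_span` tokens with whatever the translation callback
    returns (which may be many-to-many, i.e. a different length)."""
    out, i = [], 0
    while i < len(tokens):
        if i % stride == 0:
            span = tokens[i:i + max_span]
            out.extend(translate_span(span, tokens))  # callback sees full context
            i += len(span)
        else:
            out.append(tokens[i])
            i += 1
    return out

# Stand-in for the base NMT model: a tiny phrase table (illustrative only).
PHRASES = {("the", "cat"): ["die", "Katze"], ("sat", "on"): ["sass", "auf"]}
def toy_translate(span, _context):
    return PHRASES.get(tuple(span), list(span))  # untranslated spans pass through

noised = code_switch("the cat sat on the mat".split(), toy_translate)
print(" ".join(noised))  # -> die Katze sat on the mat
```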