
    Electroencephalogram variability in patients with cirrhosis associates with the presence and severity of hepatic encephalopathy

    BACKGROUND & AIMS: The outputs of physiological systems fluctuate in a complex manner even under resting conditions. Decreased variability or increased regularity of these outputs is documented in several disease states. Changes are observed in the spatial and temporal configuration of the electroencephalogram (EEG) in patients with hepatic encephalopathy (HE), but there is no information on the variability of the EEG signal in this condition. The aim of this study was to measure and characterize EEG variability in patients with cirrhosis and to determine its relationship to neuropsychiatric status. METHODS: Eyes-closed, awake EEGs were obtained from 226 patients with cirrhosis, classified, using clinical and psychometric criteria, as neuropsychiatrically unimpaired (n=127) or as having minimal (n=21) or overt (n=78) HE, and from a reference population of 137 healthy controls. Analysis of EEG signal variability was undertaken using continuous wavelet transform and sample entropy. RESULTS: EEG variability was reduced in the patients with cirrhosis compared with the reference population (coefficient of variation: 21.2% [19.3-23.4] vs. 22.4% [20.8-24.5]; p<0.001). A significant association was observed between EEG variability and neuropsychiatric status; thus, variability was increased in the patients with minimal HE compared with their neuropsychiatrically unimpaired counterparts (sample entropy: 0.98 [0.87-1.14] vs. 0.83 [0.75-0.95]; p=0.02), and compared with the patients with overt HE (sample entropy: 0.98 [0.87-1.14] vs. 0.82 [0.71-1.01]; p=0.01). CONCLUSIONS: Variability of the EEG is associated with both the presence and severity of HE. This novel finding may provide new insights into the pathophysiology of HE and provide a means for monitoring patients over time. LAY SUMMARY: Decreased variability or increased regularity of physiological systems is documented in several disease states. 
Variability of the electroencephalogram was found to be associated with both the presence and severity of brain dysfunction in patients with chronic liver disease.
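The variability measure quoted in the abstract above is sample entropy. As an illustration of that statistic only (not the authors' exact pipeline, which also uses a continuous wavelet transform), here is a minimal pure-Python sketch of SampEn(m, r) with the common choices m = 2 and tolerance r = 0.2 × SD:

```python
import math

def sample_entropy(signal, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that two
    length-m subsequences matching within tolerance r (Chebyshev distance)
    also match when extended to length m + 1.  Lower values indicate a
    more regular signal."""
    n = len(signal)
    mean = sum(signal) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    r = r_factor * sd  # tolerance as a fraction of the signal's SD

    def count_matches(length):
        # Standard convention: the same n - m templates are used for both lengths.
        templates = [signal[i:i + length] for i in range(n - m)]
        matches = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    matches += 1
        return matches

    b = count_matches(m)      # pairs matching at length m
    a = count_matches(m + 1)  # pairs still matching at length m + 1
    if a == 0 or b == 0:
        return float("inf")   # undefined for very short or pathological input
    return -math.log(a / b)
```

A strictly periodic signal scores zero, while irregular signals score higher, which is the direction of the group differences reported in the abstract.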

    The Gluonic Field of a Heavy Quark in Conformal Field Theories at Strong Coupling

    We determine the gluonic field configuration sourced by a heavy quark undergoing arbitrary motion in N=4 super-Yang-Mills at strong coupling and large number of colors. More specifically, we compute the expectation value of the operator tr[F^2+...] in the presence of such a quark, by means of the AdS/CFT correspondence. Our results for this observable show that signals propagate without temporal broadening, just as was found for the expectation value of the energy density in recent work by Hatta et al. We attempt to shed some additional light on the origin of this feature, and propose a different interpretation for its physical significance. As an application of our general results, we examine the cases in which the quark undergoes oscillatory motion, uniform circular motion, and uniform acceleration. Via the AdS/CFT correspondence, all of our results are pertinent to any conformal field theory in 3+1 dimensions with a dual gravity formulation. Comment: 1+38 pages, 16 eps figures; v2: completed affiliation; v3: corrected typo, version to appear in JHEP

    Frame dragging with optical vortices

    General relativistic calculations in the linear regime have been made for electromagnetic beams of radiation known as optical vortices. These exotic beams of light carry a physical quantity known as optical orbital angular momentum (OAM). It is found that when a massive spinning neutral particle is placed along the optical axis, a phenomenon known as inertial frame dragging occurs. Our results are compared with those found previously for a ring laser, and an order-of-magnitude estimate of the laser intensity needed for a precession frequency of 1 Hz is given for these "steady" beams of light. Comment: 13 pages, 2 figures

    Fracturing ranked surfaces

    Discretized landscapes can be mapped onto ranked surfaces, where every element (site or bond) has a unique rank associated with its corresponding relative height. By sequentially allocating these elements according to their ranks and systematically preventing the occupation of bridges, namely elements that, if occupied, would provide global connectivity, we disclose that bridges hide a new tricritical point at an occupation fraction p = p_c, where p_c is the percolation threshold of random percolation. For any value of p in the interval p_c < p \leq 1, our results show that the set of bridges has a fractal dimension d_{BB} \approx 1.22 in two dimensions. In the limit p \rightarrow 1, a self-similar fracture is revealed as a singly connected line that divides the system in two domains. We then unveil how several seemingly unrelated physical models tumble into the same universality class and also present results for higher dimensions.
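The bridge-suppression rule described above can be sketched as a small simulation. This is an illustrative sketch only, assuming site percolation on a square lattice with random ranks (the paper's analysis also covers bond percolation and higher dimensions): sites are occupied in rank order unless occupying them would connect the left and right edges, and the forbidden sites accumulate into the bridge set.

```python
import random

def fracture_line(n, seed=0):
    """Occupy sites of an n x n lattice in random rank order, forbidding any
    site ('bridge') whose occupation would connect the left and right edges.
    Returns the occupation map and the list of bridge sites; in the p -> 1
    limit the bridges trace a line dividing the system in two."""
    rng = random.Random(seed)
    ranks = list(range(n * n))
    rng.shuffle(ranks)  # unique random rank per site

    parent = list(range(n * n + 2))
    LEFT, RIGHT = n * n, n * n + 1  # virtual nodes for the two edges

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    occupied = [False] * (n * n)
    bridges = []
    for site in ranks:
        r, c = divmod(site, n)
        neigh_roots = set()  # components this site would merge
        if c == 0:
            neigh_roots.add(find(LEFT))
        if c == n - 1:
            neigh_roots.add(find(RIGHT))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and occupied[rr * n + cc]:
                neigh_roots.add(find(rr * n + cc))
        if find(LEFT) in neigh_roots and find(RIGHT) in neigh_roots:
            bridges.append(site)  # would span the system: forbidden
            continue
        occupied[site] = True
        for root in neigh_roots:
            parent[find(site)] = root  # union with each neighbouring component
    return occupied, bridges
```

Because a blocking set for left-right site percolation must cross the lattice from top to bottom, the bridge set always contains at least n sites.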

    Null Models of Economic Networks: The Case of the World Trade Web

    In all empirical-network studies, the observed properties of economic networks are informative only if compared with a well-defined null model that can quantitatively predict the behavior of such properties in constrained graphs. However, predictions of the available null-model methods can be derived analytically only under assumptions (e.g., sparseness of the network) that are unrealistic for most economic networks like the World Trade Web (WTW). In this paper we study the evolution of the WTW using a recently proposed family of null network models. The method allows one to obtain analytically the expected value of any network statistic across the ensemble of networks that preserve on average some local properties, and are otherwise fully random. We compare expected and observed properties of the WTW in the period 1950-2000, when either the expected number of trade partners or the total country trade is kept fixed and equal to the observed quantities. We show that, in the binary WTW, node-degree sequences are sufficient to explain higher-order network properties such as disassortativity and clustering-degree correlation, especially in the last part of the sample. Conversely, in the weighted WTW, the observed sequences of total country imports and exports are not sufficient to predict higher-order patterns of the WTW. We discuss some important implications of these findings for international-trade models. Comment: 39 pages, 46 figures, 2 tables

    Lorentz violation, Gravity, Dissipation and Holography

    We reconsider Lorentz Violation (LV) at the fundamental level. We show that Lorentz Violation is intimately connected with gravity and that LV couplings in QFT must always be fields in a gravitational sector. Diffeomorphism invariance must be intact, and the LV couplings transform as tensors under coordinate/frame changes. Therefore searching for LV is one of the most sensitive ways of looking for new physics, either new interactions or modifications of known ones. Energy dissipation/Cerenkov radiation is shown to be a generic feature of LV in QFT. A general computation is done in strongly coupled theories with gravity duals. It is shown that in scale-invariant regimes, the energy dissipation rate depends non-trivially on two characteristic exponents, the Lifshitz exponent and the hyperscaling violation exponent. Comment: LaTeX, 51 pages, 9 figures. (v2) References and comments added. Misprints corrected

    Quantifying single nucleotide variant detection sensitivity in exome sequencing

    BACKGROUND: The targeted capture and sequencing of genomic regions has rapidly demonstrated its utility in genetic studies. Inherent in this technology is considerable heterogeneity of target coverage and this is expected to systematically impact our sensitivity to detect genuine polymorphisms. To fully interpret the polymorphisms identified in a genetic study it is often essential to both detect polymorphisms and to understand where and with what probability real polymorphisms may have been missed. RESULTS: Using down-sampling of 30 deeply sequenced exomes and a set of gold-standard single nucleotide variant (SNV) genotype calls for each sample, we developed an empirical model relating the read depth at a polymorphic site to the probability of calling the correct genotype at that site. We find that measured sensitivity in SNV detection is substantially worse than that predicted from the naive expectation of sampling from a binomial. This calibrated model allows us to produce single nucleotide resolution SNV sensitivity estimates which can be merged to give summary sensitivity measures for any arbitrary partition of the target sequences (nucleotide, exon, gene, pathway, exome). These metrics are directly comparable between platforms and can be combined between samples to give “power estimates” for an entire study. We estimate a local read depth of 13X is required to detect the alleles and genotype of a heterozygous SNV 95% of the time, but only 3X for a homozygous SNV. At a mean on-target read depth of 20X, commonly used for rare disease exome sequencing studies, we predict 5–15% of heterozygous and 1–4% of homozygous SNVs in the targeted regions will be missed. CONCLUSIONS: Non-reference alleles in the heterozygote state have a high chance of being missed when commonly applied read coverage thresholds are used despite the widely held assumption that there is good polymorphism detection at these coverage levels. 
Such alleles are likely to be of functional importance in population based studies of rare diseases, somatic mutations in cancer and explaining the “missing heritability” of quantitative traits.
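The "naive expectation of sampling from a binomial" mentioned in the abstract above can be made concrete. In this sketch, a heterozygous site covered by d reads yields alternate-allele reads with probability 0.5 each, and the variant is detected if at least k such reads are seen; the k = 3 calling rule is an assumption for illustration, not the authors' calibrated empirical model.

```python
from math import comb

def het_detection_prob(depth, min_alt_reads=3, p_alt=0.5):
    """Naive binomial sensitivity: probability that a heterozygous site
    covered by `depth` reads yields at least `min_alt_reads` reads carrying
    the alternate allele, each read being alt with probability `p_alt`."""
    return sum(
        comb(depth, k) * p_alt ** k * (1 - p_alt) ** (depth - k)
        for k in range(min_alt_reads, depth + 1)
    )
```

At 13X this naive model already gives roughly 99% sensitivity, above the 95% the authors measure empirically at that depth, which illustrates their point that real data perform worse than the binomial ideal.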

    First LOFAR observations at very low frequencies of cluster-scale non-thermal emission: the case of Abell 2256

    Abell 2256 is one of the best known examples of a galaxy cluster hosting large-scale diffuse radio emission that is unrelated to individual galaxies. It contains both a giant radio halo and a relic, as well as a number of head-tail sources and smaller diffuse steep-spectrum radio sources. The origin of radio halos and relics is still being debated, but over recent years it has become clear that the presence of these radio sources is closely related to galaxy cluster merger events. Here we present the results from the first LOFAR Low Band Antenna (LBA) observations of Abell 2256 between 18 and 67 MHz. To our knowledge, the image presented in this paper at 63 MHz is the deepest ever obtained at frequencies below 100 MHz. Both the radio halo and the giant relic are detected in the image at 63 MHz, and the diffuse radio emission remains visible at frequencies as low as 20 MHz. The observations confirm the presence of a previously claimed ultra-steep-spectrum source to the west of the cluster center, with a spectral index of -2.3 \pm 0.4 between 63 and 153 MHz. The steep spectrum suggests that this source is an old part of a head-tail radio source in the cluster. For the radio relic we find an integrated spectral index of -0.81 \pm 0.03, after removing the flux contribution from the other sources. This is relatively flat, which could indicate that the efficiency of particle acceleration at the shock changed substantially in the last \sim 0.1 Gyr due to an increase of the shock Mach number. In an alternative scenario, particles are re-accelerated by some mechanism in the downstream region of the shock, resulting in the relatively flat integrated radio spectrum. In the radio halo region we find indications of low-frequency spectral steepening, which may suggest that relativistic particles are accelerated in a rather inhomogeneous turbulent region. Comment: 13 pages, 13 figures, accepted for publication in A&A on April 12, 201
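The spectral indices quoted above follow the usual radio-astronomy convention S(nu) ∝ nu**alpha. A two-point estimate between two frequencies, such as the 63 and 153 MHz measurements in the abstract, is a one-liner (the flux values below are synthetic, for illustration only):

```python
from math import log

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha in the convention S(nu) ∝ nu**alpha,
    from flux densities s1, s2 measured at frequencies nu1, nu2
    (same units for each pair)."""
    return log(s1 / s2) / log(nu1 / nu2)
```

A steep index like -2.3 means the flux falls quickly with frequency, which is why such aged sources show up preferentially in low-frequency LOFAR images.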

    Zero Sound in Strange Metallic Holography

    One way to model the strange metal phase of certain materials is via a holographic description in terms of probe D-branes in a Lifshitz spacetime, characterised by a dynamical exponent z. The background geometry is dual to a strongly-interacting quantum critical theory, while the probe D-branes are dual to a finite density of charge carriers that can exhibit the characteristic properties of strange metals. We compute holographically the low-frequency and low-momentum form of the charge density and current retarded Green's functions in these systems for massless charge carriers. The results reveal a quasi-particle excitation when z<2, which in analogy with Landau Fermi liquids we call zero sound. The real part of the dispersion relation depends on momentum k linearly, while the imaginary part goes as k^2/z. When z is greater than or equal to 2 the zero sound is not a well-defined quasi-particle. We also compute the frequency-dependent conductivity in arbitrary spacetime dimensions. Using that as a measure of the charge current spectral function, we find that the zero sound appears only when the spectral function consists of a single delta function at zero frequency. Comment: 20 pages, v2 minor corrections, extended discussion in sections 5 and 6, added one footnote and four references, version published in JHEP

    Predictive modeling of die filling of the pharmaceutical granules using the flexible neural tree

    In this work, a computational intelligence (CI) technique named flexible neural tree (FNT) was developed to predict the die filling performance of pharmaceutical granules and to identify significant die filling process variables. An FNT resembles a feedforward neural network but creates a tree-like structure using genetic programming. To improve accuracy, the FNT parameters were optimized using a differential evolution algorithm. The performance of the FNT-based CI model was evaluated and compared with other CI techniques: multilayer perceptron, Gaussian process regression, and reduced error pruning tree. The accuracy of the CI model was evaluated experimentally using die filling as a case study. The die filling experiments were performed using a model shoe system and three different grades of microcrystalline cellulose (MCC) powders (MCC PH 101, MCC PH 102, and MCC DG). The feed powders were roll-compacted and milled into granules. The granules were then sieved into samples of various size classes. The mass of granules deposited into the die at different shoe speeds was measured. From these experiments, a dataset consisting of the true density, mean diameter (d50), granule size, and shoe speed as the inputs and the deposited mass as the output was generated. Cross-validation (CV) methods such as 10FCV and 5x2FCV were applied to develop and to validate the predictive models. It was found that the FNT-based CI model (for both CV methods) performed much better than the other CI models. Additionally, it was observed that process variables such as the granule size and the shoe speed had a higher impact on predictability than powder properties such as d50. Furthermore, validation of the model predictions against experimental data showed that the die filling behavior of coarse granules could be better predicted than that of fine granules.
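Of the two cross-validation schemes mentioned above, 5x2FCV is the less common: five repetitions of a shuffled 50/50 split, with each half used once for training and once for testing. A minimal sketch of the index bookkeeping only, independent of the FNT model itself:

```python
import random

def five_by_two_cv(n_samples, seed=0):
    """Index splits for 5x2 cross-validation: five repetitions of a random
    50/50 split, each half used once as the training set and once as the
    test set, giving ten (train, test) index pairs in total."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    splits = []
    for _ in range(5):
        rng.shuffle(indices)
        half = n_samples // 2
        a, b = indices[:half], indices[half:]
        splits.append((list(a), list(b)))  # train on a, test on b
        splits.append((list(b), list(a)))  # train on b, test on a
    return splits
```

Each model under comparison is then fitted and scored on the same ten pairs, so the per-split score differences are directly comparable across techniques.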