
    Ice nucleation from aqueous NaCl droplets with and without marine diatoms

    Ice formation in the atmosphere by homogeneous and heterogeneous nucleation is one of the least understood processes in cloud microphysics and climate. Here we describe our investigation of the marine environment as a potential source of atmospheric ice nuclei (IN) by experimentally observing homogeneous ice nucleation from aqueous NaCl droplets and comparing against heterogeneous ice nucleation from aqueous NaCl droplets containing intact and fragmented diatoms. Homogeneous and heterogeneous ice nucleation are studied as a function of temperature and water activity, <i>a</i><sub>w</sub>. Additional analyses are presented on the dependence of heterogeneous freezing temperatures, ice nucleation rates, ω<sub>het</sub>, ice nucleation rate coefficients, <i>J</i><sub>het</sub>, and differential and cumulative ice nuclei spectra, <i>k(T)</i> and <i>K(T)</i>, respectively, on diatom surface area and aqueous volume. Homogeneous freezing temperatures and corresponding nucleation rate coefficients agree with water-activity-based homogeneous ice nucleation theory within experimental and predictive uncertainties. Our results confirm, as predicted by classical nucleation theory, that a stochastic interpretation can be used to describe the homogeneous ice nucleation process. Heterogeneous ice nucleation initiated by intact and fragmented diatoms can be adequately represented by a modified water-activity-based ice nucleation theory: a horizontal shift in water activity, Δ<i>a</i><sub>w, het</sub> = 0.2303, of the ice melting curve describes median heterogeneous freezing temperatures. Individual freezing temperatures showed no dependence on available diatom surface area and aqueous volume.
At median diatom freezing temperatures for <i>a</i><sub>w</sub> from 0.8 to 0.99, ω<sub>het</sub> ≈ 0.11<sup>+0.06</sup><sub>−0.05</sub> s<sup>−1</sup>, <i>J</i><sub>het</sub> ≈ 1.0<sup>+1.16</sup><sub>−0.61</sub>×10<sup>4</sup> cm<sup>−2</sup> s<sup>−1</sup>, and <i>K</i> ≈ 6.2<sup>+3.5</sup><sub>−4.1</sub>×10<sup>4</sup> cm<sup>−2</sup>. The experimentally derived ice nucleation rates and nuclei spectra allow us to estimate ice particle production, which we compare with ice crystal concentrations typically observed in cirrus and polar marine mixed-phase clouds. Differences in applying time-dependent and time-independent analyses to predict ice particle production are discussed.
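
The contrast between the time-dependent (stochastic) and time-independent (singular) descriptions of freezing can be made concrete with a short sketch using the median <i>J</i><sub>het</sub> and <i>K</i> values quoted above; the droplet surface area and observation time below are illustrative assumptions, not values from the study:

```python
import math

def frozen_fraction_stochastic(J_het, area_cm2, t_s):
    """Time-dependent view: with nucleation rate coefficient J_het
    (cm^-2 s^-1) acting on diatom surface area A (cm^2), the probability
    that a droplet has frozen by time t is 1 - exp(-J_het * A * t)."""
    return 1.0 - math.exp(-J_het * area_cm2 * t_s)

def frozen_fraction_singular(K, area_cm2):
    """Time-independent view: the cumulative ice nuclei spectrum K(T)
    (cm^-2) gives a frozen fraction 1 - exp(-K * A), with no explicit
    dependence on how long the droplet is held at temperature T."""
    return 1.0 - math.exp(-K * area_cm2)

# Median values from the abstract; A = 1e-5 cm^2 and t = 10 s are assumed.
f_t = frozen_fraction_stochastic(1.0e4, 1.0e-5, 10.0)
f_K = frozen_fraction_singular(6.2e4, 1.0e-5)
```

The stochastic fraction grows with residence time while the singular fraction does not, which is the practical difference between the two analyses when predicting ice particle production.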

    Universality in solar flare and earthquake occurrence

    Earthquakes and solar flares are phenomena involving huge and rapid releases of energy, characterized by complex temporal occurrence. By analysing available experimental catalogs, we show that the stochastic processes underlying these apparently different phenomena have universal properties. Namely, both exhibit the same distributions of sizes and inter-occurrence times, and the same temporal clustering: we find afterflare sequences with power-law temporal correlations analogous to the Omori law for seismic sequences. The observed universality suggests a common approach to the interpretation of both phenomena in terms of the same driving physical mechanism.
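
The Omori-law clustering referred to here can be sketched numerically. The parameter values (K, c, p) below are illustrative, not fitted to either catalog:

```python
import math

def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: event rate n(t) = K / (c + t)**p after a
    mainshock (or large flare) at t = 0. K, c, p are illustrative."""
    return K / (c + t) ** p

def omori_cumulative(t, K=100.0, c=0.1, p=1.1):
    """Expected number of events in (0, t], the integral of n(t)."""
    if abs(p - 1.0) < 1e-12:
        return K * math.log((c + t) / c)
    return K * ((c + t) ** (1 - p) - c ** (1 - p)) / (1 - p)
```

For p > 1 the rate decays faster than 1/t, so aftershock (or afterflare) sequences cluster tightly after the triggering event, which is the temporal signature common to both catalogs.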

    Continuum-plasma solution surrounding nonemitting spherical bodies

    The classical problem of the interaction of a nonemitting spherical body with a zero mean-free-path continuum plasma is solved numerically in the full range of physically allowed free parameters (electron Debye length to body radius ratio, ion to electron temperature ratio, and body bias), and analytically in rigorously defined asymptotic regimes (weak and strong bias, weak and strong shielding, thin and thick sheath). Results include current-voltage characteristics as well as floating potential and capacitance, for both continuum and collisionless electrons. Our numerical computations show that for most combinations of physical parameters, there exists a closest asymptotic regime whose analytic solutions are accurate to 15% or better.

    Researching the use of force: The background to the international project

    This article provides the background to an international project on the use of force by the police that was carried out in eight countries. Force is often considered to be the defining characteristic of policing and much research has been conducted on the determinants, prevalence and control of the use of force, particularly in the United States. However, little work has looked at police officers’ own views on the use of force, in particular the way in which they justify it. Using a hypothetical encounter developed for this project, researchers in each country conducted focus groups with police officers in which they were encouraged to talk about the use of force. The results show interesting similarities and differences across countries and demonstrate the value of using this kind of research focus and methodology.

    Extremal Optimization for Graph Partitioning

    Extremal optimization is a new general-purpose method for approximating solutions to hard optimization problems. We study the method in detail by way of the NP-hard graph partitioning problem. We discuss the scaling behavior of extremal optimization, focusing on the convergence of the average run as a function of runtime and system size. The method has a single free parameter, which we determine numerically and justify using a simple argument. Our numerical results demonstrate that on random graphs, extremal optimization maintains consistent accuracy for increasing system sizes, with an approximation error decreasing over runtime roughly as a power law t<sup>−0.4</sup>. On geometrically structured graphs, the scaling of results from the average run suggests that these are far from optimal, with large fluctuations between individual trials. But when only the best runs are considered, results consistent with theoretical arguments are recovered.

Comment: 34 pages, RevTex4, 1 table and 20 ps-figures included; related papers available at http://www.physics.emory.edu/faculty/boettcher
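
The core of the method can be sketched for equal-size bipartitioning: assign each vertex a fitness (here, the fraction of its edges kept inside its own half), then repeatedly pick the rank-k worst vertex with probability proportional to k<sup>−τ</sup> and swap it across the cut. Everything below (the value τ = 1.4, the bridge-of-two-triangles test graph, step counts) is an illustrative assumption, not taken from the paper:

```python
import random

def cut_size(adj, side):
    """Number of edges crossing the bipartition (adj is symmetric)."""
    return sum(1 for u in range(len(adj)) for v in adj[u]
               if u < v and side[u] != side[v])

def extremal_optimization(adj, tau=1.4, steps=500, seed=0):
    """Sketch of tau-EO for equal-size graph bipartitioning."""
    rng = random.Random(seed)
    n = len(adj)
    side = [i % 2 for i in range(n)]              # balanced initial split
    best_side, best_cut = side[:], cut_size(adj, side)
    for _ in range(steps):
        # fitness: fraction of a vertex's edges kept inside its own half
        fit = [sum(side[v] == side[u] for v in adj[u]) / max(len(adj[u]), 1)
               for u in range(n)]
        order = sorted(range(n), key=lambda u: fit[u])    # worst first
        # power-law rank selection: P(k) ~ k**(-tau), k = 1..n
        k = min(int((1.0 - rng.random()) ** (-1.0 / (tau - 1.0))), n)
        u = order[k - 1]
        # swap with a random vertex on the other side to keep balance
        v = rng.choice([w for w in range(n) if side[w] != side[u]])
        side[u], side[v] = side[v], side[u]
        cut = cut_size(adj, side)
        if cut < best_cut:
            best_cut, best_side = cut, side[:]
    return best_side, best_cut

# Two triangles (0-1-2 and 3-4-5) joined by the single edge 2-3.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
side, cut = extremal_optimization(adj)
```

Always moving the single worst vertex tends to get stuck in deterministic cycles; the power-law rank selection, governed by the single free parameter τ, is what lets the search escape while still preferentially attacking poorly adapted vertices.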

    A Toy Model for Testing Finite Element Methods to Simulate Extreme-Mass-Ratio Binary Systems

    Extreme mass ratio binary systems, binaries involving stellar-mass objects orbiting massive black holes, are considered to be a primary source of gravitational radiation to be detected by the space-based interferometer LISA. The numerical modelling of these binary systems is extremely challenging because the scales involved span several orders of magnitude. One needs to handle large wavelength scales comparable to the size of the massive black hole and, at the same time, to resolve the scales in the vicinity of the small companion where radiation reaction effects play a crucial role. Adaptive finite element methods, in which quantitative control of errors is achieved automatically by finite element mesh adaptivity based on a posteriori error estimation, are a natural choice with great potential for achieving the high level of adaptivity required in these simulations. To demonstrate this, we present the results of simulations of a toy model consisting of a point-like source orbiting a black hole under the action of a scalar gravitational field.

Comment: 29 pages, 37 figures, RevTeX 4.0; minor changes to match the published version.
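
The refine-where-the-estimator-is-large loop at the heart of such adaptive methods can be illustrated in one dimension. This sketch uses a midpoint interpolation defect as a stand-in a posteriori error indicator; it only illustrates the adaptivity loop, not the finite element scheme actually used for the binary simulations:

```python
def adapt_mesh(xs, f, tol, max_iter=20):
    """Bisect every element whose estimated error exceeds `tol`.
    The estimator is the defect of linear interpolation at the element
    midpoint, a simple proxy for a true a posteriori error estimate."""
    for _ in range(max_iter):
        new_xs, refined = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            mid = 0.5 * (a + b)
            err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # midpoint defect
            if err > tol:
                new_xs.append(mid)   # refine: insert the midpoint
                refined = True
            new_xs.append(b)
        xs = new_xs
        if not refined:              # all elements meet the tolerance
            return xs
    return xs

# Example: refine [0, 1] until the estimator for f(x) = x**2 is small.
mesh = adapt_mesh([0.0, 1.0], lambda x: x * x, 1e-3)
```

For a function with localized steep features the loop concentrates elements there automatically, which is the behavior needed to resolve the neighbourhood of the small companion without uniformly refining the whole domain.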

    Finding community structure in networks using the eigenvectors of matrices

    We consider the problem of detecting communities or modules in networks, groups of vertices with a higher-than-average density of edges connecting them. Previous work indicates that a robust approach to this problem is the maximization of the benefit function known as "modularity" over possible divisions of a network. Here we show that this maximization process can be written in terms of the eigenspectrum of a matrix we call the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian in graph partitioning calculations. This result leads us to a number of possible algorithms for detecting community structure, as well as several other results, including a spectral measure of bipartite structure in networks and a new centrality measure that identifies those vertices that occupy central positions within the communities to which they belong. The algorithms and measures proposed are illustrated with applications to a variety of real-world complex networks.

Comment: 22 pages, 8 figures; minor corrections in this version.
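
The simplest spectral algorithm of this family splits a network in two by the sign of the leading eigenvector of the modularity matrix B = A − kkᵀ/2m, where A is the adjacency matrix, k the degree vector and m the number of edges. A minimal sketch (the two-triangle test graph is an illustrative example, not from the paper):

```python
import numpy as np

def leading_eigenvector_split(A):
    """Two-way community split from the sign of the leading eigenvector
    of the modularity matrix B = A - k k^T / (2m)."""
    k = A.sum(axis=1)                       # vertex degrees
    m = k.sum() / 2.0                       # number of edges
    B = A - np.outer(k, k) / (2.0 * m)      # modularity matrix
    vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    v = vecs[:, np.argmax(vals)]            # leading eigenvector
    return v >= 0                           # boolean community labels

# Two triangles (vertices 0-2 and 3-5) joined by a single edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
labels = leading_eigenvector_split(A)       # separates the two triangles
```

Unlike the graph Laplacian, B needs no prescribed group sizes: if no eigenvalue of B is positive, the network is best left undivided, which gives the algorithm a natural stopping criterion.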

    Spherical probes at ion saturation in E × B fields

    The ion saturation current to a spherical probe in the entire range of ion magnetization is computed with SCEPTIC3D, a new three-dimensional version of the kinetic code SCEPTIC designed to study transverse plasma flows. Results are compared with prior two-dimensional calculations valid in the magnetic-field-free regime (Hutchinson 2002 Plasma Phys. Control. Fusion 44 1953), and with recent semi-analytic solutions to the strongly magnetized transverse Mach probe problem (Patacchini and Hutchinson 2009 Phys. Rev. E 80 036403). At intermediate magnetization (ion Larmor radius close to the probe radius), the plasma density profiles show a complex three-dimensional structure that SCEPTIC3D can fully resolve, and, contrary to intuition, the ion current peaks provided the ion temperature is low enough. Our results are conveniently condensed in a single factor <i>M</i><sub>c</sub>, a function of ion temperature and magnetic field only, providing the theoretical calibration for a transverse Mach probe with four electrodes placed at 45° to the magnetic field in the plane of flow and magnetic field.

    From DNA sequence to application: possibilities and complications

    The development of sophisticated genetic tools during the past 15 years has facilitated a tremendous increase of fundamental and application-oriented knowledge of lactic acid bacteria (LAB) and their bacteriophages. This knowledge relates both to the assignment of open reading frames (ORFs) and to the function of non-coding DNA sequences. Comparison of the complete nucleotide sequences of several LAB bacteriophages has revealed that their chromosomes have a fixed, modular structure, each module having a set of genes involved in a specific phase of the bacteriophage life cycle. LAB bacteriophage genes and DNA sequences have been used for the construction of temperature-inducible gene expression systems, gene-integration systems, and bacteriophage defence systems. The functions of several LAB open reading frames and transcriptional units have been identified and characterized in detail. Many of these could find practical applications, such as induced lysis of LAB to enhance cheese ripening and re-routing of carbon fluxes for the production of a specific amino acid enantiomer. More knowledge has also become available concerning the function and structure of non-coding DNA positioned at or in the vicinity of promoters. In several cases the mRNA produced from this DNA contains a transcriptional terminator-antiterminator pair, in which the antiterminator can be stabilized either by uncharged tRNA or by interaction with a regulatory protein, thus preventing formation of the terminator so that mRNA elongation can proceed. Evidence has accumulated showing that carbon catabolite repression in LAB is also mediated by specific DNA elements in the vicinity of promoters governing the transcription of catabolic operons. Although some biological hurdles have yet to be overcome, the vast body of scientific information presently available allows the construction of tailor-made genetically modified LAB.
Today, it appears that societal constraints rather than biological hurdles impede the use of genetically modified LAB.
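
The ORF-assignment step mentioned above can be illustrated with a toy forward-strand scan. This is a deliberately minimal sketch: the sequence and length threshold are made up, and real annotation also examines the reverse complement, alternative start codons and ribosome binding sites:

```python
def find_orfs(seq, min_codons=30):
    """Toy ORF scan on the forward strand: return (start, end) half-open
    coordinates of ATG...stop stretches of at least `min_codons` codons,
    scanning all three reading frames."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                       # open a candidate ORF
            elif start is not None and codon in stops:
                if (i + 3 - start) // 3 >= min_codons:
                    orfs.append((start, i + 3)) # keep if long enough
                start = None                    # close and keep scanning
    return orfs

# A 32-codon toy gene: start codon, 30 lysine codons, stop codon.
orfs = find_orfs("ATG" + "AAA" * 30 + "TAA")    # -> [(0, 96)]
```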