
    Oseba: Optimization for Selective Bulk Analysis in Big Data Processing

    Selective bulk analyses, such as statistical learning on temporal/spatial data, are fundamental to a wide range of contemporary data analysis. However, with increasingly large datasets, such as weather data and marketing transactions, data organization and access become more challenging for selective bulk processing in current big data frameworks such as Spark or key-value stores. In this paper, we propose a method, referred to as Oseba, to optimize selective bulk analysis in big data processing. Oseba maintains a super index in memory for data organization, supporting fast lookup by targeting only the data involved in each selective analysis program. Oseba saves both memory and computation in comparison to the default data processing frameworks.
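
    The abstract describes the idea only at a high level. Below is a minimal, hypothetical sketch of an in-memory range index that lets a selective analysis touch only the partitions it needs; the class names and the interval-based design are illustrative assumptions, not Oseba's actual implementation.

```python
# Hypothetical sketch of an in-memory "super index" for selective bulk lookups.
# The names (SuperIndex, Partition) and the interval-based design are assumptions
# for illustration only, not Oseba's actual data structures.
import bisect
from dataclasses import dataclass, field

@dataclass
class Partition:
    lo: float                        # smallest key/timestamp stored in this partition
    hi: float                        # largest key/timestamp stored in this partition
    records: list = field(default_factory=list)

class SuperIndex:
    """Maps sorted key ranges to partitions so a selective analysis
    only loads the partitions whose range overlaps the query."""
    def __init__(self, partitions):
        self.partitions = sorted(partitions, key=lambda p: p.lo)
        self._los = [p.lo for p in self.partitions]

    def select(self, lo, hi):
        # Partitions starting beyond hi cannot overlap [lo, hi].
        start = bisect.bisect_right(self._los, hi)
        hits = []
        for p in self.partitions[:start]:
            if p.hi >= lo:           # skip partitions entirely below the query range
                hits.append(p)
        return hits

if __name__ == "__main__":
    parts = [Partition(0, 9, list(range(10))),
             Partition(10, 19, list(range(10, 20))),
             Partition(20, 29, list(range(20, 30)))]
    idx = SuperIndex(parts)
    touched = idx.select(12, 18)     # only the middle partition is scanned
    print(sum(r for p in touched for r in p.records if 12 <= r <= 18))
```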

    Bringing Reference Groups Back: Agent-based Modeling of the Spiral of Silence

    The purpose of this study is threefold: first, to bring reference groups back into the framework of the spiral of silence (SOS) by proposing an extended framework of a dual opinion climate; second, to investigate the boundary conditions of SOS; third, to identify the characteristics of SOS in terms of spatial variation and temporal evolution. Modeling SOS with agent-based models, we find that (1) there is no guarantee of SOS once reference groups are brought back; (2) the stable existence of SOS is contingent upon the comparative strength of mass media over reference groups; (3) SOS is size-dependent upon reference groups and the population; (4) the growth rate of SOS decreases over time. Thus, this research presents an extension of SOS theory. Comment: 31 pages, 1 figure
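
    As a rough illustration of a dual opinion climate, the sketch below lets each agent blend a mass-media signal with the vocal opinions of its reference group and fall silent when it perceives itself in the minority. All parameters, the update rule, and the group structure are assumptions for illustration; they are not the model specified in the paper.

```python
# Minimal, illustrative agent-based sketch of a "dual opinion climate" spiral of
# silence. Every numeric parameter and the update rule are assumptions, chosen only
# to show the mechanism of media pressure vs. reference-group support.
import random

def simulate(n_agents=200, group_size=10, media_weight=0.7,
             media_bias=0.6, steps=50, seed=0):
    rng = random.Random(seed)
    opinions = [rng.random() < 0.5 for _ in range(n_agents)]     # True = opinion A
    speaking = [True] * n_agents
    groups = [list(range(i, min(i + group_size, n_agents)))
              for i in range(0, n_agents, group_size)]

    for _ in range(steps):
        for g in groups:
            vocal = [i for i in g if speaking[i]]
            share_local = (sum(opinions[i] for i in vocal) / len(vocal)) if vocal else 0.5
            # dual opinion climate: mass-media signal blended with the reference group
            climate = media_weight * media_bias + (1 - media_weight) * share_local
            for i in g:
                perceived_support = climate if opinions[i] else 1.0 - climate
                speaking[i] = perceived_support >= 0.5          # fall silent if in perceived minority
    return sum(not s for s in speaking)

if __name__ == "__main__":
    print("silent agents after 50 steps:", simulate())
```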

    Recovering modified Newtonian dynamics by changing inertia

    Milgrom's modified Newtonian dynamics (MOND) has been remarkably successful in accounting for the rotation curves of a variety of galaxies by assuming that Newtonian dynamics breaks down at the extremely low accelerations typically found in galactic contexts. This breakdown of Newtonian dynamics may be a result of modified gravity or a manifestation of modified inertia. The MOND phenomena are derived here from three general assumptions: (1) gravitational mass is conserved; (2) the inverse-square law applies at large distances; (3) inertial mass depends on external gravitational fields. These assumptions not only recover the deep-MOND behaviour; the accelerating expansion of the universe also follows from them. A Lagrangian formulation is then developed, and it is found that the assumed universal acceleration constant $a_0$ actually varies slowly, by a factor of no more than 4. This varying 'constant' is just enough to account for the mass discrepancy present in bright clusters.
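
    For context, the standard deep-MOND limit, which the abstract says is recovered, links the observed acceleration to the Newtonian one and yields flat rotation curves. The relations below are the textbook ones, not the paper's specific derivation.

```latex
% Standard deep-MOND limit, quoted for background only.
\[
  a \ll a_0:\qquad a = \sqrt{a_0\, g_N}, \qquad g_N = \frac{GM}{r^2},
\]
\[
  \frac{v^2}{r} = a \;\Longrightarrow\; v^4 = G M a_0 ,
\]
% i.e. a flat rotation curve, with asymptotic speed $v$ independent of the radius $r$.
```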

    Renormalization of the SU(2)-symmetric model of hadrodynamics

    It is proved that the SU(2)-symmetric model of hadrodynamics can be consistently set up on the gauge-invariance principle. The quantization of the model can readily be performed in the Lagrangian path-integral formalism by using the Lagrangian undetermined multiplier method. Furthermore, it is shown that the quantum theory is invariant under a kind of BRST transformation. From the BRST symmetry of the theory, the Ward-Takahashi identities satisfied by the generating functionals of full Green functions, connected Green functions and proper vertex functions are successively derived. As an application of these identities, the Ward-Takahashi identities obeyed by the propagators and various proper vertices are derived. Based on these identities, the propagators and vertices are fully renormalized. In particular, as a result of the renormalization, the Slavnov-Taylor identity satisfied by the renormalization constants is naturally deduced. To demonstrate the renormalizability of the theory, the one-loop renormalization is carried out by means of mass-dependent momentum-space subtraction and the renormalization group approach, giving an exact one-loop effective coupling constant and one-loop effective nucleon, pion and $\rho$-meson masses.
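
    For orientation, the standard BRST transformations of a generic SU(2) gauge theory in a linear covariant gauge are shown below; the paper's model has its own field content (nucleon, pion and rho fields) and conventions, so this is textbook background only, with signs and normalizations that vary between references.

```latex
% Generic BRST transformations for an SU(2) gauge theory in a linear covariant gauge
% (standard textbook form, not taken from the paper itself):
\begin{align*}
  s A_\mu^a  &= D_\mu^{ab} c^b = \partial_\mu c^a + g\,\epsilon^{abc} A_\mu^b c^c, \\
  s c^a      &= -\tfrac{g}{2}\,\epsilon^{abc} c^b c^c, \\
  s \bar c^a &= B^a, \qquad s B^a = 0, \\
  s \psi     &= i g\, c^a T^a \psi ,
\end{align*}
% where $B^a$ is the Nakanishi-Lautrup field; nilpotency $s^2 = 0$ is what underlies the
% Ward-Takahashi/Slavnov-Taylor identities mentioned in the abstract.
```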

    Particle paths in small amplitude solitary waves with negative vorticity

    We investigate the particle trajectories in solitary waves with vorticity, where the vorticity is assumed to be negative and to decrease with depth. We show that each individual particle moves in a way similar to the irrotational case if the underlying laminar flow is favorable, that is, if the flow moves in the same direction as the wave propagation throughout the fluid; and we show that if the underlying current is not favorable, some particles in a sufficiently small solitary wave move in the direction opposite to the wave propagation, along a path with a single loop or hump. Comment: 11 pages
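
    The standard kinematic setup behind such results is sketched below for background; the paper's detailed hypotheses and proofs are not reproduced here.

```latex
% Particle paths beneath a travelling wave of speed c: the trajectory (X(t), Y(t))
% solves the ODE system driven by the velocity field (u, v),
\[
  \dot X(t) = u\bigl(X(t) - ct,\, Y(t)\bigr), \qquad
  \dot Y(t) = v\bigl(X(t) - ct,\, Y(t)\bigr),
\]
% with vorticity, in the sign convention common in the water-wave literature,
\[
  \omega = u_y - v_x \qquad (\text{here } \omega < 0 \text{ and decreasing with depth}),
\]
% and the underlying laminar flow is "favorable" when the horizontal current has the
% same sign as the wave speed $c$ throughout the fluid.
```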

    Holography and (1+1)-dimension non-relativistic Quantum Mechanics

    I generalize the classical gravity/quantum gauge theory duality of the AdS/CFT correspondence to a (1+1)-dimensional non-relativistic quantum mechanical system. It is shown that a (1+1)-dimensional non-relativistic quantum mechanical system can be reproduced from a holographic projection of (2+1)-dimensional classical gravity in the semiclassical limit. In this picture, every quantum path in 2 dimensions corresponds to a classical path of the 3-dimensional gravity under a definite holographic projection. I consider the free particle and the harmonic oscillator as two examples and find their dual gravity descriptions. Comment: 4 pages, no figures, uses revtex4.cls
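
    The semiclassical limit invoked above is the standard saddle-point limit of the quantum-mechanical path integral, recalled below for context; the paper's specific holographic projection from (2+1)-dimensional gravity is not reproduced here.

```latex
% Standard semiclassical (stationary-phase) limit of the quantum-mechanical propagator:
\[
  K(x_f, t_f;\, x_i, t_i) = \int \mathcal{D}x(t)\, e^{\,i S[x]/\hbar}
  \;\xrightarrow{\;\hbar \to 0\;}\;
  A\, e^{\,i S_{\mathrm{cl}}/\hbar},
\]
% where $S_{\mathrm{cl}}$ is the action evaluated on the classical trajectory -- the object
% that a holographic dictionary can match to a classical path on the gravity side.
```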

    Magnetic field at the center of a vortex: a new criterion for the classification of the superconductors

    The magnetic response of a superconductor depends on the thermodynamic stability of vortices in the material. Here we show that vortex stability is closely related to the ratio of the magnetic field at the vortex core center to the thermodynamic critical field. This finding provides a new criterion for classifying superconductors according to their magnetic response. Comment: 3 pages, 2 figures
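
    For comparison, the textbook classification uses the Ginzburg-Landau parameter; the abstract instead proposes the core-field ratio written below, whose threshold value is not given in the abstract and is therefore not stated here.

```latex
% Textbook Ginzburg-Landau classification vs. the quantity proposed in the abstract:
\[
  \kappa = \frac{\lambda}{\xi}, \qquad
  \kappa < \tfrac{1}{\sqrt{2}}\ \text{(type-I)}, \quad
  \kappa > \tfrac{1}{\sqrt{2}}\ \text{(type-II)};
  \qquad \text{proposed criterion based on } \frac{B(0)}{H_c}.
\]
```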

    Aubry-Mather and weak KAM theories for contact Hamiltonian systems. Part 1: Strictly increasing case

    This paper is concerned with the study of Aubry-Mather and weak KAM theories for contact Hamiltonian systems with Hamiltonians $H(x,u,p)$ defined on $T^*M\times\mathbb{R}$, satisfying Tonelli conditions with respect to $p$ and a strictly increasing dependence on $u$, where $M$ is a connected, closed and smooth manifold. First, we show the uniqueness of the backward weak KAM solution of the corresponding Hamilton-Jacobi equation. Using the unique backward weak KAM solution $u_-$, we prove the existence of the maximal forward weak KAM solution $u_+$. Next, we analyse the Aubry set for the contact Hamiltonian system, showing that it is the intersection of two Legendrian pseudographs $G_{u_-}$ and $G_{u_+}$, and that the projection $\pi: T^*M\times\mathbb{R}\to M$ induces a bi-Lipschitz homeomorphism $\pi|_{\tilde{\mathcal{A}}}$ from the Aubry set $\tilde{\mathcal{A}}$ onto the projected Aubry set $\mathcal{A}$. At last, we introduce the notion of barrier functions and study their properties along calibrated curves. Our analysis is based on a recent method of [43,44]. Comment: 34 pages
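
    The Hamilton-Jacobi equation referred to above is the stationary contact-type equation, recalled below in its usual form; depending on normalization a critical constant may appear on the right-hand side.

```latex
% Stationary contact Hamilton-Jacobi equation on the closed manifold M:
\[
  H\bigl(x,\, u(x),\, d_x u(x)\bigr) = 0, \qquad x \in M,
\]
% whose backward weak KAM solution $u_-$ and maximal forward weak KAM solution $u_+$
% generate the Legendrian pseudographs $G_{u_-}$ and $G_{u_+}$ mentioned in the abstract.
```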

    Non-commutative Discretize-then-Optimize Algorithms for Elliptic PDE-Constrained Optimal Control Problems

    In this paper, we analyze the convergence of several discretize-then-optimize algorithms, based on either a second-order or a fourth-order finite difference discretization, for solving elliptic PDE-constrained optimization or optimal control problems. To ensure the convergence of a discretize-then-optimize algorithm, one well-accepted criterion is to choose or redesign the discretization scheme such that the resulting discretize-then-optimize algorithm commutes with the corresponding optimize-then-discretize algorithm; in other words, both types of algorithms give rise to exactly the same discrete optimality system. However, such an approach is not trivial. In this work, by investigating a simple distributed elliptic optimal control problem, we first show that enforcing such a stringent commutativity condition is sufficient but not necessary for achieving the desired convergence. We then propose to add suitable $H_1$ semi-norm penalty/regularization terms to recover the convergence lost to the inconsistency caused by the lack of commutativity. Numerical experiments are carried out to verify our theoretical analysis and to validate the effectiveness of the proposed regularization techniques. Comment: Revised on Aug 1, 2018. To appear in Journal of Computational and Applied Mathematics
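
    The "simple distributed elliptic optimal control problem" mentioned above takes, in its canonical textbook form, the shape below; the paper's exact boundary conditions, norms and sign conventions may differ, so this is stated only as background.

```latex
% Canonical distributed elliptic optimal control problem:
\[
  \min_{y,\,u}\ \frac12\,\|y - y_d\|_{L^2(\Omega)}^2 + \frac{\alpha}{2}\,\|u\|_{L^2(\Omega)}^2
  \quad\text{s.t.}\quad -\Delta y = u \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega.
\]
% Its continuous first-order optimality (KKT) system, up to sign conventions, reads
\[
  -\Delta y = u, \qquad -\Delta p = y - y_d, \qquad \alpha u + p = 0,
  \qquad y = p = 0 \ \text{on } \partial\Omega;
\]
% discretize-then-optimize and optimize-then-discretize "commute" exactly when both
% produce the same discrete version of this system.
```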

    Structuring Relevant Feature Sets with Multiple Model Learning

    Feature selection is one of the most prominent learning tasks, especially for high-dimensional datasets in which the goal is to understand the mechanisms that underlie the data. However, most feature selection methods deliver just a flat set of relevant features and provide no further information on what kinds of structures, e.g. feature groupings, might underlie that set. In this paper we propose a new learning paradigm whose goal is to uncover the structures that underlie the set of relevant features for a given learning problem. We uncover two types of feature sets: non-replaceable features, which contain important information about the target variable and cannot be replaced by other features, and functionally similar feature sets, which can be used interchangeably in learned models, given the presence of the non-replaceable features, with no change in predictive performance. To do so we propose a new learning algorithm that learns a number of disjoint models using a model-disjointness regularization constraint together with a constraint on the predictive agreement of the disjoint models. We explore the behavior of our approach on a number of high-dimensional datasets and show that, as expected by construction, the learned models satisfy a number of properties: model disjointness, high predictive agreement, and predictive performance similar to that of models learned on the full set of relevant features. The ability to structure the set of relevant features in this manner can become a valuable tool in different applications of scientific knowledge discovery.
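
    To make the idea of disjoint-yet-agreeing models concrete, the sketch below greedily fits sparse models on disjoint feature subsets and measures their predictive agreement. This is only a simplified stand-in: the paper's algorithm learns the models jointly through a disjointness regularizer and an agreement constraint, which this greedy, sequential approximation does not implement.

```python
# Illustrative sketch: greedy construction of feature-disjoint sparse models and a
# check of their predictive agreement. Not the authors' algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def disjoint_models(X, y, n_models=3, C=0.1):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    available = np.arange(X.shape[1])
    models, preds = [], []
    for _ in range(n_models):
        if available.size == 0:
            break
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X_tr[:, available], y_tr)
        used = available[np.abs(clf.coef_).ravel() > 1e-8]   # features this model relies on
        preds.append(clf.predict(X_te[:, available]))
        models.append((used, clf.score(X_te[:, available], y_te)))
        available = np.setdiff1d(available, used)            # enforce disjointness greedily
    # predictive agreement between consecutive disjoint models
    agreements = [float(np.mean(a == b)) for a, b in zip(preds, preds[1:])]
    return models, agreements

if __name__ == "__main__":
    X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                               n_redundant=10, random_state=0)
    models, agreements = disjoint_models(X, y)
    for i, (used, acc) in enumerate(models):
        print(f"model {i}: {used.size} features, test accuracy {acc:.2f}")
    print("pairwise agreement:", [round(a, 2) for a in agreements])
```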