8,377 research outputs found

    The B36/S125 "2x2" Life-Like Cellular Automaton

    The B36/S125 (or "2x2") cellular automaton is one that takes place on a 2D square lattice, much like Conway's Game of Life. Although it exhibits high-level behaviour similar to Life, such as chaotic but eventually stable evolution and the existence of a natural diagonal glider, the individual objects the rule contains generally look very different from their Life counterparts. In this article, a history of notable discoveries in the 2x2 rule is provided, and the fundamental patterns of the automaton are described. Some theoretical results are derived along the way, including a proof that the speed limits for diagonal and orthogonal spaceships in this rule are c/3 and c/2, respectively. A Margolus block cellular automaton emulated by 2x2 is investigated, and in particular a family of oscillators made up entirely of $2\times 2$ blocks is analyzed and used to show that there exist oscillators with period $2^m(2^k - 1)$ for any integers $m, k \geq 1$. Comment: 18 pages, 19 figures.
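    As a quick illustration of the rule itself (not taken from the article), here is a minimal sketch of one B36/S125 generation on a toroidal grid, assuming a NumPy array of 0/1 cells; the function and grid names are illustrative.

```python
import numpy as np

BIRTH = {3, 6}       # a dead cell becomes alive with 3 or 6 live Moore neighbours
SURVIVE = {1, 2, 5}  # a live cell survives with 1, 2 or 5 live Moore neighbours

def step(grid):
    """One generation of B36/S125 on a periodic (toroidal) grid of 0/1 cells."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbours, list(BIRTH))
    survives = (grid == 1) & np.isin(neighbours, list(SURVIVE))
    return (born | survives).astype(grid.dtype)

# Example: evolve a random soup for a few generations.
rng = np.random.default_rng(0)
world = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
for _ in range(10):
    world = step(world)
```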

    On computational irreducibility and the predictability of complex physical systems

    Using elementary cellular automata (CA) as an example, we show how to coarse-grain CA in all classes of Wolfram's classification. We find that computationally irreducible (CIR) physical processes can be predictable, and even computationally reducible, at a coarse-grained level of description. The resulting coarse-grained CA that we construct emulate the large-scale behavior of the original systems without accounting for small-scale details. At least one of the CA that can be coarse-grained is irreducible and known to be a universal Turing machine. Comment: 4 pages, 2 figures, to be published in PR
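    For readers unfamiliar with the setup, a minimal sketch (not the paper's code) of evolving an elementary CA from its Wolfram rule number, i.e. the class of systems being coarse-grained:

```python
import numpy as np

def eca_step(row, rule):
    """One synchronous update of an elementary CA given its Wolfram rule number."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    index = 4 * left + 2 * row + right          # 3-bit neighbourhood as an integer
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=row.dtype)
    return table[index]

# Example: 60 steps of rule 110 from a single live cell on a ring of 101 sites.
row = np.zeros(101, dtype=np.uint8)
row[50] = 1
history = [row]
for _ in range(60):
    row = eca_step(row, 110)
    history.append(row)
```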

    The rank of the semigroup of transformations stabilising a partition of a finite set

    Let $\mathcal{P}$ be a partition of a finite set $X$. We say that a full transformation $f:X\to X$ preserves (or stabilizes) the partition $\mathcal{P}$ if for all $P\in \mathcal{P}$ there exists $Q\in \mathcal{P}$ such that $Pf\subseteq Q$. Let $T(X,\mathcal{P})$ denote the semigroup of all full transformations of $X$ that preserve the partition $\mathcal{P}$. In 2005 Huisheng found an upper bound for the minimum size of the generating sets of $T(X,\mathcal{P})$, when $\mathcal{P}$ is a partition in which all of its parts have the same size. In addition, Huisheng conjectured that his bound was exact. In 2009 the first and last authors used representation theory to completely solve Huisheng's conjecture. The goal of this paper is to solve the much more complex problem of finding the minimum size of the generating sets of $T(X,\mathcal{P})$, when $\mathcal{P}$ is an arbitrary partition. Again we use representation theory to find the minimum number of elements needed to generate the wreath product of finitely many symmetric groups, and then use this result to solve the problem. The paper ends with a number of problems for experts in group and semigroup theory.
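    A small sketch of the defining condition, assuming transformations represented as Python dicts on a finite set; the example partition and maps are illustrative, not taken from the paper.

```python
def preserves_partition(f, partition):
    """True iff for every part P the image Pf = {f(x) : x in P} lies inside a single part Q."""
    for part in partition:
        image = {f[x] for x in part}
        if not any(image <= Q for Q in partition):
            return False
    return True

# Example on X = {0,...,5} with a partition into parts of equal size 2.
P = [{0, 1}, {2, 3}, {4, 5}]
f = {0: 2, 1: 3, 2: 2, 3: 2, 4: 0, 5: 1}   # each part is mapped into a single part
g = {0: 0, 1: 2, 2: 4, 3: 5, 4: 1, 5: 3}   # the image of {0, 1} meets two parts
print(preserves_partition(f, P), preserves_partition(g, P))   # True False
```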

    Deep Transfer Learning for Error Decoding from Non-Invasive EEG

    We recorded high-density EEG in a flanker task experiment (31 subjects) and an online BCI control paradigm (4 subjects). On these datasets, we evaluated the use of transfer learning for error decoding with deep convolutional neural networks (deep ConvNets). In comparison with a regularized linear discriminant analysis (rLDA) classifier, ConvNets were significantly better in both intra- and inter-subject decoding, achieving an average accuracy of 84.1% within subject and 81.7% on unknown subjects (flanker task). Neither method was, however, able to generalize reliably between paradigms. Visualization of features the ConvNets learned from the data showed plausible patterns of brain activity, revealing both similarities and differences between the different kinds of errors. Our findings indicate that deep learning techniques are useful to infer information about the correctness of action in BCI applications, particularly for the transfer of pre-trained classifiers to new recording sessions or subjects. Comment: 6 pages, 9 figures, The 6th International Winter Conference on Brain-Computer Interface 201
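    A hedged sketch of the kind of rLDA baseline mentioned above, using scikit-learn's shrinkage LDA on flattened epochs of synthetic data; the shapes and names are illustrative, not the paper's pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 32, 50   # assumed epoch dimensions
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)           # 0 = correct trial, 1 = error trial

# Flatten each epoch to one feature vector; automatic (Ledoit-Wolf) shrinkage
# keeps the covariance estimate usable when there are more features than trials.
rlda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")
scores = cross_val_score(rlda, X.reshape(n_trials, -1), y, cv=5)
print("cross-validated accuracy (chance level on this random data):", scores.mean())
```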

    Comptonization and the Spectra of Accretion-Powered X-Ray Pulsars

    Accretion-powered X-ray pulsars are among the most luminous X-ray sources in the Galaxy. However, despite decades of theoretical and observational work since their discovery, no satisfactory model for the formation of the observed X-ray spectra has emerged. In this paper, we report on a self-consistent calculation of the spectrum emerging from a pulsar accretion column that includes an explicit treatment of the bulk and thermal Comptonization occurring in the radiation-dominated shocks that form in the accretion flows. Using a rigorous eigenfunction expansion method, we obtain a closed-form expression for the Green's function describing the upscattering of monochromatic radiation injected into the column. The Green's function is convolved with bremsstrahlung, cyclotron, and blackbody source terms to calculate the emergent photon spectrum. We show that energization of photons in the shock naturally produces an X-ray spectrum with a relatively flat continuum and a high-energy exponential cutoff. Finally, we demonstrate that our model yields good agreement with the spectra of the bright pulsar Her X-1 and the low-luminosity pulsar X Per. Comment: 6 pages, 2 figures, to appear in "The Multicoloured Landscape of Compact Objects and their Explosive Progenitors" (Cefalu, Sicily, June 2006), eds. L. Burderi et al. (New York: AIP).
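    A schematic sketch of the convolution step only, with simple placeholder functional forms standing in for the paper's eigenfunction-expansion Green's function and seed spectra; energies, indices, and cutoffs are purely illustrative.

```python
import numpy as np

E = np.logspace(-1, 2, 300)      # emergent photon energies [keV]
E0 = np.logspace(-1, 1, 200)     # injection (seed) photon energies [keV]

def greens(E, E0, alpha=2.0, Ec=20.0):
    # Placeholder: upscattered power law above the injection energy with an
    # exponential rollover, mimicking a flat continuum plus high-energy cutoff.
    return np.where(E >= E0, (E / E0) ** (-alpha) * np.exp(-E / Ec), 0.0)

def seed_brems(E0, kT=6.0):
    # Placeholder bremsstrahlung-like seed spectrum.
    return E0 ** (-1.0) * np.exp(-E0 / kT)

# Emergent spectrum F(E) = integral of G(E, E0) S(E0) dE0, here as a weighted sum.
weights = np.gradient(E0)
F = (greens(E[:, None], E0[None, :]) * seed_brems(E0)[None, :]) @ weights
```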

    On the numerical analysis of triplet pair production cross-sections and the mean energy of produced particles for modelling electron-photon cascade in a soft photon field

    The double and single differential cross-sections with respect to positron and electron energies, as well as the total cross-section of triplet production in the laboratory frame, are calculated numerically in order to develop a Monte Carlo code for modelling electron-photon cascades in a soft photon field. To avoid numerical integration irregularities of the integrands, which are inherent to problems of this type, we have used suitable substitutions in combination with the modern, powerful program Mathematica, allowing one to achieve reliable, higher-precision results. The results obtained for the total cross-section closely agree with others estimated analytically or by a different numerical approach. The results for the double and single differential cross-sections turn out to be somewhat different from some reported recently. The mean energy of the produced particles, as a function of the characteristic collisional parameter (the electron rest frame photon energy), is calculated and approximated by an analytical expression that revises other known approximations over a wide range of values of the argument. The primary-electron energy loss rate due to triplet pair production is shown to prevail over the inverse Compton scattering loss rate at several ($\sim 2$) orders of magnitude higher interaction energy than that predicted formerly. Comment: 18 pages, 8 figures, 2 tables, LaTeX2e, Iopart.cls, Iopart12.clo, Iopams.st
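    A toy illustration (not the actual triplet-production integrand) of the substitution technique named above: removing an integrable endpoint singularity by a change of variables before quadrature.

```python
import numpy as np
from scipy.integrate import quad

# Direct form: integral of cos(x)/sqrt(x) over [0, 1] has an integrable
# 1/sqrt(x) singularity at the lower endpoint.
direct, _ = quad(lambda x: np.cos(x) / np.sqrt(x), 0.0, 1.0)

# Substituting x = u**2 (dx = 2u du) removes the singularity completely:
# the integrand becomes 2*cos(u**2), which is smooth on [0, 1].
smooth, _ = quad(lambda u: 2.0 * np.cos(u**2), 0.0, 1.0)

print(direct, smooth)   # both are about 1.80905, but the second form needs no special handling
```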

    Coarse-graining of cellular automata, emergence, and the predictability of complex systems

    We study the predictability of emergent phenomena in complex systems. Using nearest-neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing device and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable, and even decidable, at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally, we argue that the large-scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large-scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large-scale simplicity as a pattern-formation mechanism in which large-scale patterns are forced upon the system by the simplicity of the rules that govern the large-scale dynamics. Comment: 18 pages, 9 figures.
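    A minimal sketch of the consistency check underlying such a coarse-graining: project blocks of fine cells to coarse cells and require that evolving then projecting equals projecting then evolving with a coarse rule. The verified example uses the trivial shift rule 170 (which coarse-grains to itself under an OR projection), chosen for illustration rather than taken from the paper.

```python
import numpy as np

def eca_step(row, rule):
    left, right = np.roll(row, 1), np.roll(row, -1)
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=row.dtype)
    return table[4 * left + 2 * row + right]

def project(row, block=2):
    # OR projection: a coarse cell is 1 if any fine cell in its block is 1.
    return row.reshape(-1, block).max(axis=1)

def commutes(fine_rule, coarse_rule, n_cells=64, block=2, trials=100):
    # One coarse time step corresponds to `block` fine time steps.
    rng = np.random.default_rng(1)
    for _ in range(trials):
        fine = rng.integers(0, 2, size=n_cells).astype(np.uint8)
        evolved = fine
        for _ in range(block):
            evolved = eca_step(evolved, fine_rule)
        if not np.array_equal(project(evolved, block),
                              eca_step(project(fine, block), coarse_rule)):
            return False
    return True

# Rule 170 just shifts cells; shifting by two fine cells shifts one coarse cell,
# so rule 170 is a valid (trivial) coarse-graining of itself under OR projection.
print(commutes(170, 170))   # True
```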

    Boolean networks with reliable dynamics

    We investigated the properties of Boolean networks that follow a given reliable trajectory in state space. A reliable trajectory is defined as a sequence of states which is independent of the order in which the nodes are updated. We explored numerically the topology, the update functions, and the state-space structure of these networks, which we constructed using a minimum number of links and the simplest update functions. We found that the clustering coefficient is larger than in random networks, and that the probability distribution of three-node motifs is similar to that found in gene regulation networks. Among the update functions, only a subset of all possible functions occurs, and they can be classified according to their probability. More homogeneous functions occur more often, leading to a dominance of canalyzing functions. Finally, we studied the entire state space of the networks. We observed that with increasing system size, fixed points become more dominant, moving the networks closer to the frozen phase. Comment: 11 pages, 15 figures.
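    A hedged, simplified sketch of one way to test the order-independence that defines a reliable step: update the nodes sequentially in every possible order and require a unique outcome. The example network and its functions are illustrative, not the paper's construction.

```python
from itertools import permutations

# A 3-node example network; each update function reads the full current state.
update = [
    lambda s: s[1] and s[2],   # node 0
    lambda s: s[0],            # node 1
    lambda s: s[0],            # node 2
]

def sweep(state, order):
    """Update the nodes sequentially (asynchronously) in the given order."""
    s = list(state)
    for i in order:
        s[i] = update[i](tuple(s))
    return tuple(s)

def order_independent(state):
    """True iff every update order leads from `state` to the same successor."""
    successors = {sweep(state, order) for order in permutations(range(len(state)))}
    return len(successors) == 1

print(order_independent((True, True, True)))    # True: all orders agree
print(order_independent((False, True, True)))   # False: the outcome depends on the order
```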

    $1/d$ Expansion for $k$-Core Percolation

    The physics of $k$-core percolation pertains to those systems whose constituents require a minimum number of $k$ connections to each other in order to participate in any clustering phenomenon. Examples of such a phenomenon range from orientational ordering in solid ortho-para ${\rm H}_2$ mixtures, to the onset of rigidity in bar-joint networks, to dynamical arrest in glass-forming liquids. Unlike ordinary ($k=1$) and biconnected ($k=2$) percolation, the mean-field $k\ge 3$-core percolation transition is both continuous and discontinuous, i.e. there is a jump in the order parameter accompanied by a diverging length scale. To determine whether or not this hybrid transition survives in finite dimensions, we present a $1/d$ expansion for $k$-core percolation on the $d$-dimensional hypercubic lattice. We show that to order $1/d^3$ the singularity in the order parameter and in the susceptibility occur at the same value of the occupation probability. This result suggests that the unusual hybrid nature of the mean-field $k$-core transition survives in high dimensions. Comment: 47 pages, 26 figures, revtex
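    A minimal sketch of the underlying pruning algorithm on a small diluted periodic square lattice (a low-dimensional stand-in, not the paper's high-dimensional expansion); the occupation probability and lattice size are illustrative.

```python
import numpy as np

def k_core(occupied, k):
    """Repeatedly delete occupied sites with fewer than k occupied neighbours."""
    occ = occupied.copy()
    while True:
        # Count occupied nearest neighbours on the periodic square lattice.
        nbrs = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0)
                + np.roll(occ, 1, 1) + np.roll(occ, -1, 1))
        cull = occ & (nbrs < k)
        if not cull.any():
            return occ
        occ = occ & ~cull

rng = np.random.default_rng(0)
lattice = rng.random((128, 128)) < 0.75     # occupy sites with probability p = 0.75
for k in (2, 3):
    core = k_core(lattice, k)
    print(f"p = 0.75: {k}-core fraction = {core.mean():.3f}")
```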

    Fundamental Cycle of a Periodic Box-Ball System

    We investigate a soliton cellular automaton (Box-Ball system) with periodic boundary conditions. Since the cellular automaton is a deterministic dynamical system that takes only a finite number of states, it will exhibit periodic motion. We determine its fundamental cycle for a given initial state. Comment: 28 pages, 6 figures.
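    A small sketch of the periodic box-ball update via the standard cyclic "10-elimination" pairing (each ball moves to the nearest free box to its right), together with a naive search for the cycle length of an illustrative initial state; this brute-force search is not the paper's combinatorial determination of the fundamental cycle.

```python
def pbbs_step(state):
    """One step of the periodic box-ball system via cyclic '10-elimination'."""
    n = len(state)
    assert 2 * sum(state) < n, "need strictly fewer balls than empty boxes"
    target = [None] * n              # target[i]: box that the ball at position i moves to
    active = list(range(n))          # positions not yet paired off
    while any(state[p] == 1 for p in active):
        # Pair every active ball (1) whose next active position holds a free box (0).
        pairs = [(p, active[(i + 1) % len(active)])
                 for i, p in enumerate(active)
                 if state[p] == 1 and state[active[(i + 1) % len(active)]] == 0]
        matched = {p for pair in pairs for p in pair}
        for p, q in pairs:
            target[p] = q
        active = [p for p in active if p not in matched]
    new = [0] * n
    for p, q in enumerate(target):
        if q is not None:
            new[q] = 1               # the ball at p lands in its paired box q
    return tuple(new)

# Fundamental cycle of an illustrative initial state: evolve until a state repeats.
state, seen, t = (1, 1, 0, 1, 0, 0, 0, 0), {}, 0
while state not in seen:
    seen[state] = t
    state, t = pbbs_step(state), t + 1
print("fundamental cycle length:", t - seen[state])
```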