
    Public channel cryptography by synchronization of neural networks and chaotic maps

    Two different kinds of synchronization have been applied to cryptography: synchronization of chaotic maps by one common external signal, and synchronization of neural networks by mutual learning. By combining these two mechanisms, with the external signal to the chaotic maps supplied by the synchronizing nets, we construct a hybrid network which allows a secure generation of secret encryption keys over a public channel. The security with respect to attacks, recently proposed by Shamir et al., is increased by the chaotic synchronization. Comment: 4 pages
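The first ingredient of the hybrid, synchronization of chaotic maps by a common external signal, can be sketched in a few lines. This is an illustrative toy (logistic maps with linear coupling to the shared signal, not the paper's construction): two maps that never exchange their internal states converge once the coupling `eps` makes the dynamics contracting.

```python
import numpy as np

def step(x, s, eps=0.8, r=4.0):
    """One iteration of a logistic map driven by a common external signal s.
    The distance between two maps fed the same s shrinks by at most
    (1 - eps) * r per step, so they synchronize whenever eps > 1 - 1/r."""
    return (1.0 - eps) * r * x * (1.0 - x) + eps * s

rng = np.random.default_rng(0)
x, y = 0.3, 0.7                      # two maps with different initial states
for _ in range(200):
    s = rng.random()                 # the shared (public) driving signal
    x, y = step(x, s), step(y, s)

print(abs(x - y))                    # difference is far below 1e-12
```

With `eps = 0.8` the per-step contraction factor is at most 0.8, so the initial distance 0.4 decays geometrically; the states themselves remain chaotic-looking to an observer of the public signal alone.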

    Cryptography based on neural networks - analytical results

    The mutual learning process between two parity feed-forward networks with discrete and continuous weights is studied analytically, and we find that the number of steps required to achieve full synchronization between the two networks is finite in the case of discrete weights. The synchronization process is shown to be non-self-averaging, and the analytical solution is based on random auxiliary variables. The learning time of an attacker trying to imitate one of the networks is examined analytically and found to be much longer than the synchronization time. The analytical results agree with simulations.
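A minimal simulation of mutual learning between two parity machines with discrete weights, in the standard Hebbian variant (parameters K, N, L and the sign(0) → -1 convention are implementation choices, not taken from the abstract): both parties see the same public inputs, exchange only their output bits, and update toward each other until the weight matrices coincide.

```python
import numpy as np

K, N, L = 3, 10, 3        # hidden units, inputs per unit, synaptic depth

def output(w, x):
    """Parity machine: the output is the product of the hidden-unit signs
    (sign(0) is mapped to -1 by convention)."""
    h = np.sum(w * x, axis=1)
    sigma = np.where(h > 0, 1, -1)
    return sigma, int(np.prod(sigma))

rng = np.random.default_rng(1)
wA = rng.integers(-L, L + 1, size=(K, N))     # secret weights of party A
wB = rng.integers(-L, L + 1, size=(K, N))     # secret weights of party B

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))      # common public input
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:                              # update only when outputs agree
        for w, s, t in ((wA, sA, tA), (wB, sB, tB)):
            for k in range(K):
                if s[k] == t:                 # Hebbian rule on matching units
                    w[k] = np.clip(w[k] + t * x[k], -L, L)
    steps += 1
    assert steps < 100_000, "did not synchronize within the step budget"

print(steps)                                  # finite synchronization time
```

The bounded, discrete weight space makes full synchronization an absorbing state, which is why the loop terminates after a finite number of steps, in line with the analytical result quoted above.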

    Numerical Diagonalisation Study of the Trimer Deposition-Evaporation Model in One Dimension

    We study the model of deposition-evaporation of trimers on a line recently introduced by Barma, Grynberg and Stinchcombe. The stochastic matrix of the model can be written in the form of the Hamiltonian of a quantum spin-1/2 chain with three-spin couplings, given by $H = \sum_i [(1 - \sigma_i^-\sigma_{i+1}^-\sigma_{i+2}^-)\,\sigma_i^+\sigma_{i+1}^+\sigma_{i+2}^+ + \mathrm{h.c.}]$. We study by exact numerical diagonalization of $H$ the variation of the gap in the eigenvalue spectrum with the system size for rings of size up to 30. For the sector corresponding to the initial condition in which all sites are empty, we find that the gap vanishes as $L^{-z}$, where the gap exponent $z$ is approximately $2.55 \pm 0.15$. This model is equivalent to an interfacial roughening model where the dynamical variables at each site are matrices. From our estimate for the gap exponent we conclude that the model belongs to a new universality class, distinct from that studied by Kardar, Parisi and Zhang. Comment: 11 pages, 2 figures (included)
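The stochastic dynamics behind this Hamiltonian is easy to state directly. A minimal Monte Carlo sketch (ring size, step count, and equal deposition/evaporation rates are arbitrary choices here): trimers deposit on three adjacent empty sites, and any three adjacent particles can evaporate, not only previously deposited triples.

```python
import random

def simulate(L=30, steps=20000, seed=0):
    """Random-sequential deposition-evaporation of trimers on a ring of L
    sites. Deposition fills three adjacent empty sites; evaporation empties
    any three adjacent occupied sites."""
    rng = random.Random(seed)
    occ = [0] * L                       # the all-empty initial-condition sector
    for _ in range(steps):
        i = rng.randrange(L)
        triple = [i, (i + 1) % L, (i + 2) % L]
        vals = [occ[j] for j in triple]
        if vals == [0, 0, 0]:           # deposit a trimer
            for j in triple:
                occ[j] = 1
        elif vals == [1, 1, 1]:         # evaporate three adjacent particles
            for j in triple:
                occ[j] = 0
    return occ

occ = simulate()
print(sum(occ) % 3)   # every move changes N by +-3, so N stays 0 (mod 3)
```

Because each move changes the particle number by exactly ±3, the particle number modulo 3 is conserved, which is one reason the dynamics decomposes into sectors such as the all-empty one studied above.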

    Genetic attack on neural cryptography

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially, and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size. Comment: 8 pages, 12 figures; section 5 amended, typos corrected
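The selection step of such an attack can be illustrated without the full parity-machine machinery. Below is a deliberately simplified, elitist genetic search against a plain perceptron with ±1 weights (all parameters, the fitness measure, and the mutation scheme are assumptions for illustration; the paper's actual attack branches attacker networks over internal representations of parity machines):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, GENS = 21, 40, 60          # odd N so that outputs are never zero

W = rng.choice([-1, 1], size=N)                 # secret weights to imitate
X = rng.choice([-1, 1], size=(200, N))          # observed public inputs
tau = np.sign(X @ W)                            # observed public outputs

def fitness(pop):
    """Fraction of observed outputs each candidate network reproduces."""
    return (np.sign(X @ pop.T) == tau[:, None]).mean(axis=0)

pop = rng.choice([-1, 1], size=(M, N))          # random attacker population
f0 = fitness(pop).max()
for _ in range(GENS):
    f = fitness(pop)
    elite = pop[np.argsort(f)[-M // 2:]]        # keep the fittest half intact
    flips = rng.random(elite.shape) < 0.05      # mutate copies of survivors
    children = np.where(flips, -elite, elite)
    pop = np.vstack([elite, children])
f1 = fitness(pop).max()
print(f0, f1)                                   # best fitness never decreases
```

Elitism (the fittest half is carried over unchanged) guarantees the best fitness is monotone; the abstract's point is that for parity machines this kind of search still needs exponentially many networks as the synaptic depth grows.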

    Minority Game of price promotions in fast moving consumer goods markets

    A variation of the Minority Game has been applied to study the timing of promotional actions at retailers in the fast moving consumer goods market. The underlying hypotheses for this work are that price promotions are more effective when fewer competitors than average run a promotion, and that a promotion strategy can be based on past sales data. The first assumption has been checked by analysing 1467 promotional actions for three products on the Dutch market (ketchup, mayonnaise and curry sauce) over a 120-week period, both on an aggregated level and on retailer chain level. The second assumption was tested by analysing past sales data with the Minority Game. This revealed that high or low competitor promotional pressure for actual ketchup, mayonnaise, curry sauce and barbecue sauce markets is to some extent predictable, up to a forecast horizon of some 10 weeks. Whereas a random guess would be right 50% of the time, a single-agent game can predict the market with a success rate of 56% for a 6 to 9 week forecast. This number is the same for all four mentioned fast moving consumer markets. For a multi-agent game a larger variability in the success rate is obtained, but predictability can be as high as 65%. Contrary to expectation, the actual market does the opposite of what game theory would predict. This points to a systematic oscillation in the market. Even though this result is not fully understood, merely observing that this trend is present in the data could lead to exploitable trading benefits. As a check, random history strings were generated from which the statistical variation in the game prediction was studied. This shows that the odds are 1:1,000,000 that the observed pattern in the market is based on coincidence. Comment: 19 pages, 10 figures, accepted for publication in Physica
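The core idea, mining a binary high/low promotional-pressure string for predictable structure, can be sketched with a simple history-table predictor (this is a minimal stand-in, not the paper's Minority Game agents; the toy series and memory length are invented for illustration):

```python
from collections import Counter, defaultdict

def fit_history_table(series, m=3):
    """Count which symbol follows each m-symbol history in the training data."""
    table = defaultdict(Counter)
    for t in range(m, len(series)):
        table[tuple(series[t - m:t])][series[t]] += 1
    return table

def predict(table, history):
    """Predict the majority follower of the current history (default 0)."""
    counts = table.get(tuple(history))
    return counts.most_common(1)[0][0] if counts else 0

# toy 'promotional pressure' string: 1 = high, 0 = low, perfectly periodic
series = [1, 0, 0, 1] * 30
table = fit_history_table(series[:80], m=3)

hits = sum(predict(table, series[t - 3:t]) == series[t]
           for t in range(80, len(series)))
print(hits / (len(series) - 80))      # 1.0 on this periodic toy series
```

A real market string is of course far noisier; the abstract's 56-65% success rates measure exactly how much of this kind of history-dependence survives in the actual sales data.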

    Training a perceptron in a discrete weight space

    On-line and batch learning of a perceptron in a discrete weight space, where each weight can take $2L+1$ different values, are examined analytically and numerically. The learning algorithm is based on training a continuous perceptron and predicting with the clipped weights. The learning is described by a new set of order parameters, composed of the overlaps between the teacher and the continuous/clipped students. Different scenarios are examined, among them on-line learning with discrete/continuous transfer functions and off-line Hebb learning. The generalization error of the clipped weights decays asymptotically as $\exp(-K\alpha^2)$ / $\exp(-e^{|\lambda|\alpha})$ in the case of on-line learning with binary/continuous activation functions, respectively, where $\alpha$ is the number of examples divided by $N$, the size of the input vector, and $K$ is a positive constant that decays linearly with $1/L$. For finite $N$ and $L$, perfect agreement between the discrete student and the teacher is obtained for $\alpha \propto \sqrt{L \ln(NL)}$. A crossover to the generalization error $\propto 1/\alpha$, characteristic of continuous weights with binary output, is obtained for synaptic depth $L > O(\sqrt{N})$. Comment: 10 pages, 5 figs., submitted to PR
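The train-continuous/predict-clipped scheme is easy to demonstrate in the simplest discrete case of a binary (±1) teacher with Hebbian learning (the sizes, the Gaussian inputs, and the sign-clipping rule are illustrative choices; the paper treats general $2L+1$-valued weights):

```python
import numpy as np

N, P = 50, 5000                            # input dimension, number of examples
rng = np.random.default_rng(2)
W = rng.choice([-1, 1], size=N)            # discrete teacher, simplest +-1 case

w = np.zeros(N)                            # continuous student
for _ in range(P):
    x = rng.standard_normal(N)
    w += x * np.sign(W @ x)                # Hebbian update on the teacher label

clipped = np.where(w >= 0, 1, -1)          # predict with the clipped weights
print(np.array_equal(clipped, W))          # expected: perfect agreement
```

Each Hebbian update has mean proportional to the teacher weight, so after enough examples the sign of every continuous weight matches the teacher and the clipped student agrees perfectly, the finite-$N$ regime described by the $\alpha \propto \sqrt{L \ln(NL)}$ result above.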

    Performance enhancement of downstream vertical-axis wind turbines

    Increased power production is observed in downstream vertical-axis wind turbines (VAWTs) when positioned offset from the wake of upstream turbines. This effect is found to exist in both laboratory and field environments, with pairs of co-rotating and counter-rotating turbines, respectively. It is hypothesized that the observed production enhancement is due to flow acceleration adjacent to the upstream turbine caused by bluff-body blockage, which would increase the incident freestream velocity on appropriately positioned downstream turbines. A low-order model combining potential flow and actuator disk theory captures this effect. Additional laboratory and field experiments further validate the predictive capabilities of the model. Finally, an evolutionary algorithm reveals patterns in optimized VAWT arrays with various numbers of turbines. A “truss-shaped” array is identified as a promising configuration to optimize energy extraction in VAWT wind farms by maximizing the performance enhancement of downstream turbines.
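The blockage mechanism can be illustrated with textbook potential flow (this is not the paper's calibrated model: here the upstream turbine is idealized as a solid cylinder of radius R, and the downstream turbine's power is taken to scale with the cube of its incident speed, as in actuator-disk theory):

```python
def speedup(y, R=1.0):
    """Potential-flow speed beside a solid cylinder of radius R, at (0, y):
    u/U_inf = 1 + R**2 / y**2, from the complex potential U*(z + R**2/z)
    evaluated at z = i*y."""
    return 1.0 + R**2 / y**2

def power_ratio(y, R=1.0):
    """Actuator-disk power scales with the cube of the incident speed."""
    return speedup(y, R) ** 3

for y in (1.5, 2.0, 3.0):
    print(y, round(speedup(y), 3), round(power_ratio(y), 3))
# e.g. at y = 2R the local speedup is 1.25, giving roughly twice the power
# of an unblocked turbine at the same operating point
```

Even a modest 25% local speedup cubes into a near-doubling of available power, which is why small lateral offsets from the upstream wake can pay off so strongly.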

    Multi-Choice Minority Game

    The generalization of the problem of adaptive competition, known as the minority game, to the case of $K$ possible choices for each player is addressed and applied to a system of interacting perceptrons with input and output units of the $K$-state Potts-spin type. An optimal solution of this minority game, as well as the dynamic evolution of the players' adaptive strategies, is solved analytically for general $K$ and compared with numerical simulations. Comment: 5 pages, 2 figures, reorganized and clarified

    Nonlocal mechanism for cluster synchronization in neural circuits

    The interplay between the topology of cortical circuits and synchronized activity modes in distinct cortical areas is a key enigma in neuroscience. We present a new nonlocal mechanism governing the periodic activity mode: the greatest common divisor (GCD) of network loops. For a stimulus to one node, the network splits into GCD-clusters in which cluster neurons are in zero-lag synchronization. For complex external stimuli, the number of clusters can be any common divisor. The synchronized mode and the transients to synchronization pinpoint the type of external stimuli. The findings, supported by an information mixing argument and simulations of Hodgkin-Huxley population dynamic networks with unidirectional connectivity and synaptic noise, call for reexamining sources of correlated activity in cortex and shorter information processing time scales. Comment: 8 pages, 6 figures
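The GCD prediction itself is purely graph-theoretic and can be sketched directly (the toy circuit below, two directed loops of lengths 9 and 12 sharing one node, is an invented example; cluster membership is assigned by BFS depth from the stimulated node modulo the GCD):

```python
from math import gcd
from functools import reduce
from collections import deque

def gcd_clusters(adj, loops, stimulus):
    """Given a directed circuit, its loop lengths, and a stimulated node,
    predict the number of zero-lag clusters (the GCD of the loop lengths)
    and assign each reachable node to cluster (BFS depth mod GCD)."""
    g = reduce(gcd, loops)
    depth = {stimulus: 0}
    q = deque([stimulus])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in depth:
                depth[v] = depth[u] + 1
                q.append(v)
    return g, {v: d % g for v, d in depth.items()}

adj = {i: [(i + 1) % 9] for i in range(9)}   # directed loop of length 9 on 0..8
adj[0].append(9)                             # second loop 0 -> 9 -> ... -> 19 -> 0
for i in range(9, 19):
    adj[i] = [i + 1]
adj[19] = [0]

g, clusters = gcd_clusters(adj, loops=[9, 12], stimulus=0)
print(g)            # gcd(9, 12) = 3 zero-lag clusters
```

Nodes whose depths agree modulo the GCD fire in zero-lag synchrony; changing either loop length to make the GCD 1 collapses the network into a single cluster.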

    Secure exchange of information by synchronization of neural networks

    A connection between the theory of neural networks and cryptography is presented. A new phenomenon, the synchronization of neural networks, leads to a new method for exchanging secret messages. Numerical simulations show that two artificial networks trained by a Hebbian learning rule on their mutual outputs develop an antiparallel state of their synaptic weights. The synchronized weights are used to construct an ephemeral key-exchange protocol for the secure transmission of secret data. It is shown that an opponent who knows the protocol and all details of any transmission of the data has no chance to decrypt the secret message, since tracking the weights is a hard problem compared to synchronization. The complexity of the generation of the secure channel is linear in the size of the network. Comment: 11 pages, 5 figures
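Once the weights have synchronized, turning them into a usable session key is straightforward. A hedged sketch of one common post-processing step (hashing the weights is an illustrative choice, not specified by the abstract; since the paper reports an antiparallel final state, one party would negate its weights before hashing):

```python
import hashlib
import numpy as np

def derive_key(weights, nbytes=16):
    """Hash a synchronized weight matrix into a fixed-length session key.
    Both parties must serialize identical arrays to obtain the same key."""
    return hashlib.sha256(np.ascontiguousarray(weights).tobytes()).digest()[:nbytes]

wA = np.array([[2, -1, 3], [0, 1, -3]])     # party A after synchronization
wB = wA.copy()                              # party B (negated if antiparallel)
wE = wA.copy(); wE[0, 0] += 1               # attacker off by a single weight

print(derive_key(wA) == derive_key(wB))     # shared secret key
print(derive_key(wA) == derive_key(wE))     # attacker's key differs
```

Hashing ensures that an attacker who gets most but not all weights right, the typical outcome since tracking is much harder than synchronizing, still ends up with a completely different key.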