Optimal cooperation-trap strategies for the iterated Rock-Paper-Scissors game
In an iterated non-cooperative game, if all players act to maximize their
individual accumulated payoffs, the system as a whole usually converges to a
Nash equilibrium that benefits no player well. Here we show that such an
undesirable fate is avoidable in an iterated Rock-Paper-Scissors (RPS) game
involving two players, X and Y. Player X has the option of proactively adopting
a cooperation-trap strategy, which enforces complete cooperation from the
rational player Y and leads to a highly beneficial and maximally fair
outcome for both players. That a maximal degree of cooperation is achievable in
such a competitive system with cyclic dominance of actions may stimulate
creative thinking on how to resolve conflicts and enhance cooperation in human
societies.
Comment: 5 pages including 3 figures
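To make the "Nash equilibrium that benefits no player well" concrete, the sketch below uses the standard zero-sum RPS payoff convention (+1 win, 0 tie, -1 loss), which is an assumption for illustration; the paper's actual payoff matrix may differ. Against the uniform mixed strategy (the mixed Nash equilibrium of zero-sum RPS), every action earns expected payoff 0, so neither player can do better than breaking even at equilibrium.

```python
# Payoff to the row player in one RPS round: +1 win, 0 tie, -1 loss.
# (A standard zero-sum convention; the paper's payoff values may differ.)
ACTIONS = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(x, y):
    """Payoff to player X when X plays x and Y plays y."""
    if x == y:
        return 0
    return 1 if BEATS[x] == y else -1

# Against a uniformly mixing opponent, each action's expected payoff is 0:
for x in ACTIONS:
    expected = sum(payoff(x, y) for y in ACTIONS) / len(ACTIONS)
    print(x, expected)  # each action -> 0.0
```

Since no pure action beats the uniform mix, the uniform strategy is a best response to itself, which is exactly why purely payoff-maximizing play gets stuck at this poorly paying equilibrium.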
Partition Function Expansion on Region-Graphs and Message-Passing Equations
Disordered and frustrated graphical systems are ubiquitous in physics,
biology, and information science. For models on complete graphs or random
graphs, deep understanding has been achieved through the mean-field replica and
cavity methods. But finite-dimensional 'real' systems remain very
challenging because of the abundance of short loops and strong local
correlations. In this paper, a statistical mechanics theory is constructed for
finite-dimensional models based on the mathematical framework of partition
function expansion and the concept of region-graphs. Rigorous expressions for
the free energy and grand free energy are derived. Message-passing equations on
the region-graph, such as belief propagation and survey propagation, are also
derived rigorously.
Comment: 10 pages including two figures. New theoretical and numerical results added. Will be published by JSTAT as a letter.
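As a toy anchor for the quantities in the abstract (not the region-graph expansion itself), the sketch below computes the partition function Z and free energy F = -(1/beta) ln Z of a small open-boundary 1D Ising chain by brute-force enumeration, and checks it against the known closed form Z = 2^n cosh(beta J)^(n-1) obtained by summing out spins one by one.

```python
import itertools
import math

def partition_function(n, J=1.0, beta=1.0):
    """Brute-force Z = sum_s exp(-beta * E(s)) for an open 1D Ising chain,
    E(s) = -J * sum_i s_i * s_{i+1}, s_i in {-1, +1}."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy = -J * sum(spins[i] * spins[i + 1] for i in range(n - 1))
        Z += math.exp(-beta * energy)
    return Z

n, J, beta = 6, 1.0, 0.5
Z = partition_function(n, J, beta)
# Exact result for the open chain (sum out the end spin repeatedly):
Z_exact = 2**n * math.cosh(beta * J)**(n - 1)
F = -math.log(Z) / beta  # free energy
print(Z, Z_exact, F)
```

Brute force is only feasible for tiny n; the point of methods like the partition function expansion is precisely to obtain F for large finite-dimensional systems where this enumeration is impossible.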
Understanding the computation of time using neural network models
To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We revealed that neural networks perceive elapsed time through state evolution along stereotypical trajectories, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and nontemporal information is coded in subspaces orthogonal to each other, and the state trajectories over time for different nontemporal information are quasiparallel and isomorphic. Such coding geometry facilitates the generalizability of decoding temporal and nontemporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit each other depending on whether their preferences for nontemporal information are similar. We identified four factors that facilitate strong temporal signals in nontiming tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing; it is supported by, and makes predictions about, a number of experimental phenomena.
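A minimal caricature of "producing intervals by scaling state evolution speed" (a single ramping unit, not the paper's trained RNN; the names and threshold are illustrative): the rate of an interval-tuned unit climbs monotonically, the interval ends when the rate crosses a fixed threshold, so the produced duration is inversely proportional to the ramp speed.

```python
def produce_interval(speed, threshold=1.0, dt=0.001):
    """Toy ramping unit: firing rate climbs at `speed` per second;
    an interval is 'produced' when the rate crosses `threshold`."""
    rate, t = 0.0, 0.0
    while rate < threshold:
        rate += speed * dt  # monotonic ramp, as in interval-tuned neurons
        t += dt
    return t

# Halving the ramp speed doubles the produced interval.
t_fast = produce_interval(speed=2.0)
t_slow = produce_interval(speed=1.0)
print(t_fast, t_slow)  # ~0.5 s and ~1.0 s
```

The same mechanism read in reverse gives interval comparison: for a fixed elapsed time, a faster ramp reaches a higher state, so relative speed orders the intervals.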
Top-down generation of low-resolution representations improves visual perception and imagination
Perception or imagination requires top-down signals from high-level cortex to primary visual cortex (V1) to reconstruct or simulate the representations stimulated bottom-up by seen images. Interestingly, top-down signals in V1 have lower spatial resolution than bottom-up representations. It is unclear why the brain uses low-resolution signals to reconstruct or simulate high-resolution representations. By modeling the top-down pathway of the visual system with the decoder of a variational auto-encoder (VAE), we reveal that low-resolution top-down signals can better reconstruct or simulate the information contained in the sparse activities of V1 simple cells, which facilitates perception and imagination. This advantage of low-resolution generation is related to helping high-level cortex form the geometry-respecting representations observed in experiments. Moreover, our finding inspires a simple artificial-intelligence (AI) technique to significantly improve the generation quality and diversity of sketches, a style of drawing made of thin lines. Specifically, instead of directly using original sketches, we use blurred sketches to train a VAE or GAN (generative adversarial network), and then infer the thin-line sketches from the VAE- or GAN-generated blurred sketches. Collectively, our work suggests that low-resolution top-down generation is a strategy the brain uses to improve visual perception and imagination, and it advances sketch-generation AI techniques.
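The blur-then-recover idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: `blur` is a small separable smoothing convolution standing in for the low-resolution representation, and `recover_lines` uses simple thresholding as a stand-in for the inference step that maps a blurred sketch back to thin lines (the paper infers via a VAE/GAN, not a threshold).

```python
import numpy as np

def blur(img, kernel=(0.25, 0.5, 0.25)):
    """Separable low-pass blur: convolve rows, then columns (zero-padded)."""
    k = np.asarray(kernel)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def recover_lines(blurred, thresh=0.3):
    """Toy stand-in for inferring thin lines back from a blurred sketch."""
    return (blurred > thresh).astype(float)

sketch = np.zeros((7, 7))
sketch[3, :] = 1.0            # a single horizontal thin line
blurred = blur(sketch)        # line energy spreads to neighboring rows
recovered = recover_lines(blurred)
print(np.array_equal(recovered, sketch))  # True: the thin line is recovered
```

The point of the toy: the blurred image carries less high-frequency detail, yet the thin-line structure is still recoverable from it, which is the property the blur-train-then-infer pipeline exploits.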
Testing and Understanding Second-Order Statistics of Spike Patterns Using Spike Shuffling Methods
Spike Pattern Structure Influences Synaptic Efficacy Variability Under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks
Synapses may undergo variable changes during plasticity because of the variability of spike patterns, such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing-dependent plasticity (STDP) and synaptic homeostasis (the mean strength of the plastic synapses into a neuron is bounded), by applying spike shuffling methods to spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. As the decay time scale of the inhibitory synaptic currents increases, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to markedly change a specific pattern statistic, and then investigate the change in efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after each shuffling method. In this way, we can understand how a change in pattern statistics causes a change in efficacy variability. Our results are consistent with those of our previous study, which implemented spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).
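To illustrate what a spike shuffle does in general (this is one generic shuffle for illustration, not necessarily one of the paper's specific methods): redrawing each neuron's spike times uniformly over the recording preserves per-neuron spike counts, and hence mean firing rates, while destroying synchrony and cross-correlations between neurons.

```python
import random

def shuffle_spikes(spike_trains, duration, seed=0):
    """Redraw each neuron's spike times uniformly over [0, duration].
    Preserves per-neuron spike counts (rates); destroys temporal structure
    within and across trains."""
    rng = random.Random(seed)
    return [sorted(rng.uniform(0.0, duration) for _ in train)
            for train in spike_trains]

# Two neurons with strongly synchronized spikes (times in seconds):
trains = [[0.1, 0.1, 0.5, 0.9], [0.1, 0.5, 0.9]]
shuffled = shuffle_spikes(trains, duration=1.0)
print([len(t) for t in shuffled])  # [4, 3] -- spike counts unchanged
```

Comparing efficacy variability before and after such a shuffle isolates the contribution of the statistic the shuffle destroys (here, cross-correlations) from the statistics it preserves (here, rates), which is the logic of the shuffling approach described above.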
