322 research outputs found
Recurrent Segmentation for Variable Computational Budgets
State-of-the-art systems for semantic image segmentation use feed-forward
pipelines with fixed computational costs. Building an image segmentation system
that works across a range of computational budgets is challenging and
time-intensive as new architectures must be designed and trained for every
computational setting. To address this problem, we develop a recurrent neural
network that successively improves prediction quality with each iteration.
Importantly, the RNN may be deployed across a range of computational budgets by
merely running the model for a variable number of iterations. We find that this
architecture is uniquely suited for efficiently segmenting videos. By
exploiting the segmentation of past frames, the RNN can perform video
segmentation at similar quality but reduced computational cost compared to
state-of-the-art image segmentation methods. When applied to static images in
the PASCAL VOC 2012 and Cityscapes segmentation datasets, the RNN traces out a
speed-accuracy curve that saturates near the performance of state-of-the-art
segmentation methods.
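To make the iteration-as-budget idea concrete, here is a minimal sketch, not the authors' released model: the convolutional GRU-style cell, layer sizes, and class names are illustrative assumptions. The compute budget is chosen at inference time via the number of refinement iterations, and the hidden state can be carried across video frames to amortize work.

```python
# A hedged sketch of iterative segmentation refinement (illustrative only).
import torch
import torch.nn as nn

class RecurrentSegmenter(nn.Module):
    def __init__(self, in_ch=3, hid_ch=32, num_classes=21):
        super().__init__()
        self.encode = nn.Conv2d(in_ch, hid_ch, 3, padding=1)
        # Convolutional GRU-style gates over the hidden segmentation state.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
        self.readout = nn.Conv2d(hid_ch, num_classes, 1)

    def forward(self, image, num_iters=3, h=None):
        if h is None:                            # h may be warm-started from the
            h = torch.tanh(self.encode(image))   # previous video frame's state
        outputs = []
        for _ in range(num_iters):
            zr = torch.sigmoid(self.gates(torch.cat([image, h], dim=1)))
            z, r = zr.chunk(2, dim=1)
            h_tilde = torch.tanh(self.cand(torch.cat([image, r * h], dim=1)))
            h = (1 - z) * h + z * h_tilde
            outputs.append(self.readout(h))      # prediction refined per iteration
        return outputs, h

# Usage: more iterations -> higher quality at higher cost; for video, feed the
# returned hidden state into the call for the next frame.
model = RecurrentSegmenter()
logits_per_iter, state = model(torch.randn(1, 3, 64, 64), num_iters=2)
```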
Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control
It is widely accepted that the complex dynamics characteristic of recurrent
neural circuits contributes in a fundamental manner to brain function. Progress
has been slow in understanding and exploiting the computational power of
recurrent dynamics for two main reasons: nonlinear recurrent networks often
exhibit chaotic behavior, and most known learning rules do not work in a robust
fashion in recurrent networks. Here we address both of these problems by
demonstrating how random recurrent networks (RRNs) that initially exhibit
chaotic dynamics can be tuned through a supervised learning rule to generate
locally stable neural patterns of activity that are both complex and robust to
noise. The outcome is a novel neural network regime that exhibits both
transiently stable and chaotic trajectories. We further show that the recurrent
learning rule dramatically increases the ability of RRNs to generate complex
spatiotemporal motor patterns, and accounts for recent experimental data
showing a decrease in neural variability in response to stimulus onset.
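As a rough illustration of this kind of rule, the sketch below applies FORCE-style recursive least squares to the recurrent weights of a rate network so that it reproduces a trajectory recorded from its own initially chaotic dynamics; the network size, gain, time constants, and number of training passes are assumptions, not the paper's parameters.

```python
# Hedged sketch: stabilizing an "innate" trajectory of a chaotic rate network.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, tau = 200, 1.5, 1e-3, 10e-3
W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # gain g > 1: chaotic regime

def run(W, x0, steps, target=None, P=None):
    """Simulate the rate network; if a target and RLS matrix P are given,
    nudge the recurrent weights toward the target rates at each step."""
    x, rates = x0.copy(), []
    for t in range(steps):
        r = np.tanh(x)
        x = x + dt / tau * (-x + W @ r)
        if target is not None:
            e = np.tanh(x) - target[t]     # per-unit error vs. innate target
            Pr = P @ r
            k = Pr / (1.0 + r @ Pr)
            P -= np.outer(k, Pr)           # RLS update of inverse correlation
            W -= np.outer(e, k)            # row i of W adjusted by e_i * k
        rates.append(np.tanh(x))
    return np.array(rates), W, P

steps = 1000
x0 = rng.standard_normal(N)
target, _, _ = run(W, x0, steps)           # record the network's own pattern

# Train from perturbed initial states so that the recorded trajectory becomes
# locally stable (robust to noise) rather than chaotic.
P = np.eye(N)
for _ in range(10):
    x_start = x0 + 0.01 * rng.standard_normal(N)
    _, W, P = run(W, x_start, steps, target=target, P=P)

test, _, _ = run(W, x0 + 0.01 * rng.standard_normal(N), steps)
print("mean |rate - target| after training:", np.abs(test - target).mean())
```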
Cortical Variability and Challenges for Modeling Approaches.
The functional role of the observed neuronal variability (the disparity in neural responses across multiple instances of the same experiment) is again receiving close attention in Computational and Systems Neuroscience (e.g., Durstewitz et al., 2010; Moreno-Bote et al., 2011; Oram, 2011; Beck et al., 2012; Churchland and Abbott, 2012; Brunton et al., 2013; Masquelier, 2013; Mattia et al., 2013; Balaguer-Ballester et al., 2014; Renart and Machens, 2014; Bujan et al., 2015; Lin et al., 2015; Pachitariu et al., 2015; Arandia-Romero et al., 2016; Doiron et al., 2016; McDonnell et al., 2016). Special consideration is currently given to understanding how spiking (Bujan et al., 2015; Deneve and Machens, 2016; Doiron et al., 2016; Hartmann et al., 2016; Landau et al., 2016) and phenomenological (Goris et al., 2014; Lin et al., 2015; Mochol et al., 2015; Arandia-Romero et al., 2016; Doiron et al., 2016) models account for the wide range of classical and new phenomena associated with trial-to-trial uncorrelated activity.
Specifically, it has often been proposed that a network state characterized by largely asynchronous spike times, while maintaining slow oscillations in the firing rates, may represent the default spontaneous cortical mode (e.g., Sanchez-Vives and Mattia, 2014; Deneve and Machens, 2016; Sancristobal et al., 2016); similar states could also underlie the observed stimulus-driven variability in firing rates (Litwin-Kumar and Doiron, 2012; Deneve and Machens, 2016; Hartmann et al., 2016). However, the way in which such a computationally advantageous network state for neural coding is achieved can differ substantially between modeling approaches; this challenge will be the focus of this manuscript.
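One concrete member of the phenomenological class mentioned above is a doubly stochastic (rate-modulated Poisson) description in the spirit of Goris et al. (2014): slow trial-to-trial gain fluctuations multiply an otherwise fixed firing rate and inflate spike-count variance above the Poisson level. The snippet below is only an illustration with made-up parameters, not a re-implementation of any cited model.

```python
# Illustrative only: spike counts from a plain Poisson process vs. a Poisson
# process whose rate carries a slow, trial-to-trial gain fluctuation. The
# latter yields Fano factors > 1, the signature of excess trial-to-trial
# variability discussed above. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_trials, rate, window = 2000, 20.0, 0.5                   # trials, Hz, seconds

poisson_counts = rng.poisson(rate * window, size=n_trials)

gain = rng.gamma(shape=4.0, scale=0.25, size=n_trials)      # mean-1 gain per trial
modulated_counts = rng.poisson(gain * rate * window)

def fano(counts):
    return counts.var() / counts.mean()

print("Poisson Fano factor:       ", round(fano(poisson_counts), 2))    # ~1
print("Rate-modulated Fano factor:", round(fano(modulated_counts), 2))  # >1
```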
Towards learning inverse kinematics with a neural network based tracking controller
Learning an inverse kinematic model of a robot is a well-studied subject. However, achieving this without information about the geometric characteristics of the robot is less investigated. In this work, a novel control approach based on a recurrent neural network is presented. Without any prior knowledge about the robot, this control strategy learns to control the iCub robot's arm online by solving the inverse kinematic problem in its control region. Because of its exploration strategy, the robot starts to learn by generating and observing random motor behavior. The modulation and generalization capabilities of this approach are investigated as well.
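A minimal sketch of the general idea of learning an inverse kinematic map purely from random motor exploration follows; this is not the paper's recurrent controller: the planar two-link arm, the random-feature ridge regressor, and all parameters are stand-in assumptions.

```python
# Hedged sketch: inverse kinematics from motor babbling, with no geometric
# knowledge given to the learner itself.
import numpy as np

rng = np.random.default_rng(1)
ARM_L1, ARM_L2 = 0.3, 0.25               # geometry used only to *simulate* the arm

def observe_hand(q):                     # stand-in for observing the real robot
    x = ARM_L1 * np.cos(q[:, 0]) + ARM_L2 * np.cos(q[:, 0] + q[:, 1])
    y = ARM_L1 * np.sin(q[:, 0]) + ARM_L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# 1. Exploration ("motor babbling"): random joint commands, observed positions.
q_babble = rng.uniform([0.1, 0.1], [np.pi / 2, np.pi / 2], size=(5000, 2))
xy_babble = observe_hand(q_babble)

# 2. Fit an inverse model xy -> q with ridge regression on random tanh features
#    (a crude stand-in for a learned recurrent network).
F = 3.0 * rng.standard_normal((2, 300))
b = rng.uniform(-1, 1, 300)

def feats(xy):
    return np.tanh(xy @ F + b)

Phi = feats(xy_babble)
Wout = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(300), Phi.T @ q_babble)

# 3. Command a reachable target, here generated from a held-out joint pose.
target = observe_hand(np.array([[0.3, 0.8]]))
q_cmd = feats(target) @ Wout
print("target:", target[0], "reached:", observe_hand(q_cmd)[0])
```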
Are task representations gated in macaque prefrontal cortex?
A recent paper (Flesch et al., 2022) describes behavioural and neural data
suggesting that task representations are gated in the prefrontal cortex in both
humans and macaques. This short note proposes an alternative explanation for
the reported results from the macaque data.
- …
