Distinct Visual Working Memory Systems for View-Dependent and View-Invariant Representation
Background: How do people sustain a visual representation of the environment? Currently, many researchers argue that a single visual working memory system sustains non-spatial object information such as colors and shapes. However, previous studies tested visual working memory for two-dimensional objects only. Consequently, the nature of visual working memory for three-dimensional (3D) object representation remains unknown. Methodology/Principal Findings: Here, I show that when sustaining information about 3D objects, visual working memory clearly divides into two separate, specialized memory systems, rather than one system, as was previously thought. One memory system gradually accumulates sensory information, forming an increasingly precise view-dependent representation of the scene over the course of several seconds. A second memory system sustains view-invariant representations of 3D objects. The view-dependent memory system has a storage capacity of 3–4 representations and the view-invariant memory system has a storage capacity of 1–2 representations. These systems can operate independently from one another and do not compete for working memory storage resources. Conclusions/Significance: These results provide evidence that visual working memory sustains object information in two separate, specialized memory systems. One memory system sustains view-dependent representations of the scene, akin to the view-specific representations that guide place recognition during navigation in humans, rodents and insects. …
Using the robot operating system for biomimetic research
Biomimetics seeks to reveal the methods by which natural systems solve complex tasks and to abstract principles for the development of novel technological solutions. If these outcomes are either to explain behaviour or to be applied in commercial settings, they must be verified on robot platforms in natural environments. Yet the development and testing of hypotheses on real robots remains sufficiently challenging for many in this highly cross-disciplinary research field that it is often omitted from biomimetic studies. Here we ..
Generation of stable heading representations in diverse visual scenes
Many animals rely on an internal heading representation when navigating in varied environments [1-10]. How this representation is linked to the sensory cues that define different surroundings is unclear. In the fly brain, heading is represented by 'compass' neurons that innervate a ring-shaped structure known as the ellipsoid body [3,11,12]. Each compass neuron receives inputs from 'ring' neurons that are selective for particular visual features [13-16]; this combination provides an ideal substrate for the extraction of directional information from a visual scene. Here we combine two-photon calcium imaging and optogenetics in tethered flying flies with circuit modelling, and show how the correlated activity of compass and visual neurons drives plasticity [17-22], which flexibly transforms two-dimensional visual cues into a stable heading representation. We also describe how this plasticity enables the fly to convert a partial heading representation, established from orienting within part of a novel setting, into a complete heading representation. Our results provide mechanistic insight into the memory-related computations that are essential for flexible navigation in varied surroundings.
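The core idea of the abstract above — correlated compass/visual activity binding scene features to headings — can be illustrated with a toy Hebbian sketch. This is not the circuit model from the paper: the ring-neuron tuning, the one-hot compass "bump", and the learning rate and decay constants are all invented for illustration.

```python
import numpy as np

n_heading, n_visual = 8, 16

# Hypothetical tuning: each visual "ring neuron" prefers one heading
# (two features per heading; a stand-in for real scene-specific cues)
prefs = np.arange(n_visual) % n_heading

def visual_activity(heading):
    """Ring-neuron responses for the view at a given heading."""
    return (prefs == heading).astype(float)

def compass_activity(heading):
    """One-hot 'bump' of compass-neuron activity at the current heading."""
    a = np.zeros(n_heading)
    a[heading] = 1.0
    return a

# Hebbian rule: correlated compass/visual firing strengthens the mapping;
# a slow decay keeps weights bounded (rates are invented, not fitted)
W = np.zeros((n_heading, n_visual))
lr, decay = 0.5, 0.05
for _ in range(5):                      # the fly rotates through the scene
    for h in range(n_heading):
        W += lr * np.outer(compass_activity(h), visual_activity(h)) - decay * W

# After learning, visual input alone recovers the heading
decoded = [int(np.argmax(W @ visual_activity(h))) for h in range(n_heading)]
# decoded == [0, 1, 2, 3, 4, 5, 6, 7]
```

Because weights are only potentiated where compass and visual activity coincide, headings learned from part of the scene leave the rest of `W` untouched — loosely analogous to how a partial heading representation can later be completed.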
