165 research outputs found
Working memory replay prioritizes weakly attended events
One view of working memory posits that maintaining a series of events requires their sequential and equal mnemonic replay. Another view is that the content of working memory maintenance is prioritized by attention. We decoded the dynamics of retaining a sequence of items using magnetoencephalography: participants encoded sequences of three stimuli depicting a face, a manufactured object, or a natural item and maintained them in working memory for 5000 ms. Memory for sequence position and stimulus details was probed at the end of the maintenance period. Decoding of brain activity revealed that one of the three stimuli dominated maintenance, independent of its sequence position or category, and that memory was enhanced for the selectively replayed stimulus. Analysis of event-related responses during encoding showed that the degree of attention at encoding determined which stimulus was selectively replayed: the replayed stimuli were those with the weakest initial encoding, indexed by weaker visual attention signals. These findings do not rule out sequential mnemonic replay, but they reveal that attention influences the content of working memory maintenance by prioritizing replay of weakly encoded events. We propose that the prioritization of weakly encoded stimuli protects them from interference during the maintenance period, whereas the more strongly encoded stimuli can be retrieved from long-term memory at the end of the delay period.
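As a concrete illustration of time-resolved decoding of the kind described above, here is a minimal sketch in Python: one classifier is fit per timepoint of sensor data, and above-chance accuracy during the delay would indicate which stimulus representation dominates maintenance. All array shapes, labels, and data are randomly generated stand-ins, not values or code from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 64, 50               # hypothetical dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))  # MEG epochs (trials x sensors x time)
y = rng.integers(0, 3, n_trials)                         # face / object / natural item

# Fit one cross-validated classifier per timepoint.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(n_times)])
print(accuracy.round(2))   # chance level here is ~0.33
```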
Stochastic attractor models of visual working memory
This paper investigates models of working memory in which memory traces evolve according to stochastic attractor dynamics. These models have previously been shown to account for response biases that are manifest across multiple trials of a visual working memory task. Here we adapt this approach by making the stable fixed points correspond to the multiple items to be remembered within a single trial, in accordance with standard dynamical perspectives of memory, and find evidence that this multi-item model can provide a better account of behavioural data from continuous-report tasks. Additionally, the multi-item model proposes a simple mechanism by which swap errors arise: memory traces diffuse away from their initial state and are captured by the attractors of other items. Swap-error curves reveal the evolution of this process as a continuous function of time throughout the maintenance interval and can be inferred from experimental data. Consistent with previous findings, we find that empirical memory performance is not well characterised by a purely diffusive process but rather by a stochastic process that also embodies error-correcting dynamics.
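The swap-error mechanism described above lends itself to a compact simulation. The sketch below is an assumed, illustrative form of the dynamics, not the authors' model: a one-dimensional memory trace follows Euler-Maruyama integration of dx = -U'(x) dt + sigma dW with one attractor per remembered item, and a swap error is scored when the trace ends nearest a non-target item.

```python
import numpy as np

items = np.array([-1.0, 0.2, 1.3])   # remembered feature values, one attractor each

def drift(x):
    # Pull each trace toward its nearest attractor (illustrative potential).
    nearest = items[np.argmin(np.abs(x[:, None] - items[None, :]), axis=1)]
    return -(x - nearest)

rng = np.random.default_rng(1)
sigma, dt, n_steps, n_trials = 0.5, 0.01, 3000, 2000
x = np.full(n_trials, items[0])      # all traces start at the cued (target) item
for _ in range(n_steps):
    x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)

# A swap error: the trace was captured by another item's attractor.
final = items[np.argmin(np.abs(x[:, None] - items[None, :]), axis=1)]
print("swap-error rate:", np.mean(final != items[0]))
```

Sweeping `n_steps` traces out a swap-error curve as a continuous function of maintenance time, the quantity the paper infers from data.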
Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
Probabilistic models of cognition typically assume that agents make inferences about current states by combining new sensory information with fixed beliefs about the past, an approach known as Bayesian filtering. This is computationally parsimonious but, in general, leads to suboptimal beliefs about past states, since it ignores the fact that new observations typically contain information about the past as well as the present. This is disadvantageous both because knowledge of past states may be intrinsically valuable and because it impairs learning about fixed or slowly changing parameters of the environment. For these reasons, in offline data analysis it is usual to perform inference on every set of states using the entire time series of observations, an approach known as (fixed-interval) Bayesian smoothing. Unfortunately, however, this is impractical for real agents, since it requires the maintenance and updating of beliefs about an ever-growing set of states. We propose an intermediate approach, finite retrospective inference (FRI), in which agents update beliefs about a limited number of past states (formally, this represents online fixed-lag smoothing with a sliding window). This can be seen as a form of bounded rationality in which agents seek to optimize the accuracy of their beliefs subject to computational and other resource costs. We show through simulation that this approach has the capacity to significantly increase the accuracy of both inference and learning, using a simple variational scheme applied to randomly generated Hidden Markov models (HMMs) and to a specific application of the HMM, in the form of the widely used probabilistic reversal task. Our proposal thus constitutes a theoretical contribution to normative accounts of bounded rationality and makes testable empirical predictions that can be explored in future work.
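Below is a minimal sketch of the fixed-lag smoothing that FRI formalizes, for a two-state HMM: at each step, beliefs about the last `lag` states are re-inferred from the observations currently in a sliding window, while observations leaving the window are folded into a fixed "entry" belief. Exact forward-backward recursions are used here in place of the paper's variational scheme, and all matrices are illustrative.

```python
import numpy as np

A = np.array([[0.95, 0.05], [0.05, 0.95]])   # state transition matrix
B = np.array([[0.8, 0.2], [0.2, 0.8]])       # emission matrix (rows: states)
prior = np.array([0.5, 0.5])                 # belief about the pre-window state

def fixed_lag_smooth(obs, lag):
    """Smoothed marginals over (at most) the last `lag` states after each observation."""
    entry, window, results = prior.copy(), [], []
    for o in obs:
        window.append(o)
        if len(window) > lag:
            # Slide the window: absorb the departing observation into the entry belief.
            old = window.pop(0)
            entry = (entry @ A) * B[:, old]
            entry /= entry.sum()
        # Forward pass over the window.
        alphas, a = [], entry
        for w in window:
            a = (a @ A) * B[:, w]
            a /= a.sum()
            alphas.append(a)
        # Backward pass over the window.
        beta, smoothed = np.ones(2), [None] * len(window)
        for i in range(len(window) - 1, -1, -1):
            g = alphas[i] * beta
            smoothed[i] = g / g.sum()
            beta = A @ (B[:, window[i]] * beta)
        results.append(smoothed)
    return results

obs = np.random.default_rng(2).integers(0, 2, size=30)
beliefs = fixed_lag_smooth(obs, lag=5)
print(beliefs[-1][0])   # retrospective belief about the state 5 steps in the past
```

The memory cost is fixed by `lag` rather than growing with the length of the time series, which is the resource trade-off the paper formalizes.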
How do neural processes give rise to cognition? Simultaneously predicting brain and behavior with a dynamic model of visual working memory
There is consensus that activation within distributed functional brain networks underlies human thought. The impact of this consensus is limited, however, by a gap between data-driven correlational analyses that specify where functional brain activity is localized using functional magnetic resonance imaging (fMRI) and neural process accounts that specify how neural activity unfolds through time to give rise to behavior. Here, we show how an integrative cognitive neuroscience approach may bridge this gap. In an exemplary study of visual working memory, we use multilevel Bayesian statistics to demonstrate that a neural dynamic model simultaneously explains behavioral data and predicts localized patterns of brain activity, outperforming standard analytic approaches to fMRI. The model explains performance on both correct trials and incorrect trials, where errors in change detection emerge from neural fluctuations amplified by neural interaction. Critically, predictions of the model run counter to cognitive theories of the origin of errors in change detection. Results reveal neural patterns predicted by the model within regions of the dorsal attention network that have been the focus of much debate. The model-based analysis suggests that key areas in the dorsal attention network, such as the intraparietal sulcus, play a central role in change detection rather than working memory maintenance, counter to previous interpretations of fMRI studies. More generally, the integrative cognitive neuroscience approach used here establishes a framework for directly testing theories of cognitive and brain function using the combined power of behavioral and fMRI data.
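One common way to link a dynamic model's simulated activity to fMRI, a standard assumption in model-based fMRI generally rather than necessarily the authors' exact pipeline, is to convolve the simulated neural timecourse with a canonical hemodynamic response function and enter the result as a regressor. A minimal sketch, with an entirely hypothetical activation timecourse:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                                         # seconds per simulation step
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6     # canonical double-gamma HRF
hrf /= hrf.sum()

# Hypothetical model activation: a field that stays active during maintenance.
activation = np.zeros(600)
activation[100:400] = 1.0

bold_prediction = np.convolve(activation, hrf)[:len(activation)]
# `bold_prediction` can now serve as a voxelwise regressor in a GLM and be
# compared against measured fMRI data.
```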
Learning words in space and time: Contrasting models of the suspicious coincidence effect
In their 2007b Psychological Review paper, Xu and Tenenbaum found that early word learning follows the classic logic of the "suspicious coincidence effect": when presented with a novel name ('fep') and three identical exemplars (three Labradors), word learners generalized novel names more narrowly than when presented with a single exemplar (one Labrador). Xu and Tenenbaum predicted the suspicious coincidence effect with a Bayesian model of word learning and demonstrated that no other theory captured this effect. Recent empirical studies have revealed, however, that the effect is influenced by factors seemingly outside the purview of the Bayesian account. A process-based perspective correctly predicted that when exemplars are shown sequentially, the effect is eliminated or reversed (Spencer, Perone, Smith, & Samuelson, 2011). Here, we present a new, formal account of the suspicious coincidence effect using a generalization of a Dynamic Neural Field (DNF) model of word learning. The DNF model captures both the original finding and its reversal with sequential presentation. We compare the DNF model's performance with that of a more flexible version of the Bayesian model that allows both strong and weak sampling assumptions. Model comparison results show that the dynamic field account provides a better fit to the empirical data. We discuss the implications of the DNF model with respect to broader contrasts between Bayesian and process-level models.
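The "strong sampling" assumption at the heart of the Bayesian account can be made concrete with a few lines of arithmetic. In the sketch below, hypothesis sizes and priors are illustrative numbers, not Xu and Tenenbaum's fitted values: under strong sampling, the likelihood of n independent examples drawn from hypothesis h is (1/|h|)^n, so three identical Labradors sharply favor the narrow hypothesis, whereas under weak sampling the likelihood is flat and no such narrowing occurs.

```python
# Relative extension sizes of nested hypotheses (illustrative values).
sizes = {"labrador": 1.0, "dog": 10.0, "animal": 100.0}
prior = {h: 1 / 3 for h in sizes}

def posterior(n_examples, strong=True):
    """Posterior over hypotheses after n Labrador exemplars."""
    like = {h: (1.0 / s) ** n_examples if strong else 1.0
            for h, s in sizes.items()}
    unnorm = {h: like[h] * prior[h] for h in sizes}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

print(posterior(1))   # one exemplar: broader hypotheses retain mass
print(posterior(3))   # three exemplars: "labrador" dominates (the suspicious coincidence)
```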
The Fourteenth Data Release of the Sloan Digital Sky Survey: First Spectroscopic Data from the extended Baryon Oscillation Spectroscopic Survey and from the second phase of the Apache Point Observatory Galactic Evolution Experiment
The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since July 2014. This paper describes the second data release from this phase, and the fourteenth from SDSS overall (making this Data Release Fourteen, or DR14). This release makes public data taken by SDSS-IV in its first two years of operation (July 2014-2016). Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey (eBOSS); the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS website (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.
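As one way to retrieve DR14 data programmatically, the community-maintained astroquery package can submit SQL queries to the SDSS SkyServer. This is a sketch rather than an official SDSS recipe, and the exact keyword arguments may differ across astroquery versions.

```python
from astroquery.sdss import SDSS

# Fetch a few spectroscopically confirmed quasars from DR14.
sql = """
SELECT TOP 5 ra, dec, z
FROM SpecObj
WHERE class = 'QSO' AND zWarning = 0
"""
result = SDSS.query_sql(sql, data_release=14)
print(result)
```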
An introduction to thermodynamic integration and application to dynamic causal models
In generative modeling of neuroimaging data, such as dynamic causal modeling (DCM), one typically considers several alternative models, either to determine the most plausible explanation for observed data (Bayesian model selection) or to account for model uncertainty (Bayesian model averaging). Both procedures rest on estimates of the model evidence, a principled trade-off between model accuracy and complexity. In the context of DCM, the log evidence is usually approximated using variational Bayes. Although this approach is highly efficient, it makes distributional assumptions and is vulnerable to local extrema. This paper introduces the use of thermodynamic integration (TI) for Bayesian model selection and averaging in the context of DCM. TI is based on Markov chain Monte Carlo sampling, which is asymptotically exact but orders of magnitude slower than variational Bayes. We explain the theoretical foundations of TI, covering key concepts such as the free energy, with the aim of conveying an in-depth understanding of the method starting from its historical origins in statistical physics. In addition, we demonstrate the practical application of TI via a series of examples which serve to guide the user in applying this method. These examples also demonstrate that, given an efficient implementation and hardware capable of parallel processing, the challenge of high computational demand can be overcome. The TI implementation presented in this paper is freely available as part of the open-source software TAPAS.
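To make the method concrete, here is a minimal TI sketch on a toy conjugate Gaussian model where the exact log evidence is known in closed form; it is not the TAPAS/DCM implementation. Power posteriors p_beta(theta|y) proportional to p(y|theta)^beta p(theta) are sampled with a simple Metropolis scheme, and log Z = integral from 0 to 1 of E_beta[log p(y|theta)] d(beta) is approximated with the trapezoidal rule over a temperature schedule.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(1.0, 1.0, size=20)     # toy data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
n = len(y)

def log_lik(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * n * np.log(2 * np.pi)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

def sample_power_posterior(beta, n_iter=5000, step=0.5):
    """Metropolis sampling from p_beta(theta) ~ p(y|theta)^beta * p(theta)."""
    theta = 0.0
    lp = beta * log_lik(theta) + log_prior(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = beta * log_lik(prop) + log_prior(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[1000:])   # discard burn-in

betas = np.linspace(0, 1, 16) ** 5    # temperature schedule, dense near beta = 0
ell = np.array([np.mean([log_lik(t) for t in sample_power_posterior(b)])
                for b in betas])
log_Z_ti = np.sum(np.diff(betas) * (ell[:-1] + ell[1:]) / 2)   # trapezoidal rule

# Exact log evidence for this conjugate model, for comparison.
S = np.sum(y)
log_Z_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
               - 0.5 * (np.sum(y ** 2) - S ** 2 / (n + 1)))
print(log_Z_ti, log_Z_exact)
```

Because each temperature's expectation is estimated independently, the per-beta sampling loops parallelize naturally, which is the computational point the paper makes about efficient implementations.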
High-throughput automated scoring of Ki67 in breast cancer tissue microarrays from the Breast Cancer Association Consortium (BCAC)
Automated methods are needed to facilitate high-throughput and reproducible scoring of Ki67 and other markers in breast cancer tissue microarrays (TMAs) in large-scale studies. To address this need, we developed an automated protocol for Ki67 scoring and evaluated its performance.
- …
