13 research outputs found

    Striatal intrinsic reinforcement signals during recognition memory: relationship to response bias and dysregulation in schizophrenia

    Ventral striatum (VS) is a critical brain region for reinforcement learning and motivation, and VS hypofunction is implicated in psychiatric disorders including schizophrenia. Providing rewards or performance feedback has been shown to activate VS. Intrinsically motivated subjects performing challenging cognitive tasks are likely to engage reinforcement circuitry even in the absence of external feedback or incentives. However, such intrinsic reinforcement responses have received little attention, have not been examined in relation to behavioral performance, and have not been evaluated for impairment in neuropsychiatric disorders such as schizophrenia. Here we used fMRI to examine a challenging “old” vs. “new” visual recognition task in healthy subjects and patients with schizophrenia. Targets were unique fractal stimuli previously presented as salient distractors in a visual oddball task, producing incidental memory encoding. Based on the prediction error theory of reinforcement learning, we hypothesized that correct target recognition would activate VS in controls, and that this activation would be greater in subjects with lower expectation of responding correctly, as indexed by a more conservative response bias. We also predicted these effects would be reduced in patients with schizophrenia. Consistent with these predictions, controls activated VS and other reinforcement processing regions during correct recognition, with greater VS activation in those with a more conservative response bias. Patients did not show either effect, with significant group differences suggesting hyporesponsivity in patients to internally generated feedback. These findings highlight the importance of accounting for intrinsic motivation and reward when studying cognitive tasks, and add to growing evidence of reward circuit dysfunction in schizophrenia that may impact cognition and function.
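The prediction-error logic behind the stated hypothesis can be sketched with a generic Rescorla-Wagner-style formulation (an illustrative sketch, not the study's analysis code; the variable names are assumptions):

```python
# Minimal sketch: a reward prediction error is larger when the expectation
# of success is lower, which is why a conservative response bias (lower
# expected probability of being correct) should predict a stronger
# intrinsic reinforcement signal on correct recognition.

def prediction_error(outcome: float, expectation: float) -> float:
    """Reward prediction error: delta = outcome - expectation."""
    return outcome - expectation

# A liberal responder expects to be correct more often than a conservative one.
liberal_pe = prediction_error(outcome=1.0, expectation=0.8)
conservative_pe = prediction_error(outcome=1.0, expectation=0.4)
# The conservative responder's prediction error on a correct trial is larger.
```

Under this reading, the same correct answer is a bigger "surprise" for a conservative responder, matching the reported correlation between conservative bias and VS activation.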

    Hierarchical Bayesian Models of Reinforcement Learning: Introduction and comparison to alternative methods

    Reinforcement learning models have been used extensively to capture learning and decision-making processes in humans and other organisms. One essential goal of these computational models is the generalization to new sets of observations. Extracting parameters that can reliably predict out-of-sample data can be difficult, however. The use of prior distributions to regularize parameter estimates has been shown to help remedy this issue. While previous research has suggested that empirical priors estimated from a separate dataset improve predictive accuracy, this paper outlines an alternate method for the derivation of empirical priors: hierarchical Bayesian modeling. We provide a detailed introduction to this method, and show that using hierarchical models to simultaneously extract and impose empirical priors leads to better out-of-sample prediction while being more data efficient.
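The core benefit the abstract describes, shrinking noisy per-subject parameter estimates toward a group-level prior, can be illustrated with a toy normal-normal model (an assumed setup, not the paper's model; numbers are invented for illustration):

```python
# Sketch: hierarchical shrinkage. Per-subject maximum-likelihood estimates
# are noisy; pulling them toward the empirical group mean (the analytic
# posterior mean under a normal-normal model) typically recovers the true
# parameters better, which is the mechanism behind improved out-of-sample
# prediction in hierarchical Bayesian fits.
import random

random.seed(0)
group_mean, group_sd, noise_sd = 0.3, 0.05, 0.2
true_params = [random.gauss(group_mean, group_sd) for _ in range(50)]
# Noisy per-subject estimates (stand-ins for independent MLE fits).
mle = [t + random.gauss(0.0, noise_sd) for t in true_params]

# Precision-weighted shrinkage toward the empirical group mean.
emp_mean = sum(mle) / len(mle)
w = group_sd**2 / (group_sd**2 + noise_sd**2)  # weight on the subject's own data
shrunk = [w * m + (1 - w) * emp_mean for m in mle]

def mse(est):
    """Mean squared error of estimates against the true parameters."""
    return sum((e - t) ** 2 for e, t in zip(est, true_params)) / len(est)
```

Here the group variance is small relative to the estimation noise, so the weight on each subject's own data is small and the estimates shrink strongly; a full hierarchical model learns that weight from the data rather than fixing it.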

    Dopamine modulates learning-related changes in dynamic striatal-cortical connectivity in Parkinson’s disease

    Learning from reinforcement is thought to depend on striatal dopamine inputs, which serve to update the value of actions by modifying connections in widespread cortico-striatal circuits. While considerable research has described the activity of individual striatal and midbrain regions in reinforcement learning, the broader role for dopamine in modulating network-level processes has been difficult to decipher. To examine whether dopamine modulates circuit-level dynamic connectivity during learning, we characterized the effects of dopamine on learning-related dynamic functional connectivity estimated from fMRI data acquired in patients with Parkinson’s disease. Patients with Parkinson’s disease have severe dopamine depletion in the striatum and are treated with dopamine replacement drugs, providing an opportunity to compare learning and network dynamics when patients are in a low dopamine state (off drugs) versus a high dopamine state (on drugs). We assessed the relationship between dopamine and dynamic connectivity while patients performed a probabilistic reversal learning task. We found that reversal learning altered dynamic network flexibility in the striatum and that this effect was dependent on dopaminergic state. We also found that dopamine modulated changes in connectivity between the striatum and specific task-relevant visual areas of inferior temporal cortex, providing empirical support for theories stipulating that value is updated through changes in cortico-striatal circuits. These results suggest that dopamine exerts a widespread effect on neural circuitry and network dynamics during reinforcement learning.
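The probabilistic reversal learning task named in the abstract can be sketched with a minimal Q-learning agent (a generic simulation under assumed task parameters, not the study's task code: reward probabilities, trial counts, and the greedy choice rule are illustrative):

```python
# Sketch: two options, one rewarded 80% of the time; the contingency
# reverses midway, and a prediction-error learner must relearn which
# option is better. This is the behavioral structure the study paired
# with dynamic connectivity measures.
import random

random.seed(1)
alpha = 0.3                  # learning rate
q = [0.5, 0.5]               # action values
better = 0                   # currently high-reward option
choices_correct = []

for trial in range(200):
    if trial == 100:
        better = 1           # reversal: reward contingencies flip
    choice = 0 if q[0] >= q[1] else 1        # greedy choice, for simplicity
    p_reward = 0.8 if choice == better else 0.2
    reward = 1.0 if random.random() < p_reward else 0.0
    q[choice] += alpha * (reward - q[choice])  # prediction-error update
    choices_correct.append(choice == better)

pre_reversal_acc = sum(choices_correct[:100]) / 100
post_reversal_acc = sum(choices_correct[150:]) / 50
```

After the reversal, the previously preferred option's value decays under the new 20% reward rate until the agent switches, which is the relearning phase where dopaminergic state was found to matter.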

    Machine learning within the Parkinson’s progression markers initiative: Review of the current state of affairs

    The Parkinson’s Progression Markers Initiative (PPMI) has collected more than a decade’s worth of longitudinal and multi-modal data from patients, healthy controls, and at-risk individuals, including imaging, clinical, cognitive, and ‘omics’ biospecimens. Such a rich dataset presents unprecedented opportunities for biomarker discovery, patient subtyping, and prognostic prediction, but it also poses challenges that may require the development of novel methodological approaches. In this review, we provide an overview of the application of machine learning methods to analyzing data from the PPMI cohort. We find that there is significant variability in the types of data, models, and validation procedures used across studies, and that much of what makes the PPMI data set unique (multi-modal and longitudinal observations) remains underutilized in most machine learning studies. We review each of these dimensions in detail and provide recommendations for future machine learning work using data from the PPMI cohort.
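One validation pitfall specific to longitudinal cohorts like PPMI is splitting repeated visits from the same patient across train and test sets, which leaks identity information. A subject-level split avoids this (an illustrative sketch with invented data shapes; `subject_level_split` is a hypothetical helper, not a PPMI or review-provided function):

```python
# Sketch: split longitudinal visit records by subject, so that no
# patient contributes visits to both the training and the test set.
import random

def subject_level_split(visits, test_frac=0.3, seed=0):
    """Split a list of (subject_id, features) visit records by subject."""
    subjects = sorted({sid for sid, _ in visits})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_frac))
    test_ids = set(subjects[:n_test])
    train = [v for v in visits if v[0] not in test_ids]
    test = [v for v in visits if v[0] in test_ids]
    return train, test

# Toy cohort: 10 subjects with 4 visits each.
visits = [(sid, {"visit": t}) for sid in range(10) for t in range(4)]
train, test = subject_level_split(visits)
train_ids = {sid for sid, _ in train}
test_ids = {sid for sid, _ in test}
```

Grouped splits of this kind are one concrete way the underutilized longitudinal structure of the data changes how validation must be done.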

    Dynamic flexibility in striatal-cortical circuits supports reinforcement learning

    Complex learned behaviors must involve the integrated action of distributed brain circuits. While the contributions of individual regions to learning have been extensively investigated, understanding how distributed brain networks orchestrate their activity over the course of learning remains elusive. To address this gap, we used fMRI combined with tools from dynamic network neuroscience to obtain time-resolved descriptions of network coordination during reinforcement learning. We found that learning to associate visual cues with reward involves dynamic changes in network coupling between the striatum and distributed brain regions, including visual, orbitofrontal, and ventromedial prefrontal cortex. Moreover, we found that flexibility in striatal network dynamics correlates with participants’ learning rate and inverse temperature, two parameters derived from reinforcement learning models. Finally, we found that not all forms of learning relate to this circuit: episodic memory, measured in the same participants at the same time, was related to dynamic connectivity in distinct brain networks. These results suggest that dynamic changes in striatal-centered networks provide a mechanism for information integration during reinforcement learning.

    Significance Statement: Learning from the outcomes of actions, referred to as reinforcement learning, is an essential part of life. The roles of individual brain regions in reinforcement learning have been well characterized in terms of the updating of values for actions or sensory stimuli. Missing from this account, however, is a description of the manner in which different brain areas interact during learning to integrate sensory and value information. Here we characterize flexible striatal-cortical network dynamics that relate to reinforcement learning behavior.
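The two model parameters named in the abstract have standard definitions in reinforcement learning models: the learning rate scales prediction-error updates, and the inverse temperature controls how deterministically values drive choice under a softmax rule. A generic sketch (not the study's fitting code):

```python
# Sketch: the learning rate (alpha) and inverse temperature (beta) of a
# standard value-update + softmax-choice model, the two parameters the
# study correlated with striatal network flexibility.
import math

def update_value(q: float, reward: float, alpha: float) -> float:
    """Rescorla-Wagner update: q <- q + alpha * (reward - q)."""
    return q + alpha * (reward - q)

def softmax_choice_prob(q_values, beta):
    """P(choose i) proportional to exp(beta * Q_i)."""
    exps = [math.exp(beta * q) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

# Higher beta concentrates choice probability on the higher-valued option.
p_low_beta = softmax_choice_prob([0.8, 0.2], beta=1.0)
p_high_beta = softmax_choice_prob([0.8, 0.2], beta=10.0)
```

In fitted models, a larger alpha means faster value updating and a larger beta means more exploitative, less exploratory choice; these are the per-participant quantities related to dynamic network flexibility.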