    The same analysis approach: Practical protection against the pitfalls of novel neuroimaging analysis methods

    Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven e.g. by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for the experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance.
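
    A minimal sketch of this testing habit, assuming a leave-one-run-out decoding pipeline with a linear SVM (both illustrative choices, not the paper's exact setup): the identical cross-validated analysis is run on simulated null data, where any reliable deviation from 50% accuracy, above or below chance, flags a problem in the design or procedure.

```python
# Same Analysis Approach sketch: apply the identical cross-validated
# decoding pipeline to simulated null data (pure noise, random labels).
# Systematic deviations from chance expose design/analysis pitfalls.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 6, 20, 50

# Simulated null data: no signal, so accuracy should hover at 50%.
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))
y = rng.integers(0, 2, size=n_runs * trials_per_run)   # random labels
runs = np.repeat(np.arange(n_runs), trials_per_run)    # run membership

# Same analysis as the main experiment: leave-one-run-out cross-validation.
scores = cross_val_score(LinearSVC(dual=False), X, y,
                         groups=runs, cv=LeaveOneGroupOut())
print(f"null accuracy: {scores.mean():.3f}")  # reliable deviation from 0.5 is a red flag
```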

    On Picturing a Candle: The Prehistory of Imagery Science

    The past 25 years have seen a rapid growth of knowledge about brain mechanisms involved in visual mental imagery. These advances have largely been made independently of the long history of philosophical – and even psychological – reckoning with imagery and its parent concept ‘imagination’. We suggest that the view from these empirical findings can be widened by an appreciation of imagination’s intellectual history, and we seek to show how that history both created the conditions for – and presents challenges to – the scientific endeavor. We focus on the neuroscientific literature’s most commonly used task – imagining a concrete object – and, after sketching what is known of the neurobiological mechanisms involved, we examine the same basic act of imagining from the perspective of several key positions in the history of philosophy and psychology. We present positions that, firstly, contextualize and inform the neuroscientific account, and secondly, pose conceptual and methodological challenges to the scientific analysis of imagery. We conclude by reflecting on the intellectual history of visualization in the light of contemporary science, and the extent to which such science may resolve long-standing theoretical debates.

    Brain-optimized inference improves reconstructions of fMRI brain activity

    The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas.
    Comment: 7 pages, 8 figures, submitted to the 2023 AAAI Workshop on Brain Encoding and Decoding. arXiv admin note: text overlap with arXiv:2306.0092
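
    A schematic sketch of the iterative loop described above. The callables `diffusion_sample` and `encoding_model`, the correlation-based selection, and the linear stochasticity schedule are all assumptions for illustration, not the authors' implementation.

```python
# Schematic brain-optimized inference loop; all components are placeholders.
import numpy as np

def correlation(a, b):
    """Pearson correlation between predicted and measured activity vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def brain_optimized_inference(seed_image, measured_activity,
                              diffusion_sample, encoding_model,
                              n_iters=20, library_size=16,
                              strength0=1.0, width_threshold=0.05):
    seed = seed_image
    for it in range(n_iters):
        # Assumed linear annealing of the sampling stochasticity.
        strength = strength0 * (1 - it / n_iters)
        library = [diffusion_sample(seed, strength) for _ in range(library_size)]
        # Keep the sample whose predicted activity best matches the data.
        scores = [correlation(encoding_model(img), measured_activity)
                  for img in library]
        best = library[int(np.argmax(scores))]
        # "Width" of the image distribution: spread of predicted activity.
        width = np.std([encoding_model(img) for img in library], axis=0).mean()
        if width < width_threshold:
            break
        seed = best  # structural guidance for the next library
    return best
```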

    Reconstructing seen images from human brain activity via guided stochastic search

    Visual reconstruction algorithms are an interpretive tool that map brain activity to pixels. Past reconstruction algorithms employed brute-force search through a massive library to select candidate images that, when passed through an encoding model, accurately predict brain activity. Here, we use conditional generative diffusion models to extend and improve this search-based strategy. We decode a semantic descriptor from human brain activity (7T fMRI) in voxels across most of visual cortex, then use a diffusion model to sample a small library of images conditioned on this descriptor. We pass each sample through an encoding model, select the images that best predict brain activity, and then use these images to seed another library. We show that this process converges on high-quality reconstructions by refining low-level image details while preserving semantic content across iterations. Interestingly, the time-to-convergence differs systematically across visual cortex, suggesting a succinct new way to measure the diversity of representations across visual brain areas.
    Comment: 4 pages, 5 figures, submitted to the 2023 Conference on Cognitive Computational Neuroscience
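
    The core select-and-reseed step of this search strategy can be sketched as follows; the `encoding_model` callable and the prediction-error metric are illustrative assumptions, not the paper's code.

```python
# Select-and-reseed step for guided stochastic search (illustrative).
import numpy as np

def select_seeds(library, measured_activity, encoding_model, k=4):
    """Rank candidate images by how well their predicted brain activity
    matches the measured activity; return the k best as the next seeds."""
    preds = np.stack([encoding_model(img) for img in library])
    resid = preds - measured_activity        # prediction error per image
    errors = np.linalg.norm(resid, axis=1)   # smaller error = better match
    top = np.argsort(errors)[:k]
    return [library[i] for i in top]
```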

    A synergy-based hand control is encoded in human motor cortical areas

    How the human brain controls hand movements to carry out different tasks is still debated. The concept of synergy has been proposed to indicate functional modules that may simplify the control of hand postures by simultaneously recruiting sets of muscles and joints. However, whether and to what extent synergic hand postures are encoded as such at a cortical level remains unknown. Here, we combined kinematic, electromyography, and brain activity measures obtained by functional magnetic resonance imaging while subjects performed a variety of movements towards virtual objects. Hand postural information, encoded through kinematic synergies, was represented in cortical areas devoted to hand motor control and successfully discriminated individual grasping movements, significantly outperforming alternative somatotopic or muscle-based models. Importantly, hand postural synergies were predicted by neural activation patterns within primary motor cortex. These findings support a novel cortical organization for hand movement control and open potential applications for brain-computer interfaces and neuroprostheses.
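
    A hedged sketch of the underlying analysis idea: kinematic synergies extracted as principal components of joint-angle recordings, with a linear model testing whether cortical activity patterns predict the per-trial synergy weights. The dimensions, random placeholder data, and ridge regression are illustrative assumptions, not the paper's exact pipeline.

```python
# Kinematic synergies as principal components, predicted from voxel patterns.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_joints, n_voxels, n_synergies = 120, 20, 200, 5

joint_angles = rng.standard_normal((n_trials, n_joints))   # hand kinematics
voxels = rng.standard_normal((n_trials, n_voxels))         # motor-cortex patterns

# Kinematic synergies: low-dimensional postural components of the hand.
pca = PCA(n_components=n_synergies)
synergy_weights = pca.fit_transform(joint_angles)          # per-trial weights

# Test whether cortical activity patterns predict the synergy weights.
model = Ridge(alpha=1.0).fit(voxels, synergy_weights)
print("R^2:", model.score(voxels, synergy_weights))
```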

    Towards a synergy framework across neuroscience and robotics: Lessons learned and open questions. Reply to comments on: "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands"

    We would like to thank all commentators for their insightful commentaries. Thanks to their diverse and complementary expertise in neuroscience and robotics, the commentators have provided us with the opportunity to further discuss the state of the art and gaps in the integration of neuroscience and robotics reviewed in our article. We organized our reply in two sections that capture the main points of all commentaries [1–9]: (1) Advantages and limitations of the synergy approach in neuroscience and robotics, and (2) Learning and role of sensory feedback in biological and robotic synergies.

    MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data

    Reconstructions of visual perception from brain activity have improved tremendously, but the practical utility of such methods has been limited. This is because such models are trained independently per subject, where each subject requires dozens of hours of expensive fMRI training data to attain high-quality results. The present work showcases high-quality reconstructions using only 1 hour of fMRI training data. We pretrain our model across 7 subjects and then fine-tune on minimal data from a new subject. Our novel functional alignment procedure linearly maps all brain data to a shared-subject latent space, followed by a shared non-linear mapping to CLIP image space. We then map from CLIP space to pixel space by fine-tuning Stable Diffusion XL to accept CLIP latents as inputs instead of text. This approach improves out-of-subject generalization with limited training data and also attains state-of-the-art image retrieval and reconstruction metrics compared to single-subject approaches. MindEye2 demonstrates how accurate reconstructions of perception are possible from a single visit to the MRI facility. All code is available on GitHub.
    Comment: In Forty-first International Conference on Machine Learning, 2024. Code at https://github.com/MedARC-AI/MindEyeV2. Published as a conference paper at ICML 2024.
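
    A schematic PyTorch sketch of the shared-subject alignment described above: one linear map per subject into a shared latent space, followed by a shared non-linear map toward CLIP image space. Layer sizes and choices are assumptions for illustration, not MindEye2's actual architecture (see the linked GitHub repository for that).

```python
# Shared-subject alignment sketch: per-subject linear maps + shared backbone.
import torch
import torch.nn as nn

class SharedSubjectAligner(nn.Module):
    def __init__(self, voxel_dims, shared_dim=4096, clip_dim=1280):
        super().__init__()
        # Subject-specific linear maps: flattened voxels -> shared latent space.
        self.subject_maps = nn.ModuleList(
            [nn.Linear(d, shared_dim) for d in voxel_dims])
        # Shared non-linear mapping: shared latent space -> CLIP image space.
        self.backbone = nn.Sequential(
            nn.Linear(shared_dim, shared_dim), nn.GELU(),
            nn.Linear(shared_dim, clip_dim))

    def forward(self, voxels, subject_idx):
        z = self.subject_maps[subject_idx](voxels)
        return self.backbone(z)

# Fine-tuning on a new subject would freeze `backbone` and train only a
# fresh per-subject linear map on the minimal (~1 hour) data.
model = SharedSubjectAligner(voxel_dims=[15000, 14000])
out = model(torch.randn(8, 15000), subject_idx=0)  # batch of 8 scans -> CLIP latents
```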

    How does the primate brain combine generative and discriminative computations in vision?

    Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it. In this conception, vision inverts a generative model through an interrogation of the evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors of this paper include scientists rooted, in roughly equal numbers, in each of the two conceptions, motivated to overcome what may be a false dichotomy between them and to engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
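
    The distinction can be made concrete with a toy example, all numbers illustrative: a discriminative system maps the sensory datum x directly to a latent estimate, while a generative system evaluates the evidence under each candidate cause by inverting p(x|z) with Bayes' rule.

```python
# Toy contrast between discriminative and generative inference.
import numpy as np

z_values = np.array([0.0, 1.0])   # two candidate latent causes
prior = np.array([0.5, 0.5])
sigma = 0.8

def likelihood(x, z):
    """Generative model: x ~ Normal(z, sigma^2), unnormalized density."""
    return np.exp(-0.5 * ((x - z) / sigma) ** 2)

x = 0.7  # observed sensory evidence

# Discriminative inference: a fixed feedforward mapping from data to latent.
z_discriminative = z_values[int(x > 0.5)]

# Generative inference: evaluate the evidence under each hypothesis (Bayes).
posterior = prior * likelihood(x, z_values)
posterior /= posterior.sum()
z_generative = z_values[int(np.argmax(posterior))]

print(z_discriminative, posterior, z_generative)
```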

    Of Toasters and Molecular Ticker Tapes

    Experiments in systems neuroscience can be seen as consisting of three steps: (1) selecting the signals we are interested in, (2) probing the system with carefully chosen stimuli, and (3) getting data out of the brain. Here I discuss how emerging techniques in molecular biology are starting to improve these three steps. To estimate its future impact on experimental neuroscience, I will stress the analogy of ongoing progress with that of microprocessor production techniques. These techniques have allowed computers to simplify countless problems; because they are easier to use than mechanical timers, they are even built into toasters. Molecular biology may advance even faster than computer speeds and has made immense progress in understanding and designing molecules. These advancements may in turn produce impressive improvements to each of the three steps, ultimately shifting the bottleneck from obtaining data to interpreting it.