488 research outputs found

    Experts bodies, experts minds: How physical and mental training shape the brain

    Skill learning is the improvement in perceptual, cognitive, or motor performance following practice. Expert performance rests on well-organized knowledge, sophisticated and domain-specific mental representations and cognitive processing, the fast and efficient execution of automatic sequences, and the capacity to handle large amounts of information and other challenging task demands that would paralyze the performance of novices. The neural reorganizations that occur with expertise reflect the optimization of neurocognitive resources to deal with the complex computational load needed to achieve peak performance. Capitalizing on neuronal plasticity, these brain modifications take place over time with practice and during the consolidation process. One major challenge is to investigate the neural substrates and cognitive mechanisms engaged in expertise, and to define “expertise” from its neural and cognitive underpinnings. Recent findings show that many brain structures are recruited during task performance, but only activity in regions related to domain-specific knowledge distinguishes experts from novices. The present review focuses on three expertise domains placed along a motor-to-mental gradient of skill learning: sequential motor skill, mental simulation of movement (motor imagery), and meditation as a paradigmatic example of “pure” mental training. We first describe results in each domain from initial skill acquisition to expert performance, including recent results on the corresponding underlying neural mechanisms. We then discuss differences and similarities between these domains to identify the key neurocognitive processes underpinning expertise, and conclude with suggestions for future research.

    Increasing the Detection Limit of the Parkinson Disorder through a Specific Surface Chemistry Applied onto Inner Surface of the Titration Well

    The main objective of this paper is to show how the sensitivity of ELISA titration for neurodegenerative proteins, relevant to the differential diagnosis of neurodegenerative disorders, can be enhanced by reducing the nonspecific adsorption that can lead to false positives. This was achieved by combining plasma and wet chemistries applied to the inner surface of the titration well. The polypropylene surface was plasma-activated and then dip-coated with different amphiphilic molecules; these molecules have hydrocarbon chains of varying lengths and may be charged. The modified surfaces were characterized in terms of hydrophilic/hydrophobic character, surface chemical groups, and topography. Finally, the coated wells were tested in the ELISA titration based on specific antibody capture of the α-synuclein protein. The highest sensitivity was obtained with a polar (Θ = 35°), negatively charged, smooth inner surface.

    Small Transformers Compute Universal Metric Embeddings

    We study representations of data from an arbitrary metric space $\mathcal{X}$ in the space of univariate Gaussian mixtures equipped with a transport metric (Delon and Desolneux 2020). We derive embedding guarantees for feature maps implemented by small neural networks called probabilistic transformers. Our guarantees are of memorization type: we prove that a probabilistic transformer of depth about $n \log(n)$ and width about $n^2$ can bi-Hölder embed any $n$-point dataset from $\mathcal{X}$ with low metric distortion, thus avoiding the curse of dimensionality. We further derive probabilistic bi-Lipschitz guarantees, which trade off the amount of distortion against the probability that a randomly chosen pair of points embeds with that distortion. If $\mathcal{X}$'s geometry is sufficiently regular, we obtain stronger, bi-Lipschitz guarantees for all points in the dataset. As applications, we derive neural embedding guarantees for datasets from Riemannian manifolds, metric trees, and certain types of combinatorial graphs. When instead embedding into multivariate Gaussian mixtures, we show that probabilistic transformers can compute bi-Hölder embeddings with arbitrarily small distortion.

    Comment: 42 pages, 10 figures, 3 tables
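    For orientation, a bi-Hölder embedding is one whose distance distortion is bounded above and below by power laws. A standard statement of the condition (generic constants and exponent conventions, which may differ from the paper's) is:

        \[
            c \, d_{\mathcal{X}}(x_1, x_2)^{\beta}
            \;\le\; d_{\mathcal{Y}}\bigl(f(x_1), f(x_2)\bigr)
            \;\le\; C \, d_{\mathcal{X}}(x_1, x_2)^{\alpha}
            \qquad \text{for all } x_1, x_2 \in \mathcal{X},
        \]

    for some constants $0 < c \le C$ and exponents $0 < \alpha \le 1 \le \beta$; the bi-Lipschitz guarantees mentioned above are the special case $\alpha = \beta = 1$, where distortion is measured by $C/c$.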

    FunkNN: Neural Interpolation for Functional Generation

    Can we build continuous generative models which generalize across scales, can be evaluated at any coordinate, admit calculation of exact derivatives, and are conceptually simple? Existing MLP-based architectures generate worse samples than grid-based generators with favorable convolutional inductive biases. Models that focus on generating images at different scales do better, but employ complex architectures not designed for continuous evaluation of images and derivatives. We take a signal-processing perspective and treat continuous image generation as interpolation from samples. Indeed, correctly sampled discrete images contain all information about the low spatial frequencies. The question is then how to extrapolate the spectrum in a data-driven way while meeting the above design criteria. Our answer is FunkNN, a new convolutional network which learns how to reconstruct continuous images at arbitrary coordinates and can be applied to any image dataset. Combined with a discrete generative model, it becomes a functional generator which can act as a prior in continuous ill-posed inverse problems. We show that FunkNN generates high-quality continuous images and exhibits strong out-of-distribution performance thanks to its patch-based design. We further showcase its performance in several stylized inverse problems with exact spatial derivatives.

    Comment: 17 pages, 13 figures
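    The patch-based design can be sketched concisely: to evaluate the continuous image at a query coordinate, differentiably crop a small patch around that coordinate and let a compact CNN predict the intensity there. The PyTorch sketch below is an illustration of this idea with assumed layer sizes, not the authors' implementation:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PatchInterpolator(nn.Module):
            """Illustrative FunkNN-style interpolator: predicts the image value at a
            continuous coordinate from a small patch cropped around it (assumed sizes)."""

            def __init__(self, patch_size: int = 9):
                super().__init__()
                self.patch_size = patch_size
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(32 * patch_size ** 2, 64), nn.ReLU(),
                    nn.Linear(64, 1),
                )

            def forward(self, image: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
                # image: (1, 1, H, W); coords: (N, 2) as (x, y) in [-1, 1].
                _, _, H, W = image.shape
                p = self.patch_size
                # Patch-pixel offsets in normalized coordinates, one pixel apart (assumes H == W).
                offs = torch.linspace(-(p - 1), p - 1, p, device=coords.device) / (H - 1)
                dy, dx = torch.meshgrid(offs, offs, indexing="ij")
                grid = coords[:, None, None, :] + torch.stack((dx, dy), dim=-1)  # (N, p, p, 2)
                # grid_sample keeps the crop differentiable w.r.t. the query coordinates.
                patches = F.grid_sample(image.expand(coords.shape[0], -1, -1, -1),
                                        grid, align_corners=True)  # (N, 1, p, p)
                return self.net(patches)  # (N, 1) predicted intensities

    Because the crop is produced by grid_sample, the output is differentiable with respect to the query coordinates, which is what makes exact spatial derivatives available through autograd.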

    Joint Cryo-ET Alignment and Reconstruction with Neural Deformation Fields

    We propose a framework to jointly determine the deformation parameters and reconstruct the unknown volume in electron cryotomography (cryo-ET). Cryo-ET aims to reconstruct three-dimensional biological samples from two-dimensional projections. A major challenge is that projections can only be acquired for a limited range of tilts, and that each projection undergoes an unknown deformation during acquisition. Not accounting for these deformations results in poor reconstructions. Existing cryo-ET software packages attempt to align the projections, often in a workflow that requires manual feedback. Our proposed method sidesteps this inconvenience by automatically computing a set of undeformed projections while simultaneously reconstructing the unknown volume. We achieve this by learning a continuous representation of the undeformed measurements and deformation parameters. We show that our approach enables the recovery of high-frequency details that are destroyed without accounting for deformations.
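    Schematically, the joint problem can be posed as fitting a volume and per-tilt deformation parameters against the measured projections. The generic objective below illustrates the formulation and is not the paper's exact notation:

        \[
            \min_{v, \, \{\phi_i\}} \; \sum_{i=1}^{N}
            \bigl\| P_{\theta_i}\!\bigl(v \circ \phi_i\bigr) - y_i \bigr\|_2^2,
        \]

    where $y_i$ is the projection measured at tilt angle $\theta_i$, $P_{\theta_i}$ is the tomographic projection operator, $v$ is the unknown volume, and $\phi_i$ is the unknown deformation incurred during the $i$-th acquisition; the recovered $P_{\theta_i}(v)$ play the role of the undeformed projections.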

    Overnight consolidation aids the transfer of statistical knowledge from the medial temporal lobe to the striatum

    Sleep is important for abstraction of the underlying principles (or gist) which bind together conceptually related stimuli, but little is known about the neural correlates of this process. Here, we investigate this issue using overnight sleep monitoring and functional magnetic resonance imaging (fMRI). Participants were exposed to a statistically structured sequence of auditory tones and then tested immediately for recognition of short sequences which conformed to the learned statistical pattern. Subsequently, after consolidation over either 30 min or 24 h, they performed a delayed test session in which brain activity was monitored with fMRI. Behaviorally, there was greater improvement across 24 h than across 30 min, and this was predicted by the amount of slow wave sleep (SWS) obtained. Functionally, we observed weaker parahippocampal responses and stronger striatal responses after sleep. Like the behavioral result, these differences in functional response were predicted by the amount of SWS obtained. Furthermore, connectivity between striatum and parahippocampus was weaker after sleep, whereas connectivity between putamen and planum temporale was stronger. Taken together, these findings suggest that abstraction is associated with a gradual shift from the hippocampal to the striatal memory system and that this may be mediated by SWS.

    GLIMPSE: Generalized Local Imaging with MLPs

    Deep learning is the current de facto state of the art in tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a convolutional neural network (CNN) which then computes the reconstruction. Despite strong results on 'in-distribution' test data similar to the training data, backprojection from sparse-view data delocalizes singularities, so these approaches require a large receptive field to perform well. As a consequence, they overfit to certain global structures, which leads to poor generalization on out-of-distribution (OOD) samples. Moreover, their memory complexity and training time scale unfavorably with image resolution, making them impractical for application at realistic clinical resolutions, especially in 3D: a standard U-Net requires 140 GB of memory and 2600 seconds per epoch on a research-grade GPU when training on 1024x1024 images. In this paper, we introduce GLIMPSE, a local processing neural network for computed tomography which reconstructs a pixel value by feeding only the measurements associated with the neighborhood of that pixel to a simple MLP. While achieving comparable or better performance than successful CNNs like the U-Net on in-distribution test data, GLIMPSE significantly outperforms them on OOD samples while maintaining a memory footprint almost independent of image resolution; 5 GB of memory suffices to train on 1024x1024 images. Further, we built GLIMPSE to be fully differentiable, which enables feats such as recovery of accurate projection angles if they are out of calibration.

    Comment: 12 pages, 10 figures
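    The core idea, reconstructing each pixel from only the sinogram samples geometrically associated with its neighborhood, can be sketched as follows. The gather step and layer sizes are simplifying assumptions for illustration, not the published architecture:

        import torch
        import torch.nn as nn

        def gather_local_measurements(sinogram: torch.Tensor, angles: torch.Tensor,
                                      x: float, y: float, k: int = 3) -> torch.Tensor:
            """For a pixel at (x, y) in [-1, 1]^2, grab the k detector bins around its
            projection t = x*cos(theta) + y*sin(theta) at every angle
            (nearest-bin gather; an illustrative simplification)."""
            n_angles, n_det = sinogram.shape
            t = x * torch.cos(angles) + y * torch.sin(angles)      # projected detector position
            centre = ((t + 1) / 2 * (n_det - 1)).round().long()    # nearest detector bin
            offsets = torch.arange(-(k // 2), k // 2 + 1)
            idx = (centre[:, None] + offsets[None, :]).clamp(0, n_det - 1)
            return sinogram.gather(1, idx).reshape(-1)             # (n_angles * k,)

        class LocalPixelMLP(nn.Module):
            """Illustrative GLIMPSE-style reconstructor: a plain MLP maps one pixel's
            local measurements to that pixel's value (assumed sizes)."""

            def __init__(self, n_local: int, hidden: int = 256):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(n_local, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, local_meas: torch.Tensor) -> torch.Tensor:
                return self.mlp(local_meas)  # one intensity per pixel in the batch

    Because every pixel is reconstructed from a fixed-size measurement vector, the memory footprint is governed by the batch of pixels in flight rather than by the full image resolution.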

    Differentiable Uncalibrated Imaging

    We propose a differentiable imaging framework to address uncertainty in measurement coordinates such as sensor locations and projection angles. We formulate the problem as measurement interpolation at unknown nodes, supervised through the forward operator. To solve it, we apply implicit neural networks, also known as neural fields, which are naturally differentiable with respect to their input coordinates. We also develop differentiable spline interpolators which perform as well as neural networks, require less time to optimize, and have well-understood properties. Differentiability is key, as it allows us to jointly fit a measurement representation, optimize over the uncertain measurement coordinates, and perform image reconstruction, which in turn ensures consistent calibration. We apply our approach to 2D and 3D computed tomography and show that it produces improved reconstructions compared to baselines that do not account for the lack of calibration. The flexibility of the proposed framework makes it easy to apply to almost arbitrary imaging problems.
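    The decisive property is that the fitted measurement representation is differentiable in its input coordinates, so the uncertain coordinates can simply be declared learnable parameters and optimized by gradient descent alongside the representation. Below is a generic, self-contained PyTorch toy (a 1D field with synthetic data; the paper's setting is tomographic and supervised through the forward operator):

        import torch

        # Implicit representation of the measurements (a tiny stand-in for a neural field).
        field = torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 1),
        )

        # Synthetic data: measurements observed at coordinates we only know approximately.
        true_coords = torch.linspace(0.0, 1.0, 32).unsqueeze(1)
        observed = torch.sin(6.0 * true_coords)
        coords = torch.nn.Parameter(true_coords + 0.02 * torch.randn_like(true_coords))

        optimizer = torch.optim.Adam(list(field.parameters()) + [coords], lr=1e-2)
        for step in range(500):
            optimizer.zero_grad()
            loss = ((field(coords) - observed) ** 2).mean()  # fit at the *estimated* coordinates
            loss.backward()                                  # gradients reach both field and coords
            optimizer.step()

    In isolation this toy is under-constrained (the field could absorb the coordinate error); in the paper, supervision through the forward operator and the reconstruction task is what disambiguates the coordinates from the representation.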

    Manifold Rewiring for Unlabeled Imaging

    Geometric data analysis relies on graphs that are either given as input or inferred from data. These graphs are often treated as "correct" when solving downstream tasks such as graph signal denoising. But real-world graphs are known to contain missing and spurious links, and graphs inferred from noisy data will likewise be perturbed. We thus define and study the problem of graph denoising, as opposed to graph signal denoising, and propose an approach based on link-prediction graph neural networks. We focus in particular on neighborhood graphs over point clouds sampled from low-dimensional manifolds, such as those arising in imaging inverse problems and exploratory data analysis. We illustrate our graph denoising framework on regular synthetic graphs and then apply it to single-particle cryo-EM, where the measurements are corrupted by very high levels of noise. Due to this degradation, the initial graph is contaminated by noise, leading to missing or spurious edges. We show that our proposed graph denoising algorithm improves the state-of-the-art performance of multi-frequency vector diffusion maps.
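    The graph denoising step can be sketched as scoring every candidate edge with a learned link predictor and rewiring accordingly. The minimal version below scores pairs by inner products of node embeddings; in the paper the scores come from link-prediction graph neural networks, and the thresholds here are assumptions:

        import torch

        def rewire_graph(node_embeddings: torch.Tensor, adjacency: torch.Tensor,
                         add_threshold: float = 0.9, drop_threshold: float = 0.1) -> torch.Tensor:
            """Illustrative graph denoising: drop low-scoring existing edges (likely
            spurious) and add high-scoring missing ones (likely true but unobserved)."""
            scores = torch.sigmoid(node_embeddings @ node_embeddings.T)  # pairwise link scores
            keep = adjacency.bool() & (scores > drop_threshold)          # prune spurious links
            add = ~adjacency.bool() & (scores > add_threshold)           # impute missing links
            denoised = (keep | add).float()
            denoised.fill_diagonal_(0)                                   # no self-loops
            return denoised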

    Ice-Tide: Implicit Cryo-ET Imaging and Deformation Estimation

    We introduce ICE-TIDE, a method for cryogenic electron tomography (cryo-ET) that simultaneously aligns observations and reconstructs a high-resolution volume. The alignment of tilt series in cryo-ET is a major problem limiting the resolution of reconstructions. ICE-TIDE relies on an efficient coordinate-based implicit neural representation of the volume, which enables it to directly parameterize deformations and align the projections. Furthermore, the implicit network acts as an effective regularizer, allowing for high-quality reconstruction at low signal-to-noise ratios as well as partially restoring the missing wedge information. We compare the performance of ICE-TIDE to existing approaches on realistic simulated volumes, where the significant gains in resolution and accuracy of recovering deformations can be precisely evaluated. Finally, we demonstrate ICE-TIDE's ability to perform on experimental data sets.

    Comment: Under revision for journal publication
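    The composition at the heart of this family of methods, evaluating the implicit volume at deformed coordinates, can be sketched in a few lines. The coordinate-MLPs and ray parameterization below are assumed placeholders, not the authors' code:

        import torch

        def deformed_projection(volume_field, deformation_field,
                                ray_points: torch.Tensor) -> torch.Tensor:
            """Illustrative ICE-TIDE-style rendering: sample the implicit volume at
            deformed coordinates v(phi(x)) and integrate along each ray.
            ray_points: (n_rays, n_samples, 3); both fields are coordinate-MLPs."""
            displaced = ray_points + deformation_field(ray_points)  # apply learned deformation phi
            densities = volume_field(displaced).squeeze(-1)         # (n_rays, n_samples)
            return densities.sum(dim=-1)                            # quadrature along rays -> projections

    Because the deformation enters the rendering differentiably, its parameters receive gradients from the same data-fidelity loss that trains the volume.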