51 research outputs found
Network dynamics associated with experience-dependent plasticity in the rat somatosensory cortex
A Brain-Computer Interface Augmented Reality Framework with Auto-Adaptive SSVEP Recognition
Brain-Computer Interfaces (BCIs) initially gained attention for developing
applications that aid physically impaired individuals. Recently, the idea of
integrating BCI with Augmented Reality (AR) has emerged, using BCI not only to
enhance the quality of life for individuals with disabilities but also to
develop mainstream applications for healthy users. One commonly used BCI signal
pattern is the Steady-State Visually Evoked Potential (SSVEP), which captures
the brain's response to flickering visual stimuli. SSVEP-based BCI-AR
applications enable users to express their needs and wants simply by looking at
the corresponding command options. However, brain signals vary across
individuals, so SSVEP recognition must be tuned per subject. Moreover, muscle
movements and eye blinks interfere with brain signals, so subjects are
typically required to remain still during BCI experiments, which limits AR
engagement. In this paper, we (1) propose a simple adaptive ensemble
classification system that handles inter-subject variability, (2) present a
simple BCI-AR framework that supports the development of a wide range of
SSVEP-based BCI-AR applications, and (3) evaluate the performance of our
ensemble algorithm in an SSVEP-based BCI-AR application involving head
rotations, which demonstrated robustness to movement interference. Our testing
on multiple subjects achieved a mean accuracy of 80% on a PC and 77% using the
HoloLens AR headset, both of which surpass previous studies that incorporate
individual classifiers and head movements. In addition, our visual stimulation
time is a relatively short 5 seconds. The statistically significant results
show that our ensemble classification approach outperforms individual
classifiers in SSVEP-based BCIs.
System Identification of Neural Systems: Going Beyond Images to Modelling Dynamics
A vast literature has compared recordings of biological neurons in the
brain to deep neural networks. The ultimate goal is to interpret deep networks
or to better understand and encode biological neural systems. Recently, there
has been a debate on whether system identification is possible and how much it
can tell us about brain computation. System identification determines
whether one model represents the brain's computation more faithfully than
another. Nonetheless, previous work did not consider the temporal aspect, i.e.,
how video and dynamics (e.g., motion) modelling in deep networks relates to
biological neural systems, within a large-scale comparison. Toward this end, we
propose a system identification study focused on comparing single-image vs.
video understanding models with respect to visual cortex recordings. Our
study encompasses two sets of experiments: a real environment setup and a
simulated environment setup. The study also covers more than 30 models
and, unlike prior work, focuses on convolutional vs. transformer-based,
single- vs. two-stream, and fully vs. self-supervised video understanding
models. The goal is to capture a greater variety of architectures that model
dynamics. As such, this is the first large-scale study of video
understanding models from a neuroscience perspective. Our results in the
simulated experiments show that system identification can be attained to a
certain level in differentiating image vs. video understanding models.
Moreover, we provide key insights into how video understanding models predict
visual cortex responses: video understanding models outperform image
understanding models; convolutional models are better than transformer-based
models in the early-to-mid regions, except for multiscale transformers, which
remain good at predicting these regions; and two-stream models are better than
single-stream models.
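The abstract does not describe how model-to-brain fit is scored. As a hedged illustration of the standard approach in this kind of comparison (not necessarily the authors' exact method), one fits a regularized linear encoding model from a network's activations to neural responses and compares held-out prediction accuracy across candidate models. Everything below, including the ridge penalty and synthetic data shapes, is an assumption for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def encoding_score(features, responses, alpha=10.0):
    """Fit a ridge encoding model from model activations (stimuli x features)
    to neural responses (stimuli x units) and return the mean held-out
    Pearson correlation across units."""
    Xtr, Xte, ytr, yte = train_test_split(
        features, responses, test_size=0.25, random_state=0)
    pred = Ridge(alpha=alpha).fit(Xtr, ytr).predict(Xte)
    rs = [np.corrcoef(pred[:, i], yte[:, i])[0, 1]
          for i in range(yte.shape[1])]
    return float(np.mean(rs))
```

Comparing `encoding_score(video_model_feats, Y)` against `encoding_score(image_model_feats, Y)` on the same recordings `Y` gives a per-model score of the kind summarized in the results above.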
Millisecond-Timescale Local Network Coding in the Rat Primary Somatosensory Cortex
Correlation among neocortical neurons is thought to play an indispensable role in mediating sensory processing of external stimuli. The role of temporal precision in this correlation has been hypothesized to enhance information flow along sensory pathways. Its role in mediating the integration of information at the output of these pathways, however, remains poorly understood. Here, we examined spike timing correlation between simultaneously recorded layer V neurons within and across columns of the primary somatosensory cortex of anesthetized rats during unilateral whisker stimulation. We used Bayesian statistics and information theory to quantify the causal influence between the recorded cells with millisecond precision. For each stimulated whisker, we inferred stable, whisker-specific, dynamic Bayesian networks over many repeated trials, with network similarity of 83.3 ± 6% within whisker, compared to only 50.3 ± 18% across whiskers. These networks further provided information about whisker identity that was approximately 6 times higher than that provided by the latency to first spike and 13 times higher than that provided by the spike count of individual neurons examined separately. Furthermore, prediction of individual neurons' precise firing conditioned on knowledge of putative pre-synaptic cell firing was 3 times higher than prediction conditioned on stimulus onset alone. Taken together, these results suggest the presence of a temporally precise network coding mechanism that integrates information across neighboring columns within layer V about vibrissa position and whisking kinetics to mediate whisker movement by motor areas innervated by layer V.
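The "3 times higher prediction" comparison above can be illustrated with a toy version of the idea: measure how much a constant-rate baseline (a stand-in for prediction conditioned on stimulus onset alone) improves when a putative presynaptic cell's recent spike history is added. The sketch below uses a logistic regression on lagged binary spike trains as a simplified stand-in for the paper's dynamic Bayesian networks; all data, lags, and names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def prediction_gain(pre, post, lags=3):
    """Improvement (in nats per time bin) of predicting a neuron's binned
    spikes from a putative presynaptic cell's recent spike history,
    relative to a constant-rate baseline."""
    y = post[lags:]
    # Lagged presynaptic history: columns are pre[t-1], ..., pre[t-lags].
    X = np.column_stack([pre[lags - l : len(pre) - l]
                         for l in range(1, lags + 1)])
    baseline = log_loss(y, np.full(len(y), y.mean()))
    model = LogisticRegression().fit(X, y)
    conditioned = log_loss(y, model.predict_proba(X)[:, 1])
    return baseline - conditioned
```

A positive gain indicates that the presynaptic history carries predictive information about the postsynaptic cell's millisecond-scale firing beyond its average rate, which is the qualitative signature the study quantifies with its Bayesian networks.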
