
    Complexity of Wake Electroencephalography Correlates With Slow Wave Activity After Sleep Onset

    Sleep electroencephalography (EEG) provides an opportunity to study sleep scientifically; its chaotic, dynamic, complex, and dissipative nature implies that non-linear approaches could uncover some of the mechanisms of sleep. Based on well-established complexity theories, one hypothesis in sleep medicine is that lower complexity of brain waves in the pre-sleep state can facilitate sleep initiation and further improve sleep quality. However, this has never been tested with solid data. In this study, EEG collected from healthy subjects was used to investigate the association between pre-sleep EEG complexity and sleep quality. Multiscale entropy (MSE) analysis was applied to pre-sleep EEG signals recorded immediately after lights-off (while subjects were awake) to measure the complexity of brain dynamics via a proposed index, CI1–30. Slow wave activity (SWA) in sleep, which is commonly used as an indicator of sleep depth or sleep intensity, was quantified with two methods: the traditional fast Fourier transform (FFT) and ensemble empirical mode decomposition (EEMD). The associations between wake EEG complexity, sleep latency, and SWA in sleep were evaluated. Our results demonstrate that lower complexity before sleep onset is associated with decreased sleep latency, indicating a potential facilitating role of reduced pre-sleep complexity in the wake-sleep transition. In addition, the proposed EEMD-based method revealed an association between wake complexity and quantified SWA at the beginning of sleep (90 min after sleep onset). The complexity metric could thus be considered a potential indicator for sleep interventions, and further studies are encouraged to examine the application of pre-sleep EEG complexity in populations with difficulty initiating sleep. Further studies may also examine the mechanisms behind the causal relationship between pre-sleep brain complexity and SWA, or compare normal and pathological conditions.
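The MSE pipeline described here (coarse-graining followed by sample entropy at each scale, summed into a CI1–30-style index) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the summation over scales, and the use of the coarse-grained signal's own standard deviation for the tolerance are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy of a 1-D signal (tolerance r is a fraction of its SD)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(mm):
        # Embed the signal into overlapping templates of length mm.
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        n = len(emb)
        return (np.sum(d <= tol) - n) / 2  # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def complexity_index(x, scales=range(1, 31)):
    """Sum of sample entropies over scales 1..30 (a CI1-30-style index)."""
    return sum(sample_entropy(coarse_grain(x, s)) for s in scales)
```

A highly regular signal (e.g. a clean oscillation) yields lower sample entropy than white noise, which is the intuition behind comparing pre-sleep complexity across subjects.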

    Age-Related Alterations in Electroencephalography Connectivity and Network Topology During n-Back Working Memory Task

    The study of the healthy brain in elders, especially age-associated alterations in cognition, is important for understanding the deficits created by Alzheimer's disease (AD), which imposes a tremendous burden on individuals, families, and society. Although the changes in synaptic connectivity and reorganization of brain networks that accompany aging are gradually becoming understood, little is known about how normal aging affects brain inter-regional synchronization and functional networks when items are held in working memory (WM). Following the classic Sternberg WM paradigm, we recorded multichannel electroencephalography (EEG) from healthy adults (young and senior) in three different conditions: the resting state, a 0-back (control) task, and a 2-back task. The phase lag index (PLI) between EEG channels was computed, and a weighted, undirected network was then constructed from the PLI matrix. The effects of aging on network topology were examined using a brain connectivity toolbox. The results showed that age-related alteration was more prominent when the 2-back task was engaged, especially in the theta band. For the younger adults, the WM task evoked a significant increase in the clustering coefficient of the beta-band functional connectivity network, which was absent in the older adults. Furthermore, significant correlations were observed between the behavioral performance of WM and EEG metrics in the theta and gamma bands, suggesting the potential use of those measures as biomarkers for the evaluation of cognitive training, for instance. Taken together, our findings shed further light on the underlying mechanism of WM in physiological aging and suggest that different EEG frequencies appear to have distinct functional correlates in cognitive aging. Analysis of inter-regional synchronization and topological characteristics based on graph theory is thus an appropriate way to explore natural age-related changes in the human brain.
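The PLI computation at the core of this pipeline can be sketched as below. This is a generic illustration, not the study's code: it assumes the input has already been band-pass filtered to the band of interest (e.g. theta), and it uses the Hilbert transform to obtain instantaneous phase.

```python
import numpy as np
from scipy.signal import hilbert

def pli_matrix(eeg):
    """Phase lag index between every pair of channels.

    eeg: array of shape (n_channels, n_samples), assumed band-pass
    filtered to the band of interest beforehand.
    """
    phase = np.angle(hilbert(eeg, axis=1))        # instantaneous phase per channel
    diff = phase[:, None, :] - phase[None, :, :]  # pairwise phase differences
    # PLI = |mean sign of the sine of the phase difference| over time;
    # it is near 1 for a consistent non-zero lag and near 0 otherwise.
    pli = np.abs(np.mean(np.sign(np.sin(diff)), axis=2))
    np.fill_diagonal(pli, 0.0)                    # no self-connections
    return pli
```

The resulting matrix can then be fed to graph-theoretic tools (e.g. clustering coefficient) as a weighted, undirected network, as the abstract describes.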

    Elucidating the complex interplay between natural and anthropogenic factors in the deformation of the Muyubao landslide through time-series InSAR analysis

    In the Three Gorges Reservoir area, landslide disasters occur frequently, making scientific monitoring and risk prediction crucial for disaster prevention and mitigation. However, most previous studies have been constrained by analysis of singular influencing factors. In this study, we employed multi-temporal InSAR techniques coupled with multivariate geospatial statistical analysis to monitor and analyze the dynamic evolution of the Muyubao landslide in Zigui County, Hubei Province, China, from 2016 to 2023. The findings indicate that the Muyubao landslide was predominantly characterized by continuous, gradual subsidence. Key factors inducing deformation included well-developed drainage networks, gentle slopes of 15–30°, and the orientation of rock strata. Deformation rates in residential areas and along roadways exceeded background levels, implicating anthropogenic activities in the heightened landslide risk. A significant correlation was observed between landslide deformation and reservoir water level fluctuations, as opposed to rainfall patterns, highlighting reservoir regulation disturbances as a critical landslide triggering factor.
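The headline statistical comparison (deformation correlates with reservoir level, not rainfall) can be illustrated with a simple lagged correlation between an InSAR deformation series and each candidate driver. This is a hedged sketch with synthetic monthly series; the function, the lag range, and the data are all illustrative assumptions, not the paper's actual multivariate analysis.

```python
import numpy as np

def lagged_correlation(deformation, driver, max_lag=6):
    """Pearson r (at the best lag) between deformation and a candidate driver.

    A lag is allowed because slope deformation can respond to reservoir
    level changes with a delay. Returns the r with the largest magnitude.
    """
    best = 0.0
    for lag in range(max_lag + 1):
        d = deformation[lag:] if lag else deformation
        w = driver[:len(driver) - lag] if lag else driver
        r = np.corrcoef(d, w)[0, 1]
        if abs(r) > abs(best):
            best = r
    return best
```

Comparing `abs(lagged_correlation(defo, level))` against `abs(lagged_correlation(defo, rainfall))` mirrors the abstract's conclusion that water level fluctuation, not rainfall, dominates the deformation signal.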

    FusionFormer: A Multi-sensory Fusion in Bird's-Eye-View and Temporal Consistent Transformer for 3D Object Detection

    Multi-sensor modal fusion has demonstrated strong advantages in 3D object detection tasks. However, existing methods that fuse multi-modal features through simple channel concatenation require transforming features into bird's-eye-view (BEV) space, which may lose information along the Z-axis and thus lead to inferior performance. To this end, we propose FusionFormer, an end-to-end multi-modal fusion framework that leverages transformers to fuse multi-modal features and obtain fused BEV features. Based on the flexible adaptability of FusionFormer to the input modality representation, we propose a depth prediction branch that can be added to the framework to improve detection performance in camera-based detection tasks. In addition, we propose a plug-and-play temporal fusion module based on transformers that can fuse historical-frame BEV features for more stable and reliable detection results. We evaluate our method on the nuScenes dataset and achieve 72.6% mAP and 75.1% NDS on the 3D object detection task, outperforming state-of-the-art methods.
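The core idea of transformer-based fusion (as opposed to channel concatenation) is that a set of BEV queries can attend jointly to tokens from every modality. The toy sketch below shows single-head cross-attention over concatenated camera and LiDAR tokens in plain NumPy; all shapes and names are hypothetical, and real systems would add learned projections, multiple heads, and deformable attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(bev_queries, cam_tokens, lidar_tokens):
    """Fuse camera and LiDAR features into BEV queries via cross-attention.

    bev_queries: (Q, D) learnable BEV grid queries.
    cam_tokens / lidar_tokens: (N_cam, D) / (N_lidar, D) modality features.
    Keys and values are modality-agnostic, so neither modality has to be
    flattened into BEV space before fusion.
    """
    tokens = np.vstack([cam_tokens, lidar_tokens])            # (N_cam+N_lidar, D)
    scores = bev_queries @ tokens.T / np.sqrt(bev_queries.shape[1])
    attn = softmax(scores, axis=1)                            # each row sums to 1
    return attn @ tokens                                      # fused BEV feats (Q, D)
```

Because each fused feature is a convex combination of the raw tokens, information along the Z-axis survives in whatever the tokens encode, rather than being collapsed by an early concatenation in BEV space.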

    FusionAD: Multi-modality Fusion for Prediction and Planning Tasks of Autonomous Driving

    Building a multi-modality, multi-task neural network for accurate and robust performance is the de-facto standard in the perception task of autonomous driving. However, leveraging such data from multiple sensors to jointly optimize the prediction and planning tasks remains largely unexplored. In this paper, we present FusionAD, to the best of our knowledge the first unified framework that fuses information from the two most critical sensors, camera and LiDAR, and goes beyond the perception task. Concretely, we first build a transformer-based multi-modality fusion network to effectively produce fusion-based features. In contrast to the camera-based end-to-end method UniAD, we then establish fusion-aided, modality-aware prediction and status-aware planning modules, dubbed FMSPnP, that take advantage of multi-modality features. We conduct extensive experiments on the commonly used nuScenes benchmark; FusionAD achieves state-of-the-art performance, surpassing baselines by 15% on average on perception tasks such as detection and tracking, improving occupancy prediction accuracy by 10%, reducing the prediction error (ADE) from 0.708 to 0.389, and reducing the collision rate from 0.31% to only 0.12%.

    Concept Design of the “Guanlan” Science Mission: China’s Novel Contribution to Space Oceanography

    Among the various challenges that spaceborne radar observations of the ocean face, the following two issues are probably of the highest priority: inadequate dynamic resolution and ineffective vertical penetration. It is therefore the vision of the National Laboratory for Marine Science and Technology of China that two highly anticipated breakthroughs in the coming decade are likely to be associated with radar interferometry and ocean lidar (OL) technology, which are expected to make a substantial contribution to submesoscale-resolving and depth-resolving observation of the ocean. As an expanded follow-up of SWOT and an oceanic counterpart of CALIPSO, the planned “Guanlan” science mission comprises a dual-frequency (Ku and Ka) interferometric altimeter (IA) and a near-nadir pointing OL. Such an unprecedented combination of sensor systems has at least three prominent advantages. (i) The dual-frequency IA ensures a wider swath and a shorter repeat cycle, which lead to significantly improved temporal and spatial resolution, down to days and kilometers. (ii) The first spaceborne active OL ensures a deeper penetration depth and all-time detection, which lead to a layered characterization of the optical properties of the subsurface ocean, while also serving as a near-nadir altimeter measuring vertical velocities associated with the divergence and convergence of geostrophic eddy motions in the mixed layer. (iii) The simultaneous functioning of the IA/OL system allows for an enhanced correction of the contamination effects of the atmosphere and the air-sea interface, which in turn considerably reduces the error budgets of the two sensors.
As a result, the integrated IA/OL payload is expected to resolve ocean variability at submesoscale and sub-week scales with centimeter-level accuracy, while also partially revealing marine life systems and ecosystems at a 10-m vertical interval in the euphotic layer, moving a significant step forward toward a “transparent ocean” down to the vicinity of the thermocline, both dynamically and bio-optically.

    HoneyFactory: Container-Based Comprehensive Cyber Deception Honeynet Architecture

    Honeynets and honeypots originated as network security tools for collecting attack information while a network is being compromised. With the development of virtualization and software-defined networking, honeynets have recently achieved many breakthroughs. However, existing honeynet architectures treat network attacks as interactions with a single honeypot, which is supported by multiple other honeypots to make this single one more realistic and efficient. The scale and depth of existing honeynets are limited, making it hard to capture complicated attack information. Existing honeynet frameworks also offer only low-level simulation of the protected network and lack test metrics. To address these issues, we design and implement HoneyFactory, a novel container-based comprehensive cyber deception honeynet architecture consisting of five modules. Just like a factory producing products according to customer preferences, HoneyFactory generates honeynets using containers based on the business networks under protection. Within the HoneyFactory architecture, we propose a novel honeynet deception model based on a hidden Markov model (HMM) to evaluate the deception stage. We also design other modules to make the architecture comprehensive and efficient. Experiments show that HoneyFactory outperforms existing work in communication latency and connections per second. Experiments also show that HoneyFactory can effectively evaluate the deception stage and perform deep cyber deception.
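An HMM-based deception-stage model of the kind mentioned above can be sketched with the standard forward algorithm: hidden states are attack stages, observations are honeypot interaction events, and the forward pass yields the posterior over the attacker's current stage. The three stages, event types, and all probabilities below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 3-stage attack model: recon -> exploit -> lateral movement.
T = np.array([[0.7, 0.3, 0.0],     # stage transition matrix
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
E = np.array([[0.8, 0.15, 0.05],   # P(event | stage); event columns:
              [0.2, 0.6,  0.2 ],   # 0 = port scan, 1 = exploit attempt,
              [0.1, 0.2,  0.7 ]])  # 2 = pivot to another container
pi = np.array([1.0, 0.0, 0.0])     # attacks start in the recon stage

def stage_posterior(obs):
    """Forward algorithm: P(current stage | observed event sequence)."""
    alpha = pi * E[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
        alpha /= alpha.sum()       # normalize each step to avoid underflow
    return alpha
```

Given a sequence of captured events, the honeynet controller could use the posterior's argmax to decide how deep the deception has progressed and which containers to spin up next.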