Short-segment heart sound classification using an ensemble of deep convolutional neural networks
This paper proposes a framework based on deep convolutional neural networks
(CNNs) for automatic heart sound classification using short segments of
individual heartbeats. We design a 1D-CNN that directly learns features from
raw heart-sound signals, and a 2D-CNN that takes as input two-dimensional
time-frequency feature maps based on Mel-frequency cepstral coefficients
(MFCC). We further develop a time-frequency CNN ensemble (TF-ECNN) combining
the 1D-CNN and 2D-CNN based on score-level fusion of the class probabilities.
On the large PhysioNet CinC challenge 2016 database, the proposed CNN models
outperformed traditional classifiers based on support vector machine and hidden
Markov models with various hand-crafted time- and frequency-domain features.
Best classification scores with 89.22% accuracy and 89.94% sensitivity were
achieved by the ECNN, and 91.55% specificity and 88.82% modified accuracy by
the 2D-CNN alone on the test set.
Comment: 8 pages, 1 figure, conference
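The score-level fusion behind the TF-ECNN can be sketched in a few lines (a minimal illustration only: the equal weighting, function names, and two-class example below are assumptions, not the paper's tuned configuration):

```python
def fuse_scores(p_1d, p_2d, w=0.5):
    """Weighted score-level fusion of two class-probability vectors.

    p_1d, p_2d: per-class probabilities from the 1D-CNN and 2D-CNN.
    w: fusion weight (0.5 = simple averaging; an assumed value).
    """
    return [w * a + (1.0 - w) * b for a, b in zip(p_1d, p_2d)]

def predict(p_1d, p_2d):
    """Return the index of the class with the highest fused score."""
    fused = fuse_scores(p_1d, p_2d)
    return max(range(len(fused)), key=fused.__getitem__)

# Example: the 1D-CNN favors class 0, the 2D-CNN favors class 1;
# the fused scores [0.6, 0.4] settle on class 0.
print(predict([0.8, 0.2], [0.4, 0.6]))
```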
Discriminative Tandem Features for HMM-based EEG Classification
Abstract—We investigate the use of discriminative feature extractors in tandem configuration with a generative EEG classification system. Existing studies on dynamic EEG classification typically use hidden Markov models (HMMs), which lack discriminative capability. In this paper, a linear and a non-linear classifier are discriminatively trained to produce complementary input features for the conventional HMM system. Two sets of tandem features are derived from the linear discriminant analysis (LDA) projection output and the multilayer perceptron (MLP) class-posterior probabilities, before being appended to the standard autoregressive (AR) features. Evaluation on a two-class motor-imagery classification task shows that both proposed tandem features yield consistent gains over the AR baseline, resulting in significant relative improvements of 6.2% and 11.2% for the LDA and MLP features, respectively. We also explore the portability of these features across different subjects. Index Terms—Artificial neural network-hidden Markov models, EEG classification, brain-computer interface (BCI)
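The tandem construction — appending discriminatively trained outputs to the AR baseline — can be sketched as follows (a simplified illustration: the AR(2) Yule-Walker estimate and the plain concatenation are generic stand-ins, not the paper's exact feature pipeline):

```python
def autocorr(x, lag):
    """Biased sample autocorrelation of signal x at the given lag."""
    n = len(x)
    return sum(x[t] * x[t + lag] for t in range(n - lag)) / n

def ar2_coeffs(x):
    """AR(2) coefficients from the Yule-Walker equations:
    [r0 r1] [a1]   [r1]
    [r1 r0] [a2] = [r2]
    """
    r0, r1, r2 = autocorr(x, 0), autocorr(x, 1), autocorr(x, 2)
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return [a1, a2]

def tandem_features(ar_feats, posterior_feats):
    """Append discriminative (LDA projection / MLP posterior) features
    to the standard AR features before HMM modeling."""
    return list(ar_feats) + list(posterior_feats)
```

In the tandem scheme, `posterior_feats` would come from a separately trained LDA or MLP front end; the HMM then models the concatenated vector sequence.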
Estimating Time-Varying Effective Connectivity in High-Dimensional fMRI Data Using Regime-Switching Factor Models
Recent studies on analyzing dynamic brain connectivity rely on sliding-window
analysis or time-varying coefficient models which are unable to capture both
smooth and abrupt changes simultaneously. Emerging evidence suggests
state-related changes in brain connectivity where dependence structure
alternates between a finite number of latent states or regimes. Another
challenge is inference of full-brain networks with a large number of nodes. We
employ a Markov-switching dynamic factor model in which the state-driven
time-varying connectivity regimes of high-dimensional fMRI data are
characterized by lower-dimensional common latent factors, following a
regime-switching process. It enables a reliable, data-adaptive estimation of
change-points of connectivity regimes and the massive dependencies associated
with each regime. We consider the switching VAR to quantify the dynamic
effective connectivity. We propose a three-step estimation procedure: (1)
extracting the factors using principal component analysis (PCA); (2)
identifying dynamic connectivity states using the factor-based switching vector
autoregressive (VAR) models in a state-space formulation using Kalman filter
and expectation-maximization (EM) algorithm, and (3) constructing the
high-dimensional connectivity metrics for each state based on subspace
estimates. Simulation results show that our proposed estimator outperforms the
K-means clustering of time-windowed coefficients, providing more accurate
estimation of regime dynamics and connectivity metrics in high-dimensional
settings. Applications to analyzing resting-state fMRI data identify dynamic
changes in brain states during rest, and reveal distinct directed connectivity
patterns and modular organization in resting-state networks across different
states.
Comment: 21 pages
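The regime-switching idea can be illustrated by simulating a two-regime Markov-switching VAR(1) on a low-dimensional factor series (a toy sketch: the regime matrices and transition probabilities below are invented, and the estimation side — PCA, Kalman filtering, EM — is not shown):

```python
import random

def simulate_ms_var(T, A_regimes, P, seed=0):
    """Simulate a 2-D, two-regime Markov-switching VAR(1).

    A_regimes: one 2x2 VAR coefficient matrix per regime (assumed values).
    P: regime transition probabilities, P[i][j] = Pr(s_t = j | s_{t-1} = i).
    Returns the hidden state path and the simulated factor series.
    """
    rng = random.Random(seed)
    s, y = 0, [0.0, 0.0]
    states, series = [], []
    for _ in range(T):
        # Draw the next regime from the Markov chain.
        s = 0 if rng.random() < P[s][0] else 1
        A = A_regimes[s]
        # One VAR(1) step under the active regime, with unit Gaussian noise.
        y = [A[0][0] * y[0] + A[0][1] * y[1] + rng.gauss(0.0, 1.0),
             A[1][0] * y[0] + A[1][1] * y[1] + rng.gauss(0.0, 1.0)]
        states.append(s)
        series.append(y)
    return states, series

# Two persistent regimes with opposite-sign autoregressive dynamics.
states, series = simulate_ms_var(
    200,
    [[[0.5, 0.0], [0.0, 0.5]], [[-0.5, 0.0], [0.0, -0.5]]],
    [[0.95, 0.05], [0.05, 0.95]],
)
```

On data like this, the paper's three-step procedure would recover the factors by PCA and the state path via the Kalman filter and EM.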
Heart sound monitoring sys
Cardiovascular disease (CVD) is among the leading life-threatening ailments [1][2]. Under normal circumstances, a cardiac examination using electrocardiographic equipment is recommended for a person suffering from a heart disorder. The logging of irregular heart behaviour and morphology is frequently achieved through an electrocardiogram (ECG) produced by an electrocardiographic appliance for tracing cardiac activity. For the most part, this activity is measured non-invasively, i.e., through skin electrodes. Taking the ECG and heart sound together with clinical indications into consideration, the cardiologist arrives at a diagnosis of the condition of the patient's heart. This paper focuses on the concerns stated above and applies signal processing theory to pave the way for better heart auscultation performance by GPs. The objective is to capture the heart sounds corresponding to the valves, as these sounds are a source of critical information. Comparative investigations of MFCC features with varying numbers of HMM states and varying numbers of Gaussian mixtures were carried out to determine the impact of these features on classification at the heart sound auscultation sites. We employ a new strategy to evaluate and denoise the heart-sound and ECG signals in order to address these specific issues.
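As a minimal illustration of the denoising step (the abstract does not specify the actual method, so a simple moving-average smoother stands in here purely as a placeholder):

```python
def moving_average_denoise(signal, window=5):
    """Smooth a 1-D signal with a centered moving average.

    A placeholder denoiser: real heart-sound/ECG pipelines typically use
    band-pass or wavelet filtering, which this sketch does not implement.
    """
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```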
BGF-YOLO: Enhanced YOLOv8 with Multiscale Attentional Feature Fusion for Brain Tumor Detection
You Only Look Once (YOLO)-based object detectors have shown remarkable
accuracy for automated brain tumor detection. In this paper, we develop a novel
BGF-YOLO architecture by incorporating Bi-level Routing Attention (BRA),
generalized feature pyramid networks (GFPN), and a fourth detection head into
YOLOv8. BGF-YOLO contains an attention mechanism to focus more on important
features, and feature pyramid networks to enrich feature representation by
merging high-level semantic features with spatial details. Furthermore, we
investigate the effect of different attention mechanisms, feature fusions, and
detection head architectures on brain tumor detection accuracy. Experimental
results show that BGF-YOLO gives a 4.7% absolute increase of mAP
compared to YOLOv8x, and achieves state-of-the-art performance on the brain tumor detection
dataset Br35H. The code is available at https://github.com/mkang315/BGF-YOLO
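The core feature-pyramid idea — merging an upsampled high-level (semantic) map with a low-level (spatially detailed) map — can be sketched in pure Python (a conceptual toy on nested lists, not the GFPN or YOLOv8 implementation, which operates on learned tensors):

```python
def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling of a 2-D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def fuse(high_level, low_level):
    """Merge a coarse semantic map into a finer map by upsample-and-add,
    the basic operation behind feature pyramid networks."""
    up = upsample2x(high_level)
    return [[a + b for a, b in zip(r_up, r_low)]
            for r_up, r_low in zip(up, low_level)]
```

A learned pyramid replaces the element-wise add with convolutions and, in GFPN, adds cross-scale skip connections, but the resolution-matching step is the same.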
ASF-YOLO: A Novel YOLO Model with Attentional Scale Sequence Fusion for Cell Instance Segmentation
We propose a novel Attentional Scale Sequence Fusion based You Only Look Once
(YOLO) framework (ASF-YOLO) which combines spatial and scale features for
accurate and fast cell instance segmentation. Built on the YOLO segmentation
framework, we employ the Scale Sequence Feature Fusion (SSFF) module to enhance
the multi-scale information extraction capability of the network, and the
Triple Feature Encoder (TFE) module to fuse feature maps of different scales to
increase detailed information. We further introduce a Channel and Position
Attention Mechanism (CPAM) to integrate both the SSFF and TFE modules, which
focus on informative channels and spatial position-related small objects for
improved detection and segmentation performance. Experimental validations on
two cell datasets show remarkable segmentation accuracy and speed of the
proposed ASF-YOLO model. It achieves a box mAP of 0.91, mask mAP of 0.887, and
an inference speed of 47.3 FPS on the 2018 Data Science Bowl dataset,
outperforming the state-of-the-art methods. The source code is available at
https://github.com/mkang315/ASF-YOLO
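Channel attention of the kind CPAM builds on can be sketched as a squeeze-and-reweight step (a toy on nested lists; CPAM's actual gating is a learned network, not this parameter-free softmax over channel averages):

```python
import math

def channel_attention(feature_maps):
    """Reweight each channel by a softmax over its global average.

    feature_maps: list of 2-D channel maps (lists of rows).
    The 'squeeze' is global average pooling per channel; the 'excite'
    here is a fixed softmax, standing in for a learned gating network.
    """
    avgs = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_maps]
    exps = [math.exp(a) for a in avgs]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Scale every value in each channel by that channel's weight.
    return [[[w * v for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)]
```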
