481 research outputs found

    Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning

    Get PDF
    Abstract Given a model well-trained on a large-scale base dataset, few-shot class-incremental learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while avoiding overfitting, without catastrophically forgetting previously encountered classes. The semi-supervised learning technique, which harnesses freely available unlabeled data to compensate for limited labeled data, boosts performance in numerous vision tasks and can heuristically be applied to tackle issues in FSCIL, i.e., semi-supervised FSCIL (Semi-FSCIL). So far, very little work has focused on the Semi-FSCIL task, leaving the adaptability of semi-supervised learning to the FSCIL task unresolved. In this article, we focus on this adaptability issue and present a simple yet efficient Semi-FSCIL framework named uncertainty-aware distillation with class-equilibrium (UaD-ClE), encompassing two modules: uncertainty-aware distillation (UaD) and class equilibrium (ClE). Specifically, when incorporating unlabeled data into each incremental session, we introduce the ClE module, which employs class-balanced self-training (CB_ST) to avoid the gradual dominance of easy-to-classify classes in pseudo-label generation. To distill reliable knowledge from the reference model, we further implement the UaD module, which combines uncertainty-guided knowledge refinement with adaptive distillation. Comprehensive experiments on three benchmark datasets demonstrate that our method boosts the adaptability of unlabeled data with the semi-supervised learning technique in FSCIL tasks. The code is available at https://github.com/yawencui/UaD-ClE
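
    As a rough illustration of the uncertainty-guided refinement idea described in the abstract, the sketch below keeps only pseudo-labels whose predictive entropy is low. The function names and the fixed threshold are illustrative assumptions, not the authors' UaD implementation.

    ```python
    import numpy as np

    def entropy(probs):
        """Shannon entropy of each row of softmax probabilities."""
        return -np.sum(probs * np.log(probs + 1e-12), axis=1)

    def refine_pseudo_labels(teacher_probs, threshold):
        """Keep only pseudo-labels whose predictive entropy is below threshold."""
        uncertainty = entropy(teacher_probs)
        keep = uncertainty < threshold
        labels = teacher_probs.argmax(axis=1)
        return labels[keep], keep

    # A confident prediction passes; a near-uniform one is filtered out.
    probs = np.array([[0.97, 0.02, 0.01],
                      [1/3, 1/3, 1/3]])
    labels, keep = refine_pseudo_labels(probs, threshold=0.5)
    ```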

    System quality, information quality, satisfaction and acceptance of online learning platform among college students in the context of online learning and blended learning

    Get PDF
    The paper is based on the User Satisfaction and Technology Acceptance Integration Theory (USATA). The authors analyzed the factors that affect college students' acceptance of and satisfaction with online learning platforms, as well as differences in the relationships among these factors between blended learning and online learning scenarios. The results showed that platform quality and information quality affect user satisfaction; satisfaction affects perceived usefulness and ease of use, which in turn affect attitude and intention. A comparison between the two groups showed significant differences in the impact of information quality on information satisfaction and in the impact of perceived usefulness on usage intention. In the online learning scenario, the endogenous latent variables of the model had higher explanatory power, indicating that learners depend more on the quality and related characteristics of the learning platform in that scenario

    Transferable Discriminative Feature Mining For Unsupervised Domain Adaptation

    Get PDF
    Abstract Unsupervised Domain Adaptation (UDA) aims to learn an effective model for an unlabeled target domain by leveraging knowledge from a labeled source domain with a related but different distribution. Many existing approaches ignore the underlying discriminative features of the target data and the discrepancy of conditional distributions. To address these two issues simultaneously, this paper presents a Transferable Discriminative Feature Mining (TDFM) approach for UDA, which naturally unifies the mining of domain-invariant discriminative features and the alignment of class-wise features in a single framework. To be specific, to achieve domain-invariant discriminative features, TDFM jointly learns a shared encoding representation for two tasks: supervised classification of labeled source data and discriminative clustering of unlabeled target data. It then conducts class-wise alignment by decreasing intra-class variations and increasing inter-class differences across domains, encouraging the emergence of transferable discriminative features. When combined, these two procedures are mutually beneficial. Comprehensive experiments verify that TDFM obtains remarkable margins over state-of-the-art domain adaptation methods
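
    The class-wise alignment objective (shrink intra-class variation, enlarge inter-class differences) can be sketched as a toy centroid-based loss. The `margin` hyperparameter and the exact loss form are illustrative assumptions, not TDFM's actual formulation.

    ```python
    import numpy as np

    def class_alignment_loss(feats, labels, margin=1.0):
        """Toy alignment objective: pull samples toward their class centroid
        (intra-class compactness) and penalize centroid pairs closer than
        `margin` (inter-class separation)."""
        classes = np.unique(labels)
        centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
        intra = np.mean([np.sum((feats[labels == c] - centroids[i]) ** 2)
                         for i, c in enumerate(classes)])
        inter = 0.0
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                d = np.linalg.norm(centroids[i] - centroids[j])
                inter += max(0.0, margin - d) ** 2
        return intra + inter

    labels = np.array([0, 0, 1, 1])
    separated = class_alignment_loss(
        np.array([[0.0, 0], [0.1, 0], [5.0, 0], [5.1, 0]]), labels)
    overlapping = class_alignment_loss(
        np.array([[0.0, 0], [0.1, 0], [0.2, 0], [0.3, 0]]), labels)
    ```

    Well-separated, compact classes yield a lower loss than overlapping ones, which is the behavior the alignment step rewards.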

    Deep Ladder-Suppression Network for Unsupervised Domain Adaptation

    Get PDF
    Abstract Unsupervised domain adaptation (UDA) aims at learning a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. Most existing approaches learn domain-invariant features by adapting the entire information of the images. However, forcing adaptation of domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel yet elegant module, called the deep ladder-suppression network (DLSN), which is designed to better learn the cross-domain shared content by suppressing domain-specific variations. Our proposed DLSN is an autoencoder with lateral connections from the encoder to the decoder. By this design, the domain-specific details, which are only necessary for reconstructing the unlabeled target data, are fed directly to the decoder to complete the reconstruction task, relieving the pressure of learning domain-specific variations at the later layers of the shared encoder. As a result, DLSN allows the shared encoder to focus on learning cross-domain shared content and to ignore the domain-specific variations. Notably, the proposed DLSN can be used as a standard module integrated with various existing UDA frameworks to further boost performance. Without bells and whistles, extensive experimental results on four gold-standard domain adaptation datasets, namely 1) Digits, 2) Office31, 3) Office-Home, and 4) VisDA-C, demonstrate that the proposed DLSN consistently and significantly improves the performance of various popular UDA frameworks
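
    A minimal numpy forward pass can illustrate the lateral-connection idea: the decoder receives early encoder features directly, so the deep shared code need not carry reconstruction detail. All layer sizes and weights here are arbitrary placeholders, not the DLSN architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    # Toy two-layer encoder; the decoder gets a lateral (skip) connection
    # from the first encoder layer, so low-level, domain-specific detail
    # can bypass the deeper shared representation.
    W1 = rng.normal(size=(8, 16))   # encoder layer 1
    W2 = rng.normal(size=(16, 4))   # encoder layer 2 (shared code)
    U2 = rng.normal(size=(4, 16))   # decoder layer
    U1 = rng.normal(size=(32, 8))   # output layer (takes decoder + skip)

    def forward(x):
        h1 = relu(x @ W1)           # early, detail-rich features
        z = relu(h1 @ W2)           # deep shared code (kept domain-invariant)
        d2 = relu(z @ U2)
        # Lateral connection: concatenate skip features with decoder features,
        # so reconstruction does not force detail into the shared code z.
        recon = np.concatenate([d2, h1], axis=-1) @ U1
        return z, recon

    x = rng.normal(size=(2, 8))
    z, recon = forward(x)
    ```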

    Cross Domain Object Detection via Multi-Granularity Confidence Alignment based Mean Teacher

    Full text link
    Cross domain object detection learns an object detector for an unlabeled target domain by transferring knowledge from an annotated source domain. Promising results have been achieved via Mean Teacher; however, pseudo labeling, the bottleneck of mutual learning, remains to be further explored. In this study, we find that confidence misalignment of the predictions, including category-level overconfidence, instance-level task confidence inconsistency, and image-level confidence misfocusing, injects noisy pseudo labels into the training process and brings suboptimal performance on the target domain. To tackle this issue, we present a novel general framework termed Multi-Granularity Confidence Alignment Mean Teacher (MGCAMT) for cross domain object detection, which alleviates confidence misalignment at the category, instance, and image levels simultaneously to obtain high-quality pseudo supervision for better teacher-student learning. Specifically, to align confidence with accuracy at the category level, we propose Classification Confidence Alignment (CCA), which models category uncertainty based on Evidential Deep Learning (EDL) and filters out incorrect category labels via an uncertainty-aware selection strategy. Furthermore, to mitigate the instance-level misalignment between classification and localization, we design Task Confidence Alignment (TCA) to enhance the interaction between the two task branches and allow each classification feature to adaptively locate the optimal feature for regression. Finally, we develop image-level Focusing Confidence Alignment (FCA), which adopts another way of pseudo-label learning: we use the original outputs from the Mean Teacher network for supervised learning without label assignment, to concentrate on holistic information in the target image. These three procedures benefit from each other from a cooperative learning perspective
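
    The Mean Teacher backbone referred to above maintains the teacher as an exponential moving average (EMA) of the student. A minimal sketch of that update (parameter names illustrative):

    ```python
    import numpy as np

    def ema_update(teacher, student, momentum=0.999):
        """Exponential-moving-average update of teacher weights from the
        student: the core mechanism of Mean Teacher self-training."""
        return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
                for k in teacher}

    teacher = {"w": np.zeros(3)}
    student = {"w": np.ones(3)}
    # With momentum 0.9, the teacher moves 10% of the way toward the student.
    teacher = ema_update(teacher, student, momentum=0.9)
    ```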

    Informative Class-Conditioned Feature Alignment for Unsupervised Domain Adaptation

    Get PDF
    Abstract The goal of unsupervised domain adaptation (UDA) is to learn a task classifier that performs well on the unlabeled target domain by borrowing rich knowledge from a well-labeled source domain. Although remarkable breakthroughs have been achieved in learning transferable representations across domains, two bottlenecks remain to be further explored. First, many existing approaches focus primarily on the adaptation of the entire image, ignoring the limitation that not all features are transferable and informative for the object classification task. Second, the features of the two domains are typically aligned without considering the class labels; this can lead the resulting representations to be domain-invariant but non-discriminative with respect to category. To overcome these two issues, we present a novel Informative Class-Conditioned Feature Alignment (IC2FA) approach for UDA, which utilizes a twofold method: informative feature disentanglement and class-conditioned feature alignment, designed to address the two challenges, respectively. More specifically, to surmount the first drawback, we cooperatively disentangle the two domains to obtain informative transferable features; here, a Variational Information Bottleneck (VIB) is employed to encourage the learning of task-related semantic representations and suppress task-unrelated information. With regard to the second bottleneck, we optimize a new metric, termed Conditional Sliced Wasserstein Distance (CSWD), which explicitly estimates the intra-class discrepancy and the inter-class margin. The intra-class and inter-class CSWDs are minimized and maximized, respectively, to yield domain-invariant discriminative features. IC2FA equips class-conditioned feature alignment with informative feature disentanglement and causes the two procedures to work cooperatively, which facilitates the adaptation of informative discriminative features. Extensive experimental results on three domain adaptation datasets confirm the superiority of IC2FA
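
    CSWD builds on the sliced Wasserstein distance. Below is a minimal Monte-Carlo sketch of the plain (unconditional) version, assuming equal-sized point clouds; the class-conditional variant of the paper would apply this per class. This is an illustrative building block, not the paper's metric.

    ```python
    import numpy as np

    def sliced_wasserstein(x, y, n_proj=64, rng=None):
        """Monte-Carlo sliced 2-Wasserstein distance between two point
        clouds of equal size: project both onto random 1-D directions,
        sort, and average squared differences of sorted projections."""
        rng = rng or np.random.default_rng(0)
        d = x.shape[1]
        total = 0.0
        for _ in range(n_proj):
            theta = rng.normal(size=d)
            theta /= np.linalg.norm(theta)          # random unit direction
            px, py = np.sort(x @ theta), np.sort(y @ theta)
            total += np.mean((px - py) ** 2)        # 1-D W2^2 via sorting
        return total / n_proj

    rng = np.random.default_rng(1)
    x = rng.normal(size=(16, 2))
    same = sliced_wasserstein(x, x)           # identical clouds -> 0
    shifted = sliced_wasserstein(x, x + 5.0)  # shifted cloud -> large
    ```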

    Deep ladder reconstruction-classification network for unsupervised domain adaptation

    Get PDF
    Abstract Unsupervised domain adaptation aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain. Most existing approaches learn domain-invariant features by adapting the entire information of each image. However, forcing adaptation of domain-specific components can undermine the effectiveness of the learned features. We propose a novel architecture called the Deep Ladder Reconstruction-Classification Network (DLaReC), which is designed to learn cross-domain shared content by suppressing domain-specific variations. DLaReC adopts a cross-domain shared encoder and a target-domain reconstruction decoder, connected by residual shortcuts at each intermediate layer. By this means, the domain-specific components are fed directly to the decoder for reconstruction, relieving the pressure to learn domain-specific variations at later layers of the shared encoder. Therefore, DLaReC allows the encoder to focus on learning cross-domain shared representations and to ignore domain-specific variations. DLaReC is implemented by jointly learning three tasks: supervised classification of the source domain, unsupervised reconstruction of the target domain, and cross-domain shared representation adaptation. Extensive experiments on the Digit, Office31, ImageCLEF-DA and Office-Home datasets demonstrate that DLaReC outperforms state-of-the-art methods overall. The average accuracy on the Digit datasets, for instance, improves from 95.6% to 96.9%. In addition, the result on Amazon → Webcam obtains a significant improvement, from 91.1% to 94.7%

    Markov bidirectional transfer matrix for detecting LSB speech steganography with low embedding rates

    Get PDF
    Steganalysis at low embedding rates is still a challenge in the field of information hiding. Speech signals are typically processed by wavelet packet decomposition, which is capable of depicting the details of signals with high accuracy. A steganography detection algorithm is proposed based on the Markov bidirectional transition matrix (MBTM) of the wavelet packet coefficients (WPC) of the second-order-derivative-based speech signal. On the basis of the MBTM feature, which better expresses the correlation of the WPC, a Support Vector Machine (SVM) classifier is trained on a large amount of Least Significant Bit (LSB) hidden data with embedding rates of 1%, 3%, 5%, 8%, 10%, 30%, 50%, and 80%. LSB matching steganalysis of speech signals with low embedding rates is thus achieved. The experimental results show that the proposed method is clearly superior for steganalysis at low embedding rates compared with the classic method using histogram moment features in the frequency domain (HMIFD) of the second-order-derivative-based WPC and second-order-derivative-based Mel-frequency cepstral coefficients (MFCC). In particular, when the embedding rate is only 3%, the accuracy rate improves by 17.8%, reaching 68.5%, in comparison with the method using HMIFD features of the second-derivative WPC. Detection accuracy improves as the embedding rate increases
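
    A first-order Markov transition matrix over quantized coefficients can be sketched as follows. The bidirectional variant here simply concatenates forward and reverse transition matrices; it is an illustrative reading of the MBTM feature, not the paper's exact definition.

    ```python
    import numpy as np

    def transition_matrix(seq, n_states):
        """First-order Markov transition matrix of a quantized coefficient
        sequence: T[i, j] = P(next = j | current = i)."""
        T = np.zeros((n_states, n_states))
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
        row = T.sum(axis=1, keepdims=True)
        # Normalize rows; rows with no outgoing transitions stay zero.
        return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

    def bidirectional_features(seq, n_states):
        """Concatenate forward and backward transition matrices into one
        feature vector, in the spirit of a bidirectional transition feature
        (fed to an SVM in the paper's pipeline)."""
        fwd = transition_matrix(seq, n_states)
        bwd = transition_matrix(seq[::-1], n_states)
        return np.concatenate([fwd.ravel(), bwd.ravel()])

    seq = [0, 1, 0, 1]          # strictly alternating states
    T = transition_matrix(seq, 2)
    feat = bidirectional_features(seq, 2)
    ```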

    Innovative surgical and stress-stimulated rat model of ligamentum flavum hypertrophy

    Get PDF
    Background and purpose: Animal models of ligamentum flavum hypertrophy (LFH) are still in the exploratory stage. This study aimed to establish a reliable, efficient, and economical rat model of LFH for the study of human ligamentum flavum (LF) pathological mechanisms, drug screening and development, improvement of surgical treatment, disease prevention, and other aspects. Methods and materials: Forty rats were divided into an experimental group (n = 20) and a sham group (n = 20). The experimental group underwent an innovative operation combined with stress stimulation at the L5-L6 segments: the L5 and L6 spinous processes, transverse processes, and supraspinous ligaments were excised, along with the paraspinal muscles at the L5-L6 level. One week after surgery, these rats were subjected to slow treadmill running daily. The sham group received a sham operation only. Seven weeks later, MRI, immunohistochemistry (IHC), and western blot (WB) were performed on the LF of the L5-L6 segment in both groups. Results: MRI showed that the LF in the experimental group was significantly thicker than in the sham group. Masson staining indicated that LF thickness, collagen fiber area, and collagen volume fraction (CVF) were significantly higher in the experimental group. IHC and WB showed that the expression of TGF-β1, COL1, and IL-1β in the LF of the experimental group was significantly higher than in the sham group. Conclusion: Through innovative surgical intervention combined with stress stimulation, a relatively reliable, efficient, and convenient rat LFH model was established

    Yeast Probiotics Shape the Gut Microbiome and Improve the Health of Early-Weaned Piglets

    Get PDF
    Weaning is one of the most stressful challenges in a pig's life; it contributes to dysfunction of the intestinal and immune systems, disrupts the gut microbial ecosystem, and thereby compromises the growth performance and health of piglets. To mitigate the negative impact of this stress on early-weaned piglets, effective measures are needed to promote gut health. Toward this end, we tamed a Saccharomyces cerevisiae strain and developed the probiotic Duan-Nai-An, a yeast culture of the tamed S. cerevisiae on egg white. In this study, we tested the effect of Duan-Nai-An on the growth and health of early-weaned piglets and analyzed its impact on the fecal microbiota. The results showed that Duan-Nai-An significantly improved weight gain and feed intake and reduced diarrhea and death in early-weaned piglets. Analysis of the gut microbiota showed that the bacterial community was shaped by Duan-Nai-An and maintained a relatively stable structure, represented by a higher core OTU number and lower unweighted UniFrac distances across the early-weaned period. However, the fungal community was not significantly shaped by the yeast probiotic. Notably, 13 bacterial genera were found to be associated with Duan-Nai-An feeding, including Enterococcus, Succinivibrio, Ruminococcus, Sharpea, Desulfovibrio, RFN20, Sphaerochaeta, Peptococcus, Anaeroplasma, and four other undefined genera. These findings suggest that Duan-Nai-An has the potential to be used as a feed supplement in swine production