2,318 research outputs found

    Social contagions on interdependent lattice networks

    Although an increasing amount of research addresses dynamical processes on interdependent spatial networks, little is known about how interdependence influences the social contagion dynamics unfolding on such networks. Here we present a novel non-Markovian social contagion model on interdependent spatial networks composed of two identical two-dimensional lattices. We compare the social contagion dynamics on networks with different fractions of dependency links and find that the density of final recovered nodes increases with the fraction of dependency links. Using a finite-size analysis method to identify the type of phase transition in the giant connected component (GCC) of the final adopted nodes, we find that as the fraction of dependency links increases, the phase transition switches from second-order to first-order. In strongly interdependent spatial networks with abundant dependency links, increasing the fraction of initially adopted nodes can switch the social contagion transition back from first-order to second-order; in networks with few dependency links, the phase transition remains second-order. In addition, both the second-order and first-order phase transition points can be lowered by increasing the fraction of dependency links or the number of initially adopted nodes. This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61501358 and 61673085) and the Fundamental Research Funds for the Central Universities.
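
    A minimal Python sketch of this kind of model is given below. The parameter names (L, q, rho0, lam, T, gamma) and the rule set are illustrative assumptions: threshold-based non-Markovian adoption on each lattice, an assumed "adopt when your dependency partner adopts" coupling, and simplified transmission bookkeeping (repeated transmissions along the same edge are allowed). It is a sketch of the general technique, not the authors' implementation.

    # Minimal sketch of a non-Markovian threshold contagion on two coupled
    # L x L lattices (illustrative assumptions, not the paper's code).
    import numpy as np

    rng = np.random.default_rng(0)

    L = 50            # lattice side length; each layer has N = L * L nodes
    q = 0.5           # fraction of nodes with an interlayer dependency link
    rho0 = 0.05       # fraction of initially adopted (seed) nodes per layer
    lam = 0.4         # per-contact transmission probability
    T = 3             # adoption threshold: cumulative pieces of information needed
    gamma = 0.2       # recovery probability per time step

    N = L * L
    state = np.zeros((2, N), dtype=int)      # 0 susceptible, 1 adopted, 2 recovered
    received = np.zeros((2, N), dtype=int)   # non-Markovian memory of received information
    dependent = rng.random(N) < q            # nodes coupled to the same index in the other layer

    for layer in range(2):                   # independent seeds in each layer
        seeds = rng.choice(N, size=int(rho0 * N), replace=False)
        state[layer, seeds] = 1

    def neighbors(i):
        # von Neumann neighbours on an L x L lattice with periodic boundaries
        x, y = divmod(i, L)
        return [((x - 1) % L) * L + y, ((x + 1) % L) * L + y,
                x * L + (y - 1) % L, x * L + (y + 1) % L]

    while (state == 1).any():
        # adopted nodes transmit information to susceptible lattice neighbours
        for layer in range(2):
            for i in np.where(state[layer] == 1)[0]:
                for j in neighbors(i):
                    if state[layer, j] == 0 and rng.random() < lam:
                        received[layer, j] += 1
        # susceptible nodes adopt once their accumulated information reaches the threshold
        state[(state == 0) & (received >= T)] = 1
        # assumed dependency rule: adoption propagates instantly across dependency links
        for layer in range(2):
            state[layer, (state[1 - layer] == 1) & dependent & (state[layer] == 0)] = 1
        # adopted nodes recover with probability gamma
        state[(state == 1) & (rng.random(state.shape) < gamma)] = 2

    print("final density of recovered nodes:", (state == 2).mean())

    Sweeping q and rho0 in such a sketch is the natural way to probe how the final recovered density and the type of transition depend on the dependency-link fraction and the seed fraction.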

    SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

    Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to the limited data of downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the downstream data and forget the knowledge of the pre-trained model. To address this issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning of pre-trained language models. Specifically, the proposed framework contains two important ingredients: (1) smoothness-inducing regularization, which effectively manages the capacity of the model, and (2) Bregman proximal point optimization, a class of trust-region methods that prevents knowledge forgetting. Our experiments demonstrate that the proposed method achieves state-of-the-art performance on multiple NLP benchmarks. Comment: The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).
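
    A minimal PyTorch-style sketch of the smoothness-inducing adversarial regularizer is given below. The function names, the eps and step_size defaults, and the assumption that model accepts inputs_embeds and attention_mask and returns classification logits are illustrative assumptions, not the paper's released code.

    # Sketch of a smoothness-inducing adversarial regularizer: perturb the input
    # embeddings, take one ascent step toward the worst-case perturbation inside
    # an eps-ball, and penalize the symmetrized KL divergence between clean and
    # perturbed predictions.
    import torch
    import torch.nn.functional as F

    def symmetric_kl(p_logits, q_logits):
        # symmetrized KL divergence between two categorical distributions given as logits
        p_log = F.log_softmax(p_logits, dim=-1)
        q_log = F.log_softmax(q_logits, dim=-1)
        p, q = p_log.exp(), q_log.exp()
        return (p * (p_log - q_log) + q * (q_log - p_log)).sum(-1).mean()

    def smoothness_regularizer(model, embeds, attention_mask, eps=1e-5, step_size=1e-3):
        # `model` is assumed to take `inputs_embeds` and `attention_mask` and return
        # classification logits (a Hugging Face style interface; an assumption here).
        with torch.no_grad():
            clean_logits = model(inputs_embeds=embeds, attention_mask=attention_mask)

        noise = (torch.randn_like(embeds) * eps).requires_grad_(True)
        adv_logits = model(inputs_embeds=embeds + noise, attention_mask=attention_mask)
        adv_loss = symmetric_kl(adv_logits, clean_logits)

        grad, = torch.autograd.grad(adv_loss, noise)
        # one projected ascent step (infinity-norm ball, a common choice)
        noise = (noise + step_size * grad.sign()).clamp(-eps, eps).detach()

        adv_logits = model(inputs_embeds=embeds + noise, attention_mask=attention_mask)
        return symmetric_kl(adv_logits, clean_logits)

    In a training step the regularizer would be added to the task loss with a weight, and the Bregman proximal point term can be approximated analogously by penalizing the divergence between the current model's outputs and those of a slowly updated copy of its own weights.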