54 research outputs found

    Sliding Secure Symmetric Multilevel Diversity Coding

    Full text link
    Symmetric multilevel diversity coding (SMDC) is a source coding problem in which independent sources are ordered according to their importance. It was shown that separately encoding the independent sources (referred to as "superposition coding") is optimal. In this paper, we consider an (L, s) sliding secure SMDC problem with security priority, where each source X_α (s ≤ α ≤ L) is kept perfectly secure if no more than α − s encoders are accessible. The reconstruction requirements of the L sources are the same as in classical SMDC. The special case of the (L, s) sliding secure SMDC problem in which the first s − 1 sources are constants is called the (L, s) multilevel secret sharing problem. For s = 1, the two problems coincide, and we show that superposition coding is optimal. The rate regions for the (3, 2) problems are characterized, and superposition coding is shown to be suboptimal for both. The key idea behind the rate reduction from joint encoding is that the previous source X_{α−1} can serve as the secret key for X_α. Based on this idea, we propose a coding scheme that achieves the minimum sum rate of the general (L, s) multilevel secret sharing problem. Moreover, superposition coding of the s sets of sources X_1, X_2, …, X_{s−1}, (X_s, X_{s+1}, …, X_L) achieves the minimum sum rate of the general sliding secure SMDC problem.
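    The secret-key idea in the abstract above can be illustrated with a minimal one-time-pad sketch. This is purely illustrative, assuming equal-length uniformly random sources; the variable names and the XOR construction are assumptions for exposition, not the paper's actual coding scheme.

    ```python
    # Illustration only: the previous source X_{alpha-1} acts as a one-time pad
    # for X_alpha, so an observer who sees only the encoded output (and not
    # X_{alpha-1}) learns nothing about X_alpha.
    import secrets

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        """Bytewise XOR of two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    # x_prev plays the role of X_{alpha-1}; x_cur of X_alpha (equal lengths assumed).
    x_prev = secrets.token_bytes(16)
    x_cur = secrets.token_bytes(16)

    encoded = xor_bytes(x_cur, x_prev)         # encode X_alpha keyed by X_{alpha-1}
    recovered = xor_bytes(encoded, x_prev)     # a decoder knowing X_{alpha-1} recovers X_alpha
    assert recovered == x_cur
    ```

    Because the "key" X_{α−1} must itself be reconstructed by legitimate decoders anyway, reusing it as a pad adds security without a separate key rate, which is the intuition for why joint encoding can beat superposition coding here.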

    A computational approach for determining rate regions and codes using entropic vector bounds

    Full text link

    The Tradeoff Between Privacy and Accuracy in Anomaly Detection Using Federated XGBoost

    Full text link
    Privacy has recently raised considerable concerns, especially with the advent of the information explosion and the many data mining techniques used to explore the information inside large volumes of data. In this context, a new distributed learning paradigm termed federated learning has become prominent as a way to tackle privacy issues in distributed learning: only learning models are transmitted from the distributed nodes to servers, without revealing users' own data, hence protecting user privacy. In this paper, we propose a horizontal federated XGBoost algorithm for the federated anomaly detection problem, where anomaly detection aims to identify abnormalities in extremely unbalanced datasets and can be considered a special classification problem. Our proposed federated XGBoost algorithm incorporates data aggregation and sparse federated update processes to balance the tradeoff between privacy and learning performance. In particular, we introduce the virtual data sample, formed by aggregating a group of users' data together at a single distributed node. We compute parameters based on these virtual data samples in the local nodes and aggregate the learning model in the central server. In the model update process, we focus more on the previously misclassified data within the virtual samples, thereby generating sparse learning-model parameters. By carefully controlling the size of these groups of samples, we can achieve a tradeoff between privacy and learning performance. Our experimental results show the effectiveness of our proposed scheme in comparison with existing state-of-the-art methods.
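    The virtual-data-sample step described above can be sketched as follows. This is a minimal illustration under assumed conventions (averaging consecutive groups of records; the function name and data layout are hypothetical), not the paper's implementation: raw records stay at the node, and only group aggregates feed the learner, with the group size k controlling the privacy/fidelity tradeoff.

    ```python
    # Sketch: form "virtual data samples" by averaging groups of k raw
    # (features, label) records at a local node. Larger k means each virtual
    # sample reveals less about any individual record, at some cost in accuracy.
    from statistics import mean

    def make_virtual_samples(records, k):
        """Average each consecutive group of k (features, label) records
        into one virtual sample; raw records never leave the node."""
        virtual = []
        for i in range(0, len(records), k):
            group = records[i:i + k]
            feats = [mean(col) for col in zip(*(f for f, _ in group))]
            label = mean(lbl for _, lbl in group)
            virtual.append((feats, label))
        return virtual

    records = [([1.0, 2.0], 0), ([3.0, 4.0], 1), ([5.0, 6.0], 0), ([7.0, 8.0], 1)]
    print(make_virtual_samples(records, 2))
    # With k=2, four raw records collapse to two virtual samples:
    # [([2.0, 3.0], 0.5), ([6.0, 7.0], 0.5)]
    ```

    In the actual scheme, gradient and Hessian statistics for XGBoost would be computed over such aggregates before being sent to the central server.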

    On Multi-source Multi-sink Hyperedge Networks

    No full text

    On Rate Region of Caching Problems With Non-Uniform File and Cache Sizes

    Full text link

    Asymmetric Multilevel Diversity Coding Systems With Perfect Secrecy

    Full text link

    YO-SLAM: A Robust Visual SLAM towards Dynamic Environments

    No full text