54 research outputs found
Sliding Secure Symmetric Multilevel Diversity Coding
Symmetric multilevel diversity coding (SMDC) is a source coding problem where
the independent sources are ordered according to their importance. It was shown
that separately encoding the independent sources (referred to as
"superposition coding") is optimal. In this paper, we consider a
sliding secure SMDC problem with security priority, where each
source is kept perfectly secure as long as no more than a threshold number
of encoders are accessible. The reconstruction requirements of the
sources are the same as in classical SMDC. The special case of the
sliding secure SMDC problem in which the first sources are constants is
called the multilevel secret sharing problem. In one special case, the
two problems coincide, and we show that superposition coding is optimal. The
rate regions for the problems are characterized. It is shown that
superposition coding is suboptimal for both problems. The key idea by which
joint encoding reduces coding rates is that a previous source can be
used as the secret key for a later one. Based on this idea, we
propose a coding scheme that achieves the minimum sum rate of the general
multilevel secret sharing problem. Moreover, superposition coding of
appropriate sets of sources achieves the minimum sum rate of the general
sliding secure SMDC problem.
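The secret-key idea above can be illustrated with a toy one-time-pad sketch in Python; the byte-string "sources" and function names here are hypothetical illustrations, not the paper's actual construction:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (one-time-pad style masking)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical setup: an earlier source, already protected by the scheme,
# doubles as the secret key for a later source, so no fresh key is needed.
key_source = secrets.token_bytes(16)   # stands in for the previous source
later_source = b"later source msg"     # 16 bytes, the source to protect

ciphertext = xor_bytes(later_source, key_source)

# Any decoder that can recover the earlier source can also decrypt;
# without it, the ciphertext alone reveals nothing (one-time-pad secrecy).
assert xor_bytes(ciphertext, key_source) == later_source
```

The XOR mask is the standard information-theoretic secret-key primitive; reusing an already-encoded source as the key is what lets joint encoding beat separate superposition coding in sum rate.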
A computational approach for determining rate regions and codes using entropic vector bounds
The Tradeoff Between Privacy and Accuracy in Anomaly Detection Using Federated XGBoost
Privacy has raised considerable concern recently, especially with the advent
of the information explosion and the numerous data mining techniques used to
explore the information inside large volumes of data. In this context, a new
distributed learning paradigm termed federated learning has recently become prominent to
tackle the privacy issues in distributed learning, where only learning models
will be transmitted from the distributed nodes to servers without revealing
users' own data, hence protecting the privacy of users.
In this paper, we propose a horizontal federated XGBoost algorithm to solve
the federated anomaly detection problem, where the anomaly detection aims to
identify abnormalities in extremely unbalanced datasets and can be considered
a special classification problem. Our proposed federated XGBoost algorithm
incorporates data aggregation and sparse federated update processes to balance
the tradeoff between privacy and learning performance. In particular, we
introduce the virtual data sample by aggregating a group of users' data
together at a single distributed node. We compute parameters based on these
virtual data samples in the local nodes and aggregate the learning model in the
central server. In the model updating process, we focus more on the
previously misclassified data within the virtual samples, which yields
sparse learning-model parameters. By carefully controlling the size of these
groups of samples, we can achieve a tradeoff between privacy and learning
performance. Our experimental results show the effectiveness of our proposed
scheme in comparison with existing state-of-the-art methods.
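The virtual-sample construction described above can be sketched as follows; the function name, the use of simple averaging over consecutive user rows, and the group size are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def make_virtual_samples(X: np.ndarray, y: np.ndarray, group_size: int):
    """Aggregate consecutive groups of user rows into 'virtual' samples by
    averaging features and labels. Larger groups reveal less about any
    individual user but blur the training signal, which is the
    privacy/accuracy tradeoff the abstract describes."""
    n = (len(X) // group_size) * group_size  # drop the incomplete last group
    Xg = X[:n].reshape(-1, group_size, X.shape[1]).mean(axis=1)
    yg = y[:n].reshape(-1, group_size).mean(axis=1)
    return Xg, yg

# Hypothetical local-node data: 100 users, 5 features, ~10% anomalies.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (rng.random(100) < 0.1).astype(float)

Xv, yv = make_virtual_samples(X, y, group_size=5)
assert Xv.shape == (20, 5) and yv.shape == (20,)
```

In a full pipeline, each distributed node would compute its XGBoost gradient statistics on these virtual samples and send only those statistics (not raw rows) to the central server; tuning `group_size` then trades privacy against learning performance.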
