Generalization of Mrs. Gerber's Lemma
Mrs. Gerber's Lemma (MGL) hinges on the convexity of $h(p \ast h^{-1}(u))$ in $u$, where
$h(\cdot)$ is the binary entropy function and $a \ast b = a(1-b) + b(1-a)$. In this work, we prove
that $h(p \ast f(u))$ is convex in $u$ for every $p \in [0, 1/2]$ provided $h(f(u))$ is convex in $u$,
where $f(u)\colon [a, b] \to [0, 1/2]$. Moreover, our result subsumes MGL and simplifies the
original proof. We show that the generalized MGL can be applied to the binary broadcast channel
to simplify some of the discussion.
Comment: Accepted by Communications in Information and Systems
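For reference, a minimal restatement of the convexity fact behind the classical MGL, in the notation above (the generalization replaces $h^{-1}$ by a suitable $f$):
\[
  u \;\mapsto\; h\!\left(p \ast h^{-1}(u)\right) \quad \text{is convex on } [0, 1] \text{ for every fixed } p,
\]
which, combined with Jensen's inequality, gives MGL: if $Y = X \oplus Z_p$ with $Z_p \sim \mathrm{Bern}(p)$ independent of $(X, U)$ and $H(X \mid U) \ge v$, then $H(Y \mid U) \ge h(p \ast h^{-1}(v))$.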
Higher Order Derivatives in Costa's Entropy Power Inequality
Let $X$ be an arbitrary continuous random variable and $Z$ be an independent
Gaussian random variable with zero mean and unit variance. For $t > 0$, Costa
proved that $e^{2h(X+\sqrt{t}Z)}$ is concave in $t$, where the proof hinged on
the first and second order derivatives of $h(X+\sqrt{t}Z)$. Specifically, these
two derivatives are signed, i.e., $\frac{\partial}{\partial t} h(X+\sqrt{t}Z) \ge 0$ and
$\frac{\partial^2}{\partial t^2} h(X+\sqrt{t}Z) \le 0$. In this
paper, we show that the third order derivative of $h(X+\sqrt{t}Z)$ is
nonnegative, which implies that the Fisher information $J(X+\sqrt{t}Z)$ is
convex in $t$. We further show that the fourth order derivative of
$h(X+\sqrt{t}Z)$ is nonpositive. Following the first four derivatives, we make
two conjectures on $h(X+\sqrt{t}Z)$: the first is that $\frac{\partial^n}{\partial t^n} h(X+\sqrt{t}Z)$
is nonnegative in $t$ if $n$ is odd, and nonpositive otherwise; the second is that $\log J(X+\sqrt{t}Z)$ is
convex in $t$. The first conjecture can be rephrased in the context of
completely monotone functions: $\frac{\partial}{\partial t} h(X+\sqrt{t}Z)$ is completely monotone in $t$.
The history of the first conjecture may date back to a problem in mathematical
physics studied by McKean in 1966. Apart from these results, we provide a
geometrical interpretation of the covariance-preserving transformation and
study the concavity of $h(\sqrt{t}X + \sqrt{1-t}Z)$, revealing its connection
with Costa's EPI.
Comment: Second version submitted. https://sites.google.com/site/chengfancuhk
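As a quick sanity check on this sign pattern (not from the paper; just the Gaussian special case worked out directly), take $X \sim \mathcal{N}(0, \sigma^2)$, so that $h(X+\sqrt{t}Z) = \tfrac{1}{2}\log\!\big(2\pi e(\sigma^2 + t)\big)$ and
\[
  \frac{\partial^n}{\partial t^n}\, h\!\left(X+\sqrt{t}Z\right)
  \;=\; \frac{(-1)^{n-1}(n-1)!}{2\,(\sigma^2 + t)^{n}}, \qquad n \ge 1,
\]
which is nonnegative for odd $n$ and nonpositive for even $n$, and $\frac{\partial}{\partial t} h = \frac{1}{2(\sigma^2 + t)}$ is completely monotone in $t$, consistent with the first conjecture; likewise $\log J(X+\sqrt{t}Z) = -\log(\sigma^2 + t)$ is convex in $t$, consistent with the second.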
Asymmetry Helps: Eigenvalue and Eigenvector Analyses of Asymmetrically Perturbed Low-Rank Matrices
This paper is concerned with the interplay between statistical asymmetry and
spectral methods. Suppose we are interested in estimating a rank-1 and
symmetric matrix $\mathbf{M}^{\star} \in \mathbb{R}^{n \times n}$, yet only a
randomly perturbed version $\mathbf{M} = \mathbf{M}^{\star} + \mathbf{H}$ is observed. The noise matrix
$\mathbf{H}$ is composed of zero-mean independent (but not
necessarily homoscedastic) entries and is, therefore, not symmetric in general.
This might arise, for example, when we have two independent samples for each
entry of $\mathbf{M}^{\star}$ and arrange them into an asymmetric data
matrix $\mathbf{M}$. The aim is to estimate the leading eigenvalue and
eigenvector of $\mathbf{M}^{\star}$. We demonstrate that the leading eigenvalue
of the data matrix $\mathbf{M}$ can be $\sqrt{n}$ times more accurate --- up
to some log factor --- than its (unadjusted) leading singular value in
eigenvalue estimation. Further, the perturbation of any linear form of the
leading eigenvector of $\mathbf{M}$ --- say, the entrywise eigenvector perturbation
--- is provably well-controlled. This eigen-decomposition approach is fully
adaptive to heteroscedasticity of the noise without the need for careful bias
correction or any prior knowledge about the noise variance. We also provide
partial theory for the more general rank-$r$ case. The takeaway message is
this: arranging the data samples in an asymmetric manner and performing
eigen-decomposition could sometimes be beneficial.
Comment: accepted to Annals of Statistics, 2020. 37 pages
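A minimal numerical illustration of this takeaway (my own sketch, not the paper's code; the dimension, signal strength, and noise level below are arbitrary choices): build a rank-1 symmetric signal, add independent asymmetric noise, and compare the leading eigenvalue of the asymmetric data matrix with its leading singular value as estimators of the true eigenvalue.

    import numpy as np

    rng = np.random.default_rng(0)
    n, lam, sigma = 1000, 15.0, 0.1           # dimension, true eigenvalue, noise level (assumed values)

    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    M_star = lam * np.outer(u, u)             # rank-1 symmetric signal M*

    H = sigma * rng.standard_normal((n, n))   # zero-mean independent noise, NOT symmetrized
    M = M_star + H                            # observed asymmetric data matrix

    eig = np.linalg.eigvals(M)
    lam_eig = eig[np.argmax(eig.real)].real                 # leading eigenvalue of asymmetric M
    lam_svd = np.linalg.svd(M, compute_uv=False)[0]         # leading (unadjusted) singular value of M

    print(f"eigenvalue error:     {abs(lam_eig - lam):.3f}")
    print(f"singular value error: {abs(lam_svd - lam):.3f}")

With settings like these, the eigenvalue error is typically much smaller than the singular-value error, because the singular value carries a systematic upward bias of order $\|\mathbf{H}\|^2 / \lambda$ that the eigenvalue of the asymmetric matrix largely avoids.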
d+id' Chiral Superconductivity in Bilayer Silicene
We investigate the structure and physical properties of undoped bilayer
silicene through first-principles calculations and find that the system is
intrinsically metallic with sizable pocket Fermi surfaces. When realistic
electron-electron interactions are turned on, the system is identified as a chiral
d+id' topological superconductor, mediated by strong spin fluctuations on the
border of the antiferromagnetic spin density wave order. Moreover, because the
Fermi pocket area is tunable via strain, the critical interaction strength of the
spin density wave can be brought close to the realistic one, which enables a high
superconducting critical temperature.
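For readers unfamiliar with the label, the chiral d+id' state refers to the standard complex combination of the two d-wave form factors (this is generic textbook notation, not a formula taken from this paper):
\[
  \Delta(\mathbf{k}) \;\propto\; \Delta_{d_{x^2-y^2}}(\mathbf{k}) \;+\; i\,\Delta_{d_{xy}}(\mathbf{k}),
\]
which breaks time-reversal symmetry and is fully gapped on a generic Fermi surface, hence "chiral" and "topological".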
Compressing Inertial Motion Data in Wireless Sensing Systems – An Initial Experiment
The use of wireless inertial motion sensors, such as accelerometers, for supporting medical care and sports training has been under investigation in recent years. As the number of sensors (or their sampling rates) increases, compressing data at the source, i.e. at the sensors, so as to reduce the quantity of data that must be transmitted between the on-body sensors and the remote repository, becomes essential, especially in a bandwidth-limited wireless environment. This paper presents a set of compression experiment results on inertial motion data collected during running exercises. As a starting point, we selected a set of common compression algorithms to experiment with. Our results show that conventional lossy compression algorithms can achieve a desirable compression ratio with an acceptable time delay. The results also show that the quality of the decompressed data is within an acceptable range.
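As a hedged illustration of the kind of trade-off being measured (the abstract does not name its algorithms, so the quantize-then-deflate scheme and all parameters below are my own assumptions, not the paper's method): compress a synthetic 3-axis accelerometer trace lossily and report compression ratio and reconstruction error.

    import zlib
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic 3-axis accelerometer trace (100 Hz, 60 s) standing in for real running data.
    t = np.arange(0, 60, 0.01)
    accel = np.stack(
        [np.sin(2 * np.pi * 3 * t + phase) + 0.05 * rng.standard_normal(t.size)
         for phase in (0.0, 1.0, 2.0)],
        axis=1,
    ).astype(np.float32)

    # Lossy stage: uniform quantization (8 mg steps); lossless stage: DEFLATE via zlib.
    step = 0.008
    quantized = np.round(accel / step).astype(np.int16)
    payload = zlib.compress(quantized.tobytes(), level=9)

    recon = quantized.astype(np.float32) * step
    ratio = accel.nbytes / len(payload)
    rmse = float(np.sqrt(np.mean((accel - recon) ** 2)))
    print(f"compression ratio: {ratio:.1f}x   RMSE: {rmse * 1000:.1f} mg")

The quantization step controls the trade-off: a coarser step raises the compression ratio at the cost of a larger reconstruction error.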
MatchZoo: A Learning, Practicing, and Developing System for Neural Text Matching
Text matching is the core problem in many natural language processing (NLP)
tasks, such as information retrieval, question answering, and conversation.
Recently, deep learning technology has been widely adopted for text matching,
making neural text matching a new and active research domain. With a large
number of neural matching models emerging rapidly, it becomes more and more
difficult for researchers, especially those newcomers, to learn and understand
these new models. Moreover, it is usually difficult to try these models due to
the tedious data pre-processing, complicated parameter configuration, and
massive optimization tricks, not to mention that public code is sometimes
unavailable. Finally, for researchers who want to develop new models, it is
also not an easy task to implement a neural text matching model from scratch
and to compare it with the range of existing models. In this paper, therefore, we present a
novel system, namely MatchZoo, to facilitate the learning, practicing and
designing of neural text matching models. The system consists of a powerful
matching library and a user-friendly and interactive studio, which can help
researchers: 1) to learn state-of-the-art neural text matching models
systematically; 2) to train, test, and apply these models with simple
configurable steps; and 3) to develop their own models with rich APIs and
assistance.
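To make "neural text matching model" concrete, here is a minimal representation-based matcher in plain NumPy (a generic illustration of the model family such a system covers, not MatchZoo's actual API; the toy vocabulary, embedding size, and cosine scoring are all assumptions): each text is embedded, pooled into a single vector, and the two vectors are compared.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = {w: i for i, w in enumerate(
        "what is the capital of france paris berlin a city".split())}
    emb = 0.1 * rng.standard_normal((len(vocab), 32))   # toy embedding table (untrained)

    def encode(text: str) -> np.ndarray:
        # Representation-based encoder: embed tokens, then mean-pool into one vector.
        ids = [vocab[w] for w in text.lower().split() if w in vocab]
        return emb[ids].mean(axis=0)

    def match_score(query: str, doc: str) -> float:
        # Cosine similarity between the pooled representations.
        q, d = encode(query), encode(doc)
        return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

    print(match_score("what is the capital of france", "paris is a city"))

In a real system the embeddings and the comparison function are learned from labeled matching data, and interaction-based models compare the two texts at the token level instead of pooling each text first.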
