Green infrastructure as an adaptation approach to tackling urban overheating in the Glasgow Clyde Valley Region, UK
EDEN: Evolutionary Deep Networks for Efficient Machine Learning
Deep neural networks continue to show improved performance with increasing
depth, an encouraging trend that implies an explosion in the possible
permutations of network architectures and hyperparameters for which there is
little intuitive guidance. To address this increasing complexity, we propose
Evolutionary DEep Networks (EDEN), a computationally efficient
neuro-evolutionary algorithm which interfaces to any deep neural network
platform, such as TensorFlow. We show that EDEN evolves simple yet successful
architectures built from embedding, 1D and 2D convolutional, max pooling and
fully connected layers along with their hyperparameters. Evaluation of EDEN
across seven image and sentiment classification datasets shows that it reliably
finds good networks -- and in three cases achieves state-of-the-art results --
even on a single GPU, in just 6-24 hours. Our study provides a first attempt at
applying neuro-evolution to the creation of 1D convolutional networks for
sentiment analysis, including the optimisation of the embedding layer.
Comment: 7 pages, 3 figures, 3 tables; see video at
https://vimeo.com/23451009
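As a rough illustration of the evolutionary loop such a system runs (not EDEN's actual genome, operators, or fitness), the sketch below evolves a toy hyperparameter configuration; the search space and fitness function are invented stand-ins for validation accuracy obtained by training a network.

```python
import random

# Hypothetical search space: these keys and values are illustrative,
# not the actual EDEN genome of layers and hyperparameters.
SEARCH_SPACE = {
    "n_conv_layers": [1, 2, 3],
    "filters": [16, 32, 64],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_genome():
    """Sample one configuration uniformly from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(genome):
    """Resample one randomly chosen gene."""
    child = dict(genome)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(fitness, population_size=8, generations=5):
    """Truncation-selection evolution: keep the best half, refill by mutation."""
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: population_size // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - len(parents))]
    return max(population, key=fitness)

# Toy fitness standing in for post-training validation accuracy.
def toy_fitness(g):
    return g["filters"] / 64 - abs(g["learning_rate"] - 1e-3)

best = evolve(toy_fitness)
```

In the real system the fitness evaluation is the expensive step (training each candidate network), which is why EDEN's single-GPU 6-24 hour budget is notable.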
Highly robust error correction by convex programming
This paper discusses a stylized communications problem where one wishes to
transmit a real-valued signal x in R^n (a block of n pieces of information) to
a remote receiver. We ask whether it is possible to transmit this information
reliably when a fraction of the transmitted codeword is corrupted by arbitrary
gross errors, and when in addition, all the entries of the codeword are
contaminated by smaller errors (e.g. quantization errors).
We show that if one encodes the information as Ax where A is a suitable m by
n coding matrix (m >= n), there are two decoding schemes that allow the
recovery of the block of n pieces of information x with nearly the same
accuracy as if no gross errors occur upon transmission (or equivalently as if
one has an oracle supplying perfect information about the sites and amplitudes
of the gross errors). Moreover, both decoding strategies are very concrete and
only involve solving simple convex optimization programs, either a linear
program or a second-order cone program. We complement our study with numerical
simulations showing that the encoder/decoder pair performs remarkably well.
Comment: 23 pages, 2 figures
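The linear-programming decoder can be sketched as an l1-residual minimisation: find the message whose re-encoding differs from the received word in as few large entries as possible. The Gaussian coding matrix, problem sizes, and error magnitudes below are illustrative choices, not the paper's construction.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 5, 20                       # message length and codeword length (m >= n)
A = rng.standard_normal((m, n))    # illustrative Gaussian coding matrix
x = rng.standard_normal(n)

y = A @ x
corrupt = rng.choice(m, size=3, replace=False)
y[corrupt] += 10.0                 # arbitrary gross errors on a few entries

# l1 decoding as a linear program:
#   minimise sum(t)  subject to  -t <= y - A x_hat <= t,
# over variables [x_hat (n entries), t (m entries)].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_hat = res.x[:n]
```

With only a few gross errors and ample redundancy (m - n large), the l1 solution typically coincides with the true message, matching the oracle-like accuracy the abstract describes.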
Sensitivity of Inequality Measures to Extreme Values
We examine the sensitivity of estimates of inequality indices to extreme values, in the sense of their robustness properties and of their statistical performance. We establish that these measures are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy.
Keywords: inequality measures, statistical performance, robustness
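A quick numerical illustration of this sensitivity, using the Gini coefficient on a simulated lognormal income sample (both are our choices for illustration, not the paper's setup): a single extreme observation visibly shifts the estimate.

```python
import numpy as np

def gini(income):
    """Gini coefficient via the standard sorted-sample formula."""
    income = np.sort(np.asarray(income, dtype=float))
    n = income.size
    # G = sum_i (2i - n - 1) x_(i) / (n * sum x), with i = 1..n over sorted data
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * income) / (n * income.sum())

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=1000)
g_base = gini(sample)
g_contaminated = gini(np.append(sample, sample.max() * 100))  # one extreme value
```

One contaminating observation out of a thousand is enough to move the index, which is the fragility the abstract documents.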
Income distribution and inequality measurement: The problem of extreme values
We examine the statistical performance of inequality indices in the presence of extreme values in the data and show that these indices are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy, even when standard bootstrap methods are employed. However, use of appropriate semiparametric methods for modelling the upper tail can greatly improve the performance of even those inequality indices that are normally considered particularly sensitive to extreme values.
Keywords: inequality measures, statistical performance, robustness
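One standard semiparametric tail technique of the kind described is to fit a Pareto distribution to the largest observations via the Hill estimator. The sketch below is our illustration, not the authors' exact procedure: it recovers a known tail index from simulated Pareto data.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the Pareto tail index from the k largest observations."""
    order = np.sort(sample)[::-1]                  # descending order statistics
    logs = np.log(order[:k]) - np.log(order[k])    # log-excesses over the threshold
    return k / logs.sum()

rng = np.random.default_rng(2)
alpha_true = 2.5
# Pareto(alpha) sample on [1, inf) via inverse-CDF: X = U^(-1/alpha)
pareto_sample = (1 - rng.random(20000)) ** (-1 / alpha_true)
alpha_hat = hill_estimator(pareto_sample, k=500)
```

In the semiparametric approach, the empirical distribution is used below a threshold and the fitted Pareto tail above it, which tames the influence of extreme values on the inequality index.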
Polar Coding for Secret-Key Generation
Practical implementations of secret-key generation are often based on
sequential strategies, which handle reliability and secrecy in two successive
steps, called reconciliation and privacy amplification. In this paper, we
propose an alternative approach based on polar codes that jointly deals with
reliability and secrecy. Specifically, we propose secret-key capacity-achieving
polar coding schemes for the following models: (i) the degraded binary
memoryless source (DBMS) model with rate-unlimited public communication, (ii)
the DBMS model with one-way rate-limited public communication, (iii) the 1-to-m
broadcast model and (iv) the Markov tree model with uniform marginals. For
models (i) and (ii) our coding schemes remain valid for non-degraded sources,
although they may not achieve the secret-key capacity. For models (i), (ii) and
(iii), our schemes rely on a pre-shared secret seed of negligible rate; however,
we provide special cases of these models for which no seed is required.
Finally, we show an application of our results to secrecy and privacy for
biometric systems. We thus provide the first examples of low-complexity
secret-key capacity-achieving schemes that are able to handle vector
quantization for model (ii), or multiterminal communication for models (iii)
and (iv).
Comment: 26 pages, 9 figures, accepted to IEEE Transactions on Information Theory; parts of the results were presented at the 2013 IEEE Information Theory Workshop
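At the core of any polar coding scheme is Arikan's polar transform x = u F^(⊗n) over GF(2), with F = [[1,0],[1,1]]. A minimal recursive sketch of the transform (the key-generation protocols themselves are well beyond this snippet):

```python
import numpy as np

def polar_transform(u):
    """Arikan's polar transform x = u F^(tensor n) over GF(2).

    Input length must be a power of two; computed recursively in O(N log N)
    without the bit-reversal permutation.
    """
    u = np.asarray(u) % 2
    if u.size == 1:
        return u
    half = u.size // 2
    top = polar_transform((u[:half] + u[half:]) % 2)  # XOR (upper) branch
    bottom = polar_transform(u[half:])                # pass-through branch
    return np.concatenate([top, bottom])
```

Because F is its own inverse over GF(2), applying the transform twice returns the input, which makes encoding and the core of decoding share the same butterfly structure.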
Labour Market Dynamics in Greek Regions: a Bayesian Markov Chain Approach Using Proportions Data
This paper focuses on Greek labour market dynamics at the regional level, covering 16 provinces as defined by NUTS levels 1 and 2 (Eurostat, 2008), using Markov chains for proportions data for the first time in the literature. We apply a Bayesian approach, which employs a Monte Carlo integration procedure that uncovers the entire empirical posterior distribution of the transition probabilities from full employment to part-time employment, unemployment and economically unregistered unemployment, and vice versa. Our results show that there are disparities in the transition probabilities across regions, implying that the convergence of the Greek labour market at the regional level is far from complete. However, some common patterns are observed, as regions in the south of the country exhibit similar transition probabilities between different labour-market states.
Keywords: Greek regions, employment, unemployment, Markov chains
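For multinomial transition counts with a Dirichlet prior, this kind of Bayesian analysis reduces to row-wise Dirichlet posteriors that can be sampled directly by Monte Carlo. The three-state counts below are hypothetical stand-ins, not the paper's Greek regional data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical counts of observed transitions between three labour-market
# states (rows/columns: employment, unemployment, inactivity) for one region.
counts = np.array([[90,  7,  3],
                   [20, 70, 10],
                   [ 5, 15, 80]])

# With a flat Dirichlet(1, 1, 1) prior on each row, the posterior of each row
# of the transition matrix is Dirichlet(counts + 1); sample it directly.
draws = np.stack([rng.dirichlet(row + 1, size=5000) for row in counts], axis=1)
posterior_mean = draws.mean(axis=0)   # approximates (counts + 1) / row totals
```

The full set of draws gives the entire empirical posterior of each transition probability, so regional disparities can be judged by comparing whole distributions rather than point estimates.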
Limits of contraction groups and the Tits core
The Tits core G^+ of a totally disconnected locally compact group G is
defined as the abstract subgroup generated by the closures of the contraction
groups of all its elements. We show that a dense subgroup is normalised by the
Tits core if and only if it contains it. It follows that every dense subnormal
subgroup contains the Tits core. In particular, if G is topologically simple,
then the Tits core is abstractly simple, and if G^+ is non-trivial then it is
the unique minimal dense normal subgroup. The proofs are based on the fact, of
independent interest, that the map which associates to an element the closure
of its contraction group is continuous.
Comment: 11 pages
