A lower bound for the density of states of the lattice Anderson model
We consider the Anderson model on the multi-dimensional cubic lattice and
prove a positive lower bound on the density of states under certain conditions.
For example, if the random variables are independently and identically
distributed and the probability measure has a bounded Lebesgue density with
compact support, and if this density is essentially bounded away from zero on
its support, then we prove that the density of states is strictly positive for
Lebesgue-almost every energy in the deterministic spectrum.
Comment: 7 pages, typos corrected in v2, to appear in Proc. Amer. Math. Soc.
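For context, a minimal sketch of the standard objects involved, in generic
notation not taken from the paper: the lattice Anderson Hamiltonian acts on
\ell^2(\mathbb{Z}^d), and the density of states is the derivative of the
integrated density of states.

    % Generic lattice Anderson model (notation assumed, not the paper's):
    (H_\omega \psi)(x) = \sum_{|y - x| = 1} \psi(y) + \omega_x \psi(x),
        \qquad x \in \mathbb{Z}^d,
    % Integrated density of states over boxes \Lambda_L of side length L:
    N(E) = \lim_{L \to \infty} \frac{1}{|\Lambda_L|}
        \, \#\{\, \text{eigenvalues of } H_\omega|_{\Lambda_L} \le E \,\},

where the \omega_x are the i.i.d. random variables described above. In these
terms the result asserts N'(E) > 0 for Lebesgue-almost every E in the
almost-sure spectrum.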
Multilinear tensor regression for longitudinal relational data
A fundamental aspect of relational data, such as from a social network, is
the possibility of dependence among the relations. In particular, the relations
between members of one pair of nodes may have an effect on the relations
between members of another pair. This article develops a type of regression
model to estimate such effects in the context of longitudinal and multivariate
relational data, or other data that can be represented in the form of a tensor.
The model is based on a general multilinear tensor regression model, a special
case of which is a tensor autoregression model in which the tensor of relations
at one time point is parsimoniously regressed on relations from previous time
points. This is done via a separable, or Kronecker-structured, regression
parameter along with a separable covariance model. In the context of an
analysis of longitudinal multivariate relational data, it is shown how the
multilinear tensor regression model can represent patterns that often appear in
relational and network data, such as reciprocity and transitivity.
Comment: Published at http://dx.doi.org/10.1214/15-AOAS839 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
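As a rough illustration of the separable (Kronecker-structured) regression
step described above, here is a minimal numpy sketch; the shapes, names, and
coefficient matrices are illustrative assumptions, not the authors' code.

    import numpy as np

    def multilinear_predict(X, B1, B2, B3):
        # One step of a multilinear (Tucker-product) tensor regression:
        # Y = X x_1 B1 x_2 B2 x_3 B3, i.e. a coefficient matrix applied
        # along each mode of the predictor tensor X.
        return np.einsum('ia,jb,kc,abc->ijk', B1, B2, B3, X)

    # Toy tensor autoregression: relations among n nodes of p types,
    # regressed on the previous time point (shapes purely illustrative).
    rng = np.random.default_rng(0)
    n, p = 5, 2
    X_prev = rng.normal(size=(n, n, p))    # relations at time t-1
    B_row = 0.1 * rng.normal(size=(n, n))  # mode-1 (sender) effects
    B_col = 0.1 * rng.normal(size=(n, n))  # mode-2 (receiver) effects
    B_rel = 0.1 * rng.normal(size=(p, p))  # mode-3 (relation-type) effects
    Y_hat = multilinear_predict(X_prev, B_row, B_col, B_rel)
    print(Y_hat.shape)  # (5, 5, 2): predicted relations at time t

Applying one matrix along each mode is equivalent to multiplying vec(X) by the
Kronecker product of the three matrices, which keeps the parameter count far
below that of an unstructured regression on the vectorized tensor.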
Minimum length-scale constraints for parameterized implicit function based topology optimization
Open access via Springer Compact Agreement. The author thanks the Numerical
Analysis Group at the Rutherford Appleton Laboratory for their FORTRAN HSL
packages (HSL, a collection of Fortran codes for large-scale scientific
computation; see http://www.hsl.rl.ac.uk/), acknowledges the support of the
Maxwell compute cluster funded by the University of Aberdeen, and thanks the
anonymous reviewers for their helpful comments and suggestions that improved
this paper.
The Latent Relation Mapping Engine: Algorithm and Experiments
Many AI researchers and cognitive scientists have argued that analogy is the
core of cognition. The most influential work on computational modeling of
analogy-making is Structure Mapping Theory (SMT) and its implementation in the
Structure Mapping Engine (SME). A limitation of SME is the requirement for
complex hand-coded representations. We introduce the Latent Relation Mapping
Engine (LRME), which combines ideas from SME and Latent Relational Analysis
(LRA) in order to remove the requirement for hand-coded representations. LRME
builds analogical mappings between lists of words, using a large corpus of raw
text to automatically discover the semantic relations among the words. We
evaluate LRME on a set of twenty analogical mapping problems, ten based on
scientific analogies and ten based on common metaphors. LRME achieves
human-level performance on the twenty problems. We compare LRME with a variety
of alternative approaches and find that they are not able to reach the same
level of performance.
Comment: related work available at http://purl.org/peter.turney
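A toy sketch of the mapping search, under loud assumptions: the relation
vectors here are simple embedding differences standing in for LRA's
corpus-derived relational features, the brute-force permutation search is only
feasible for short word lists, and none of these names come from the LRME
implementation.

    from itertools import permutations
    import numpy as np

    def relation_vector(a, b, vecs):
        # Stand-in for LRA: represent the relation between two words as
        # the difference of their vectors. LRME instead derives relation
        # features from patterns found in a large text corpus.
        return vecs[a] - vecs[b]

    def mapping_score(source, target, vecs):
        # Score the candidate mapping source[i] -> target[i] by the cosine
        # agreement of pairwise relations across the two lists.
        score = 0.0
        for i in range(len(source)):
            for j in range(len(source)):
                if i == j:
                    continue
                r_s = relation_vector(source[i], source[j], vecs)
                r_t = relation_vector(target[i], target[j], vecs)
                denom = np.linalg.norm(r_s) * np.linalg.norm(r_t)
                if denom > 0:
                    score += float(r_s @ r_t) / denom
        return score

    def best_mapping(source, target, vecs):
        # Exhaustive search over target orderings (short lists only).
        return max(permutations(target),
                   key=lambda perm: mapping_score(source, list(perm), vecs))

    # Purely illustrative 3-d "embeddings" for the solar-system analogy:
    vecs = {'sun': np.array([1.0, 0.0, 0.0]),
            'planet': np.array([0.5, 0.5, 0.0]),
            'nucleus': np.array([1.0, 0.1, 0.0]),
            'electron': np.array([0.5, 0.6, 0.0])}
    print(best_mapping(['sun', 'planet'], ['nucleus', 'electron'], vecs))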
A theory of cross-validation error
This paper presents a theory of error in cross-validation testing of algorithms for predicting
real-valued attributes. The theory justifies the claim that predicting real-valued
attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore,
the theory indicates precisely how these conflicting demands must be balanced, in
order to minimize cross-validation error. A general theory is presented and
then developed in detail for linear regression and instance-based learning.
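The trade-off the abstract describes can be seen in a small cross-validation
experiment (an illustrative numpy sketch, not the paper's formal theory):
cross-validation error typically falls and then rises as model complexity
grows, so it is minimized at an intermediate complexity.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

    def cv_error(degree, k=5):
        # k-fold cross-validation error of a polynomial fit: low degrees
        # underfit (too simple), high degrees fit the training folds'
        # noise (overfit), so the minimum sits in between.
        idx = np.arange(x.size)
        errs = []
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            coef = np.polyfit(x[train], y[train], degree)
            pred = np.polyval(coef, x[fold])
            errs.append(np.mean((pred - y[fold]) ** 2))
        return float(np.mean(errs))

    for d in range(1, 10):
        print(d, round(cv_error(d), 4))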
Technical note: Bias and the quantification of stability
Research on bias in machine learning algorithms has generally been concerned with the
impact of bias on predictive accuracy. We believe that there are other factors that should
also play a role in the evaluation of bias. One such factor is the stability of the algorithm;
in other words, the repeatability of the results. If we obtain two sets of data from the same
phenomenon, with the same underlying probability distribution, then we would like our
learning algorithm to induce approximately the same concepts from both sets of data. This
paper introduces a method for quantifying stability, based on a measure of the agreement
between concepts. We also discuss the relationships among stability, predictive accuracy,
and bias.
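A minimal sketch of one way to quantify this notion of stability, assuming
scikit-learn and using plain prediction agreement on fresh inputs as a
stand-in for the paper's measure of agreement between concepts:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def stability(make_learner, sample_a, sample_b, test_X):
        # Train the same learner on two independent samples from one
        # distribution and measure how often the two induced concepts
        # agree on unseen inputs.
        (Xa, ya), (Xb, yb) = sample_a, sample_b
        model_a = make_learner().fit(Xa, ya)
        model_b = make_learner().fit(Xb, yb)
        agree = model_a.predict(test_X) == model_b.predict(test_X)
        return float(np.mean(agree))

    rng = np.random.default_rng(2)
    def draw(n=200):  # two samples from the same underlying phenomenon
        X = rng.normal(size=(n, 2))
        return X, (X[:, 0] + X[:, 1] > 0).astype(int)

    print(stability(lambda: DecisionTreeClassifier(max_depth=3),
                    draw(), draw(), rng.normal(size=(500, 2))))

Note that a learner that ignores the data is perfectly stable, which is why
stability complements rather than replaces predictive accuracy.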
