
    Computing in Additive Networks with Bounded-Information Codes

    This paper studies the theory of the additive wireless network model, in which the received signal is abstracted as the addition of the transmitted signals. Our central observation is that the crucial challenge for computing in this model is not high contention, as assumed previously, but rather guaranteeing a bounded amount of information in each neighborhood per round, a property we show is achievable using a new random coding technique. Technically, we provide efficient algorithms for fundamental distributed tasks in additive networks, such as solving various symmetry-breaking problems, approximating network parameters, and solving an asymmetry-revealing problem such as computing a maximal input. The key method is a novel random coding technique that allows a node to successfully decode the received information as long as it does not contain too many distinct values. We then design our algorithms to produce a limited amount of information in each neighborhood, in order to leverage our enriched toolbox for computing in additive networks.
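
    The decoding step lends itself to a toy simulation. The sketch below is ours, not the paper's code: each value in a small alphabet is assigned a random integer codeword, a receiver observes only the component-wise sum of its neighbors' codewords, and a brute-force search recovers the transmitted multiset. The alphabet, the codeword length, and the assumption that the receiver knows the number of transmitters are all illustrative simplifications.

```python
import itertools
import random

# Toy model of an additive network: the receiver sees only the
# component-wise sum of the codewords its neighbors transmit.
ALPHABET = range(8)   # possible input values
CODE_LEN = 16         # codeword length
FIELD = 10**9         # large range makes random collisions unlikely

random.seed(0)
code = {v: [random.randrange(FIELD) for _ in range(CODE_LEN)] for v in ALPHABET}

def transmit(values):
    """Component-wise sum of the codewords of all transmitting neighbors."""
    return [sum(code[v][i] for v in values) for i in range(CODE_LEN)]

def decode(received, n_transmitters):
    """Brute-force search over multisets of values; feasible only while the
    neighborhood carries a bounded amount of information per round."""
    for multiset in itertools.combinations_with_replacement(ALPHABET, n_transmitters):
        if transmit(multiset) == received:
            return multiset
    return None

sent = (1, 3, 3, 7)                       # four neighbors transmit at once
print(decode(transmit(sent), len(sent)))  # -> (1, 3, 3, 7)
```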

    Constructive updating/downdating of oblique projectors: a generalization of the Gram-Schmidt process

    A generalization of the Gram-Schmidt procedure is achieved by providing equations for updating and downdating oblique projectors. The work is motivated by the problem of adaptive signal representation outside the orthogonal-basis setting. The proposed techniques are shown to be relevant to the problem of discriminating signals produced by different phenomena when the order of the signal model needs to be adjusted. (To appear in Journal of Physics A: Mathematical and Theoretical, 2007.)
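
    To make the central object concrete, here is a direct (non-recursive) numpy construction of an oblique projector, ours for illustration only; the paper's contribution is the recursive update/downdate equations, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Oblique projector onto range(A) along the orthogonal complement of range(B):
#     P = A (B^T A)^{-1} B^T
# Built here by direct inversion; the paper instead derives equations to
# update/downdate P as columns are added to or removed from A.
A = rng.standard_normal((6, 3))
B = rng.standard_normal((6, 3))

P = A @ np.linalg.inv(B.T @ A) @ B.T

print(np.allclose(P @ P, P))  # idempotent: True
print(np.allclose(P @ A, A))  # acts as the identity on range(A): True

# The choice B = A recovers the familiar orthogonal projector of Gram-Schmidt.
P_orth = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P_orth, P_orth.T))  # symmetric: True
```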

    Practice Makes Imperfect: Restorative Effects of Sleep on Motor Learning

    Emerging evidence suggests that sleep plays a key role in procedural learning, particularly in the continued development of motor skill following initial acquisition. We argue that a detailed examination of the time course of performance across sleep on the finger-tapping task, established as the paradigm for studying the effect of sleep on motor learning, helps distinguish a restorative role of sleep in motor skill learning from a proactive one. Healthy subjects rehearsed for 12 trials and, following a night of sleep, were tested. Early pre-sleep training rapidly improved both speed and accuracy. Additional rehearsal markedly slowed further improvement, or partially reversed performance, to levels below the theoretical upper limits derived from early pre-sleep rehearsal. This decrement in learning efficacy does not always occur, but when, and only when, it does, overnight sleep fully or partly restores efficacy and actual performance to the optimal theoretically achievable level. Our findings re-interpret the sleep-dependent memory enhancement in motor learning reported in the literature as a restoration of fatigued circuitry specialized for the skill. In providing restitution to the fatigued brain, sleep eliminates the rehearsal-induced synaptic fatigue of the circuitry specialized for the task and restores the benefit of early pre-sleep rehearsal. The present findings lend support to the notion that latent sleep-dependent enhancement of performance is a behavioral expression of the brain's restitution in sleep.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
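
    As a concrete instance of item (iii), the forward-backward scheme applied to the ℓ¹ (sparsity) prior reduces to iterative soft-thresholding (ISTA). The numpy sketch below is ours, not the chapter's code; the problem sizes and the regularization weight lam are illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (the 'backward' step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Forward-backward splitting for  min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Forward step: gradient descent on the smooth data-fidelity term.
    Backward step: prox of the nonsmooth l1 regularizer."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # typically recovers {5, 37, 80}
```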

    New iterative methods for linear inequalities

    New iterative methods for solving systems of linear inequalities are presented. Each step in these methods consists of finding the orthogonal projection of the current point onto a hyperplane corresponding to a surrogate constraint, which is constructed through a positive combination of a group of violated constraints. Both sequential and parallel implementations are discussed.
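
    A minimal numpy sketch of one such step, as we read it from the abstract; the unit surrogate weights and the random problem data are our own illustrative choices, not the authors' code.

```python
import numpy as np

def surrogate_projection_step(A, b, x, weights=None):
    """One sequential step for Ax <= b: combine the currently violated rows
    into a surrogate hyperplane with positive weights, then project x onto it."""
    residual = A @ x - b
    violated = residual > 1e-12
    if not violated.any():
        return x, True                        # x is already feasible
    pi = np.ones(violated.sum()) if weights is None else weights
    a = pi @ A[violated]                      # surrogate normal (positive combination)
    beta = pi @ b[violated]
    x_new = x - (a @ x - beta) / (a @ a) * a  # orthogonal projection onto a.x = beta
    return x_new, False

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = A @ rng.standard_normal(5) + 1.0          # slack guarantees a nonempty feasible set
x = 10 * rng.standard_normal(5)
for _ in range(1000):
    x, done = surrogate_projection_step(A, b, x)
    if done:
        break
print(done, (A @ x <= b + 1e-9).all())        # expect: True True
```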

    Pareto optimality solution of the multi-objective photogrammetric resection-intersection problem

    Reconstruction of architectural structures from photographs has recently seen intensive effort in computer vision research. It is achieved by solving nonlinear least squares (NLS) problems to obtain accurate structure and motion estimates. In photogrammetry, NLS problems contribute to the determination of 3-dimensional (3D) terrain models from images. The traditional NLS approach to the resection-intersection problem, based on an implicit formulation, suffers on the one hand from the lack of any provision for weighting the involved variables. On the other hand, an explicit formulation expresses the objectives to be minimized in different forms, resulting in different values for the estimated parameters at non-zero residuals. These objectives may conflict in a Pareto sense, namely, a small change in the parameters increases one objective while decreasing the other, as is often the case in multi-objective problems. Such is often the case with errors-in-all-variables (EIV) models, e.g., in the resection-intersection problem, where such a change in the parameters can be caused by errors in both image and reference coordinates.

    This study proposes the Pareto optimal approach as a possible improvement to the solution of the resection-intersection problem, providing simultaneous estimation of the coordinates and orientation parameters of the cameras in a two- or multi-station camera system on the basis of a properly weighted multi-objective function. This objective represents the weighted sum of the squared direct explicit differences between the measured and computed ground as well as image coordinates. The effectiveness of the proposed method is demonstrated on two camera calibration problems, where the internal and external orientation parameters are estimated on the basis of the collinearity equations, employing the data of a Manhattan-type test field as well as the data of an outdoor, real-case experiment. In addition, an architectural reconstruction of the Merton College court in Oxford (UK) via estimation of camera matrices is presented. Although these two problems differ, the first concerning the reduction of errors in the image and spatial coordinates and the second the precision of the space coordinates, Pareto optimality handles both in a general and flexible way.
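
    The weighted-sum scalarization underlying the approach is easy to illustrate on a toy two-objective least-squares problem; the sketch below is a deliberate simplification of ours, not the paper's photogrammetric model. Sweeping the weight w traces out Pareto-optimal trade-offs between the two residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two conflicting least-squares objectives sharing one parameter vector x,
# standing in for the image-space and object-space residuals of the
# resection-intersection problem (a toy, not the real model).
A1, y1 = rng.standard_normal((20, 4)), rng.standard_normal(20)
A2, y2 = rng.standard_normal((20, 4)), rng.standard_normal(20)

def weighted_solution(w):
    """Minimize w*||A1 x - y1||^2 + (1-w)*||A2 x - y2||^2 in closed form."""
    M = w * A1.T @ A1 + (1 - w) * A2.T @ A2
    rhs = w * A1.T @ y1 + (1 - w) * A2.T @ y2
    return np.linalg.solve(M, rhs)

for w in (0.1, 0.5, 0.9):
    x = weighted_solution(w)
    f1 = np.sum((A1 @ x - y1) ** 2)
    f2 = np.sum((A2 @ x - y2) ** 2)
    print(f"w={w:.1f}  f1={f1:.3f}  f2={f2:.3f}")
# As w grows, f1 decreases while f2 increases: no point improves one
# objective without worsening the other, i.e., the solutions are Pareto optimal.
```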