About SparseLab
Changes and Enhancements for Release 2.0: 4 papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solutions May be Sparse"; "Why Simple Shrinkage is Still Relevant For Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; "On the Stability of Basis Pursuit in the Presence of Noise." SparseLab is a library of Matlab routines for finding sparse solutions to underdetermined systems. The library is available free of charge over the Internet. Versions are provided for Macintosh, UNIX and Windows machines. Downloading and installation instructions are given here. SparseLab has over 400 .m files which are documented, indexed and cross-referenced in various ways. In this document we suggest several ways to get started using SparseLab: (a) trying out the pedagogical examples, (b) running the demonstrations, which illustrate the use of SparseLab in published papers, and (c) browsing the extensive collection of source files, which are self-documenting. SparseLab makes available, in one package, all the code to reproduce all the figures in the included published articles. The interested reader can inspect the source code to see exactly what algorithms were used and how parameters were set in producing our figures, and can then modify the source to produce variations on our results. SparseLab has been developed, in part, because of exhortations by Jon Claerbout of Stanford that computational scientists should engage in "really reproducible" research. This document helps with installation and getting started, as well as describing the philosophy, limitations and rules of the road for this software.
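The core problem SparseLab addresses, finding a sparse solution to an underdetermined system, can be illustrated outside Matlab. The following is a minimal Python sketch (not SparseLab code) of basis pursuit, min ||x||_1 subject to Ax = b, recast as a linear program by splitting x into nonnegative parts; the problem sizes and seed are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v)
    and the equality constraint becomes A(u - v) = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # encodes A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:n] - uv[n:]

# Usage: recover a 3-sparse vector in R^40 from 20 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x0 = np.zeros(40)
x0[[3, 17, 29]] = [1.5, -2.0, 0.7]
x_hat = basis_pursuit(A, A @ x0)
```

With this many random measurements relative to the sparsity level, the l1 solution typically coincides with the sparse generator, which is the phenomenon the papers bundled with SparseLab analyze.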
SparseLab Architecture
Changes and Enhancements for Release 2.0: 4 papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solutions May be Sparse"; "Why Simple Shrinkage is Still Relevant For Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; "On the Stability of Basis Pursuit in the Presence of Noise." This document describes the architecture of SparseLab version 2.0. It is designed for users who already have had day-to-day interaction with the package and now need specific details about the architecture of the package, for example, to modify components for their own research.
On the performance of algorithms for the minimization of l1-penalized functionals
The problem of assessing the performance of algorithms used for the minimization of an l1-penalized least-squares functional, for a range of penalty parameters, is investigated. A criterion that uses the idea of "approximation isochrones" is introduced. Five different iterative minimization algorithms are tested and compared, as well as two warm-start strategies. Both well-conditioned and ill-conditioned problems are used in the comparison, and the contrast between these two categories is highlighted. Comment: 18 pages, 10 figures; v3: expanded version with an additional synthetic test problem
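A standard member of the algorithm family benchmarked in such studies is iterative soft-thresholding (ISTA); the abstract does not name the five algorithms tested, so the sketch below is purely illustrative of the l1-penalized least-squares problem itself.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the smooth term, then shrinkage for the l1 term
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Usage on a small random instance (sizes and seed are arbitrary).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
b = rng.standard_normal(30)
x = ista(A, b, lam=0.5)
```

Each iteration is cheap (one multiplication by A and one by its transpose), which is why benchmarking such schemes across a range of penalty parameters, as the paper does, is a question of iteration counts rather than per-step cost.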
Estimates on compressed neural networks regression
When the neural element number n of neural networks is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the condition of the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of the feedforward neural networks (FNNs), we prove that solving the FNNs regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.
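The abstract's idea of trading sample error against approximation error by fitting in a randomly compressed domain has a simple linear analogue. The toy below (not the paper's FNN construction; all names and sizes are hypothetical) regresses on a random Gaussian projection of the features, with no RIP condition checked, and fits d parameters instead of n.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 100, 500, 20          # sample size, original dimension, compressed dimension

# Hypothetical synthetic data: the response depends on the first 5 features.
X = rng.standard_normal((m, n))
w_true = np.zeros(n)
w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(m)

# Random Gaussian compression matrix; no RIP condition is verified.
P = rng.standard_normal((n, d)) / np.sqrt(d)
Z = X @ P                        # design matrix in the compressed domain

# Ordinary least squares over d parameters instead of n.
w_c, *_ = np.linalg.lstsq(Z, y, rcond=None)
residual = np.sum((y - Z @ w_c) ** 2)
```

Here the d-parameter fit cannot overfit the way an n-parameter fit with n > m would, at the price of an approximation error introduced by the projection, which mirrors the trade-off the paper quantifies for FNNs.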
A Region-based MRF Model for Unsupervised Segmentation of Moving Objects in Image Sequences
This paper addresses the problem of segmentation of moving objects in image sequences, which is of key importance in content-based applications. We transform the problem into a graph labeling problem over a region adjacency graph (RAG), by introducing a Markov random field (MRF) model based on spatio-temporal information. The initial partition is obtained by a fast, color-based watershed segmentation. The motion of each region is estimated and validated in a hierarchical framework. A dynamic memory, based on object tracking, is incorporated into the segmentation process to maintain temporal coherence. The performance of the algorithm is evaluated on several real-world image sequences.
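The first step of such a pipeline, turning an initial label image (e.g. from watershed) into a region adjacency graph, can be sketched as follows. This is a hypothetical helper, not the authors' code: 4-connected neighboring pixels with different labels contribute an edge between the two regions.

```python
import numpy as np

def region_adjacency_graph(labels):
    """Build the edge set of a RAG from a 2-D integer label image (4-connectivity)."""
    # Pair each pixel's label with its right neighbor and its lower neighbor.
    h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    edges = set()
    for a, b in np.vstack([h, v]):
        if a != b:                                # boundary between two regions
            edges.add((int(min(a, b)), int(max(a, b))))
    return edges

# Usage: three regions meeting in a 3x3 label image.
labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])
edges = region_adjacency_graph(labels)
```

The MRF labeling described in the abstract would then be defined over these region nodes and edges rather than over individual pixels, which is what makes the approach fast.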
