
    Adaptive Kernel Methods Using the Balancing Principle

    The choice of the regularization parameter is a fundamental problem in supervised learning, since the performance of most algorithms depends crucially on the choice of one or more such parameters. In particular, a central theoretical issue concerns how much prior knowledge about the problem is needed to choose the regularization parameter suitably and obtain learning rates. In this paper we present a strategy, the balancing principle, for choosing the regularization parameter without knowledge of the regularity of the target function. This choice adaptively achieves the best error rate. Our main result applies to regularization algorithms in reproducing kernel Hilbert spaces with the square loss, though we also study how a similar principle can be used in other situations. As a straightforward corollary, we immediately derive adaptive parameter choices for various recently studied kernel methods. Numerical experiments with the proposed parameter choice rules are also presented.
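
    As a rough illustration of the kind of rule the abstract describes, the sketch below applies a Lepskii-style balancing selection to plain ridge regression over a descending grid of regularization parameters: accept a smaller parameter only while its solution stays within the variance bands of all coarser solutions. This is a simplified illustration, not the paper's algorithm; the surrogate error bound `sigma`, the constant `C`, and the factor 4 are assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + n*lam*I)^{-1} X^T y
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)

def balancing_choice(X, y, lams, C=1.0):
    """Lepskii-type balancing: pick the smallest lambda whose solution is
    compatible with the (growing) variance bands of all larger lambdas."""
    n = X.shape[0]
    lams = np.sort(np.asarray(lams, float))[::-1]   # largest -> smallest
    ws = [ridge_fit(X, y, lam) for lam in lams]
    sigma = lambda lam: C / np.sqrt(n * lam)        # surrogate sample-error bound
    best_w, best_lam = ws[0], lams[0]
    for i in range(1, len(lams)):
        # accept lambda_i only if it agrees with every coarser solution
        ok = all(np.linalg.norm(ws[i] - ws[j]) <= 4 * sigma(lams[j])
                 for j in range(i))
        if not ok:
            break
        best_w, best_lam = ws[i], lams[i]
    return best_w, best_lam
```

    The selected parameter never uses the (unknown) regularity of the target function, only the computed estimators and the bound on the sample error.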

    Incremental Robot Learning of New Objects with Fixed Update Time

    We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment. We propose an incremental variant of the Regularized Least Squares for Classification (RLSC) algorithm, and exploit its structure to seamlessly add new classes to the learned model. The presented algorithm addresses the problem of an unbalanced proportion of training examples per class, which occurs when new objects are presented to the system for the first time. We evaluate our algorithm on a machine learning benchmark dataset and on two challenging object recognition tasks in a robotic setting. Empirical evidence shows that our approach achieves comparable or higher classification performance than its batch counterpart when classes are unbalanced, while being significantly faster. (8 pages, 3 figures)
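
    The structural property the abstract exploits can be sketched as follows: in linear RLSC, the covariance matrix and the per-class targets can be updated one example at a time, so adding an example (or an entirely new class) has a cost independent of how many examples were seen before. This is a minimal linear sketch under assumed one-vs-all encoding, not the authors' implementation (which also handles class rebalancing); the class name `IncrementalRLSC` is illustrative.

```python
import numpy as np

class IncrementalRLSC:
    """Minimal sketch of incremental regularized least squares classification:
    keep A = lam*I + X^T X and one accumulated target vector per class, so
    each update costs O(d^2) regardless of the number of examples seen."""
    def __init__(self, dim, lam=1e-3):
        self.A = lam * np.eye(dim)   # regularized covariance
        self.B = np.zeros((dim, 0))  # one column of X^T y per class
        self.classes = []

    def partial_fit(self, x, label):
        x = np.asarray(x, float)
        self.A += np.outer(x, x)
        if label not in self.classes:          # brand-new class: add a column
            self.classes.append(label)
            self.B = np.hstack([self.B, np.zeros((len(x), 1))])
        k = self.classes.index(label)
        self.B[:, k] += x                      # accumulate positive targets

    def predict(self, x):
        W = np.linalg.solve(self.A, self.B)    # d x n_classes weight matrix
        return self.classes[int(np.argmax(np.asarray(x, float) @ W))]
```

    Because `A` and `B` are sufficient statistics, no stored training examples are needed when a new class arrives.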

    Incremental Semiparametric Inverse Dynamics Learning

    This paper presents a novel approach to incremental semiparametric inverse dynamics learning. In particular, we consider a mixture of two approaches: parametric modeling based on rigid-body dynamics equations, and nonparametric modeling based on incremental kernel methods with no prior information on the mechanical properties of the system. This yields an incremental semiparametric approach that leverages the advantages of both the parametric and the nonparametric models. We validate the proposed technique by learning the dynamics of one arm of the iCub humanoid robot.
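
    The semiparametric combination can be sketched in a batch form: fit a linear-in-parameters rigid-body term first, then fit a kernel model on its residuals, and predict with the sum of the two. This is an illustrative batch simplification of the idea, not the paper's incremental algorithm; the names `SemiparametricModel` and `regressor` (the map from state to rigid-body features) are assumptions.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Gaussian kernel between two state vectors
    return np.exp(-gamma * np.sum((a - b) ** 2))

class SemiparametricModel:
    """Sketch: parametric rigid-body term + nonparametric kernel correction
    fit on the parametric residuals."""
    def __init__(self, regressor, lam=1e-3, gamma=1.0):
        self.regressor, self.lam, self.gamma = regressor, lam, gamma

    def fit(self, states, torques):
        Phi = np.array([self.regressor(s) for s in states])
        # Parametric part: regularized least squares for the inertial parameters
        self.theta = np.linalg.solve(
            Phi.T @ Phi + self.lam * np.eye(Phi.shape[1]), Phi.T @ torques)
        resid = torques - Phi @ self.theta
        # Nonparametric part: kernel ridge regression on the residuals
        self.X = np.array(states, float)
        K = np.array([[rbf(a, b, self.gamma) for b in self.X] for a in self.X])
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.X)), resid)

    def predict(self, s):
        k = np.array([rbf(np.asarray(s, float), x, self.gamma) for x in self.X])
        return self.regressor(s) @ self.theta + k @ self.alpha
```

    The parametric term extrapolates from physical structure, while the kernel term absorbs unmodeled effects near the observed data.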

    iCub World: Friendly Robots Help Building Good Vision Data-Sets

    CVPR 2013 Workshop: Ground Truth -- What is a good dataset? Portland, USA (June 28, 2013). In this paper we present and begin analyzing the iCubWorld data-set, an object recognition data-set that we acquired using a Human-Robot Interaction (HRI) scheme and the iCub humanoid robot platform. Our setup allows for rapid acquisition and annotation of data with corresponding ground truth. While more constrained in its scope -- the iCub world is essentially a robotics research lab -- we demonstrate how the proposed data-set poses challenges to current recognition systems. The iCubWorld data-set is publicly available and can be downloaded from: http://www.iit.it/en/projects/data-sets.html

    The Social Sustainability of the Infrastructures: A Case Study in the Liguria Region

    One of the indicators that measure the economic development of a territory is its infrastructural endowment (roads, railways, etc.). Roads, railways, and airports are essential to creating the conditions for productive activities to be established or to grow, to economic growth, and to generating wider benefits. However, infrastructure can also have strong impacts on the environment and on the living conditions of the population, and can therefore become the object of contrast and opposition. In deciding whether or not to build new infrastructure, it is thus essential to assess social sustainability in parallel with economic and environmental sustainability, on the basis of an evaluation that takes into account the various aspects relating the work to the population, also in order to identify the most satisfactory design solution. Alongside the adopted methodology, suitable assessment criteria must be identified that are capable of capturing the various impacts generated by the infrastructure, not only economic and environmental but also social, and each criterion must be assigned a relative importance (weight) consistent with a correct balance of the three aspects of sustainability. This contribution deals with the identification of criteria for assessing the social sustainability of infrastructure projects, taking as reference the 24 infrastructure projects in the planning and construction phase in the Liguria Region that make use of Regional Law n. 39/2007 on the "Regional Strategic Intervention Programs" (P.R.I.S.), which provides guarantees for citizens affected by infrastructure works.
    In this research work, the selection is performed through the involvement of local stakeholders as well as of the subjects and institutions that operate within the decision-making process of a work (designers, technicians from public administrations). The selected criteria are then weighted through the pairwise comparison method used in Thomas Saaty's multi-criteria Analytic Hierarchy Process (AHP). The goal is to identify useful criteria for assessing social sustainability, together with the weights attributed by the various parties involved in the decision-making process and by citizens directly or indirectly affected by the infrastructure.
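
    The AHP weighting step mentioned above works, in Saaty's standard formulation, by deriving criterion weights from the principal eigenvector of a reciprocal pairwise-comparison matrix and then checking the consistency of the judgments. The sketch below shows that standard computation; the three-criterion example matrix in the usage note is hypothetical, not taken from the Liguria case study.

```python
import numpy as np

def ahp_weights(P, tol=1e-10, max_iter=1000):
    """Derive criterion weights from a reciprocal pairwise-comparison matrix P
    (P[i, j] = relative importance of criterion i over j) via the principal
    eigenvector, computed by power iteration and normalized to sum to 1."""
    P = np.asarray(P, float)
    w = np.ones(len(P)) / len(P)
    for _ in range(max_iter):
        w_new = P @ w
        w_new /= w_new.sum()
        if np.linalg.norm(w_new - w) < tol:
            break
        w = w_new
    return w

def consistency_ratio(P, w):
    """CR = CI / RI with Saaty's random indices; judgments are conventionally
    accepted when CR < 0.1."""
    P = np.asarray(P, float)
    n = len(P)
    lam_max = (P @ w / w).mean()          # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
    return ci / ri
```

    For a perfectly consistent matrix built from true weights (P[i, j] = w_i / w_j), the procedure recovers those weights exactly and the consistency ratio is zero.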

    Large-scale Nonlinear Variable Selection via Kernel Random Features

    We propose a new method for input variable selection in nonlinear regression. The method is embedded into a kernel regression machine that can model general nonlinear functions, not being a priori limited to additive models. This is the first kernel-based variable selection method applicable to large datasets. It sidesteps the typical poor scaling properties of kernel methods by mapping the inputs into a relatively low-dimensional space of random features. The algorithm discovers the variables relevant to the regression task while learning the prediction model, through learning the appropriate nonlinear random feature maps. We demonstrate the outstanding performance of our method on a set of large-scale synthetic and real datasets. (Comment: final version for the proceedings of ECML/PKDD 201)
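
    The scaling trick the abstract refers to is the random Fourier feature map of Rahimi and Recht: an explicit low-dimensional feature map whose inner products approximate the Gaussian kernel, so kernel ridge regression becomes an ordinary linear solve in D features rather than an n x n system. The sketch below shows that fixed-map baseline only; the paper's contribution of *learning* per-variable relevance inside the feature map is not reproduced here, and the function name `rff_ridge` is illustrative.

```python
import numpy as np

def rff_ridge(X, y, D=400, gamma=1.0, lam=1e-3, rng=None):
    """Random Fourier features for the Gaussian kernel exp(-gamma*||x-x'||^2):
    z(x) = sqrt(2/D) * cos(W x + b) with W ~ N(0, 2*gamma), b ~ U[0, 2*pi].
    Ridge regression on z(x) approximates kernel ridge regression at O(n D^2)
    cost instead of O(n^3)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
    # Return a predictor that re-applies the same fixed feature map
    return lambda Xq: np.sqrt(2.0 / D) * np.cos(np.asarray(Xq, float) @ W + b) @ w
```

    In the paper's setting, the matrix `W` is additionally reweighted per input dimension during training, which is what exposes the relevant variables.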

    Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry

    We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or Łojasiewicz properties. These geometric notions are usually local in nature, and may fail to describe the fine geometry of objective functions relevant in inverse problems and signal processing, which behave nicely on manifolds or on sets open with respect to a weak topology. Motivated by this observation, we revisit those geometric notions over arbitrary sets. In turn, this allows us to present several new results, as well as to collect in a unified view a variety of results scattered in the literature. Our contributions include the analysis of infinite-dimensional convex minimization problems, showing the first Łojasiewicz inequality for a quadratic function associated with a compact operator, and the derivation of new linear rates for problems arising from inverse problems with low-complexity priors. Our approach allows us to establish unexpected connections between geometry and a priori conditions in inverse problems, such as source conditions or restricted isometry properties.
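
    For reference, the forward-backward (proximal gradient) iteration studied here alternates a gradient step on the smooth term f with the proximal map of the nonsmooth term g, for problems of the form min f(x) + g(x). The sketch below shows the generic iteration and its classic instance, ℓ1-regularized least squares (where the proximal map is soft-thresholding); the geometry-dependent rate analysis is of course not reproduced.

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=500):
    """Forward-backward splitting for min f(x) + g(x): a forward (gradient)
    step on the smooth part f, then a backward (proximal) step on g."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

    For the lasso min 0.5*||A x - y||^2 + lam*||x||_1, take grad_f(x) = A^T(A x - y), prox_g(v, s) = soft_threshold(v, s*lam), and a step below 1/||A^T A||; conditioning or Łojasiewicz-type properties of the objective are exactly what upgrades the generic sublinear rate of this iteration to a linear one.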

    Speeding-up Object Detection Training for Robotics with FALKON

    Latest deep learning methods for object detection provide remarkable performance, but have limits when used in robotic applications. One of the most relevant issues is the long training time, which is due to the large size and imbalance of the associated training sets, characterized by few positive and a large number of negative examples (i.e. background). Proposed approaches are based on end-to-end learning by back-propagation [22] or on kernel methods trained with Hard Negatives Mining on top of deep features [8]. These solutions are effective, but prohibitively slow for on-line applications. In this paper we propose a novel pipeline for object detection that overcomes this problem and provides comparable performance, with a 60x training speedup. Our pipeline combines (i) the Region Proposal Network and the deep feature extractor from [22], to efficiently select candidate RoIs and encode them into powerful representations, with (ii) the FALKON [23] algorithm, a novel kernel-based method that allows fast training on large-scale problems (millions of points). We address the size and imbalance of the training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach. We assess the effectiveness of the approach on a standard Computer Vision dataset (PASCAL VOC 2007 [5]) and demonstrate its applicability to a real robotic scenario with the iCubWorld Transformations [18] dataset.
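
    The "stochastic subsampling intrinsic to the method" refers to the Nyström approximation at the core of FALKON: the kernel solution is restricted to m randomly chosen centers, so the linear system is m x m instead of n x n. The sketch below shows that core idea with a plain direct solve; it is not FALKON itself, which additionally uses a preconditioned conjugate-gradient solver to reach millions of points, and the function name `nystrom_krr` is illustrative.

```python
import numpy as np

def nystrom_krr(X, y, m=50, gamma=1.0, lam=1e-6, rng=None):
    """Nystrom-subsampled kernel ridge regression: pick m random centers C,
    restrict the solution to their span, and solve the resulting m x m system
    (K_nm^T K_nm + lam*n*K_mm) alpha = K_nm^T y."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X = np.asarray(X, float)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    C = X[idx]
    def K(A, B):
        # Gaussian kernel matrix between row sets A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Knm, Kmm = K(X, C), K(C, C)
    alpha = np.linalg.solve(
        Knm.T @ Knm + lam * len(X) * Kmm + 1e-10 * np.eye(len(C)),
        Knm.T @ y)
    return lambda Xq: K(np.asarray(Xq, float), C) @ alpha
```

    Training cost scales with m rather than with the full number of negatives, which is what makes per-class retraining fast enough for an on-line robotic setting.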