1,664 research outputs found

    A nodal domain theorem and a higher-order Cheeger inequality for the graph $p$-Laplacian

    We consider the nonlinear graph $p$-Laplacian and the set of eigenvalues and associated eigenfunctions of this operator defined by a variational principle. We prove a nodal domain theorem for the graph $p$-Laplacian for any $p \geq 1$. While for $p > 1$ the bounds on the number of weak and strong nodal domains are the same as for the linear graph Laplacian ($p = 2$), the behavior changes for $p = 1$. We show that the bounds are tight for $p \geq 1$, as they are attained by the eigenfunctions of the graph $p$-Laplacian on two graphs. Finally, using the properties of the nodal domains, we prove a higher-order Cheeger inequality for the graph $p$-Laplacian for $p > 1$. If the eigenfunction associated with the $k$-th variational eigenvalue of the graph $p$-Laplacian has exactly $k$ strong nodal domains, then the higher-order Cheeger inequality becomes tight as $p \rightarrow 1$.
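
    As background, the display below is a minimal sketch of one standard formulation of the unnormalized graph $p$-Laplacian and its variational (min-max) eigenvalues; the edge weights $w_{uv}$, the Rayleigh-type quotient $R_p$, and the Krasnoselskii genus $\gamma$ are conventional notation from the literature and are not defined in the abstract itself, and the exact normalization used in the paper may differ (LaTeX fragment, amsmath assumed).

```latex
% Minimal sketch (notation ours, not quoted from the paper): the unnormalized
% graph p-Laplacian acting on a function f on the vertices of a weighted graph G = (V, E, w).
\[
  (\Delta_p f)(u) = \sum_{v : (u,v) \in E} w_{uv}\,
      \lvert f(u) - f(v) \rvert^{p-1} \operatorname{sign}\bigl(f(u) - f(v)\bigr),
  \qquad p \ge 1 .
\]
% Variational (min-max) eigenvalues via a Rayleigh-type quotient and the
% Krasnoselskii genus; for p = 2 this reduces to the usual Courant-Fischer values.
\[
  R_p(f) = \frac{\sum_{(u,v) \in E} w_{uv}\, \lvert f(u) - f(v) \rvert^{p}}
                {\sum_{u \in V} \lvert f(u) \rvert^{p}},
  \qquad
  \lambda_k^{(p)} = \min_{\substack{A \subseteq \mathcal{S}_p \\ \gamma(A) \ge k}}
                    \; \max_{f \in A} R_p(f),
\]
% where S_p denotes the unit p-sphere of functions on the vertex set V.
```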

    Variants of RMSProp and Adagrad with Logarithmic Regret Bounds

    Adaptive gradient methods have recently become very popular, in particular as they have been shown to be useful in the training of deep neural networks. In this paper we analyze RMSProp, originally proposed for the training of deep neural networks, in the context of online convex optimization and show $\sqrt{T}$-type regret bounds. Moreover, we propose two variants, SC-Adagrad and SC-RMSProp, for which we show logarithmic regret bounds for strongly convex functions. Finally, we demonstrate in experiments that these new variants outperform other adaptive gradient techniques and stochastic gradient descent in the optimization of strongly convex functions as well as in the training of deep neural networks. Comment: ICML 2017, 16 pages, 23 figures
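
    To make the distinction concrete, the sketch below contrasts a standard RMSProp step with an SC-RMSProp-style step for strongly convex objectives, in which the running average uses a decay approaching 1 and the square root in the denominator is dropped; the function names, step sizes, damping constant, and the toy quadratic loss are illustrative placeholders and not the exact quantities analyzed in the paper.

```python
import numpy as np

def rmsprop_step(x, grad, v, lr=0.01, beta=0.9, eps=1e-8):
    """Standard RMSProp step: scale the gradient by the square root of a
    running average of squared gradients (the sqrt(T)-type regret regime)."""
    v = beta * v + (1.0 - beta) * grad ** 2
    x = x - lr * grad / (np.sqrt(v) + eps)
    return x, v

def sc_rmsprop_step(x, grad, v, t, lr=0.1, eps=1e-8):
    """Sketch of an SC-RMSProp-style step for strongly convex objectives.
    The 1 - 1/t decay and the missing square root in the denominator follow the
    idea described in the paper; the exact step sizes and per-coordinate damping
    used there may differ (treat the constants here as placeholders)."""
    beta_t = 1.0 - 1.0 / t                       # averaging coefficient approaches 1
    v = beta_t * v + (1.0 - beta_t) * grad ** 2  # running average of squared gradients
    x = x - (lr / t) * grad / (v + eps)          # no sqrt in the denominator
    return x, v

# Usage: online rounds t = 1, 2, ... with per-round gradients g_t.
x = np.zeros(3)
v = np.zeros(3)
for t in range(1, 101):
    g = 2.0 * (x - np.ones(3))                   # gradient of the toy loss ||x - 1||^2
    x, v = sc_rmsprop_step(x, g, v, t)
print(x)                                         # x has moved toward the minimizer [1, 1, 1]
```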

    Loss Functions for Top-k Error: Analysis and Insights

    In order to push performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has increased significantly in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question of whether the top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods, comparing their top-k performance from both a practical and a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare all of the proposed and established methods for top-k error optimization on various datasets. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses typically lead to further improvements while being faster to train than the softmax. Comment: In Computer Vision and Pattern Recognition (CVPR), 201
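
    Since the paper's central quantity is the top-k error, the snippet below is a minimal NumPy-based sketch of how that metric is typically computed from a matrix of class scores; the function name and the toy example are ours and are independent of the top-k losses proposed in the paper.

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of examples whose true label is not among the k highest-scoring classes.

    scores : (n_samples, n_classes) array of real-valued class scores
    labels : (n_samples,) array of integer ground-truth labels
    """
    # indices of the k largest scores per row (order within the top k is irrelevant)
    top_k = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (top_k == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Example: with k = 1 this reduces to the usual top-1 (classification) error.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
labels = np.array([1, 1])
print(top_k_error(scores, labels, k=1))  # 0.5 (second example misses at top-1)
print(top_k_error(scores, labels, k=2))  # 0.0 (both true labels are in the top 2)
```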