35,184 research outputs found

    Effective Occlusion Handling for Fast Correlation Filter-based Trackers

    Correlation filter-based trackers suffer heavily from multiple peaks in their response maps caused by occlusions. Moreover, the whole tracking pipeline may break down due to the uncertainty introduced by shifting among peaks, which further degrades the correlation filter model. To alleviate the drift problem caused by occlusions, we propose a novel scheme that chooses a specific filter model according to the scenario. Specifically, an effective measurement function is designed to evaluate the quality of the filter response. A sophisticated strategy is employed to judge whether occlusion occurs and then to decide how to update the filter models. In addition, we take advantage of both a log-polar method and a pyramid-like approach to estimate the best scale of the target. We evaluate the proposed approach on the VOT2018 challenge and the OTB100 dataset; the experimental results show that the proposed tracker achieves promising performance compared with state-of-the-art trackers.
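
    The abstract does not spell out its measurement function, so the following is a hedged illustration only: the peak-to-sidelobe ratio (PSR) is a common quality score for correlation responses and shows the kind of occlusion test such a scheme could use. The function names and the threshold are hypothetical, not the paper's.

        import numpy as np

        def peak_to_sidelobe_ratio(response, exclude=5):
            """Score the sharpness of a correlation response map.

            One sharp peak (high PSR) suggests a reliable detection;
            several comparable peaks (low PSR) suggest occlusion.
            """
            py, px = np.unravel_index(response.argmax(), response.shape)
            mask = np.ones_like(response, dtype=bool)
            mask[max(0, py - exclude):py + exclude + 1,
                 max(0, px - exclude):px + exclude + 1] = False
            sidelobe = response[mask]
            return (response.max() - sidelobe.mean()) / (sidelobe.std() + 1e-12)

        def should_update_filter(response, threshold=6.0):
            # Skip the model update when response quality indicates occlusion.
            return peak_to_sidelobe_ratio(response) >= threshold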

    Arithmetic on Moran sets

    Let $(\mathcal{M}, c_k, n_k)$ be a class of Moran sets. We assume that the convex hull of any $E\in (\mathcal{M}, c_k, n_k)$ is $[0,1]$. Let $A,B$ be two non-empty sets in $\mathbb{R}$. Suppose that $f$ is a continuous function defined on an open set $U\subset \mathbb{R}^{2}$. Denote the continuous image of $f$ by \begin{equation*} f_{U}(A,B)=\{f(x,y):(x,y)\in (A\times B)\cap U\}. \end{equation*} In this paper, we prove the following result. Let $E_1,E_2\in(\mathcal{M}, c_k, n_k)$. If there exists some $(x_0,y_0)\in (E_1\times E_2)\cap U$ such that \begin{equation*} \sup_{k\geq 1}\left\{1-c_k n_k\right\} < \left\vert \frac{\partial_{y}f|_{(x_{0},y_{0})}}{\partial_{x}f|_{(x_{0},y_{0})}}\right\vert < \inf_{k\geq 1}\left\{\dfrac{c_k}{1-n_k c_k}\right\}, \end{equation*} then $f_U(E_1, E_2)$ has non-empty interior. Comment: 8 pages
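
    As a quick sanity check not taken from the abstract: the middle-third Cantor set $C$ is the Moran set with $c_k=\tfrac13$ and $n_k=2$ for every $k$, so
    \begin{equation*}
    \sup_{k\geq 1}\{1-c_k n_k\}=\tfrac13, \qquad \inf_{k\geq 1}\Big\{\frac{c_k}{1-n_k c_k}\Big\}=1,
    \end{equation*}
    and for $f(x,y)=x+\lambda y$ the hypothesis becomes $\tfrac13<\lambda<1$, consistent with the classical fact that $C+\lambda C$ contains an interval exactly when $\tfrac13\leq\lambda\leq 3$.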

    Corrected entropy of high dimensional black holes

    Using the corrected expression for the Hawking temperature derived from the tunneling formalism beyond the semiclassical approximation developed by \emph{Banerjee} and \emph{Majhi} \cite{beyond}, we calculate the corrected entropy of a high-dimensional Schwarzschild black hole and of a 5-dimensional Gauss-Bonnet (GB) black hole. It is shown that the corrected entropies of these two kinds of black hole agree with the corrected entropy formula derived from the tunneling method for an $(n+1)$-dimensional Friedmann-Robertson-Walker (FRW) universe \cite{FRW}. This feature strongly suggests a deep universality of that corrected entropy formula, which may not depend on the dimension of spacetime or on the theory of gravity. In addition, the leading-order correction always appears as the logarithm of the semiclassical entropy rather than as the logarithm of the black hole horizon area; this might imply that the logarithm of the semiclassical entropy is the more appropriate form for the quantum correction. Comment: 4 pages, 1 table, no figures, any comments are welcome! v2: 5 pages, some mistakes corrected
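
    The referenced formula is not reproduced in the abstract; as a hedged sketch of the structure being claimed, log-corrected entropies obtained from tunneling beyond the semiclassical approximation are typically of the form
    \begin{equation*}
    S = S_{\mathrm{BH}} + \alpha_1 \ln S_{\mathrm{BH}} + \frac{\alpha_2}{S_{\mathrm{BH}}} + \cdots,
    \end{equation*}
    where $S_{\mathrm{BH}}$ is the semiclassical entropy and the coefficients $\alpha_i$ depend on the theory; the abstract's point is that the logarithm involves $S_{\mathrm{BH}}$ itself rather than the horizon area $A$.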

    Series expansion in fractional calculus and fractional differential equations

    Fractional calculus is the calculus of differentiation and integration of non-integer orders. In a recent paper (Annals of Physics 323 (2008) 2756-2778), the Fundamental Theorem of Fractional Calculus is highlighted. Based on this theorem, we introduce a fractional series expansion method to fractional calculus. We define a kind of fractional Taylor series for an infinitely fractionally-differentiable function. Further, based on our definition, we generalize hypergeometric functions and derive the corresponding differential equations. For finitely fractionally-differentiable functions, we observe that the failure of infinite fractional differentiability is due to the presence of more than one fractional index. We expand functions with two fractional indices and show how this kind of series expansion helps to solve fractional differential equations. Comment: 15 pages, no figures
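
    The abstract does not state its definition; as a hedged illustration, a fractional Taylor series of order $\alpha$ about $x=a$, built from repeated fractional derivatives $D^{\alpha}$, is commonly written as
    \begin{equation*}
    f(x)=\sum_{k=0}^{\infty}\frac{\big((D^{\alpha})^{k}f\big)(a)}{\Gamma(k\alpha+1)}\,(x-a)^{k\alpha}, \qquad 0<\alpha\leq 1,
    \end{equation*}
    which reduces to the ordinary Taylor series at $\alpha=1$; the paper's exact definition may differ.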

    Lazy-CFR: fast and near optimal regret minimization for extensive games with imperfect information

    Counterfactual regret minimization (CFR) is the most popular algorithm for solving two-player zero-sum extensive games with imperfect information, and it achieves state-of-the-art performance in practice. However, the performance of CFR is not fully understood, since empirical results on the regret are much better than the upper bound proved in \cite{zinkevich2008regret}. Another issue is that CFR has to traverse the whole game tree in each round, which is time-consuming in large-scale games. In this paper, we present a novel technique, lazy update, which avoids traversing the whole game tree in CFR, together with a novel analysis of the regret of CFR with lazy update. Our analysis also applies to vanilla CFR and yields a much tighter regret bound than that in \cite{zinkevich2008regret}. Inspired by lazy update, we further present a novel CFR variant, named Lazy-CFR. Compared to traversing $O(|\mathcal{I}|)$ information sets in vanilla CFR, Lazy-CFR needs to traverse only $O(\sqrt{|\mathcal{I}|})$ information sets per round while keeping the regret bound almost the same, where $\mathcal{I}$ is the class of all information sets. As a result, Lazy-CFR enjoys a better convergence rate than vanilla CFR. Experimental results consistently show that Lazy-CFR outperforms vanilla CFR significantly.
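
    Lazy-CFR itself is not spelled out in the abstract; as a hedged sketch of the core subroutine, CFR derives the current strategy at each information set from cumulative counterfactual regrets via regret matching, which is exactly the per-information-set work that a lazy update scheme avoids redoing every round.

        import numpy as np

        def regret_matching(cumulative_regret):
            """Current strategy from a vector of cumulative regrets.

            Actions with positive cumulative regret get probability
            proportional to that regret; with no positive regret,
            play the uniform strategy.
            """
            positive = np.maximum(cumulative_regret, 0.0)
            total = positive.sum()
            if total > 0.0:
                return positive / total
            return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))

        # Example: regrets (2, -1, 1) give strategy (2/3, 0, 1/3).
        print(regret_matching(np.array([2.0, -1.0, 1.0])))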

    Conditional Generative Moment-Matching Networks

    Maximum mean discrepancy (MMD) has been successfully applied to learn deep generative models that characterize a joint distribution of variables via kernel mean embedding. In this paper, we present conditional generative moment-matching networks (CGMMN), which learn a conditional distribution given some input variables based on a conditional maximum mean discrepancy (CMMD) criterion. The learning is performed by stochastic gradient descent with the gradient calculated by back-propagation. We evaluate CGMMN on a wide range of tasks, including predictive modeling, contextual generation, and Bayesian dark knowledge, which distills knowledge from a Bayesian model by learning a relatively small CGMMN student network. Our results demonstrate competitive performance on all the tasks. Comment: 12 pages
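
    CMMD involves kernel mean embeddings of conditional distributions; as a simpler hedged sketch, the unconditional squared-MMD estimate that moment-matching networks minimize between generated samples x and data samples y looks like this (the RBF kernel and bandwidth choice are illustrative):

        import numpy as np

        def rbf_gram(x, y, sigma=1.0):
            # Gaussian RBF kernel matrix between the rows of x and y.
            sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-sq_dists / (2.0 * sigma ** 2))

        def mmd2(x, y, sigma=1.0):
            """Biased estimate of the squared MMD between two samples."""
            return (rbf_gram(x, x, sigma).mean()
                    + rbf_gram(y, y, sigma).mean()
                    - 2.0 * rbf_gram(x, y, sigma).mean())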

    Learning to Write Stylized Chinese Characters by Reading a Handful of Examples

    Automatically writing stylized Chinese characters is an attractive yet challenging task due to its wide applicability. In this paper, we propose a novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly generate Chinese characters. Specifically, we propose to capture the different characteristics of a Chinese character by disentangling the latent features into content-related and style-related components. Considering the complex shapes and structures involved, we incorporate structure information as prior knowledge into our framework to guide the generation. Our framework shows a powerful one-shot/low-shot generalization ability, inferring the style component from a single character with an unseen style. To the best of our knowledge, this is the first attempt to learn to write new-style Chinese characters by observing only one or a few examples. Extensive experiments demonstrate its effectiveness in generating different stylized Chinese characters by fusing the feature vectors corresponding to different contents and styles, which is of significant importance in real-world applications. Comment: Accepted by IJCAI 2018
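
    The abstract describes the architecture only at a high level; a minimal hedged sketch of the content/style fusion it describes might look as follows, where all dimensions, names, and the stand-in encoder are hypothetical rather than the paper's:

        import numpy as np

        def encode(image, content_dim=96, style_dim=32):
            # Placeholder for the trained encoder: a fixed random projection
            # stands in for the learned network, and the latent code is split
            # into content and style parts (dimensions are made up).
            rng = np.random.default_rng(42)
            w = rng.standard_normal((content_dim + style_dim, image.size))
            z = np.tanh(w @ image.ravel())
            return z[:content_dim], z[content_dim:]

        def write_in_style(content_glyph, style_glyph, decode):
            # One-/low-shot generation: the content code from one glyph is
            # fused with the style code inferred from another glyph.
            z_content, _ = encode(content_glyph)
            _, z_style = encode(style_glyph)
            return decode(np.concatenate([z_content, z_style]))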

    Multiple representations of real numbers on self-similar sets with overlaps

    Let $K$ be the attractor of the following IFS: $\{f_1(x)=\lambda x,\ f_2(x)=\lambda x+c-\lambda,\ f_3(x)=\lambda x+1-\lambda\}$, where $f_1(I)\cap f_2(I)\neq \emptyset$, $(f_1(I)\cup f_2(I))\cap f_3(I)=\emptyset$, and $I=[0,1]$ is the convex hull of $K$. The main results of this paper are as follows. $\sqrt{K}+\sqrt{K}=[0,2]$ if and only if $\sqrt{c}+1\geq 2\sqrt{1-\lambda}$, where $\sqrt{K}+\sqrt{K}=\{\sqrt{x}+\sqrt{y}:x,y\in K\}$. If $c\geq (1-\lambda)^2$, then \begin{equation*} \dfrac{K}{K}=\left\{\dfrac{x}{y}:x,y\in K,\ y\neq 0\right\}=[0,\infty). \end{equation*} As a consequence, we prove that the following conditions are equivalent: (1) for any $u\in [0,1]$, there are some $x,y\in K$ such that $u=x\cdot y$; (2) for any $u\in [0,1]$, there are some $x_1,x_2,\dots,x_{10}\in K$ such that $u=x_1+x_2=x_3-x_4=x_5\cdot x_6=x_7\div x_8=\sqrt{x_9}+\sqrt{x_{10}}$; (3) $c\geq (1-\lambda)^2$. Comment: We add a result in this version
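
    As a concrete instance not taken from the abstract: with $\lambda=\tfrac{2}{5}$ and $c=\tfrac{1}{2}$ we get $f_1(I)=[0,\tfrac{2}{5}]$, $f_2(I)=[\tfrac{1}{10},\tfrac{1}{2}]$, and $f_3(I)=[\tfrac{3}{5},1]$, so the overlap and separation conditions hold; moreover $c=\tfrac12\geq(1-\lambda)^2=\tfrac{9}{25}$ and $\sqrt{c}+1\approx 1.71\geq 2\sqrt{1-\lambda}\approx 1.55$, so for this attractor all of the representations above are available.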

    Fractional Vector Calculus and Fractional Special Function

    Fractional vector calculus is discussed in the spherical coordinate framework. A variation of the Legendre equation and a fractional Bessel equation are solved both by series expansion and numerically. Finally, we generalize the hypergeometric functions. Comment: 6 pages, 7 figures, revtex

    Smooth Neighbors on Teacher Graphs for Semi-supervised Learning

    The recently proposed self-ensembling methods have achieved promising results in deep semi-supervised learning by penalizing inconsistent predictions of unlabeled data under different perturbations. However, they only consider adding perturbations to each single data point, while ignoring the connections between data samples. In this paper, we propose a novel method, called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. The graph then serves as a similarity measure with respect to which the representations of "similar" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks: the error rates are 9.89% on CIFAR-10 with 4000 labels and 3.99% on SVHN with 500 labels. The improvements are especially significant when labels are scarce: on non-augmented MNIST with only 20 labels, the error rate is reduced from the previous 4.81% to 1.36%. Our method also shows robustness to noisy labels. Comment: Accepted as a spotlight at Computer Vision and Pattern Recognition (CVPR) 2018
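
    A hedged sketch of the neighbor loss the abstract describes (function names and the margin value are hypothetical): pairs that the teacher assigns the same predicted class are pulled together in feature space, and all other pairs are pushed at least a margin apart.

        import numpy as np

        def sntg_style_loss(features, teacher_logits, margin=1.0):
            """Contrastive smoothness loss over a teacher-built graph."""
            labels = teacher_logits.argmax(axis=1)   # teacher's hard predictions
            n = len(labels)
            total, pairs = 0.0, 0
            for i in range(n):
                for j in range(i + 1, n):
                    d = np.linalg.norm(features[i] - features[j])
                    if labels[i] == labels[j]:
                        total += d ** 2                     # neighbors: pull together
                    else:
                        total += max(0.0, margin - d) ** 2  # others: push apart
                    pairs += 1
            return total / max(pairs, 1)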