Effective Occlusion Handling for Fast Correlation Filter-based Trackers
Correlation filter-based trackers suffer heavily from multiple peaks in their
response maps caused by occlusions. Moreover, the whole tracking pipeline may
break down due to the uncertainty introduced by shifting among peaks, which
further degrades the correlation filter model. To alleviate the drift caused
by occlusions, we propose a novel scheme that selects the appropriate filter
model according to the current scenario. Specifically, an effective
measurement function is designed to evaluate the quality of the filter
response. A dedicated strategy is then employed to judge whether an occlusion
has occurred and to decide how to update the filter models. In addition, we
take advantage of both a log-polar method and a pyramid-like approach to
estimate the best scale of the target. We evaluate the proposed approach on
the VOT2018 challenge and the OTB100 dataset; the experimental results show
that the proposed tracker achieves promising performance compared with
state-of-the-art trackers.
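The abstract does not specify the measurement function; a common proxy for
response quality in correlation filter tracking is the peak-to-sidelobe ratio
(PSR), and the Python sketch below (with a hypothetical psr_threshold and
learning rate) only illustrates how such a score could gate the model update,
not the paper's exact scheme.

import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """Score a 2-D correlation response map: a sharp single peak gives a high PSR."""
    r0, c0 = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r0, c0]
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def maybe_update_filter(old_model, new_model, response, psr_threshold=6.0, lr=0.02):
    # Skip the update when the response quality suggests occlusion,
    # so the filter is not contaminated by the occluder.
    if peak_to_sidelobe_ratio(response) < psr_threshold:
        return old_model                               # keep the previous, clean filter
    return (1 - lr) * old_model + lr * new_model       # standard linear interpolation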
Arithmetic on Moran sets
Let be a class of Moran sets. We assume that the
convex hull of any is . Let be two
non-empty sets in . Suppose that is a continuous function
defined on an open set . Denote the continuous image
of by \begin{equation*} f_{U}(A,B)=\{f(x,y):(x,y)\in (A\times B)\cap U\}.
\end{equation*} In this paper, we prove the following result. Let
. If there exists some such that then the image
f_{U}(A,B) has non-empty interior. Comment: 8 pages
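As a classical special case that conveys the flavour of such results (not the
paper's theorem), take A = B = C, the middle-third Cantor set, and
f(x,y) = x + y on U = \mathbb{R}^{2}: \begin{equation*} f_{U}(C,C) = C + C =
[0,2], \end{equation*} which clearly has non-empty interior even though C
itself has empty interior and zero Lebesgue measure.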
Corrected entropy of high dimensional black holes
Using the corrected expression for the Hawking temperature derived from the
tunneling formalism beyond the semiclassical approximation developed by
\emph{Banerjee} and \emph{Majhi}\cite{beyond}, we calculate the corrected
entropy of a higher-dimensional Schwarzschild black hole and a 5-dimensional
Gauss-Bonnet (GB) black hole. It is shown that the corrected entropies of these
two kinds of black holes agree with the corrected entropy formula derived from
the tunneling method for a -dimensional Friedmann-Robertson-Walker (FRW)
universe\cite{FRW}. This feature strongly suggests a deep universality of the
corrected entropy formula, which may not depend on the dimension of spacetime
or on the gravity theory. In addition, the leading-order correction always
appears as the logarithm of the semiclassical entropy rather than the
logarithm of the horizon area, which might imply that the logarithm of the
semiclassical entropy is more appropriate for the quantum correction than the
logarithm of the area. Comment: 4 pages, 1 table, no figures, any comments are
welcome! v2: 5 pages, some mistakes corrected
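The formula itself is not reproduced in the abstract; schematically, entropy
corrections of this kind take the form \begin{equation*} S = S_{0} +
\alpha_{1}\ln S_{0} + \frac{\alpha_{2}}{S_{0}} + \cdots, \end{equation*} where
S_{0} is the semiclassical entropy (reducing to the area law A/4 only in
Einstein gravity) and the coefficients \alpha_{i} are theory-dependent; this
expansion is a generic sketch, not the paper's equation.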
Series expansion in fractional calculus and fractional differential equations
Fractional calculus is the calculus of differentiation and integration of
non-integer orders. In a recent paper (Annals of Physics 323 (2008)
2756-2778), the Fundamental Theorem of Fractional Calculus is highlighted.
Based on this theorem, in this paper we introduce a fractional series
expansion method for fractional calculus. We define a kind of fractional
Taylor series for infinitely fractionally-differentiable functions. Further,
based on our definition we generalize the hypergeometric functions and derive
the corresponding differential equations. For finitely
fractionally-differentiable functions, we observe that the failure of infinite
fractional differentiability is due to the presence of more than one
fractional index. We expand functions with two fractional indices and show how
this kind of series expansion can help to solve fractional differential
equations. Comment: 15 pages, no figures
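The abstract does not state the expansion explicitly; a commonly used form of
a fractional Taylor series of order \alpha, given here only as an illustrative
sketch rather than the paper's definition, is \begin{equation*} f(x) =
\sum_{k=0}^{\infty} \frac{\bigl(D^{\alpha k} f\bigr)(0)}{\Gamma(\alpha k +
1)}\, x^{\alpha k}, \qquad 0 < \alpha \leq 1, \end{equation*} where D^{\alpha}
denotes a fractional derivative (for instance in the Caputo sense) and \Gamma
is the gamma function; for \alpha = 1 it reduces to the ordinary Taylor series
at the origin.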
Lazy-CFR: fast and near optimal regret minimization for extensive games with imperfect information
Counterfactual regret minimization (CFR) is the most popular algorithm for
solving two-player zero-sum extensive games with imperfect information and
achieves state-of-the-art performance in practice. However, the performance of
CFR is not fully understood, since empirical results on the regret are much
better than the upper bound proved in \cite{zinkevich2008regret}. Another
issue is that CFR has to traverse the whole game tree in each round, which is
time-consuming in large-scale games. In this paper, we present a novel
technique, lazy update, which avoids traversing the whole game tree in CFR,
together with a novel analysis of the regret of CFR with lazy update. Our
analysis can also be applied to vanilla CFR, resulting in a much tighter
regret bound than that in \cite{zinkevich2008regret}. Inspired by lazy update,
we further present a novel CFR variant, named Lazy-CFR. Whereas vanilla CFR
traverses every information set in each round, Lazy-CFR needs to traverse only
a fraction of the information sets per round while keeping the regret bound
almost the same. As a result, Lazy-CFR enjoys a better convergence result than
vanilla CFR. Experimental results consistently show that Lazy-CFR
significantly outperforms vanilla CFR.
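For context, the per-information-set strategy update in vanilla CFR is plain
regret matching; the Python sketch below shows only that standard step, not
the paper's lazy-update bookkeeping.

import numpy as np

def regret_matching(cumulative_regret):
    """Next strategy at one information set, from cumulative counterfactual regrets."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    # If no action has positive regret, fall back to the uniform strategy.
    return np.full_like(cumulative_regret, 1.0 / len(cumulative_regret))

# Example: regrets accumulated for three actions at one information set.
print(regret_matching(np.array([2.0, -1.0, 0.5])))   # strategy: 0.8, 0.0, 0.2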
Conditional Generative Moment-Matching Networks
Maximum mean discrepancy (MMD) has been successfully applied to learn deep
generative models for characterizing a joint distribution of variables via
kernel mean embedding. In this paper, we present conditional generative moment-
matching networks (CGMMN), which learn a conditional distribution given some
input variables based on a conditional maximum mean discrepancy (CMMD)
criterion. The learning is performed by stochastic gradient descent with the
gradient calculated by back-propagation. We evaluate CGMMN on a wide range of
tasks, including predictive modeling, contextual generation, and Bayesian dark
knowledge, which distills knowledge from a Bayesian model by learning a
relatively small CGMMN student network. Our results demonstrate competitive
performance in all the tasks. Comment: 12 pages
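CGMMN builds on the kernel two-sample criterion; the Python sketch below shows
a plain (unconditional) squared MMD estimate with an RBF kernel as a reference
point only, since the conditional CMMD of the paper additionally involves
conditional kernel mean embeddings that are not shown here.

import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(X, Y, bandwidth=1.0):
    """Biased estimate of MMD^2 between two samples X and Y."""
    return (rbf_kernel(X, X, bandwidth).mean()
            + rbf_kernel(Y, Y, bandwidth).mean()
            - 2.0 * rbf_kernel(X, Y, bandwidth).mean())

# Example: compare generator samples with data samples in 2-D.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(100, 2))
generated = rng.normal(0.5, 1.0, size=(100, 2))
print(mmd_squared(data, generated))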
Learning to Write Stylized Chinese Characters by Reading a Handful of Examples
Automatically writing stylized Chinese characters is an attractive yet
challenging task due to its wide range of applications. In this paper, we
propose a novel framework named Style-Aware Variational Auto-Encoder (SA-VAE)
to flexibly generate Chinese characters. Specifically, we propose to capture
the different characteristics of a Chinese character by disentangling the
latent features into content-related and style-related components. Considering
the complex shapes and structures of Chinese characters, we incorporate
structure information as prior knowledge into our framework to guide the
generation. Our framework shows a powerful one-shot/low-shot generalization
ability by inferring the style component given a character with an unseen
style. To the best of our knowledge, this is the first attempt to learn to
write new-style Chinese characters by observing only one or a few examples.
Extensive experiments demonstrate its effectiveness in generating different
stylized Chinese characters by fusing the feature vectors corresponding to
different contents and styles, which is of significant importance in
real-world applications. Comment: Accepted by IJCAI 2018
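The exact SA-VAE architecture is not given in the abstract; the PyTorch sketch
below only illustrates the disentangling idea (separate content and style
codes feeding a single decoder), with toy, hypothetical layer sizes.

import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    """Toy encoder/decoder whose latent code is split into content and style parts."""
    def __init__(self, image_dim=64 * 64, content_dim=32, style_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(image_dim, 512), nn.ReLU())
        self.to_content = nn.Linear(512, 2 * content_dim)   # mean and log-variance
        self.to_style = nn.Linear(512, 2 * style_dim)
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Sigmoid())

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)                  # reparameterization trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.encoder(x)
        content = self.sample(self.to_content(h))
        style = self.sample(self.to_style(h))
        return self.decoder(torch.cat([content, style], dim=-1))

# One-/low-shot idea: infer the style code from a single reference glyph and
# combine it with the content codes of other characters before decoding.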
Multiple representations of real numbers on self-similar sets with overlaps
Let be the attractor of the following IFS
where and is the convex hull of . The main results of
this paper are as follows: if and only if
where
. If ,
then As a consequence, we prove that the
following conditions are equivalent:
(1) For any , there are some such that
(2) For any , there are some such that
(3) . Comment: We add a result in this version.
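As a toy illustration of the phenomenon in the title (not the setting of this
paper), consider the overlapping IFS on [0,1] given by \begin{equation*}
f_{0}(x)=\tfrac{x}{2},\qquad f_{1}(x)=\tfrac{x}{2}+\tfrac{1}{4},\qquad
f_{2}(x)=\tfrac{x}{2}+\tfrac{1}{2}, \end{equation*} whose attractor is [0,1].
Since f_{0}([0,1])\cap f_{1}([0,1])=[\tfrac14,\tfrac12] has positive length,
every point of this overlap admits at least two distinct codings, i.e.
multiple representations by the IFS.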
Fractional Vector Calculus and Fractional Special Function
Fractional vector calculus is discussed in the spherical coordinate framework.
A variation of the Legendre equation and a fractional Bessel equation are
solved both by series expansion and numerically. Finally, we generalize the
hypergeometric functions. Comment: 6 pages, 7 figures, RevTeX
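For orientation, the classical Bessel equation of order \nu is
\begin{equation*} x^{2}\frac{d^{2}y}{dx^{2}} + x\frac{dy}{dx} +
\left(x^{2}-\nu^{2}\right)y = 0; \end{equation*} the fractional version
studied in the paper is not reproduced in the abstract, and replacing the
derivatives by fractional ones of order \alpha is only the obvious guess, not
necessarily the equation used there.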
Smooth Neighbors on Teacher Graphs for Semi-supervised Learning
Recently proposed self-ensembling methods have achieved promising results in
deep semi-supervised learning by penalizing inconsistent predictions on
unlabeled data under different perturbations. However, they only consider
adding perturbations to each individual data point, while ignoring the
connections between data samples. In this paper, we propose a novel method,
called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is
constructed based on the predictions of the teacher model, i.e., the implicit
self-ensemble of models. The graph then serves as a similarity measure with
respect to which the representations of "similar" neighboring points are
learned to be smooth on the low-dimensional manifold. We achieve
state-of-the-art results on semi-supervised learning benchmarks: the error
rates are 9.89% on CIFAR-10 with 4000 labels and 3.99% on SVHN with 500
labels. The improvements are especially significant when labels are scarce: on
non-augmented MNIST with only 20 labels, the error rate is reduced from the
previous 4.81% to 1.36%. Our method also shows robustness to noisy labels.
Comment: Accepted as a spotlight at Computer Vision and Pattern Recognition
(CVPR) 2018
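The abstract does not spell out the loss; the PyTorch sketch below shows one
way to encode the neighbor-smoothing idea over a mini-batch (pull together
features of points the teacher labels identically, push the rest apart up to a
margin), where the margin value and the use of hard teacher pseudo-labels are
assumptions rather than details taken from the paper.

import torch
import torch.nn.functional as F

def neighbor_smoothing_loss(features, teacher_labels, margin=1.0):
    """Graph-based smoothness term over a mini-batch.

    features:       (B, D) student feature vectors
    teacher_labels: (B,)   hard pseudo-labels from the teacher model
    """
    dist = torch.cdist(features, features)                    # pairwise distances
    same = (teacher_labels[:, None] == teacher_labels[None, :]).float()
    # Similar pairs: squared distance; dissimilar pairs: squared hinge on the margin.
    pair_loss = same * dist.pow(2) + (1.0 - same) * F.relu(margin - dist).pow(2)
    off_diag = 1.0 - torch.eye(len(features), device=features.device)
    return (pair_loss * off_diag).sum() / off_diag.sum()      # average over ordered pairs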
