Quantum Structure of Field Theory and Standard Model Based on Infinity-free Loop Regularization/Renormalization
To better understand the quantum structure of field theory and the standard model
in particle physics, it is necessary to investigate carefully the divergence
structure of quantum field theories (QFTs) and to work out a consistent framework
for avoiding infinities. Divergences have plagued the subject since the
development of quantum electrodynamics in the 1930s, and their treatment via the
renormalization scheme has not satisfied all physicists; Dirac and Feynman, among
others, raised serious criticisms. Renormalization group analysis reveals that
QFTs can in general be defined fundamentally with a physically meaningful energy
scale, which motivates us to develop a new symmetry-preserving
and infinity-free regularization scheme called loop regularization (LORE). A
simple regularization prescription in LORE is realized based on a manifest
postulation that a loop divergence with a power counting dimension larger than
or equal to the space-time dimension must vanish. The LORE method is realized
without modifying the original theory; it renders divergent Feynman loop
integrals well defined, maintaining the divergence structure while preserving
the basic symmetries of the original theory. The crucial point in LORE is the
presence of two intrinsic energy scales which play the roles of ultraviolet
cut-off and infrared cut-off to avoid infinities. The key concept
in LORE is the introduction of irreducible loop integrals (ILIs) on which the
regularization prescription acts; this leads to a set of gauge-invariance
consistency conditions between the regularized tensor-type and scalar-type
ILIs. The evaluation of ILIs with ultraviolet-divergence-preserving (UVDP)
parametrization naturally leads to Bjorken-Drell's analogy between Feynman
diagrams and electric circuits. The LORE method has been shown to be applicable
to both underlying and effective QFTs.
Comment: 53 pages, 14 figures, the article in honor of Freeman Dyson's 90th birthday, minor typos corrected, published version
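
For orientation, here is a minimal sketch, reconstructed from the LORE literature rather than quoted from this abstract, of the one-loop scalar-type and tensor-type ILIs and the gauge-invariance consistency conditions mentioned above; \mathcal{M}^2 denotes a generic mass factor built from masses and external momenta, and the superscript R marks regularized integrals:

  I_{-2\alpha} = \int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{(k^2 - \mathcal{M}^2)^{2+\alpha}}, \qquad
  I_{-2\alpha}^{\mu\nu} = \int \frac{d^4 k}{(2\pi)^4} \, \frac{k^{\mu} k^{\nu}}{(k^2 - \mathcal{M}^2)^{3+\alpha}}

  % \alpha = -1 gives the quadratically divergent I_2, \alpha = 0 the logarithmically divergent I_0
  I_2^{R\,\mu\nu} = \tfrac{1}{2}\, g^{\mu\nu}\, I_2^{R}, \qquad
  I_0^{R\,\mu\nu} = \tfrac{1}{4}\, g^{\mu\nu}\, I_0^{R}

A regularization satisfying these relations between tensor-type and scalar-type ILIs preserves the gauge symmetry of the original theory at one loop.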
Conformal Scaling Gauge Symmetry and Inflationary Universe
Considering the conformal scaling gauge symmetry as a fundamental symmetry of
nature in the presence of gravity, a scalar field is required and used to
describe the scale behavior of the universe. For the scalar field to be a
physical field, a gauge field must be introduced. A gauge-invariant
potential action is constructed from the scalar field and a real
Wilson-like line element of the gauge field. In particular, the conformal
scaling gauge symmetry can be broken down explicitly via gauge fixing to match
the Einstein-Hilbert action of gravity. As a nontrivial background-field
solution of pure gauge has minimal energy in gauge interactions, the
evolution of the universe at early times is dominated by the potential energy
of the background field, characterized by a scalar field. Since the pure-gauge
background field leads to an exponential potential model for the scalar field, the
universe is driven by power-law inflation, the scale factor growing as a power
of time. The power-law index is determined by a basic gauge-fixing parameter.
For the gauge-fixing scale being the Planck mass, we are led to a predictive model.
Comment: 12 pages, RevTex, no figure
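
For context, the generic link between an exponential potential and power-law inflation is the standard result below (shown here as background; the paper's own relation ties the index to its gauge-fixing parameter, which this abstract's stripped formulas expressed):

  V(\phi) = V_0 \, e^{-\lambda \phi / M_{\rm Pl}} \quad \Longrightarrow \quad a(t) \propto t^{p}, \qquad p = \frac{2}{\lambda^{2}},

with accelerated expansion (\ddot{a} > 0) requiring p > 1, i.e. \lambda^{2} < 2.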
Constrained Deep Transfer Feature Learning and its Applications
Feature learning with deep models has achieved impressive results for both
data representation and classification for various vision tasks. Deep feature
learning, however, typically requires a large amount of training data, which
may not be feasible for some application domains. Transfer learning is one
approach to alleviating this problem: it transfers data from a data-rich
source domain to a data-scarce target domain. Existing transfer learning methods
typically perform one-shot transfer learning and often ignore the specific
properties that the transferred data must satisfy. To address these issues, we
introduce a constrained deep transfer feature learning method that performs
transfer learning and feature learning simultaneously: transfer learning is
carried out iteratively in a progressively improving feature space, narrowing
the gap between the source and target domains so that source data can be
transferred to the target domain effectively.
Furthermore, we propose to exploit the target domain knowledge and incorporate
such prior knowledge as a constraint during transfer learning to ensure that
the transferred data satisfies certain properties of the target domain. To
demonstrate the effectiveness of the proposed constrained deep transfer feature
learning method, we apply it to thermal feature learning for eye detection by
transferring from the visible domain. We also apply the proposed method to
cross-view facial expression recognition as a second application. The
experimental results demonstrate the effectiveness of the proposed method for
both applications.
Comment: International Conference on Computer Vision and Pattern Recognition, 201
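
To make the iterative scheme concrete, here is a minimal, illustrative Python sketch of alternating transfer and feature learning under a target-domain constraint. Every component is a simplistic stand-in chosen for runnability, not the authors' method: a PCA encoder in place of a deep model, a nearest-neighbor gate in place of the learned transfer step, and a norm bound in place of the paper's target-domain priors.

import numpy as np

def learn_features(X, dim=8):
    """Stand-in for deep feature learning: a PCA encoder via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:dim].T
    return lambda Z: (Z - mu) @ W  # project samples into the feature space

def transfer_source(src, z_src, z_tgt):
    """Stand-in transfer step: keep the source samples whose features lie
    closest to the target features (a one-nearest-neighbor gate)."""
    d = np.linalg.norm(z_src[:, None, :] - z_tgt[None, :, :], axis=-1).min(axis=1)
    return src[d < np.median(d)]

def satisfies_constraint(X, max_norm=10.0):
    """Stand-in target-domain prior: an upper bound on the sample norm."""
    return np.linalg.norm(X, axis=1) < max_norm

def constrained_deep_transfer(source_X, target_X, n_iters=5):
    """Alternate data transfer and feature learning in a progressively
    improving feature space, keeping only constraint-satisfying samples."""
    pool = target_X
    encode = learn_features(pool)
    for _ in range(n_iters):
        moved = transfer_source(source_X, encode(source_X), encode(target_X))
        moved = moved[satisfies_constraint(moved)]  # enforce the constraint
        pool = np.vstack([target_X, moved])         # enlarge the training pool
        encode = learn_features(pool)               # refine the feature space
    return encode

# Toy usage on synthetic data (target slightly shifted from source).
rng = np.random.default_rng(0)
encoder = constrained_deep_transfer(rng.normal(size=(200, 16)),
                                    rng.normal(size=(20, 16)) + 0.5)

The point of the loop is the one the abstract makes: each round of transfer enlarges the training pool, which in turn yields a better feature space for the next round of transfer.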
Facial Landmark Detection: a Literature Survey
The locations of the fiducial facial landmark points around facial components
and facial contour capture the rigid and non-rigid facial deformations due to
head movements and facial expressions. They are hence important for various
facial analysis tasks. Many facial landmark detection algorithms have been
developed to automatically detect those key points over the years, and in this
paper, we perform an extensive review of them. We classify the facial landmark
detection algorithms into three major categories: holistic methods, Constrained
Local Model (CLM) methods, and regression-based methods. They differ in the
ways to utilize the facial appearance and shape information. The holistic
methods explicitly build models to represent the global facial appearance and
shape information. The CLMs explicitly leverage the global shape model but
build local appearance models. The regression-based methods implicitly
capture facial shape and appearance information. For algorithms within each
category, we discuss their underlying theories as well as their differences. We
also compare their performances on both controlled and in-the-wild benchmark
datasets, under varying facial expressions, head poses, and occlusions. Based on
the evaluations, we point out their respective strengths and weaknesses. There
is also a separate section to review the latest deep learning-based algorithms.
The survey also includes a listing of the benchmark databases and existing
software. Finally, we identify future research directions, including combining
methods in different categories to leverage their respective strengths to solve
landmark detection "in-the-wild".
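
To illustrate the regression-based category, here is a minimal Python sketch of the cascaded-regression idea: starting from a mean shape, each stage regresses a shape update from appearance features sampled at the current landmark estimates. The pixel-intensity features and linear regressors are deliberately simple stand-ins for the SIFT/HOG/CNN features and stronger learners used by the surveyed algorithms.

import numpy as np

def extract_features(image, shape):
    """Stand-in appearance features: pixel intensities sampled at the current
    landmark estimates (surveyed methods use SIFT/HOG/CNN features instead)."""
    h, w = image.shape
    xs = np.clip(shape[:, 0].astype(int), 0, w - 1)
    ys = np.clip(shape[:, 1].astype(int), 0, h - 1)
    return image[ys, xs]

def train_cascade(images, shapes, mean_shape, n_stages=3):
    """Fit one linear regressor per stage, mapping features to a shape update."""
    cascade = []
    current = np.repeat(mean_shape[None], len(images), axis=0)
    for _ in range(n_stages):
        F = np.array([extract_features(im, s) for im, s in zip(images, current)])
        F = np.hstack([F, np.ones((len(F), 1))])          # append a bias term
        dS = (shapes - current).reshape(len(images), -1)  # residual shape updates
        R, *_ = np.linalg.lstsq(F, dS, rcond=None)        # least-squares regressor
        cascade.append(R)
        current = current + (F @ R).reshape(current.shape)
    return cascade

def detect(image, mean_shape, cascade):
    """Start from the mean shape and refine it stage by stage."""
    shape = mean_shape.copy()
    for R in cascade:
        f = np.append(extract_features(image, shape), 1.0)
        shape = shape + (f @ R).reshape(shape.shape)
    return shape

Replacing the per-stage linear regressor with random ferns or regression trees recovers the flavor of well-known cascaded methods, but the refine-from-the-mean-shape loop is the same.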
