Modified mean curvature flow of star-shaped hypersurfaces in hyperbolic space
We define a new version of modified mean curvature flow (MMCF) in hyperbolic
space, which interestingly turns out to be the natural negative gradient flow of the energy functional defined by De Silva and Spruck in [DS09]. We show the existence, uniqueness and convergence of the
MMCF of complete embedded star-shaped hypersurfaces with fixed prescribed
asymptotic boundary at infinity. As an application, we recover the existence
and uniqueness of smooth complete hypersurfaces of constant mean curvature in
hyperbolic space with prescribed asymptotic boundary at infinity, which was
first shown by Guan and Spruck. Comment: 26 pages, 3 figures
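For orientation only (an illustrative convention, not the paper's exact definition): flows of this prescribed-mean-curvature type deform an embedding F of the hypersurface along its unit normal ν with speed given by the mean curvature H offset by the target constant σ,

\[ \frac{\partial F}{\partial t} \;=\; \big( H - \sigma \big)\,\nu , \]

up to the choice of sign for ν and whatever weighting the "modified" flow introduces to make it the negative gradient flow of the De Silva–Spruck energy. Stationary solutions of such a flow are precisely the hypersurfaces of constant mean curvature σ, which is why long-time existence and convergence recover the Guan–Spruck existence and uniqueness result.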
Short research report: exploring resilience development in a Taiwanese preschooler's narrative: an emerging theoretical model
Peer reviewed.
The Global Gauge Group Structure of F-theory Compactification with U(1)s
We show that F-theory compactifications with abelian gauge factors generally
exhibit a non-trivial global gauge group structure. The geometric origin of
this structure lies with the Shioda map of the Mordell--Weil generators. This
results in constraints on the U(1) charges of non-abelian matter consistent
with observations made throughout the literature. In particular, we find that
F-theory models featuring the Standard Model algebra actually realise the
precise gauge group [SU(3)xSU(2)xU(1)]/Z6. Furthermore, we explore the
relationship between the gauge group structure and geometric (un-)higgsing. In
an explicit class of models, we show that, depending on the global group
structure, an SU(2)xU(1) gauge theory can either unhiggs into an SU(2)xSU(2) or
an SU(3)xSU(2) theory. We also study implications of the charge constraints as a criterion for the F-theory 'swampland'. Comment: 37 pages; v2: improved derivation of global group structure in section 2, extended discussion on the 'swampland' conjecture in section 5, references added, v2 accepted for publication in JHEP; v3: typos corrected
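For context, the quotient is by the Z_6 subgroup of the center that acts trivially on every Standard Model representation; this is a standard group-theory fact, not specific to the F-theory construction. With hypercharge normalised so that the left-handed quark doublet has charge 1, a generator is

\[ \zeta \;=\; \Big( e^{2\pi i/3}\,\mathbf{1}_3,\; -\mathbf{1}_2,\; e^{2\pi i/6} \Big) \;\in\; SU(3)\times SU(2)\times U(1) , \]

acting on a field of hypercharge Y by the extra phase e^{2\pi i Y/6}. On the quark doublet (3,2)_1, for example, the total phase is e^{2\pi i(1/3+1/2+1/6)} = 1, and the same cancellation occurs for every Standard Model multiplet, so only [SU(3)xSU(2)xU(1)]/Z_6 acts faithfully on the spectrum.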
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and could achieve the lowest test error for many complex data sets when coupled with AdaBoost.
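The abstract does not spell out the update rule, so the following is a minimal sketch under assumptions: an axis-aligned variant in which one coordinate of the weight vector is chosen at random and an exact line search is done over the breakpoints of the piecewise-constant 0/1 loss. The function names and the non-worsening acceptance rule are illustrative choices, not the paper's exact algorithm.

    import numpy as np

    def zero_one_loss(margins, y):
        # fraction of examples whose predicted sign disagrees with the label
        return np.mean(np.sign(margins) != y)

    def random_coordinate_descent_01(X, y, n_iters=2000, seed=0):
        # X: (n, d) inputs (append a constant column to model a bias term),
        # y: labels in {-1, +1}.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = rng.standard_normal(d)
        margins = X @ w                           # cached X @ w, updated incrementally
        best = zero_one_loss(margins, y)
        for _ in range(n_iters):
            j = rng.integers(d)
            xj = X[:, j]
            nz = xj != 0
            if not nz.any():
                continue
            # along coordinate j, the 0/1 loss only changes at these step sizes
            breaks = np.sort(-margins[nz] / xj[nz])
            cands = np.concatenate(([breaks[0] - 1.0],
                                    0.5 * (breaks[:-1] + breaks[1:]),
                                    [breaks[-1] + 1.0]))
            losses = np.array([zero_one_loss(margins + t * xj, y) for t in cands])
            k = int(np.argmin(losses))
            if losses[k] <= best:                 # accept only non-worsening steps
                w[j] += cands[k]
                margins += cands[k] * xj
                best = losses[k]
        return w

Because the 0/1 loss is piecewise constant along any single coordinate, checking the midpoints between consecutive breakpoints (plus one candidate beyond each end) is an exact line search, so the training loss never increases across iterations.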
Effects of Cosmic String Velocities and the Origin of Globular Clusters
With the hypothesis that cosmic string loops act as seeds for globular
clusters in mind, we study the role that velocities of these strings will play
in determining the mass distribution of globular clusters. Loops with high
enough velocities will not form compact, roughly spherical objects and hence cannot be the seeds for globular clusters. We compute the expected number
density and mass function of globular clusters as a function of both the string
tension and the peak loop velocity, and compare the results with the
observational data on the mass distribution of globular clusters in our Milky
Way. We determine the critical peak string loop velocity above which the
agreement between the string loop model for the origin of globular clusters
(neglecting loop velocities) and observational data is lost. Comment: 8 pages, 5 figures
Cycle-Consistent Deep Generative Hashing for Cross-Modal Retrieval
In this paper, we propose a novel deep generative approach to cross-modal
retrieval to learn hash functions in the absence of paired training samples
through the cycle-consistency loss. Our proposed approach employs an adversarial training scheme to learn a couple of hash functions enabling translation between modalities while assuming an underlying semantic relationship between them. To endow the hash codes of each input-output pair with semantics, a cycle-consistency loss is further imposed on top of the adversarial training to strengthen the correlations between inputs and corresponding outputs. Our approach is generative: it learns hash functions such that the learned hash codes maximally correlate each input-output correspondence, while also regenerating the inputs so as to minimize the information loss. The learning-to-hash embedding is thus performed
to jointly optimize the parameters of the hash functions across modalities as
well as the associated generative models. Extensive experiments on a variety of
large-scale cross-modal data sets demonstrate that our proposed method achieves
better retrieval results than the state of the art. Comment: To appear in IEEE Trans. Image Processing. arXiv admin note: text overlap with arXiv:1703.10593 by other authors
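As a rough illustration of how a cycle-consistency term couples two modality-specific hash functions, here is a minimal PyTorch-style sketch. The module names (g_i2t, g_t2i, HashHead) and dimensions are hypothetical placeholders; the paper's actual architecture, adversarial discriminators, and loss weights are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HashHead(nn.Module):
        # maps modality features to relaxed binary codes in (-1, 1)
        def __init__(self, dim_in, n_bits):
            super().__init__()
            self.fc = nn.Linear(dim_in, n_bits)
        def forward(self, x):
            return torch.tanh(self.fc(x))     # take sign() at retrieval time

    def cycle_hashing_losses(img_feat, txt_feat, g_i2t, g_t2i, hash_img, hash_txt):
        fake_txt = g_i2t(img_feat)            # image -> text feature space
        fake_img = g_t2i(txt_feat)            # text  -> image feature space
        # cycle-consistency: translating forth and back should recover the input
        loss_cyc = F.l1_loss(g_t2i(fake_txt), img_feat) \
                 + F.l1_loss(g_i2t(fake_img), txt_feat)
        # code agreement: an input and its translation should hash alike
        loss_code = F.mse_loss(hash_img(img_feat), hash_txt(fake_txt)) \
                  + F.mse_loss(hash_txt(txt_feat), hash_img(fake_img))
        return loss_cyc, loss_code

    # toy usage with linear "generators" (purely illustrative dimensions)
    g_i2t, g_t2i = nn.Linear(512, 300), nn.Linear(300, 512)
    hash_img, hash_txt = HashHead(512, 64), HashHead(300, 64)
    img_feat, txt_feat = torch.randn(8, 512), torch.randn(8, 300)
    loss_cyc, loss_code = cycle_hashing_losses(img_feat, txt_feat,
                                               g_i2t, g_t2i, hash_img, hash_txt)

In the full method these terms would be trained jointly with the adversarial objectives mentioned in the abstract; binary codes are obtained by taking the sign of the relaxed outputs at retrieval time.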
The Labor Market Effects of National Health Insurance: Evidence From Taiwan
This paper investigates the impacts of national health insurance on the labor market by considering the case of Taiwan, which implemented national health insurance in March 1995. Taiwan’s national health insurance is financed by premiums, which are proportional to an employee’s salary. These premiums may introduce distortions to the labor market. Based on repeated cross-sections of individual data, we find that, on average, private sector employees’ work hours declined relative to their public sector counterparts, while their relative wage rates were almost unchanged with the introduction of national health insurance. The results suggest that neither private sector employers nor their employees were able to shift their premium burden to each other. Keywords: National Health Insurance, Labor Supply, Wage Rate, Difference-in-Differences
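For readers unfamiliar with the identification strategy, the private-versus-public comparison around the March 1995 reform corresponds to a generic difference-in-differences specification of the form below (an illustrative form, not the paper's exact regression):

\[ y_{it} \;=\; \alpha \;+\; \beta\,\mathrm{Private}_i \;+\; \gamma\,\mathrm{Post}_t \;+\; \delta\,(\mathrm{Private}_i \times \mathrm{Post}_t) \;+\; X_{it}'\theta \;+\; \varepsilon_{it}, \]

where y_{it} is hours worked or the log wage, Post_t marks observations after March 1995, X_{it} collects individual controls, and the coefficient δ on the interaction is the difference-in-differences estimate of the insurance effect on private-sector workers relative to public-sector workers.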
