Local uniqueness of vortices for 2D steady Euler flow
We study the steady planar Euler flow in a bounded simply connected domain,
where the vortex function and the vorticity strength are prescribed. By
studying the location and local uniqueness of vortices, we prove
that the vorticity method and the stream function method actually give the same
solution. We also show that if the vorticity of flow is located near an
isolated minimum point and non-degenerate critical point of the Kirchhoff-Routh
function, it must be stable in the nonlinear sense.
Comment: 47 pages. arXiv admin note: text overlap with arXiv:1703.0986
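For orientation, the Kirchhoff-Routh function referred to above is the classical Hamiltonian governing the positions of point vortices in a bounded domain. Writing $G$ for the Green's function of $-\Delta$ with Dirichlet boundary condition, $H$ for its regular (Robin) part, and $\kappa_i$ for the vortex strengths, it takes, up to sign and normalization conventions that vary across the literature, the form:

```latex
W(x_1,\dots,x_k) \;=\; \sum_{i \neq j} \kappa_i \kappa_j \, G(x_i, x_j)
\;-\; \sum_{i=1}^{k} \kappa_i^{2} \, H(x_i, x_i).
```

For a single vortex ($k = 1$) this reduces to a multiple of the Robin function $H(x, x)$, whose isolated non-degenerate critical points are the locations appearing in the stability statement.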
Multiple nodal solutions of nonlinear Choquard equations
In this paper, we consider the existence of multiple nodal solutions of the
nonlinear Choquard equation
\begin{equation*}
(P)\qquad
\begin{cases}
-\Delta u + u = (|x|^{-1} \ast |u|^p)|u|^{p-2}u & \text{in } \mathbb{R}^3,\\
u \in H^1(\mathbb{R}^3),
\end{cases}
\end{equation*}
where $p$ lies in a suitable range. We show that for any positive integer $k$,
problem $(P)$ has at least one radially symmetric solution that changes sign exactly $k$ times.
Reading Scene Text in Deep Convolutional Sequences
We develop a Deep-Text Recurrent Network (DTRN) that regards scene text
reading as a sequence labelling problem. We leverage recent advances of deep
convolutional neural networks to generate an ordered high-level sequence from a
whole word image, avoiding the difficult character segmentation problem. Then a
deep recurrent model, building on long short-term memory (LSTM), is developed
to robustly recognize the generated CNN sequences, departing from most existing
approaches recognising each character independently. Our model has a number of
appealing properties in comparison to existing scene text recognition methods:
(i) It can recognise highly ambiguous words by leveraging meaningful context
information, allowing it to work reliably without either pre- or
post-processing; (ii) the deep CNN feature is robust to various image
distortions; (iii) it retains the explicit order information in word image,
which is essential to discriminate word strings; (iv) the model does not depend
on pre-defined dictionary, and it can process unknown words and arbitrary
strings. Code for the DTRN will be made available.
Comment: To appear in the 13th AAAI Conference on Artificial Intelligence
(AAAI-16), 2016
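The abstract above frames scene-text reading as sequence labelling: the CNN emits an ordered sequence of per-position predictions, which the LSTM model turns into a word. The abstract does not spell out the decoding step, but sequence-labelling recognizers of this kind commonly use CTC-style best-path decoding (merge consecutive repeats, then drop blanks). The blank index and the toy label set below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def best_path_decode(logits, blank=0):
    """Greedy (best-path) CTC-style decoding: take the argmax label at
    each timestep, merge consecutive repeats, then drop blanks."""
    path = np.argmax(logits, axis=1)  # best label at each timestep
    collapsed = [int(p) for i, p in enumerate(path)
                 if i == 0 or p != path[i - 1]]
    return [p for p in collapsed if p != blank]

# Toy example: 6 timesteps, 4 classes (0 = blank, 1='c', 2='a', 3='t').
logits = np.array([
    [0.1, 0.8, 0.05, 0.05],   # 'c'
    [0.1, 0.8, 0.05, 0.05],   # 'c' again -> merged with previous
    [0.9, 0.02, 0.04, 0.04],  # blank -> dropped
    [0.1, 0.05, 0.8, 0.05],   # 'a'
    [0.1, 0.05, 0.05, 0.8],   # 't'
    [0.9, 0.02, 0.04, 0.04],  # blank -> dropped
])
print(best_path_decode(logits))  # → [1, 2, 3], i.e. "cat"
```

Because repeats are merged before blanks are removed, the blank symbol is what allows genuinely doubled letters (e.g. "ll") to survive decoding.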
Single Shot Text Detector with Regional Attention
We present a novel single-shot text detector that directly outputs word-level
bounding boxes in a natural image. We propose an attention mechanism which
roughly identifies text regions via an automatically learned attentional map.
This substantially suppresses background interference in the convolutional
features, which is the key to producing accurate inference of words,
particularly at extremely small sizes. This results in a single model that
essentially works in a coarse-to-fine manner. It departs from recent FCN-based
text detectors which cascade multiple FCN models to achieve an accurate
prediction. Furthermore, we develop a hierarchical inception module which
efficiently aggregates multi-scale inception features. This enhances local
details, and also encodes strong context information, allowing the detector
to work reliably on multi-scale and multi-orientation text with single-scale
images. Our text detector achieves an F-measure of 77% on the ICDAR 2015
benchmark, advancing the state-of-the-art results in [18, 28]. Demo is available at:
http://sstd.whuang.org/.
Comment: To appear in IEEE International Conference on Computer Vision (ICCV),
2017
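The core idea described above, suppressing background interference in the convolutional features via a learned attentional map, can be sketched as elementwise gating of the feature tensor. The shapes, the sigmoid gate, and the function names below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def apply_attention(features, attention_logits):
    """Gate a (C, H, W) feature map with a learned (H, W) attention map.

    The sigmoid turns raw logits into a soft text/background mask in (0, 1),
    which is broadcast across channels so background activations shrink
    toward zero while text-region activations pass through.
    """
    mask = sigmoid(attention_logits)   # (H, W)
    return features * mask[None, :, :]  # broadcast over the channel axis

# Toy example: 2 channels on a 2x2 grid; right column is "background".
feats = np.ones((2, 2, 2))
logits = np.array([[ 8.0, -8.0],
                   [ 8.0, -8.0]])  # strong text vs. strong background logits
gated = apply_attention(feats, logits)
# Left column stays near 1.0; right column is suppressed toward 0.0.
```

In a real detector the attention logits would themselves be predicted by a convolutional branch and trained end to end; here they are fixed constants purely to show the masking effect.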
Asymptotic properties of least squares estimator in local to unity processes with fractional Gaussian noises
