Quality Classified Image Analysis with Application to Face Detection and Recognition
Motion blur, defocus, insufficient spatial resolution, lossy compression, and many other factors can cause an image to have poor quality. However,
image quality is a largely ignored issue in traditional pattern recognition
literature. In this paper, we use face detection and recognition as case
studies to show that image quality is an essential factor which will affect the
performances of traditional algorithms. We demonstrated that it is not the
image quality itself that is the most important, but rather the quality of the
images in the training set should have similar quality as those in the testing
set. To handle real-world application scenarios where images with different
kinds and severities of degradation can be presented to the system, we have developed a quality classified image analysis framework that handles images of mixed quality adaptively. We first use deep neural networks to classify images by quality class and then design a separate face detector and recognizer for images in each quality class. We present experimental results showing that our quality classified framework can accurately classify images by the type and severity of degradation and can significantly boost the performance of state-of-the-art face detectors and recognizers on datasets containing images of mixed quality.
Comment: 6 pages
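The dispatch step of this framework is simple to sketch. No reference code accompanies the abstract, so the following Python sketch only illustrates the routing idea under assumed names: QualityClassifier is a toy quality classifier, and the per-class detectors and recognizers are hypothetical callables trained separately for each quality class.

```python
import torch
import torch.nn as nn

# Hypothetical quality classes; the paper groups images by the type
# and severity of degradation (e.g. blur, low resolution, compression).
QUALITY_CLASSES = ["clean", "motion_blur", "low_resolution", "jpeg_artifacts"]

class QualityClassifier(nn.Module):
    """Toy CNN that predicts the quality class of an input image."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

class QualityClassifiedPipeline:
    """Route each image to the detector/recognizer trained for its quality class."""
    def __init__(self, classifier, detectors, recognizers):
        self.classifier = classifier
        self.detectors = detectors      # dict: quality class -> face detector
        self.recognizers = recognizers  # dict: quality class -> face recognizer

    @torch.no_grad()
    def process(self, image: torch.Tensor):
        logits = self.classifier(image.unsqueeze(0))
        quality = QUALITY_CLASSES[int(logits.argmax(dim=1))]
        faces = self.detectors[quality](image)         # detector matched to this class
        identities = self.recognizers[quality](faces)  # recognizer matched to this class
        return quality, faces, identities
```

The routing reflects the abstract's central claim: each detector and recognizer only ever sees images whose degradation matches its training data.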
Revisiting Pre-Trained Models for Chinese Natural Language Processing
Bidirectional Encoder Representations from Transformers (BERT) has shown
marvelous improvements across various NLP tasks, and consecutive variants have
been proposed to further improve the performance of the pre-trained language
models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release a series of Chinese pre-trained language models to the community. We also propose a
simple but effective model called MacBERT, which improves upon RoBERTa in
several ways, especially the masking strategy that adopts MLM as correction
(Mac). We carry out extensive experiments on eight Chinese NLP tasks to
revisit the existing pre-trained language models as well as the proposed
MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate design details, with several findings that may help future research. Resources available: https://github.com/ymcui/MacBERT
Comment: 12 pages, to appear at Findings of EMNLP 2020
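The masking strategy is the part most easily shown in code. Below is a minimal Python sketch of the MLM-as-correction idea; similar_word is a hypothetical stand-in for a synonym lookup, the 80/10/10 split mirrors BERT's masking proportions, and the paper's full strategy also uses whole-word and n-gram masking, which this sketch omits.

```python
import random

def mac_style_mask(tokens, similar_word, mask_ratio=0.15, seed=None):
    """Sketch of MLM-as-correction masking: selected tokens are replaced
    with similar words instead of an artificial [MASK] token (which never
    appears at fine-tuning time), and the model learns to correct them."""
    rng = random.Random(seed)
    corrupted = list(tokens)
    targets = {}  # position -> original token the model must recover
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_ratio:
            continue
        targets[i] = tok
        r = rng.random()
        if r < 0.8:
            corrupted[i] = similar_word(tok)   # replace with a similar word
        elif r < 0.9:
            corrupted[i] = rng.choice(tokens)  # replace with a random token
        # else: leave the token unchanged
    return corrupted, targets
```

Because the corrupted input contains only real words, pre-training inputs resemble fine-tuning inputs, which is the discrepancy MacBERT aims to close.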
Integral restriction for bilinear operators
We consider the integral domain restriction operator $T_\Omega$ for certain bilinear operators $T$. We obtain that if $(s, p_1, p_2)$ satisfies $\frac{1}{p_1} + \frac{1}{p_2} \ge \frac{2}{\min\{1,s\}}$ and $\|T\|_{L^{p_1} \times L^{p_2} \to L^s}$ …
