Dynamic attention priors: a new and efficient concept for improving object detection
Recent psychophysical evidence in humans suggests that visual attention is a highly dynamic and predictive process involving precise models of object trajectories. We present a proof of concept showing that such predictive spatial attention can benefit a technical system solving a challenging visual object detection task. To this end, we introduce a Bayes-like integration of so-called dynamic attention priors (DAPs) with dense detection likelihoods, which are enhanced at object positions predicted by extrapolating trajectories. Using annotated video sequences of pedestrians in a parking-lot setting, we quantitatively show that DAPs can significantly improve detection performance compared to a baseline condition relying purely on pattern analysis.
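As a rough illustration of the Bayes-like integration described above, the sketch below multiplies a dense detection likelihood map by a Gaussian-shaped attention prior centered on a position extrapolated from the object's recent trajectory. The function names, the constant-velocity extrapolation, the Gaussian prior shape and the prior floor are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: fuse a dense detection likelihood map with a dynamic
# attention prior (DAP) placed at the predicted object position.
import numpy as np

def gaussian_prior(shape, center, sigma=15.0):
    """Spatial prior peaked at the predicted object position (assumed Gaussian)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def predict_position(track, dt=1.0):
    """Constant-velocity extrapolation of the last two track points (assumption)."""
    (y1, x1), (y2, x2) = track[-2], track[-1]
    return (y2 + (y2 - y1) * dt, x2 + (x2 - x1) * dt)

def fuse(likelihood, track, floor=0.2, sigma=15.0):
    """Posterior-like map: likelihood weighted by the DAP.
    The floor keeps detections far from the prediction from being fully suppressed."""
    prior = floor + (1.0 - floor) * gaussian_prior(
        likelihood.shape, predict_position(track), sigma)
    post = likelihood * prior
    return post / (post.max() + 1e-12)

# Toy example: noisy likelihood map with a pedestrian track heading toward (40, 60).
rng = np.random.default_rng(0)
like = rng.random((120, 160)) * 0.3
like[42, 63] = 0.9                      # true detection response
track = [(36, 54), (38, 57)]            # previous two positions
fused = fuse(like, track)
print(np.unravel_index(fused.argmax(), fused.shape))  # peak near the prediction
```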
Face and Facial Feature Localization
In this paper we present a general technique for face and facial feature localization in 2D color images with arbitrary backgrounds. In previous work we studied an eye localization module; here we focus on mouth localization. Given an input image depicting a single person, we first exploit color information to restrict the search area to candidate mouth regions, and then determine the exact mouth position by means of an SVM trained for this purpose. This component-based approach localizes both faces and the corresponding facial features while remaining robust to partial occlusions and to pose, scale and illumination variations. We report results for the individual feature classifiers and for their combination on images from several public databases.
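A minimal sketch of the two-stage idea described above, assuming a crude color test for lip-like pixels and scikit-learn's SVC as a stand-in for the trained SVM; the color thresholds, patch size, candidate stride and toy training data are illustrative assumptions, not the paper's settings.

```python
# Stage 1: color test keeps lip-like candidate pixels.
# Stage 2: an SVM scores candidate windows; the best-scoring window wins.
import numpy as np
from sklearn.svm import SVC

PATCH = 16  # candidate window side length (assumed)

def lip_mask(rgb):
    """Crude lip-color test: red clearly dominates green and blue (assumption)."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    return (r > g + 20) & (r > b + 20)

def localize_mouth(rgb, clf):
    """Scan windows centered on lip-colored pixels, return the best SVM score's position."""
    best, best_pos = -np.inf, None
    ys, xs = np.nonzero(lip_mask(rgb))
    for y, x in zip(ys[::50], xs[::50]):          # coarse stride over candidates
        y0, x0 = y - PATCH // 2, x - PATCH // 2
        patch = rgb[y0:y0 + PATCH, x0:x0 + PATCH]
        if patch.shape[:2] != (PATCH, PATCH):     # skip windows outside the image
            continue
        score = clf.decision_function(patch.reshape(1, -1))[0]
        if score > best:
            best, best_pos = score, (y, x)
    return best_pos

# Toy stand-in training data: flattened mouth (1) / non-mouth (0) patches.
rng = np.random.default_rng(1)
X = rng.random((40, PATCH * PATCH * 3))
y = np.repeat([1, 0], 20)
clf = SVC(kernel="rbf").fit(X, y)

# Synthetic test image with a reddish, lip-like blob.
img = np.full((64, 64, 3), 128, dtype=np.uint8)
img[30:40, 20:40, 0] = 220
print(localize_mouth(img, clf))   # (y, x) of the best-scoring candidate window
```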
Analysing facial regions for face recognition using forensic protocols
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38061-7_22 (Proceedings of the International Workshops of Practical Applications of Agents and Multi-Agent Systems, PAAMS 2013, Salamanca, Spain).
This paper focuses on the analysis of automatic facial region extraction for face recognition applications. Traditional face recognition systems compare only full-face images in order to estimate identity; here, different facial areas are extracted from face images obtained in both uncontrolled and controlled environments. In this work, we study and compare the discriminative capabilities of 15 facial regions considered in forensic practice, such as the full face, nose, eye, eyebrow and mouth. This study is of interest to biometrics because a more robust general-purpose face recognition system can be built by fusing the similarity scores obtained from comparing different individual parts of the face. To analyse the discriminative power of each facial region, we randomly defined three population subsets of 200 European subjects (male, female and mixed) from the MORPH database. First, facial landmarks are automatically located, checked and corrected; then the 15 forensic facial regions are extracted and considered for the study. In all cases, the performance of the full face (faceISOV region) is higher than that achieved for the rest of the facial regions. It is very interesting to note that the nose region has a very significant discriminative power by itself, similar to the full-face performance. This work has been partially supported by a contract with the Spanish Guardia Civil and by the projects BBfor2 (FP7-ITN-238803), Bio-Challenge (TEC2009-11186), Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
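The fusion idea mentioned above can be sketched as simple score-level fusion: normalize each region's similarity scores and combine them with a weighted sum. The region names, weights and z-score normalization below are illustrative assumptions; the paper itself evaluates the 15 forensic regions individually.

```python
# Minimal sketch of score-level fusion over facial regions.
import numpy as np

def zscore(scores):
    """Normalize one region matcher's scores before fusion (z-norm, assumed)."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-12)

def fuse_regions(region_scores, weights=None):
    """Weighted-sum fusion of per-region similarity scores.

    region_scores: dict mapping region name -> scores for the same list of
                   probe/gallery comparisons.
    """
    names = sorted(region_scores)
    w = np.ones(len(names)) if weights is None else np.array([weights[n] for n in names])
    stacked = np.stack([zscore(region_scores[n]) for n in names])
    return (w[:, None] * stacked).sum(axis=0) / w.sum()

# Synthetic example with three regions and four comparisons.
scores = {
    "full_face": [0.92, 0.40, 0.85, 0.31],
    "nose":      [0.88, 0.35, 0.80, 0.45],
    "mouth":     [0.60, 0.55, 0.70, 0.50],
}
# Weight the more discriminative regions (full face, nose) more heavily,
# in line with the findings reported above; the exact weights are assumed.
fused = fuse_regions(scores, weights={"full_face": 0.5, "nose": 0.3, "mouth": 0.2})
print(fused)
```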
