Goal Detection in Soccer Video: Role-Based Events Detection Approach
Soccer video processing and analysis to detect critical events, such as goals, has been an active research topic in recent years. In this paper, a new role-based framework for goal event detection is proposed that exploits the semantic structure of the soccer game. Typically, after a goal scene the intensity of the audience's and commentator's sound increases, the ball is returned to the center of the field, and the camera may zoom in on a player, show the celebrating audience, replay the goal scene, or display a combination of these. The occurrence of a goal event can therefore be detected by analyzing the sequence of these roles. The proposed framework consists of four main procedures: (1) detection of the game's critical events using the audio channel, (2) shot boundary detection and shot classification, (3) selection of candidate events according to the shot type and the presence of the goalmouth in the shot, and (4) detection of the game restarting from the center of the field. A new method for shot classification is also presented within this framework. Finally, applying the proposed method shows that goal event detection achieves good accuracy with a very low detection failure rate. DOI: http://dx.doi.org/10.11591/ijece.v4i6.637
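The role-sequence idea in this abstract can be illustrated with a minimal sketch, under assumptions not stated in the paper: a `Shot` record with a shot-type label, a goalmouth flag, an audio-peak flag, and a restart-from-center flag, plus an arbitrary 60-second window. The names and thresholds below are hypothetical; only the ordering of the four procedures follows the abstract.

```python
# Hypothetical sketch of the role-based goal detection logic:
# an audio-energy peak, followed by close-up / crowd / replay shots
# (at least one showing the goalmouth), followed by a restart from
# the centre of the field, is treated as a goal event.
from dataclasses import dataclass

@dataclass
class Shot:
    start: float               # shot start time in seconds
    end: float                 # shot end time in seconds
    label: str                 # e.g. "long", "close_up", "crowd", "replay"
    has_goalmouth: bool        # goalmouth visible in the shot
    audio_peak: bool           # high audience/commentator sound intensity
    restart_from_center: bool  # play restarts at the centre circle

ROLE_LABELS = {"close_up", "crowd", "replay"}

def detect_goals(shots, window=60.0):
    """Return start times of shots judged to begin a goal sequence."""
    goals = []
    for i, s in enumerate(shots):
        # 1) critical event flagged on the audio channel
        if not s.audio_peak:
            continue
        # 2-3) candidate role shots after the peak, at least one of
        #      which must contain the goalmouth
        roles = [t for t in shots[i + 1:]
                 if t.start - s.end <= window and t.label in ROLE_LABELS]
        if not any(t.has_goalmouth for t in roles):
            continue
        # 4) the game restarts from the centre of the field
        if any(t.restart_from_center for t in shots[i + 1:]
               if t.start - s.end <= window):
            goals.append(s.start)
    return goals
```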
A Joint Mapping and Synthesis Approach for Multiview Facial Expression Recognition
This paper presents a novel approach to pose-invariant face frontalization aimed at Multiview Facial Expression Recognition (MFER). The proposed approach is a hybrid method that combines synthesis and mapping techniques. The key idea is to map the reconstructive coefficients of each arbitrary viewpoint onto the frontal bases, where the mapping functions are learned between the coefficients of frontal and non-frontal faces. We also exploit sparse coding to synthesize the frontalized faces, even under large poses. For evaluation, qualitative and quantitative assessments are used, along with multiview facial expression recognition as a case study. The results show that our approach is effective at frontalizing non-frontal faces. Moreover, its validation on two popular datasets, BU3DFE and Multi-PIE, under various assessment settings demonstrates its efficiency and stability with respect to head pose variation, especially for large poses.
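A minimal sketch of the mapping-and-synthesis idea is given below, under illustrative assumptions: random stand-ins for the pose-specific and frontal dictionaries and training pairs, OMP-based sparse coding via scikit-learn, and a ridge-regularised least-squares map between coefficient spaces. None of the sizes or the regulariser come from the paper.

```python
# Hypothetical sketch: code a non-frontal face over a pose dictionary,
# map its sparse coefficients to the frontal coefficient space with a
# learned linear map, then synthesize the frontalized face over the
# frontal dictionary.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
n_features, n_atoms, n_train = 256, 64, 200          # illustrative sizes

# Paired non-frontal / frontal training faces (random placeholders).
X_pose = rng.standard_normal((n_train, n_features))
X_front = rng.standard_normal((n_train, n_features))

# Pose-specific and frontal dictionaries (random here; learned in practice),
# with unit-norm atoms as expected by OMP.
D_pose = rng.standard_normal((n_atoms, n_features))
D_front = rng.standard_normal((n_atoms, n_features))
D_pose /= np.linalg.norm(D_pose, axis=1, keepdims=True)
D_front /= np.linalg.norm(D_front, axis=1, keepdims=True)

def sparse_codes(D, X, k=8):
    """Sparse reconstructive coefficients of X over dictionary D (OMP)."""
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=k)
    return coder.transform(X)                        # (n_samples, n_atoms)

A_pose = sparse_codes(D_pose, X_pose)
A_front = sparse_codes(D_front, X_front)

# Learn a linear mapping W from non-frontal to frontal coefficients
# (ridge-regularised least squares as a stand-in for the learned mapping).
lam = 1e-2
W = np.linalg.solve(A_pose.T @ A_pose + lam * np.eye(n_atoms),
                    A_pose.T @ A_front)

# Frontalize a new non-frontal face: code, map, synthesize.
x_new = rng.standard_normal((1, n_features))
x_frontalized = (sparse_codes(D_pose, x_new) @ W) @ D_front
print(x_frontalized.shape)                           # (1, n_features)
```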
