    Goal Detection in Soccer Video: Role-Based Events Detection Approach

    Get PDF
    Soccer video processing and analysis to find critical events, such as goal occurrences, has been an active research topic in recent years. In this paper, a new role-based framework is proposed for goal event detection that exploits the semantic structure of the soccer game. Usually, after a goal scene, the intensity of the audience's and reporters' sound increases, the ball is sent back to the centre of the field, and the camera may zoom in on a player, show the delighted audience, replay the goal scene, or display a combination of these. The occurrence of a goal event can therefore be detected by analysing sequences of these roles. The proposed framework consists of four main procedures: 1) detection of the game's critical events using the audio channel; 2) shot-boundary detection and shot classification; 3) selection of candidate events according to the shot type and the presence of the goalmouth in the shot; 4) detection of the game restarting from the centre of the field. A new method for shot classification is also presented within this framework. Experiments show that the proposed method detects goal events with good accuracy and a very low failure rate.
    DOI: http://dx.doi.org/10.11591/ijece.v4i6.637
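
The four procedures above can be sketched as a minimal pipeline. This is an illustrative stand-in, not the paper's implementation: the function names, the energy threshold, and the shot-type labels are all assumptions; step 1 is approximated by short-time energy thresholding, and steps 3-4 by a simple candidate filter.

```python
import numpy as np

def detect_loud_segments(audio, frame_len=1024, threshold=2.0):
    """Step 1 (sketch): flag frames whose short-time energy exceeds
    `threshold` times the clip's mean energy, as a stand-in for the
    paper's audio-channel critical-event detector."""
    n = len(audio) // frame_len
    frames = audio[:n * frame_len].reshape(n, frame_len)
    energy = (frames ** 2).mean(axis=1)
    return np.flatnonzero(energy > threshold * energy.mean())

def is_goal_candidate(shot_type, has_goalmouth, restart_at_center):
    """Steps 3-4 (sketch): keep an event only if the shot type is
    plausible, the goalmouth is visible, and play restarts from the
    centre of the field afterwards."""
    return shot_type in {"long", "medium"} and has_goalmouth and restart_at_center
```

On a synthetic signal whose last two frames are loud, `detect_loud_segments` flags exactly those frames, and the candidate filter keeps only events that satisfy all three cues.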

    Attack on Scene Flow using Point Clouds

    Full text link
    Deep neural networks have made significant advances in accurately estimating scene flow from point clouds, which is vital for many applications such as video analysis, action recognition, and navigation. The robustness of these techniques, however, remains a concern, particularly in the face of adversarial attacks that have been shown to deceive state-of-the-art deep neural networks in many domains. Surprisingly, the robustness of scene flow networks against such attacks has not been thoroughly investigated. To bridge this gap, this work introduces adversarial white-box attacks specifically tailored for scene flow networks. Experimental results show that the generated adversarial examples cause up to 33.7 relative degradation in average end-point error on the KITTI and FlyingThings3D datasets. The study also reveals the significant impact on average end-point error of attacks that target point clouds in only one dimension or color channel. Analyzing the success and failure of these attacks on the scene flow networks and their 2D optical flow network variants shows a higher vulnerability for the optical flow networks. Code is available at https://github.com/aheldis/Attack-on-Scene-Flow-using-Point-Clouds.git
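
The white-box attack idea can be illustrated with a gradient-sign (FGSM-style) perturbation of the input points. The sketch below is an assumption-laden toy, not the paper's attack: the "scene-flow model" is a linear map so its gradient is analytic, and the step size `eps` is arbitrary. It only shows the mechanism of perturbing points along the loss gradient and measuring relative degradation in average end-point error.

```python
import numpy as np

def fgsm_perturb(points, grad, eps=0.01):
    """One FGSM-style white-box step: move each coordinate by eps in
    the direction (sign of the gradient) that increases the loss."""
    return points + eps * np.sign(grad)

def epe(pred, gt):
    """Average end-point error: mean Euclidean distance per point."""
    return np.linalg.norm(pred - gt, axis=1).mean()

# Toy differentiable "scene-flow model": flow = points @ W.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
points = rng.normal(size=(100, 3))
gt_flow = rng.normal(size=(100, 3))

pred = points @ W
# Analytic gradient of the sum of squared errors w.r.t. the input points.
grad = 2 * (pred - gt_flow) @ W.T
adv = fgsm_perturb(points, grad, eps=0.05)
# Relative degradation of average end-point error under the attack.
degradation = epe(adv @ W, gt_flow) / epe(pred, gt_flow) - 1.0
```

For a real network the analytic gradient would be replaced by backpropagation through the model, which is what makes the attack "white-box".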

    A Joint Mapping and Synthesis Approach for Multiview Facial Expression Recognition

    Full text link
    This paper presents a novel approach to pose-invariant face frontalization aimed at Multiview Facial Expression Recognition (MFER). The proposed approach is a hybrid method combining synthesis and mapping techniques. The key idea is to map the reconstructive coefficients of each arbitrary viewpoint onto the frontal bases, where the mapping functions are learned from the coefficients of paired frontal and non-frontal faces. Sparse coding is also exploited to synthesize the frontalized faces, even under large poses. For evaluation, qualitative and quantitative assessments are used, along with multiview facial expression recognition as a case study. The results show that the approach frontalizes non-frontal faces efficiently. Moreover, validation on two popular datasets, BU3DFE and Multi-PIE, under various assessment contexts demonstrates its efficiency and stability under head-pose variation, especially for large poses.
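
The coefficient-mapping idea can be sketched as follows. This is a simplified stand-in for the paper's method: the dictionaries and paired coefficients are synthetic, and least-squares coding replaces the paper's sparse coding. A face is coded on its own-view basis, the coefficients are mapped by a function learned from paired frontal/non-frontal training coefficients, and the frontal face is synthesized from the frontal basis.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical learned dictionaries: columns are frontal / profile bases.
D_front = rng.normal(size=(64, 10))
D_side = rng.normal(size=(64, 10))

# Synthetic training coefficients for paired frontal / non-frontal faces.
A_side = rng.normal(size=(10, 50))
M_true = rng.normal(size=(10, 10))
A_front = M_true @ A_side

# Learn the coefficient mapping by least squares:
# find M minimizing ||M @ A_side - A_front||_F.
M = A_front @ np.linalg.pinv(A_side)

def frontalize(face_side):
    """Code the non-frontal face on its own basis, map the
    coefficients to the frontal space, then synthesize the frontal
    face from the frontal basis (least-squares coding stands in for
    the paper's sparse coding)."""
    a = np.linalg.lstsq(D_side, face_side, rcond=None)[0]
    return D_front @ (M @ a)
```

Because the training pairs here are generated by an exact linear map, the learned `M` recovers it; with real faces the mapping would only be approximate and sparsity constraints would regularize the coding step.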

    A new approach for iris localization in iris recognition systems

    No full text