
    Multiple image view synthesis for free viewpoint video applications

    Interactive audio-visual (AV) applications such as free viewpoint video (FVV) aim to enable unrestricted spatio-temporal navigation within multiple camera environments. Current virtual viewpoint view synthesis solutions for FVV are either purely image-based, implying large information redundancy, or involve reconstructing complex 3D models of the scene. In this paper we present a new multiple image view synthesis algorithm that requires only camera parameters and disparity maps. The multi-view synthesis (MVS) approach can be used in any multi-camera environment and is scalable, as virtual views can be created from 1 to N of the available video inputs, providing a means to gracefully handle scenarios where camera inputs decrease or increase over time. The algorithm identifies and selects only the best quality surface areas from the available reference images, thereby reducing perceptual errors in virtual view reconstruction. Experimental results are presented and verified using both objective (PSNR) and subjective comparisons.

    Scalable virtual viewpoint image synthesis for multiple camera environments

    One of the main aims of emerging audio-visual (AV) applications is to provide interactive navigation within a captured event or scene. This paper presents a view synthesis algorithm that provides a scalable and flexible approach to virtual viewpoint synthesis in multiple camera environments. The multi-view synthesis (MVS) process consists of four phases that are described in detail: surface identification, surface selection, surface boundary blending and surface reconstruction. MVS identifies and selects only the best quality surface areas from the set of available reference images, thereby reducing perceptual errors in virtual view reconstruction. The approach is camera-setup independent and scalable, as virtual views can be created from 1 to N of the available video inputs. Thus, MVS provides interactive AV applications with a means to handle scenarios where camera inputs increase or decrease over time.
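
    The abstracts above do not state the quality criterion used in the surface selection phase, so the following is only a minimal sketch of the per-pixel idea: reference images already warped into the virtual viewpoint are merged by keeping, at each pixel, the contribution with the highest quality score (for example the smallest warping error or the angularly closest camera). The names warped_views and quality_maps are illustrative and not taken from the papers.

        import numpy as np

        def select_best_reference(warped_views, quality_maps):
            """Merge reference images warped into the virtual viewpoint by keeping,
            per pixel, the camera with the highest quality score.

            warped_views : list of (H, W, 3) arrays; occluded/hole pixels are NaN.
            quality_maps : list of (H, W) arrays scoring each camera's contribution.
            """
            views = np.stack(warped_views)                        # (N, H, W, 3)
            quality = np.stack(quality_maps)                      # (N, H, W)
            # Never select a camera where the warped view has a hole.
            quality = np.where(np.isnan(views).any(axis=-1), -np.inf, quality)
            best = np.argmax(quality, axis=0)                     # winning camera per pixel
            h, w = best.shape
            rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
            return views[best, rows, cols], best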

    Numerical integration and other techniques for computer aided network design programming: Final technical report, 1 Jan. 1970 - 1 Jan. 1971

    Matrix method and stiffly stable algorithms in numerical integration for computer aided network design programming.
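
    The report itself is not reproduced here, so the following is only a hedged illustration of what a stiffly stable integration scheme of the kind referenced above does: backward (implicit) Euler damps the fast modes of a stiff linear network x' = Ax instead of forcing an impractically small step size. The example system matrix is arbitrary and not taken from the report.

        import numpy as np

        def backward_euler(A, x0, h, steps):
            """Backward Euler for x' = A x: each step solves (I - h*A) x_new = x_old,
            so stiff (very fast) modes decay instead of destabilising the integration."""
            A = np.asarray(A, dtype=float)
            M = np.eye(len(x0)) - h * A
            x = np.asarray(x0, dtype=float)
            trajectory = [x]
            for _ in range(steps):
                x = np.linalg.solve(M, x)
                trajectory.append(x)
            return np.array(trajectory)

        # Two decoupled modes with time constants 1 ms and 1 s: a stiff system.
        A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
        states = backward_euler(A, x0=[1.0, 1.0], h=0.1, steps=50)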

    Multispectral object segmentation and retrieval in surveillance video

    This paper describes a system for object segmentation and feature extraction for surveillance video. Segmentation is performed by a dynamic vision system that fuses information from thermal infrared video with standard CCTV video in order to detect and track objects. Separate background modelling in each modality and dynamic mutual-information-based thresholding are used to provide initial foreground candidates for tracking. The belief in the validity of these candidates is ascertained using knowledge of foreground pixels and temporal linking of candidates. The transferable belief model is used to combine these sources of information and segment objects. Extracted objects are subsequently tracked using adaptive thermo-visual appearance models. In order to facilitate search and classification of objects in large archives, retrieval features from both modalities are extracted for tracked objects. Overall system performance is demonstrated in a simple retrieval scenario.
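
    As a rough sketch of the per-modality background modelling step only: each stream (visible and thermal) keeps its own running mean/variance model, and the resulting foreground masks are then combined. The combination shown below is a simple OR of the two masks, which is deliberately much cruder than the mutual-information thresholding and transferable belief model combination described in the paper; all names and parameters are illustrative.

        import numpy as np

        class RunningBackground:
            """Per-modality running mean/variance background model (one instance for
            the visible stream, one for the thermal stream)."""
            def __init__(self, first_frame, alpha=0.01):
                self.mean = first_frame.astype(float)
                self.var = np.full(first_frame.shape, 25.0)
                self.alpha = alpha

            def update(self, frame):
                frame = frame.astype(float)
                self.mean = (1 - self.alpha) * self.mean + self.alpha * frame
                self.var = (1 - self.alpha) * self.var + self.alpha * (frame - self.mean) ** 2

            def foreground(self, frame, k=2.5):
                # Pixels deviating by more than k standard deviations are foreground.
                return np.abs(frame.astype(float) - self.mean) > k * np.sqrt(self.var)

        def fuse_masks(mask_visible, mask_thermal):
            # Keep pixels flagged in either modality; the paper instead weighs the
            # two sources with the transferable belief model.
            return mask_visible | mask_thermal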

    Comparison of fusion methods for thermo-visual surveillance tracking

    In this paper, we evaluate the appearance tracking performance of multiple fusion schemes that combine information from standard CCTV and thermal infrared spectrum video for the tracking of surveillance objects such as people, faces, bicycles and vehicles. We show results on numerous real-world multimodal surveillance sequences, tracking challenging objects whose appearance changes rapidly. Based on these results, we determine the most promising fusion scheme.

    Detecting shadows and low-lying objects in indoor and outdoor scenes using homographies

    Many computer vision applications apply background suppression techniques for the detection and segmentation of moving objects in a scene. While these algorithms tend to work well in controlled conditions, they often fail when applied to unconstrained real-world environments. This paper describes a system that detects and removes erroneously segmented foreground regions that are close to a ground plane. These regions include shadows, changing background objects and other low-lying objects such as leaves and rubbish. The system uses a set-up of two or more cameras and requires no 3D reconstruction or depth analysis of the regions; therefore, a strong camera calibration of the set-up is not necessary. A geometric constraint called a homography is exploited to determine whether foreground points are on or above the ground plane. The system takes advantage of the fact that image regions off the homography plane will not correspond after a homography transformation. Experimental results using real-world scenes from a pedestrian tracking application illustrate the effectiveness of the proposed approach.
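
    The geometric test described above can be sketched at the level of a single point pair: pixel correspondences lying on the ground plane are related by the inter-camera homography induced by that plane, while points above the plane violate the mapping. The function name and pixel tolerance below are illustrative assumptions; the paper applies the idea to segmented foreground regions rather than individual points.

        import numpy as np

        def on_ground_plane(p_cam1, p_cam2, H, tol_px=3.0):
            """Test whether a corresponding point pair lies on the ground plane.

            H is the 3x3 homography induced by the ground plane, mapping pixels
            from camera 1 to camera 2.  On-plane points satisfy p2 ~ H p1 (up to
            scale); points above the plane map to the wrong location in camera 2.
            """
            p1 = np.array([p_cam1[0], p_cam1[1], 1.0])
            mapped = H @ p1
            mapped = mapped[:2] / mapped[2]
            return np.linalg.norm(mapped - np.asarray(p_cam2, dtype=float)) < tol_px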

    Relating visual and semantic image descriptors

    This paper addresses the automatic analysis of visual content and the extraction of metadata beyond pure visual descriptors. Two approaches are described: Automatic Image Annotation (AIA) and Confidence Clustering (CC). AIA attempts to automatically classify images using two binary classifiers and is designed for the consumer electronics domain. In contrast, the CC approach does not attempt to assign a unique label to images but rather organises the database based on concepts.

    Multi-Stage 20-m Shuttle Run Fitness Test, Maximal Oxygen Uptake and Velocity at Maximal Oxygen Uptake.

    The multi-stage 20-m shuttle run fitness test (20mMSFT) is a popular field test widely used to measure aerobic fitness by predicting maximum oxygen uptake (VO2max) and performance. However, the velocity at which VO2max occurs (vVO2max) is a better indicator of performance than VO2max and can be used to explain inter-individual differences in performance that VO2max cannot. It has been reported as a better predictor of running performance and can be used to prescribe optimal training intensities when monitoring athletes' training. This study investigated the validity and suitability of predicting the VO2max and vVO2max of adult subjects from performance on the 20mMSFT. Forty-eight physical education students (25 male, 23 female) performed, in random order, a laboratory-based continuous horizontal treadmill test to determine VO2max and vVO2max, and the 20mMSFT, with an interval of 3 days between tests. The results revealed significant correlations between the number of shuttles completed in the 20mMSFT and directly determined VO2max (r = 0.87, p < 0.05) and vVO2max (r = 0.93, p < 0.05). The equation for prediction of VO2max was y = 0.0276x + 27.504, whereas for vVO2max it was y = 0.0937x + 6.890. It can be concluded that the 20mMSFT can accurately predict VO2max and vVO2max, and this field test can provide useful information regarding the aerobic fitness of adults. The predicted vVO2max can be used in monitoring athletes, especially in determining optimal training intensity.
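
    Since the abstract gives both regression equations directly, a worked example is straightforward. The shuttle count of 90 is hypothetical, and the units (ml·kg⁻¹·min⁻¹ for VO2max, km/h for vVO2max) are assumptions, as the abstract does not state them.

        def predict_vo2max(shuttles):
            # VO2max prediction from the abstract: y = 0.0276x + 27.504
            # (units assumed to be ml/kg/min; x = number of completed shuttles).
            return 0.0276 * shuttles + 27.504

        def predict_v_vo2max(shuttles):
            # vVO2max prediction from the abstract: y = 0.0937x + 6.890
            # (units assumed to be km/h).
            return 0.0937 * shuttles + 6.890

        # Hypothetical example: a subject completing 90 shuttles.
        print(predict_vo2max(90))    # 0.0276*90 + 27.504 = 29.988
        print(predict_v_vo2max(90))  # 0.0937*90 + 6.890  = 15.323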