142 research outputs found

    Attention modeling for video quality assessment: balancing global quality and local quality

    Improving Image Quality Assessment with Modeling Visual Attention

    Visual attention is an important attribute of the human visual system (HVS), but it has not been adequately explored in image quality assessment. This paper investigates the capabilities of visual attention models for image quality assessment in different scenarios: two-dimensional images, stereoscopic images, and a Digital Cinema setup. Three bottom-up attention models are employed to detect attention regions and fixation points in an image and to compute the corresponding attention maps. Different approaches for integrating the visual attention models into several image quality metrics are evaluated on three different image quality data sets. Experimental results demonstrate that visual attention is a positive factor that cannot be ignored when improving the performance of image quality metrics for perceptual quality assessment. Index Terms: visual attention, saliency, fixation, image quality metrics.
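
    The paper evaluates several ways of integrating attention maps into existing metrics; a common and simple instance of this idea is to weight a per-pixel quality map by the saliency map before pooling it into a single score. The sketch below is a minimal, generic illustration of such saliency-weighted pooling in Python/NumPy; it is not taken from the paper itself, and the function and variable names are assumptions for illustration.

        import numpy as np

        def attention_weighted_pool(quality_map, saliency_map, eps=1e-8):
            """Pool a local quality map (e.g., a per-pixel SSIM map) using a
            saliency map as weights. Both arrays must have the same shape."""
            w = saliency_map.astype(np.float64)
            w = w / (w.sum() + eps)              # normalize weights to sum to 1
            return float((quality_map * w).sum())

        # Sanity check: a uniform saliency map reduces to plain mean pooling.
        q = np.random.rand(64, 64)
        s = np.ones((64, 64))
        assert np.isclose(attention_weighted_pool(q, s), q.mean())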

    Perceptual Quality Assessment Based on Visual Attention Analysis

    Most existing quality metrics do not take human attention into account. Attention to particular objects or regions is an important attribute of the human visual and perceptual system when measuring perceived image and video quality. This paper presents an approach for extracting visual attention regions based on a combination of a bottom-up saliency model and semantic image analysis. The use of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity) in the extracted attention regions is analyzed for image/video quality assessment, and a novel quality metric is proposed that adequately exploits the visual attention information. Experimental results against subjective measurements demonstrate that the proposed metric outperforms current methods.
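
    To make the idea of computing PSNR inside extracted attention regions concrete, the hedged sketch below restricts the mean-squared error to pixels covered by a binary attention mask and then blends the region score with the full-frame score. The blending weight alpha and the helper names are illustrative assumptions, not the formulation proposed in the paper.

        import numpy as np

        def masked_psnr(ref, dist, mask, peak=255.0, eps=1e-12):
            """PSNR computed only over pixels inside a binary attention mask."""
            ref, dist = ref.astype(np.float64), dist.astype(np.float64)
            m = mask.astype(bool)
            mse = np.mean((ref[m] - dist[m]) ** 2) + eps
            return 10.0 * np.log10(peak ** 2 / mse)

        def attention_psnr(ref, dist, mask, alpha=0.7):
            """Blend attention-region PSNR with full-frame PSNR.
            alpha = 0.7 is a placeholder, not a value from the paper."""
            full = masked_psnr(ref, dist, np.ones_like(mask, dtype=bool))
            region = masked_psnr(ref, dist, mask)
            return alpha * region + (1.0 - alpha) * full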

    Increasing User Engagement in Virtual Reality: The Role of Interactive Digital Narratives to Trigger Emotional Responses

    Immersive multimedia technologies such as virtual reality (VR) create narrative experiences in the digital medium, thus revolutionizing how people communicate, learn, and think. These Interactive Digital Narratives (IDN) shape end-users' experience and have broad potential for various applications. A fundamental aspect of achieving this potential is the establishment of a positive and engaging user experience. This study investigates how enabling an interactive narrative in a VR setting affects user engagement. The study we base this work on involved thirty-two participants in a controlled experiment in which they were asked to explore a designed VR environment, with and without a digital narrative. We observed a significant increase in the participants' level of engagement in the narrative-based environment compared to the non-narrative VR environment. The results showed that IDN in VR generates an increased emotional response that strengthens user engagement, indicating that IDN can be considered an essential factor in shaping a positive end-user experience and, by extension, a better society.

    A subjective and behavioral assessment of affordances in virtual architectural walkthroughs

    Immersive technologies, such as VR, offer first-person experiences using depth perception and spatial awareness that elicit a sense of space impossible to achieve with traditional visualization techniques. This paper looks beyond the visual aspects and towards understanding the experiential aspects of two popular uses of VR in 3D architectural visualization: a “passive walkthrough” and an “interactive walkthrough”. We designed a within-subject experiment to measure the user-perceived quality of both experiences. All participants (N = 34) were exposed to both scenarios and afterwards responded to a post-experience questionnaire; meanwhile, their physical activity and simple active behaviors were also recorded. Results indicate that while the fully immersive, interactive experience rendered a heightened sense of presence in users, their overt behaviors (movement and gesture) did not change. We discuss the potential use of subjective assessments and user behavior analysis to understand user-perceived experiential quality inside virtual environments, which should be useful in building taxonomies and designing affordances that best fit these environments.

    Authenticity and presence: defining perceived quality in VR experiences

    This work expands the existing understanding of quality assessment of VR experiences. Historically, VR quality has focused on presence and immersion, but current discourse emphasizes plausibility and believability as critical for lifelike, credible VR. However, the two concepts are often conflated, leading to confusion. This paper proposes viewing them as subsets of authenticity and presents a structured hierarchy delineating their differences and connections. Additionally, coherence and congruence are presented as complementary quality functions that integrate internal and external logic. The paper considers how quality forms in the experience of authenticity inside VR, emphasizing that distinguishing authenticity in terms of precise quality features is essential for accurate assessments. Evaluating quality requires a holistic approach across perceptual, cognitive, and emotional factors. This model provides theoretical grounding for assessing the quality of VR experiences.

    How good are virtual hands? Influences of input modality on motor tasks in virtual reality

    Hand-tracking enables controller-free interaction with virtual environments, which can make virtual reality (VR) experiences more natural and immersive. As naturalness hinges on both technological and human influence factors, fine-tuning the former while assessing the latter can improve the overall experience. This paper investigates a reach-grab-place task inside VR using two input modalities (hand-tracking vs. handheld controller). Subjects (N = 33) compared the two input methods available on a consumer-grade VR headset for their effects on objective user performance and the subjective experience of perceived presence, cognitive workload, and ease of use. We found that virtual hands (with hand-tracking) did not influence the subjective feelings of presence, naturalness, and engagement, nor did they improve the overall ease of use while performing the task. In fact, subjects completed the task faster and reported a lower mental workload and higher overall usability with handheld controllers. The results show that, in this particular case, hand-tracking did not improve the psychological and emotional determinants of immersive VR experiences. The study helps expand our understanding of the two input modalities in terms of their viability for naturalistic experiences in VR akin to real-world scenarios.

    Digital Storytelling Tools - Final Report

    © NTNU, Institutt for kunst og medievitenskap. This is the authors' pre-refereed manuscript of the article.

    Techniques for dynamic hardware management of streaming media applications using a framework for system scenarios

    Many modern applications exhibit dynamic behavior, which can be exploited to reduce energy consumption. We employ a two-phase, combined design-time/run-time methodology that identifies different run-time situations and clusters similar behaviors into system scenarios. This methodology is integrated with our framework for system-scenario-based design, which dynamically tunes the hardware to match the application behavior. We focus on streaming media applications and achieve significant energy reductions for an extracted control structure of a video codec widely used in hand-held devices today. We encode a video stream consisting of different frame sizes based on the measured available wireless bandwidth. Energy consumption is measured with a modified microcontroller board from Atmel offering two alternative voltage and frequency settings. While maintaining the perceptual video quality and frame rate, our method yields up to 49% energy reduction for the encoded streams. The average tuning overhead of our mechanism is negligible after applying simple loop transformations to the encoder control structure. We furthermore show how to obtain up to 34.3% energy reduction for a system with limited available slack by dynamically exploiting the available sleep modes.
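
    The run-time half of this two-phase methodology amounts to detecting which system scenario the current input falls into and switching the hardware setting only when the scenario changes, so that the tuning overhead stays low. The Python sketch below simulates that idea for the frame-size-driven case with two voltage/frequency settings; the scenario table, thresholds, and names are placeholders for illustration, not the values used in the paper.

        # Design-time output: each scenario clusters similar run-time situations
        # (here, frame-size ranges) and stores a matching voltage/frequency setting.
        SCENARIOS = [
            {"name": "small_frames", "max_frame_bytes": 20_000, "setting": ("1.8 V", "8 MHz")},
            {"name": "large_frames", "max_frame_bytes": None,   "setting": ("3.3 V", "16 MHz")},
        ]

        def detect_scenario(frame_bytes):
            """Run-time detection: pick the first scenario whose bound fits."""
            for sc in SCENARIOS:
                bound = sc["max_frame_bytes"]
                if bound is None or frame_bytes <= bound:
                    return sc
            return SCENARIOS[-1]

        def encode_stream(frame_sizes, apply_setting):
            """Switch the hardware knob only on scenario changes to keep overhead low."""
            current = None
            for size in frame_sizes:
                sc = detect_scenario(size)
                if sc is not current:
                    apply_setting(*sc["setting"])   # e.g., update DVFS registers
                    current = sc
                # ... encode one frame under the selected setting ...

        encode_stream([12_000, 15_000, 48_000, 50_000, 9_000],
                      lambda v, f: print("switch to", v, f))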