
    The New CGEMS - Preparing the Computer Graphics Educational Materials Source to Meet the Needs of Educators

    ACM SIGGRAPH and Eurographics are restarting CGEMS, the Computer Graphics Educational Materials Source, an on-line repository of curricular material for computer graphics education. In this context, the question we ask ourselves is: “How can CGEMS best meet the needs of educators?” The aim of this forum is to give the audience an idea of the purpose of CGEMS - a source of educational materials for educators by educators - and to give them an opportunity to contribute their views and ideas towards shaping the new CGEMS. Towards this purpose, we have identified a number of issues to resolve, which the panel will put forward to the participants of the forum for discussion.

    Towards predicting post-editing productivity

    Machine translation (MT) quality is generally measured via automatic metrics, producing scores that have no meaning for translators who are required to post-edit MT output or for project managers who have to plan and budget for translation projects. This paper investigates correlations between two such automatic metrics (General Text Matcher (GTM) and Translation Edit Rate (TER)) and post-editing productivity. For the purposes of this paper, productivity is measured via processing speed and cognitive measures of effort, using eye tracking as a tool. Processing speed, average fixation time and fixation count are found to correlate well with the scores for groups of segments. Segments with high GTM and TER scores require substantially less time and cognitive effort than medium- or low-scoring segments. Future research involving score thresholds and confidence estimation is suggested.
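    The edit-rate idea behind TER can be sketched as follows. This is a minimal illustration, not the metric implementation used in the study: real TER also permits block shifts of phrases, which this token-level version omits.

```python
# Minimal TER-like score: token-level edit distance between an MT hypothesis
# and its reference, normalised by reference length. Lower is better.

def edit_distance(a, b):
    """Levenshtein distance over two token lists, using a single-row DP."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (a[i - 1] != b[j - 1]))   # substitution
            prev = cur
    return dp[-1]

def ter_like(hypothesis, reference):
    """Edits needed to turn hypothesis into reference, per reference token."""
    hyp, ref = hypothesis.split(), reference.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)
```

    A score of 0.0 means the hypothesis already matches the reference; a post-editor's effort grows roughly with the score.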

    Real-time gaze estimation using a Kinect and a HD webcam

    In human-computer interaction, gaze orientation is an important and promising source of information about the attention and focus of users. Gaze detection can also be an extremely useful metric for analysing human mood and affect. Furthermore, gaze can be used as an input method for human-computer interaction. However, accurate real-time gaze estimation is still an open problem. In this paper, we propose a simple and novel model for estimating, in real time, a user's gaze direction on a computer screen. The method utilises cheap capturing devices: an HD webcam and a Microsoft Kinect. We consider the gaze motion of a user facing forwards to be composed of the local gaze motion shifted by eye motion and the global gaze motion driven by face motion. We validate our proposed model of gaze estimation and provide an experimental evaluation of the reliability and precision of the method.
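    The global-plus-local decomposition can be sketched in a few lines. The coordinate frame and function names below are assumptions for illustration, not taken from the paper: global gaze is modelled as the intersection of the face direction with the screen plane, and local gaze as a 2-D shift estimated from pupil motion.

```python
import numpy as np

def global_gaze(head_pos, face_dir):
    """Intersect the face-direction ray with the screen plane z = 0.
    head_pos, face_dir: 3-D vectors in a screen-centred frame (e.g. mm)."""
    head_pos = np.asarray(head_pos, float)
    face_dir = np.asarray(face_dir, float)
    t = -head_pos[2] / face_dir[2]        # ray parameter where z reaches 0
    return (head_pos + t * face_dir)[:2]  # (x, y) point on the screen

def gaze_point(head_pos, face_dir, eye_shift):
    """Screen gaze = global gaze (face motion) + local shift (eye motion)."""
    return global_gaze(head_pos, face_dir) + np.asarray(eye_shift, float)
```

    In this sketch the Kinect would supply `head_pos` and `face_dir`, while the webcam's higher-resolution eye images would supply `eye_shift`.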

    Social presence and dishonesty in retail

    Self-service checkouts (SCOs) in retail can benefit consumers and retailers, providing control and autonomy to shoppers independent from staff, together with reduced queuing times. Recent research indicates that the absence of staff may provide the opportunity for consumers to behave dishonestly, consistent with a perceived lack of social presence. This study examined whether a social presence in the form of various instantiations of embodied, visual, humanlike SCO interface agents had an effect on opportunistic behaviour. Using a simulated SCO scenario, participants experienced various dilemmas in which they could financially benefit themselves undeservedly. We hypothesised that a humanlike social presence integrated within the checkout screen would receive more attention and result in fewer instances of dishonesty compared to a less humanlike agent. This was partially supported by the results. The findings contribute to the theoretical framework in social presence research. We concluded that companies adopting self-service technology may consider the implementation of social presence in technology applications to support ethical consumer behaviour, but that more research is required to explore the mixed findings in the current study.

    “Is More Better?”: Impact of Multiple Photos on Perception of Persona Profiles

    In this research, we investigate if and how more photos than a single headshot can heighten the level of information provided by persona profiles. We conduct eye-tracking experiments and qualitative interviews with variations in the photos: a single headshot, a headshot plus images of the persona in different contexts, and a headshot plus pictures of different people representing key persona attributes. The results show that more contextual photos significantly improve the information end users derive from a persona profile; however, showing images of different people creates confusion and lowers the informativeness. Moreover, we discover that the choice of pictures results in various interpretations of the persona, biased by the end users' experiences and preconceptions. The results imply that persona creators should consider the design power of photos when creating persona profiles.

    Eye tracking as an MT evaluation technique

    Eye tracking has been used successfully for some time as a technique for measuring cognitive load in reading, psycholinguistics, writing, language acquisition and related fields. Its application as a technique for measuring the reading ease of MT output has not yet, to our knowledge, been tested. We report here on a preliminary study testing the use and validity of an eye-tracking methodology as a means of semi-automatically evaluating machine translation output. Fifty French machine-translated sentences, 25 rated as excellent and 25 rated as poor in an earlier human evaluation, were selected. Ten native speakers of French were instructed to read the MT sentences for comprehensibility. Their eye gaze data were recorded non-invasively using a Tobii 1750 eye tracker. The average gaze time and fixation count were found to be higher for the “bad” sentences, while average fixation duration and pupil dilation were not found to be substantially different between output rated as good and output rated as bad. HTER scores were also found to correlate well with gaze time and fixation count, but not with pupil dilation or fixation duration. We conclude that the eye-tracking data, in particular gaze time and fixation count, correlate reasonably well with human evaluation of MT output, but that fixation duration and pupil dilation may be less reliable indicators of reading difficulty for MT output. We also conclude that eye tracking has promise as a semi-automatic MT evaluation technique, which does not require bilingual knowledge and which can potentially tap into the end users' experience of machine translation output.
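    The correlation analyses this kind of study relies on reduce to a plain Pearson coefficient between per-sentence eye-tracking measures and quality scores. A minimal sketch follows; the numbers are illustrative, not the study's data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (invented) per-sentence data: gaze time (s) vs. fixation count.
gaze_time = [2.1, 3.4, 5.0, 4.2, 6.1]
fix_count = [8, 12, 19, 15, 22]
r = pearson(gaze_time, fix_count)   # strong positive correlation expected
```

    A coefficient near +1 between gaze time and fixation count, combined with a weak coefficient for pupil dilation, would mirror the pattern the abstract reports.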

    A serious games platform for cognitive rehabilitation with preliminary evaluation

    In recent years, Serious Games have evolved substantially, solving problems in diverse areas. In Cognitive Rehabilitation in particular, Serious Games assume a relevant role. Traditional cognitive therapies are often considered repetitive and discouraging for patients, and Serious Games can be used to create more dynamic rehabilitation processes, holding patients' attention throughout the process and motivating them during their road to recovery. This paper reviews Serious Games and user interfaces in the rehabilitation area and details a Serious Games platform for Cognitive Rehabilitation that includes features such as natural and multimodal user interfaces and social features (competition, collaboration, and handicapping), which can help augment the motivation of patients during the rehabilitation process. The web platform was tested with healthy subjects. Results of this preliminary evaluation show the participants' motivation and interest in playing the games. This work has been supported by FCT - Fundação para a Ciência e a Tecnologia in the scope of the projects PEst-UID/CEC/00319/2015 and PEst-UID/CEC/00027/2015. The authors would also like to thank all the volunteers who participated in the study.

    Gaze–mouse coordinated movements and dependency with coordination demands in tracing

    Eye movements have been shown to lead hand movements in tracing tasks where subjects have to move their fingers along a predefined trace. The question remained whether the leading relationship is similar when tracing with a pointing device, such as a mouse, and, more importantly, whether tasks requiring more or less gaze–mouse coordination introduce variation in this pattern of behaviour, in terms of both spatial and temporal leading of gaze position over mouse movement. A three-level gaze–mouse coordination demand paradigm was developed to address these questions. A substantial dataset of 1350 trials was collected and analysed. The linear correlation of gaze–mouse movements, the statistical distribution of the lead time, and the lead distance between gaze and mouse cursor positions were all considered, and we proposed a new method to quantify lead time in gaze–mouse coordination. The results supported and extended previous empirical findings that gaze often leads mouse movements. We found that the gaze–mouse coordination demands of the task were positively correlated with the gaze lead, both spatially and temporally. However, mouse movements were synchronised with or led gaze in the simple straight-line condition, which demanded the least gaze–mouse coordination.
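    One common way to quantify temporal lead between two movement traces, which may differ from the paper's own method, is to find the lag that maximises their cross-correlation. A sketch:

```python
import numpy as np

def lead_time(gaze, mouse, dt=0.01, max_lag_s=1.0):
    """Return the lag (seconds) at which gaze best predicts mouse position.
    A positive result means gaze leads the mouse; dt is the sample period."""
    g = (gaze - gaze.mean()) / gaze.std()
    m = (mouse - mouse.mean()) / mouse.std()
    max_lag = int(max_lag_s / dt)

    def corr_at(lag):
        # Compare gaze[t] against mouse[t + lag].
        if lag > 0:
            a, b = g[:-lag], m[lag:]
        elif lag < 0:
            a, b = g[-lag:], m[:lag]
        else:
            a, b = g, m
        return float(np.mean(a * b))

    best = max(range(-max_lag, max_lag + 1), key=corr_at)
    return best * dt
```

    Applied per trial to the horizontal (and separately vertical) traces, this yields a lead-time distribution that can be compared across coordination-demand conditions.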

    Eye movements and brain oscillations to symbolic safety signs with different comprehensibility

    Background: The aim of this study was to investigate eye movements and brain oscillations in response to symbolic safety signs of differing comprehensibility. Methods: Forty-two young adults participated in this study, and ten traffic symbols consisting of easy-to-comprehend and hard-to-comprehend signs were used as stimuli. During the sign comprehension test, real-time eye movements and spontaneous brain activity [electroencephalogram (EEG) data] were simultaneously recorded. Results: The comprehensibility level of symbolic traffic signs significantly affects eye movements and EEG spectral power. The harder a sign is to comprehend, the slower the blink rate, the larger the pupil diameter, and the longer the time to first fixation. Noticeable differences in EEG spectral power between easy-to-comprehend and hard-to-comprehend signs are observed over the prefrontal and visual cortex. Conclusions: Sign comprehensibility has significant effects on nonintrusively measured, real-time eye movements and brain oscillations. These findings demonstrate the potential to integrate physiological measures from eye movements and brain oscillations with existing evaluation methods in assessing the comprehensibility of symbolic safety signs.
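    The EEG spectral-power comparison can be illustrated with a simple FFT-based band-power estimate. This is a sketch under assumed parameters (sampling rate, band edges); the study's exact processing pipeline is not described in the abstract.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of a 1-D signal (sampled at fs Hz) in [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[band].mean())
```

    Comparing, say, alpha-band (8–12 Hz) power over prefrontal channels between easy and hard signs is the kind of contrast the Results section describes.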

    Shedding light on AI in radiology: A systematic review and taxonomy of eye gaze-driven interpretability in deep learning

    X-ray imaging plays a crucial role in diagnostic medicine. Yet, a significant portion of the global population lacks access to this essential technology due to a shortage of trained radiologists. Eye-tracking data and deep learning models can enhance X-ray analysis by mapping expert focus areas, guiding automated anomaly detection, optimizing workflow efficiency, and bolstering training methods for novice radiologists. However, the literature shows contradictory results regarding the usefulness of eye-tracking data in deep-learning architectures for abnormality detection. We argue that these discrepancies between studies are due to (a) the way eye-tracking data is (or is not) processed, (b) the types of deep learning architectures chosen, and (c) the type of application that these architectures will have. We conducted a systematic literature review using PRISMA to address these contradictory results. We analyzed 60 studies that incorporated eye-tracking data in a deep-learning approach for different application goals in radiology. We performed a comparative analysis to understand whether eye gaze data contains feature maps that can be useful under a deep learning approach and whether they can promote more interpretable predictions. To the best of our knowledge, this is the first survey in the area that performs a thorough investigation of eye gaze data processing techniques and their impacts on different deep learning architectures for applications such as error detection, classification, object detection, expertise level analysis, fatigue estimation and human attention prediction in medical imaging data. Our analysis resulted in two main contributions: (1) a taxonomy that first divides the literature by task, enabling us to analyze the value eye movement can bring to each case and to build guidelines regarding architectures and gaze-processing techniques adequate for each application, and (2) an overall analysis of how eye gaze data can promote explainability in radiology.
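    One widely used gaze-processing step in this literature, shown here as an illustrative sketch rather than a method from any specific surveyed paper, is rendering raw fixations into a Gaussian attention map that a network can consume as an auxiliary input channel or supervision target:

```python
import numpy as np

def gaze_heatmap(fixations, shape, sigma=10.0):
    """Render fixations [(x, y, duration), ...] into a normalised 2-D map.
    shape is (height, width); sigma controls the spread of each fixation."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape)
    for x, y, dur in fixations:
        # Duration-weighted Gaussian blob centred on the fixation point.
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()   # scale to [0, 1]
    return heat
```

    How such maps are built (blob width, duration weighting, temporal pooling) is exactly the kind of processing choice the review argues drives the contradictory results across studies.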