3 research outputs found

    MEDICINFOSYS: AN ARCHITECTURE FOR AN EVIDENCE-BASED MEDICAL INFORMATION RESEARCH AND DELIVERY SYSTEM

    No full text
    Due to the complicated nature of medical information needs, the time constraints of clinicians, and the linguistic complexity and sheer volume of medical information, most medical questions go unanswered. It has been shown that nearly all of these questions can be answered with presently available medical sources, and that when these questions are answered, patient health benefits. In this work, we design and describe a framework for evidence-based medical information research and delivery, MedicInfoSys. This system leverages the strengths of knowledge workers and of mature knowledge-based technologies within the medical domain. The most critical element of this framework is a search interface, PifMed. PifMed uses the gold-standard MeSH categorization (presently integrated into MEDLINE) as the basis of a navigational structure, which allows users to browse search results with an interactive tree of categories. Evaluation by user study shows it to be superior to PubMed in terms of speed and usability.
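    The navigational structure described above amounts to grouping MEDLINE query results under the hierarchical MeSH tree numbers attached to each citation, so that expanding a category reveals the articles (and subcategories) beneath it. Below is a minimal sketch of that grouping step, assuming hypothetical article records and a made-up build_mesh_tree helper; neither the data layout nor the function name is taken from the paper.

        from collections import defaultdict

        # Hypothetical article records: each carries MeSH tree numbers,
        # where "C14.280" is a child of "C14" in the MeSH hierarchy.
        articles = [
            {"pmid": "1001", "title": "Beta blockers in heart failure", "mesh": ["C14.280"]},
            {"pmid": "1002", "title": "Statins and stroke prevention",  "mesh": ["C14.907", "C10.228"]},
            {"pmid": "1003", "title": "Migraine prophylaxis review",    "mesh": ["C10.228"]},
        ]

        def build_mesh_tree(results):
            """Group query results under every prefix of their MeSH tree
            numbers, yielding the category tree a user could browse."""
            tree = defaultdict(list)
            for art in results:
                for code in art["mesh"]:
                    parts = code.split(".")
                    for depth in range(1, len(parts) + 1):
                        tree[".".join(parts[:depth])].append(art["pmid"])
            return dict(tree)

        if __name__ == "__main__":
            for category, pmids in sorted(build_mesh_tree(articles).items()):
                print(f"{category}: {pmids}")

    In an interactive interface, each key of this mapping would become an expandable node, with its PMIDs listed (or counted) beneath it.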

    MeSH Represented MEDLINE Query Results

    No full text

    JALI

    Full text link
    The rich signals we extract from facial expressions impose high expectations for the science and art of facial animation. While the advent of high-resolution performance capture has greatly improved realism, the utility of procedural animation warrants a prominent place in the facial animation workflow. We present a system that, given an input audio soundtrack and speech transcript, automatically generates expressive lip-synchronized facial animation that is amenable to further artistic refinement, and that is comparable with both performance capture and professional animator output. Because of the diversity of ways we produce sound, the mapping from phonemes to their visual depictions as visemes is many-valued. We draw from psycholinguistics to capture this variation using two visually distinct anatomical actions: JAw and LIp, where sound is primarily controlled by jaw articulation and lower-face muscles, respectively. We describe the construction of a transferable template JALI 3D facial rig, built upon the popular facial muscle action unit representation, FACS. We show that acoustic properties in a speech signal map naturally to the dynamic degree of jaw and lip in visual speech. We provide an array of compelling animation clips, compare against performance capture and existing procedural animation, and report on a brief user study.
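    The key idea is that two independent per-frame controls, jaw and lip activation, can be driven from acoustic features of the speech signal. The sketch below is a loose illustration of that idea only: the choice of features (RMS intensity for jaw, spectral centroid as a rough proxy for lip articulation) and the linear scaling are assumptions for demonstration, not the mapping used in the JALI paper.

        import numpy as np

        def jali_controls(frame_rms, frame_spectral_centroid,
                          rms_range=(0.01, 0.3), centroid_range=(500.0, 4000.0)):
            """Map per-frame acoustic features to (jaw, lip) activations in [0, 1].

            Illustrative only: louder frames open the jaw more, and a higher
            spectral centroid raises lip articulation. The actual JALI model
            derives its jaw/lip values differently.
            """
            def norm(x, lo, hi):
                return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

            jaw = norm(frame_rms, *rms_range)
            lip = norm(frame_spectral_centroid, *centroid_range)
            return jaw, lip

        if __name__ == "__main__":
            # Fake per-frame features for a short utterance.
            rms = [0.02, 0.12, 0.25, 0.08]
            centroid = [800.0, 1500.0, 3500.0, 1200.0]
            for r, c in zip(rms, centroid):
                print(jali_controls(r, c))

    The resulting (jaw, lip) pair would then modulate how strongly the rig's jaw-driven and lip-driven viseme shapes are expressed at each frame.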