The innovation binder approach: a guide towards a social-technical balanced pervasive health system
User-driven design of a context-aware application: an ambient-intelligent nurse call system
The envisioned ambient-intelligent patient room contains numerous devices to sense and adjust the environment, monitor patients and support caregivers. Context-aware techniques are often used to combine and exploit the heterogeneous data offered by these devices to improve the provision of continuous care. However, the adoption of context-aware applications lags behind expectations because they are not adapted to the users' daily work practices, lack personalization of their services and do not tackle problems such as the users' need for control. To remedy this, an interdisciplinary methodology was investigated and designed in this research to involve the users in each step of the development cycle of the context-aware application. The methodology was used to develop an ambient-intelligent nurse call system, which uses the gathered context data to find the most appropriate caregivers to handle a patient's call and to generate new calls based on sensor data. Moreover, a smartphone application was developed that lets caregivers receive and assess calls. The lessons learned during the user-driven development of this system are highlighted.
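The core idea of routing a call to "the most appropriate caregiver" can be illustrated with a minimal sketch. The abstract does not specify the selection criteria, so the attributes below (distance, qualification, availability) and the `rank_caregivers` function are hypothetical stand-ins for whatever context data the actual system combines:

```python
from dataclasses import dataclass

@dataclass
class Caregiver:
    name: str
    distance_m: float   # assumed context datum: distance to the patient's room
    qualified: bool     # assumed: holds the competence the call requires
    busy: bool          # assumed: currently handling another call

def rank_caregivers(caregivers, max_distance_m=100.0):
    """Return available, qualified caregivers, nearest first."""
    eligible = [c for c in caregivers
                if c.qualified and not c.busy and c.distance_m <= max_distance_m]
    return sorted(eligible, key=lambda c: c.distance_m)

staff = [
    Caregiver("A", 40.0, True, False),
    Caregiver("B", 10.0, True, True),   # nearest, but busy: excluded
    Caregiver("C", 25.0, True, False),
]
best = rank_caregivers(staff)
print([c.name for c in best])  # ['C', 'A']
```

A real ambient-intelligent system would derive these attributes from sensors and an ontology rather than hard-coded fields, and would likely weigh several factors instead of sorting on distance alone.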
Using Ontology and Gamification to Improve Students’ Participation and Motivation in CSCL
Assessing the importance of audio/video synchronization for simultaneous translation of video sequences
Lip synchronization is considered a key parameter during interactive communication. In the case of video conferencing and television broadcasting, the differential delay between audio and video should remain below certain thresholds, as recommended by several standardization bodies. However, further research has shown that these thresholds can be relaxed, depending on the targeted application and use case. In this article, we investigate the influence of lip sync on the ability to perform real-time language interpretation during video conferencing. Furthermore, we are also interested in determining lip sync visibility thresholds appropriate to this use case. Therefore, we conducted a subjective experiment with expert interpreters, who were required to perform a simultaneous translation, and with non-experts. Our results show that significantly different thresholds are obtained when the subjective experiment is conducted with expert interpreters. As interpreters are primarily focused on performing the simultaneous translation, their lip sync detectability thresholds are higher than the existing recommended thresholds. As such, primary focus and the targeted application and use case are important factors to consider when selecting lip sync acceptability thresholds.
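The threshold test described above can be sketched as a simple window check on the audio/video differential delay. The article's own thresholds are not given here; the default limits below are illustrative values loosely based on published broadcast detectability recommendations (e.g. ITU-R BT.1359), and the sign convention (positive = audio leads video) is an assumption:

```python
def lip_sync_ok(delay_ms, lead_limit_ms=45, lag_limit_ms=125):
    """True if the audio/video skew stays within the acceptance window.

    delay_ms: audio timestamp minus video timestamp, in milliseconds.
              Positive means audio leads the video; negative means it lags.
    Viewers tolerate audio lagging (sound arriving late, as in nature)
    better than audio leading, hence the asymmetric window.
    """
    return -lag_limit_ms <= delay_ms <= lead_limit_ms

print(lip_sync_ok(30))    # within the window -> True
print(lip_sync_ok(-200))  # audio lags too far behind the video -> False
```

The article's finding would correspond to widening this window when the viewers are interpreters absorbed in a translation task, since their detectability thresholds are higher than the broadcast recommendations.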
