Interview with The University of Manchester Faculty e-learning Managers conducted by Graham McElearney for ALT News Online, Issue 18, November 2009.
Graham McElearney conducted an interview with the four Faculty e-learning Managers at The University of Manchester. This document is the full transcript of the interview. The discussion covers e-learning strategy, organisational structure, current choices of tools, and the future of the institutional VLE.
Spatially augmented audio delivery: applications of spatial sound awareness in sensor-equipped indoor environments
Current mainstream audio playback paradigms take no account of a user's physical location or orientation when delivering audio through headphones or speakers. Audio is thus usually presented as a static percept, even though a real sound environment is naturally a dynamic 3D phenomenon. This fails to take advantage of the innate psycho-acoustical perception we have of the locations of sound sources around us.
Described in this paper is an operational platform we have built to augment the sound from a generic set of wireless headphones. We do this in a way that overcomes the spatial-awareness limitation of audio playback within indoor 3D environments that are both location-aware and sensor-equipped. This platform provides access to an audio-spatial presentation modality which by its nature lends itself to numerous cross-disciplinary applications. In the paper we present the platform and two demonstration applications.
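As a rough illustration of the head-relative rendering such a platform performs, the sketch below maps a tracked listener pose and a fixed source position to constant-power stereo pan gains. The function name, coordinate conventions, and panning law are our own assumptions for illustration; the paper does not publish its implementation.

```python
import math

def pan_gains(listener_xy, head_yaw_deg, source_xy):
    """Constant-power (left, right) pan gains for a static source,
    given the listener's position and head yaw in degrees
    (0 = facing +y, increasing clockwise). Illustrative only."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    # World-frame bearing of the source, then make it head-relative.
    bearing = math.degrees(math.atan2(dx, dy))
    rel = (bearing - head_yaw_deg + 180.0) % 360.0 - 180.0  # [-180, 180)
    # Map head-relative azimuth to a pan position in [-1, 1],
    # clamping sources behind the head to full left/right.
    pan = max(-1.0, min(1.0, rel / 90.0))
    theta = (pan + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return math.cos(theta), math.sin(theta)
```

A source straight ahead yields equal gains (~0.707 each); a source at 90 degrees to the right drives the left gain to zero, preserving total power as the head turns.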
The Absolute Rate of LGRB Formation
We estimate the LGRB progenitor rate using our recent work on the effects of environmental metallicity on LGRB formation, in concert with SNe statistics, via an approach patterned loosely off the Drake equation. Beginning with the cosmic star-formation history, we consider the expected number of broad-line Type Ic events (the SNe type associated with LGRBs) that are in low-metallicity host environments, adjusted by the contribution of high-metallicity host environments at a much reduced rate. We then compare this estimate to the observed LGRB rate, corrected for instrumental selection effects, to provide a combined estimate of the efficiency fraction of these progenitors to produce LGRBs and the fraction of which are beamed in our direction. From this we estimate that an aligned LGRB occurs for approximately every 4000 low-metallicity broad-lined Type Ic supernovae. Therefore, if one assumes a semi-nominal beaming factor of 100, then only about one such supernova out of 40 produces an LGRB. Finally, we propose an off-axis LGRB search strategy of targeting for radio observation broad-line Type Ic events that occur in low-metallicity hosts.
Comment: 9 pages, 3 figures
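The abstract's final numbers follow from simple bookkeeping, which can be checked directly: if one burst in ~4000 such supernovae is aligned with us, and a beaming factor of 100 means only 1 in 100 bursts points our way, then the all-direction rate is 100/4000.

```python
# Back-of-envelope check of the quoted rates (values from the abstract).
aligned_per_sn = 1 / 4000.0    # one *aligned* LGRB per ~4000 low-Z BL-Ic SNe
beaming_factor = 100           # assumed: ~1 in 100 bursts beamed toward us
total_per_sn = aligned_per_sn * beaming_factor
print(1 / total_per_sn)        # -> 40.0: about 1 in 40 such SNe makes an LGRB
```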
An outdoor spatially-aware audio playback platform exemplified by a virtual zoo
Outlined in this short paper is a framework for constructing outdoor location- and direction-aware audio applications, along with an example application that showcases the strengths of the framework and demonstrates how it works. Although previous work in this area has concentrated on the spatial presentation of sound through wireless headphones, such sounds are typically presented as though originating from specific, defined spatial locations within a 3D environment. Allowing a user to move freely within this space and adjusting the sound dynamically, as we do here, further enhances the perceived reality of the virtual environment. Techniques to realise this are implemented by real-time adjustment of the two channels of audio presented to the headphones, using readings of the user's head orientation and location which in turn are made possible by sensors mounted upon the headphones.
Aside from proof-of-concept indoor applications, more user-responsive applications of spatial audio delivery have not been prototyped or explored. In this paper we present an audio-spatial presentation platform along with a primary demonstration application for an outdoor environment which we call a virtual audio zoo. This application explores our techniques to further improve the realism of the audio-spatial environments we can create, and to assess what types of future application are possible.
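For the outdoor case, per-frame rendering must also account for the listener's distance from each virtual animal. The sketch below combines inverse-distance attenuation with a crude interaural level cue; the function, parameter names, and the specific attenuation and ILD laws are our own stand-ins, not the paper's method.

```python
import math

def render_source(listener_xy, head_yaw_deg, source_xy, ref_dist=1.0):
    """Per-frame (left, right) gains for one virtual sound source:
    inverse-distance attenuation plus a simple interaural level
    difference. Illustrative assumptions throughout."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    att = ref_dist / max(dist, ref_dist)        # clamp inside ref radius
    rel = math.degrees(math.atan2(dx, dy)) - head_yaw_deg
    rel = (rel + 180.0) % 360.0 - 180.0         # head-relative azimuth
    # Crude ILD: bias level toward the ear nearer the source.
    ild = 0.5 * math.sin(math.radians(rel))     # -0.5 .. 0.5
    return att * (0.5 - ild), att * (0.5 + ild)
```

Called once per sensor update with fresh headset orientation and GPS readings, this kind of function is what makes a fixed virtual source appear to stay put as the listener walks and turns.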
Eye fixation related potentials in a target search task
Typically BCIs (Brain Computer Interfaces) are found in rehabilitative or restorative applications, often allowing users a medium of communication that is otherwise unavailable through conventional means. Recently, however, there is growing interest in using BCI to assist users in searching for images. A class of neural signals often leveraged in common BCI paradigms are ERPs (Event Related Potentials), which are present in the EEG (Electroencephalograph) signals from users in response to various sensory events. One such ERP is the P300, which is typically elicited in an oddball experiment where a subject's attention is orientated towards a deviant stimulus among a stream of presented images. It has been shown that these types of neural responses can be used to drive an image search or labeling task, where we can rank images by examining the presence of such ERP signals in response to the display of images. To date, systems like these have been demonstrated when presenting sequences of images containing targets at up to 10 Hz; however, the target images in these tasks do not necessitate any kind of eye movement for their detection because the targets in the images are quite salient. In this paper we analyse the presence of discriminating signals when they are offset to the time of eye fixations in a visual search task where detection of target images does require eye fixations.
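The ranking idea described above can be sketched very simply: score each fixation-locked epoch by its mean amplitude in a late positive window (where a P300-like deflection would appear) and sort images by that score. Real systems train a classifier rather than using this fixed-window heuristic, and the sampling rate, window, and channel choice below are illustrative assumptions.

```python
def rank_by_p300(epochs, fs=250, win=(0.3, 0.5)):
    """Rank images by a crude P300 score. `epochs` is a list of
    single-channel EEG epochs (e.g. from Pz), one per image,
    each time-locked to fixation onset and sampled at `fs` Hz.
    Returns image indices, most target-like first."""
    a, b = int(win[0] * fs), int(win[1] * fs)
    # Mean amplitude in the 300-500 ms post-fixation window.
    scores = [sum(e[a:b]) / (b - a) for e in epochs]
    return sorted(range(len(epochs)), key=scores.__getitem__, reverse=True)
```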
Optimising the number of channels in EEG-augmented image search
Recent proof-of-concept research has appeared showing the applicability of Brain Computer Interface (BCI) technology, in combination with the human visual system, to classify images. The basic premise here is that images that arouse a participant's attention generate a detectable response in their brainwaves, measurable using an electroencephalograph (EEG). When a participant is given a target class of images to search for, each image belonging to that target class presented within a stream of images should elicit a distinctly detectable neural response. Previous work in this domain has primarily focused on validating the technique on proof-of-concept image sets that demonstrate desired properties, and on examining the capabilities of the technique at various image presentation speeds. In this paper we expand on this by examining the capability of the technique when using a reduced number of channels in the EEG, and its impact on the detection accuracy.
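One simple way to study channel reduction of the kind investigated here is to rank channels by how well each one separates target from non-target trials and keep only the top few. The score below (absolute standardised mean difference on a per-trial feature) is a hypothetical stand-in; the paper's actual selection and classification procedure is not reproduced here.

```python
import statistics

def top_k_channels(X, y, k):
    """Pick k channels by a simple separability score.
    X: list of trials, each a list of per-channel feature values.
    y: 0/1 labels (1 = target trial). Illustrative heuristic only."""
    n_ch = len(X[0])
    scores = []
    for c in range(n_ch):
        col = [row[c] for row in X]
        targets = [v for v, lab in zip(col, y) if lab == 1]
        others = [v for v, lab in zip(col, y) if lab == 0]
        spread = statistics.pstdev(col) or 1e-12  # guard constant channels
        scores.append(abs(statistics.mean(targets)
                          - statistics.mean(others)) / spread)
    return sorted(range(n_ch), key=scores.__getitem__, reverse=True)[:k]
```

Re-running detection with only the selected channels then shows how accuracy degrades as the montage shrinks, which is the trade-off the paper examines.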
