261 research outputs found

    RecordMe: A Smartphone Application for Experimental Collections of Large Amount of Data Respecting Volunteer's Privacy

    No full text
    Since the spread of smartphones, researchers have had the opportunity to collect more ecologically valid data. However, despite the many advantages of existing databases (e.g., clean data, direct comparison), they may not meet all the criteria of a particular experiment, forcing a tradeoff between the gains they provide and the lack of certain labels or data sources. In this paper, we introduce RecordMe, an Android application ready for use by the research community. RecordMe continuously records many different sensors and sources and provides a basic GUI for quick and easy setup. A mark-up interface is also embedded for experiments that need it. Because some of the data are highly sensitive, RecordMe includes features for protecting volunteers' privacy and securing their data. RecordMe has already been successfully tested on different smartphones for three data collections.
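    The abstract does not detail how the privacy features work; the thesis abstract later in this list mentions storing MFCCs instead of raw audio and hashing sensitive strings on the phone. A minimal Python sketch of that idea (RecordMe itself is an Android app; the function names, the salt scheme, and the use of `librosa` are illustrative assumptions, not the app's actual implementation):

```python
import hashlib

import librosa  # assumed available; any MFCC extractor would do


def anonymize_identifier(raw_id: str, salt: str) -> str:
    """Hash a sensitive string (device ID, GPS coordinate, ...) before storage."""
    return hashlib.sha256((salt + raw_id).encode("utf-8")).hexdigest()


def audio_to_mfcc(wav_path: str, n_mfcc: int = 13):
    """Keep only MFCCs: the raw waveform is never written to disk,
    so the recorded speech cannot be played back afterwards."""
    signal, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
```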

    Speech collection for the study of under-resourced or endangered languages with the Lig-Aikuma mobile application

    No full text
    In this paper we report on ongoing work on the collection of under-resourced or endangered African languages. A data collection was carried out with a modified version of the Android application AIKUMA, initially developed by Steven Bird and colleagues (Bird et al., 2014). The modifications follow the specifications of the French-German ANR/DFG BULB project, so as to ease the field collection of parallel speech corpora. The resulting application, called LIG-AIKUMA, has been successfully tested on several smartphones and tablets and offers several operating modes (speech recording, speech respeaking, translation and elicitation). Among other features, LIG-AIKUMA supports the generation and advanced manipulation of metadata files, as well as alignment information between parallel spoken sentences in the respeaking and translation modes. The application was used during field collection campaigns in Congo-Brazzaville, enabling the acquisition of 80 hours of speech. The design of the application and its use in two collection campaigns are described in more detail in this paper.

    Forces and trauma associated with minimally invasive image-guided cochlear implantation

    Get PDF
    Objective. Minimally invasive image-guided cochlear implantation (CI) utilizes a patient-customized microstereotactic frame to access the cochlea via a single drill-pass. We investigate the average force and trauma associated with the insertion of lateral wall CI electrodes using this technique. Study Design. Assessment using cadaveric temporal bones. Setting. Laboratory setup. Subjects and Methods. Microstereotactic frames for 6 fresh cadaveric temporal bones were built using CT scans to determine an optimal drill path, after which drilling was performed. CI electrodes were inserted using surgical forceps to manually advance the electrode array, via the drilled tunnel, into the cochlea. Forces were recorded using a 6-axis load sensor placed under the temporal bone during the insertion of lateral wall electrode arrays (2 each of Nucleus CI422, MED-EL standard, and modified MED-EL electrodes with stiffeners). Tissue histology was performed by microdissection of the otic capsule and apical photo documentation of electrode position and intracochlear tissue. Results. After drilling, CT scanning demonstrated successful access to the cochlea in all 6 bones. Average insertion forces ranged from 0.009 to 0.078 N. Peak forces were in the range of 0.056 to 0.469 N. Tissue histology showed complete scala tympani insertion in 5 specimens and scala vestibuli insertion in the remaining specimen, with depth of insertion ranging from 360° to 600°. No intracochlear trauma was identified. Conclusion. The use of lateral wall electrodes with the minimally invasive image-guided CI approach was associated with insertion forces comparable to those of traditional CI surgery. Deep insertions were obtained without identifiable trauma. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2014
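    For readers unfamiliar with how scalar insertion forces are derived from a 6-axis load sensor, a plausible reduction is to take the Euclidean norm of the three force components per sample and then summarize over the insertion; a small numpy sketch under that assumption (the column order and sampling layout are illustrative, not taken from the paper):

```python
import numpy as np


def summarize_insertion_forces(samples: np.ndarray) -> tuple[float, float]:
    """samples: one row per time step, columns assumed to be Fx, Fy, Fz, Tx, Ty, Tz.
    Returns (average, peak) resultant force in newtons over the insertion."""
    forces = samples[:, :3]                     # keep the three force axes (N)
    magnitude = np.linalg.norm(forces, axis=1)  # resultant force per sample
    return float(magnitude.mean()), float(magnitude.max())
```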

    LIG-AIKUMA: a Mobile App to Collect Parallel Speech for Under-Resourced Language Studies

    No full text
    This paper reports on our ongoing efforts to collect speech data in under-resourced or endangered languages of Africa. Data collection is carried out using an improved version of the Android application AIKUMA developed by Steven Bird and colleagues [1]. Features were added to the app in order to facilitate the collection of parallel speech data in line with the requirements of the French-German ANR/DFG BULB (Breaking the Unwritten Language Barrier) project. The resulting app, called LIG-AIKUMA, runs on various mobile phones and tablets and proposes a range of different speech collection modes (recording, respeaking, translation and elicitation). It was used for field data collections in Congo-Brazzaville, resulting in a total of over 80 hours of speech.
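    The paper highlights metadata generation and the mapping between parallel audio files (original versus respoken or translated). One way to picture such a record, sketched as JSON-style Python; the field names and file layout are illustrative assumptions, not LIG-AIKUMA's actual schema:

```python
import json

# Illustrative metadata record linking an original recording to its
# respoken and translated counterparts (all field names are assumptions).
record = {
    "session_id": "congo_2015_003",
    "speaker": {"id": "spk_12", "native_language": "Embosi"},
    "mode": "respeaking",
    "original_audio": "orig_003.wav",
    "respoken_audio": "resp_003.wav",
    "translation_audio": "trans_003_fr.wav",
    "segment_alignment": [  # start/end times (s) of matching utterances
        {"orig": [0.0, 4.2], "resp": [0.0, 5.1]},
        {"orig": [4.2, 9.8], "resp": [5.1, 11.0]},
    ],
}

with open("orig_003.json", "w", encoding="utf-8") as f:
    json.dump(record, f, ensure_ascii=False, indent=2)
```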

    On-line Context Aware Physical Activity Recognition from the Accelerometer and Audio Sensors of Smartphones

    No full text
    Activity Recognition (AR) from smartphone sensors has become a hot topic in the mobile computing domain, since it can provide services directly to the user (health monitoring, fitness, context awareness) as well as to third-party applications and social networks (performance sharing, profiling). Most of the research effort has focused on direct recognition from accelerometer sensors, and few studies have integrated the audio channel into their model, despite it being a sensor that is always available on all kinds of smartphones. In this study, we show that audio features bring an important performance improvement over an accelerometer-based approach. Moreover, the study demonstrates the interest of considering the smartphone location for on-line context-aware AR and the predictive power of audio features for this task. Finally, another contribution of the study is the collected corpus, which is made available to the community for AR from audio and accelerometer sensors.
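    The abstract's central claim is that audio features improve over accelerometer-only recognition; a minimal early-fusion sketch in scikit-learn illustrates the kind of setup that supports such a comparison (the descriptors, array shapes, and classifier choice are assumptions; the study's actual features and models may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def train_fused_model(X_accel, X_audio, y):
    """X_accel: per-window accelerometer descriptors (e.g., mean, variance, energy).
    X_audio: per-window audio descriptors (e.g., MFCC statistics).
    y: activity label per window. All assumed precomputed."""
    X = np.hstack([X_accel, X_audio])  # early fusion: concatenate feature vectors
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf
```

    Training the same classifier on `X_accel` alone versus the fused `X` gives the accelerometer-only baseline against which the audio gain can be measured.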

    Basal body stability and ciliogenesis requires the conserved component Poc1

    Get PDF
    Centrioles are the foundation for centrosome and cilia formation. The biogenesis of centrioles is initiated by an assembly mechanism that first synthesizes the ninefold symmetrical cartwheel and subsequently leads to a stable cylindrical microtubule scaffold that is capable of withstanding microtubule-based forces generated by centrosomes and cilia. We report that the conserved WD40 repeat domain–containing cartwheel protein Poc1 is required for the structural maintenance of centrioles in Tetrahymena thermophila. Furthermore, human Poc1B is required for primary ciliogenesis, and in zebrafish, DrPoc1B knockdown causes ciliary defects and morphological phenotypes consistent with human ciliopathies. T. thermophila Poc1 exhibits a protein incorporation profile commonly associated with structural centriole components in which the majority of Poc1 is stably incorporated during new centriole assembly. A second dynamic population assembles throughout the cell cycle. Our experiments identify novel roles for Poc1 in centriole stability and ciliogenesis

    RhoB controls coordination of adult angiogenesis and lymphangiogenesis following injury by regulating VEZF1-mediated transcription

    Get PDF
    Mechanisms governing the distinct temporal dynamics that characterize post-natal angiogenesis and lymphangiogenesis elicited by cutaneous wounds and inflammation remain unclear. RhoB, a stress-induced small GTPase, modulates cellular responses to growth factors, genotoxic stress and neoplastic transformation. Here we show, using RhoB null mice, that loss of RhoB decreases pathological angiogenesis in the ischaemic retina and reduces angiogenesis in response to cutaneous wounding, but enhances lymphangiogenesis following both dermal wounding and inflammatory challenge. We link these unique and opposing roles of RhoB in blood versus lymphatic vasculatures to the RhoB-mediated differential regulation of sprouting and proliferation in primary human blood versus lymphatic endothelial cells. We demonstrate that nuclear RhoB-GTP controls expression of distinct gene sets in each endothelial lineage by regulating VEZF1-mediated transcription. Finally, we identify a small-molecule inhibitor of VEZF1–DNA interaction that recapitulates RhoB loss in ischaemic retinopathy. Our findings establish the first intra-endothelial molecular pathway governing the phased response of angiogenesis and lymphangiogenesis following injury

    Note on the Austrian black pine: southern region

    Get PDF

    Innovative technologies for under-resourced language documentation: The BULB Project

    No full text
    The project Breaking the Unwritten Language Barrier (BULB), which brings together linguists and computer scientists, aims at supporting linguists in documenting unwritten languages. In order to achieve this, we will develop tools tailored to the needs of documentary linguists by building upon technology and expertise from the area of natural language processing, most prominently automatic speech recognition and machine translation. As a development and test bed we have chosen three less-resourced African languages from the Bantu family: Basaa, Myene and Embosi. Work within the project is divided into three main steps: 1) Collection of a large corpus of speech (100h per language) at a reasonable cost. After initial recording, the data is re-spoken by a reference speaker to enhance the signal quality and orally translated into French. 2) Automatic transcription of the Bantu languages at phoneme level and of the French translation at word level. The recognized Bantu phonemes and French words will then be automatically aligned. 3) Tool development. In close cooperation and discussion with the linguists, the speech and language technologists will design and implement tools that support the linguists in their work, taking into account the linguists' needs and the technology's capabilities. The data collection has begun for the three languages. For this we use standard mobile devices and dedicated software, LIG-AIKUMA, which proposes a range of different speech collection modes (recording, respeaking, translation and elicitation). LIG-AIKUMA's improved features include smart generation and handling of speaker metadata as well as respeaking and parallel audio data mapping.
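    The abstract does not specify how the recognized Bantu phonemes and French words of step 2 are aligned; a classical baseline for this kind of cross-lingual alignment is IBM Model 1. A toy EM sketch under that assumption (not necessarily the BULB project's actual method):

```python
from collections import defaultdict


def ibm_model1(pairs, iterations=10):
    """Toy IBM Model 1 EM over sentence-aligned pairs
    (source phoneme sequence, target French word sequence).
    Returns t[(phoneme, word)], an estimate of P(phoneme | word)."""
    t = defaultdict(lambda: 1e-3)  # near-uniform initialization
    for _ in range(iterations):
        count = defaultdict(float)  # expected co-occurrence counts
        total = defaultdict(float)  # normalizer per target word
        for src, tgt in pairs:
            for s in src:
                norm = sum(t[(s, w)] for w in tgt)
                for w in tgt:
                    c = t[(s, w)] / norm  # soft alignment of s to w
                    count[(s, w)] += c
                    total[w] += c
        for (s, w), c in count.items():  # M-step: renormalize
            t[(s, w)] = c / total[w]
    return t
```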

    Embedded multimodal scene recognition

    Get PDF
    Context: This PhD takes place in the context of ambient intelligence and (mobile) context/scene awareness. Historically, the project comes from the company ST-Ericsson. It arose from the need to develop and embed a "context server" on the smartphone that would gather and provide context information to the applications requiring it. One use case was given for illustration: when someone is in a meeting and receives a call, then thanks to its understanding of the current scene (meeting at work), the smartphone can automatically react and, in this case, switch to vibrate mode so as not to disturb the meeting. The main problems consist of i) proposing a definition of what a scene is and which examples of scenes would suit the use case, ii) acquiring a corpus of data to be exploited by machine-learning approaches, and iii) proposing algorithmic solutions to the scene recognition problem.

    Data collection: After a review of existing databases, it appeared that none fitted the criteria I fixed (long continuous recordings; multi-source synchronized recordings necessarily including audio; relevant labels). Hence, I developed an Android application for collecting data. The application is called RecordMe and has been successfully tested on 10+ devices running Android 2.3 and 4.0. It has been used for 3 different campaigns, including the one for scenes. This resulted in 500+ hours recorded with 25+ volunteers, mostly in the Grenoble area but also abroad (Dublin, Singapore, Budapest). The application and the collection protocol both include features for protecting the volunteers' privacy: for instance, raw audio is not saved; instead, MFCCs are saved, and sensitive strings (GPS coordinates, device IDs) are hashed on the phone.

    Scene definition: The study of existing work related to the scene recognition task, along with the analysis of the annotations provided by the volunteers during the data collection, allowed me to propose a definition of a scene. It is defined as a generalisation of a situation, composed of a place and an action performed by one person (the smartphone owner). Examples of scenes include taking transportation, being in a work meeting, and walking in the street. The composition makes it possible to provide different kinds of information on the current scene. However, the definition is still too generic, and I think it could be completed with additional information, integrated as new elements of the composition.

    Algorithmics: I have performed experiments involving machine-learning techniques, both supervised and unsupervised. The supervised part is about classification. The method is quite standard: find relevant descriptors of the data through an attribute selection method, then train and test several classifiers (in my case, J48 and Random Forest trees; GMM; HMM; and DNN). I have also tried a 2-stage system composed of a first layer of classifiers trained to identify intermediate concepts, whose predictions are merged in order to estimate the most likely scene. The unsupervised part of the work aimed at extracting information from the data. For this purpose, I applied bottom-up hierarchical clustering, based on the EM algorithm, to the acceleration and audio data, taken separately and together. One of the results is the distinction of acceleration into groups based on the amount of agitation.
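    The 2-stage system described above (intermediate-concept classifiers whose predictions are merged to estimate the scene) resembles stacking; a hedged scikit-learn sketch under that reading, where the concept set and classifier choices are assumptions rather than the thesis's exact configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def train_two_stage(X, concept_labels, y_scene):
    """Stage 1: one classifier per intermediate concept (e.g., place, motion).
    Stage 2: a scene classifier fed with the concepts' predicted probabilities.
    concept_labels: dict mapping concept name -> label array aligned with X."""
    stage1, probas = {}, []
    for name, y_concept in concept_labels.items():
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, y_concept)
        stage1[name] = clf
        probas.append(clf.predict_proba(X))
    meta_features = np.hstack(probas)  # merged concept predictions
    stage2 = RandomForestClassifier(n_estimators=100, random_state=0)
    stage2.fit(meta_features, y_scene)
    return stage1, stage2
```

    A production version would derive the stage-1 probabilities by cross-validation rather than from the training fit, to avoid leaking overconfident predictions into stage 2.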