
    Telephone interpreting at the company Dualia: A perspective from interpreters and clients

    The idea for this dissertation grew out of a reflection on multilingualism and on the right of immigrants and tourists, or more generally foreign residents, to access public services in their own language. The focus is on telephone interpreting (TI) as a means of reaching an interpreter quickly, especially in emergencies. The dissertation opens with a historical overview of interpreting, leading up to remote interpreting, which is divided into videoconference interpreting and telephone interpreting (TI). For the latter, the subject of this dissertation, its implementation is analysed and the controversies surrounding it are discussed. The research on TI was then carried out at Dualia, a leading Basque company in the Spanish sector: the company, its history, the services it provides and the interpreters' working practices are described. The second part of the dissertation presents and analyses the results of the research. The aim of the study is to capture the perspective of interpreters and clients on TI and to draw from it suggestions for improving the service. The research was conducted through two questionnaires, one addressed to TI clients and the other to the telephone interpreters working with Dualia. The client questionnaire found that clients use TI with ease, that it is a great help in their work and that the interpreters are regarded as professional. The interpreter questionnaire yielded a profile of the telephone interpreter, revealed the interpreters' inclinations, and highlighted the main problems telephone interpreters face.

    Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm

    Automated retinal image analysis has been emerging as an important diagnostic tool for the early detection of eye diseases such as glaucoma and diabetic retinopathy. In this paper, we present a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on six publicly available retinal image databases (DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor and ONHSD) and one local Shifa Hospital database. The method achieves an optic disc detection success rate of 100% for these databases, with the exception of 99.09% and 99.25% for the DRIONS-DB and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This method shows significant improvement over existing methods in the detection and boundary extraction of the optic disc.
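The centre-localisation step can be illustrated with a minimal circular Hough transform in pure NumPy. This is an illustrative sketch of the general technique, not the authors' implementation (which also applies morphological preprocessing and grow-cut refinement); the image size, radius range and angular resolution are arbitrary assumptions:

```python
import numpy as np

def circular_hough_center(edge_mask, radii):
    """Each edge pixel votes for every candidate centre lying at distance r
    from it; the accumulator maximum jointly estimates centre and radius."""
    h, w = edge_mask.shape
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for i, r in enumerate(radii):
        cy = np.rint(ys[:, None] + r * np.sin(thetas)).astype(int)
        cx = np.rint(xs[:, None] + r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[i], (cy[ok], cx[ok]), 1)
    i, y, x = np.unravel_index(acc.argmax(), acc.shape)
    return (y, x), radii[i]

# synthetic test: an edge ring of radius 20 centred at (40, 50)
mask = np.zeros((100, 100), bool)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
mask[np.rint(40 + 20 * np.sin(t)).astype(int),
     np.rint(50 + 20 * np.cos(t)).astype(int)] = True
(cy, cx), r = circular_hough_center(mask, range(15, 26))
```

Because every edge pixel votes independently, the estimate degrades gracefully when the disc boundary is broken or partially occluded by vessels, which is why the Hough step is a robust choice for centre approximation before the finer grow-cut segmentation.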

    The role of population PK-PD modelling in paediatric clinical research

    Children differ from adults in their response to drugs. While this may be the result of changes in dose–exposure (pharmacokinetic [PK]) and/or exposure–response (pharmacodynamic [PD]) relationships, the magnitude of these changes may not be solely reflected by differences in body weight. As a consequence, dosing recommendations empirically derived from adult dosing regimens using linear extrapolations based on body weight can result in therapeutic failure, adverse effects or even fatalities. In order to define rational, patient-tailored dosing schemes, population PK-PD studies in children are needed. For the analysis of the data, population modelling using non-linear mixed-effects modelling is the preferred tool, since this approach allows for the analysis of sparse and unbalanced datasets. Additionally, it permits exploration of the influence of covariates such as body weight and age to explain the variability in drug response. Finally, using this approach, PK-PD studies can be designed in the most efficient manner to obtain the maximum information on the PK-PD parameters with the highest precision. Once a population PK-PD model has been developed, internal and external validation should be performed. If the model performs well in these validation procedures, model simulations can be used to define a dosing regimen, which in turn needs to be tested and challenged in a prospective clinical trial. This methodology will improve the efficacy/safety balance of dosing guidelines, to the benefit of the individual child.
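In practice, the weight and age covariates described above often enter a population PK model as a fixed allometric exponent combined with a sigmoidal maturation function of age. The sketch below shows the general form only; the half-maturation age and Hill coefficient are illustrative placeholder values, not estimates from any particular study:

```python
def allometric_cl(cl_std, weight_kg, exponent=0.75):
    """Scale a typical clearance (standardised to 70 kg) to a patient's
    body weight with a fixed allometric exponent."""
    return cl_std * (weight_kg / 70.0) ** exponent

def maturation(pma_weeks, tm50=47.7, hill=3.4):
    """Sigmoidal (Hill-type) maturation of clearance with postmenstrual
    age; tm50 and hill here are illustrative, to be estimated from data."""
    return pma_weeks ** hill / (pma_weeks ** hill + tm50 ** hill)

# hypothetical neonate: 3.5 kg body weight, 40 weeks postmenstrual age,
# scaled from an assumed adult-standardised clearance of 16.7 L/h
cl_child = allometric_cl(16.7, 3.5) * maturation(40.0)
```

Because the allometric exponent is 0.75 rather than 1, clearance does not scale linearly with weight, which is precisely why linear per-kilogram extrapolation of adult doses can fail in the youngest patients.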

    Learned Pre-Processing for Automatic Diabetic Retinopathy Detection on Eye Fundus Images

    Diabetic retinopathy is the leading cause of blindness in the working-age population of the world. The main aim of this paper is to improve the accuracy of diabetic retinopathy detection by introducing a shadow removal and color correction step as a pre-processing stage for eye fundus images. For this, we rely on recent findings indicating that applying image dehazing in the inverted intensity domain amounts to illumination compensation. Inspired by this work, we propose a Shadow Removal Layer that allows us to learn the pre-processing function for a particular task. We show that learning the pre-processing function improves the performance of the network on the diabetic retinopathy detection task. (Accepted at the International Conference on Image Analysis and Recognition, ICIAR 2019; published at https://doi.org/10.1007/978-3-030-27272-2_3)
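The underlying idea, that shadows in the inverted intensity domain behave like haze, can be sketched with a heavily simplified dark-channel-style dehazing step. This is a crude fixed-function sketch (no spatial filtering or transmission refinement), whereas the paper replaces such a fixed pipeline with a learnable Shadow Removal Layer; all parameter values below are assumptions:

```python
import numpy as np

def compensate_shadows(img, omega=0.95, t_min=0.1):
    """Shadow/illumination compensation via dehazing in the inverted
    intensity domain. img: float RGB array in [0, 1]."""
    inv = 1.0 - img                      # shadows now look like "haze"
    dark = inv.min(axis=2)               # per-pixel dark channel
    # "atmospheric light": colour of the haziest (deepest-shadow) pixels
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = inv.reshape(-1, 3)[idx].mean(axis=0)
    t = 1.0 - omega * (inv / A).min(axis=2)   # transmission estimate
    t = np.clip(t, t_min, 1.0)[..., None]
    dehazed = (inv - A) / t + A               # standard haze-model inversion
    return np.clip(1.0 - dehazed, 0.0, 1.0)   # back to intensity domain

# toy image: bright background, a shadowed block, and a deepest-shadow core
demo = np.full((20, 20, 3), 0.8)
demo[5:15, 5:15] = 0.4
demo[9:11, 9:11] = 0.2
out = compensate_shadows(demo)
```

On the toy image, the shadowed region is brightened and the contrast between lit and shadowed areas shrinks, which is the illumination-compensation effect the paper's learned layer is initialised to mimic.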

    Plasma and CSF pharmacokinetics of meropenem in neonates and young infants: results from the NeoMero studies.

    Background: Sepsis and bacterial meningitis are major causes of mortality and morbidity in neonates and infants. Meropenem, a broad-spectrum antibiotic, is not licensed for use in neonates and infants below 3 months of age, and sufficient information on its plasma and CSF disposition and dosing in this population is lacking. Objectives: To determine the plasma and CSF pharmacokinetics of meropenem in neonates and young infants, and the link between pharmacokinetics and clinical outcomes in babies with late-onset sepsis (LOS). Methods: Data were collected in two recently conducted studies, NeoMero-1 (neonatal LOS) and NeoMero-2 (neonatal meningitis). Optimally timed plasma samples (n = 401) from 167 patients and opportunistic CSF samples (n = 78) from 56 patients were analysed. Results: A one-compartment model with allometric scaling and fixed maturation gave an adequate fit to both plasma and CSF data; clearance and volume of distribution (standardized to 70 kg) were 16.7 (95% CI 14.7, 18.9) L/h and 38.6 (95% CI 34.9, 43.4) L, respectively. CSF penetration was low (8%), but rose with increasing CSF protein, with 40% penetration predicted at a protein concentration of 6 g/L. Increasing the infusion time improved plasma target attainment but lowered CSF concentrations. For 24 patients with culture-proven Gram-negative LOS, pharmacodynamic target attainment was similar regardless of the outcome of the test-of-cure visit. Conclusions: Simulations showed that longer infusions increase the plasma probability of target attainment (PTA) but decrease the CSF PTA. Because CSF penetration worsens with long infusions, increasing dose frequency to achieve therapeutic targets should be considered.
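The plasma side of the infusion-duration trade-off can be reproduced qualitatively with a numerical one-compartment intermittent-infusion simulation. The parameter values in the example calls are illustrative, not the fitted NeoMero estimates, and protein binding and CSF kinetics are ignored:

```python
import numpy as np

def ft_above_mic(dose_mg, tinf_h, tau_h, cl_l_h, v_l, mic_mg_l,
                 n_doses=10, dt=0.005):
    """Fraction of the final dosing interval with concentration above the
    MIC, for a one-compartment model with zero-order infusion input."""
    k = cl_l_h / v_l                         # elimination rate constant
    times = np.arange(0.0, n_doses * tau_h, dt)
    amount = 0.0
    conc = np.empty_like(times)
    for i, t in enumerate(times):
        infusing = (t % tau_h) < tinf_h
        rate = dose_mg / tinf_h if infusing else 0.0
        amount += (rate - k * amount) * dt   # forward-Euler step
        conc[i] = amount / v_l
    last = times >= (n_doses - 1) * tau_h    # evaluate at steady state
    return float((conc[last] > mic_mg_l).mean())

# same total dose, short vs prolonged infusion (illustrative parameters)
f_short = ft_above_mic(80, 0.5, 8.0, cl_l_h=2.0, v_l=5.0, mic_mg_l=2.0)
f_long = ft_above_mic(80, 4.0, 8.0, cl_l_h=2.0, v_l=5.0, mic_mg_l=2.0)
```

Lengthening the infusion at a fixed total dose raises the fraction of the interval spent above the MIC in plasma; the opposing CSF effect found in the study is not captured by this one-compartment sketch.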

    Building urban datasets for the SDGs. Six European cities monitoring the 2030 Agenda

    Local governments stand at the frontline of social, economic and environmental challenges, even more so in times of emergencies and disruptive change. European local governments, and cities in particular, are increasingly using the framework of the Sustainable Development Goals (SDGs) to support the design, monitoring and evaluation of their strategies and activities. Indeed, the 2030 Agenda and its SDGs have proven to add value to the elaboration of strategies at different geographical and institutional levels. The evidence-based approach is one of the main features of the 2030 Agenda, and it has fostered a common language for discussing sustainable development, in particular with regard to monitoring. In this framework, the Joint Research Centre of the European Commission has developed an integrated approach combining methodological contributions on local SDG monitoring, a valuable tool for transposing the 2030 Agenda into the local context and fostering the creation of SDG ecosystems, with hands-on cooperation with cities to test and continuously improve the proposed framework so that it can properly assist municipalities willing to engage in a Voluntary Local Review. This report is one of the building blocks of this work. It presents the results of the analyses performed in partnership with six European pilot cities between 2020 and 2021. For each city, Bratislava (SK), Reggio Emilia (IT), Oulu (FI), Porto (PT), Seville (ES) and Valencia (ES), the report details the availability of data for calculating the indicators proposed in the first edition of the European Handbook for SDG Voluntary Local Reviews, as well as the local alternatives used when data were not available or when cities preferred, in accordance with their local priorities, to measure different indicators. In conclusion, for each city, the report illustrates the overall process of building a local SDG monitoring system and assesses the city's SDG monitoring capacities, identifying challenges encountered during the process, gaps to address and strengths on which to build. (JRC.B.3 - Territorial Development)

    An efficient intelligent analysis system for confocal corneal endothelium images

    A confocal microscope provides a sequence of images of the corneal layers and structures at different depths, from which clinicians can extract clinical information on the state of health of the patient's cornea. A hybrid model based on snakes and particle swarm optimisation (S-PSO) is proposed in this paper to analyse confocal endothelium images. The proposed system is able to pre-process images (including quality enhancement and noise reduction), detect cells, measure cell densities and identify abnormalities in the analysed data sets. Three normal corneal data sets acquired using a confocal microscope and three abnormal confocal endothelium images associated with diseases have been investigated with the proposed system. Promising results are presented, and the performance of the system is compared with a manual approach and two morphology-based approaches. The average differences between the manual cell densities and the automatic cell densities calculated using S-PSO and the two morphology-based approaches are 5%, 7% and 13%, respectively. The developed system can be deployed as a clinical tool to underpin the expertise of ophthalmologists in analysing confocal corneal images.
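The particle-swarm half of the S-PSO hybrid can be sketched as a generic box-constrained minimiser. This is a textbook PSO, not the paper's snake-coupled variant, and the hyperparameters are conventional defaults rather than the authors' settings:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over a box; returns best position and value.
    Each particle blends inertia, attraction to its personal best, and
    attraction to the swarm-wide best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# demo on a shifted sphere function with minimum at (1, 1)
best_x, best_val = pso(lambda p: float(((p - 1.0) ** 2).sum()),
                       bounds=[(-5, 5), (-5, 5)])
```

In the S-PSO setting, the objective would be a snake energy over candidate cell contours, so the swarm searches contour-parameter space rather than the 2-D toy space shown here.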

    A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology

    Background and Objective: Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been performed manually by ophthalmologists using time-consuming, highly subjective semi-automatic tools that require operator interaction. We developed and applied a fully automated, real-time system, termed the Corneal Endothelium Analysis System (CEAS), for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. Methods: First, a Fast Fourier Transform (FFT) band-pass filter is applied to reduce noise and enhance image quality, making the cells more visible. Secondly, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images, based on a database of 40 corneal confocal endothelial cell images, in terms of segmentation accuracy and the clinical features obtained. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). Results: The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9 (p < 0.0001), and a Bland–Altman plot shows that 95% of the data lie between the 2SD agreement lines. Conclusions: We demonstrate the effectiveness and robustness of the CEAS system, and the possibility of utilizing it in a real-world clinical setting to enable rapid diagnosis and patient follow-up, with an execution time of only 6 seconds per image.
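The first step of the Methods, an FFT band-pass filter, can be sketched in a few lines. The cut-off radii here are hypothetical tuning parameters, not the CEAS settings:

```python
import numpy as np

def fft_bandpass(img, low_cut, high_cut):
    """Keep spatial frequencies between low_cut and high_cut (in cycles
    per image): low frequencies carry slow illumination gradients, high
    frequencies carry pixel noise; the mid band retains cell boundaries."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)   # radius in frequency space
    mask = (r >= low_cut) & (r <= high_cut)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# demo: a pure sinusoid at 10 cycles/image is kept by a band that
# contains its frequency and removed by one that excludes it
x = np.arange(64)
wave = np.tile(np.sin(2 * np.pi * 10 * x / 64), (64, 1))
kept = fft_bandpass(wave, 5, 20)
removed = fft_bandpass(wave, 20, 30)
```

Choosing the band so that it brackets the typical endothelial cell-boundary scale is what makes the cells "more visible" to the subsequent watershed segmentation.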