REMOTE SENSING DATA ANALYSIS FOR ENVIRONMENTAL AND HUMANITARIAN PURPOSES. The automation of information extraction from free satellite data.
This work is aimed at investigating technical possibilities to provide information on environmental
parameters that can be used for risk management.
The World Food Programme (WFP) is the United Nations agency involved in risk management for fighting hunger in least-developed and low-income countries, where victims of natural and man-made disasters, refugees, displaced people and the hungry poor suffer from severe food shortages.
Risk management includes three different phases (pre-disaster, response and post-disaster), each managed through different activities and actions. Pre-disaster activities are meant to develop and deliver risk assessments, establish prevention actions and prepare the operational structures for managing a possible emergency or disaster. In the response and post-disaster phases, the actions planned in the pre-disaster phase are executed, focusing first on saving lives and secondly on social and economic recovery.
In order to manage its operations in the response and post-disaster phases optimally, WFP needs to know, as soon as possible, the areas affected by a natural disaster, the number of affected people, and the effects the event may have on vegetation, so that its impact on future food security can be estimated. For this purpose, easy-to-consult thematic maps of the affected areas and population, with adequate spatial resolution, temporal frequency and regular updating, can be decisive. Satellite remote sensing data have increasingly been used in recent decades to provide updated information about the land surface with an acceptable temporal frequency. Furthermore, satellite images can be processed by automatic procedures to extract synthetic information about ground conditions in a very short time, and the results can be easily shared on the web.
The thesis work, focused on the analysis and processing of satellite data, was carried out in cooperation with the ITHACA association (Information Technology for Humanitarian Assistance, Cooperation and Action), a research center that works with WFP to provide IT products and tools for the management of food emergencies caused by natural disasters. These products should facilitate the forecasting of the effects of catastrophic events, the estimation of the extent and location of the areas hit by an event and of the affected population, and thereby the planning of interventions in the areas that could suffer from food insecurity. The requested features of these instruments are:
• Regular updating
• Spatial resolution suitable for a synoptic analysis
• Low cost
• Easy consultation
ITHACA is developing several activities to provide georeferenced thematic data to WFP users, such as a spatial data infrastructure for storing, querying and manipulating large amounts of global geographic information and for sharing it within a large and differentiated community; an early warning system for floods; a drought monitoring tool; procedures for rapid mapping in the response phase of a natural disaster; and web GIS tools to distribute and share georeferenced information that can be consulted with nothing more than a web browser.
The thesis work is aimed at providing applications for the automatic production of base georeferenced thematic data, using free global satellite data whose characteristics are suitable for analysis at a regional scale. The main themes of the applications are water bodies and vegetation phenology. The first application provides procedures for the automatic extraction of water bodies and will lead to the creation and updating of a historical archive, which can be analyzed to capture the seasonality of water bodies and delineate scenarios of historically flooded areas. The automatic extraction of phenological parameters from satellite data will make it possible to integrate the existing drought monitoring system with information on vegetation seasonality and to provide further information for the evaluation of food insecurity in the post-disaster phase.
The thesis describes the activities carried out to develop procedures for the automatic processing of free satellite data, producing customized layers according to the format and distribution requirements of the final users.
The main activities, which focused on the development of an automated procedure for the extraction of flooded areas, include the search for an algorithm for the classification of water bodies from satellite data, an important theme in the management of emergencies due to flood events. Two main technologies are generally used: active sensors (radar) and passive sensors (optical data). Active sensors can acquire measurements at any time, regardless of the time of day or season, while passive sensors can only be used in daytime, cloud-free conditions. Even though radar technologies can provide information on the ground in all weather conditions, radar data cannot be used to build a continuous archive of flooded areas, because image acquisitions do not follow a predetermined frequency. For this reason, the chosen dataset was MODIS (Moderate Resolution Imaging Spectroradiometer): optical data with daily frequency, a spatial resolution of 250 meters and a historical archive of 10 years. Cloud cover prevents acquisition of the Earth's surface, and cloud shadows can be wrongly classified as water bodies because their spectral response is very similar to that of water. After a review of the state of the art in automated classification of water bodies from optical imagery, the author developed an algorithm that classifies the reflectance data and composites them temporally in order to obtain flooded-area scenarios for each event. The procedure was tested in Bangladesh, providing encouraging classification accuracies.
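As an illustration of the kind of processing described above, the following is a minimal sketch, not the author's exact algorithm: it applies an illustrative threshold-based water test to MODIS-like red/NIR surface reflectance grids and then composites the daily masks over an event window. The band arrays, thresholds and minimum number of hits are assumptions.

```python
import numpy as np

def classify_water(red, nir, nir_threshold=0.15, ndvi_threshold=0.1):
    """Flag water pixels: water is dark in the near-infrared and has low NDVI.

    The thresholds are illustrative, not the values used in the thesis.
    """
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return (nir < nir_threshold) & (ndvi < ndvi_threshold)

def composite_flood_extent(daily_masks, min_hits=2):
    """Temporally composite daily water masks into a flooded-area scenario.

    A pixel enters the scenario if it is classified as water in at least
    `min_hits` of the (possibly cloud-affected) daily scenes.
    """
    stack = np.stack(daily_masks)          # shape: (days, rows, cols)
    return stack.sum(axis=0) >= min_hits

# Toy example with synthetic 250 m reflectance grids over a 3-day event window.
rng = np.random.default_rng(0)
daily = [classify_water(rng.uniform(0.0, 0.3, (100, 100)),
                        rng.uniform(0.0, 0.4, (100, 100))) for _ in range(3)]
flood_scenario = composite_flood_extent(daily)
```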
For the vegetation theme, the main activities performed, described here, include a review of the existing methodologies for phenological studies and the automation of the data flow between inputs and outputs using different global free satellite datasets. In the literature, many studies have demonstrated the utility of the NDVI (Normalized Difference Vegetation Index) for monitoring vegetation dynamics, studying cultivations and surveying vegetation water stress. The author developed a procedure for creating layers of phenological parameters that integrates the TIMESAT software, produced by Lars Eklundh and Per Jönsson, to process NDVI series derived from different satellite sensors: MODIS (Moderate Resolution Imaging Spectroradiometer), AVHRR (Advanced Very High Resolution Radiometer) and SPOT (Système Pour l'Observation de la Terre) VEGETATION. The automated procedure starts from data downloading, calls the software in batch mode and provides customized layers of phenological parameters such as the start of the season, the length of the season and many others.
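A minimal sketch of the kind of automation described, assuming hypothetical paths and a placeholder name for the TIMESAT batch executable and its settings file; the real TIMESAT command line and settings format depend on the local installation and are not reproduced here.

```python
import subprocess
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red/NIR reflectance."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def run_timesat_batch(settings_file, timesat_exe="TSF_process"):
    """Run TIMESAT in batch mode on a prepared NDVI time series.

    Both the executable name and the settings file are placeholders:
    the actual invocation depends on the TIMESAT version installed.
    """
    subprocess.run([timesat_exe, settings_file], check=True)

# Typical chain: download the data, build the NDVI stack, write the TIMESAT
# input files, run the batch processing, then convert its outputs (start of
# season, length of season, ...) into customized georeferenced layers.
```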
Deep and wide tiny machine learning
In recent decades, and in particular in the last few years, Deep Learning solutions have rapidly become the state of the art in several "intelligent" application scenarios. The best-known examples are: classification, detection and identification of objects in images; video classification and automatic generation of subtitles or descriptions; speech translation; voice command recognition; medical diagnoses; written language analysis; artificial intelligence in games; autonomous navigation systems for cars or drones; and recommendation systems, for example for films or products in a marketplace.
In the same period, pervasive technologies have also experienced a rapid expansion in various application scenarios, such as medical devices; cars (for example in airbag management, cruise control, traction control and braking systems); and so-called Smart Cities (i.e., the use of pervasive systems in various urban domains, such as the management of public lighting and public transport or environmental monitoring). Examples of pervasive devices are embedded systems, the Internet of Things (IoT) and micro-controllers, hereafter referred to for brevity as IoT units.
The need to move intelligent algorithms (for example, for detecting faults or changes in the environment) as close as possible to the point where data are generated is the immediate consequence of the pervasive diffusion of IoT units.
The traditional paradigm in which a pervasive sensor acquires data and forwards it to a remote server (for example, in the Cloud), where all the intelligent computation is carried out, and then waits for a response (for example, a command for the actuators), is no longer realistic or feasible, for three reasons. First, the connection to the remote servers must be stable and high-speed for the whole solution to operate continuously. Second, delegating the intelligent computation to a remote server is not applicable in critical application scenarios with strict latency requirements between data acquisition and the actuation of the corresponding decision. Finally, it is preferable not to rely on a remote server when the data being acquired and sent are sensitive (for example, when processing medical diagnoses or images of people in a video-surveillance system).
The main problem in running intelligent algorithms (e.g., based on Deep Learning) on IoT units is their complexity. The memory, computation and energy requirements of Deep Learning models are almost always at odds with the corresponding memory, computation and energy capabilities available on IoT units. To give an idea, the convolutional models used in the image domain have tens of millions of parameters (ResNet models have from 11 to 60 million parameters, Inception models from 24 to 43 million), while the models used for language modelling, such as BERT, require hundreds of millions or billions of parameters. Since each parameter is usually represented with a 32-bit data type, the memory required just for the model parameters easily scales from tens of megabytes to gigabytes. In terms of required computation, the number of operations needed to classify a single image with models such as ResNet or Inception ranges from 5 to 11 million.
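A back-of-the-envelope check of the memory figures quoted above (the parameter counts come from the text; the 340-million-parameter language model is an illustrative example of the "hundreds of millions" range):

```python
def parameter_memory_mb(num_params, bits_per_param=32):
    """Memory needed to store the parameters alone, in megabytes."""
    return num_params * bits_per_param / 8 / 1e6

print(parameter_memory_mb(11e6))    # smallest ResNet: ~44 MB
print(parameter_memory_mb(60e6))    # largest ResNet: ~240 MB
print(parameter_memory_mb(340e6))   # a BERT-scale model: ~1360 MB
```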
The other side of the coin is the capabilities of the IoT units. For example, the STM32H743ZI micro-controller provides 1024 kilobytes of memory and a Cortex-M7 processor running at 480 MHz, while other micro-controllers provide from 96 to 512 kilobytes of memory and processors at lower clock frequencies.
In the literature, this problem is addressed in a very fragmented way, with numerous works aiming to reduce the memory or computation requirements of Deep Learning solutions, but with few works taking an overall view and thus defining Deep Learning models designed to be executed on IoT units.
In particular, three main research areas can be identified.
The first focuses on the development of dedicated hardware solutions. The resulting hardware platforms achieve the best performance in terms of latency (the execution time of the algorithm they were designed for), energy consumption and required power. However, the development process is particularly complex and the resulting solutions are less flexible.
The second research area introduces various approximation techniques to reduce the memory or computational complexity of Deep Learning models. Examples of such techniques are: reducing the precision of the parameter representation (from a 32-bit data type to 16-, 8- and even 2- or 1-bit representations); pruning of the parameters or of some tasks of the Deep Learning models themselves; and introducing early exits that can be taken when the Deep Learning model becomes sufficiently confident in its final decision, thereby skipping all the remaining computation.
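As a small, framework-agnostic illustration of the first of these approximation techniques, the sketch below quantizes a float32 weight tensor to 8-bit integers with a simple affine scheme; the layer size and quantization parameters are illustrative assumptions, not a specific TML toolchain.

```python
import numpy as np

def quantize_int8(weights):
    """Affine quantization of float32 weights to int8 plus (scale, zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0             # avoid a zero scale
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float32 tensor (storage shrinks by 4x when quantized)."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(256, 128).astype(np.float32)       # a toy layer: 32k parameters
q, scale, zp = quantize_int8(w)
max_error = np.abs(w - dequantize(q, scale, zp)).max() # small reconstruction error
```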
Finally, the third and last research direction splits Deep Learning models into simple tasks compatible with IoT units and studies the best way to distribute these tasks over a set of connected and potentially heterogeneous IoT units.
Recently, a new research area has emerged, called Tiny Machine Learning (TML), whose goal is to develop (Machine and) Deep Learning solutions that take into account the technological constraints of the IoT unit on which they are expected to run. The memory footprint of the generated TML models must therefore be in the order of a few kilobytes, and their energy consumption in the order of micro- or milliwatts.
Most of the TML solutions available in the literature (as well as part of the solutions presented in this work) focus on supporting the inference of Deep Learning models, i.e., the processing of a single input sample, such as the classification of an image or the translation of a portion of text. A further research direction, almost unexplored in the literature, aims to devise solutions that enable what is called on-device learning, i.e., the ability of TML models to learn directly on the IoT unit on which they are executed.
The reason for this gap lies mainly in the complexity of learning techniques compared to simple inference. However, the ability of TML models to learn, and therefore adapt, directly on IoT units is crucial. The environment in which these models operate is typically non-stationary, with potentially catastrophic effects on the performance of TML models that assume an environment that does not change over time (examples of changes reflected in the acquired data are due to faults in the acquisition sensors, seasonality, or ageing effects).
The goal of this work is the definition of a methodology for the development of Deep and Wide Tiny Machine Learning solutions, where the adjective deep refers to the use of Deep Learning models, while the term wide refers to the possibility of defining tasks to be distributed over multiple, potentially heterogeneous IoT units. In addition, this work defines a first solution to the on-device learning problem.
The methodology has been validated on the available datasets and benchmarks to demonstrate its effectiveness. Moreover, following a "from the lab to the wild" approach, some of the proposed techniques have been applied in two real-world application scenarios: the recognition of bird calls in remote areas (where connectivity is absent or limited) through audio analysis, and the characterization (and, in the future, prediction) of highly non-stationary solar activity through the analysis of solar magnetograms acquired from Earth. Finally, this work presents a deep-learning-as-a-service approach in which Deep Learning models are executed on encrypted data, to handle the cases not covered by the presented methodology in which Cloud services must be used while at the same time guaranteeing the privacy of the processed data.

In the last decades and, in particular, in the last few years, Deep Learning (DL) solutions emerged as state of the art in several domains, e.g., image classification, object detection, speech translation and command identification, medical diagnoses, natural language processing, artificial players in games, and many others.
In the same period, following the massive spread of pervasive technologies such as Internet of Things (IoT) units, embedded systems, or Micro-Controller Units (MCUs) in various application scenarios (e.g., automotive, medical devices, and smart cities, to name a few), the need for intelligent processing mechanisms as close as possible to data generation emerged as well. The traditional paradigm of having a pervasive sensor (or pervasive network of sensors) that acquires data to be processed by a remote high-performance computer is overcome by real-time requirements and connectivity issues.
Nevertheless, the memory and computational requirements of deep learning models and algorithms are much larger than the corresponding memory and computational capabilities of embedded systems or IoT units, significantly limiting their application. The related literature in this field is highly fragmented, with several works aiming to reduce the complexity of deep learning solutions, but only a few aiming to deploy such DL algorithms on IoT units or even on MCUs. All these works fall under the umbrella of a novel research area, namely Tiny Machine Learning (TML), whose goal is to design machine and deep learning models and algorithms able to take into account the constraints on memory, computation, and energy imposed by embedded systems, IoT units, and micro-controller units.
This work aims to introduce a methodology as well as algorithms and solutions to close the gap between the complexity of Deep Learning solutions and the capabilities of embedded, IoT, or micro-controller units.
Achieving this goal required operating at different levels. First, the methodology aims at proposing inference-based Deep Tiny Machine Learning solutions, i.e., DL algorithms that can run on tiny devices after their training has been carried out elsewhere. Second, the first approaches to on-device Deep Tiny Machine Learning training are proposed. Finally, the methodology encompasses Wide Deep TML solutions that distribute the DL processing on a network of embedded systems, IoT, and MCUs.
The methodology has been validated on available benchmarks and datasets to prove its effectiveness. Moreover, in a "from the laboratory to the wild" approach, the methodology has been validated in two different real-world scenarios, i.e., the detection of bird calls within audio waveforms in remote environments and the characterization and prediction of solar activity from solar magnetograms. Finally, a deep-learning-as-a-service approach to support privacy-preserving deep learning solutions (i.e., able to operate on encrypted data) has been proposed to deal with the need to acquire and process sensitive data on the Cloud.
Urban detection using Decision Tree classifier: a case study
This work constitutes a first step towards the definition of a methodology for automatic urban extraction from medium spatial resolution Landsat data. The Decision Tree is investigated as the classification technique due to its ability to establish which information is most relevant for the classification process and its capability of extracting rules that can be further applied to other inputs. The attention was focused on the evaluation of the parameters that best define the training set to be used for the learning phase of the classifier, since its definition affects all the subsequent steps of the process. Different training sets were created by combining different features, such as the level of radiometric pre-processing applied to the input images, the number of classes considered to train the classifier, the temporal extent of the training set and the use of different attributes (bands or spectral indexes). Different post-processing techniques were also evaluated. The classifiers obtained from the generated training sets were evaluated in two different areas of the Piedmont Region, where the official regional cartography at a scale of 1:10000 was used for validation. Accuracies around 81% in the Torino case study and around 96%-97% in the Asti case study were reached, thanks to the use of indexes such as NDVI and NDBBBI and to post-processing such as majority filtering, which enhanced the classifier performance.
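The classification setup described above can be sketched with scikit-learn's DecisionTreeClassifier plus a simple majority filter as post-processing; the feature values, class labels, tree depth and window size below are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.tree import DecisionTreeClassifier

# Training samples: per-pixel features (e.g. Landsat bands plus spectral
# indexes such as NDVI) and labels (0 = non-urban, 1 = urban).
X_train = np.random.rand(500, 5)                # placeholder feature vectors
y_train = np.random.randint(0, 2, 500)          # placeholder labels

clf = DecisionTreeClassifier(max_depth=8, random_state=0)
clf.fit(X_train, y_train)

# Classify a scene arranged as (rows, cols, bands), pixel by pixel.
scene = np.random.rand(200, 200, 5)
labels = clf.predict(scene.reshape(-1, 5)).reshape(200, 200)

# Post-processing: a 3x3 majority filter removes isolated misclassified pixels.
def majority(window):
    return np.bincount(window.astype(int)).argmax()

smoothed = generic_filter(labels, majority, size=3)
```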
The effects of Ajax web technologies on user expectations: a workflow approach
This paper aims to define users' information expectations as web technologies continue to improve in loading time and uninterrupted interface interactivity. Do web technologies like Ajax (or, more abstractly, a quicker fulfillment of user needs) change these needs, or do they merely fulfill preexisting expectations? Users navigated through a mock e-commerce site where each page that loads has a 50% chance of implementing Ajax technology, from functions of the shopping cart to expanding categories of products. Users were observed through eye tracking and by measuring their pulse and respiratory effort. Questionnaires were administered before and after these tasks to assess their thoughts about the study. Qualitative and quantitative observation found that users almost unanimously favored the Ajax functions over the non-Ajax ones. Users also emphasized the usability concerns of switching to Ajax, especially concerning feedback.
Rapid Mapping: geomatics role and research opportunities
In recent years an increasing number of extreme meteorological events have been recorded. Geomatics techniques have historically been adopted to support the different phases of the Emergency Management cycle, with a main focus on emergency response, initial recovery and preparedness, through the acquisition, processing, management and dissemination of geospatial data. In the meantime, the increased availability of geospatial data, in terms of reference topographic datasets made available by authoritative National Mapping Cadastre Agencies or by Collaborative Mapping initiatives like OpenStreetMap, as well as of remotely sensed imagery, poses new challenges to the role of Geomatics in defining operational tools and services in support of emergency management activities. This paper is mainly focused on the role of Geomatics in supporting the response phase of the Emergency Management cycle through Rapid Mapping activities, which can be defined as “the on-demand and fast provision (within hours or days) of geospatial information in support of emergency management activities immediately following an emergency event” (source: European Union, http://emergency.copernicus.eu/mapping/ems/service-overview). The management of geospatial datasets (both reference and thematic), Remote Sensing sensors and techniques, and spatial information science methodologies applied to Rapid Mapping will be described, with the goal of highlighting the role that Geomatics is currently playing in this domain. The major technical requirements, constraints and research opportunities of a Rapid Mapping service will be discussed, with a specific focus on: the time constraints of the service, the data quality requirements, the need to provide replicable products, the need for consistent data models, the advantages of data interoperability, the automation of feature extraction procedures to reduce the need for Computer Aided Photo Interpretation, and the dissemination strategies.
Improving an Extreme Rainfall Detection System with GPM IMERG data
Many studies have shown a growing trend in terms of frequency and severity of extreme events. As never before, having tools capable of monitoring the amount of rain that reaches the Earth’s surface has become a key point for the identification of areas potentially affected by floods. In order to guarantee an almost global spatial coverage, NASA Global Precipitation Measurement (GPM) IMERG products proved to be the most appropriate source of information for satellite precipitation retrieval. This study is aimed at defining the IMERG accuracy in representing extreme rainfall events for varying time aggregation intervals. This is performed by comparing the IMERG data with rain gauge data. The outcomes demonstrate that satellite precipitation data guarantee good results when the rainfall aggregation interval is equal to or greater than 12 h. More specifically, a 24-h aggregation interval ensures a probability of detection (defined as the number of hits divided by the total number of observed events) greater than 80%. The outcomes of this analysis supported the development of the updated version of the ITHACA Extreme Rainfall Detection System (ERDS: erds.ithacaweb.org). This system is now able to provide near real-time alerts about extreme rainfall events using a threshold methodology based on the mean annual precipitation.
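The probability of detection quoted above is a simple contingency-table ratio; the sketch below computes it for an illustrative 24-h aggregation with an assumed alert threshold and made-up rainfall values.

```python
import numpy as np

def probability_of_detection(observed_events, detected_events):
    """POD = hits / total number of observed events, as defined in the study."""
    observed = np.asarray(observed_events, dtype=bool)
    detected = np.asarray(detected_events, dtype=bool)
    hits = np.sum(observed & detected)
    total_observed = np.sum(observed)
    return hits / total_observed if total_observed else float("nan")

# Illustrative 24-h rainfall aggregates (mm) compared against an alert threshold.
gauge_24h = np.array([5.0, 60.0, 12.0, 80.0, 45.0, 95.0])
imerg_24h = np.array([4.0, 40.0, 20.0, 70.0, 30.0, 90.0])
threshold = 50.0
pod = probability_of_detection(gauge_24h > threshold, imerg_24h > threshold)
# Two of the three gauge-observed events are detected here, so POD ≈ 0.67.
```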
