h-multigrid agglomeration based solution strategies for discontinuous Galerkin discretizations of incompressible flow problems
In this work we exploit agglomeration-based h-multigrid preconditioners to
speed up the iterative solution of discontinuous Galerkin discretizations of
the Stokes and Navier-Stokes equations. As a distinctive feature, h-coarsened
mesh sequences are generated by recursive agglomeration of a fine grid,
admitting arbitrarily unstructured grids of complex domains, and agglomeration
based discontinuous Galerkin discretizations are employed to deal with
agglomerated elements of coarse levels. Both the expense of building coarse
grid operators and the performance of the resulting multigrid iteration are
investigated. For the sake of efficiency, coarse grid operators are inherited
through element-by-element projections, avoiding the cost of numerical
integration over agglomerated elements. Specific care is devoted to the
projection of viscous terms discretized by means of the BR2 dG method. We
demonstrate that enforcing the correct amount of stabilization on coarse grid
levels is mandatory for achieving uniform convergence with respect to the
number of levels. The numerical solution of steady and unsteady, linear and
non-linear problems is considered, tackling challenging 2D test cases and 3D
real-life computations on parallel architectures. Significant execution time
gains are documented.
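The inherited coarse-operator construction described above can be illustrated in a generic two-grid setting. The sketch below is not the authors' BR2 dG solver: it uses a 1D Poisson matrix as a stand-in for the fine-grid operator, pairwise agglomeration as the prolongation, and the Galerkin product P^T A P as the inherited coarse operator.

```python
import numpy as np

def poisson_1d(n):
    # 1D Poisson matrix as a stand-in for a fine-grid discretization operator.
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def agglomeration_prolongation(n):
    # Piecewise-constant prolongation: pairs of fine cells form one coarse cell,
    # mimicking recursive agglomeration of a fine grid.
    nc = n // 2
    P = np.zeros((n, nc))
    for j in range(nc):
        P[2 * j, j] = 1.0
        P[2 * j + 1, j] = 1.0
    return P

def two_grid(A, b, x, P, nu=3, omega=0.6):
    D = np.diag(A)
    for _ in range(nu):                        # pre-smoothing (damped Jacobi)
        x = x + omega * (b - A @ x) / D
    Ac = P.T @ A @ P                           # inherited (Galerkin) coarse operator
    x = x + P @ np.linalg.solve(Ac, P.T @ (b - A @ x))  # coarse-grid correction
    for _ in range(nu):                        # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

n = 64
A = poisson_1d(n)
b = np.ones(n)
P = agglomeration_prolongation(n)
x = np.zeros(n)
r0 = np.linalg.norm(b - A @ x)
for _ in range(20):
    x = two_grid(A, b, x, P)
print(np.linalg.norm(b - A @ x) / r0)  # relative residual after 20 cycles
```

Adding further levels recursively (re-agglomerating the coarse grid) or swapping in a higher-order prolongation follows the same pattern.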
Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins
In this work we propose an Uncertainty Quantification methodology for the
evolution of sedimentary basins under mechanical and geochemical compaction
processes, which we model as a coupled, time-dependent, non-linear,
monodimensional (depth-only) system of PDEs with uncertain parameters. While in
previous works (Formaggia et al. 2013, Porta et al., 2014) we assumed a
simplified depositional history with only one material, in this work we
consider multi-layered basins, in which each layer is characterized by a
different material, and hence by different properties. This setting requires
several improvements with respect to our earlier works, both concerning the
deterministic solver and the stochastic discretization. On the deterministic
side, we replace the previous fixed-point iterative solver with a more
efficient Newton solver at each step of the time-discretization. On the
stochastic side, the multi-layered structure gives rise to discontinuities in
the dependence of the state variables on the uncertain parameters, that need an
appropriate treatment for surrogate modeling techniques, such as sparse grids,
to be effective. To this end we propose an innovative methodology that relies
on a change of coordinate system to align the discontinuities of the target
function within the random parameter space. The reference coordinate system is
built upon exploiting physical features of the problem at hand. We employ the
locations of material interfaces, which display a smooth dependence on the
random parameters and are therefore amenable to sparse grid polynomial
approximations. We showcase the capabilities of our numerical methodologies
through two synthetic test cases. In particular, we show that our methodology
reproduces with high accuracy multi-modal probability density functions
displayed by target state variables (e.g., porosity).
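The coordinate-alignment idea can be illustrated with a deliberately simplified 1D toy (the interface profile z(y) and the two-valued state below are invented for illustration, not taken from the basin model): a polynomial surrogate of the discontinuous map y ↦ u(x0, y) performs poorly, while a surrogate of the smooth interface location recovers the target accurately.

```python
import numpy as np

# Invented toy: one material interface at depth z(y), smooth in the random
# parameter y; the state (think porosity) jumps across the interface.
z = lambda y: 0.5 + 0.2 * np.sin(y)              # interface depth, smooth in y
u = lambda x, y: np.where(x < z(y), 1.0, 0.3)    # state at depth x, discontinuous in y

y_nodes = np.linspace(-1.0, 1.0, 9)   # collocation nodes in the random parameter
x0 = 0.55                             # observation depth crossed by the interface

# Direct polynomial surrogate of y -> u(x0, y): poor, the target is discontinuous.
direct = np.polynomial.Chebyshev.fit(y_nodes, u(x0, y_nodes), 8)

# Interface-based surrogate: approximate the SMOOTH map y -> z(y) instead, then
# evaluate the known state profile relative to the predicted interface location.
z_fit = np.polynomial.Chebyshev.fit(y_nodes, z(y_nodes), 8)
indirect = lambda y: np.where(x0 < z_fit(y), 1.0, 0.3)

y_test = np.linspace(-1.0, 1.0, 201)
err_direct = np.max(np.abs(direct(y_test) - u(x0, y_test)))
err_indirect = np.max(np.abs(indirect(y_test) - u(x0, y_test)))
print(err_direct, err_indirect)  # the interface-based surrogate is far more accurate
```

The same principle drives the paper's construction: the interface locations depend smoothly on the random parameters, so sparse grid polynomial approximation of those locations converges fast even though the state itself is discontinuous.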
PVT1: a rising star among oncogenic long non-coding RNAs
It is becoming increasingly clear that short and long noncoding RNAs critically participate in the regulation of cell growth, differentiation, and (mis)function. However, while the functional characterization of short noncoding RNAs is reaching maturity, there is still a paucity of well-characterized long noncoding RNAs, even though large studies in recent years have rapidly increased the number of annotated ones. The long noncoding RNA PVT1 is encoded by a gene that has long been known because it resides in the well-known cancer risk region 8q24. However, two concurrent circumstances have slowed the study of this gene: a preconception about the primacy of protein-coding over noncoding RNAs, and the prevalent interest in its neighbor, the MYC oncogene. Recent studies have brought PVT1 under the spotlight, suggesting interesting models of functioning, such as competing endogenous RNA activity and the regulation of the protein stability of important oncogenes, primarily MYC. Despite some advances in modelling the role of PVT1 in cancer, many questions remain unanswered concerning the precise molecular mechanisms underlying its functioning.
SWIM: A computational tool for unveiling crucial nodes in complex biological networks
SWItchMiner (SWIM) is a wizard-like software implementation of a previously described procedure able to extract the information contained in complex networks. Specifically, SWIM allows unearthing the existence of a new class of hubs, called "fight-club hubs", characterized by a marked negative correlation with their first nearest neighbors. Among them, a special subset of genes, called "switch genes", appears to be characterized by an unusual pattern of intra- and inter-module connections that confers on them a crucial topological role, interestingly mirrored by evidence of their clinico-biological relevance. Here, we applied SWIM to a large panel of cancer datasets from The Cancer Genome Atlas, in order to highlight switch genes that could be critically associated with the drastic changes in the physiological state of cells or tissues induced by cancer development. We discovered that switch genes are found in all the cancers we studied; they encompass protein-coding genes and non-coding RNAs, recovering many known key cancer players but also many new potential biomarkers not yet characterized in the cancer context. Furthermore, SWIM is amenable to detecting switch genes in different organisms and cell conditions, with the potential to uncover important players in biologically relevant scenarios, including but not limited to human cancer.
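A minimal sketch of the negative-neighbor-correlation idea behind fight-club hubs, on synthetic data (SWIM's actual pipeline involves clustering, module detection, and statistical filtering not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix, 6 genes x 50 samples. Gene 0 is constructed to
# be anti-correlated with genes 1-3, its prospective first neighbors.
base = rng.normal(size=50)
X = np.vstack([
    -base + 0.1 * rng.normal(size=50),   # gene 0: anti-correlated hub
    base + 0.3 * rng.normal(size=50),    # genes 1-3: a co-expressed module
    base + 0.3 * rng.normal(size=50),
    base + 0.3 * rng.normal(size=50),
    rng.normal(size=50),                 # genes 4-5: unconnected background
    rng.normal(size=50),
])

C = np.corrcoef(X)                                     # gene-gene Pearson correlation
adj = (np.abs(C) > 0.6) & ~np.eye(len(C), dtype=bool)  # thresholded correlation network

def apcc(i):
    # Average Pearson correlation of node i with its first nearest neighbors.
    nbrs = np.flatnonzero(adj[i])
    return C[i, nbrs].mean() if nbrs.size else np.nan

scores = [apcc(i) for i in range(len(C))]
fight_club = [i for i, s in enumerate(scores) if not np.isnan(s) and s < 0]
print(fight_club, scores[0])  # gene 0 is flagged, with markedly negative APCC
```

The threshold 0.6 and the average-correlation score are illustrative choices; the point is only that a hub whose neighborhood is dominated by anti-correlated partners stands out by its negative average neighbor correlation.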
Computational analysis identifies a sponge interaction network between long non-coding RNAs and messenger RNAs in human breast cancer
Background: Non-coding RNAs (ncRNAs) are emerging as key regulators of many cellular processes in both physiological and pathological states. Moreover, the constant discovery of new non-coding RNA species suggests that the study of their complex functions is still in its very early stages. This variegated class of RNA species encompasses the well-known microRNAs (miRNAs) and the most recently acknowledged long non-coding RNAs (lncRNAs). Interestingly, in the last couple of years, a few studies have shown that some lncRNAs can act as miRNA sponges, i.e. as competing endogenous RNAs (ceRNAs), able to reduce the amount of miRNAs available to target messenger RNAs (mRNAs).
Results: We propose a computational approach to explore the ability of lncRNAs to act as ceRNAs by protecting mRNAs from miRNA repression. A seed match analysis was performed to validate the underlying regression model. We built normal and cancer networks of miRNA-mediated sponge interactions (MMI-networks) using breast cancer expression data provided by The Cancer Genome Atlas.
Conclusions: Our study highlights a marked rewiring in the ceRNA program between normal and pathological breast tissue, documented by its "on/off" switch from normal to cancer, and vice versa. This mutually exclusive activation confers an interesting character on ceRNAs as potential oncosuppressive, or oncogenic, protagonists in cancer. At the heart of this phenomenon is the lncRNA PVT1, as illustrated by both the breadth of its antagonist mRNAs in the normal MMI-network and the relevance of the latter in breast cancer. Interestingly, PVT1 revealed a net binding preference towards the miR-200 family as the bone of contention with its rival mRNAs. © 2014 Paci et al.; licensee BioMed Central Ltd.
Evaluation of vegetation post-fire resilience in the Alpine region using descriptors derived from MODIS spectral index time series
In this study a method based on the analysis of MODerate-resolution Imaging Spectroradiometer (MODIS) time
series is proposed to estimate the post-fire resilience of mountain vegetation (broadleaf forest and prairies) in the
Italian Alps. Resilience is defined here as the ability of a dynamical system to counteract disturbances. It
can be quantified by the amount of time the disturbed system takes to resume, in statistical terms, an ecological
functionality comparable with its undisturbed behavior.
Satellite images of the Normalized Difference Vegetation Index (NDVI) and of the Enhanced Vegetation Index
(EVI) with a spatial resolution of 250 m and a temporal resolution of 16 days over the 2000-2012 period were used.
Wildfire affected areas in the Lombardy region between the years 2000 and 2010 were analysed. Only large fires
(affected area >40 ha) were selected. For each burned area, an undisturbed adjacent control site was located. Data
pre-processing consisted of smoothing the MODIS time series for noise removal; a double logistic
function was then fitted. Land surface phenology descriptors (proxies for growing season start/end/length and green
biomass) were extracted in order to characterize the time evolution of the vegetation. Descriptors from a burned
area were compared to those extracted from the respective control site by means of the one-way analysis of
variance. According to the number of subsequent years exhibiting a statistically significant difference between
the burned and control sites, five classes of resilience were identified and a set of thematic maps was created for each
descriptor. The same method was applied to all 84 aggregated events and to events aggregated by main land cover.
The EVI index proved more sensitive to fire impact than NDVI. The analysis shows that fire causes both a reduction
of biomass and a variation in the phenology of the Alpine vegetation. Results suggest an average ecosystem
resilience of 6-7 years. Moreover, broadleaf forest and prairies show different post-fire behavior in terms of land
surface phenology descriptors.
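The fitting step can be sketched as follows, with synthetic NDVI values and a textbook double-logistic form (the parameter names sos/eos are illustrative proxies for season start/end, not the exact descriptors of the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, vmin, vmax, sos, eos, r1, r2):
    # Textbook double-logistic seasonal curve: green-up around sos (season
    # start), senescence around eos (season end).
    return vmin + (vmax - vmin) * (1.0 / (1.0 + np.exp(-r1 * (t - sos)))
                                   - 1.0 / (1.0 + np.exp(-r2 * (t - eos))))

# Synthetic 16-day NDVI composites for one year (23 values).
t = np.arange(0.0, 368.0, 16.0)
ndvi = double_logistic(t, 0.2, 0.8, 120.0, 280.0, 0.08, 0.08)
ndvi = ndvi + 0.02 * np.random.default_rng(2).normal(size=t.size)

popt, _ = curve_fit(double_logistic, t, ndvi,
                    p0=(0.2, 0.7, 100.0, 260.0, 0.1, 0.1), maxfev=20000)
vmin, vmax, sos, eos, _, _ = popt
print(f"season start ~{sos:.0f} DOY, end ~{eos:.0f} DOY, length ~{eos - sos:.0f} d")
```

Comparing descriptors fitted on a burned pixel against those from its control site, year by year, is then a matter of running this fit per year and applying the one-way ANOVA the abstract mentions.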
In addition to the above analysis, another method is proposed, which derives from the qualitative theory of
dynamical systems. The (time dependent) spectral index of a burned area over the period of one year was plotted
against its counterpart from the control site. Yearly plots (or scattergrams) before and after the fire were obtained.
Each plot is a sequence of points on the plane, which are the vertices of a generally self-intersecting polygonal
chain. Some geometrical descriptors were obtained from the yearly chains of each fire. Principal Components
Analysis (PCA) of geometrical descriptors was applied to a set of case studies and the obtained results provide a
system dynamics interpretation of the natural process.
JRC.H.3 - Forest Resources and Climate
Modeling Gross Primary Production of Agro-Forestry Ecosystems by Assimilation of Satellite-Derived Information in a Process-Based Model
In this paper we present the results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) of these agro-forestry ecosystems, exploiting eddy covariance and satellite data using an inverse modeling approach.
We present a modified version of BIOME-BGC (named PROSAILH-BGC), coupled with the radiative transfer models PROSPECT and SAILH, with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely sensed vegetation index time series, such as MODIS NDVI, into BIOME-BGC.
In summary, this study showed that the assimilation of eddy covariance and remote sensing data into a process model can provide important information for estimating the carbon budget at the regional scale.
JRC.H.2 - Climate change
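The inverse-modeling step can be sketched with a toy light-use-efficiency model standing in for BIOME-BGC (all values below are synthetic; the real study calibrates a full process model, not a single parameter):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Toy forward model: GPP = epsilon * fAPAR * PAR (light-use-efficiency form),
# a drastically simplified stand-in for a process model such as BIOME-BGC.
days = np.arange(365)
par = 8.0 + 4.0 * np.sin(2 * np.pi * (days - 80) / 365)                  # MJ m-2 d-1
fapar = np.clip(0.2 + 0.6 * np.sin(2 * np.pi * (days - 100) / 365), 0.05, None)

def forward(theta):
    eps, = theta
    return eps * fapar * par   # gC MJ-1 * MJ m-2 d-1 = gC m-2 d-1

# Synthetic "eddy-covariance" GPP observations generated from a known efficiency.
gpp_obs = forward([1.8]) + 0.3 * rng.normal(size=days.size)

# Inverse step: calibrate the parameter by minimizing the model-data misfit.
fit = least_squares(lambda th: forward(th) - gpp_obs, x0=[1.0])
print(fit.x)  # recovered light-use efficiency, close to the true 1.8
```

In the actual workflow the misfit would combine eddy covariance GPP and MODIS NDVI constraints, and the optimizer would adjust the ecophysiological parameters of PROSAILH-BGC instead of a single efficiency.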
Time domain diffuse correlation spectroscopy: models and experiments
In the past years, several spectroscopic techniques have been developed for the non-invasive measurement of tissue haemodynamic parameters in humans. The two major examples are near-infrared spectroscopy (NIRS), which quantifies tissue composition (in particular haemoglobin concentration) and microstructure, and diffuse correlation spectroscopy (DCS), which quantifies blood flow (BF). Time domain near-infrared spectroscopy (TD NIRS), using pulsed light sources, is capable of a depth-resolved measurement of tissue composition. Very recently, a similar technique, time domain diffuse correlation spectroscopy (TD DCS), has been proposed for the depth-resolved and simultaneous evaluation of BF and tissue composition.
The aim of my thesis has been to validate the TD DCS technique and to extend its use from tissue-mimicking phantoms to in-vivo measurements on humans. This was done by choosing an existing laser source, an actively mode-locked Ti:Sapphire laser that has sufficient temporal coherence for the considered technique, and by building the necessary TD DCS setup. I participated in the measurements and focused on the data analysis. First, the technique was validated with the use of phantoms that mimic biological tissue. Then, moving to measurements on humans, an in-vivo depth-resolved BF measurement, with a temporal resolution down to ~1 s, was shown for the first time to my knowledge. Since this technique aims at absolute BF quantification, an accurate model for extracting the BF from the measurements is needed. For this reason, I focused on the study and development of the correct theoretical model to be used for data analysis. This study was done with the help of Monte Carlo simulations.
This experimental and theoretical work opens the way to translating the technique into practical use.
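A minimal numerical sketch of the DCS analysis chain (not the time-domain correlation-diffusion model developed in the thesis): the measured intensity autocorrelation g2 is related to the field correlation g1 by the Siegert relation, g2 = 1 + β|g1|², and a decay-rate parameter standing in for the blood-flow index is fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model: g1 is taken as a single-exponential decay whose rate scales with
# the blood-flow index; the full TD DCS analysis uses the correlation-diffusion
# solution instead.
def g2_model(tau, beta, gamma):
    g1 = np.exp(-gamma * tau)            # simplified field correlation
    return 1.0 + beta * g1**2            # Siegert relation

tau = np.logspace(-7, -3, 80)            # correlation lags [s]
g2_meas = g2_model(tau, 0.45, 2.0e4)     # synthetic "measurement"
g2_meas = g2_meas + 0.01 * np.random.default_rng(4).normal(size=tau.size)

popt, _ = curve_fit(g2_model, tau, g2_meas, p0=(0.5, 1.0e4))
beta, gamma = popt
print(beta, gamma)  # coherence factor and decay rate (flow-index proxy)
```

The depth resolution of TD DCS comes from gating the photon arrival times before building g2, so in practice one such fit is performed per time gate.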
Blood Libels between Trento and the Bodensee: Heinrich Kramer’s 1475 Mission
This article examines a little-known event that took place in 1475. This year saw a well-publicized trial at which the Jews of Trento were falsely accused of kidnapping a Christian toddler named Simon and murdering him in a religious ritual. Pope Sixtus IV, abiding by the long-standing papal tradition of defending the Jews against such accusations of ritual murder (known as blood libels), sent an apostolic commissioner to Trento to investigate the legitimacy of the proceedings opened by the local prince-bishop, Johannes Hinderbach. To defend his version of the story, namely that the Jews were responsible for the crime and that the ritual killing of Christian children was a long-established Jewish practice, Hinderbach sent the Dominican friar Heinrich Kramer von Schlettstadt (later known for his role in the development of the witch hunt) on a mission to the Bodensee region to uncover reports of previous alleged Jewish ritual murders. During his journey, Kramer managed to obtain a bundle of notarial documents that attempted to prove the thesis of the magistrates and the bishop of Trento. Kramer’s mission is analysed by comparing the documents produced for him in southern Germany with other sources concerning the same cases, and within the broader context of the blood libel of Trento.
A web-based health technology assessment in tele-echocardiography: the experience within an Italian project
Due to major advances in information technology, telemedicine applications are ready for widespread use. Nonetheless, to allow their diffusion in National Health Care Systems (NHCSs), specific methodologies of health technology assessment (HTA) should be used to assess standardization, overall quality, interoperability, and the handling of legal, economic, and cost-benefit aspects. One of the limits to the diffusion of digital tele-echocardiography (T-E) applications in the NHCS is the lack of a specific HTA methodology. In the present study, a solution offering a structured HTA of T-E products was designed. The methodology also assured the definition of standardized quality levels for the application. The first level represents the minimum level of acceptance; the other levels are accessory levels useful for a more accurate assessment of the product. The methodology proved useful for rationalizing the process of standardization and received a high degree of acceptance from the subjects involved in the study.
