Europeana communication bug: which intervention strategy for a better cooperation with creative industry?
Although Europeana and many GLAMs are strongly engaged, beyond their main mission of spreading cultural heritage knowledge, in developing new strategies to make digital content reusable by the creative industry, these efforts have succeeded only in sporadic cases. A significant know-how deficit in communication often compromises the expected outcomes and impact. What prevails is the idea of communication as an enhancement "instrument", understood on the one hand in a purely economic (development) sense and on the other as a way of increasing and spreading knowledge. The underlying reference model is roughly the following: digital objects are captured and/or transformed by digital technologies into sellable goods to be put into circulation. This approach, however, risks neglecting the real nature of communication, and in particular that of digital heritage, where the strategic task is not so much producing objects and goods as taking part in the creation of sharing environments (media) by engaged communities, however small or large. These environments act as meeting and exchange points, and consequently as driving forces of enhancement. Only in a complex context of network interaction do online-accessible digital heritage contents become a strategic resource for creating environments in which their re/mediation can occur, provided that credible strategies exist, shared by stakeholders and users. This paper describes a case study, including proposals for an effective connection among Europeana, GLAMs and the creative industry, within the framework of the enhancement and promotion of Food and Drink digital heritage. Experimental experiences such as the one described here confirm the relevance of up-to-date policies based on an adequate concept of communication, on solid partnerships with enterprise and association networks, on collaborative online environments, on the effective availability of at least most of the contents through increased free licensing, and finally on grassroots content creation involving a prosumer audience, even if filtered by GLAMs
The concentration of homocysteine-derived disulfides in human coronary artery
*Background* 
Based on previous findings, we have estimated that, in injured coronary artery tissue, the low-molecular-weight disulfides homocystine and cysteine-homocysteine, collectively referred to as oxidized homocysteine equivalents (OHcyE), may reach a total concentration higher than the aqueous solubility of homocystine at room temperature. To verify whether OHcyE could reach their saturation limit in the vascular tissue, we have measured the solubility of homocystine under physiological-like conditions.

*Materials and methods* 
The solubility of homocystine has been measured in aqueous sodium chloride solutions at 37 °C by differential pulse polarography based on the reduction of homocystine to homocysteine.

*Results* 
We have estimated that the concentration reached by OHcyE in injured coronary artery tissue is at least near-saturating, because the solubility of homocystine under physiological-like conditions, above which homocystine and/or cysteine-homocysteine deposit as a solid phase, almost exactly matches the estimated tissue concentration. Near-saturation levels of OHcyE within the vascular tissue mean that a significant leakage of intracellular fluid can promote OHcyE crystallization in tissue fluids, which may serve to initiate inflammation.

*Conclusions* 
We speculate that deposition of OHcyE crystals could damage blood vessels and act as a primer of homocysteine-triggered inflammation, thus lying on the causal pathway that leads to vascular dysfunction
Environmental Influences on the Morphology and Dynamics of Group Size Haloes
We use group-size haloes identified with a "friends of friends" (FOF) algorithm in a concordance GADGET2 (dark matter only) simulation to investigate the dependence of halo properties on the environment at . The study is carried out using samples of haloes at different distances from their nearest massive *cluster* halo. We find that the fraction of haloes with substructure typically increases in high-density regions. The halo mean axial ratio also increases in overdense regions, a fact which holds over the whole range of halo mass studied. This can be explained as a reflection of an earlier halo formation time in high-density regions, which gives haloes more time to evolve and become more spherical. Moreover, this interpretation is supported by the fact that, at a given halo-cluster distance, haloes with substructure are more elongated than their equal-mass counterparts without substructure, reflecting that the virialization (and thus sphericalization) process is interrupted by merger events. The velocity dispersion of low-mass haloes with strong substructure shows a significant increase near massive clusters with respect to equal-mass haloes with low levels of substructure or haloes found in low-density environments. The alignment signal between the principal axes of the shape and velocity ellipsoids decreases going from lower- to higher-density regions, while such an alignment is stronger for haloes without substructure. We also find, in agreement with other studies, a tendency of halo major axes to be aligned with, and of minor axes to lie roughly perpendicular to, the orientation of the filament within which the halo is embedded, an effect which is stronger in the proximity of the massive clusters.

Comment: 11 pages, 12 figures, accepted for publication in MNRAS
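As a rough illustration of the friends-of-friends grouping mentioned above (a minimal sketch of the generic percolation criterion, not this paper's halo-finding pipeline; the function name and the use of SciPy's KD-tree are assumptions made here for illustration), particles closer than a chosen linking length are transitively linked into the same group:

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(positions, linking_length):
    """Label each particle with the index of its FOF group: any two
    particles separated by less than the linking length end up, directly
    or through a chain of neighbours, in the same group."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=linking_length)  # all close particle pairs

    # Union-find over the linked pairs.
    parent = np.arange(len(positions))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    return np.array([find(i) for i in range(len(positions))])

# Example: a common convention sets the linking length to roughly 0.2 times
# the mean interparticle separation of the simulation.
rng = np.random.default_rng(0)
positions = rng.random((2000, 3))
labels = friends_of_friends(positions, linking_length=0.02)
```

Production halo finders additionally handle periodic boundaries, minimum group sizes and domain decomposition, all of which are omitted in this sketch.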
Same-sex marriage in Europe between political law and case law: reflections on recent comparative jurisprudence
Effective critical micellar concentration of a zwitterionic detergent: A fluorimetric study on n-dodecyl phosphocholine
We have investigated the effect of ionic strength on the aggregation behavior of n-dodecyl phosphocholine. On the basis of the classical Corrin-Harkins relation, the critical micellar concentration of this detergent decreases with a biphasic trend upon lithium chloride addition. It is nearly constant below 150 mM salt, with a mean value of 0.91 mM, whereas it undergoes a dramatic 80-fold decrease in 7 M LiCl. Such a drop in the critical micellar concentration could be explained by a salting-out effect and by the influence of the phosphocholine head groups on the organization of the surrounding water. Knowledge of the effective critical micellar concentration of n-dodecyl phosphocholine could be useful in the purification of membrane proteins under non-denaturing conditions
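For reference, the classical Corrin-Harkins analysis invoked above relates the critical micellar concentration to the total counterion concentration through an empirical log-log law (a schematic form with generic constants a and b, not values fitted in this study):

```latex
\log(\mathrm{cmc}) \;=\; a \;-\; b\,\log C_{\mathrm{counterion}}
```

On such a plot, simple counterion binding gives a single straight line; a biphasic behaviour of the kind reported here, with a nearly constant cmc below 150 mM LiCl and a steep drop at high salt, signals that additional effects such as salting out dominate in the concentrated-salt regime.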
Energetics of climate models: net energy balance and meridional enthalpy transport
We analyze the publicly released outputs of the simulations performed by climate models (CMs) in preindustrial (PI) and Special Report on Emissions Scenarios A1B (SRESA1B) conditions. In the PI simulations, most CMs feature biases of the order of 1 W m⁻² for the net global and the net atmospheric, oceanic, and land energy balances. This does not result from transient effects but depends on the imperfect closure of the energy cycle in the fluid components and on inconsistencies over land. Thus, the planetary emission temperature is underestimated, which may explain the CMs' cold bias. In the PI scenario, CMs agree on the meridional atmospheric enthalpy transport's peak location (around 40°N/S), while discrepancies of ∼20% exist on the intensity. Disagreements on the oceanic transport peaks' location and intensity amount to ∼10° and ∼50%, respectively. In the SRESA1B runs, the atmospheric transport's peak shifts poleward, and its intensity increases up to ∼10% in both hemispheres. In most CMs, the Northern Hemispheric oceanic transport decreases, and the peaks shift equatorward in both hemispheres. The Bjerknes compensation mechanism is active both on climatological and interannual time scales. The total meridional transport peaks around 35° in both hemispheres and scenarios, whereas disagreements on the intensity reach ∼20%. With increased CO2 concentration, the total transport increases up to ∼10%, thus contributing to polar amplification of global warming. Advances are needed for achieving a self-consistent representation of climate as a nonequilibrium thermodynamical system. This is crucial for improving the CMs' skill in representing past and future climate changes
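For context, meridional enthalpy transports of the kind compared above are commonly inferred by integrating the zonal-mean net energy input to the relevant subsystem from one pole to the latitude of interest (a generic textbook relation, not necessarily the exact diagnostic used in this work):

```latex
T(\phi) \;=\; 2\pi a^{2} \int_{-\pi/2}^{\phi} \overline{F}(\phi')\,\cos\phi'\,\mathrm{d}\phi'
```

Here a is the Earth's radius and \overline{F} is the zonal-mean net energy gain per unit area of the column, with any storage term removed. The relation also makes the role of the balance biases explicit: unless the global mean of \overline{F} is adjusted to zero, the implied transport does not vanish at the opposite pole.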
A new framework for climate sensitivity and prediction: a modelling perspective
The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major factors of uncertainty for the assessment of the long and short term effects of anthropogenic climate change. While the relatively slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, this also hints at the need for stronger theoretical foundations for the problem of studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds, and rigorously addresses the problem of predictability at different time-scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well-defined problem from a physical and mathematical point of view. Practically, these results show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique also allows for systematically studying, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While what we report here refers to the linear response, the general theory allows for treating nonlinear effects as well. These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective
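Schematically, the response-theory machinery described above can be written as a convolution with a Green's function (generic linear-response notation, with symbols chosen here for illustration rather than taken from the paper): for an observable A and a CO2 forcing with time modulation f(t),

```latex
\delta\langle A\rangle(t) \;=\; \int_{0}^{\infty} G_{A}(\tau)\, f(t-\tau)\,\mathrm{d}\tau ,
\qquad
\delta\langle A\rangle(\omega) \;=\; \chi_{A}(\omega)\, f(\omega)
```

Once the Green's function G_A (or its Fourier transform, the susceptibility \chi_A) has been estimated from a single reference forcing experiment, the same operator predicts the response to any other forcing f, which is the sense in which one scenario suffices; the long-time behaviour of the response connects to climate sensitivity, while the shape of G_A encodes the inertia of the system at different time scales.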
Computation of extreme heat waves in climate models using a large deviation algorithm
Studying extreme events and how they evolve in a changing climate is one of the most important current scientific challenges. When starting from complex climate models, a key difficulty is running simulations long enough to observe such extremely rare events. In physics, chemistry, and biology, rare event algorithms have recently been developed to compute probabilities of events that cannot be observed in direct numerical simulations. Here we propose such an algorithm, specifically designed for extreme heat or cold waves, based on statistical physics. This approach gives an improvement of more than two orders of magnitude in the sampling efficiency. We describe the dynamics of events that would not be observed otherwise. We show that European extreme heat waves are related to a global teleconnection pattern involving North America and Asia. This tool opens up a wide range of possible studies to quantitatively assess the impact of climate change
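As a rough illustration of the selection-and-cloning idea behind rare-event algorithms of this family (a minimal sketch on a toy Ornstein-Uhlenbeck process; the function names, the biasing parameter k and the toy dynamics are illustrative assumptions, not this paper's implementation):

```python
import numpy as np

def evolve(x, dt, n_steps, rng):
    """Toy dynamics standing in for the climate model: an Ornstein-Uhlenbeck
    process whose time integral plays the role of the heat-wave observable."""
    for _ in range(n_steps):
        x = x - x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
    return x

def cloning_step(x, increments, k, rng):
    """One selection step: members that accumulated a large observable
    increment are cloned in proportion to exp(k * increment); the ensemble
    size is kept fixed by discarding the others."""
    weights = np.exp(k * increments)
    weights /= weights.sum()
    parents = rng.choice(len(x), size=len(x), p=weights)
    return x[parents]

def rare_event_run(n_members=100, n_intervals=20, n_steps=50, dt=0.01, k=2.0, seed=0):
    """Alternate free evolution and cloning; the final ensemble is biased
    towards trajectories with a large time-averaged signal."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_members)
    for _ in range(n_intervals):
        x_old = x.copy()
        x = evolve(x, dt, n_steps, rng)
        increments = 0.5 * (x_old + x) * n_steps * dt  # crude time integral of x
        x = cloning_step(x, increments, k, rng)
    return x

if __name__ == "__main__":
    ensemble = rare_event_run()
    print("biased ensemble mean:", ensemble.mean())
```

Unbiased statistics for the rare events can then be recovered by reweighting each surviving trajectory by the inverse of the bias it accumulated across the cloning steps.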
