Error estimate for time-explicit finite volume approximation of strong solutions to systems of conservation laws
We study the finite volume approximation of strong solutions to nonlinear systems of conservation laws. We focus on time-explicit schemes on unstructured meshes, with entropy-satisfying numerical fluxes. The numerical entropy dissipation is quantified at each interface of the mesh, which enables us to prove a weak–BV estimate for the numerical approximation under a strengthened CFL condition. We then derive error estimates in the multidimensional case, using the relative entropy between the strong solution and its finite volume approximation. The error terms are carefully studied, leading to a classical estimate under this strengthened CFL condition.
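The scheme class studied here, a time-explicit finite volume update with an entropy-satisfying flux under a CFL restriction, can be sketched in the simplest 1D scalar case. The Burgers flux, the Rusanov (local Lax-Friedrichs) flux, and the CFL number below are illustrative assumptions, not the paper's setting, which is multidimensional systems on unstructured meshes:

```python
import numpy as np

def rusanov_flux(ul, ur, f, df):
    # Rusanov (local Lax-Friedrichs) flux: a simple entropy-satisfying choice
    a = max(abs(df(ul)), abs(df(ur)))
    return 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)

def fv_step(u, dx, cfl, f, df):
    # one explicit finite volume update with a CFL-limited time step
    amax = np.abs(df(u)).max()
    dt = cfl * dx / amax
    F = np.array([rusanov_flux(u[i], u[i + 1], f, df)
                  for i in range(len(u) - 1)])
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])  # interior cells only
    return unew, dt

# Burgers' equation, f(u) = u^2 / 2, on a uniform grid
f = lambda u: 0.5 * u**2
df = lambda u: u
x = np.linspace(0.0, 1.0, 101)
u = np.sin(2 * np.pi * x)
u, dt = fv_step(u, x[1] - x[0], 0.4, f, df)
```

Strengthening the CFL condition in the paper's sense amounts to taking a smaller `cfl` than linear stability alone would require, which is what makes the weak–BV estimate go through.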
Error analysis of a dynamic model adaptation procedure for nonlinear hyperbolic equations
We propose a dynamic model adaptation method for a nonlinear conservation law coupled with an ordinary differential equation. This model, called the "fine model", involves a small time scale, and setting this time scale to 0 leads to a classical conservation law, called the "coarse model", with a flux that depends on the unknown and on space and time. The dynamic model adaptation consists in automatically detecting the regions where the fine model can be replaced by the coarse one, without deteriorating the accuracy of the result. To do so, we provide an error estimate between the solution of the fine model and the solution of the adaptive method, enabling sharp control of the different parameters. This estimate rests upon stability results for conservation laws with respect to the flux function. Numerical results are presented at the end and show that our estimate is optimal.
OSAMOAL: optimized simulations by adapted models using asymptotic limits
We propose in this work to address the problem of model adaptation, dedicated to hyperbolic models with relaxation and to their parabolic limit. The goal is to replace a hyperbolic system of balance laws (the so-called fine model) by its parabolic limit (the so-called coarse model), in delimited parts of the computational domain. Our method is based on the construction of asymptotic-preserving schemes and on interfacial coupling methods between hyperbolic and parabolic models. We study in parallel the cases of the Goldstein-Taylor model and of the p-system with friction.
A multi-decade record of high quality fCO2 data in version 3 of the Surface Ocean CO2 Atlas (SOCAT)
The Surface Ocean CO2 Atlas (SOCAT) is a synthesis of quality-controlled fCO2 (fugacity of carbon dioxide) values for the global surface oceans and coastal seas with regular updates. Version 3 of SOCAT has 14.7 million fCO2 values from 3646 data sets covering the years 1957 to 2014. This latest version has an additional 4.6 million fCO2 values relative to version 2 and extends the record from 2011 to 2014. Version 3 also significantly increases the data availability for 2005 to 2013. SOCAT has an average of approximately 1.2 million surface water fCO2 values per year for the years 2006 to 2012. Quality and documentation of the data have improved. A new feature is the data set quality control (QC) flag of E for data from alternative sensors and platforms. The accuracy of surface water fCO2 has been defined for all data set QC flags. Automated range checking has been carried out for all data sets during their upload into SOCAT. The upgrade of the interactive Data Set Viewer (previously known as the Cruise Data Viewer) allows better interrogation of the SOCAT data collection and rapid creation of high-quality figures for scientific presentations. Automated data upload has been launched for version 4 and will enable more frequent SOCAT releases in the future. High-profile scientific applications of SOCAT include quantification of the ocean sink for atmospheric carbon dioxide and its long-term variation, detection of ocean acidification, as well as evaluation of coupled-climate and ocean-only biogeochemical models. Users of SOCAT data products are urged to acknowledge the contribution of data providers, as stated in the SOCAT Fair Data Use Statement. This ESSD (Earth System Science Data) "living data" publication documents the methods and data sets used for the assembly of this new version of the SOCAT data collection and compares these with those used for earlier versions of the data collection (Pfeil et al., 2013; Sabine et al., 2013; Bakker et al., 2014).
Individual data set files, included in the synthesis product, can be downloaded here: doi:10.1594/PANGAEA.849770. The gridded products are available here: doi:10.3334/CDIAC/OTG.SOCAT_V3_GRID
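The automated range checking applied at upload, as described above, can be illustrated with a minimal sketch. The bounds and the return convention here are assumptions for illustration, not SOCAT's actual thresholds:

```python
def range_check(fco2_values, lo=0.0, hi=1000.0):
    """Flag fCO2 values (in µatm) outside a plausible surface-ocean range.

    The bounds are illustrative, not SOCAT's documented thresholds.
    Returns the indices of suspect values for manual quality control.
    """
    return [i for i, v in enumerate(fco2_values)
            if not (lo <= v <= hi)]

# a negative fugacity and an implausibly high value are both flagged
suspect = range_check([310.2, 415.7, -5.0, 2500.0, 380.1])
```

Flagging rather than silently dropping values matters here: the abstract's QC flags record why a data point is suspect, leaving the decision to the quality controller.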
Comparative kinetic analysis of two fungal β-glucosidases
Background: The enzymatic hydrolysis of cellulose is still considered one of the main limiting steps of the biological production of biofuels from lignocellulosic biomass. It is a complex multistep process, and various kinetic models have been proposed. The cellulase enzymatic cocktail secreted by Trichoderma reesei has been intensively investigated. β-glucosidases are one of a number of cellulolytic enzymes, and catalyze the last step, releasing glucose from the inhibitory cellobiose. β-glucosidase (BGL1) is very poorly secreted by Trichoderma reesei strains, and complete hydrolysis of cellulose often requires supplementation with a commercial β-glucosidase preparation such as that from Aspergillus niger (Novozymes SP188). Surprisingly, kinetic modeling of β-glucosidases lacks reliable data, and the possible differences between native T. reesei and supplemented β-glucosidases are not taken into consideration, possibly because of the difficulty of purifying BGL1.
Results: A comparative kinetic analysis of β-glucosidase from Aspergillus niger and BGL1 from Trichoderma reesei, purified using a new and efficient fast protein liquid chromatography protocol, was performed. This purification is characterized by two major steps, including the adsorption of the major cellulases onto crystalline cellulose, and a final purification factor of 53. Quantitative analysis of the resulting β-glucosidase fraction from T. reesei showed it to be 95% pure. Kinetic parameters were determined using cellobiose and a chromogenic artificial substrate. A new method allowing easy and rapid determination of the kinetic parameters was also developed. β-Glucosidase SP188 (Km = 0.57 mM; Kp = 2.70 mM) has a lower specific activity than BGL1 (Km = 0.38 mM; Kp = 3.25 mM) and is also more sensitive to glucose inhibition. A Michaelis-Menten model integrating competitive inhibition by the product (glucose) has been validated and is able to predict the β-glucosidase activity of both enzymes.
Conclusions: This article provides a useful comparison between the activity of β-glucosidases from two different fungi, and shows the importance of fully characterizing both enzymes. A Michaelis-Menten model was developed, including glucose inhibition and kinetic parameters, which were accurately determined and compared. This model can be further integrated into a cellulose hydrolysis model dissociating β-glucosidase activity from that of other cellulases. It can also help to define the optimal enzymatic cocktails for new β-glucosidase activities.
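The competitive product inhibition described here is commonly written v = Vmax · S / (Km · (1 + P/Kp) + S). A minimal sketch using the Km and Kp values quoted above; the Vmax value and the substrate/product concentrations are placeholders, not data from the paper:

```python
def mm_competitive(S, P, Vmax, Km, Kp):
    # Michaelis-Menten rate with competitive product (glucose) inhibition:
    # v = Vmax * S / (Km * (1 + P / Kp) + S)
    # S: substrate (cellobiose) concentration, P: product (glucose) concentration
    return Vmax * S / (Km * (1.0 + P / Kp) + S)

# Km and Kp (mM) from the abstract; Vmax = 1.0 and the concentrations
# below are placeholders, since the abstract does not report them
v_sp188 = mm_competitive(S=1.0, P=2.0, Vmax=1.0, Km=0.57, Kp=2.70)
v_bgl1 = mm_competitive(S=1.0, P=2.0, Vmax=1.0, Km=0.38, Kp=3.25)
# at equal Vmax, BGL1's lower Km and higher Kp give the higher rate,
# consistent with SP188 being more sensitive to glucose inhibition
```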
Dynamic model adaptation for multiscale simulation of hyperbolic systems with relaxation
In numerous industrial CFD applications, it is usual to use two (or more) different codes to solve a physical phenomenon: where the flow is a priori assumed to have a simple behavior, a code based on a coarse model is applied, while a code based on a fine model is used elsewhere. This leads to a complex coupling problem with fixed interfaces. The aim of the present work is to provide a numerical indicator to optimize the position of these coupling interfaces. In other words, thanks to this numerical indicator, one can verify whether the use of the coarser model and of the resulting coupling introduces spurious effects. In order to validate this indicator, we use it in a dynamic multiscale method with moving coupling interfaces. The principle of this method is to use the coarse model instead of the fine model in as much of the computational domain as possible, while obtaining an accuracy comparable with that provided by the fine model. We focus here on general hyperbolic systems with stiff relaxation source terms, together with the corresponding hyperbolic equilibrium systems. Using a numerical Chapman-Enskog expansion and the distance to the equilibrium manifold, we construct the numerical indicator. Building on several works on the coupling of different hyperbolic models, an original numerical method of dynamic model adaptation is proposed. We prove that this multiscale method preserves invariant domains and that the entropy of the numerical solution decreases with respect to time. The reliability of the adaptation procedure is assessed on various 1D and 2D test cases coming from two-phase flow modeling.
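The paper's indicator comes from a numerical Chapman-Enskog expansion; a much cruder sketch of the underlying idea, measuring the cell-wise distance to the equilibrium manifold and switching models on a threshold, might look like this (the toy equilibrium map and the threshold are assumptions, not the authors' construction):

```python
import numpy as np

def equilibrium_distance(u, v, eq):
    # cell-wise distance of the state (u, v) to the equilibrium manifold
    # v = eq(u): large values mean the fine (relaxation) model is needed,
    # small values allow the coarse (equilibrium) model
    return np.abs(v - eq(u))

def select_fine_cells(u, v, eq, threshold):
    # cells whose indicator exceeds a user-chosen threshold keep the fine model
    return equilibrium_distance(u, v, eq) > threshold

# toy example: eq(u) = u as an assumed equilibrium map
u = np.array([0.2, 0.5, 1.0, 1.5])
v = np.array([0.2, 0.6, 1.0, 2.5])
mask = select_fine_cells(u, v, eq=lambda w: w, threshold=0.05)
```

Moving coupling interfaces then follow from re-evaluating the mask at each time step, so the fine-model region tracks where the solution leaves equilibrium.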
Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects
Policy learning in robot-assisted surgery (RAS) lacks data-efficient and versatile methods that exhibit the desired motion quality for delicate surgical interventions. To this end, we introduce Movement Primitive Diffusion (MPD), a novel method for imitation learning (IL) in RAS that focuses on gentle manipulation of deformable objects. The approach combines the versatility of diffusion-based imitation learning (DIL) with the high-quality motion generation capabilities of Probabilistic Dynamic Movement Primitives (ProDMPs). This combination enables MPD to achieve gentle manipulation of deformable objects while maintaining the data efficiency critical for RAS applications, where demonstration data is scarce. We evaluate MPD across various simulated and real-world robotic tasks on both state and image observations. MPD outperforms state-of-the-art DIL methods in success rate, motion quality, and data efficiency.
Project page: https://scheiklp.github.io/movement-primitive-diffusion
Review article: Insuring the green economy against natural hazards – charting research frontiers in vulnerability assessment
The insurance of green economy assets against natural hazards is a growing market. This study explores whether currently available published knowledge is adequate for the vulnerability assessment of these assets to natural hazards. A matrix is constructed to demonstrate the vulnerability to functional loss of 37 asset classes in the renewable energy, green construction, resource management, carbon capture and storage, energy storage, and sustainable transportation sectors. The 28 hazards adopted range from environmental and geophysical events to oceanic, coastal, and space weather events. A fundamental challenge in constructing the matrix was the lack of an asset–hazard taxonomy for the green economy. Each matrix cell represents the vulnerability of an asset to a specific hazard, based on a comprehensive systematic literature review. A confidence level is assigned to each vulnerability assessment based on a literature density heat map. The latter highlights specific knowledge gaps, in particular a lack of quantitative vulnerability studies that appropriately represent all functional loss mechanisms in green economy assets. Apart from charting research gaps, a main output of this study is the proposal of a representative asset–hazard taxonomy to guide future quantitative research that can be applied by the insurance industry
Reconstruction of primary vertices at the ATLAS experiment in Run 1 proton–proton collisions at the LHC
This paper presents the method and performance of primary vertex reconstruction in proton–proton collision data recorded by the ATLAS experiment during Run 1 of the LHC. The studies presented focus on data taken during 2012 at a centre-of-mass energy of √s = 8 TeV. The performance has been measured as a function of the number of interactions per bunch crossing over a wide range, from one to seventy. The measurement of the position and size of the luminous region and its use as a constraint to improve the primary vertex resolution are discussed. A longitudinal vertex position resolution of about 30 μm is achieved for events with a high multiplicity of reconstructed tracks. The transverse position resolution is better than 20 μm and is dominated by the precision on the size of the luminous region. An analytical model is proposed to describe the primary vertex reconstruction efficiency as a function of the number of interactions per bunch crossing and of the longitudinal size of the luminous region. Agreement between the data and the predictions of this model is better than 3% up to seventy interactions per bunch crossing.
