424 research outputs found
Using strong conflicts to detect quality issues in component-based complex systems
The mainstream adoption of free and open source software (FOSS) has widely popularised notions like software packages or plugins, maintained in a distributed fashion and evolving at a very quick pace. Each of these components is equipped with metadata, such as dependencies, which define the other components it needs to function properly and the incompatible components it cannot work with. In this paper, we introduce the notion of strong conflicts, defined from the component dependencies, which can be computed effectively. It gives important insights into the quality issues faced when adding or upgrading components in a given component repository, which is one facet of the predictable assembly problem. Our work contains concrete examples drawn from the world of GNU/Linux distributions that validate the proposed approach. It also shows that the measures defined can be easily applied to the Eclipse world, or to any other coarse-grained software component model.
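As a hedged illustration of the idea (not the paper's actual algorithm, which works on real distribution metadata at scale), strong conflicts can be brute-forced on a toy repository: two packages strongly conflict when no healthy installation contains both, even if they never conflict directly.

```python
from itertools import chain, combinations

# Hypothetical toy repository: each package maps to a list of
# dependency clauses (alternatives) plus a set of conflict pairs.
DEPS = {
    "a": [["b", "c"]],   # a needs b or c
    "b": [],
    "c": [["d"]],        # c needs d
    "d": [],
}
CONFLICTS = {("b", "d")}  # b and d cannot be co-installed

def healthy(subset):
    """A set of packages is healthy if every dependency clause is
    satisfied within it and it contains no conflicting pair."""
    s = set(subset)
    for p in s:
        for clause in DEPS[p]:
            if not s & set(clause):
                return False
    return not any(x in s and y in s for x, y in CONFLICTS)

def strong_conflict(p, q):
    """p and q strongly conflict if no healthy installation contains both."""
    pkgs = list(DEPS)
    subsets = chain.from_iterable(
        combinations(pkgs, r) for r in range(len(pkgs) + 1))
    return not any(p in s and q in s and healthy(s) for s in subsets)
```

Here `strong_conflict("b", "c")` holds even though b and c never conflict directly: any installation of c drags in d, which conflicts with b. Real package repositories need the efficient computation the paper introduces rather than this exponential enumeration.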
GEANT4: a simulation toolkit
Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics. PACS: 07.05.Tp; 13; 2
Optimal and Automated Deployment for Microservices
Microservices are highly modular and scalable Service Oriented Architectures.
They underpin automated deployment practices like Continuous Deployment and
Autoscaling. In this paper, we formalize these practices and show that
automated deployment - proven undecidable in the general case - is
algorithmically tractable for microservices. Our key assumption is that the
configuration life-cycle of a microservice is split into two phases: (i)
creation, which entails establishing initial connections with already available
microservices, and (ii) subsequent binding/unbinding with other microservices.
To illustrate the applicability of our approach, we implement an automatic
optimal deployment tool and compute deployment plans for a realistic
microservice architecture, modeled in the Abstract Behavioral Specification
(ABS) language.
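The two-phase lifecycle assumption can be sketched as follows. This is a hypothetical illustration, not the paper's ABS formalization: phase (i) requires all initial dependencies to already be deployed, while phase (ii) allows bindings to change freely afterwards.

```python
class Microservice:
    """Minimal sketch of the two-phase configuration lifecycle."""

    def __init__(self, name, initial_deps, available):
        # Phase (i): creation establishes initial connections, and
        # fails unless every required service is already available.
        missing = [d for d in initial_deps if d not in available]
        if missing:
            raise RuntimeError(f"cannot create {name}: missing {missing}")
        self.name = name
        self.bindings = set(initial_deps)

    # Phase (ii): subsequent binding/unbinding is unconstrained.
    def bind(self, service):
        self.bindings.add(service)

    def unbind(self, service):
        self.bindings.discard(service)
```

The restriction in phase (i) is what makes deployment planning decidable: a plan is simply an order of creations in which every service's initial dependencies precede it.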
On the compressibility of large-scale source code datasets
Storing ultra-large amounts of unstructured data (often called objects or blobs) is a fundamental task for several object-based storage engines, data warehouses, data-lake systems, and key–value stores. These systems cannot currently leverage similarities between objects, which could be vital in improving their space and time performance. An important use case in which we can expect the objects to be highly similar is the storage of large-scale versioned source code datasets, such as the Software Heritage Archive (Di Cosmo and Zacchiroli, 2017). This use case is particularly interesting given the extraordinary size (1.5 PiB), the variegated nature, and the high repetitiveness of the corpus at issue. In this paper we discuss and experiment with content- and context-based compression techniques for source-code collections that tailor known and novel tools to this setting, in combination with state-of-the-art general-purpose compressors and the information coming from the Software Heritage Graph. We experiment with our compressors over a random sample of the entire corpus and four large samples of source code files written in different popular languages: C/C++, Java, JavaScript, and Python. We also consider two usage scenarios for our compressors, called the Backup and the File-Access scenarios, where the latter adds to the former support for single-file retrieval. As a net result, our experiments show (i) how compressible each language is, (ii) which content- or context-based techniques compress better and are faster to (de)compress, possibly supporting individual file access, and (iii) the ultimate compressed size that, according to our estimate, our best solution could achieve in storing all the source code written in these languages and available in the Software Heritage Archive: namely, 3 TiB (down from their original 78 TiB total size, an average compression ratio of about 4%).
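The core observation, that compressing similar objects together beats compressing them individually, can be demonstrated with a minimal sketch using only the standard `zlib` compressor (the paper evaluates far more sophisticated tools; the file contents here are made up):

```python
import zlib

# Two hypothetical "versions" of the same source file: highly
# similar, as is typical in a versioned source code archive.
v1 = b"def add(a, b):\n    return a + b\n" * 50
v2 = v1.replace(b"add", b"plus")

# Compressing each object on its own cannot exploit
# cross-object similarity.
individual = len(zlib.compress(v1)) + len(zlib.compress(v2))

# Concatenating similar objects first lets the compressor's
# window span both: a simple form of context-based compression.
joint = len(zlib.compress(v1 + v2))
```

Grouping similar files before compression is what makes "context" (here, which files resemble each other, as recoverable from the Software Heritage Graph) valuable, at the cost of complicating single-file access in the File-Access scenario.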
Archiving and referencing source code with Software Heritage
Software, and software source code in particular, is widely used in modern research. It must be properly archived, referenced, described and cited in order to build a stable and long-lasting corpus of scientific knowledge. In this article we show how the Software Heritage universal source code archive provides a means to fully address the first two concerns, by seamlessly archiving all publicly available software source code, and by providing intrinsic persistent identifiers that allow one to reference it at various granularities in a way that is at the same time convenient and effective. We call upon the research community to widely adopt this approach.
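These identifiers are intrinsic in the sense that they are computed from the bytes of the archived object itself, so anyone can recompute and verify them. For file contents, Software Heritage identifiers (SWHIDs) follow Git's blob hashing scheme, as a minimal sketch shows:

```python
import hashlib

def swhid_for_content(data: bytes) -> str:
    """Compute the SWHID of a file content: a SHA-1 over a
    Git-style "blob <length>\\0" header followed by the raw bytes,
    wrapped in the swh:1:cnt: prefix."""
    digest = hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()
    return f"swh:1:cnt:{digest}"
```

Because the identifier depends only on the content, the same file archived from two different repositories gets the same SWHID, which is what makes referencing at fine granularity robust.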
Isomorphisms of types in the presence of higher-order references (extended version)
We investigate the problem of type isomorphisms in the presence of
higher-order references. We first introduce a finitary programming language
with sum types and higher-order references, for which we build a fully abstract
games model following the work of Abramsky, Honda and McCusker. Solving an open
problem by Laurent, we show that two finitely branching arenas are isomorphic
if and only if they are geometrically the same, up to renaming of moves
(Laurent's forest isomorphism). We deduce from this an equational theory
characterizing isomorphisms of types in our language. We show however that
Laurent's conjecture does not hold on infinitely branching arenas, yielding new
non-trivial type isomorphisms in a variant of our language with natural
numbers.
Selenium status is positively associated with bone mineral density in healthy aging European men
Objective: It is still a matter of debate whether subtle changes in selenium (Se) status affect thyroid function tests (TFTs) and bone mineral density (BMD). This is particularly relevant for the elderly, whose nutritional status is more vulnerable. Design and Methods: We investigated Se status in a cohort of 387 healthy elderly men (median age 77 years; interquartile range 75-80 years) in relation to TFTs and BMD. Se status was determined by measuring both plasma selenoprotein P (SePP) and Se. Results: The overall Se status in our population was low normal, with only 0.5% (2/387) of subjects meeting the criteria for Se deficiency. SePP and Se levels were not associated with thyroid stimulating hormone (TSH), free thyroxine (FT4), thyroxine (T4), triiodothyronine (T3), or reverse triiodothyronine (rT3) levels. The T3/T4 and T3/rT3 ratios, reflecting peripheral metabolism of thyroid hormone, were not associated with Se status either. SePP and Se were positively associated with total BMD and femoral trochanter BMD. Se, but not SePP, was positively associated with femoral neck and Ward's BMD. Multivariate linear analyses showed that these associations remained statistically significant in a model including TSH, FT4, body mass index, physical performance score, age, smoking, diabetes mellitus, and number of medications used. Conclusion: Our study demonstrates that Se status, within the normal European marginally supplied range, is positively associated with BMD in healthy aging men, independent of thyroid function. Thyroid function tests appear unaffected by Se status in this population.
A Serum Resistin and Multicytokine Inflammatory Pathway Is Linked With and Helps Predict All-cause Death in Diabetes
Context: Type 2 diabetes (T2D) shows a high mortality rate, partly mediated by atherosclerotic plaque instability. Discovering novel biomarkers may help identify high-risk patients who would benefit from more aggressive and specific management. We recently described a serum resistin and multicytokine inflammatory pathway (REMAP), including resistin, interleukin (IL)-1 beta, IL-6, IL-8, and TNF-alpha, that is associated with cardiovascular disease. Objective: We investigated whether REMAP is associated with and improves the prediction of mortality in T2D. Methods: A REMAP score was investigated in 3 cohorts comprising 1528 patients with T2D (409 incident deaths) and in 59 patients who underwent carotid endarterectomy (CEA; 24 deaths). Plaques were classified as unstable/stable according to the modified American Heart Association atherosclerosis classification. Results: REMAP was associated with all-cause mortality in each cohort and in all 1528 individuals (fully adjusted hazard ratio [HR] for a 1 SD increase = 1.34, P < .001). In CEA patients, REMAP was associated with mortality (HR = 1.64, P = .04), and a modest change was observed when plaque stability was taken into account (HR = 1.58; P = .07). REMAP improved discrimination and reclassification measures of both the Estimation of Mortality Risk in Type 2 Diabetic Patients and the Risk Equations for Complications of Type 2 Diabetes, well-established prediction models of mortality in T2D (P < .05 to < .001). Conclusion: REMAP is independently associated with and improves the prediction of all-cause mortality in T2D; it can therefore be used to identify high-risk individuals to be targeted with more aggressive management. Whether REMAP can also identify patients who are more responsive to IL-6 and IL-1 beta monoclonal antibodies that reduce cardiovascular burden and total mortality is an intriguing possibility to be tested.
