255 research outputs found
Fluent temporal logic for discrete-time event-based models
Fluent model checking is an automated technique for verifying that an event-based operational model satisfies state-based declarative properties. The link between the event-based and state-based formalisms is defined through fluents: state predicates whose values are determined by the occurrences of initiating and terminating events, which make the fluent true or false, respectively. The existing fluent temporal logic is convenient for reasoning about untimed event-based models but difficult to use for timed models. The paper extends fluent temporal logic with temporal operators for modelling timed properties of discrete-time event-based models. It presents two approaches that differ on whether the properties model the system state after the occurrence of each event or at a fixed time rate. Model checking of timed properties is made possible by translating them into the existing untimed framework. Copyright 2005 ACM
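The fluent semantics described in this abstract can be illustrated with a minimal sketch. This is not the paper's formalism, only a hedged illustration of the rule that initiating events make a fluent true and terminating events make it false; all names (`Fluent`, `observe`, the "DoorOpen" example) are hypothetical.

```python
class Fluent:
    """A state predicate made true by initiating events and
    false by terminating events (illustrative sketch only)."""

    def __init__(self, initiating, terminating, initially=False):
        self.initiating = set(initiating)
        self.terminating = set(terminating)
        self.value = initially

    def observe(self, event):
        # An initiating event switches the fluent on; a terminating
        # event switches it off; other events leave it unchanged.
        if event in self.initiating:
            self.value = True
        elif event in self.terminating:
            self.value = False
        return self.value


# Hypothetical example: a "DoorOpen" fluent evaluated over an event trace.
door_open = Fluent(initiating={"open"}, terminating={"close"})
trace = ["open", "lock", "close", "open"]
values = [door_open.observe(e) for e in trace]
# values == [True, True, False, True]
```

Note how the unrelated event "lock" does not change the fluent's value, which is the property that lets state-based formulas be evaluated over event traces.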
08031 Abstracts Collection -- Software Engineering for Self-Adaptive Systems
From 13.01. to 18.01.2008, the Dagstuhl Seminar 08031 "Software Engineering for Self-Adaptive Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Multilevel Contracts for Trusted Components
This article contributes to the design and verification of trusted components and services. Contracts are expressed at several levels to cover different facets, such as component consistency, compatibility, and correctness. The article introduces multilevel contracts and a design-and-verification process for handling and analysing these contracts in component models. The approach is implemented with the COSTO platform, which supports the Kmelia component model. A case study illustrates the overall approach. Comment: In Proceedings WCSI 2010, arXiv:1010.233
Software Engineering for Self-Adaptive Systems: A Second Research Roadmap
The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying, and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for adaptive solutions; processes; from centralized to decentralized control; and practical run-time verification and validation. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009, covering a different set of topics and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
UK Hydrological Outlook using historic weather analogues
Skilful seasonal hydrological forecasts are beneficial for water resources planning and disaster risk reduction. The UK Hydrological Outlook (UKHO) provides river flow and groundwater level forecasts at the national scale. Alongside the standard Ensemble Streamflow Prediction (ESP) method, a new Historic Weather Analogues (HWA) method has recently been implemented. The HWA method samples within high-resolution historical observations for analogue months that match the atmospheric circulation patterns forecast by a dynamical weather forecasting model. In this study, we conduct a hindcast experiment using the GR6J hydrological model to assess where and when the HWA method is skilful across a set of 314 UK catchments for different seasons. We benchmark the skill against the standard ESP and climatology forecasts to understand to what extent the HWA method represents an improvement over existing forecasting methods. Results show the HWA method yields skilful winter river flow forecasts across the UK, compared with the standard ESP method, for which skilful forecasts were only possible in southeast England. Winter river flow forecasts using the HWA method were also more skilful in discriminating high and low flows across all regions. Catchments with the greatest improvement tended to be upland, fast-responding catchments with limited catchment storage and where river flow variability is strongly tied to climate variability. Skilful winter river flow predictability was possible due to the relatively high forecast skill of atmospheric circulation patterns (e.g. the winter NAO) and the ability of the HWA method to derive high-resolution meteorological inputs suitable for hydrological modelling. However, skill was not uniform across seasons. Improvement in river flow forecast skill for other seasons was modest, such as moderate improvements in northern England and northeast Scotland during spring and little change in autumn. Skilful summer flow predictability remains possible only for southeast England, and skill scores were mostly reduced compared with the ESP method elsewhere. This study demonstrates that the HWA method can leverage both climate information from dynamical weather forecasting models and the influence of initial hydrological conditions. Incorporating climate information improved winter river flow predictability nationally, with the advantage of exploring historically unseen weather sequences. The strong influence of initial hydrological conditions contributed to retaining year-round forecast skill of river flows in southeast England. Overall, this study provides justification for when and where the HWA method is more skilful than existing forecasting approaches, and confirms the standard ESP method as a "tough to beat" forecasting system that future improvements should be tested against.
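Benchmarking a forecast against a reference such as ESP or climatology, as this abstract describes, is commonly expressed as a skill score of the form 1 − S/S_ref. The sketch below is a hedged illustration using mean squared error; the paper's actual skill metrics are not stated here, and all data values are hypothetical.

```python
def mse(forecast, observed):
    """Mean squared error between a forecast series and observations."""
    return sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(observed)


def skill_score(forecast, reference, observed):
    """Generic skill score: 1 is perfect, 0 matches the benchmark,
    negative values are worse than the benchmark."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)


# Hypothetical monthly mean flows (arbitrary units), not from the study.
obs = [10.0, 12.0, 8.0, 15.0]
hwa = [11.0, 12.5, 8.5, 14.0]   # hypothetical HWA ensemble means
esp = [12.0, 14.0, 10.0, 11.0]  # hypothetical ESP benchmark

s = skill_score(hwa, esp, obs)
# s > 0 here, i.e. the hypothetical HWA forecast beats the ESP benchmark
```

The same structure applies with other error measures (e.g. CRPS in ensemble verification); only `mse` would be swapped out.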
Neurodevelopmental Outcome of Young Children with Biliary Atresia and Native Liver: Results from the ChiLDReN Study
OBJECTIVES:
To assess neurodevelopmental outcomes among participants with biliary atresia with their native liver at ages 12 months (group 1) and 24 months (group 2), and to evaluate variables predictive of neurodevelopmental impairment.
STUDY DESIGN:
Participants enrolled in a prospective, longitudinal, multicenter study underwent neurodevelopmental testing with either the Bayley Scales of Infant Development, 2nd edition, or Bayley Scales of Infant and Toddler Development, 3rd edition. Scores (normative mean = 100 ± 15) were categorized as ≥100, 85-99, and <85 for χ2 analysis. Risk for neurodevelopmental impairment (defined as ≥1 score of <85 on the Bayley Scales of Infant Development, 2nd edition, or Bayley Scales of Infant and Toddler Development, 3rd edition, scales) was analyzed using logistic regression.
RESULTS:
There were 148 children who completed 217 Bayley Scales of Infant and Toddler Development, 3rd edition, examinations (group 1, n = 132; group 2, n = 85). Neurodevelopmental score distributions significantly shifted downward compared with test norms at 1 and 2 years of age. Multivariate analysis identified ascites (OR, 3.17; P = .01) and low length z-scores at time of testing (OR, 0.70; P < .04) as risk factors for physical/motor impairment; low weight z-score (OR, 0.57; P = .001) and ascites (OR, 2.89; P = .01) for mental/cognitive/language impairment at 1 year of age. An unsuccessful hepatoportoenterostomy was predictive of both physical/motor (OR, 4.88; P < .02) and mental/cognitive/language impairment (OR, 4.76; P = .02) at 2 years of age.
CONCLUSION:
Participants with biliary atresia surviving with native livers after hepatoportoenterostomy are at increased risk for neurodevelopmental delays at 12 and 24 months of age. Those with unsuccessful hepatoportoenterostomy are >4 times more likely to have neurodevelopmental impairment compared with those with successful hepatoportoenterostomy. Growth delays and/or complications indicating advanced liver disease should alert clinicians to the risk for neurodevelopmental delays and expedite appropriate interventions.
Deriving event-based transition systems from goal-oriented requirements models
Goal-oriented methods are increasingly popular for elaborating software requirements. They offer systematic support for incrementally building intentional, structural, and operational models of the software and its environment. Event-based transition systems, on the other hand, are convenient formalisms for reasoning about software behaviour at the architectural level. The paper relates these two worlds by presenting a technique for translating formal specifications of software operations, built according to the KAOS goal-oriented method, into event-based transition systems analysable by the LTSA toolset. The translation involves moving from a declarative, state-based, timed, synchronous formalism typical of requirements modelling languages to an operational, event-based, untimed, asynchronous one typical of architecture description languages. The derived model can be used for the formal analysis and animation of KAOS operation models in LTSA. The paper also provides insights into the two complementary formalisms, and shows that the use of synchronous temporal logic for requirements specification hinders a smooth transition from requirements to software architecture models. Authors: Emmanuel Letier (University College London, UK; London Software Systems, UK), Jeff Kramer (Imperial College London, UK; London Software Systems, UK), Jeff Magee (Imperial College London, UK; London Software Systems, UK), Sebastian Uchitel (CONICET, Argentina; Universidad de Buenos Aires, Facultad de Ingeniería, Departamento de Computación, Argentina; Imperial College London, UK; London Software Systems, UK).
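The target formalism of the translation described above, an event-based labelled transition system (LTS), can be sketched very compactly. This is a generic illustration, not the paper's translation or LTSA's FSP notation; the state and event names are hypothetical.

```python
# A labelled transition system as a map from (state, event) to next state.
# Hypothetical two-state system: "idle" and "busy".
lts = {
    ("idle", "request"): "busy",
    ("busy", "done"): "idle",
}


def accepts(lts, start, trace):
    """Check whether the LTS can execute the given event trace from start."""
    state = start
    for event in trace:
        key = (state, event)
        if key not in lts:
            return False  # no transition for this event in this state
        state = lts[key]
    return True


ok = accepts(lts, "idle", ["request", "done", "request"])  # True
bad = accepts(lts, "idle", ["done"])                       # False: "done" is
                                                           # not enabled in "idle"
```

Analysis tools such as LTSA explore all interleavings of such transitions rather than single traces, but the per-trace semantics is exactly this step function.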
Targeted Pten deletion plus p53-R270H mutation in mouse mammary epithelium induces aggressive claudin-low and basal-like breast cancer
SARS-CoV-2 seroprevalence in pregnant women in Kilifi, Kenya from March 2020 to March 2022
Background:
Seroprevalence studies are an alternative approach to estimating the extent of transmission of SARS-CoV-2 and the evolution of the pandemic in different geographical settings. We aimed to determine the SARS-CoV-2 seroprevalence from March 2020 to March 2022 in a rural and an urban setting in Kilifi County, Kenya.
Methods:
We obtained representative random samples of stored serum from a pregnancy cohort study for the period March 2020 to March 2022 and tested for antibodies against the spike protein using a qualitative SARS-CoV-2 ELISA kit (Wantai, total antibodies). All positive samples were retested for anti-SARS-CoV-2 anti-nucleocapsid antibodies (Euroimmun ELISA kit, NCP, qualitative, IgG) and anti-spike protein antibodies (Euroimmun ELISA kit, QuantiVac, quantitative, IgG).
Results:
A total of 2,495 (of 4,703 available) samples were tested. There was an overall trend of increasing seropositivity, from a low of 0% [95% CI 0–0.06] in March 2020 to a high of 89.4% [95% CI 83.36–93.82] in February 2022. Of the Wantai test-positive samples, 59.7% [95% CI 57.06–62.34] tested positive by the Euroimmun anti-SARS-CoV-2 NCP test and 37.4% [95% CI 34.83–40.04] tested positive by the Euroimmun anti-SARS-CoV-2 QuantiVac test. No differences were observed between the urban and rural hospitals, but villages adjacent to the major highway traversing the study area had a higher seroprevalence.
Conclusion:
Anti-SARS-CoV-2 seroprevalence rose rapidly, with most of the population exposed to SARS-CoV-2 within 23 months of the first cases. The high cumulative seroprevalence suggests greater population exposure to SARS-CoV-2 than that reported from surveillance data.
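Seroprevalence estimates like those above are reported with 95% confidence intervals for a proportion. The abstract does not state which interval method was used; the sketch below uses the Wilson score interval, one common choice that behaves well near 0% (as in the March 2020 estimate). The input numbers are hypothetical.

```python
import math


def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k positives out of n.
    One common method; not necessarily the one used in the study."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half


# Hypothetical example: 0 positives out of 100 early-pandemic samples.
# The lower bound collapses to 0 while the upper bound stays informative.
lo, hi = wilson_ci(0, 100)
```

Unlike the naive normal approximation, the Wilson interval never produces a negative lower bound, which is why it suits estimates pinned at 0% or near 100%.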
Processing of Ice Cloud In-Situ Data Collected by Bulk Water, Scattering, and Imaging Probes: Fundamentals, Uncertainties and Efforts towards Consistency
In-situ observations of cloud properties made by airborne probes play a critical role in ice cloud research through their role in process studies, parameterization development, and evaluation of simulations and remote sensing retrievals. To determine how cloud properties vary with environmental conditions, in-situ data collected during different field projects and processed by different groups must be used. However, due to the diverse algorithms and codes that are used to process measurements, it can be challenging to compare the results. Therefore, it is vital to understand both the limitations of specific probes and the uncertainties introduced by processing algorithms. Since there is currently no universally accepted framework regarding how in-situ measurements should be processed, there is a need for a general reference that describes the most commonly applied algorithms along with their strengths and weaknesses. Methods used to process data from bulk water probes, single-particle light-scattering spectrometers, and cloud imaging probes are reviewed herein, with emphasis on measurements of the ice phase. Particular attention is paid to how uncertainties, caveats, and assumptions in processing algorithms affect derived products, since there is currently no consensus on the optimal way of analyzing data. Recommendations for improving the analysis and interpretation of in-situ data include the following: establishment of a common reference library of individual processing algorithms; better documentation of assumptions used in these algorithms; development and maintenance of sustainable community software for processing in-situ observations; and more studies that compare different algorithms with the same benchmark data sets.
