
    Systems Engineering Leading Indicators Guide, Version 1.0

    The Systems Engineering Leading Indicators guide reflects the initial subset of possible indicators considered to be the highest priority for evaluating effectiveness before the fact. A leading indicator is a measure for evaluating the effectiveness of how a specific activity is applied on a program, in a manner that provides information about impacts likely to affect the system performance objectives. A leading indicator may be an individual measure, or a collection of measures, that is predictive of future system performance before that performance is realized. Leading indicators aid leadership in delivering value to customers and end users, while assisting in taking interventions and actions to avoid rework and wasted effort. The guide was initiated as a result of the June 2004 Air Force/LAI Workshop on Systems Engineering for Robustness and supports systems engineering revitalization. Over several years, a group of industry, government, and academic stakeholders worked to define and validate a set of thirteen indicators for evaluating the effectiveness of systems engineering on a program. Released as Version 1.0 in June 2007, the leading indicators provide predictive information for making informed decisions and, where necessary, taking preventive or corrective action during the program in a proactive manner. While the leading indicators appear similar to existing measures and often use the same base information, the difference lies in how the information is gathered, evaluated, interpreted, and used to provide a forward-looking perspective.
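
    As a small, hedged illustration of what such a measure can look like in practice, the sketch below computes a requirements-volatility ratio, a commonly cited example of a leading indicator. The formula (changed requirements over baseline size) and all numbers are illustrative assumptions, not the guide's normative definition.

    // Hypothetical leading indicator: requirements volatility, the share of
    // the baselined requirements set that was added, modified, or deleted in
    // a reporting period. A rising ratio early in a program is the kind of
    // forward-looking signal a leading indicator is meant to surface before
    // it materialises as rework.
    public class RequirementsVolatility {

        static double volatility(int added, int modified, int deleted, int baselineSize) {
            if (baselineSize <= 0) throw new IllegalArgumentException("empty baseline");
            return (double) (added + modified + deleted) / baselineSize;
        }

        public static void main(String[] args) {
            double v = volatility(12, 30, 5, 400);  // illustrative counts only
            System.out.printf("Requirements volatility this period: %.1f%%%n", v * 100);
        }
    }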

    Systems Engineering Leading Indicators Guide, Version 2.0

    The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in July 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with the International Council on Systems Engineering (INCOSE), Practical Software and Systems Measurement (PSM), and the Systems Engineering Advancement Research Initiative (SEAri) at MIT. A leading indicator is a measure for evaluating the effectiveness of how a specific project activity is likely to affect system performance objectives. A leading indicator may be an individual measure or a collection of measures and associated analysis that is predictive of future systems engineering performance. Systems engineering performance itself could be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions to avoid rework and wasted effort. Conventional measures provide status and historical information; leading indicators draw on trend information to allow for predictive analysis. By analyzing trends, the outcomes of certain activities can be forecast. Trends are analyzed for insight into both the entity being measured and potential impacts on other entities. This provides leaders with the data they need to make informed decisions and, where necessary, take preventive or corrective action during the program in a proactive manner. The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18 indicators. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations of indicators. Tailoring of the guide for effective use is encouraged. Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), the US Department of Defense Systems Engineering Research Center (SERC), and the National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative.
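
    The trend-based use described above can be made concrete with a short sketch: fit a straight line to the recent history of a measure and extrapolate one period ahead. The measure (open defect count), the threshold, and the data below are illustrative assumptions; the guide itself prescribes no particular fitting method.

    // Sketch of trend-based predictive analysis: least-squares fit over the
    // last few observations of a measure, extrapolated one period ahead.
    public class IndicatorTrend {

        // Least-squares slope and intercept for y sampled at x = 0..n-1.
        static double[] fitLine(double[] y) {
            int n = y.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += i; sy += y[i]; sxx += (double) i * i; sxy += i * y[i];
            }
            double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            return new double[] { slope, (sy - slope * sx) / n };
        }

        public static void main(String[] args) {
            double[] openDefects = { 14, 17, 21, 26, 33 };      // last five periods (illustrative)
            double[] fit = fitLine(openDefects);
            double next = fit[0] * openDefects.length + fit[1]; // projection, one period ahead
            System.out.printf("Projected open defects next period: %.0f%n", next);
            if (next > 30) {                                    // assumed threshold
                System.out.println("Trend breaches threshold: intervene before rework accrues.");
            }
        }
    }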

    LARVA - safer monitoring of real-time Java programs (tool paper)

    The use of runtime verification, as a lightweight approach to guaranteeing properties of systems, is increasingly being employed on real-life software. In this paper, we present the tool LARVA for the runtime verification of properties of Java programs, including real-time properties. Properties can be expressed in a number of notations, including timed automata enriched with stopwatches, Lustre, and a subset of the duration calculus. The tool has been successfully used on a number of case studies, including an industrial system handling financial transactions. LARVA also performs analysis of real-time properties to calculate, if possible, an upper bound on the memory and temporal overheads induced by monitoring. Moreover, through property analysis, LARVA assesses the impact that slowing down the system through monitoring has on the satisfaction of the properties.
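
    To make the kind of property involved concrete, the hand-written monitor below checks one real-time property of the sort LARVA handles: every transaction that starts must complete within a deadline. The class, its event methods, and the 500 ms bound are illustrative assumptions, not LARVA's API; LARVA generates equivalent checking code from its higher-level notations rather than requiring it to be written by hand.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative monitor for the real-time property "every transaction
    // that starts completes within 500 ms". LARVA would generate comparable
    // checking code from a timed-automaton style specification.
    public class DeadlineMonitor {
        private static final long DEADLINE_MS = 500;
        private final Map<String, Long> started = new HashMap<>();

        public void onStart(String txId) {
            started.put(txId, System.nanoTime());   // start this transaction's clock
        }

        public void onComplete(String txId) {
            Long t0 = started.remove(txId);
            if (t0 == null) {
                System.err.println("VIOLATION: completion without a start: " + txId);
            } else if ((System.nanoTime() - t0) / 1_000_000 > DEADLINE_MS) {
                System.err.println("VIOLATION: deadline exceeded for " + txId);
            }
            // A fuller monitor would also use a timer to flag transactions
            // that never complete at all.
        }
    }

    In a generated monitor, calls such as onStart and onComplete would be triggered by instrumentation at the corresponding program points rather than invoked manually.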

PolyLarva: technology agnostic runtime verification

    With numerous specialised technologies available to industry, it has become increasingly common for computer systems to be composed of heterogeneous components, built over and using different technologies and languages. While this enables developers to use the appropriate technologies for specific contexts, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework to enable extensible, technology-agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool able to handle the monitoring of heterogeneous-component systems. The approach is then applied to a case study with C and Java components.
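
    One way to picture the technology-agnostic part is a monitor that consumes a language-neutral event stream, so a C component and a Java component are checked by the same logic. The socket transport and one-event-per-line wire format below are hypothetical illustrations, not polyLarva's actual protocol.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Sketch of technology-agnostic monitoring: components in any language
    // report named events over a plain text connection, and the monitor
    // checks the property without knowing how the components are built.
    // Property checked here: no "write" event may occur while not "open".
    public class AgnosticMonitor {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9999)) {   // port is arbitrary
                Socket component = server.accept();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(component.getInputStream()));
                boolean open = false;
                String event;
                while ((event = in.readLine()) != null) {
                    switch (event) {
                        case "open":  open = true;  break;
                        case "close": open = false; break;
                        case "write":
                            if (!open) System.err.println("VIOLATION: write while closed");
                            break;
                        default: break;                            // ignore unknown events
                    }
                }
            }
        }
    }

    A C component would emit the same "open"/"write"/"close" lines over the socket, so the property logic never changes from one technology to another.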

    iDAH Research Software Engineering (RSE) Steering Group Working Paper

    This working paper resulted from two (hybrid) workshops conducted in May and June 2022, chaired by Professor James Smithies (King’s College London) at the request of Tao-Tao Chang, AHRC Head of (Research) Infrastructure. The workshops were conceived and organised by Dr. Anna-Maria Sichani (AHRC Policy and Engagement Fellow) and hosted at The Alan Turing Institute. Contributors are listed in Appendix 4.