    Analysing Switch-Case Code with Abstract Execution

    Constructing the control-flow graph (CFG) of machine code is made difficult by dynamic transfers of control (DTC), where the address of the next instruction is computed at run-time. Switch-case statements make compilers generate a large variety of machine-code forms with DTC. Two analysis approaches are commonly used: pattern-matching methods identify predefined instruction patterns to extract the target addresses, while analytical methods try to compute the set of target addresses using a general value analysis. We tested the abstract execution method of the SWEET tool as a value analysis for switch-case code. Here, SWEET is used as a plugin to the Bound-T tool; thus our work can also be seen as an experiment in modular tool design, where a general value-analysis tool is used to aid CFG construction in a WCET analysis tool. We find that the abstract-execution analysis works at least as well as the switch-case analyses in Bound-T itself, which are mostly based on pattern matching. However, there are still some weaknesses: the abstract domains available in SWEET are not well suited to representing sets of DTC target addresses, which are small but sparse and irregular. Also, in some cases the abstract-execution analysis fails because the domain used is not relational, that is, it does not model arithmetic relationships between the values of different variables. Future work will be directed towards the design of abstract domains that eliminate these weaknesses.
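
    To illustrate the kind of code in question (a generic sketch, not an example from the paper): a dense switch statement like the one below is typically compiled to a bounds check followed by an indirect jump through a table of code addresses. Recovering that small, irregular set of target addresses is exactly the DTC problem the analysis must solve.

```c
/* A dense switch is commonly lowered to a jump table: the compiler emits
 * a bounds check on x and then an indirect jump through a table of code
 * addresses. A CFG constructor must recover those target addresses. */
int classify(int x)
{
    switch (x) {
    case 0:  return 10;
    case 1:  return 42;
    case 2:  return 7;
    case 3:  return 99;
    default: return -1;
    }
}
```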

    Input-dependency analysis for hard real-time software

    The execution time of software for hard real-time systems must be predictable. Further, safe and not overly pessimistic bounds for the worst-case execution time (WCET) must be computable. We conceived a programming strategy called WCET-oriented programming and a code transformation strategy, the single-path conversion, that aid programmers in producing code that meets these requirements. These strategies avoid and eliminate, respectively, input-data dependencies in the code. The paper describes the formal analysis, based on abstract interpretation, that identifies input-data dependencies in the code and thus forms the basis for the strategies provided for hard real-time code development.
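
    As a rough sketch of the single-path idea (hand-written for illustration; the paper's conversion is a general code transformation): an input-dependent branch can be replaced by a branch-free computation, so that every input follows the same execution path and hence takes the same time.

```c
/* Branching version: the execution path, and hence the execution time,
 * depends on the input values. */
int max_branching(int a, int b)
{
    return (a > b) ? a : b;
}

/* Single-path version: one fixed path for all inputs. The comparison
 * result selects the value arithmetically instead of via a branch. */
int max_single_path(int a, int b)
{
    int take_a = (a > b);              /* 0 or 1 */
    return take_a * a + (1 - take_a) * b;
}
```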

    Formal derivation of concurrent assignments from scheduled single assignments

    Concurrent assignments are commonly used to describe synchronous parallel computations. We show how a sequence of concurrent assignments can be formally derived from the schedule of an acyclic single-assignment task graph and a memory allocation. In order to do this we develop a formal model of memory allocation in synchronous systems. We use weakest-precondition semantics to show that the sequence of concurrent assignments computes the same values as the scheduled single assignments. We give a lower bound on the memory requirements of memory allocations for a given schedule. This bound is tight: we define a class of memory allocations whose memory requirements always meet the bound. This class corresponds to conventional register allocation for DAGs and is suitable when memory access times are uniform. We furthermore define a class of simple "shift register" memory allocations. These allocations have the advantage of requiring a minimum of explicit storage control, and they yield local or nearest-neighbour accesses in distributed systems whenever the schedule allows this. Thus, this class of allocations is suitable when designing parallel special-purpose hardware, like systolic arrays.
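
    To make the notion concrete (a hand-picked example, not one from the paper): in a concurrent assignment such as (r1, r2) := (r2, r1 + r2), all right-hand sides are read before any left-hand side is written. Emulating this sequentially requires temporaries for the old values, as the sketch below makes explicit; the way each value moves to a neighbouring location is the pattern that shift-register allocations exploit.

```c
/* Iterates the concurrent assignment (r1, r2) := (r2, r1 + r2) n times,
 * starting from (0, 1). The temporaries make explicit that both
 * right-hand sides refer to the values from before the step. */
int iterate(int n)
{
    int r1 = 0, r2 = 1;
    for (int step = 0; step < n; ++step) {
        int old_r1 = r1, old_r2 = r2;  /* snapshot the old values    */
        r1 = old_r2;                   /* value shifts to a neighbour */
        r2 = old_r1 + old_r2;          /* computed from old values   */
    }
    return r1;
}
```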

    Computing transitive closure on systolic arrays of fixed size

    Forming the transitive closure of a binary relation (or directed graph) is an important part of many algorithms. When the relation is represented by a bit matrix, the transitive closure can be computed efficiently in parallel on a systolic array. Various such arrays for computing the transitive closure have been proposed. They all have in common, though, that the size of the array must be proportional to the number of nodes. Here we propose two ways of computing the transitive closure of an arbitrarily big graph on a systolic array of fixed size. The first method is a simple partitioning of a well-known systolic algorithm for computing the transitive closure. The second is a block-structured algorithm for computing the transitive closure. This algorithm is suitable for execution on a systolic array that can multiply fixed-size bit matrices and compute the transitive closure of graphs with a fixed number of nodes. The algorithm is, however, not limited to systolic array implementations; it works on any parallel architecture that can efficiently form the transitive closure and product of fixed-size bit matrices. The shortest-path problem, for directed graphs with weighted edges, can also be solved for arbitrarily large graphs on a fixed-size systolic array in the same manner as the transitive closure is computed.
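
    For reference, the underlying recurrence is Warshall's: reachability through intermediate nodes 0..k is built up one node at a time. The sequential sketch below shows the recurrence on a Boolean adjacency matrix; a blocked variant applies the same recurrence tile by tile, so the hardware only ever needs fixed-size bit-matrix products and closures.

```c
#include <stdbool.h>

#define N 64  /* number of nodes; a blocked variant would split the
                 matrix into fixed-size tiles for the systolic array */

/* Warshall's algorithm: on return, r[i][j] is true iff node j is
 * reachable from node i. r initially holds the adjacency relation. */
void transitive_closure(bool r[N][N])
{
    for (int k = 0; k < N; ++k)          /* allow k as an intermediate */
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                r[i][j] = r[i][j] || (r[i][k] && r[k][j]);
}
```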

    EFFECTS OF OCCUPATIONAL SAFETY AND HEALTH MANAGEMENT PRACTICES ON EMPLOYEE PRODUCTIVITY: A CASE OF NAKURU WATER AND SANITATION COMPANY

    The International Labour Organization estimates that, globally, about 2.2 million people die annually from occupational accidents and diseases; another 270 million suffer serious non-fatal injuries, while 160 million fall ill for shorter or longer periods from work-related causes. The estimated costs of occupational accidents and diseases amount to approximately 4 percent of the world's gross domestic product. This implies a considerable loss, with a negative impact on economic growth and a burden on society. Thus, preventing occupational accidents and diseases makes economic sense for society as well as being good business practice for companies. Nakuru Water and Sanitation Services Company is one of the institutions within Nakuru County involved in hazardous activities. However, it is not clear what health and safety practices are in place or how they affect employee productivity. The general objective of the study was therefore to assess the effects of safety and health management on employee productivity at Nakuru Water and Sanitation Services Company. The specific objectives were: to establish how management commitment to safety and health affects employee productivity, to assess how job risk and hazard assessment affects employee productivity, to establish how the provision of personal protective equipment affects employee productivity, and to assess the effects of safety training on the productivity of employees at Nakuru Water and Sanitation Services Company. The study adopted a descriptive survey research design. The target population comprised all 335 technical staff of Nakuru Water and Sanitation Services Company working on water treatment and distribution in field offices, including plumbers, technicians, engineers and chemists. A sample of 77 technical staff was selected using a stratified random sampling technique. Primary data was collected using self-administered questionnaires; in the analysis, descriptive statistics (mean, standard deviation, frequencies and percentages) were obtained for all objectives. The relationship between occupational safety and health management and employee productivity was examined using regression analysis. The study found that management commitment to the implementation of occupational safety and health has the highest effect on employee productivity, followed by the provision of personal protective equipment and safety training. Less emphasis was placed on job risk and hazard assessment, which was also found not to have a significant direct effect on employee productivity. The study therefore recommended that management commitment be emphasized in the implementation of occupational safety and health across all industries, as it creates a social bond with employees that translates into improved productivity. Further, Nakuru Water and Sanitation Company should place greater emphasis on, and enhance, proactive job risk and hazard assessment for both routine work and new projects.

    The WCET Tool Challenge 2011

    Following the successful WCET Tool Challenges in 2006 and 2008, the third event in this series was organized in 2011, again with support from the ARTIST DESIGN Network of Excellence. Following the practice established in the previous Challenges, the WCET Tool Challenge 2011 (WCC'11) defined two kinds of problems to be solved by the Challenge participants with their tools: WCET problems, which ask for bounds on the execution time, and flow-analysis problems, which ask for bounds on the number of times certain parts of the code can be executed. The benchmarks used in WCC'11 were debie1, PapaBench, and an industrial-strength application from the automotive domain provided by Daimler AG. Two default execution platforms were suggested to the participants, the ARM7 as a "simple target" and the MPC5553/5554 as a "complex target," but participants were free to use other platforms as well. Ten tools participated in WCC'11: aiT, Astrée, Bound-T, FORTAS, METAMOC, OTAWA, SWEET, TimeWeaver, TuBound and WCA.
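
    To make the two problem kinds concrete (an illustrative example, not one of the Challenge benchmarks): a flow-analysis problem for the function below would ask how many times the increment can execute per call, while a WCET problem would ask for a bound on the function's execution time on a given platform.

```c
/* Flow-analysis question: how often can the marked statement execute
 * per call? Here at most 10 times, but a tool must derive that bound
 * from the loop structure (or be given it as an annotation). */
int count_nonzero(const int v[10])
{
    int n = 0;
    for (int i = 0; i < 10; ++i)
        if (v[i] != 0)
            n++;   /* bounded by the loop: at most 10 executions */
    return n;
}
```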

    Data Cache Locking for Higher Program Predictability

    Caches have become increasingly important with the widening gap between main memory and processor speeds. However, they are a source of unpredictability due to their characteristics, resulting in programs behaving differently than expected. Cache-locking mechanisms adapt caches to the needs of real-time systems. Locking the cache is a solution that trades performance for predictability: at the cost of generally lower performance, the time to access memory becomes predictable. This paper combines compile-time cache analysis with data cache locking to estimate the worst-case memory performance (WCMP) in a safe, tight and fast way. In order to get predictable cache behavior, we first lock the cache for those parts of the code where the static analysis fails. To minimize the performance degradation, our method loads the cache, if necessary, with data likely to be accessed. Experimental results show that this scheme is fully predictable, without compromising the performance of the transformed program. When compared to an algorithm that assumes compulsory misses when the state of the cache is unknown, our approach eliminates all overestimation for the set of benchmarks, giving an exact WCMP of the transformed program without any significant decrease in performance.
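
    As a rough sketch of how such locking is typically applied (the primitives below are hypothetical placeholders; real processors expose locking through model-specific registers or vendor intrinsics, and the paper's method chooses the loaded data and locked regions automatically): data whose accesses defeat static cache analysis is preloaded, and the cache is locked around the unpredictable accesses.

```c
/* Hypothetical cache-control primitives, assumed for illustration only. */
extern void cache_preload(const void *addr, unsigned bytes);
extern void cache_lock(void);
extern void cache_unlock(void);

#define N 256

/* The accesses table[idx[i]] are data-dependent, so static analysis
 * cannot classify them as hits or misses. Preloading 'table' and then
 * locking the cache makes every such access a predictable hit. */
int sum_indirect(const int table[N], const unsigned char idx[N])
{
    int sum = 0;
    cache_preload(table, sizeof(int) * N);  /* load data likely accessed */
    cache_lock();                           /* freeze cache contents     */
    for (int i = 0; i < N; ++i)
        sum += table[idx[i]];
    cache_unlock();
    return sum;
}
```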