Formal Verification of Input-Output Mappings of Tree Ensembles
Recent advances in machine learning and artificial intelligence are now being
considered in safety-critical autonomous systems where software defects may
cause severe harm to humans and the environment. Design organizations in these
domains are currently unable to provide convincing arguments that their systems
are safe to operate when machine learning algorithms are used to implement
their software.
In this paper, we present an efficient method to extract equivalence classes
from decision trees and tree ensembles, and to formally verify that their
input-output mappings comply with requirements. The idea is that, since
safety requirements can be traced to desirable properties of system
input-output patterns, positive verification outcomes can be used in safety
arguments. This paper presents the implementation of the method in the tool
VoTE (Verifier of Tree Ensembles), and evaluates its scalability on two case
studies presented in current literature.
We demonstrate that our method is practical for tree ensembles trained on
low-dimensional data with up to 25 decision trees and tree depths of up to 20.
Our work also studies the limitations of the method on high-dimensional data
and offers a preliminary investigation of the trade-off between the number of
trees and the time taken for verification.
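To make the equivalence-class idea concrete, here is a minimal sketch of one standard way to realize it (not necessarily VoTE's algorithm; all names are illustrative): collecting every split threshold per input dimension partitions the input space into axis-aligned boxes on which each tree, and hence the ensemble, is constant, so a requirement can be checked once per box.

```python
# A minimal sketch of equivalence-class verification for tree ensembles.
# Assumption: the union of all split thresholds per input dimension yields
# axis-aligned boxes on which every tree (hence the ensemble) is constant.
# Function names are illustrative; VoTE's internals may differ.
from itertools import product

def interior_point(lo, hi):
    """Pick any point strictly inside the interval (lo, hi)."""
    if lo == float("-inf") and hi == float("inf"):
        return 0.0
    if lo == float("-inf"):
        return hi - 1.0
    if hi == float("inf"):
        return lo + 1.0
    return (lo + hi) / 2.0

def equivalence_classes(thresholds_per_dim):
    """Cartesian product of per-dimension intervals between split points."""
    intervals = []
    for ts in thresholds_per_dim:
        bounds = [float("-inf")] + sorted(set(ts)) + [float("inf")]
        intervals.append(list(zip(bounds, bounds[1:])))
    return product(*intervals)

def verify(ensemble, thresholds_per_dim, property_holds):
    """Check a requirement once per equivalence class; return a violating
    box as a counterexample region if one exists."""
    for box in equivalence_classes(thresholds_per_dim):
        x = [interior_point(lo, hi) for lo, hi in box]
        if not property_holds(x, ensemble(x)):
            return False, box
    return True, None
```

The number of classes grows as the product of per-dimension threshold counts, which is consistent with the scalability pattern reported above: practical on low-dimensional data, quickly intractable as dimensionality grows.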
A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm one has to consider how to combine the efficiency of resource
usage at all three layers of architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are to various degrees resource-constrained. Hence, an efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of four perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify gaps in
the existing research. Among several research gaps, we found that research is
less prevalent on data, storage, and energy as resources, and less extensive
with respect to the estimation, discovery, and sharing objectives. The most
well-studied resource types are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and of collaboration schemes requiring incentives is expected to differ between
edge architectures and classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.

Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
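As a toy illustration of how the four perspectives can classify a surveyed work, consider the sketch below; the enum members are only examples drawn from the abstract, and the paper's actual category lists are richer and may be named differently.

```python
# A toy encoding of the survey's four perspectives as a classification
# record. Category values are examples from the abstract, not the
# paper's full taxonomy.
from dataclasses import dataclass
from enum import Enum, auto

class ResourceType(Enum):
    COMPUTATION = auto()
    COMMUNICATION = auto()
    DATA = auto()
    STORAGE = auto()
    ENERGY = auto()

class Objective(Enum):
    ESTIMATION = auto()
    DISCOVERY = auto()
    SHARING = auto()

class Location(Enum):
    END_DEVICE = auto()
    EDGE_DEVICE = auto()
    CLOUD = auto()

@dataclass
class SurveyedWork:
    title: str
    resource_type: ResourceType
    objective: Objective
    location: Location
    resource_use: str  # kept free-text in this sketch

# Gap analysis then reduces to counting entries per category, e.g. how
# few works have resource_type == ResourceType.ENERGY.
```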
Understanding Shared Memory Bank Access Interference in Multi-Core Avionics
Deployment of multi-core platforms in safety-critical applications requires reliable estimation of the worst-case response time (WCRT) for critical processes. Determining WCRT requires accurately estimating and measuring the interference arising from multiple processes on multiple cores. Earlier works have proposed frameworks in which CPU, shared cache, and shared memory (DRAM) interference can be estimated using application- and platform-dependent parameters. In this paper we examine recent work in which a single-core equivalent (SCE) worst-case execution time is used as a basis for deriving WCRT. We describe the specific requirements of an avionics context, including the sharing of memory banks by multiple processes on multiple cores, and adapt the SCE framework to account for them. We present the adaptations needed in a real-time operating system to enforce the requirements, and a methodology for validating the theoretical WCRT through measurements on the resulting platform. The work reveals that the framework indeed yields a (pessimistic) bound on the WCRT. It also discloses that the maximum interference for memory accesses does not arise when all cores share the same memory bank.
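As a schematic illustration of the kind of bound such frameworks produce, the sketch below assumes a purely additive interference model in which every shared-memory access can be delayed once by each interfering core; the parameter names are ours, and the actual SCE framework uses more detailed platform-dependent inputs.

```python
# A schematic, SCE-style WCRT bound under a purely additive interference
# model; an illustration of the approach, not the paper's framework.
def wcrt_bound(wcet_isolation_s, n_mem_accesses, per_access_delay_s,
               n_interfering_cores):
    """WCRT = execution time in isolation + worst-case extra delay that
    every interfering core can add to each shared-memory (DRAM) access."""
    interference_s = n_mem_accesses * per_access_delay_s * n_interfering_cores
    return wcet_isolation_s + interference_s

# Example: 5 ms in isolation, 200k DRAM accesses, 50 ns worst-case extra
# delay per access per interfering core, 3 interfering cores:
print(wcrt_bound(5e-3, 200_000, 50e-9, 3))  # 0.035 s: 5 ms + 30 ms interference
```

Summing the worst case over every access is what makes such bounds safe but pessimistic: in practice, few accesses actually suffer the maximum delay from all cores at once.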
Boumediene v. Bush: Another Chapter in the Court’s Jurisprudence on Civil Liberties at Guantanamo Bay
A recent surge in the usage of instant messaging (IM) applications on mobile devices has brought the energy efficiency of these applications into focus. Although IM applications are changing the message communication landscape, this work illustrates that current versions of IM applications differ vastly in energy consumption when using third-generation (3G) cellular communication. This paper shows the interdependency between energy consumption and IM data patterns in this context. We analyse user interaction patterns using an IM dataset consisting of 1,043,370 messages collected from 51 mobile users. Based on the usage characteristics, we propose a message bundling technique that aggregates consecutive messages over time, reducing energy consumption with a trade-off against latency. The results show that message bundling can save up to 43% in energy consumption while still maintaining the conversation function. Finally, the energy cost of a common functionality in IM applications that informs the other party that the user is currently typing a response, the so-called typing notification, is evaluated, showing an energy increase ranging from 40% to 104%.
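A minimal sketch of the bundling idea follows: outgoing messages are buffered and flushed together once a bundling window expires, so the 3G radio enters its high-power state once per bundle rather than once per message. The class, its API, and the 10 s window are illustrative, not the paper's implementation.

```python
# A minimal sketch of message bundling for energy efficiency.
import time

class MessageBundler:
    def __init__(self, send_fn, window_s=10.0):
        self.send_fn = send_fn    # transmits a list of messages at once
        self.window_s = window_s  # the energy/latency trade-off knob
        self.buffer = []
        self.deadline = None

    def submit(self, message):
        """Queue a message; the first message in a bundle starts the clock."""
        if not self.buffer:
            self.deadline = time.monotonic() + self.window_s
        self.buffer.append(message)

    def poll(self):
        """Call periodically; flushes the bundle when the window expires."""
        if self.buffer and time.monotonic() >= self.deadline:
            self.send_fn(self.buffer)  # one radio wake-up for N messages
            self.buffer = []
            self.deadline = None
```

A longer window saves more energy but delays delivery, which is exactly the latency trade-off the paper measures.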
Challenges to clinical research in a rural African hospital; a personal perspective from Tanzania.
This article is based on a talk given at the Japanese Society for Tropical Medicine Annual Meeting in 2014. The severe febrile illness study was established in 2005, with the aim of defining the aetiology of febrile disease in children admitted to a hospital in Tanzania. Challenges arose in many areas. STUDY DESIGN: An initial plan to recruit only the severely ill was revised to enrol all febrile admissions, leading to a more comprehensive dataset but much increased costs. Operationally, a decision was made to set up a paediatric acute admissions unit (PAAU) in the hospital to facilitate recruitment and to provide appropriate initial care in line with perceived ethical obligations. This had knock-on effects relating to the responsibilities that were taken on, but also some unexpected positive outcomes. STUDY PERSONNEL: Local research staff were sometimes called upon to make up temporary shortfalls in hospital staffing, and a lack of staff made it impossible to recruit patients around the clock, seven days a week, creating the challenge of ensuring representative sampling. QUALITY CONTROL: Studies based on clinical examination create unique quality-control challenges, namely how to ensure that clinical staff examine in a systematic and reproducible way; we designed a sub-study to both explore this and improve quality. SUMMARY: Setting up clinical research projects in severely resource-poor settings creates many challenges, including those of an operational, technical, and ethical nature. While there are no 'right answers', an awareness of these problems can help overcome them.
Point-of-care measurement of blood lactate in children admitted with febrile illness to an African District Hospital.
BACKGROUND: Lactic acidosis is a consistent predictor of mortality owing to severe infectious disease, but its detection in low-income settings is limited to the clinical sign of "deep breathing" because of the lack of accessible technology for its measurement. We evaluated the use of a point-of-care (POC) diagnostic device for blood lactate measurement to assess the severity of illness in children admitted to a district hospital in Tanzania. METHODS: Children between the ages of 2 months and 13 years with a history of fever were enrolled in the study during a period of 1 year. A full clinical history and examination were undertaken, and blood was collected for culture, microscopy, complete blood cell count, and POC measurement of blood lactate and glucose. RESULTS: The study included 3248 children, of whom 164 (5.0%) died; 45 (27.4%) of these had raised levels of blood lactate (>5 mmol/L) but no deep breathing. Compared with mortality in children with lactate levels of ≤3 mmol/L, the unadjusted odds of dying were 1.6 (95% confidence interval [CI], 0.8-3.0), 3.4 (95% CI, 1.5-7.5), and 8.9 (95% CI, 4.7-16.8) in children with blood lactate levels of 3.1-5.0, 5.1-8.0, or >8.0 mmol/L, respectively. The prevalence of raised lactate levels (>5 mmol/L) was greater in children with malaria than in children with nonmalarial febrile illness (P < .001), although the associated mortality was greater in slide-negative children. CONCLUSIONS: POC lactate measurement can contribute to the assessment of children admitted to hospital with febrile illness and can also create an opportunity for more hospitals in resource-poor settings to participate in clinical trials of interventions to reduce mortality associated with hyperlactatemia.
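For readers unfamiliar with unadjusted odds, the following worked example shows how such an odds ratio and its 95% CI are computed from a 2x2 table; the counts are hypothetical, since the abstract reports only the ratios.

```python
# A worked example of an unadjusted odds ratio with a 95% Wald CI.
# The counts are hypothetical, chosen to land near the reported OR of 8.9.
import math

def odds_ratio_ci(deaths_exp, surv_exp, deaths_ref, surv_ref):
    """Odds of death in an exposed band (e.g. lactate >8.0 mmol/L) versus
    a reference band (lactate <=3 mmol/L)."""
    odds_ratio = (deaths_exp / surv_exp) / (deaths_ref / surv_ref)
    se_log = math.sqrt(1/deaths_exp + 1/surv_exp + 1/deaths_ref + 1/surv_ref)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
    return odds_ratio, (lo, hi)

# 40 deaths / 60 survivors in the exposed band vs 60 / 800 in reference:
print(odds_ratio_ci(40, 60, 60, 800))  # ~ (8.9, (5.5, 14.3))
```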
Spatial Inequalities of Public Employment (A közfoglalkoztatás térbeli egyenlőtlenségei)
In the event of a disaster, telecommunication infrastructures can be severely damaged or overloaded. Hastily formed networks can provide communication services in an ad hoc manner. These networks are challenging due to the chaotic context, where intermittent connectivity is the norm and the identity and number of participants cannot be assumed. In such environments, malicious actors may try to disrupt the communications to create more chaos for their own benefit. This paper proposes a general security framework for monitoring and reacting to disruptive attacks. It includes a collection of functions to detect anomalies, diagnose them, and perform mitigation. The measures are deployed in each node in a fully distributed fashion, but their collective effect is significant resilience to attacks, so that the actors can disseminate information under adverse conditions. The approach is evaluated in the context of a simulated disaster area network with a many-cast dissemination protocol, Random Walk Gossip, with a store-and-forward mechanism. A challenging threat model is adopted, in which adversaries may (1) try to drain resources both at the node level (battery life) and the network level (bandwidth), or (2) reduce message dissemination in their vicinity without spending much of their own energy. The results demonstrate that the approach diminishes the impact of the attacks considerably.
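The sketch below outlines the per-node detect/diagnose/mitigate structure described above; the features, thresholds, and reactions are placeholders rather than the paper's actual design, and `neighbor` is any object exposing the two reaction methods.

```python
# A skeletal node-local monitor: each node independently detects
# anomalies, makes a coarse diagnosis, and reacts locally, without
# coordinating with other nodes.
class NodeSecurityMonitor:
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.baseline = {}  # per-feature running mean

    def detect(self, features):
        """Flag feature vectors that deviate strongly from the baseline."""
        anomalous = False
        for name, value in features.items():
            mean = self.baseline.get(name, value)
            if abs(value - mean) > self.threshold:
                anomalous = True
            # exponential moving average keeps the baseline adaptive
            self.baseline[name] = 0.95 * mean + 0.05 * value
        return anomalous

    def diagnose(self, features):
        """Map the anomaly to a coarse threat class."""
        if features.get("msg_rate", 0) > 10 * features.get("fwd_rate", 0):
            return "drain"      # flooding to exhaust battery/bandwidth
        return "grey_hole"      # absorbing messages to curb dissemination

    def mitigate(self, diagnosis, neighbor):
        """Node-local reaction, applied to the suspect neighbor."""
        if diagnosis == "drain":
            neighbor.rate_limit()     # stop relaying the flood
        else:
            neighbor.deprioritize()   # route around the silent node
```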
Adsorption and Reduction of NO on Tin(IV) Oxide Doped with Chromium(III) Oxide
Functional Reactive Programming (FRP) is claimed to be a good choice for event-handling applications. Current object-oriented telecom applications are known to suffer from additional complexity due to event-handling code. In this paper we study the maintainability of FRP programs in the telecom domain compared to traditional object-oriented programming (OOP), with the motivation that higher maintainability increases service quality and decreases costs. Two implementations of the same procedure were created: one using Haskell and the reactive-banana FRP framework, and one using C++ and the OOP paradigm. Four software experts, each with over 20 years of experience, and three development engineers working on a product subject to study were engaged in evaluations based on a questionnaire covering five different aspects of maintainability. The evaluations indicate a higher maintainability profile for FRP compared with OOP. This is confirmed by a more detailed analysis of the code size. While performance was not a main criterion, a preliminary evaluation shows that the OOP prototype is 8-10 times faster than the FRP prototype in the current (non-optimised) implementations.
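To illustrate the paradigm difference being evaluated (the study itself compares Haskell/reactive-banana against C++), here is a language-neutral sketch on a toy "escalate after 3 alarms" rule: the callback version spreads mutable state across handler methods, while the stream version expresses the same behaviour as one declarative pipeline.

```python
# An illustrative contrast between the two styles under study; not code
# from the paper's prototypes.

# OOP/callback style: state and control flow live inside handler methods.
class AlarmHandlerOOP:
    def __init__(self):
        self.count = 0

    def on_event(self, event):
        if event["type"] == "alarm":
            self.count += 1
            if self.count >= 3:
                self.escalate(event)

    def escalate(self, event):
        print("escalating", event)

# FRP-like style: the same behaviour as one declarative pipeline over an
# event stream, with no mutable handler state.
def alarm_pipeline(events):
    alarms = (e for e in events if e["type"] == "alarm")
    for i, event in enumerate(alarms, start=1):
        if i >= 3:
            yield ("escalate", event)
```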
Inpatient child mortality by travel time to hospital in a rural area of Tanzania.
OBJECTIVE: To investigate the association, if any, between child mortality and distance to the nearest hospital. METHODS: The study was based on data from a 1-year study of the cause of illness in febrile paediatric admissions to a district hospital in north-east Tanzania. All villages in the catchment population were geolocated, and travel times were estimated from the availability of local transport. Using bands of travel time to hospital, we compared admission rates, inpatient case fatality rates, and child mortality rates in the catchment population using inpatient deaths as the numerator. RESULTS: Three thousand one hundred and eleven children under the age of 5 years were included, of whom 4.6% died; 2307 were admitted from <3 h away, of whom 3.4% died, and 804 were admitted from ≥3 h away, of whom 8.0% died. The admission rate declined from 125/1000 catchment population at <3 h away to 25/1000 at ≥3 h away, and the corresponding hospital deaths/catchment population were 4.3/1000 and 2.0/1000, respectively. Children admitted from more than 3 h away were more likely to be male, had a longer pre-admission duration of illness, and had a shorter time between admission and death. Assuming uniform mortality in the catchment population, the predicted number of deaths not benefiting from hospital admission prior to death increased by 21.4% per hour of travel time to hospital. If the same admission and death rates that were found at <3 h from the hospital applied to the whole catchment population, and if hospital care conferred a 30% survival benefit compared to home care, then 10.3% of childhood deaths due to febrile illness in the catchment population would have been averted. CONCLUSIONS: The mortality impact of poor access to hospital care in areas of high paediatric mortality is likely to be substantial, although uncertainty over the mortality benefit of inpatient care is the largest constraint in making an accurate estimate.
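As a back-of-envelope reconstruction of the counterfactual estimate above, the sketch below applies the quoted rates to an illustrative far-away population; the paper's own 10.3% figure rests on fuller data than the abstract reproduces.

```python
# Back-of-envelope counterfactual using only the rates quoted above.
pop_far = 10_000                     # children living >=3 h away (assumed)
admit_near = 125 / 1000              # admissions per child, <3 h band
admit_far = 25 / 1000                # admissions per child, >=3 h band
cfr_hospital = 0.034                 # case fatality among <3 h admissions

# Extra admissions if far villages had the near-hospital admission rate:
extra_admissions = (admit_near - admit_far) * pop_far          # = 1000

# If hospital care confers a 30% survival benefit, each admitted child's
# fatality risk is 0.7x the home-care risk, so the implied home-care risk:
cfr_home = cfr_hospital / (1 - 0.30)                           # ~ 0.049

deaths_averted = extra_admissions * (cfr_home - cfr_hospital)  # ~ 14.6
print(deaths_averted)
```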
Wheels within Wheels: Making Fault Management Cost-Effective
Local design and optimization of the components of a fault management system results in sub-optimal decisions. This means that the target system will likely not meet its objectives (under-perform) or cost too much if conditions, objectives, or constraints change. We can fix this by applying a nested management system to the fault-management system itself. We believe that doing so will produce a more resilient, self-aware system that can operate more effectively across a wider range of conditions and provide better behavior at closer to optimal cost.
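A minimal sketch of this nesting follows, with illustrative names, policies, and thresholds: the outer "wheel" observes how well the inner fault manager performs and re-tunes its policy at run time, rather than freezing design-time choices. A `failure` here is any object exposing a retry() method.

```python
# A toy "wheels within wheels" structure: a meta-manager managing the
# fault manager itself. Everything here is illustrative.
class FaultManager:
    def __init__(self, retry_budget=3):
        self.retry_budget = retry_budget
        self.failures_seen = 0
        self.failures_resolved = 0

    def handle(self, failure):
        self.failures_seen += 1
        for _ in range(self.retry_budget):
            if failure.retry():
                self.failures_resolved += 1
                return True
        return False  # escalate elsewhere

class MetaManager:
    """The outer management loop: adapts the fault manager's own policy
    as conditions drift, instead of freezing design-time choices."""
    def adapt(self, fm):
        if fm.failures_seen == 0:
            return
        success = fm.failures_resolved / fm.failures_seen
        if success < 0.5 and fm.retry_budget < 10:
            fm.retry_budget += 1   # recovery is failing: spend more on it
        elif success > 0.95 and fm.retry_budget > 1:
            fm.retry_budget -= 1   # recovery cheap and reliable: cut cost
```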
This document summarizes the results of Working Group 7, "Cost-Effective Fault Management", at Dagstuhl Seminar 09201, "Self-Healing and Self-Adaptive Systems" (organized by A. Andrzejak, K. Geihs, O. Shehory and J. Wilkes). The seminar was held from May 10th to May 15th, 2009, in Schloss Dagstuhl - Leibniz Center for Informatics
- …
