Avatar: A Time- and Space-Efficient Self-Stabilizing Overlay Network
Overlay networks present an interesting challenge for fault-tolerant
computing. Many overlay networks operate in dynamic environments (e.g. the
Internet), where faults are frequent and widespread, and the number of
processes in a system may be quite large. Recently, self-stabilizing overlay
networks have been presented as a method for managing this complexity.
\emph{Self-stabilizing overlay networks} promise that, starting from any
weakly-connected configuration, a correct overlay network will eventually be
built. To date, this guarantee has come at a cost: nodes may either have high
degree during the algorithm's execution, or the algorithm may take a long time
to reach a legal configuration. In this paper, we present the first
self-stabilizing overlay network algorithm that does not incur this penalty.
Specifically, we (i) present a new locally-checkable overlay network based upon
a binary search tree, and (ii) provide a randomized algorithm for
self-stabilization that terminates in an expected polylogarithmic number of
rounds \emph{and} increases a node's degree by only a polylogarithmic factor in
expectation.
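The abstract's key structural notion, local checkability, can be illustrated with a small sketch: if each node stores the key interval it believes it is responsible for, a check that reads only a node and its direct neighbours composes into a global correctness predicate. The interval encoding below is an assumption for illustration, not the paper's Avatar construction.

```python
# A minimal sketch of local checkability for a search-tree overlay.
# Illustrative encoding: each node claims a key interval, and the
# global predicate is the conjunction of purely local checks.

class Node:
    def __init__(self, ident, lo, hi, left=None, right=None):
        self.ident = ident
        self.lo, self.hi = lo, hi        # claimed responsibility interval
        self.left, self.right = left, right

def locally_legal(node):
    """Inspects only this node's state and its direct neighbours."""
    if not (node.lo <= node.ident <= node.hi):
        return False
    if node.left and (node.left.lo, node.left.hi) != (node.lo, node.ident - 1):
        return False
    if node.right and (node.right.lo, node.right.hi) != (node.ident + 1, node.hi):
        return False
    return True

# If the root claims the whole key space and every node passes its local
# check, the overlay is a correct search tree; any faulty configuration
# is detected by at least one node, which can then trigger repair.
root = Node(8, 0, 15, left=Node(3, 0, 7), right=Node(12, 9, 15))
assert all(locally_legal(n) for n in (root, root.left, root.right))
```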
Natural Notation for the Domestic Internet of Things
This study explores the use of natural language to give instructions that
might be interpreted by Internet of Things (IoT) devices in a domestic `smart
home' environment. We start from the proposition that reminders can be
considered as a type of end-user programming, in which the executed actions
might be performed either by an automated agent or by the author of the
reminder. We conducted an experiment in which people wrote sticky notes
specifying future actions in their home. In different conditions, these notes
were addressed to themselves, to others, or to a computer agent. We analyse the
linguistic features and strategies that are used to achieve these tasks,
including the use of graphical resources as an informal visual language. The
findings provide a basis for design guidance related to end-user development
for the Internet of Things.Comment: Proceedings of the 5th International symposium on End-User
Development (IS-EUD), Madrid, Spain, May, 201
Preventing Advanced Persistent Threats in Complex Control Networks
An Advanced Persistent Threat (APT) is an emerging class of attack against Industrial Control and Automation Systems that is executed over a long period of time and is difficult to detect. In this context, graph theory can be applied to model the interaction among nodes and the complex attacks affecting them, as well as to design recovery techniques that ensure the survivability of the network. Accordingly, we leverage a decision model to study how a set of hierarchically selected nodes can collaborate to detect an APT within the network by monitoring changes in its topology. Moreover, we implement a response service based on redundant links that dynamically uses a secret sharing scheme and applies a flexible routing protocol depending on the severity of the attack. The ultimate goal is twofold: ensuring the reachability between nodes despite the changes, and preventing the path followed by messages from being discovered.
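As a rough illustration of the response service's building block, the sketch below implements a standard (k, n) Shamir secret sharing scheme of the kind that could split a message across redundant paths, so that intercepting fewer than k shares reveals neither the message nor the full route. The field size and the API are assumptions; the paper's actual scheme and parameters are not reproduced here.

```python
# Illustrative (k, n) Shamir secret sharing over GF(P).
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=424242, k=3, n=5)  # one share per disjoint path
assert reconstruct(shares[:3]) == 424242       # any 3 shares suffice
```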
Formal specification of a self-sustainable holonic system for smart electrical micro-grids
Stand-alone micro-grids have emerged within the smart grids field, facing important challenges related to their proper and efficient operation. One example is self-sustainability when the micro-grid is disconnected from the main utility, e.g. due to a failure in the main utility or to geographical constraints, which requires the efficient control of energy demand and production. This paper describes the formal specification of a holonic system architecture that is able to perform the automation control functions in electrical stand-alone micro-grids, particularly aiming to improve their self-sustainability. The system aims at optimizing the power flow among the different electrical players, both producers and consumers, to keep the micro-grid operating even under adverse situations. The behaviour of each individual holon and their coordination patterns were modelled, analysed and validated using the Petri net formalism, allowing the complete verification of the system's correctness during the design phase.
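To make the verification step concrete, here is a minimal Petri net interpreter with an exhaustive (capped) reachability search, the kind of design-phase check the Petri net formalism enables. The producer/consumer net is a toy stand-in, not the paper's micro-grid model.

```python
# Minimal Petri net interpreter and bounded reachability exploration.
from collections import deque

# transition -> (places consumed, places produced); a toy producer/consumer
# holon pair invented for illustration.
TRANSITIONS = {
    "produce": ({"producer_ready": 1}, {"energy": 1, "producer_ready": 1}),
    "consume": ({"energy": 1, "consumer_ready": 1}, {"consumer_ready": 1}),
}

def enabled(marking, consumed):
    return all(marking.get(p, 0) >= n for p, n in consumed.items())

def fire(marking, consumed, produced):
    m = dict(marking)
    for p, n in consumed.items():
        m[p] -= n
    for p, n in produced.items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(initial, cap=3):
    """Exhaustive state-space exploration (capped, since `produce` alone
    would make this toy net unbounded)."""
    seen, frontier = set(), deque([initial])
    while frontier:
        m = frontier.popleft()
        key = tuple(sorted(m.items()))
        if key in seen or m.get("energy", 0) > cap:
            continue
        seen.add(key)
        for consumed, produced in TRANSITIONS.values():
            if enabled(m, consumed):
                frontier.append(fire(m, consumed, produced))
    return seen

markings = reachable({"producer_ready": 1, "consumer_ready": 1})
# Design-phase check: every explored marking has an enabled transition,
# i.e. the toy net is deadlock-free within the explored bound.
assert all(any(enabled(dict(m), c) for c, _ in TRANSITIONS.values())
           for m in markings)
```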
Hierarchic interactive path planning in virtual reality
To save time and money while designing new products, industry needs tools to design, test and validate products using virtual prototypes. These virtual prototypes must make it possible to test the product at all Product Life-cycle Management (PLM) stages. Many operations in PLM involve human manipulation of product components in cluttered environments (product assembly, disassembly or maintenance). Virtual Reality (VR) enables real operators to perform these tests with virtual prototypes. This work introduces a novel path planning architecture allowing collaboration between a VR user and an automatic path planning system. It is based on an original environment model including semantic, topological and geometric information, and an automatic path planning process split into two phases: coarse (semantic and topological information) and fine (semantic and geometric information) planning. The collaboration between the VR user and the automatic path planner comprises three main aspects. First, the VR user is guided along a pre-computed path through a haptic device, while remaining free to leave the proposed path to explore potentially better ways. Second, the authority of the automatic planning system is balanced to leave the user free to explore alternatives (geometric layer). Third, the intents of the VR user are predicted (on the topological layer) and integrated into the re-planning process. Experiments are provided to illustrate the multi-layer representation of the environment, the path planning process, the control sharing and the intent prediction.
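A minimal sketch of the two-phase idea: a cheap coarse search over the topological layer (rooms and doorways) selects a corridor, and the expensive fine geometric search is then restricted to it. The room graph and unit edge costs below are invented for illustration, not the paper's architecture.

```python
# Coarse planning over a topological room graph; the fine geometric
# phase would then run only inside the rooms this path selects.
import heapq

TOPOLOGY = {                    # room -> neighbouring rooms (semantic layer)
    "workshop": ["corridor"],
    "corridor": ["workshop", "assembly", "storage"],
    "assembly": ["corridor"],
    "storage": ["corridor"],
}

def coarse_plan(start, goal):
    """Dijkstra over the topological layer with unit edge costs."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, room, path = heapq.heappop(frontier)
        if room == goal:
            return path
        if room in seen:
            continue
        seen.add(room)
        for nxt in TOPOLOGY[room]:
            heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return None

corridor_rooms = coarse_plan("workshop", "assembly")
# The fine phase would now run a collision-checked geometric planner only
# inside `corridor_rooms`, keeping the expensive search small; a user who
# deviates triggers intent prediction and re-planning on this layer.
print(corridor_rooms)           # ['workshop', 'corridor', 'assembly']
```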
Vertex importance extension of betweenness centrality algorithm
A variety of real-life structures can be simplified to a graph. Such a simplification emphasizes the structure represented by vertices connected via edges. A common method for analyzing the importance of vertices in a network is betweenness centrality. The centrality is computed using information about the shortest paths that exist in a graph. This approach places the importance on the edges that connect the vertices. However, not all vertices are equal: some of them might be more important than others or have a more significant influence on the behavior of the network. Therefore, we introduce a modification of the betweenness centrality algorithm that takes vertex importance into account. This approach allows the betweenness centrality score to be further refined to better fulfill the needs of the network. We illustrate this idea on a real traffic network. We test the performance of the algorithm on traffic network data from the city of Bratislava, Slovakia, to show that the modification does not significantly slow down the original algorithm. We also provide a visualization of the traffic network of the city of Ostrava, Czech Republic, to show the effect of the vertex importance adjustment. The algorithm was parallelized with MPI (http://www.mpi-forum.org/) and tested on the Salomon supercomputer (https://docs.it4i.cz/) at IT4Innovations National Supercomputing Center, Czech Republic.
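The modification can be sketched on top of Brandes' algorithm: vertex importance enters the dependency-accumulation step, scaling each shortest path's contribution by the importance of its endpoint. The exact weighting used in the paper is not reproduced here; this is one plausible placement of the factor.

```python
# Brandes-style betweenness with a per-vertex importance weight folded
# into the dependency accumulation (unweighted edges, BFS distances).
from collections import deque

def weighted_betweenness(adj, importance):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: BFS shortest-path counting, as in Brandes.
        dist = {s: 0}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: dependency accumulation; the classic "1 +" becomes
        # "importance[w] +", weighting paths by their target's importance.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (importance[w] + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(weighted_betweenness(adj, {"a": 1.0, "b": 1.0, "c": 2.0}))
```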
Your Proof Fails? Testing Helps to Find the Reason
Applying deductive verification to formally prove that a program respects its
formal specification is a very complex and time-consuming task due in
particular to the lack of feedback in case of proof failures. Along with a
non-compliance between the code and its specification (due to an error in at
least one of them), possible reasons for a proof failure include a missing or
too-weak specification for a called function or a loop, and a lack of time or
simply the incapacity of the prover to finish a particular proof. This work
proposes a new methodology in which test generation helps to identify the reason
for a proof failure and to exhibit a counter-example clearly illustrating the
issue. We describe how to transform an annotated C program into C code suitable
for testing and illustrate the benefits of the method on comprehensive
examples. The method has been implemented in STADY, a plugin of the software
analysis platform FRAMA-C. Initial experiments show that detecting
non-compliances and contract weaknesses makes it possible to precisely diagnose
most proof failures. Comment: 11 pages, 10 figures
A wide-spectrum language for verification of programs on weak memory models
Modern processors deploy a variety of weak memory models, which for
efficiency reasons may (appear to) execute instructions in an order different
to that specified by the program text. The consequences of instruction
reordering can be complex and subtle, and can impact on ensuring correctness.
Previous work on the semantics of weak memory models has focussed on the
behaviour of assembler-level programs. In this paper we utilise that work to
extract some general principles underlying instruction reordering, and apply
those principles to a wide-spectrum language encompassing abstract data types
as well as low-level assembler code. The goal is to support reasoning about
implementations of data structures for modern processors with respect to an
abstract specification.
Specifically, we define an operational semantics, from which we derive some
properties of program refinement, and encode the semantics in the rewriting
engine Maude as a model-checking tool. The tool is used to validate the
semantics against the behaviour of a set of litmus tests (small assembler
programs) run on hardware, and also to model check implementations of data
structures from the literature against their abstract specifications.
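The flavour of validating a semantics against litmus tests can be shown in a few lines: enumerate all interleavings of the classic store-buffering test, with and without a TSO-style "a store may be delayed past an independent load" reordering, and collect the observable register outcomes. This toy enumerator stands in for the paper's Maude encoding; it is not the paper's semantics.

```python
# Store-buffering (SB) litmus test under one allowed reordering rule.
from itertools import permutations

T0 = [("w", "x", 1), ("r", "y", "r1")]   # thread 0: x := 1; r1 := y
T1 = [("w", "y", 1), ("r", "x", "r2")]   # thread 1: y := 1; r2 := x

def thread_orders(prog):
    """Program order, plus the version where the store is delayed past
    the independent load (the only reordering this sketch allows)."""
    yield prog
    w, r = prog
    if w[0] == "w" and r[0] == "r" and w[1] != r[1]:
        yield [r, w]

def outcomes():
    results = set()
    for p0 in thread_orders(T0):
        for p1 in thread_orders(T1):
            # All interleavings of the two (re)ordered threads.
            for choice in set(permutations([0, 0, 1, 1])):
                progs = [list(p0), list(p1)]
                mem, regs = {"x": 0, "y": 0}, {}
                for t in choice:
                    op = progs[t].pop(0)
                    if op[0] == "w":
                        mem[op[1]] = op[2]
                    else:
                        regs[op[2]] = mem[op[1]]
                results.add((regs["r1"], regs["r2"]))
    return results

# (0, 0) appears only because of the store/load reordering; it is
# impossible under sequentially consistent program order.
print(outcomes())
```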
Refinement algebra for probabilistic programs
We identify a refinement algebra for reasoning about probabilistic program transformations in a total-correctness setting. The algebra is equipped with operators that determine whether a program is enabled or terminates, respectively. As well as developing the basic theory of the algebra, we demonstrate how it may be used to explain key differences and similarities between standard (i.e. non-probabilistic) and probabilistic programs, and we verify important transformation theorems for probabilistic action systems. Comment: 29 pages
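A toy model makes the object of study concrete: represent a probabilistic program as a map from a state to a distribution over outcomes, and validate a transformation by comparing output distributions. The operators below are illustrative, not the paper's algebra.

```python
# Probabilistic programs as state-to-distribution maps (toy model).
from fractions import Fraction

def pchoice(p, prog_a, prog_b):
    """Probabilistic choice: run prog_a with probability p, else prog_b."""
    def run(state):
        dist = {}
        for outcome, q in prog_a(state).items():
            dist[outcome] = dist.get(outcome, Fraction(0)) + p * q
        for outcome, q in prog_b(state).items():
            dist[outcome] = dist.get(outcome, Fraction(0)) + (1 - p) * q
        return dist
    return run

skip = lambda s: {s: Fraction(1)}        # identity program
incr = lambda s: {s + 1: Fraction(1)}    # deterministic increment

half = Fraction(1, 2)
lhs = pchoice(half, incr, incr)          # choice between identical branches
rhs = incr                               # ... collapses to the branch itself

# A transformation is validated by comparing output distributions.
assert lhs(0) == rhs(0) == {1: Fraction(1)}
```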
Counterexample-Guided Polynomial Loop Invariant Generation by Lagrange Interpolation
We apply multivariate Lagrange interpolation to synthesize polynomial
quantitative loop invariants for probabilistic programs. We reduce the
computation of a quantitative loop invariant to solving constraints over
program variables and unknown coefficients. Lagrange interpolation allows us to
find constraints with fewer unknown coefficients. Counterexample-guided
refinement furthermore generates linear constraints that pinpoint the desired
quantitative invariants. We evaluate our technique on several case studies with
polynomial quantitative loop invariants.
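A univariate sketch of the interpolation step: sample the loop at a few inputs, let Lagrange interpolation propose a polynomial candidate (pinning down the unknown coefficients with as few samples as the degree requires), and test it on unseen inputs as a stand-in for the counterexample-guided refinement. The paper works with multivariate interpolation over program variables; the loop below is invented for illustration.

```python
# Lagrange interpolation proposing a polynomial invariant candidate.
from fractions import Fraction

def run_loop(n):                # the loop to summarize: result is n(n-1)/2
    x = 0
    for i in range(n):
        x += i
    return x

def lagrange(points):
    """Return a callable polynomial passing through the (x, y) points."""
    def poly(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return poly

# Three samples pin down a degree-2 candidate: fewer unknowns to solve for.
candidate = lagrange([(n, run_loop(n)) for n in (0, 1, 2)])

# Counterexample search: any disagreement would refine the constraints.
assert all(candidate(n) == run_loop(n) for n in range(50))
```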
