The McKay correspondence as an equivalence of derived categories
The classical McKay correspondence relates representations of a finite subgroup G ⊂ SL(2,C) to the cohomology of the well-known minimal resolution of the Kleinian singularity C2/G. Gonzalez-Sprinberg and Verdier [10] interpreted the McKay correspondence as an isomorphism on K-theory, observing that the representation ring of G is isomorphic to the G-equivariant K-theory of C2. More precisely, they identify a basis of the K-theory of the resolution consisting of the classes of certain tautological sheaves associated to the irreducible representations of G.
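The two levels of the correspondence described above can be stated compactly; writing Y for the minimal resolution of C2/G, the K-theory form is the Gonzalez-Sprinberg–Verdier result, and the derived-category form is the strengthening named in the title:

```latex
% K-theory form (Gonzalez-Sprinberg--Verdier): with Y \to \mathbb{C}^2/G
% the minimal resolution of the Kleinian singularity,
K(Y) \;\cong\; K_G(\mathbb{C}^2) \;\cong\; R(G).
% Derived-category form, lifting the K-theory isomorphism to an
% equivalence of triangulated categories:
D^b(\mathrm{Coh}\,Y) \;\simeq\; D^b_G(\mathrm{Coh}\,\mathbb{C}^2).
```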
Eliminating stack overflow by abstract interpretation
Manuscript: An important correctness criterion for software running on embedded microcontrollers is stack safety: a guarantee that the call stack does not overflow. Our first contribution is a method for statically guaranteeing stack safety of interrupt-driven embedded software, based on context-sensitive dataflow analysis of object code. We have implemented a prototype stack analysis tool that targets software for Atmel AVR microcontrollers and tested it on embedded applications compiled from up to 30,000 lines of C. We experimentally validate the accuracy of the tool, which runs in under 10 seconds on the largest programs we tested. The second contribution of this paper is the development of two novel ways to reduce the stack memory requirements of embedded software.
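The core idea of a whole-program stack bound can be sketched on a toy call graph. Everything below (function names, frame sizes, the graph itself) is invented for illustration; the actual tool analyzes AVR object code context-sensitively rather than a source-level graph like this:

```python
# Hypothetical sketch: worst-case stack depth over an acyclic call graph.
# Each function contributes its frame size plus the deepest callee chain.

def worst_case_depth(call_graph, frame_size, root):
    """Max stack usage reachable from `root`, assuming no recursion."""
    def depth(f):
        callees = call_graph.get(f, [])
        return frame_size[f] + (max(map(depth, callees)) if callees else 0)
    return depth(root)

# Toy program: main calls f or g; g calls h. Frame sizes in bytes (made up).
graph = {"main": ["f", "g"], "g": ["h"]}
frames = {"main": 8, "f": 32, "g": 4, "h": 16}

main_depth = worst_case_depth(graph, frames, "main")  # 8 + max(32, 4+16) = 40
# An interrupt can fire at the deepest point, so its usage adds on top.
total = main_depth + worst_case_depth({"isr": []}, {"isr": 12}, "isr")
print(main_depth, total)  # 40 52
```

Interrupt-driven code complicates this picture considerably (nested and re-entrant interrupts), which is what makes the context-sensitive analysis in the paper necessary.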
Evolving real-time systems using hierarchical scheduling and concurrency analysis
Journal article: We have developed a new way to look at real-time and embedded software: as a collection of execution environments created by a hierarchy of schedulers. Common schedulers include those that run interrupts, bottom-half handlers, threads, and events. We have created algorithms for deriving response times, scheduling overheads, and blocking terms for tasks in systems containing multiple execution environments. We have also created task scheduler logic, a formalism that permits checking systems for race conditions and other errors. Concurrency analysis of low-level software is challenging because there are typically several kinds of locks, such as thread mutexes and disabling interrupts, and groups of cooperating tasks may need to acquire some, all, or none of the available types of locks to create correct software. Our high-level goal is to create systems that are evolvable: they are easier to modify in response to changing requirements than are systems created using traditional techniques. We have applied our approach to two case studies in evolving software for networked sensor nodes.
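Response-time derivation of the kind mentioned above is, in its classical single-scheduler form, a fixed-point computation. The sketch below is that textbook recurrence (WCET plus blocking plus higher-priority preemption), not the paper's multi-environment algorithm, and the task parameters are invented:

```python
import math

def response_time(C, B, higher):
    """Smallest fixed point of R = C + B + sum(ceil(R/Tj) * Cj) over
    higher-priority tasks (Cj, Tj): execution time plus blocking plus
    preemption by every higher-priority release within the window R."""
    R = C + B
    while True:
        nxt = C + B + sum(math.ceil(R / Tj) * Cj for (Cj, Tj) in higher)
        if nxt == R:
            return R
        R = nxt

# Toy task: WCET C=2, blocking B=1, preempted by (C=1, T=5) and (C=2, T=10).
print(response_time(2, 1, [(1, 5), (2, 10)]))  # 7
```

The paper's contribution generalizes this style of analysis across a hierarchy of heterogeneous schedulers, where the "higher-priority" set depends on which execution environment a task runs in.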
Lock inference for systems software
Journal article: We have developed task scheduler logic (TSL) to automate reasoning about scheduling and concurrency in systems software. TSL can detect race conditions and other errors, as well as support lock inference: the derivation of an appropriate lock implementation for each critical section in a system. Lock inference solves a number of problems in creating flexible, reliable, and efficient systems software. TSL is based on a notion of asymmetrical preemption relations, and it exploits the hierarchical inheritance of scheduling properties that is common in systems software.
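The role of asymmetric preemption in lock choice can be illustrated with a toy: given which of two tasks can preempt the other, pick a lock discipline for their shared critical section. Everything below (the function, the relation, the lock names) is invented for illustration and is not TSL itself:

```python
# Hypothetical sketch of lock inference from a preemption relation.
# `preempts` is a set of (x, y) pairs meaning "x can preempt y".

def infer_lock(a, b, preempts):
    """Choose a lock for a critical section shared by tasks a and b."""
    a_over_b = (a, b) in preempts
    b_over_a = (b, a) in preempts
    if a_over_b and b_over_a:
        return "mutex"         # symmetric: peers under the same scheduler
    if a_over_b:
        return f"disable {a}"  # asymmetric: victim must mask the preemptor
    if b_over_a:
        return f"disable {b}"
    return "none"              # neither can preempt the other

rel = {("isr", "thread")}  # interrupts preempt threads, never the reverse
print(infer_lock("isr", "thread", rel))                    # disable isr
print(infer_lock("t1", "t2", {("t1", "t2"), ("t2", "t1")}))  # mutex
```

This captures why a thread sharing data with an interrupt handler must disable interrupts rather than take a mutex, while two threads can use an ordinary mutex; TSL's hierarchical relations extend this reasoning to groups of cooperating tasks.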
Static and dynamic structure in design patterns
Technical report: Design patterns are a valuable mechanism for emphasizing structure, capturing design expertise, and facilitating restructuring of software systems. Patterns are typically applied in the context of an object-oriented language and are implemented so that the pattern participants correspond to object instances that are created and connected at run-time. This paper describes a complementary realization of design patterns, in which the pattern participants are statically instantiated and connected components. Our approach separates the static parts of the software design from the dynamic parts of the system behavior. This separation makes the software design more amenable to analysis, enabling more effective, domain-specific detection of system design errors, prediction of run-time behavior, and optimization. The technique is applicable to imperative, functional, and object-oriented languages: we have extended C, Scheme, and Java with our component model. In this paper, we illustrate the approach in the context of the OSKit, a collection of operating system components written in C.
Lung clearance index in adults and children with cystic fibrosis
Background: Lung clearance index (LCI) has good clinimetric properties and an acceptable feasibility profile as a surrogate endpoint in cystic fibrosis (CF). Although most studies to date have been in children, increasing numbers of adults with CF also have normal spirometry, so further study of LCI as an endpoint in adults is required. The purpose of this study was therefore to determine the clinimetric properties of LCI over the complete age range of people with CF.
Methods: Clinically stable adults and children with CF, and age-matched healthy controls, were recruited.
Results: LCI and spirometry data for 110 CF subjects and 61 controls were collected at a stable visit. The CF Questionnaire-Revised (CFQ-R) was completed by 80/110 CF subjects. Fifty-six CF subjects completed a second stable visit. The LCI CV% was 4.1% in adults and 6.3% in children with CF. The coefficient of repeatability of LCI was 1.2 in adults and 1.3 in children. In both adults and children, LCI (AUCROC = 0.93 and 0.84) had greater combined sensitivity and specificity to discriminate between people with CF and controls than FEV1 (AUCROC = 0.88 and 0.60) and FEF25-75 (AUCROC = 0.87 and 0.68). LCI correlated significantly with CFQ-R treatment burden in adults (r = -0.37; p < 0.01) and children (r = -0.50; p < 0.01). Washout tests were successful in 90% of CF subjects and were perceived as comfortable and easy to perform by both adults and children.
Conclusions: These data support the use of LCI as a surrogate outcome measure in CF clinical trials in adults as well as children.
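The two repeatability statistics quoted above are standard measures: a coefficient of variation expressed as a percentage, and the Bland-Altman coefficient of repeatability (1.96 times the standard deviation of paired differences). The sketch below shows the arithmetic on invented LCI values, not the study's data:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation as a percentage: 100 * SD / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def coefficient_of_repeatability(visit1, visit2):
    """Bland-Altman repeatability: 1.96 * SD of visit-to-visit differences."""
    diffs = [a - b for a, b in zip(visit1, visit2)]
    return 1.96 * statistics.stdev(diffs)

# Invented LCI readings for five subjects at two stable visits.
lci_v1 = [7.1, 8.4, 6.9, 9.2, 7.8]
lci_v2 = [7.3, 8.1, 7.0, 9.6, 7.7]
print(round(cv_percent(lci_v1), 1))
print(round(coefficient_of_repeatability(lci_v1, lci_v2), 2))
```

A coefficient of repeatability of 1.2-1.3 LCI units means that a change larger than that between two stable visits is unlikely to be measurement noise, which is what makes LCI usable as a trial endpoint.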
The X factor: reflections on containment
The X factor (Oxforddictionaries.com, n.d.) noun 1. a noteworthy special talent or quality: 'There are plenty of luxury cars around, but the S-Type has that special X factor.' 2. a variable in a given situation that could have the most significant impact on the outcome: 'The young vote may turn out to be the X factor.' Around a year ago I began an academic learning journey when I undertook to study for an MSc in residential child care. My initial goal in undertaking this was to be more effective in improving outcomes for the young people I worked with. Up until this point in my career, my practice development and understanding of my role as a residential worker had been shaped by a combination of observing experienced practitioners, short training courses, and trial and error. I observed individuals whose practice resonated with my own thoughts on positive practice, and I tried to emulate them. Throughout my six years in residential care I have been lucky enough to work alongside a small number of individuals who seemed to have a magical 'X factor' within their practice, radiating calm and understanding while at the same time remaining professional.
Defining interfaces between hardware and software: Quality and performance
One of the most important interfaces in a computer system is the interface between hardware and software. This interface is the contract between the hardware designer and the programmer that defines the functional behaviour of the hardware. This thesis examines two critical aspects of defining the hardware-software interface: quality and performance.
The first aspect is creating a high quality specification of the interface as conventionally defined in an instruction set architecture. The majority of this thesis is concerned with creating a specification that covers the full scope of the interface; that is applicable to all current implementations of the architecture; and that can be trusted to accurately describe the behaviour of implementations of the architecture. We describe the development of a formal specification of the two major types of Arm processors: A-class (for mobile devices such as phones and tablets) and M-class (for micro-controllers). These specifications are unparalleled in their scope, applicability and trustworthiness. This thesis identifies and illustrates what we consider the key ingredient in achieving this goal: creating a specification that is used by many different user groups. Supporting many different groups leads to improved quality as each group finds different problems in the specification; and, by providing value to each different group, it helps justify the considerable effort required to create a high quality specification of a major processor architecture. The work described in this thesis led to a step change in Arm's ability to use formal verification techniques to detect errors in their processors; enabled extensive testing of the specification against Arm's official architecture conformance suite; improved the quality of Arm's architecture conformance suite based on measuring the architectural coverage of the tests; supported earlier, faster development of architecture extensions by enabling animation of changes as they are being made; and enabled early detection of problems created from architecture extensions by performing formal validation of the specification against semi-structured natural language specifications. As far as we are aware, no other mainstream processor architecture has this capability. 
The formal specifications are included in Arm's publicly released architecture reference manuals and the A-class specification is also released in machine-readable form.
The second aspect is creating a high performance interface by defining the hardware-software interface of a software-defined radio subsystem using a programming language. That is, an interface that allows software to exploit the potential performance of the underlying hardware. While the hardware-software interface is normally defined in terms of machine code, peripheral control registers and memory maps, we define it using a programming language instead. This higher level interface provides the opportunity for compilers to hide some of the low-level differences between different systems from the programmer: a potentially very efficient way of providing a stable, portable interface without having to add hardware to provide portability between different hardware platforms. We describe the design and implementation of a set of extensions to the C programming language to support programming high-performance, energy-efficient, software-defined radio systems. The language extensions enable the programmer to exploit the pipeline parallelism typically present in digital signal processing applications and to make efficient use of the asymmetric multiprocessor systems designed to support such applications. The extensions consist primarily of annotations that can be checked for consistency and that support annotation inference in order to reduce the number of annotations required. Reducing the number of annotations does not just save programmer effort, it also improves portability by reducing the number of annotations that need to be changed when porting an application from one platform to another. This work formed part of a project that developed a high-performance, energy-efficient, software-defined radio capable of implementing the physical layers of the 4G cellphone standard (LTE), 802.11a WiFi and Digital Video Broadcast (DVB) with a power and silicon area budget that was competitive with a conventional custom ASIC solution.
The Arm architecture is the largest computer architecture by volume in the world; it behooves us to ensure that the interface it defines is appropriately specified.
A precise semantics for ultraloose specifications
All formal specifiers face the danger of overspecification: accidentally writing an overly restrictive specification. This problem is particularly acute for axiomatic specifications because it is so easy to write axioms which hold for some of the intended implementations but not for all of them (or, rather, it is so hard not to write overly strong axioms). One of the best developed ways of recovering some of those implementations which do not literally satisfy the specification is to apply a "behavioural abstraction operator" to a specification: adding in those implementations which have the same "behaviour" as an implementation which does satisfy the specification. In two recent papers, Wirsing and Broy propose an alternative (and apparently simpler) approach which they call "ultraloose specification." This approach is based on a particular style of writing axioms which avoids certain forms of overspecification. An important, unanswered question is "How does the ultraloose approach relate to the other solutions?" The major achievement of this thesis is a proof that the ultraloose approach is semantically equivalent to the use of the "behavioural abstraction operator." This result is rather surprising in the light of a result by Schoett which seems to say that such a result is impossible.
