Unwrapping the Comfort of Sameness With Spanish Immersion Elementary School
I watched my 6-year-old hover around the periphery of the table, unable to find somewhere to sit. The cafeteria was a cacophony of little voices, Spanish and English, tumbling over each other, her classmates sitting close and waiting to be dismissed to homeroom.
I couldn’t help but notice how different Noelle looked from most of the children, with her liquid blond hair and saucerlike blue eyes. [excerpt]
Debugging real-time software in a host-target environment
A common paradigm for the development of process-control or embedded computer software is to do most of the implementation and testing on a large host computer, then retarget the code for final checkout and production execution on the target machine. The host machine is usually large and provides a variety of program development tools, while the target may be a small, bare machine. A difficulty with the paradigm arises when the software developed has real-time constraints and is composed of multiple communicating processes. If a test execution on the target fails, it may be exceptionally tedious to determine the cause of the failure. Host machine debuggers cannot normally be applied, because the same program processing the same data will frequently exhibit different behavior on the host. Differences in processor speed, scheduling algorithm, and the like account for the disparity. This paper proposes a partial solution to this problem, in which the errant execution is reconstructed and made amenable to source-language-level debugging on the host. The solution involves the integrated application of a static concurrency analyzer, an interactive interpreter, and a graphical program visualization aid. Though generally applicable, the solution is described here in the context of multi-tasked real-time Ada programs.
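To make the reconstruction idea concrete, here is a minimal sketch of trace-and-replay debugging, written as hypothetical Python rather than the paper's Ada toolset: the order of inter-process events is logged on the target, and a replay scheduler forces the same interleaving on the host so a host debugger can examine the failing run. The class and task names are invented for illustration.

```python
# Hypothetical sketch (not the paper's toolset): record the order of
# inter-process events on the target, then replay that order on the host so a
# host-level debugger sees the same interleaving that failed on the target.
from collections import deque

class ReplayScheduler:
    """Drives tasks in the exact event order captured from the target run."""

    def __init__(self, trace):
        # trace: sequence of (task_name, event) tuples logged on the target
        self.trace = deque(trace)

    def run(self, tasks):
        # tasks: mapping of task_name -> generator that handles one event per resume
        while self.trace:
            task_name, event = self.trace.popleft()
            try:
                tasks[task_name].send(event)  # force the recorded interleaving
            except StopIteration:
                pass

def producer():
    while True:
        event = yield
        print("producer handles", event)

def consumer():
    while True:
        event = yield
        print("consumer handles", event)

if __name__ == "__main__":
    tasks = {"producer": producer(), "consumer": consumer()}
    for t in tasks.values():
        next(t)  # prime the generators
    # A trace like this would come from instrumentation on the target machine.
    trace = [("producer", "put"), ("consumer", "get"), ("producer", "put")]
    ReplayScheduler(trace).run(tasks)
```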
Jesus Lives, but Should He Live in My Front Yard?
As I drove home from church, I eyed the bright foam sign my 6-year-old daughter held. “Jesus is Alive” it read in kid scrawl. “We’re supposed to put them in our yards!” Noelle beamed, eyeing her creation proudly through pink-rimmed glasses.
I imagined our wide, open yard in Pennsylvania, the green grass stretching without fences from one neighbor to the next. Our best friends in the neighborhood, secular humanists, would easily see it. I cringed. What would they think? [excerpt]
An analysis of internal/external event ordering strategies for COTS distributed simulation
Distributed simulation is a technique that is used to link together several models so that they can work together (or interoperate) as a single model. The High Level Architecture (HLA) (IEEE 1516.2000) is the de facto standard that defines the technology for this interoperation. The creation of a distributed simulation of models developed in COTS Simulation Packages (CSPs) is of interest. The motivation is to attempt to reduce lead times of simulation projects by reusing models that have already been developed. This paper discusses one of the issues involved in distributed simulation with CSPs: synchronising data sent between models with the simulation of a model by a CSP, the so-called external/internal event ordering problem. This matters because the particular ordering algorithm employed can impose a significant overhead on performance.
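As a rough illustration of the external/internal event ordering problem, the sketch below (hypothetical Python, not part of the HLA or any CSP interface) merges events generated inside the local model with events received from other models and processes them in non-decreasing timestamp order; the function and event names are invented for the example.

```python
# Hypothetical sketch of the external/internal event ordering idea: events
# generated inside the local model (internal) and events received from other
# models (external) are merged and processed in timestamp order.
import heapq

def merged_event_loop(internal_events, external_events, process_event):
    """internal_events / external_events: iterables of (timestamp, payload)."""
    queue = []
    for ts, payload in internal_events:
        heapq.heappush(queue, (ts, "internal", payload))
    for ts, payload in external_events:
        heapq.heappush(queue, (ts, "external", payload))
    while queue:
        ts, source, payload = heapq.heappop(queue)
        process_event(ts, source, payload)

if __name__ == "__main__":
    internal = [(1.0, "machine A finishes job"), (3.0, "machine A breakdown")]
    external = [(2.0, "entity arrives from remote model")]
    merged_event_loop(internal, external,
                      lambda ts, src, p: print(f"t={ts:4.1f} [{src}] {p}"))
```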
Microfossils with possible affinities to the zygomycetous fungi in a Carboniferous cordaitalean ovule
Steps to an advanced Ada programming environment
Conceptual simplicity, tight coupling of tools, and effective support of host-target software development will characterize advanced Ada programming support environments. Several important principles have been demonstrated in the Arcturus system, including template-assisted Ada editing, command completion using Ada as a command language, and combining the advantages of interpretation and compilation. Other principles, relating to analysis, testing, and debugging of concurrent Ada programs, have appeared in other contexts. This paper discusses several of these topics, considers how they can be integrated, and argues for their inclusion in an environment appropriate for software development in the late 1980s.
The use of sequencing information in software specification for verification
Software requirements specifications, virtual machine definitions, and algorithmic design all place constraints on the sequence of operations that are permissible during a program's execution. This paper discusses how these constraints can be captured and used to aid in the program verification process. The sequencing constraints can be expressed as a grammar over the alphabet of program operations. Several techniques can be used in support of testing or verification based on these specifications. Dynamic analysis and static analysis are considered here. The automatic generation of some of these aids is feasible; the means of doing so is described.
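As a small, hypothetical illustration of the approach (not the paper's notation), the sketch below encodes a sequencing constraint as a regular expression over an alphabet of operation symbols and checks an observed execution trace against it, in the spirit of the dynamic-analysis support mentioned above. The operations and the constraint itself are made up for the example.

```python
# Hypothetical illustration of sequencing constraints as a grammar over an
# alphabet of program operations: each operation is mapped to a single letter
# and the permitted sequences are described by a regular expression.
import re

# Alphabet: o = open, r = read, w = write, c = close
OP_TO_SYMBOL = {"open": "o", "read": "r", "write": "w", "close": "c"}

# Constraint: every session is an open, then any mix of reads/writes, then a close.
CONSTRAINT = re.compile(r"(o[rw]*c)*")

def trace_satisfies_constraint(trace):
    """Dynamic-analysis style check: does an observed operation trace conform?"""
    word = "".join(OP_TO_SYMBOL[op] for op in trace)
    return CONSTRAINT.fullmatch(word) is not None

if __name__ == "__main__":
    good = ["open", "read", "write", "close"]
    bad = ["read", "open", "close"]          # read before open violates the grammar
    print(trace_satisfies_constraint(good))  # True
    print(trace_satisfies_constraint(bad))   # False
```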
On the Moduli Space of SU(3) Seiberg-Witten Theory with Matter
We present a qualitative model of the Coulomb branch of the moduli space of low-energy effective N=2 SQCD with gauge group SU(3) and up to five flavours of massive matter. Overall, away from double cores, we find a situation broadly similar to the case with no matter, but with additional complexity due to the proliferation of extra BPS states. We also include a revised version of the pure SU(3) model which can accommodate just the orthodox weak coupling spectrum.
Comment: 32 pages, 25 figures, uses JHEP.cls, added references, deleted joke
Speeding-up the execution of credit risk simulations using desktop grid computing: A case study
This paper describes a case study that was
undertaken at a leading European Investment
bank in which desktop grid computing was used
to speed-up the execution of Monte Carlo credit risk simulations. The credit risk simulations were modelled using commercial-off-the-shelf simulation packages (CSPs). The CSPs did not incorporate built-in support for desktop grids, and therefore the authors implemented a middleware for desktop grid computing, called WinGrid, and interfaced it with the CSP. The performance results show that WinGrid can speed-up the execution of CSP-based Monte Carlo simulations. However, since WinGrid was installed on non-dedicated PCs, the speed-up
achieved varied according to users’ PC usage.
Finally, the paper presents some lessons learnt from this case study. It is expected that this paper will encourage simulation practitioners and CSP vendors to experiment with desktop grid computing technologies with the objective of speeding-up simulation experimentation
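The general master-worker pattern behind such speed-ups can be sketched as follows. This is plain Python multiprocessing on a single machine, purely for illustration, and is not WinGrid or its CSP interface: Monte Carlo trials are split into batches, farmed out to workers, and the partial results are combined.

```python
# Minimal master-worker sketch of the general idea (hypothetical, single machine):
# Monte Carlo trials are divided into batches, each batch runs in a separate
# worker process, and the partial counts are combined by the master.
import random
from multiprocessing import Pool

def run_batch(args):
    """One worker's batch: count simulated defaults for a toy credit portfolio."""
    n_trials, default_prob, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n_trials) if rng.random() < default_prob)

def estimate_default_rate(total_trials, default_prob, n_workers=4):
    batch = total_trials // n_workers
    jobs = [(batch, default_prob, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        defaults = sum(pool.map(run_batch, jobs))
    return defaults / (batch * n_workers)

if __name__ == "__main__":
    print(estimate_default_rate(1_000_000, 0.02))  # ~0.02, with sampling noise
```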
