Speeding-up the execution of credit risk simulations using desktop grid computing: A case study
This paper describes a case study, undertaken at a leading European investment bank, in which desktop grid computing was used to speed up the execution of Monte Carlo credit risk simulations. The credit risk simulations were modelled using commercial-off-the-shelf simulation packages (CSPs). The CSPs did not incorporate built-in support for desktop grids, so the authors implemented a middleware for desktop grid computing, called WinGrid, and interfaced it with the CSP. The performance results show that WinGrid can speed up the execution of CSP-based Monte Carlo simulations. However, since WinGrid was installed on non-dedicated PCs, the speed-up achieved varied according to users' PC usage. Finally, the paper presents some lessons learnt from this case study. It is expected that this paper will encourage simulation practitioners and CSP vendors to experiment with desktop grid computing technologies with the objective of speeding up simulation experimentation.
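The abstract does not detail WinGrid's interface, but the master-worker pattern it implies (independent Monte Carlo replications farmed out to worker PCs, with results gathered by a master) can be sketched as below. This is a minimal illustration under assumed names: the toy portfolio-loss model and `run_replication` are inventions for the sketch, not WinGrid's API.

```python
# Illustrative master-worker farming of Monte Carlo replications.
# The loss model and all names are assumptions for the sketch;
# WinGrid's real interface to the CSP is not shown in the paper.
import random
from multiprocessing import Pool


def run_replication(seed: int, n_obligors: int = 1000, pd: float = 0.02) -> float:
    """One independent replication: simulate a toy portfolio loss."""
    rng = random.Random(seed)
    loss = 0.0
    for _ in range(n_obligors):
        if rng.random() < pd:       # obligor defaults with probability pd
            loss += rng.random()    # assumed uniform exposure on default
    return loss


if __name__ == "__main__":
    seeds = list(range(100))        # 100 independent replications
    with Pool() as pool:            # stands in for the grid's worker PCs
        losses = pool.map(run_replication, seeds)
    print(f"mean simulated portfolio loss: {sum(losses) / len(losses):.2f}")
```

Because the replications share nothing but their seeds, farming them out this way needs no coordination beyond collecting results, which is what makes the workload a natural fit for non-dedicated desktop PCs.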
Using a desktop grid to support simulation modelling
Simulation is characterized by the need to run multiple sets of computationally intensive experiments. We argue that Grid computing can reduce the overall execution time of such experiments by tapping into the typically underutilized network of departmental desktop PCs, collectively known as desktop grids. Commercial-off-the-shelf simulation packages (CSPs) are used in industry to simulate models. To investigate whether Grid computing can benefit simulation, this paper introduces our desktop grid, WinGrid, and discusses how this can be used to support the processing needs of CSPs. Results indicate a linear speed-up and that Grid computing does indeed hold promise for simulation.
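A linear speed-up means the run time with n workers is roughly the one-worker time divided by n. A minimal sketch of the bookkeeping, with timings invented purely for illustration:

```python
# Speed-up S(n) = T(1) / T(n); parallel efficiency E(n) = S(n) / n.
# The timings below are hypothetical, only to show the calculation.
def speedup(t1: float, tn: float) -> float:
    return t1 / tn


timings = {1: 3600.0, 4: 930.0, 8: 480.0}  # workers -> seconds (invented)
for n, tn in timings.items():
    s = speedup(timings[1], tn)
    print(f"{n} workers: speed-up {s:.2f}, efficiency {s / n:.2f}")
```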
Comparing conventional and distributed approaches to simulation in complex supply-chain health systems
Decision making in modern supply chains can be extremely daunting due to their complex nature. Discrete-event simulation is a technique that can support decision making by providing what-if analysis and evaluation of quantitative data. However, modelling supply chain systems can result in very large and complicated models that can take a very long time to run, even with today's powerful desktop computers. Distributed simulation has been suggested as a possible solution to this problem, by enabling the use of multiple computers to run models. To investigate this claim, this paper presents experiences in implementing a simulation model with a 'conventional' approach and with a distributed approach. This study takes place in a healthcare setting: the supply chain of blood from donor to recipient. The study compares conventional and distributed execution times of a supply chain model simulated in the simulation package Simul8. The results show that the execution time of the conventional approach increases almost linearly with the size of the system and also with the simulation run period. The execution time of the distributed approach, however, scales more favourably with system size and run period, and appears to offer a practical alternative. On the basis of this, the paper concludes that distributed simulation can be successfully applied in certain situations.
Modelling very large complex systems using distributed simulation: A pilot study in a healthcare setting
Modern manufacturing supply chains are hugely complex and, like all stochastic systems, can benefit from simulation. Unfortunately, supply chain systems often result in very large and complicated models, which even today's powerful computers cannot run efficiently. This paper presents one possible solution: distributed simulation. This pilot study is implemented in a healthcare setting: the supply chain of blood from donor to recipient.
Methods for microbiological and immunological studies of space flight crews
Systematic laboratory procedures compiled as an outgrowth of a joint U.S./U.S.S.R. microbiological-immunological experiment performed during the Apollo-Soyuz Test Project space flight are presented. Included are mutually compatible methods for the identification of aerobic and microaerophilic bacteria, yeast and yeastlike microorganisms, and filamentous fungi; methods for the bacteriophage typing of Staphylococcus aureus; and methods for determining the sensitivity of S. aureus to antibiotics. Immunological methods using blood and immunological and biochemical methods using salivary parotid fluid are also described. Formulas for media and laboratory reagents used are listed.
Analytical simulation of the Langley Research Center integrated life-support system, volume 1
Analytical simulation of an integrated life-support system and oxygen recovery system.
Developing a grid computing system for commercial-off-the-shelf simulation packages
Today simulation is becoming an increasingly pervasive technology across major business sectors. Advances in COTS simulation packages and commercial simulation software have made it easier for users to build models, often of large, complex processes. These two factors combined are to be welcomed and, when used correctly, can be of great benefit to organisations that make use of the technology. However, it is also the case that users hungry for answers do not always have the time, or possibly the patience, to wait for results from multiple replications and multiple experiments as standard simulation practice would demand. There is therefore a need to support this advance in the use of simulation within today's business with improved computing technology. Grid computing has been put forward as a potential commercial solution to this requirement. To this end, Saker Solutions and the Distributed Systems Research Group at Brunel University have developed a dedicated grid computing system (SakerGrid) to support the deployment of simulation models across a desktop grid of PCs. The paper identifies the route taken to solve this challenging issue and suggests where the future may lie for this exciting integration of two effective but underused technologies.
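SakerGrid's internals are not described in this excerpt, so the following is only a sketch of the experiment-farming pattern such a system suggests: a master queues experiment configurations, and idle desktop PCs pull and run them. All names and the job format are assumptions.

```python
# Hypothetical sketch of experiment farming over a desktop grid:
# a master fills a queue with experiment configurations, and each
# worker PC repeatedly pulls a job, runs it, and reports the result.
import queue
import threading

jobs: queue.Queue = queue.Queue()
results: list[tuple[str, float]] = []


def run_experiment(config: dict) -> float:
    """Stand-in for handing a model plus parameters to the CSP."""
    return sum(config["params"])  # placeholder for an actual simulation run


def worker(name: str) -> None:
    while True:
        try:
            config = jobs.get_nowait()
        except queue.Empty:
            return  # no jobs left; the PC returns to its user
        results.append((name, run_experiment(config)))
        jobs.task_done()


for i in range(6):  # six experiments, e.g. different parameter sets
    jobs.put({"experiment": i, "params": [i, i + 1]})

threads = [threading.Thread(target=worker, args=(f"pc{n}",)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```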
A critical comparison of approaches to resource name management within the IEC common information model
Electricity network resources are frequently identified within different power systems by inhomogeneous names and identities, owing to the legacy of their administration by different utility business domains. The IEC 61970 Common Information Model (CIM) enables network modeling to reflect the reality of multiple names for unique network resources. However, this issue presents a serious challenge to the integrity of a shared CIM repository that has the task of maintaining a resource manifest, linking network resources to master identities, when unique network resources may have multiple names and identities derived from different power system models and other power system applications. The current approach, using CIM 15, is to manage multiple resource names within a single CIM namespace utilizing the CIM "IdentifiedObject" and "Name" classes. We compare this approach to one using additional namespaces relating to different power systems, similar to the practice used in CIM extensions, in order to more clearly identify the genealogy of a network resource, provide faster model import times, and offer a simpler means of supporting the relationship between multiple resource names and identities and a master resource identity. This study is supported by the UK National Grid and Brunel University.
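To make the naming problem concrete, the sketch below models a resource manifest in the style the abstract describes: one master resource identity mapping to multiple (namespace, name) aliases. The classes are loose analogues of the CIM IdentifiedObject/Name pattern, not a faithful CIM implementation, and all identifiers are invented.

```python
# Hedged sketch of a resource manifest: one master identity, many
# (namespace, name) aliases, loosely analogous to the CIM
# IdentifiedObject/Name pattern described in the abstract.
from dataclasses import dataclass, field


@dataclass
class ResourceManifest:
    # master resource ID -> {namespace: local name}
    aliases: dict[str, dict[str, str]] = field(default_factory=dict)
    # (namespace, name) -> master resource ID, for fast lookup on import
    index: dict[tuple[str, str], str] = field(default_factory=dict)

    def add_name(self, master_id: str, namespace: str, name: str) -> None:
        self.aliases.setdefault(master_id, {})[namespace] = name
        self.index[(namespace, name)] = master_id

    def resolve(self, namespace: str, name: str) -> str | None:
        """Find the master identity behind a model-specific name."""
        return self.index.get((namespace, name))


m = ResourceManifest()
m.add_name("mrid-001", "TSO-A", "BUSBAR_47")   # hypothetical names
m.add_name("mrid-001", "TSO-B", "SUB1.BB47")
print(m.resolve("TSO-B", "SUB1.BB47"))  # -> "mrid-001"
```

Keeping a per-namespace alias table like this is one way the multi-namespace approach can speed up import: each incoming model only needs lookups within its own namespace rather than a search across every name the repository knows.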
Selection of neutralizing antibody escape mutants with type A influenza virus HA-specific polyclonal antisera: possible significance for antigenic drift
Ten antisera were produced in rabbits by two or three intravenous injections of inactivated whole influenza type A virions. All contained haemagglutination-inhibition (HI) antibody directed predominantly to an epitope in antigenic site B and, in addition, various amounts of antibodies to an epitope in site A and in site D. The ability of untreated antisera to select neutralization escape mutants was investigated by incubating virus possessing the homologous haemagglutinin with antiserum adjusted to contain anti-B epitope HI titres of 100, 1000 and 10000 HIU/ml. Virus-antiserum mixtures were inoculated into embryonated hen's eggs, and progeny virus examined without further selection. Forty percent of the antisera at a titre of 1000 HIU/ml selected neutralizing antibody escape mutants, as defined by their lack of reactivity to Mab HC10 (site B) and unchanged reactivity to other Mabs to site A and site D epitopes. All escape mutant-selecting antisera had a ratio of anti-site B (HC10)-epitope antibody to other antibodies of ≥2·0:1. The antiserum with the highest ratio (7·4:1) selected escape mutants in all eggs tested in four different experiments. No antiserum used at a titre of 10000 HIU/ml allowed multiplication of any virus. All antisera used at a titre of 100 HIU/ml permitted virus growth, but this was wild-type (wt) virus. We conclude that a predominant epitope-specific antibody response, a titre of ≥1000 HIU/ml, and a low absolute titre of other antibodies (≤500 HIU/ml) are three requirements for the selection of escape mutants. None of the antisera in this study could have selected escape mutants without an appropriate dilution factor, so the occurrence of an escape mutant-selecting antiserum in nature is likely to be a rare event.
A Web Services Component Discovery and Deployment Architecture for Simulation Model Reuse
CSPs are widely used in industry, although they have yet to operate across organizational boundaries. Reuse across organizations is restricted by the same semantic issues that restrict the inter-organization use of web services. The current representations of web components are predominantly syntactic in nature, lacking the fundamental semantic underpinning required to support discovery on the emerging semantic web. Semantic models, in the form of an ontology, utilized by a web service discovery and deployment architecture, provide one approach to supporting simulation model reuse. Semantic interoperation is achieved through the use of a simulation component ontology to identify required components at varying levels of granularity (including both abstract and specialized components). Selected simulation components are loaded into a CSP, modified according to the requirements of the new model, and executed. The paper presents the development, carried out within the CSPI-PDG and the Fluidity Group at Brunel University, of an ontology, connector software and a web service discovery architecture. The ontology is extracted from simulation scenarios involving airport, restaurant and kitchen service suppliers. The ontology engineering framework and discovery architecture provide a novel approach to inter-organization simulation, adopting a less intrusive interface between participants. Although specific to CSPs, the work has wider implications for the simulation community.
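As a rough illustration of ontology-driven discovery (the abstract's actual ontology and architecture are not reproduced here, so the class hierarchy and matching rule below are invented for the sketch), a query for an abstract component class could return components of that class or of any specialization of it:

```python
# Invented mini-ontology for the sketch: component classes form an
# is-a hierarchy, and discovery matches a requested class against
# components of that class or any of its specializations.
SUBCLASS_OF = {                      # child -> parent (hypothetical classes)
    "CheckInDesk": "ServicePoint",
    "KitchenStation": "ServicePoint",
    "ServicePoint": "SimulationComponent",
}

COMPONENTS = [                       # hypothetical registered components
    {"name": "AirportCheckIn", "cls": "CheckInDesk"},
    {"name": "RestaurantKitchen", "cls": "KitchenStation"},
    {"name": "GenericQueue", "cls": "SimulationComponent"},
]


def is_a(cls: str | None, target: str) -> bool:
    """Walk up the hierarchy: does cls specialize (or equal) target?"""
    while cls is not None:
        if cls == target:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False


def discover(target: str) -> list[str]:
    return [c["name"] for c in COMPONENTS if is_a(c["cls"], target)]


print(discover("ServicePoint"))  # -> ['AirportCheckIn', 'RestaurantKitchen']
```

Matching against the hierarchy rather than exact class names is what lets a modeller ask for an abstract component (a "service point") and discover specialized components contributed by airport, restaurant or kitchen scenarios.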
