1,423 research outputs found
Quality criteria for multimedia
The term ‘quality’, as used by workers in the multimedia field, has become devalued: almost every package is promoted by its developers as being of the ‘highest quality’. This paper draws on practical experience from a number of major projects to argue, from a quality‐assurance position, that multimedia materials should meet pre‐defined criteria relating to their objectives, content and incidence of errors.
Formal verification of AI software
The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.
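The consistency notion described for declarative rule bases can be illustrated with a toy check: no input may fire two rules that assign contradictory conclusions. Everything below (the rules, attributes and input domain) is invented for illustration and is not from the paper.

```python
# Toy consistency check for a declarative rule base. All rules,
# attributes and the input domain are invented for illustration.

# Each rule: (condition over an input dict, (attribute, concluded value)).
rules = [
    (lambda x: x["temp"] > 100, ("valve", "open")),
    (lambda x: x["pressure"] > 5, ("valve", "open")),
    (lambda x: x["temp"] <= 100 and x["pressure"] <= 5, ("valve", "closed")),
]

def consistent(rules, domain):
    """Return True if no input in the domain fires two rules that
    assign different values to the same attribute."""
    for x in domain:
        fired = {}
        for cond, (attr, val) in rules:
            if cond(x):
                if attr in fired and fired[attr] != val:
                    return False  # contradictory conclusions for x
                fired[attr] = val
    return True

# exhaustively enumerate a small discrete input domain
domain = [{"temp": t, "pressure": p} for t in (90, 110) for p in (4, 6)]
print(consistent(rules, domain))  # True
```

For realistic expert systems the domain is too large to enumerate, which is where the paper's formal specification paradigms come in.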
Reasoning about the Reliability of Diverse Two-Channel Systems in which One Channel is "Possibly Perfect"
This paper considers the problem of reasoning about the reliability of fault-tolerant systems with two "channels" (i.e., components) of which one, A, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of "perfection." We begin with the case where either channel can bring the system to a safe state. We show that, conditional upon knowing pA (the probability that A fails on a randomly selected demand) and pB (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply pA·pB. That is, there is conditional independence between the events "A fails" and "B is imperfect." The second step of the reasoning involves epistemic uncertainty about (pA, pB), and we show that, under quite plausible assumptions, a conservative bound on the system's probability of failure on demand (pfd) can be constructed from point estimates for just three parameters. We discuss the feasibility of establishing credible estimates for these parameters. We extend our analysis from faults of omission to those of commission, and then combine these to yield an analysis for monitored architectures of a kind proposed for aircraft.
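The aleatory-level result above — conditional independence of "A fails" and "B is imperfect" — makes the conservative bound a simple product. A minimal sketch, with illustrative numbers rather than the paper's estimates:

```python
def system_pfd_bound(p_a: float, p_b: float) -> float:
    """Conservative bound on the probability that the two-channel
    system fails on a randomly selected demand, given
      p_a: probability that channel A fails on a random demand,
      p_b: probability that channel B is imperfect.
    Conditional on knowing (p_a, p_b), the two events are
    independent, so the bound is simply their product."""
    assert 0.0 <= p_a <= 1.0 and 0.0 <= p_b <= 1.0
    return p_a * p_b

# e.g. a 10^-3 channel A guarded by a simple channel B assessed as
# imperfect with probability 10^-2:
print(system_pfd_bound(1e-3, 1e-2))  # ≈ 1e-05
```

The epistemic step of the paper, handling uncertainty about (pA, pB) themselves, is not captured by this one-liner.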
Valuing the benefits of a health intervention using three different approaches to contingent valuation: re-treatment of mosquito bed-nets in Nigeria.
OBJECTIVES: To determine the level of willingness to pay (WTP) for re-treatment of mosquito nets and to compare the theoretical validity of WTP estimates from three contingent valuation question formats: the bidding game, binary with follow-up technique, and a novel structured haggling technique that mimicked price-taking behaviour in the study area. METHODS: WTP was elicited from randomly selected respondents from three villages in Southeast Nigeria, using pretested interviewer-administered questionnaires. Respondents' WTP for insecticide-treated nets (ITNs) was first elicited before their WTP for re-treatment of ITNs. Ordinary least-squares regression was used to assess theoretical validity. RESULTS: More than 95% of the respondents were willing to pay for re-treatment. The mean WTP was 37.1 Naira, 43.4 Naira and 49.2 Naira in the bidding game, binary with follow-up and structured haggling groups, respectively (US$1.00 = 120 Naira). The WTP estimates elicited across the three question formats were statistically different (P < 0.01). Ordinary least-squares estimation showed that WTP was positively related to many variables, especially stated WTP for ITNs (P < 0.05). Structured haggling generated the highest number of statistically significant variables to explain WTP. CONCLUSIONS: The three contingent valuation approaches generated different distributions of WTP for net re-treatment, possibly due to their inherent differences. Structured haggling generated the most theoretically valid estimates of WTP. The levels of WTP identified suggest that user fees exceeding 50 Naira per net re-treatment may discourage demand for the service. This is an important challenge for ITN programmes.
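Of the three question formats, the bidding game is the most mechanical to describe: bids are walked up while the respondent accepts and down while they refuse. A minimal sketch, with an assumed starting bid and step size rather than the study's actual protocol:

```python
def bidding_game(accepts, start=40, step=10, floor=0, ceiling=200):
    """Elicit stated WTP (in Naira) from a respondent modelled as a
    yes/no function accepts(bid); return the highest accepted bid,
    or None if every bid down to the floor is refused."""
    bid = start
    if accepts(bid):
        # raise the bid while the respondent keeps saying yes
        while bid + step <= ceiling and accepts(bid + step):
            bid += step
        return bid
    # lower the bid until the respondent says yes
    while bid > floor:
        bid -= step
        if accepts(bid):
            return bid
    return None

# a hypothetical respondent whose true maximum WTP is 55 Naira
print(bidding_game(lambda bid: bid <= 55))  # 50
```

The well-known drawback of this format, and a likely source of the differences reported above, is starting-point bias: the elicited value depends on the initial bid and step size.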
Measuring the effect of opportunity cost of time on participation in sports and exercise
This article has been made available through the Brunel Open Access Publishing Fund. Background: There is limited research on the association between opportunity cost of time and sports and exercise, owing to a lack of data on opportunity cost of time. Using a sample of 14,142 adults from the Health Survey for England (2006), we develop and test a composite index of opportunity cost of time (to address the current data constraints on opportunity cost of time) in order to explore the relationship between opportunity cost of time and sports participation. Methods: Probit regression models are fitted, adjusting for a range of covariates. Opportunity cost of time is measured with two proxy measures: (a) a composite index (consisting of various indicators of wage earnings) constructed using principal component analysis; and (b) education and employment, the approach used in the existing literature. We estimate the relative impact of the composite index, compared with current proxy measures, on prediction of sports participation. Findings: Findings suggest that higher opportunity cost of time is associated with an increased likelihood of sports participation, regardless of the time intensity of the activity or the measure of opportunity cost of time used. The relative impacts of the two proxy measures are comparable. Participation in sports and exercise was found to be positively correlated with income. Another important positive correlate of sports and exercise is participation in voluntary activity. The research and policy implications of our findings are discussed.
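The composite-index construction can be sketched in pure Python: standardize the wage-related indicators, then score each respondent on the first principal component, found here by power iteration on the covariance matrix. The tiny indicator matrix below is made-up demonstration data, not the Health Survey for England.

```python
# Sketch of a one-dimensional composite index via the first principal
# component. Indicator values are invented demonstration data.

def standardize(col):
    """Centre a column and scale it to unit (population) variance."""
    n = len(col)
    mean = sum(col) / n
    sd = (sum((x - mean) ** 2 for x in col) / n) ** 0.5 or 1.0
    return [(x - mean) / sd for x in col]

def first_pc_scores(rows, iters=200):
    """Score each observation (row of indicators) on the first
    principal component of the standardized indicators."""
    cols = [standardize(list(c)) for c in zip(*rows)]
    n, k = len(rows), len(cols)
    # covariance matrix of the standardized columns
    cov = [[sum(cols[i][t] * cols[j][t] for t in range(n)) / n
            for j in range(k)] for i in range(k)]
    # power iteration for the dominant eigenvector
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # project each observation onto that direction
    return [sum(v[j] * cols[j][t] for j in range(k)) for t in range(n)]

# two correlated wage indicators for three respondents
print(first_pc_scores([(1, 10), (2, 20), (3, 30)]))
```

In practice a statistical package would be used, but the idea is the same: the index orders respondents along the direction of greatest shared variation in the wage indicators.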
Physical activity in England: Who is meeting the recommended level of participation through sports and exercise?
This article is available through the Brunel Open Access Publishing Fund. Copyright © 2012 Anokye et al. Background: Little is known about the correlates of meeting recommended levels of participation in physical activity (PA) and how this understanding informs public health policies on behaviour change. Objective: To analyse who meets the recommended level of participation in PA, in males and females separately, by applying ‘process’ modelling frameworks (single vs. sequential 2-step process). Methods: Using the Health Survey for England 2006 (n = 14,142; ≥16 years), gender-specific regression models were estimated using bivariate probit with selectivity correction and single probit models. A ‘sequential, 2-step process’ modelled participation and meeting the recommended level separately, whereas the ‘single process’ considered both participation and level together. Results: In females, meeting the recommended level was associated with holding a degree [marginal effect (ME) = 0.013] and age (ME = −0.001), whereas in males, age was a significant correlate (ME = −0.003 to −0.004). The order of importance of correlates was similar across genders, with ethnicity being the most important correlate in both males (ME = −0.060) and females (ME = −0.133). In females, the ‘sequential, 2-step process’ performed better (ρ = −0.364, P < 0.001) than in males (ρ = 0.154). Conclusion: The degree to which people undertake the recommended level of PA through vigorous activity varies between males and females, and the process that best predicts such decisions, i.e. whether it is a sequential, 2-step process or a single-step choice, also differs between males and females. Understanding this should help to identify subgroups that are less likely to meet the recommended level of PA (and hence more likely to benefit from any PA promotion intervention). This study was funded by the Department of Health’s Policy Research Programme.
Formal verification of a fault tolerant clock synchronization algorithm
A formal specification and mechanically assisted verification of the interactive convergence clock synchronization algorithm of Lamport and Melliar-Smith is described. Several technical flaws in the analysis given by Lamport and Melliar-Smith were discovered, even though their presentation is unusually precise and detailed. It seems that these flaws were not detected by informal peer scrutiny. The flaws are discussed and a revised presentation of the analysis is given that not only corrects the flaws but is also more precise and easier to follow. Some of the corrections to the flaws require slight modifications to the original assumptions underlying the algorithm and to the constraints on its parameters, and thus change the external specifications of the algorithm. The formal analysis of the interactive convergence clock synchronization algorithm was performed using the Enhanced Hierarchical Development Methodology (EHDM) formal specification and verification environment. This application of EHDM provides a demonstration of some of the capabilities of the system.
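The core of the interactive convergence algorithm is easy to sketch: in each round, every clock averages its differences to all clocks, treating any reading farther away than a threshold Δ as zero, which caps the influence a faulty clock can exert on a good one. The clock values and threshold below are invented for illustration; the algorithm's exact assumptions and parameter constraints are the subject of the formal analysis.

```python
# One round of interactive-convergence-style averaging, sketched on
# plain numbers. DELTA and the clock readings are illustrative only.

DELTA = 10.0  # readings further than this from our own are ignored

def converge_round(clocks):
    """Return each clock's adjusted value after one round: every
    clock adds the mean of its differences to all clocks, with any
    difference exceeding DELTA in magnitude replaced by zero."""
    n = len(clocks)
    adjusted = []
    for mine in clocks:
        diffs = []
        for theirs in clocks:
            d = theirs - mine
            diffs.append(d if abs(d) <= DELTA else 0.0)  # egocentric cut-off
        adjusted.append(mine + sum(diffs) / n)
    return adjusted

# three good clocks (spread 4 time units) and one faulty clock far off
print(converge_round([100.0, 102.0, 104.0, 500.0]))
```

After one round the good clocks draw closer together (spread shrinks from 4 to 1 here) while the faulty reading of 500 is ignored by all of them — the convergence property whose proof the paper mechanizes in EHDM.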
Software Verification and System Assurance
Littlewood [1] introduced the idea that software may be possibly perfect and that we can contemplate its probability of (im)perfection. We review this idea and show how it provides a bridge between correctness, which is the goal of software verification (and especially formal verification), and the probabilistic properties such as reliability that are the targets for system-level assurance. We enumerate the hazards to formal verification, consider how each of these may be countered, and propose relative weightings that an assessor may employ in assigning a probability of perfection.
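One simple way an assessor might aggregate enumerated verification hazards — not necessarily the weighting scheme this paper proposes — is a union bound over per-hazard probabilities. The hazard names and numbers below are invented for illustration:

```python
# Illustrative aggregation of verification hazards into a conservative
# probability of imperfection. Hazards and probabilities are invented.

hazards = {
    "flawed formal specification": 1e-3,
    "unsoundness in the verification tool": 1e-4,
    "gap between verified model and deployed code": 1e-3,
}

def prob_imperfection(hazards):
    """Conservative (union) bound on P(verified software is imperfect):
    the software is imperfect only if some hazard undermined the
    verification, so the sum of hazard probabilities bounds it."""
    return min(1.0, sum(hazards.values()))

print(prob_imperfection(hazards))  # ≈ 0.0021
```

A probability of imperfection obtained this way can then feed directly into system-level reliability arguments such as the pA·pB bound for two-channel systems described above.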
