An Assurance Framework for Independent Co-assurance of Safety and Security
Integrated safety and security assurance for complex systems is difficult for
many technical and socio-technical reasons, such as mismatched processes,
inadequate information, and differing uses of language and philosophy. Many
co-assurance techniques rely on disregarding some of these challenges in order
to present a unified methodology. Even with this simplification, no methodology
has been widely adopted, primarily because the approach is unrealistic when met
with the complexity of real-world system development.
This paper presents an alternative approach: a Safety-Security Assurance
Framework (SSAF) based on a core set of assurance principles. This allows
safety and security to be co-assured independently, as opposed to unified
co-assurance, which has been shown to have significant drawbacks. It also
allows for separate processes and expertise from practitioners in each domain.
With this structure, the focus shifts from simplified unification to
integration through exchanging the correct information at the right time using
synchronisation activities.
Detecting Pulsatile Hormone Secretion Events: A Bayesian Approach
Many challenges arise in the analysis of pulsatile, or episodic, hormone concentration time series data. Among these challenges is the determination of the number and location of pulsatile events and the discrimination of events from noise. Analyses of these data are typically performed in two stages. In the first stage, the number and approximate location of the pulses are determined. In the second stage, a model (typically a deconvolution model) is fit to the data conditional on the number of pulses. Any error made in the first stage is carried over to the second stage. Furthermore, all but two current methods assume that the underlying basal concentration is constant. We present a fully Bayesian deconvolution model that simultaneously estimates the number of secretion episodes, their locations, and a non-constant basal concentration. This model obviates the need to determine the number of events a priori. Furthermore, we estimate probabilities for all "candidate" event locations. We demonstrate our method on a real data set.
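The generative structure this abstract describes — a basal concentration plus contributions from discrete secretion episodes — can be illustrated with a simple forward simulation. This is a minimal sketch only, not the authors' deconvolution model: the function name, the single-exponential elimination, and all parameter values are assumptions made here for illustration.

```python
import numpy as np

def hormone_series(t, basal, pulse_times, pulse_masses, half_life):
    """Forward model: basal concentration plus the summed, exponentially
    decaying contribution of each secretion episode (single-compartment
    elimination with the given half-life). Illustrative sketch only."""
    decay = np.log(2.0) / half_life
    conc = np.zeros_like(t) + basal   # basal may be a scalar or an array
    for tau, mass in zip(pulse_times, pulse_masses):
        after = t >= tau              # a pulse contributes only after its onset
        conc[after] += mass * np.exp(-decay * (t[after] - tau))
    return conc
```

A Bayesian fit along the lines sketched in the abstract would place priors on the number and locations of the pulses, their masses, and a (possibly time-varying) basal level, then compute posterior probabilities for candidate event locations given noisy observations of the concentration series.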
Contemporary American cartographic research: a review and prospective
A Bayesian Hierarchical Approach to Multirater Correlated ROC Analysis
In a common ROC study design, several readers are asked to rate diagnostics of the same cases processed under different modalities. We describe a Bayesian hierarchical model that facilitates the analysis of this study design by explicitly modeling the three sources of variation inherent to it. In so doing, we achieve substantial reductions in the posterior uncertainty associated with estimates of the differences in areas under the estimated ROC curves and corresponding reductions in the mean squared error (MSE) of these estimates. Based on simulation studies, both the widths of confidence intervals and the MSE of estimates of differences in the area under the curves appear to be reduced by a factor that often exceeds two. Thus, our methodology has important implications for increasing the power of analyses based on ROC data collected from an available study population.
Generalised Swan modules and the D(2) problem
We give a detailed proof that, for any natural number n, each algebraic
two-complex over C_n \times C_\infty is realised up to congruence by a geometric
complex arising from a presentation for the group.
Comment: This is the version published by Algebraic & Geometric Topology on 24 February 200
GraFIX: a semiautomatic approach for parsing low- and high-quality eye-tracking data
Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of fixation detection methods and, thus, affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., a high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data are initially parsed by velocity-based algorithms whose input parameters are adapted by the user and then manipulated using the graphical interface, allowing accurate and rapid adjustments of the algorithms' outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can easily be adapted manually to fit each participant. Furthermore, the present application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data.
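The velocity-based first pass described above can be sketched as a minimal velocity-threshold classifier with a user-adjustable threshold and minimum fixation duration. This is not GraFIX itself: the function name, the parameter defaults, and the omission of smoothing, interpolation, and artifact-removal criteria are simplifications assumed here.

```python
import numpy as np

def detect_fixations(x, y, t, velocity_threshold=35.0, min_duration=0.1):
    """Classify gaze samples by point-to-point velocity (sketch of a
    velocity-threshold pass).

    x, y               : gaze coordinates (e.g., degrees of visual angle)
    t                  : timestamps in seconds
    velocity_threshold : samples moving slower than this count as fixation
    min_duration       : shortest run (s) kept as a fixation
    Returns a list of (start_time, end_time) fixation intervals.
    """
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # speed per sample pair
    is_fix = np.concatenate([[False], v < velocity_threshold])

    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i                                   # run of slow samples begins
        elif not f and start is not None:
            if t[i - 1] - t[start] >= min_duration:     # keep only long-enough runs
                fixations.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_duration:
        fixations.append((t[start], t[-1]))
    return fixations
```

In the spirit of the abstract, `velocity_threshold` and `min_duration` would be adapted per participant, with the classifier's output then inspected and corrected through a graphical interface.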
Favoring the job applications of military veterans has little effect on workforce quality in the U.S. federal government
For over a century, the U.S. federal government has biased its hiring procedures to increase the employment of military veterans. In a recent study, Tim Johnson examines the effect of these hiring procedures on the quality of the U.S. federal workforce. Contrary to both conventional wisdom and past research indicating that preferential hiring degrades workforce quality, he finds that veterans who benefit from preferential hiring reach quality benchmarks at rates comparable to other employees working in the same job circumstances.
Improving the NRTidal model for binary neutron star systems
Accurate and fast gravitational waveform (GW) models are essential to extract
information about the properties of compact binary systems that generate GWs.
Building on previous work, we present an extension of the NRTidal model for
binary neutron star (BNS) waveforms. The upgrades are: (i) a new closed-form
expression for the tidal contribution to the GW phase which includes further
analytical knowledge and is calibrated to more accurate numerical relativity
data than previously available; (ii) a tidal correction to the GW amplitude;
(iii) an extension of the spin-sector incorporating equation-of-state-dependent
finite size effects at quadrupolar and octupolar order; these appear in the
spin-spin tail terms and cubic-in-spin terms, both at 3.5PN. We add the new
description to the precessing binary black hole waveform model IMRPhenomPv2 to
obtain a frequency-domain precessing binary neutron star model. In addition, we
extend the SEOBNRv4_ROM and IMRPhenomD aligned-spin binary black hole waveform
models with the improved tidal phase corrections. Focusing on the new
IMRPhenomPv2_NRTidalv2 approximant, we test the model by comparing with
numerical relativity waveforms as well as hybrid waveforms combining tidal
effective-one-body and numerical relativity data. We also check consistency
against a tidal effective-one-body model across large regions of the BNS
parameter space.
Comment: Accepted manuscript
