Talking quiescence: a rigorous theory that supports parallel composition, action hiding and determinisation
The notion of quiescence - the absence of outputs - is vital in both
behavioural modelling and testing theory. Although the need for quiescence was
already recognised in the 90s, it has only been treated as a second-class
citizen thus far. This paper moves quiescence into the foreground and
introduces the notion of quiescent transition systems (QTSs): an extension of
regular input-output transition systems (IOTSs) in which quiescence is
represented explicitly, via quiescent transitions. Four carefully crafted rules
on the use of quiescent transitions ensure that our QTSs naturally capture
quiescent behaviour.
We present the building blocks for a comprehensive theory on QTSs supporting
parallel composition, action hiding and determinisation. In particular, we
prove that these operations preserve all the aforementioned rules.
Additionally, we provide a way to transform existing IOTSs into QTSs, even
allowing as input IOTSs that already contain some quiescent transitions. As an
important application, we show how our QTS framework simplifies the fundamental
model-based testing theory formalised around ioco.
Comment: In Proceedings MBT 2012, arXiv:1202.582
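As a concrete picture of the transformation mentioned in the last sentences, here is a minimal Python sketch of making quiescence explicit in an IOTS by adding delta-labelled self-loops at quiescent states. The function name and encoding are illustrative; the paper's four well-formedness rules impose more structure than this sketch shows.

```python
# A minimal sketch: make quiescence explicit in an input-output transition
# system (IOTS) by adding delta self-loops at quiescent states. Illustrative
# only; the paper's QTS rules are more refined than shown here.

DELTA = "delta"  # the special quiescence label

def deltafy(states, outputs, transitions, tau="tau"):
    """Return the transition relation extended with delta self-loops.

    transitions: set of (source, label, target) triples.
    A state is quiescent if it enables no output and no internal (tau) step.
    """
    quiescent = {
        s for s in states
        if not any(src == s and (lbl in outputs or lbl == tau)
                   for (src, lbl, tgt) in transitions)
    }
    return transitions | {(s, DELTA, s) for s in quiescent}

# Example: a vending machine that is quiescent until a coin arrives.
states = {"idle", "paid"}
inputs = {"coin?"}
outputs = {"coffee!"}
transitions = {("idle", "coin?", "paid"), ("paid", "coffee!", "idle")}

print(deltafy(states, outputs, transitions))
# "idle" gains a delta self-loop; "paid" does not, since it can output.
```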
Computing Distances between Probabilistic Automata
We present relaxed notions of simulation and bisimulation on Probabilistic
Automata (PAs) that allow some error epsilon. When epsilon is zero, we retrieve
the usual notions of bisimulation and simulation on PAs. We give logical
characterisations of these notions by choosing suitable logics which differ
from the elementary ones, L with negation and L without negation, by the modal
operator. Using flow networks, we show how to compute the relations in PTIME.
This allows the definition of an efficiently computable non-discounted distance
between the states of a PA. A natural modification of this distance is
introduced, to obtain a discounted distance, which weakens the influence of
long term transitions. We compare our notions of distance to others previously
defined and illustrate our approach on various examples. We also show that our
distance is not expansive with respect to process algebra operators. Although L
without negation is a suitable logic to characterise epsilon-(bi)simulation on
deterministic PAs, it is not for general PAs; interestingly, we prove that it
does characterise weaker notions, called a priori epsilon-(bi)simulation, which
we show to be NP-hard to decide.
Comment: In Proceedings QAPL 2011, arXiv:1107.074
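To illustrate the flow-network technique mentioned above, the following Python sketch checks whether one distribution can be matched to another through a relation, up to an error epsilon, via a max-flow feasibility test. This is one common epsilon-relaxation and the names are ours; the paper's precise definition may differ in detail.

```python
# A minimal sketch of the flow-network check underlying (epsilon-)simulation
# on probabilistic automata: mu can be matched to nu through R iff a suitable
# max flow reaches 1; here we allow slack epsilon. Requires networkx.
import networkx as nx

def matches(mu, nu, R, eps=0.0):
    """mu, nu: dicts state -> probability; R: set of (u, v) pairs."""
    G = nx.DiGraph()
    for u, p in mu.items():
        G.add_edge("source", ("L", u), capacity=p)
    for v, q in nu.items():
        G.add_edge(("R", v), "sink", capacity=q)
    for (u, v) in R:
        if u in mu and v in nu:
            G.add_edge(("L", u), ("R", v), capacity=1.0)
    flow, _ = nx.maximum_flow(G, "source", "sink")
    return flow >= 1.0 - eps

# Two slightly different coin-flip distributions, related pointwise:
mu = {"h": 0.5, "t": 0.5}
nu = {"h": 0.45, "t": 0.55}
R = {("h", "h"), ("t", "t")}
print(matches(mu, nu, R, eps=0.0))   # False: the masses differ
print(matches(mu, nu, R, eps=0.05))  # True: within the allowed error
```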
Testing Reactive Probabilistic Processes
We define a testing equivalence in the spirit of De Nicola and Hennessy for
reactive probabilistic processes, i.e. for processes where the internal
nondeterminism is due to random behaviour. We characterize the testing
equivalence in terms of ready-traces. From the characterization it follows that
the equivalence is insensitive to the exact moment in time in which an internal
probabilistic choice occurs, a property inherited from the original testing
equivalence of De Nicola and Hennessy. We also show decidability of the testing
equivalence for finite systems for which the complete model may not be known.
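To make the ready-trace characterisation concrete, here is a small Python sketch computing the probability that a reactive probabilistic process exhibits a given ready-trace (an alternation of ready sets and actions). The encoding is our own illustration and simplifies the paper's setting.

```python
def ready(proc, state):
    """The ready set of a state: the actions it enables."""
    return frozenset(proc.get(state, {}))

def ready_trace_prob(proc, state, trace):
    """Probability that `state` exhibits `trace`.
    proc: state -> action -> (next_state -> probability); reactive means
    each action has at most one successor distribution per state.
    trace: [ready_set, action, ready_set, action, ..., ready_set]
    """
    if ready(proc, state) != trace[0]:
        return 0.0
    if len(trace) == 1:
        return 1.0
    action, rest = trace[1], trace[2:]
    dist = proc[state].get(action, {})
    return sum(p * ready_trace_prob(proc, s2, rest) for s2, p in dist.items())

# A process that flips a fair coin on "a", then enables "b" or "c":
proc = {
    "s":  {"a": {"s1": 0.5, "s2": 0.5}},
    "s1": {"b": {"t": 1.0}},
    "s2": {"c": {"t": 1.0}},
}
trace = [frozenset({"a"}), "a", frozenset({"b"}), "b", frozenset()]
print(ready_trace_prob(proc, "s", trace))  # 0.5
```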
An Algorithm for Probabilistic Alternating Simulation
In probabilistic game structures, probabilistic alternating simulation
(PA-simulation) relations preserve formulas defined in probabilistic
alternating-time temporal logic with respect to the behaviour of a subset of
players. We propose a partition-based algorithm for computing the largest
PA-simulation, which is, to our knowledge, the first such algorithm that works
in polynomial time; it extends the generalised coarsest partition problem
(GCPP) to a game-based setting with mixed strategies. The algorithm has a
higher complexity than the algorithms in the literature for non-probabilistic
simulation and for probabilistic simulation without mixed actions, but it
slightly improves the
existing result for computing probabilistic simulation with respect to mixed
actions.
Comment: We've fixed a problem in the SOFSEM'12 conference version
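For readers unfamiliar with partition-based simulation algorithms, the following Python sketch shows the naive refinement loop that GCPP-style algorithms build on. The paper's algorithm additionally maintains a relation over blocks and handles mixed (randomised) strategies, which this skeleton omits.

```python
def refine(states, post):
    """Naive partition refinement: split blocks until any two states in the
    same block reach the same set of blocks. post: state -> successor set."""
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                # signature: the set of blocks reachable in one step
                key = frozenset(block_of[t] for t in post[s])
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            changed |= len(groups) > 1
        partition = new_partition
    return partition

# "b" and "c" are both deadlocked, "a" can still move: two blocks result.
print(refine(["a", "b", "c"], {"a": {"b"}, "b": set(), "c": set()}))
# e.g. [{'a'}, {'b', 'c'}]
```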
Using schedulers to test probabilistic distributed systems
This is the author's accepted manuscript. The final publication is available at Springer via http://dx.doi.org/10.1007/s00165-012-0244-5. Copyright © 2012, British Computer Society.

Formal methods are one of the most important approaches to increasing confidence in the correctness of software systems. A formal specification can be used as an oracle in testing, since one can determine whether an observed behaviour is allowed by the specification. This is an important feature of formal testing: behaviours of the system observed in testing are compared with the specification, and ideally this comparison is automated. In this paper we study a formal testing framework for systems that interact with their environment at physically distributed interfaces, called ports, and where choices between different possibilities are probabilistically quantified. Building on previous work, we introduce two families of schedulers to resolve nondeterministic choices among different actions of the system. The first type, which we call global schedulers, resolves nondeterministic choices by representing the environment as a single global scheduler. The second type, which we call localised schedulers, models the environment as a set of schedulers, one for each port. We formally define the application of schedulers to systems, and we define and study different implementation relations in this setting.
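The distinction between the two scheduler families can be pictured with a small Python sketch. The interfaces below are our own illustration, not the paper's formal definitions: a global scheduler chooses from the full global history, whereas a localised scheduler at a port sees only what happened at that port.

```python
import random

# Actions are (port, label) pairs; a history is a list of performed actions.

def global_scheduler(history, enabled):
    """Resolves nondeterminism using the full global history."""
    last_port = history[-1][0] if history else None
    # illustrative policy: prefer a port other than the last one used
    preferred = [a for a in enabled if a[0] != last_port]
    return random.choice(preferred or list(enabled))

def localised_scheduler(port, local_history, enabled_at_port):
    """Resolves nondeterminism at one port, seeing only that port's own
    history; it cannot coordinate with schedulers at other ports."""
    return random.choice(list(enabled_at_port))

enabled = [(1, "a?"), (2, "b?")]
print(global_scheduler([(1, "a?")], enabled))       # prefers port 2
print(localised_scheduler(1, ["a?"], [(1, "a?")]))  # only sees port 1
```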
Effectiveness of dolutegravir-based regimens as either first-line or switch antiretroviral therapy: data from the Icona cohort
Introduction: Concerns about dolutegravir (DTG) tolerability in the real-life setting have recently arisen. We aimed to estimate the risk of treatment discontinuation and virological failure of DTG-based regimens in a large cohort of HIV-infected individuals.

Methods: We performed a multicentre, observational study including all antiretroviral therapy (ART)-naïve and virologically suppressed treatment-experienced (TE) patients from the Icona (Italian Cohort Naïve Antiretrovirals) cohort who started a DTG-based regimen for the first time between January 2015 and December 2017. We estimated the cumulative risk of DTG discontinuation, regardless of the reason and for toxicity, and of virological failure using Kaplan–Meier curves. We used a Cox regression model to investigate predictors of DTG discontinuation.

Results: In total, 1679 individuals (932 ART-naïve, 747 TE) were included. The one- and two-year probabilities (95% CI) of DTG discontinuation were 6.7% (4.9 to 8.4) and 11.5% (8.7 to 14.3) for ART-naïve and 6.6% (4.6 to 8.6) and 7.6% (5.4 to 9.8) for TE subjects. In both ART-naïve and TE patients, discontinuations of DTG were mainly driven by toxicity, with an estimated risk (95% CI) of 4.0% (2.6 to 5.4) and 2.5% (1.3 to 3.6) by one year and 5.6% (3.8 to 7.5) and 4.0% (2.4 to 5.6) by two years, respectively. Neuropsychiatric events were the main reason for stopping DTG in both ART-naïve (2.1%) and TE (1.7%) patients. In ART-naïve patients, a concomitant AIDS diagnosis predicted the risk of discontinuing DTG for any reason (adjusted relative hazard (aRH) = 3.38, p = 0.001), whereas starting DTG in combination with abacavir (ABC) was associated with a higher risk of discontinuing because of toxicity (aRH = 3.30, p = 0.009). TE patients starting a DTG-based dual therapy, compared to a triple therapy, had a lower risk of discontinuation for any reason (adjusted hazard ratio (aHR) = 2.50, p = 0.037 for ABC-based triple therapies; aHR = 3.56, p = 0.012 for tenofovir-based) and for toxicity (aHR = 5.26, p = 0.030 for ABC-based; aHR = 6.60, p = 0.024 for tenofovir-based). The one- and two-year probabilities (95% CI) of virological failure were 1.2% (0.3 to 2.0) and 4.6% (2.7 to 6.5) in the ART-naïve group and 2.2% (1.0 to 3.3) and 2.9% (1.5 to 4.3) in the TE group.

Conclusions: In this large cohort, DTG showed excellent efficacy and optimal tolerability both as first-line and as switch ART. The low risk of treatment-limiting toxicities in ART-naïve as well as in treated individuals is reassuring for the use of DTG in everyday clinical practice.
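For readers who want to see how cumulative-risk estimates like those above are produced, here is a minimal Python sketch of the Kaplan–Meier estimator named in the Methods. The data are invented for illustration and have nothing to do with the Icona cohort.

```python
# A minimal sketch of the Kaplan-Meier estimate behind cumulative-risk
# figures (illustrative data only, not the Icona cohort).
def kaplan_meier(times, events):
    """times: follow-up duration; events: 1 = discontinuation, 0 = censored.
    Returns [(t, survival probability)] at each event time."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for (tt, e) in data if tt == t and e == 1)  # events at t
        n = sum(1 for (tt, e) in data if tt == t)             # all leaving at t
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= n
        i += n
    return curve

# Cumulative risk of discontinuation at time t is 1 - survival(t).
print(kaplan_meier([3, 5, 5, 8, 12], [1, 0, 1, 1, 0]))
# [(3, 0.8), (5, 0.6), (8, 0.3)]
```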
Search for the standard model Higgs boson in the H to ZZ to 2l 2nu channel in pp collisions at sqrt(s) = 7 TeV
A search for the standard model Higgs boson in the H to ZZ to 2l 2nu decay
channel, where l = e or mu, in pp collisions at a center-of-mass energy of 7
TeV is presented. The data were collected at the LHC, with the CMS detector,
and correspond to an integrated luminosity of 4.6 inverse femtobarns. No
significant excess is observed above the background expectation, and upper
limits are set on the Higgs boson production cross section. The presence of the
standard model Higgs boson with a mass in the 270-440 GeV range is excluded at
95% confidence level.
Comment: Submitted to JHEP
Combined search for the quarks of a sequential fourth generation
Results are presented from a search for a fourth generation of quarks
produced singly or in pairs in a data set corresponding to an integrated
luminosity of 5 inverse femtobarns recorded by the CMS experiment at the LHC in
2011. A novel strategy has been developed for a combined search for quarks of
the up and down type in decay channels with at least one isolated muon or
electron. Limits on the mass of the fourth-generation quarks and the relevant
Cabibbo-Kobayashi-Maskawa matrix elements are derived in the context of a
simple extension of the standard model with a sequential fourth generation of
fermions. The existence of mass-degenerate fourth-generation quarks with masses
below 685 GeV is excluded at 95% confidence level for minimal off-diagonal
mixing between the third- and the fourth-generation quarks. With a mass
difference of 25 GeV between the quark masses, the obtained limit on the masses
of the fourth-generation quarks shifts by about +/- 20 GeV. These results
significantly reduce the allowed parameter space for a fourth generation of
fermions.
Comment: Replaced with published version. Added journal reference and DOI
Probabilistic Bisimulation for Realistic Schedulers
Weak distribution bisimilarity is an equivalence notion on probabilistic automata, originally proposed for Markov automata. It has gained some popularity as the coarsest behavioral equivalence enjoying valuable properties like preservation of trace distribution equivalence and compositionality. This holds in the classical context of arbitrary schedulers, but it has been argued that this class of schedulers is unrealistically powerful. This paper studies a strictly coarser notion of bisimilarity, which still enjoys these properties in the context of realistic subclasses of schedulers: trace distribution equivalence is implied for partial information schedulers, and compositionality is preserved by distributed schedulers. The intersection of the two scheduler classes thus spans a coarser and still reasonable compositional theory of behavioral semantics.
Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV
The performance of muon reconstruction, identification, and triggering in CMS
has been studied using 40 inverse picobarns of data collected in pp collisions
at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection
criteria covering a wide range of physics analysis needs have been examined.
For all considered selections, the efficiency to reconstruct and identify a
muon with a transverse momentum pT larger than a few GeV is above 95% over the
whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4,
while the probability to misidentify a hadron as a muon is well below 1%. The
efficiency to trigger on single muons with pT above a few GeV is higher than
90% over the full eta range, and typically substantially better. The overall
momentum scale is measured to a precision of 0.2% with muons from Z decays. The
transverse momentum resolution varies from 1% to 6% depending on pseudorapidity
for muons with pT below 100 GeV and, using cosmic rays, it is shown to be
better than 10% in the central region up to pT = 1 TeV. Observed distributions
of all quantities are well reproduced by the Monte Carlo simulation.
Comment: Replaced with published version. Added journal reference and DOI
