Extension of PRISM by Synthesis of Optimal Timeouts in Fixed-Delay CTMC
We present a practically appealing extension of the probabilistic model checker PRISM that enables it to handle fixed-delay continuous-time Markov chains (fdCTMCs) with rewards, a formalism equivalent to deterministic and stochastic Petri nets (DSPNs). fdCTMCs allow transitions with fixed delays (or timeouts) on top of the traditional transitions with exponential rates. Our extension supports evaluation of the expected reward accumulated until reaching a given set of target states. The main contribution is a synthesis algorithm that, treating the fixed delays as parameters, computes epsilon-optimal values of the fixed delays minimizing the expected reward. We provide a performance evaluation of the synthesis on practical examples.
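To make the notion of epsilon-optimality concrete, here is a minimal sketch (not the PRISM implementation) of the underlying idea for a single delay parameter: if the expected reward is a Lipschitz-continuous function of the delay on a bounded range, evaluating it on a sufficiently fine grid yields a delay within epsilon of the optimum. The cost function and all constants below are hypothetical stand-ins.

```python
# Illustrative sketch: epsilon-optimal timeout selection by grid evaluation.
# `expected_cost` is a stand-in for "expected accumulated reward until the
# target set is reached" as a function of one fixed-delay parameter d; in the
# tool this value would come from analysing the fdCTMC for the concrete d.

import math

def expected_cost(d):
    # hypothetical toy cost curve with an interior minimum near d ~ 1.9
    return 2.0 * math.exp(-d) + 0.3 * d

def synthesize_timeout(cost, d_max, lipschitz_bound, eps):
    """Return a delay within eps of the optimum, assuming `cost` is
    Lipschitz-continuous on [0, d_max] with the given bound."""
    step = eps / lipschitz_bound
    n = int(math.ceil(d_max / step)) + 1
    candidates = [min(i * step, d_max) for i in range(n)]
    return min(candidates, key=cost)

best_d = synthesize_timeout(expected_cost, d_max=10.0, lipschitz_bound=2.0, eps=0.01)
print(best_d, expected_cost(best_d))
```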
A note on the invariant distribution of a quasi-birth-and-death process
The aim of this paper is to give an explicit formula for the invariant distribution of a quasi-birth-and-death process in terms of the block entries of the transition probability matrix, using a matrix-valued orthogonal polynomials approach. We show that the invariant distribution can be computed using the squared norms of the corresponding matrix-valued orthogonal polynomials, regardless of whether or not these norms are diagonal matrices. We give an example in which the squared norms are not diagonal matrices but the invariant distribution can nevertheless be computed.
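As a numerical baseline only (this is not the paper's orthogonal-polynomial formula), the invariant distribution of a finite, truncated QBD can be cross-checked by solving the stationarity equations directly; the block values and the boundary treatment below are arbitrary choices made for the toy example.

```python
# Brute-force check: assemble a block-tridiagonal transition matrix and solve
# pi P = pi, sum(pi) = 1 for a finite truncation of the QBD.

import numpy as np

def qbd_transition_matrix(A0, A1, A2, levels):
    """Block-tridiagonal matrix from up/local/down blocks; any missing mass at
    the boundary levels is pushed onto the diagonal (a toy modelling choice)."""
    m = A0.shape[0]
    P = np.zeros((levels * m, levels * m))
    for k in range(levels):
        P[k*m:(k+1)*m, k*m:(k+1)*m] = A1
        if k > 0:
            P[k*m:(k+1)*m, (k-1)*m:k*m] = A2
        if k < levels - 1:
            P[k*m:(k+1)*m, (k+1)*m:(k+2)*m] = A0
    deficit = 1.0 - P.sum(axis=1)
    P[np.arange(levels*m), np.arange(levels*m)] += deficit
    return P

def stationary(P):
    """Solve pi P = pi together with the normalisation sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# tiny example with 2x2 blocks (numbers chosen only to give a valid chain)
A0 = np.array([[0.2, 0.1], [0.1, 0.2]])   # up one level
A1 = np.array([[0.2, 0.2], [0.2, 0.2]])   # stay at the same level
A2 = np.array([[0.3, 0.0], [0.0, 0.3]])   # down one level
P = qbd_transition_matrix(A0, A1, A2, levels=20)
print(stationary(P)[:4])
```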
Optimizing Performance of Continuous-Time Stochastic Systems using Timeout Synthesis
We consider a parametric version of fixed-delay continuous-time Markov chains (or, equivalently, deterministic and stochastic Petri nets, DSPNs) in which fixed-delay transitions are specified by parameters rather than concrete values. Our goal is to synthesize values of these parameters that, for a given cost function, minimise the expected total cost incurred before reaching a given set of target states. We show that, under mild assumptions, optimal parameter values can be effectively approximated via a translation to a Markov decision process (MDP) whose actions correspond to discretized values of these parameters.
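A minimal sketch of the reduction idea follows; it is my illustration of the general pattern, not the paper's exact construction. The MDP's actions are discretized delay values, and value iteration minimises the expected total cost until the target set is reached. The states, transition probabilities and per-action costs are placeholders; in the actual translation they would be derived from the parametric fdCTMC.

```python
import numpy as np  # not strictly needed; kept for easy extension

delays = [0.5, 1.0, 2.0]           # discretized candidate timeout values
states = ["wait", "busy", "done"]  # "done" plays the role of the target set
target = {"done"}

# trans[s][d] = list of (next_state, probability); cost[s][d] = expected cost
trans = {
    "wait": {d: [("busy", 0.6), ("done", 0.4)] for d in delays},
    "busy": {d: [("wait", 0.3), ("done", 0.7)] for d in delays},
}
cost = {
    "wait": {0.5: 1.0, 1.0: 0.7, 2.0: 0.9},
    "busy": {0.5: 2.0, 1.0: 1.5, 2.0: 1.2},
}

def value_iteration(eps=1e-8):
    """Minimise expected total cost to the target over the discretized delays."""
    V = {s: 0.0 for s in states}
    while True:
        delta, newV = 0.0, dict(V)
        for s in states:
            if s in target:
                continue
            newV[s] = min(
                cost[s][d] + sum(p * V[t] for t, p in trans[s][d])
                for d in delays
            )
            delta = max(delta, abs(newV[s] - V[s]))
        V = newV
        if delta < eps:
            return V

print(value_iteration())
```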
An approximation approach for the deviation matrix of continuous-time Markov processes with application to Markov decision theory
We present an update formula that allows the deviation matrix of a continuous-time Markov process with denumerable state space and generator matrix Q* to be expressed through a continuous-time Markov process with generator matrix Q. We show that under suitable stability conditions the algorithm converges at a geometric rate. By applying the concept to three different examples, namely the M/M/1 queue with vacations, the M/G/1 queue, and a tandem network, we illustrate the broad applicability of our approach. For a problem in admission control, we apply our approximation algorithm to Markov decision theory to compute the optimal control policy. Numerical examples are presented to highlight the efficiency of the proposed algorithm.
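For orientation, the following block records the standard definition and basic identities of the deviation matrix in the finite ergodic case; the paper itself works with denumerable state spaces, where additional conditions are needed.

```latex
% Standard background (finite ergodic case), stated for orientation only.
% For a generator $Q$ with stationary distribution $\pi$ and ergodic
% projector $\Pi = \mathbf{1}\pi$, the deviation matrix is
\[
  D \;=\; \int_0^\infty \bigl(e^{Qt} - \Pi\bigr)\,\mathrm{d}t ,
\]
% and it satisfies
\[
  QD \;=\; DQ \;=\; \Pi - I, \qquad
  D\Pi \;=\; \Pi D \;=\; 0, \qquad
  D \;=\; (\Pi - Q)^{-1} - \Pi .
\]
```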
Temporally correlated zero-range process with open boundaries: Steady state and fluctuations
19 pages, 14 figures, v2: minor revisions, close to final published version at http://dx.doi.org/10.1103/PhysRevE.92.02213
The preemptive repeat hybrid server interruption model
We analyze a discrete-time queueing system with server interruptions and a hybrid preemptive repeat interruption discipline. Such a discipline encapsulates both the preemptive repeat identical and the preemptive repeat different disciplines. By introducing and analyzing so-called service completion times, we significantly reduce the complexity of the analysis. Our results include, among others, the probability generating functions and moments of the queue content and the delay. Finally, by means of some numerical examples, we assess how the performance measures are affected by the specifics of the interruption discipline.
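To illustrate the difference between the two pure disciplines that the hybrid model encapsulates, here is a toy Monte Carlo sketch of my own (not the paper's analysis, and with made-up distributions): in each slot an interruption starts with some probability and lasts a geometric number of slots; an interrupted service restarts from scratch, either with the same length (repeat identical) or a freshly drawn length (repeat different).

```python
import random

def service_time():
    # hypothetical service-time distribution: uniform on {1, ..., 8} slots
    return random.randint(1, 8)

def interruption_length(beta=0.4):
    # geometric number of slots (support 1, 2, ...)
    n = 1
    while random.random() > beta:
        n += 1
    return n

def completion_time(discipline, alpha=0.1):
    """Slots from the start of service until completion, including
    interruptions and repeated attempts."""
    s = service_time()
    total = 0
    while True:
        done = 0
        while done < s:
            if random.random() < alpha:      # interruption hits this slot
                total += interruption_length()
                if discipline == "different":
                    s = service_time()       # resample the length on restart
                done = 0                      # restart the service from scratch
            else:
                total += 1
                done += 1
        return total

for disc in ("identical", "different"):
    runs = [completion_time(disc) for _ in range(100_000)]
    print(disc, sum(runs) / len(runs))
```

Under repeat identical, unusually long service-time draws are retried again and again, so its mean completion time exceeds that of repeat different in this toy setting.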
Distributed Synthesis in Continuous Time
We introduce a formalism modelling communication of distributed agents strictly in continuous time. Within this framework, we study the problem of synthesising local strategies for individual agents such that a specified set of goal states is reached, or reached with at least a given probability. The flow of time is modelled explicitly, based on continuous-time randomness, with two natural implications: first, the non-determinism stemming from interleaving disappears; second, when we restrict attention to a subclass of non-urgent models, the quantitative value problem for two players can be solved in EXPTIME. Indeed, the explicit continuous time enables players to communicate their states by delaying synchronisation (which is unrestricted for non-urgent models). In general, the problems are undecidable already for two players in the quantitative case and for three players in the qualitative case. The qualitative undecidability is shown by a reduction to decentralized POMDPs, for which we provide the strongest (and rather surprising) undecidability result so far.
A differential equation for a class of discrete lifetime distributions with an application in reliability: A demonstration of the utility of computer algebra
It is shown that the probability generating function of a lifetime random variable T on a finite lattice with polynomial failure rate satisfies a certain differential equation. The interrelationship with Markov chain theory is highlighted. The differential equation gives rise to a system of differential equations which, when inverted, can be used in the limit to express the polynomial coefficients in terms of the factorial moments of T. This can then be used to estimate the polynomial coefficients. Some special cases are worked through symbolically using computer algebra. A simulation study is used to validate the approach and to explore its potential in the reliability context.
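The following block records the standard objects that this connection relies on (not the paper's specific differential equation): the failure rate determines the lattice distribution, and the factorial moments are the derivatives of the probability generating function at one.

```latex
% Standard background for a lifetime $T$ on $\{0,1,\dots,N\}$ with failure
% rate $h(k) = \Pr(T = k \mid T \ge k)$:
\[
  \Pr(T = k) \;=\; h(k)\prod_{j=0}^{k-1}\bigl(1 - h(j)\bigr), \qquad
  G(z) \;=\; \mathbb{E}\bigl[z^{T}\bigr] \;=\; \sum_{k=0}^{N} z^{k}\,\Pr(T = k),
\]
% and the factorial moments are recovered from derivatives of $G$ at $z = 1$:
\[
  \mathbb{E}\bigl[T(T-1)\cdots(T-r+1)\bigr] \;=\; G^{(r)}(1).
\]
```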
