108 research outputs found

    A characterization of the optimal risk-sensitive average cost in finite controlled Markov chains

    This work concerns controlled Markov chains with finite state and action spaces. The transition law satisfies the simultaneous Doeblin condition, and the performance of a control policy is measured by the (long-run) risk-sensitive average cost criterion associated with a positive, but otherwise arbitrary, risk-sensitivity coefficient. Within this context, the optimal risk-sensitive average cost is characterized via a minimization problem in a finite-dimensional Euclidean space. Comment: published at http://dx.doi.org/10.1214/105051604000000585 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
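
    The abstract does not write the criterion out; a standard formulation in this literature (notation assumed here, not quoted from the paper) defines the risk-sensitive average cost of a policy \pi starting at state x, for a risk-sensitivity coefficient \lambda > 0, as

        J(\lambda, \pi, x) = \limsup_{n \to \infty} \frac{1}{\lambda n} \log E^{\pi}_{x}\left[ \exp\left( \lambda \sum_{k=0}^{n-1} C(X_k, A_k) \right) \right],

    where C is the running cost and X_k, A_k denote the state and action at time k; the optimal cost characterized in the paper is then J^{*}(\lambda, x) = \inf_{\pi} J(\lambda, \pi, x).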

    An optimality system for finite average Markov decision chains under risk-aversion

    This work concerns controlled Markov chains with finite state space and compact action sets. The decision maker is risk-averse with constant risk-sensitivity, and the performance of a control policy is measured by the long-run average cost criterion. Under standard continuity-compactness conditions, it is shown that the (possibly non-constant) optimal value function is characterized by a system of optimality equations from which an optimal stationary policy can be obtained. It is also shown that the optimal limit superior and limit inferior average cost functions coincide.
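
    For orientation, when the optimal value happens to be a constant g, the optimality equation in this risk-averse setting (risk-sensitivity coefficient \lambda > 0) is usually written in the multiplicative form

        e^{\lambda (g + h(x))} = \min_{a \in A(x)} \left[ e^{\lambda C(x,a)} \sum_{y} p(y \mid x, a) \, e^{\lambda h(y)} \right], \qquad x \in S,

    for a relative value function h; this is a generic textbook form rather than the paper's statement, whose contribution is a system of such equations that also covers the possibly non-constant optimal value function.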

    Risk-sensitive Markov stopping games with an absorbing state

    This work is concerned with discrete-time Markov stopping games with two players. At each decision time player II can stop the game, paying a terminal reward to player I, or can let the system continue its evolution. In the latter case player I applies an action that affects the transitions and entitles him to receive a running reward from player II. It is supposed that player I has a nonzero and constant risk-sensitivity coefficient, and that player II tries to minimize the utility of player I. The performance of a pair of decision strategies is measured by the risk-sensitive (expected) total reward of player I and, besides mild continuity-compactness conditions, the main structural assumption on the model is the existence of an absorbing state which is accessible from any starting point. In this context, it is shown that the value function of the game is characterized by an equilibrium equation, and the existence of a Nash equilibrium is established.
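
    In stopping games of this type the equilibrium equation is usually written in an exponential-utility scale; a generic sketch, assuming a positive risk-sensitivity coefficient \lambda and with notation not taken from the paper, reads

        V(x) = \min\left\{ e^{\lambda R(x)}, \; \sup_{a \in A(x)} e^{\lambda r(x,a)} \sum_{y} p(y \mid x, a) \, V(y) \right\}, \qquad V(\Delta) = 1,

    where R is the terminal reward paid on stopping, r the running reward, \Delta the absorbing state, and V the value in the utility scale; the value in reward units is recovered as \lambda^{-1} \log V(x). The accessibility of \Delta from every starting point is the structural assumption that keeps this fixed-point problem well posed.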

    Denumerable Markov stopping games with risk-sensitive total reward criterion

    This paper studies Markov stopping games with two players on a denumerable state space. At each decision time player II has two actions: to stop the game, paying a terminal reward to player I, or to let the system continue its evolution. In the latter case, player I selects an action affecting the transitions and charges a running reward to player II. The performance of each pair of strategies is measured by the risk-sensitive total expected reward of player I. Under mild continuity and compactness conditions on the components of the model, it is proved that the value of the game satisfies an equilibrium equation, and the existence of a Nash equilibrium is established.
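
    Although the paper works on a denumerable state space, a finite truncation already shows how such a value can be approximated numerically. The following is a minimal toy sketch, not the paper's algorithm: the states, rewards, transition law, the positive coefficient lam, and the reward-free absorbing state are all invented for illustration, and the iteration simply applies the stop-versus-continue recursion in the exponential-utility scale until it stabilizes.

        import numpy as np

        lam = 0.5                                  # assumed positive risk-sensitivity coefficient
        R = np.array([1.0, 2.0, 0.5, 0.0])         # terminal reward paid to player I if player II stops
        r = np.array([[0.2, 0.4],                  # running reward r[x, a] charged to player II
                      [0.1, 0.3],
                      [0.5, 0.2],
                      [0.0, 0.0]])
        P = np.array([                             # transition law P[x, a, y]; state 3 is absorbing
            [[0.5, 0.2, 0.1, 0.2], [0.1, 0.5, 0.2, 0.2]],
            [[0.3, 0.3, 0.2, 0.2], [0.2, 0.2, 0.4, 0.2]],
            [[0.1, 0.1, 0.6, 0.2], [0.4, 0.1, 0.3, 0.2]],
            [[0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0]],
        ])

        U = np.ones(4)                             # U(x) = expected exp(lam * total reward) starting at x
        for _ in range(2000):
            cont = np.max(np.exp(lam * r) * (P @ U), axis=1)   # player I maximizes if the game continues
            U_new = np.minimum(np.exp(lam * R), cont)          # player II stops or continues, minimizing
            U_new[3] = 1.0                                     # the absorbing state yields no further reward
            if np.max(np.abs(U_new - U)) < 1e-12:
                U = U_new
                break
            U = U_new

        print(np.log(U) / lam)                     # value of the game per state, in reward units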

    Markov stopping games with an absorbing state and total reward criterion

    This work is concerned with discrete-time zero-sum games with Markov transitions on a denumerable space. At each decision time player II can stop the system, paying a terminal reward to player I, or can let the system continue its evolution. If the system is not halted, player I selects an action which affects the transitions and receives a running reward from player II. Assuming the existence of an absorbing state which is accessible from any other state, the performance of a pair of decision strategies is measured by the total expected reward criterion. In this context it is shown that the value function of the game is characterized by an equilibrium equation, and the existence of a Nash equilibrium is established.
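
    For this risk-neutral total reward criterion the equilibrium equation takes a simpler additive form; a generic sketch (notation assumed, not quoted from the paper), with R the terminal reward, r the running reward, and \Delta the absorbing state, is

        V(x) = \min\left\{ R(x), \; \sup_{a \in A(x)} \left[ r(x,a) + \sum_{y} p(y \mid x, a) \, V(y) \right] \right\}, \qquad V(\Delta) = 0,

    the additive counterpart of the multiplicative equation sketched for the risk-sensitive stopping games above.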

    Risk-sensitive control for a class of nonlinear systems with multiplicative noise

    In this paper, we consider the problem of optimal control for a class of nonlinear stochastic systems with multiplicative noise. The nonlinearity consists of quadratic terms in the state and control variables. The optimality criteria are of a risk-sensitive and generalised risk-sensitive type. The optimal control is found in explicit closed form by the completion-of-squares and change-of-measure methods. As applications, we outline two special cases of our results. We show that a subset of the class of models which we consider leads to a generalised quadratic-affine term structure model (QATSM) for interest rates. We also demonstrate how our results lead to a generalisation of exponential utility as a criterion in optimal investment.
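
    Criteria of this type are built around an exponential-of-integral functional; in generic form, for a cost functional C(u) of the control u and a risk parameter \theta (notation assumed, not the paper's),

        J_{\theta}(u) = \frac{1}{\theta} \log E\left[ \exp\big( \theta \, C(u) \big) \right] \approx E[C(u)] + \frac{\theta}{2} \operatorname{Var}\big( C(u) \big) \quad \text{for small } \theta,

    so the criterion penalizes variability of the cost when \theta > 0 (risk-averse) and rewards it when \theta < 0 (risk-seeking); exponential utility enters the optimal investment application mentioned above in the same way.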

    Risk-Sensitive Optimal Control for Markov Decision Processes with Monotone Cost
