A Theory of Markovian Time Inconsistent Stochastic Control in Continuous Time

Author: Tomas Bjork
Publisher:
Total Pages: 44
Release: 2016
Genre:
ISBN:

In this paper, which is a continuation of the discrete-time paper, we develop a theory for continuous-time stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We study these problems within a game-theoretic framework, and we look for Nash subgame perfect equilibrium points. Within the framework of a controlled SDE and a fairly general objective functional, we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. As applications of the general theory we study non-exponential discounting as well as a time-inconsistent linear quadratic regulator. We also present a study of time inconsistency within the framework of a general equilibrium production economy of Cox-Ingersoll-Ross type.
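The non-exponential discounting example mentioned in the abstract can be illustrated numerically: under hyperbolic discounting, the ranking of a smaller-sooner versus a larger-later reward can flip as the evaluation date approaches, which is precisely the failure of the Bellman principle the theory addresses. A minimal sketch, where the discount function, reward values, and dates are illustrative assumptions:

```python
# Time inconsistency under hyperbolic discounting: d(t) = 1 / (1 + k*t).
# An agent compares a smaller-sooner reward with a larger-later reward,
# and the preferred option changes as the evaluation date moves closer.

def hyperbolic_discount(t, k=1.0):
    """Hyperbolic discount factor for a delay of t periods."""
    return 1.0 / (1.0 + k * t)

def present_value(reward, delay, k=1.0):
    return reward * hyperbolic_discount(delay, k)

# Smaller reward at t=10, larger reward at t=11, evaluated at time s.
small, t_small = 100.0, 10
large, t_large = 110.0, 11

# Viewed from s=0: the agent prefers the larger-later reward.
pv_small_far = present_value(small, t_small - 0)
pv_large_far = present_value(large, t_large - 0)

# Viewed from s=10: the agent now prefers the smaller-sooner reward.
pv_small_near = present_value(small, t_small - 10)
pv_large_near = present_value(large, t_large - 10)

print(pv_large_far > pv_small_far)    # → True (larger-later wins from afar)
print(pv_small_near > pv_large_near)  # → True (smaller-sooner wins up close)
```

Under exponential discounting the ranking would be the same from every evaluation date, which is why exponential discounting is the time-consistent special case.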

A General Theory of Markovian Time Inconsistent Stochastic Control Problems

Author: Tomas Bjork
Publisher:
Total Pages: 55
Release: 2016
Genre:
ISBN:

We develop a theory for stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game-theoretic framework, and we look for Nash subgame perfect equilibrium points. For a general controlled Markov process and a fairly general objective functional we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. All known examples of time inconsistency in the literature are easily seen to be special cases of the present theory. We also prove that for every time inconsistent problem, there exists an associated time consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time inconsistent problem. We also study some concrete examples.

Controlled Markov Processes and Viscosity Solutions

Author: Wendell H. Fleming
Publisher: Springer Science & Business Media
Total Pages: 436
Release: 2006-02-04
Genre: Mathematics
ISBN: 0387310711

This book is an introduction to optimal stochastic control for continuous time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, as well as two-controller, zero-sum differential games.

Controlled Markov Processes and Viscosity Solutions

Author: Wendell Helms Fleming
Publisher:
Total Pages: 428
Release: 2006
Genre: Markov processes
ISBN: 9786610461998

This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The text provides an introduction to dynamic programming for deterministic optimal control problems, as well as to the corresponding theory of viscosity solutions. A new Chapter X gives an introduction to the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets. Chapter VI of the First Edition has been completely rewritten, to emphasize the relationships between logarithmic transformations and risk sensitivity. A new Chapter XI gives a concise introduction to two-controller, zero-sum differential games. Also covered are controlled Markov diffusions and viscosity solutions of Hamilton-Jacobi-Bellman equations. The authors have tried, through illustrative examples and selective material, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. In this Second Edition, new material on applications to mathematical finance has been added. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.

Numerical Methods for Stochastic Control Problems in Continuous Time

Author: Harold Joseph Kushner
Publisher: Springer Science & Business Media
Total Pages: 439
Release: 1992
Genre: Distribution (Probability theory)
ISBN: 9780387978345

Stochastic control is a very active area of research and this monograph, written by two leading authorities in the field, has been updated to reflect the latest developments. It covers effective numerical methods for stochastic control problems in continuous time on two levels: that of practice (algorithms and applications) and that of mathematical development. It is broadly accessible to graduate students and researchers.
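The core idea behind the numerical methods the book covers is the Markov chain approximation: a controlled diffusion is replaced by a controlled Markov chain on a grid, whose discrete dynamic program is then solved by value iteration. A toy sketch in that spirit, for a 1D controlled diffusion dx = u dt + sigma dW with discounted quadratic running cost; the model parameters, grids, and reflecting boundary treatment are illustrative assumptions, not taken from the book:

```python
# Markov chain approximation sketch for  dx = u dt + sigma dW,
# minimizing the discounted cost  integral exp(-beta t) (x^2 + u^2) dt.
# The diffusion is replaced by a birth-death chain on a grid of step h.

import numpy as np

sigma, beta, h = 1.0, 0.5, 0.1
xs = np.arange(-2.0, 2.0 + h / 2, h)        # state grid (41 points)
us = np.linspace(-2.0, 2.0, 21)             # control grid
V = np.zeros_like(xs)

for _ in range(5000):                       # value iteration to a fixed point
    candidates = []
    for u in us:
        Q = sigma**2 + h * abs(u)           # normalizing factor
        dt = h**2 / Q                       # local interpolation interval
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q
        up = np.append(V[1:], V[-1])        # reflecting boundaries
        dn = np.insert(V[:-1], 0, V[0])
        candidates.append((xs**2 + u**2) * dt
                          + np.exp(-beta * dt) * (p_up * up + p_dn * dn))
    V_new = np.min(candidates, axis=0)      # minimize over the control grid
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# By symmetry of cost and dynamics, V is symmetric with its minimum at x = 0.
print(V[len(xs) // 2] <= V[0])
```

The transition probabilities are built so that the chain's local mean and variance match the diffusion's drift and volatility to first order, which is what drives the convergence theory developed in the book.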

Continuous-Time Markov Decision Processes

Author: Xianping Guo
Publisher: Springer Science & Business Media
Total Pages: 240
Release: 2009-09-18
Genre: Mathematics
ISBN: 3642025471

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
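For the bounded-rate case (the book's focus is the harder unbounded-rate setting), a standard solution route is uniformization: the continuous-time MDP is converted into an equivalent discrete-time MDP and solved by value iteration. A small sketch, where the two-state rate matrices, cost rates, and discount rate are illustrative assumptions:

```python
# Uniformization of a discounted continuous-time MDP with bounded rates.
# Q[a][i, j] is the transition rate from state i to j under action a
# (rows sum to zero); c[a] holds the running cost rates.

import numpy as np

Q = {
    0: np.array([[-1.0, 1.0], [2.0, -2.0]]),   # "slow" action
    1: np.array([[-3.0, 3.0], [4.0, -4.0]]),   # "fast" action
}
c = {0: np.array([2.0, 1.0]), 1: np.array([3.0, 0.5])}
alpha = 0.9                                    # continuous-time discount rate

Lam = max(-Q[a][i, i] for a in Q for i in range(2))   # uniformization rate
P = {a: np.eye(2) + Q[a] / Lam for a in Q}            # discrete-time kernels
gamma = Lam / (alpha + Lam)                           # effective discount

V = np.zeros(2)
for _ in range(1000):                          # contraction, so this converges
    V = np.min([c[a] / (alpha + Lam) + gamma * P[a] @ V for a in Q], axis=0)

policy = np.argmin(
    np.stack([c[a] / (alpha + Lam) + gamma * P[a] @ V for a in Q]), axis=0)
print(V, policy)
```

The construction only works when a uniform bound Lam on the exit rates exists, which is exactly the assumption the book's unbounded-rate theory dispenses with.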

Stochastic Analysis, Filtering, and Stochastic Optimization

Author: George Yin
Publisher: Springer Nature
Total Pages: 466
Release: 2022-04-22
Genre: Mathematics
ISBN: 3030985199

This volume is a collection of research works to honor the late Professor Mark H.A. Davis, whose pioneering work in the areas of Stochastic Processes, Filtering, and Stochastic Optimization spans more than five decades. Invited authors include his dissertation advisor, past collaborators, colleagues, mentees, and graduate students of Professor Davis, as well as scholars who have worked in the above areas. Their contributions expand upon topics in piecewise deterministic processes, pathwise stochastic calculus, martingale methods in stochastic optimization, filtering, mean-field games, time-inconsistency, as well as impulse, singular, risk-sensitive and robust stochastic control.

Time-Inconsistent Control Theory with Finance Applications

Author: Tomas Björk
Publisher: Springer Nature
Total Pages: 328
Release: 2021-11-02
Genre: Mathematics
ISBN: 3030818438

This book is devoted to problems of stochastic control and stopping that are time inconsistent in the sense that they do not admit a Bellman optimality principle. These problems are cast in a game-theoretic framework, with the focus on subgame-perfect Nash equilibrium strategies. The general theory is illustrated with a number of finance applications. In dynamic choice problems, time inconsistency is the rule rather than the exception. Indeed, as Robert H. Strotz pointed out in his seminal 1955 paper, relaxing the widely used ad hoc assumption of exponential discounting gives rise to time inconsistency. Other famous examples of time inconsistency include mean-variance portfolio choice and prospect theory in a dynamic context. For such models, the very concept of optimality becomes problematic, as the decision maker’s preferences change over time in a temporally inconsistent way. In this book, a time-inconsistent problem is viewed as a non-cooperative game between the agent’s current and future selves, with the objective of finding intrapersonal equilibria in the game-theoretic sense. A range of finance applications are provided, including problems with non-exponential discounting, mean-variance objective, time-inconsistent linear quadratic regulator, probability distortion, and market equilibrium with time-inconsistent preferences. Time-Inconsistent Control Theory with Finance Applications offers the first comprehensive treatment of time-inconsistent control and stopping problems, in both continuous and discrete time, and in the context of finance applications. Intended for researchers and graduate students in the fields of finance and economics, it includes a review of the standard time-consistent results, bibliographical notes, as well as detailed examples showcasing time inconsistency problems. For the reader unacquainted with standard arbitrage theory, an appendix provides a toolbox of material needed for the book.
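The intrapersonal-game viewpoint described in the blurb can be made concrete in discrete time: the equilibrium is found by backward induction, with each "self" best-responding to the consumption rules of its future selves. A sketch for a discrete cake-eating problem with quasi-hyperbolic (beta-delta) discounting; the horizon, cake size, utility, and discount parameters are illustrative assumptions, not taken from the book:

```python
# Subgame-perfect intrapersonal equilibrium by backward induction for a
# discrete cake-eating problem under beta-delta (quasi-hyperbolic) discounting.
# Self t maximizes u(c) + beta * delta * V[t+1], while the continuation value
# V itself compounds with delta only -- the present bias beta applies once.

import math

T, N = 3, 10                  # number of periods and initial cake size
beta, delta = 0.6, 0.95       # present bias and long-run discount factor
u = lambda c: math.sqrt(c)    # per-period utility of eating c units

def equilibrium(beta):
    """Return the equilibrium rule: rule[t][s] = units eaten at time t, state s."""
    V = [[0.0] * (N + 1) for _ in range(T + 1)]   # continuation utility stream
    rule = [[0] * (N + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for s in range(N + 1):
            best_c = max(range(s + 1),
                         key=lambda c: u(c) + beta * delta * V[t + 1][s - c])
            rule[t][s] = best_c
            V[t][s] = u(best_c) + delta * V[t + 1][s - best_c]
    return rule

biased = equilibrium(beta)        # present-biased selves (beta < 1)
consistent = equilibrium(1.0)     # ordinary exponential discounting
print(biased[0][N], consistent[0][N])   # first-period consumption levels
```

With beta < 1 the first self eats strictly more of the cake than under exponential discounting: the equilibrium consumption path is tilted toward the present, even though no single self can commit its successors.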

Selected Topics on Continuous-time Controlled Markov Chains and Markov Games

Author: Tomas Prieto-Rumeau
Publisher: World Scientific
Total Pages: 292
Release: 2012
Genre: Mathematics
ISBN: 1848168497

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) try to optimize their own objective functions. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward), and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, since the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.

Discrete-Time Markov Control Processes

Author: Onesimo Hernandez-Lerma
Publisher: Springer Science & Business Media
Total Pages: 223
Release: 2012-12-06
Genre: Mathematics
ISBN: 1461207290

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, (control of) epidemics, etc. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
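The LQ model singled out above is nevertheless easy to solve in the finite-horizon case via the standard backward Riccati recursion, which illustrates why it remains the workhorse despite its unbounded costs, noncompact controls, and uncountable state space. A sketch for a scalar-input double-integrator-like system; the system matrices, weights, and horizon are illustrative assumptions:

```python
# Finite-horizon discrete-time LQR via the backward Riccati recursion:
#   K_t = (R + B'P B)^{-1} B'P A ,   P_{t-1} = Q + A'P (A - B K_t).
# The optimal control is the linear feedback u_t = -K_t x_t.

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discretized double integrator
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.array([[0.1]])      # state and control cost weights
T = 50                                     # horizon length

P = Qc.copy()                              # terminal cost P_T = Q
gains = []
for _ in range(T):
    K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
    P = Qc + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                            # gains[t] is the feedback at time t

# Closed loop: the time-varying feedback drives the state toward the origin.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(float(np.linalg.norm(x)))
```

Note that the unbounded quadratic cost and noncompact control set cause no trouble here because the minimization over u has a closed-form linear solution at every stage, which is exactly the structure general MCP theory cannot presume.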