Robust Time-Inconsistent Stochastic Control Problems

Author: Chi Seng Pun
Publisher:
Total Pages: 20
Release: 2018
Genre:
ISBN:

This paper establishes a general analytical framework for continuous-time stochastic control problems for an ambiguity-averse agent (AAA) with time-inconsistent preferences, where the control problems do not satisfy Bellman's principle of optimality. The AAA is concerned about model uncertainty in the sense that she is not completely confident in the reference model of the controlled Markov state process and instead considers some similar alternative models. The problems of interest are studied within a set of dominated models, and the AAA seeks an optimal decision that is robust with respect to model risks. We adopt a game-theoretic framework and the concept of subgame perfect Nash equilibrium to derive an extended dynamic programming equation and extended Hamilton -- Jacobi -- Bellman -- Isaacs equations for characterizing the robust dynamically optimal control of the problem. We also prove a verification theorem to theoretically support our construction of robust control. To illustrate the tractability of the proposed framework, we study an example of robust dynamic mean-variance portfolio selection under two cases: (1) constant risk aversion and (2) state-dependent risk aversion.
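For the constant-risk-aversion case, the non-robust benchmark is well known from the time-consistent mean-variance literature (Basak and Chabakauri; Björk and Murgoci): the equilibrium strategy invests a deterministic dollar amount in the risky asset, independent of current wealth. A minimal sketch of that benchmark, assuming constant coefficients mu, r, sigma and risk aversion gamma (the robust solution studied in the paper modifies this benchmark; this is not the paper's formula):

```python
import math

def equilibrium_mv_dollar_amount(t, T, mu, r, sigma, gamma):
    """Dollar amount held in the risky asset at time t under the
    subgame perfect equilibrium with constant risk aversion gamma
    (non-robust benchmark; deterministic, wealth-independent)."""
    return (mu - r) / (gamma * sigma ** 2) * math.exp(-r * (T - t))

# Example: risky drift 8%, risk-free rate 2%, volatility 20%,
# gamma = 1, horizon T = 10 years.
amount_now = equilibrium_mv_dollar_amount(0.0, 10.0, 0.08, 0.02, 0.2, 1.0)
amount_end = equilibrium_mv_dollar_amount(10.0, 10.0, 0.08, 0.02, 0.2, 1.0)
```

The discount factor exp(-r(T - t)) makes the dollar exposure grow toward (mu - r)/(gamma sigma^2) as maturity approaches.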

Robust Time-Inconsistent Stochastic Linear-Quadratic Control

Author: Bingyan Han
Publisher:
Total Pages: 41
Release: 2019
Genre:
ISBN:

This paper studies stochastic linear-quadratic control problems for an ambiguity-averse agent with a time-inconsistent objective. We allow the agent to incorporate disturbances into the state's drift or to choose an alternative model among a set of models equivalent to the reference model, reflecting her ambiguity aversion about the drift coefficient of the state process. We adopt an innovative two-step equilibrium control approach to characterize the robust time-consistent controls while simultaneously preserving the preference order. Under a general framework allowing random parameters, we derive a sufficient condition for equilibrium controls using the forward-backward stochastic differential equation approach. We also provide analytical solutions to mean-variance portfolio problems in various settings. Our empirical studies confirm the improvement in the portfolio's performance, in terms of Sharpe ratio, from incorporating robustness.
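The Sharpe ratio comparison mentioned above can be estimated from a sample of per-period excess returns. A minimal sketch, with assumed example data rather than the paper's empirical setup:

```python
import math
import statistics

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio estimated from per-period returns:
    mean excess return over its sample standard deviation, scaled
    by sqrt(periods_per_year)."""
    excess = [r - risk_free for r in returns]
    mean = statistics.mean(excess)
    sd = statistics.stdev(excess)  # sample (n-1) standard deviation
    return (mean / sd) * math.sqrt(periods_per_year)

daily = [0.01, 0.02, 0.03, 0.00, 0.04]  # hypothetical daily returns
sr = sharpe_ratio(daily)
```

Comparing this statistic for the robust and non-robust portfolios on the same return sample is the kind of evaluation the abstract refers to.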

A Theory of Markovian Time Inconsistent Stochastic Control in Continuous Time

Author: Tomas Bjork
Publisher:
Total Pages: 44
Release: 2016
Genre:
ISBN:

In this paper, which is a continuation of the discrete time paper, we develop a theory for continuous time stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We study these problems within a game theoretic framework, and we look for Nash subgame perfect equilibrium points. Within the framework of a controlled SDE and a fairly general objective functional we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. As applications of the general theory we study non-exponential discounting as well as a time inconsistent linear quadratic regulator. We also present a study of time inconsistency within the framework of a general equilibrium production economy of Cox-Ingersoll-Ross type.
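As a quick illustration of why non-exponential discounting breaks time consistency (a sketch assuming hyperbolic discounting D(s) = 1/(1 + ks), not code from the paper): the same pair of rewards can be ranked differently from different evaluation dates, which is exactly the failure of the Bellman principle described above.

```python
def hyperbolic_discount(delay, k=1.0):
    """Present-value factor for a reward `delay` time units ahead."""
    return 1.0 / (1.0 + k * delay)

def present_value(reward, delay, k=1.0):
    return reward * hyperbolic_discount(delay, k)

# Small reward soon vs. large reward later.
small, t_small = 10.0, 1.0
large, t_large = 15.0, 3.0

# Viewed from t = 0: the small, earlier reward is preferred.
now_small = present_value(small, t_small)   # 10 / 2 = 5.0
now_large = present_value(large, t_large)   # 15 / 4 = 3.75

# Shift both rewards 10 units further away: the ranking reverses.
far_small = present_value(small, t_small + 10)  # 10 / 12
far_large = present_value(large, t_large + 10)  # 15 / 14
```

Under exponential discounting exp(-rs) the ratio of the two present values is invariant to such a common shift, so no reversal can occur; the reversal is the hallmark of time-inconsistent preferences.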

A General Theory of Markovian Time Inconsistent Stochastic Control Problems

Author: Tomas Bjork
Publisher:
Total Pages: 55
Release: 2016
Genre:
ISBN:

We develop a theory for stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game theoretic framework, and we look for Nash subgame perfect equilibrium points. For a general controlled Markov process and a fairly general objective functional we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. All known examples of time inconsistency in the literature are easily seen to be special cases of the present theory. We also prove that for every time inconsistent problem, there exists an associated time consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time inconsistent problem. We also study some concrete examples.
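For reference, the extended HJB system of this theory can be written schematically as follows (reproduced from the well-known Björk-Murgoci formulation, not verbatim from the paper), for an objective of the form J(t,x,u) = E_{t,x}[F(x, X_T^u)] + G(x, E_{t,x}[X_T^u]):

```latex
% Extended HJB system (schematic), \hat{u} denoting the equilibrium control
\begin{align*}
\sup_{u}\Big\{ (\mathcal{A}^{u}V)(t,x) - (\mathcal{A}^{u}f)(t,x,x)
  + (\mathcal{A}^{u}f^{x})(t,x)
  - \mathcal{A}^{u}(G \diamond g)(t,x)
  + (\mathcal{H}^{u}g)(t,x) \Big\} &= 0, \\
V(T,x) &= F(x,x) + G(x,x), \\
\mathcal{A}^{\hat{u}} f^{y}(t,x) = 0, \qquad f^{y}(T,x) &= F(y,x), \\
\mathcal{A}^{\hat{u}} g(t,x) = 0, \qquad g(T,x) &= x,
\end{align*}
```

where f^y(t,x) = E_{t,x}[F(y, X_T^{\hat u})], g(t,x) = E_{t,x}[X_T^{\hat u}], (G \diamond g)(t,x) = G(x, g(t,x)), and (\mathcal{H}^u g)(t,x) = G_y(x, g(t,x)) (\mathcal{A}^u g)(t,x). The first line reduces to the classical HJB equation when F and G do not depend on the initial point (x), which is precisely the time-consistent case.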

Numerical Methods for Stochastic Control Problems in Continuous Time

Author: Harold Kushner
Publisher: Springer Science & Business Media
Total Pages: 480
Release: 2013-11-27
Genre: Mathematics
ISBN: 146130007X

Stochastic control is a very active area of research. This monograph, written by two leading authorities in the field, has been updated to reflect the latest developments. It covers effective numerical methods for stochastic control problems in continuous time on two levels, that of practice and that of mathematical development. It is broadly accessible for graduate students and researchers.

Stochastic Control in Discrete and Continuous Time

Author: Atle Seierstad
Publisher: Springer Science & Business Media
Total Pages: 299
Release: 2008-11-11
Genre: Mathematics
ISBN: 0387766162

This book contains an introduction to three topics in stochastic control: discrete time stochastic control, i.e., stochastic dynamic programming (Chapter 1), piecewise-deterministic control problems (Chapter 3), and control of Ito diffusions (Chapter 4). The chapters include treatments of optimal stopping problems. An Appendix recalls material from elementary probability theory and gives heuristic explanations of certain more advanced tools in probability theory. The book will hopefully be of interest to students in several fields: economics, engineering, operations research, finance, business, mathematics. In economics and business administration, graduate students should readily be able to read it, and the mathematical level can be suitable for advanced undergraduates in mathematics and science. The prerequisites for reading the book are only a calculus course and a course in elementary probability. (Certain technical comments may demand a slightly better background.) As this book perhaps (and hopefully) will be read by readers with widely differing backgrounds, some general advice may be useful: Don't be put off if paragraphs, comments, or remarks contain material of a seemingly more technical nature that you don't understand. Just skip such material and continue reading, it will surely not be needed in order to understand the main ideas and results. The presentation avoids the use of measure theory.

Modeling, Stochastic Control, Optimization, and Applications

Author: George Yin
Publisher: Springer
Total Pages: 593
Release: 2019-07-16
Genre: Mathematics
ISBN: 3030254984

This volume collects papers, based on invited talks given at the IMA workshop in Modeling, Stochastic Control, Optimization, and Related Applications, held at the Institute for Mathematics and Its Applications, University of Minnesota, during May and June, 2018. There were four week-long workshops during the conference: (1) stochastic control, computation methods, and applications; (2) queueing theory and networked systems; (3) ecological and biological applications; and (4) finance and economics applications. For broader impact, researchers from different fields covering both theoretically oriented and application intensive areas were invited to participate in the conference. It brought together researchers from multi-disciplinary communities in applied mathematics, applied probability, engineering, biology, ecology, and networked science, to review and substantially update the most recent progress. As an archive, this volume presents some of the highlights of the workshops and collects papers covering a broad range of topics.

An Extended McKean -- Vlasov Dynamic Programming Approach to Robust Equilibrium Controls Under Ambiguous Covariance Matrix

Author: Qian Lei
Publisher:
Total Pages: 0
Release: 2020
Genre:
ISBN:

This paper studies a general class of time-inconsistent stochastic control problems under an ambiguous covariance matrix. The time-inconsistency is caused in various ways by a general objective functional, and thus the associated control problem does not admit Bellman's principle of optimality. Moreover, we model the state by a McKean -- Vlasov dynamics under a set of non-dominated probability measures induced by the ambiguous covariance matrix of the noises. We apply the game-theoretic concept of subgame perfect Nash equilibrium to develop a robust equilibrium control approach, which can yield robust time-consistent decisions. We characterize the robust equilibrium control and equilibrium value function by an extended optimality principle, and we then deduce a system of Bellman -- Isaacs equations to determine the equilibrium solution on the Wasserstein space of probability measures. The proposed analytical framework is illustrated with applications to robust continuous-time mean-variance portfolio selection problems with a constant or state-dependent risk aversion coefficient, under ambiguity stemming from ambiguous volatilities of multiple assets or ambiguous correlation between two risky assets. The explicit equilibrium portfolio solutions are represented in terms of the probability law.
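For the two-asset case with ambiguous correlation, the basic mechanism can be illustrated with elementary variance algebra: portfolio variance is affine in the correlation rho, so the worst case over an interval [rho_low, rho_high] is attained at an endpoint, with which endpoint depending on the sign of the product of the two positions. A minimal sketch (illustrative only, not the paper's equilibrium solution):

```python
def portfolio_variance(w1, w2, s1, s2, rho):
    """Variance of a two-asset portfolio with weights w1, w2,
    volatilities s1, s2, and correlation rho."""
    return w1**2 * s1**2 + w2**2 * s2**2 + 2.0 * rho * w1 * w2 * s1 * s2

def worst_case_variance(w1, w2, s1, s2, rho_low, rho_high):
    """Maximal variance over rho in [rho_low, rho_high].
    The variance is affine in rho, so the maximum sits at an endpoint:
    the upper endpoint if w1*w2 > 0, the lower endpoint if w1*w2 < 0."""
    return max(portfolio_variance(w1, w2, s1, s2, rho_low),
               portfolio_variance(w1, w2, s1, s2, rho_high))

# Long-long position: the high correlation is the worst case.
v_long = worst_case_variance(0.5, 0.5, 0.2, 0.2, -0.3, 0.8)
# Long-short position: the low correlation is the worst case.
v_hedge = worst_case_variance(0.5, -0.5, 0.2, 0.2, -0.3, 0.8)
```

The ambiguity-averse agent evaluates her portfolio under this worst-case law, which is what makes the resulting measures non-dominated and calls for the Isaacs-type equations in the paper.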

Methods for Optimal Stochastic Control and Optimal Stopping Problems Featuring Time-Inconsistency

Author: Christopher Wells Miller
Publisher:
Total Pages: 101
Release: 2016
Genre:
ISBN:

This thesis presents novel methods for computing optimal pre-commitment strategies in time-inconsistent optimal stochastic control and optimal stopping problems. We demonstrate how a time-inconsistent problem can often be rewritten in terms of a sequential optimization problem involving the value function of a time-consistent optimal control problem in a higher-dimensional state space. In particular, we obtain optimal pre-commitment strategies in a non-linear optimal stopping problem, in an optimal stochastic control problem involving conditional value-at-risk, and in an optimal stopping problem with a distribution constraint on the admissible stopping times. In each case, we relate the original problem to auxiliary time-consistent problems, the value functions of which may be characterized in terms of viscosity solutions of a Hamilton-Jacobi-Bellman equation.
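The conditional value-at-risk objective mentioned above is commonly handled through the Rockafellar -- Uryasev representation CVaR_a(L) = min over w of { w + E[(L - w)^+] / (1 - a) }, whose auxiliary level w is a natural extra state variable for the kind of higher-dimensional lifting described in the abstract. A minimal numerical sketch (illustrative; the grid search and sample data are assumptions, not the thesis's method):

```python
def cvar_rockafellar_uryasev(losses, alpha, grid):
    """Estimate CVaR_alpha of a loss sample via the Rockafellar-Uryasev
    minimization: CVaR = min_w  w + E[(L - w)^+] / (1 - alpha).
    The minimizing w is (an estimate of) VaR_alpha."""
    def objective(w):
        tail = sum(max(l - w, 0.0) for l in losses) / len(losses)
        return w + tail / (1.0 - alpha)
    return min(objective(w) for w in grid)

# Uniform losses 1..100: the 90% CVaR is the mean of the worst 10%,
# i.e. the mean of 91..100.
losses = [float(l) for l in range(1, 101)]
cvar_90 = cvar_rockafellar_uryasev(losses, 0.9, range(0, 101))
```

Because the objective is convex in w, a coarse grid or any one-dimensional convex solver suffices; in a dynamic setting one instead carries w as an additional state and solves a time-consistent problem for each level.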