An Extended McKean-Vlasov Dynamic Programming Approach to Robust Equilibrium Controls Under Ambiguous Covariance Matrix

Author: Qian Lei
Release: 2020

This paper studies a general class of time-inconsistent stochastic control problems under an ambiguous covariance matrix. The time-inconsistency arises in various ways from a general objective functional, so the associated control problem does not admit Bellman's principle of optimality. Moreover, we model the state by McKean-Vlasov dynamics under a set of non-dominated probability measures induced by the ambiguous covariance matrix of the noises. We apply the game-theoretic concept of subgame-perfect Nash equilibrium to develop a robust equilibrium control approach, which yields robust time-consistent decisions. We characterize the robust equilibrium control and equilibrium value function by an extended optimality principle, and then deduce a system of Bellman-Isaacs equations that determines the equilibrium solution on the Wasserstein space of probability measures. The proposed analytical framework is illustrated by applications to robust continuous-time mean-variance portfolio selection problems with a constant or state-dependent risk-aversion coefficient, under ambiguity stemming from the ambiguous volatilities of multiple assets or the ambiguous correlation between two risky assets. The explicit equilibrium portfolio solutions are represented in terms of the probability law.
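As a rough schematic of the kind of equation this abstract describes (illustrative placeholders only, not the paper's exact extended system, which carries additional correction terms for the time-inconsistency), the robust sup-inf formulation leads to a Bellman-Isaacs equation on the Wasserstein space:

```latex
% Schematic Bellman-Isaacs equation on the Wasserstein space.
% V, G, U, \Sigma and the generator \mathcal{L} are illustrative
% placeholders, not the paper's notation.
\sup_{u \in U} \inf_{\sigma \in \Sigma}
  \Big\{ \partial_t V(t,\mu) + \mathcal{L}^{u,\sigma} V(t,\mu) \Big\} = 0,
\qquad V(T,\mu) = G(\mu),
```

where $\mu$ is the law of the controlled state, $\Sigma$ is the set of admissible covariance matrices encoding the ambiguity, and $\mathcal{L}^{u,\sigma}$ is a McKean-Vlasov generator acting through the Lions derivative $\partial_\mu V$.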

Control of McKean-Vlasov Systems and Applications

Author: Xiaoli Wei
Release: 2018

This thesis studies the optimal control of McKean-Vlasov dynamics and its applications in mathematical finance, and consists of two parts. In the first part, we develop the dynamic programming (DP) method for solving McKean-Vlasov control problems. Using suitable admissible controls, we reformulate the value function with the law (resp. conditional law) of the controlled state process as the sole state variable, and establish the flow property of that law, which allows us to derive the Bellman programming principle in its general form. Then, relying on the notion of differentiability with respect to probability measures introduced by P.-L. Lions [Lio12] and on Itô's formula along measure-valued processes, we obtain the corresponding Bellman equation. Finally, we show that the value function is the unique viscosity solution of the Bellman equation. The first chapter summarizes useful results of differential calculus and stochastic analysis on the Wasserstein space. The second chapter considers the optimal control of nonlinear stochastic dynamical systems of McKean-Vlasov type in discrete time. The third chapter focuses on the stochastic optimal control of McKean-Vlasov SDEs without common noise in continuous time, where the coefficients may depend on the joint law of the state and the control. The last chapter addresses the optimal control of stochastic McKean-Vlasov dynamics in the presence of common noise in continuous time.

In the second part, we propose a robust portfolio selection model in a continuous-time mean-variance setting that accounts for ambiguity about both the expected rates of return and the correlation matrix of multiple assets. The problem is formulated as a mean-field-type differential game, for which we derive a separation principle.
Our explicit results provide an explanation to under-diversification, as documented in empirical studies.
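As a hedged sketch of the reformulation described above (notation illustrative, not the thesis's exact statement), lifting the problem to the space of probability measures yields a dynamic programming principle of the form:

```latex
% Schematic DPP with the law of the state as sole state variable.
v(t,\mu) \;=\; \sup_{\alpha}
  \Big\{ \int_t^{t+h} F\big(\mathbb{P}_{X^{t,\mu,\alpha}_s}\big)\,ds
       \;+\; v\big(t+h,\, \mathbb{P}_{X^{t,\mu,\alpha}_{t+h}}\big) \Big\},
```

where $\mathbb{P}_{X^{t,\mu,\alpha}_s}$ denotes the law of the controlled state started from law $\mu$ at time $t$; the flow property of this law is what makes the recursion well posed.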

Mean Field Games

Author: Yves Achdou
Publisher: Springer Nature
Total Pages: 316
Release: 2021-01-19
Genre: Mathematics
ISBN: 3030598373

This volume provides an introduction to the theory of Mean Field Games, suggested by J.-M. Lasry and P.-L. Lions in 2006 as a mean-field model for Nash equilibria in the strategic interaction of a large number of agents. Besides giving an accessible presentation of the main features of mean-field game theory, the volume offers an overview of recent developments which explore several important directions: from partial differential equations to stochastic analysis, from the calculus of variations to modeling and aspects related to numerical methods. Arising from the CIME Summer School "Mean Field Games" held in Cetraro in 2019, this book collects together lecture notes prepared by Y. Achdou (with M. Laurière), P. Cardaliaguet, F. Delarue, A. Porretta and F. Santambrogio. These notes will be valuable for researchers and advanced graduate students who wish to approach this theory and explore its connections with several different fields in mathematics.

Brain, Body and Machine

Author: Jorge Angeles
Publisher: Springer Science & Business Media
Total Pages: 364
Release: 2010-10-01
Genre: Technology & Engineering
ISBN: 3642162592

The reader will find here papers on human-robot interaction as well as human safety algorithms; haptic interfaces; innovative instruments and algorithms for the sensing of motion and the identification of brain neoplasms; and even a paper on a saxophone-playing robot.

Nonlinear Data Assimilation

Author: Peter Jan Van Leeuwen
Publisher: Springer
Total Pages: 130
Release: 2015-07-22
Genre: Mathematics
ISBN: 3319183478

This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters. The first contribution, by van Leeuwen, focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging toward particle filters that work well in systems of any dimension, and closes with a high-dimensional example. The second contribution, by Cheng and Reich, discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and permits a proper introduction of localization in particle filters, which has been lacking up to now.
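As a minimal illustration of the bootstrap particle filter that both contributions build on (a toy sketch for a hypothetical 1D linear-Gaussian model, not code from the book), the propagate-weight-resample cycle looks like this:

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=500, seed=0):
    """Bootstrap (SIR) particle filter for the toy model
        x_t = 0.9 * x_{t-1} + N(0, 0.5^2),   y_t = x_t + N(0, 0.5^2).
    Returns the filtered posterior mean at each time step.
    Illustrative only; the book's proposal-density and
    ensemble-transform filters are considerably more sophisticated."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # draw from the prior
    means = []
    for y in obs:
        # propagate through the state dynamics: the "proposal" here is
        # simply the transition density -- the bootstrap choice
        particles = 0.9 * particles + rng.normal(0.0, 0.5, n_particles)
        # weight by the Gaussian observation likelihood (variance 0.25)
        logw = -0.5 * (y - particles) ** 2 / 0.25
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * particles)))
        # multinomial resampling to fight weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return means
```

The weight-degeneracy that resampling fights here is exactly the problem that makes naive particle filters fail in high dimensions, motivating the proposal-density ideas of the first contribution.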

Stochastic Control Theory

Author: Makiko Nisio
Publisher: Springer
Total Pages: 263
Release: 2014-11-27
Genre: Mathematics
ISBN: 4431551239

This book offers a systematic introduction to optimal stochastic control theory via the dynamic programming principle, a powerful tool for analyzing control problems. First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the dynamic programming principle (DPP), whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup in addition to the viscosity solution theory. When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. These are treated in the same framework, via the nonlinear semigroup, and the results are applicable to the American option pricing problem. Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to the DPP (the min-max principle, to be precise). Using semi-discretization arguments, we construct the nonlinear semigroups whose generators provide the lower and upper Isaacs equations. Concerning partially observable control problems, we turn to stochastic parabolic equations driven by colored Wiener noises, in particular the Zakai equation. Existence, uniqueness, and regularity of solutions, as well as Itô's formula, are established. A control problem for the Zakai equation leads to a nonlinear semigroup whose generator provides the HJB equation on a Banach space, and the value function turns out to be the unique viscosity solution of this HJB equation under mild conditions. This edition provides a more general treatment of the topic than the earlier book Lectures on Stochastic Control Theory (ISI Lecture Notes 9), which deals with time-homogeneous cases.
Here, for finite time-horizon control problems, the DPP is formulated as a one-parameter nonlinear semigroup whose generator provides the HJB equation, using a time-discretization method. This semigroup corresponds to the value function and is characterized as the envelope of Markovian transition semigroups of responses to constant control processes. Besides finite time-horizon controls, the book discusses control-stopping problems in the same framework.
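The time-discretization idea behind the DPP is easiest to see in its fully discrete form. The following sketch (a generic finite-horizon Markov decision problem with made-up interfaces, not the book's continuous-time semigroup construction) performs the backward recursion V_t = sup_a { reward + E[V_{t+1}] }:

```python
import numpy as np

def backward_dpp(T, states, actions, reward, transition, terminal):
    """Finite-horizon dynamic programming: the discrete-time analogue
    of the DPP semigroup. `transition(s, a)` returns a probability
    vector over `states`; `reward(s, a)` and `terminal(s)` are scalars.
    Returns the time-0 value for each state. (Toy sketch only.)"""
    # start from the terminal condition and iterate backward in time
    V = np.array([terminal(s) for s in states])
    for _ in reversed(range(T)):
        V = np.array([
            max(reward(s, a) + transition(s, a) @ V for a in actions)
            for s in states
        ])
    return V
```

Letting the time step shrink, the generator of the resulting one-parameter family is what produces the HJB equation in the continuous-time theory.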

Probabilistic Theory of Mean Field Games with Applications I

Author: René Carmona
Publisher: Springer
Total Pages: 728
Release: 2018-03-01
Genre: Mathematics
ISBN: 3319589202

This two-volume book offers a comprehensive treatment of the probabilistic approach to mean field game models and their applications. The book is self-contained in nature and includes original material and applications with explicit examples throughout, including numerical solutions. Volume I of the book is entirely devoted to the theory of mean field games without a common noise. The first half of the volume provides a self-contained introduction to mean field games, starting from concrete illustrations of games with a finite number of players, and ending with ready-for-use solvability results. Readers are provided with the tools necessary for the solution of forward-backward stochastic differential equations of the McKean-Vlasov type at the core of the probabilistic approach. The second half of this volume focuses on the main principles of analysis on the Wasserstein space. It includes Lions' approach to the Wasserstein differential calculus, and the applications of its results to the analysis of stochastic mean field control problems. Together, both Volume I and Volume II will greatly benefit mathematical graduate students and researchers interested in mean field games. The authors provide a detailed road map through the book allowing different access points for different readers and building up the level of technical detail. The accessible approach and overview will allow interested researchers in the applied sciences to obtain a clear overview of the state of the art in mean field games.
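A hedged schematic of the forward-backward system of McKean-Vlasov type at the core of the probabilistic approach (generic coefficients $b,\sigma,f,g$, not the book's exact notation):

```latex
\begin{aligned}
 dX_t &= b\big(t, X_t, \mathcal{L}(X_t), \alpha_t\big)\,dt
        + \sigma\big(t, X_t, \mathcal{L}(X_t), \alpha_t\big)\,dW_t,\\
 dY_t &= -f\big(t, X_t, \mathcal{L}(X_t), Y_t, Z_t\big)\,dt + Z_t\,dW_t,
 \qquad Y_T = g\big(X_T, \mathcal{L}(X_T)\big),
\end{aligned}
```

where $\mathcal{L}(X_t)$ is the law of the forward state; the coupling of the forward and backward equations through this law is what distinguishes the McKean-Vlasov setting from classical FBSDEs.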

Controlled Markov Processes and Viscosity Solutions

Author: Wendell H. Fleming
Publisher: Springer Science & Business Media
Total Pages: 436
Release: 2006-02-04
Genre: Mathematics
ISBN: 0387310711

This book is an introduction to optimal stochastic control for continuous-time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, and two-controller, zero-sum differential games.

Fixed Point Theory

Author: Andrzej Granas
Publisher: Springer Science & Business Media
Total Pages: 706
Release: 2013-03-09
Genre: Mathematics
ISBN: 038721593X

The theory of Fixed Points is one of the most powerful tools of modern mathematics. This book contains a clear, detailed and well-organized presentation of the major results, together with an entertaining set of historical notes and an extensive bibliography describing further developments and applications. From the reviews: "I recommend this excellent volume on fixed point theory to anyone interested in this core subject of nonlinear analysis." --MATHEMATICAL REVIEWS
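One of the classical results covered by any treatment of this subject, the Banach contraction principle, has a directly computable form: for a contraction f on a complete metric space, iterating x_{n+1} = f(x_n) converges to the unique fixed point. A toy numerical illustration (our own sketch, not from the book):

```python
def banach_iterate(f, x0, tol=1e-10, max_iter=1000):
    """Banach fixed-point iteration on the real line: repeatedly apply
    f until successive iterates differ by less than `tol`.
    Convergence is guaranteed only when f is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge: f may not be a contraction")
```

For example, f(x) = 0.5*x + 1 is a contraction with Lipschitz constant 0.5, and the iteration converges to its unique fixed point x = 2 from any starting point.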

Dynamic Programming

Author: Moshe Sniedovich
Publisher: CRC Press
Total Pages: 624
Release: 2010-09-10
Genre: Business & Economics
ISBN: 9781420014631

Incorporating a number of the author’s recent ideas and examples, Dynamic Programming: Foundations and Principles, Second Edition presents a comprehensive and rigorous treatment of dynamic programming. The author emphasizes the crucial role that modeling plays in understanding this area. He also shows how Dijkstra’s algorithm is an excellent example of a dynamic programming algorithm, despite the impression given by the computer science literature. New to the Second Edition: expanded discussions of sequential decision models and the role of the state variable in modeling; a new chapter on forward dynamic programming models; a new chapter on the Push method, which gives a dynamic programming perspective on Dijkstra’s algorithm for the shortest path problem; and a new appendix on the Corridor method. Taking into account recent developments in dynamic programming, this edition continues to provide a systematic, formal outline of Bellman’s approach to dynamic programming. It looks at dynamic programming as a problem-solving methodology, identifying its constituent components and explaining its theoretical basis for tackling problems.
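The dynamic programming reading of Dijkstra's algorithm mentioned above can be made concrete: the algorithm solves Bellman's functional equation d(v) = min over edges (u,v) of d(u) + w(u,v), settling states in order of increasing value. A standard implementation (our own sketch, not code from the book):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm viewed as dynamic programming:
    it resolves d(v) = min_{(u,v)} d(u) + w(u, v) greedily, in order of
    increasing d. `graph` maps node -> list of (neighbor, weight) pairs
    with non-negative weights. Returns a dict of distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u was already settled cheaper
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # Bellman update: found a cheaper route to v
                heapq.heappush(heap, (nd, v))
    return dist
```

The priority queue is what turns the Bellman recursion into a one-pass algorithm: with non-negative weights, once a node is popped its value can never improve, so each state's equation is solved exactly once.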