Controlled Diffusion Processes

Author: N. V. Krylov
Publisher: Springer Science & Business Media
Total Pages: 314
Release: 2008-09-26
Genre: Science
ISBN: 3540709142

Stochastic control theory is a relatively young branch of mathematics. Its intensive development began in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note, in addition to the work of Howard and Bellman mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves continuous-time control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a continuous-time random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
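
For orientation, the class of models the book treats can be sketched in standard notation (generic placeholders, not Krylov's own symbols): the state is a diffusion whose drift and diffusion coefficients depend on a control process, and one optimizes an expected payoff over admissible controls:

\[
dX_t = b(X_t, \alpha_t)\,dt + \sigma(X_t, \alpha_t)\,dW_t, \qquad X_0 = x,
\]
\[
v(x) = \sup_{\alpha} \mathbb{E}_x \Big[ \int_0^T f(X_t, \alpha_t)\,dt + g(X_T) \Big],
\]

where \(W\) is a Wiener process, \(\alpha\) is the control, and \(f\) and \(g\) are running and terminal payoffs. Bellman's dynamic programming then characterizes the value function \(v\).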

Variational Calculus, Optimal Control and Applications

Author: Leonhard Bittner
Publisher: Birkhäuser
Total Pages: 354
Release: 2012-12-06
Genre: Mathematics
ISBN: 3034888023

The 12th conference on "Variational Calculus, Optimal Control and Applications" took place September 23-27, 1996, in Trassenheide on the Baltic Sea island of Usedom. Seventy mathematicians from ten countries participated. The preceding eleven conferences, too, were held in places of natural beauty throughout West Pomerania; the first, in 1972, took place in Zinnowitz, in the immediate vicinity of Trassenheide. The conferences were founded, and led ten times, by Professor Bittner (Greifswald) and Professor Klötzler (Leipzig), who both celebrated their 65th birthdays in 1996. The 12th conference in Trassenheide was therefore also dedicated to L. Bittner and R. Klötzler. Both scientists made a lasting impression on control theory in the former GDR. Originally, the conferences served to promote the exchange of research results. In the first years, most of the lectures were theoretical, but in the last few conferences practical applications have been given more attention. Besides their pioneering theoretical work, both honorees have also always dealt with application problems. L. Bittner has, for example, examined optimal control of nuclear reactors and associated safety aspects. Since 1992 he has been working on applications of optimal control in flight dynamics. R. Klötzler recently applied his results on optimal autobahn planning to the south tangent in Leipzig. The contributions published in these proceedings reflect the trend toward practical problems; starting points are often questions from flight dynamics.

Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations

Author: Martino Bardi
Publisher: Springer Science & Business Media
Total Pages: 588
Release: 2009-05-21
Genre: Science
ISBN: 0817647554

This softcover book is a self-contained account of the theory of viscosity solutions for first-order partial differential equations of Hamilton–Jacobi type and its interplay with Bellman’s dynamic programming approach to optimal control and differential games. It will be of interest to scientists involved in the theory of optimal control of deterministic linear and nonlinear systems. The work may be used by graduate students and researchers in control theory both as an introductory textbook and as an up-to-date reference book.
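
As a concrete illustration of this interplay (a generic infinite-horizon discounted formulation, not quoted from the book): for the deterministic control problem

\[
\dot y(t) = f(y(t), a(t)), \quad y(0) = x, \qquad v(x) = \inf_{a(\cdot)} \int_0^\infty e^{-\lambda t}\, \ell(y(t), a(t))\,dt,
\]

dynamic programming leads to the first-order Hamilton-Jacobi-Bellman equation

\[
\lambda v(x) + \sup_{a \in A} \big\{ -f(x,a) \cdot Dv(x) - \ell(x,a) \big\} = 0,
\]

which typically has no classical \(C^1\) solution; under standard assumptions the value function is, however, its unique viscosity solution. The symbols \(f\), \(\ell\), \(\lambda\), \(A\) are generic placeholders.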

Stochastic Analysis, Control, Optimization and Applications

Author: William M. McEneaney
Publisher: Springer Science & Business Media
Total Pages: 660
Release: 2012-12-06
Genre: Technology & Engineering
ISBN: 1461217849

In view of Professor Wendell Fleming's many fundamental contributions, his profound influence on the mathematical and systems theory communities, his service to the profession, and his dedication to mathematics, we have invited a number of leading experts in the fields of control, optimization, and stochastic systems to contribute to this volume in his honor on the occasion of his 70th birthday. These papers focus on various aspects of stochastic analysis, control theory and optimization, and applications. They include authoritative expositions and surveys as well as research papers on recent and important issues. The papers are grouped according to the following four major themes: (1) large deviations, risk-sensitive and H∞ control, (2) partial differential equations and viscosity solutions, (3) stochastic control, filtering and parameter estimation, and (4) mathematical finance and other applications. We express our deep gratitude to all of the authors for their invaluable contributions, and to the referees for their careful and timely reviews. We thank Harold Kushner for having graciously agreed to undertake the task of writing the foreword. Particular thanks go to H. Thomas Banks for his help, advice and suggestions during the entire preparation process, as well as for the generous support of the Center for Research in Scientific Computation. The assistance from the Birkhäuser professional staff is also greatly appreciated.

Controlled Markov Processes and Viscosity Solutions

Author: Wendell H. Fleming
Publisher: Springer Science & Business Media
Total Pages: 436
Release: 2006-02-04
Genre: Mathematics
ISBN: 0387310711

This book is an introduction to optimal stochastic control for continuous-time Markov processes and the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in the pricing of derivatives in incomplete markets, as well as two-controller, zero-sum differential games.

Stochastic Controls

Author: Jiongmin Yong
Publisher: Springer Science & Business Media
Total Pages: 459
Release: 2012-12-06
Genre: Mathematics
ISBN: 1461214661

As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question one will ask is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls? Some research on the relationship between the two did exist prior to the 1980s. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as the Hamilton-Jacobi-Bellman (HJB) equation.
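
Schematically, the two objects described above can be written as follows (standard forms with generic coefficients; sign conventions vary across texts, including Yong and Zhou's own). For the state equation and cost

\[
dx(t) = b(x(t),u(t))\,dt + \sigma(x(t),u(t))\,dW(t), \qquad J(u) = \mathbb{E}\Big[\int_0^T f(x(t),u(t))\,dt + h(x(T))\Big],
\]

the adjoint equation of the maximum principle is the backward SDE

\[
dp(t) = -H_x\big(x(t),u(t),p(t),q(t)\big)\,dt + q(t)\,dW(t), \qquad p(T) = h_x(x(T)),
\]

with Hamiltonian \(H(x,u,p,q) = \langle p, b(x,u)\rangle + \mathrm{tr}\,[q^\top \sigma(x,u)] + f(x,u)\), while dynamic programming yields the second-order HJB equation

\[
v_t(t,x) + \inf_{u}\Big\{ \tfrac12\,\mathrm{tr}\big[\sigma\sigma^\top(x,u)\, v_{xx}(t,x)\big] + \langle b(x,u), v_x(t,x)\rangle + f(x,u) \Big\} = 0, \qquad v(T,x) = h(x).
\]

How the adjoint process \(p(t)\) relates to the gradient \(v_x(t,x(t))\) is precisely the kind of question (Q) above asks.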

Risk-averse Optimal Control of Diffusion Processes

Author: Jianing Yao
Publisher:
Total Pages: 86
Release: 2017
Genre:
ISBN:

This work analyzes an optimal control problem in which performance is measured by a dynamic risk measure. While dynamic risk measures in discrete time and the associated control problems are well understood, the continuous-time framework brings great challenges in both theory and practice. This study addresses modeling, numerical schemes, and applications. In the first part, we focus on the formulation of a risk-averse control problem. Specifically, we make use of a decoupled forward-backward system of stochastic differential equations to evaluate a fixed policy: the forward stochastic differential equation (SDE) characterizes the evolution of the states, and the backward stochastic differential equation (BSDE) performs the risk evaluation at any instant of time. Relying on the Markovian structure of the system, we obtain the corresponding dynamic programming equation via both weak and strong formulations; at the same time, the risk-averse Hamilton-Jacobi-Bellman equation and its verification are derived under suitable assumptions. In the second part, the main thrust is to find a convergent numerical method to solve the system in a discrete-time setting. Specifically, we construct a piecewise-constant Markovian control and show that it is arbitrarily close to the optimal control. The results rely heavily on the regularity of the solution to the generalized Hamilton-Jacobi-Bellman PDE. In the third part, we propose a numerical method for the risk evaluation defined by the BSDE. Using the dual representation of the risk measure, we convert risk valuation into a stochastic control problem in which the control is the Radon-Nikodym derivative process. The optimality conditions of this control problem enable us to use a piecewise-constant density (control) to obtain a close approximation on a short interval. The Bellman principle then extends the approximation to any finite-horizon problem. Lastly, we give a financial application to risk management in conjunction with nested simulation.
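
In generic notation (a schematic of the decoupled system, not the thesis's exact formulation), the evaluation of a fixed policy \(u\) pairs a forward SDE with a BSDE:

\[
dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t, \qquad X_0 = x,
\]
\[
-dY_t = g\big(X_t, Y_t, Z_t\big)\,dt - Z_t\,dW_t, \qquad Y_T = \Phi(X_T),
\]

where \(Y_t\) is the dynamic risk, at time \(t\), of the remaining cost, the driver \(g\) encodes the risk measure, and \(Z_t\) is the martingale component. The system is decoupled because the forward equation does not depend on \((Y, Z)\); the symbols \(b\), \(\sigma\), \(g\), \(\Phi\) are placeholders.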