Discrete-Time Markov Control Processes

Discrete-Time Markov Control Processes
Author: Onesimo Hernandez-Lerma
Publisher: Springer Science & Business Media
Total Pages: 223
Release: 2012-12-06
Genre: Mathematics
ISBN: 1461207290

This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, (control of) epidemics, etc. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs-per-stage are bounded, and/or (c) the control constraint sets are compact. But curiously enough, the most widely used control model in engineering and economics, namely the LQ (Linear system/Quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
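
For concreteness, the LQ model singled out above can be written as follows (an illustrative sketch in generic notation, not the book's own; the matrices A, B, Q, R and the noise term are placeholders):

```latex
% Discrete-time LQ control model (illustrative sketch):
% state space R^n (uncountable), control set R^m (noncompact),
% quadratic stage cost (unbounded), violating (a), (b), and (c) above.
\[
  x_{t+1} = A x_t + B a_t + \xi_t,
  \qquad x_t \in \mathbb{R}^n, \; a_t \in \mathbb{R}^m,
\]
\[
  c(x_t, a_t) = x_t^{\top} Q\, x_t + a_t^{\top} R\, a_t,
  \qquad Q \succeq 0, \; R \succ 0.
\]
```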

Further Topics on Discrete-Time Markov Control Processes

Further Topics on Discrete-Time Markov Control Processes
Author: Onesimo Hernandez-Lerma
Publisher: Springer Science & Business Media
Total Pages: 286
Release: 2012-12-06
Genre: Mathematics
ISBN: 1461205611

Devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes, the text is mainly confined to MCPs with Borel state and control spaces. Although the book follows on from the author's earlier work, an important feature of this volume is that it is self-contained and can thus be read independently of the first. The control model studied is sufficiently general to include virtually all the usual discrete-time stochastic control models that appear in applications to engineering, economics, mathematical population processes, operations research, and management science.

Adaptive Markov Control Processes

Adaptive Markov Control Processes
Author: Onesimo Hernandez-Lerma
Publisher: Springer Science & Business Media
Total Pages: 160
Release: 2012-12-06
Genre: Mathematics
ISBN: 1441987142

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMP's), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMP's have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMP's, i.e., CMP's that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.
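
The estimate-then-adapt loop described above can be sketched roughly as follows (a minimal illustration of certainty-equivalence adaptive control, not the specific schemes developed in the book; env, estimate_theta, and policy_for are hypothetical placeholders):

```python
def certainty_equivalence_control(env, horizon, estimate_theta, policy_for):
    """Illustrative certainty-equivalence loop for an adaptive controlled
    Markov process: at each stage, estimate the unknown parameter from the
    observed history, then apply the control that would be optimal if the
    estimate were the true value. Sketch only; `env`, `estimate_theta`,
    and `policy_for` are hypothetical placeholders."""
    history = []                 # observed (state, action, next_state) triples
    x = env.reset()              # initial state
    total_cost = 0.0
    for t in range(horizon):
        theta_hat = estimate_theta(history)  # e.g. a maximum-likelihood estimate
        policy = policy_for(theta_hat)       # policy optimal for the estimated model
        a = policy(x)                        # control adapted to the estimate
        x_next, cost = env.step(a)           # observe transition and stage cost
        history.append((x, a, x_next))
        total_cost += cost
        x = x_next
    return total_cost
```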

Discrete-Time Markov Jump Linear Systems

Discrete-Time Markov Jump Linear Systems
Author: O.L.V. Costa
Publisher: Springer Science & Business Media
Total Pages: 287
Release: 2006-03-30
Genre: Mathematics
ISBN: 1846280826

This will be the most up-to-date book in the area (the closest competing title was published in 1990). It also takes a new slant on the subject, working in discrete rather than continuous time.

Discrete-Time Markov Chains

Discrete-Time Markov Chains
Author: George Yin
Publisher: Springer Science & Business Media
Total Pages: 372
Release: 2005
Genre: Business & Economics
ISBN: 9780387219486

Focusing on two-time-scale Markov chains in discrete time, the contents of this book are an outgrowth of some of the authors' recent research. The motivation stems from existing and emerging applications in optimization and control of complex hybrid Markovian systems in manufacturing, wireless communication, and financial engineering. Much effort in this book is devoted to designing system models arising from these applications, analyzing them via analytic and probabilistic techniques, and developing feasible computational algorithms so as to reduce the inherent complexity. This book presents results including asymptotic expansions of probability vectors, structural properties of occupation measures, exponential bounds, aggregation and decomposition together with the associated limit processes, and the interface of discrete-time and continuous-time systems. One of the salient features is that it contains a diverse range of applications to filtering, estimation, control and optimization, Markov decision processes, and financial engineering. This book will be an important reference for researchers in the areas of applied probability, control theory, and operations research, as well as for practitioners who use optimization techniques. Part of the book can also be used in a graduate course on applied probability, stochastic processes, and applications.

Discrete-Time Markov Chains

Discrete-Time Markov Chains
Author: G. George Yin
Publisher: Springer Science & Business Media
Total Pages: 354
Release: 2005-10-04
Genre: Mathematics
ISBN: 0387268715

This book focuses on two-time-scale Markov chains in discrete time. Our motivation stems from existing and emerging applications in optimization and control of complex systems in manufacturing, wireless communication, and financial engineering. Much of our effort in this book is devoted to designing system models arising from various applications, analyzing them via analytic and probabilistic techniques, and developing feasible computational schemes. Our main concern is to reduce the inherent system complexity. Although each of the applications has its own distinct characteristics, all of them are closely related through the modeling of uncertainty due to jump or switching random processes. One of the salient features of this book is the use of multiple time scales in Markov processes and their applications. Intuitively, not all parts or components of a large-scale system evolve at the same rate. Some of them change rapidly and others vary slowly. The different rates of variation allow us to reduce complexity via decomposition and aggregation. It would be ideal if we could divide a large system into its smallest irreducible subsystems, completely separable from one another, and treat each subsystem independently. However, this is often infeasible in reality due to various physical constraints and other considerations. Thus, we have to deal with situations in which the systems are only nearly decomposable, in the sense that there are weak links among the irreducible subsystems, which dictate the occasional regime changes of the system. An effective way to treat such near decomposability is time-scale separation. That is, we set up the systems as if there were two time scales, fast vs. slow. Following the time-scale separation, we use singular perturbation methodology to treat the underlying systems.
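
The time-scale separation described above is commonly set up as a singularly perturbed transition matrix, which can be sketched as follows (generic notation for illustration, not necessarily the book's):

```latex
% Singularly perturbed discrete-time Markov chain (illustrative sketch):
% the transition matrix splits into a dominant part and a small perturbation.
\[
  P^{\varepsilon} = \widetilde{P} + \varepsilon\, \widehat{Q},
  \qquad 0 < \varepsilon \ll 1,
\]
% Here \widetilde{P} = diag(\widetilde{P}^{1}, ..., \widetilde{P}^{l}) collects
% the fast transitions within nearly decomposable subsystems, while the small
% term \varepsilon \widehat{Q} models the weak links that cause occasional
% regime changes; aggregation replaces each block by a single state of a
% slowly varying limit chain.
```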

Markov Decision Processes

Markov Decision Processes
Author: Martin L. Puterman
Publisher: John Wiley & Sons
Total Pages: 544
Release: 2014-08-28
Genre: Mathematics
ISBN: 1118625870

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential." —Zentralblatt fur Mathematik ". . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes." —Journal of the American Statistical Association

Continuous-Time Markov Decision Processes

Continuous-Time Markov Decision Processes
Author: Xianping Guo
Publisher: Springer Science & Business Media
Total Pages: 240
Release: 2009-09-18
Genre: Mathematics
ISBN: 3642025471

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
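
As a rough sketch of the kind of model covered (generic notation and a discounted criterion shown for illustration only; transition and reward rates may be unbounded):

```latex
% Continuous-time MDP, expected discounted reward (illustrative sketch):
% q(dy | x, a) denotes the transition rates and r(x, a) the reward rate.
\[
  V_{\pi}(x) \;=\; \mathbb{E}_{x}^{\pi}\!\left[ \int_{0}^{\infty}
      e^{-\alpha t}\, r\bigl(x(t), a(t)\bigr)\, dt \right],
  \qquad \alpha > 0,
\]
% which a policy \pi seeks to maximize (or, with costs in place of rewards,
% to minimize).
```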