Approximate Solutions Of Common Fixed Point Problems
Author: Alexander J. Zaslavski
Publisher: Springer
Total Pages: 457
Release: 2016-06-30
Genre: Mathematics
ISBN: 3319332554
This book presents results on the convergence behavior of algorithms which are known as vital tools for solving convex feasibility problems and common fixed point problems. The main goal in dealing with a known computational error is to find what approximate solution can be obtained and how many iterates one needs to find it. According to known results, these algorithms should converge to a solution. In this exposition, these algorithms are studied taking into account computational errors, which are always present in practice; in this case convergence to a solution does not take place. We show that these algorithms generate a good approximate solution if the computational errors are bounded from above by a small positive constant. Beginning with an introduction, this monograph moves on to study:
· dynamic string-averaging methods for common fixed point problems in a Hilbert space
· dynamic string methods for common fixed point problems in a metric space
· the dynamic string-averaging version of the proximal algorithm
· common fixed point problems in metric spaces
· common fixed point problems in spaces with distances of the Bregman type
· a proximal algorithm for finding a common zero of a family of maximal monotone operators
· subgradient projection algorithms for convex feasibility problems in Hilbert spaces
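To fix ideas, here is a minimal numerical sketch of the first topic above. It is not taken from the book: it runs a string-averaging iteration in R^2 with two strings built from the metric projections onto a half-plane and a disk, and adds a small bounded perturbation to every operator evaluation to mimic the computational errors discussed above. The sets, strings, weights, and error bound EPS are all illustrative assumptions.

```python
# Illustrative sketch (not from the book): string-averaging for a common
# fixed point problem in R^2, with a bounded perturbation added to every
# operator evaluation to mimic computational error.
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-4  # assumed upper bound on the computational error per evaluation

def P_halfplane(x):          # projection onto {x : x[0] + x[1] <= 1}
    a, b = np.array([1.0, 1.0]), 1.0
    s = x @ a - b
    return x - max(s, 0.0) * a / (a @ a)

def P_disk(x):               # projection onto the disk of radius 2 at the origin
    n = np.linalg.norm(x)
    return x if n <= 2.0 else 2.0 * x / n

def noisy(op, x):            # operator evaluation corrupted by a bounded error
    e = rng.normal(size=2)
    return op(x) + EPS * e / np.linalg.norm(e)

# Two "strings" (orders of application) and equal averaging weights.
strings = [[P_halfplane, P_disk], [P_disk, P_halfplane]]
weights = [0.5, 0.5]

x = np.array([5.0, -3.0])
for _ in range(100):
    ends = []
    for s in strings:
        y = x
        for op in s:          # apply the operators along the string
            y = noisy(op, y)
        ends.append(y)
    x = sum(w * e for w, e in zip(weights, ends))  # average the endpoints

print(x, P_halfplane(x) - x, P_disk(x) - x)
```

The residuals settle at roughly the size of EPS rather than at zero, matching the book's message that bounded computational errors yield a good approximate solution rather than exact convergence.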
Author: Alexander J. Zaslavski
Publisher: Springer
Total Pages: 320
Release: 2018-05-02
Genre: Mathematics
ISBN: 3319774379
This book details approximate solutions to common fixed point problems and convex feasibility problems in the presence of perturbations. Convex feasibility problems search for a common point of a finite collection of subsets in a Hilbert space; common fixed point problems pursue a common fixed point of a finite collection of self-mappings in a Hilbert space. A variety of algorithms are considered in this book for solving both types of problems, the study of which has fueled a rapidly growing area of research. This monograph is timely and highlights the numerous applications to engineering, computed tomography, and radiation therapy planning. Totaling eight chapters, the book begins with an introduction to foundational material and moves on to examine iterative methods in metric spaces. The dynamic string-averaging methods for common fixed point problems in a normed space are analyzed in Chapter 3. Dynamic string methods for common fixed point problems in a metric space are introduced and discussed in Chapter 4. Chapter 5 is devoted to the convergence of an abstract version of the algorithm known as component-averaged row projections (CARP). Chapter 6 studies a proximal algorithm for finding a common zero of a family of maximal monotone operators. Chapter 7 extends the results of Chapter 6 to a dynamic string-averaging version of the proximal algorithm. In Chapter 8, subgradient projection algorithms for convex feasibility problems are examined in infinite-dimensional Hilbert spaces.
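The link between the two problem classes is worth spelling out: a point belongs to the intersection of a collection of closed convex sets exactly when it is a common fixed point of their metric projections. The sketch below, with illustrative toy sets of its own rather than anything from the book, runs the simplest such method, cyclic projections, for a convex feasibility problem in R^2.

```python
# Illustrative sketch (assumed toy sets, not the book's algorithms): cyclic
# projections for a convex feasibility problem in R^2. A feasible point is
# exactly a common fixed point of the projection operators.
import numpy as np

def P_line(x):   # projection onto the line {x : x[1] = 1}
    return np.array([x[0], 1.0])

def P_band(x):   # projection onto the vertical band {x : 0 <= x[0] <= 3}
    return np.array([min(max(x[0], 0.0), 3.0), x[1]])

x = np.array([10.0, -7.0])
for _ in range(50):           # cycle through the sets in a fixed order
    for P in (P_line, P_band):
        x = P(x)

# x is (numerically) fixed by both projections, i.e. feasible for both sets.
print(x, np.allclose(P_line(x), x), np.allclose(P_band(x), x))
```

Replacing the fixed cyclic order with strings of operators whose composition and averaging pattern may change from step to step gives the dynamic string-averaging methods analyzed in Chapters 3 and 4.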
Author: Alexander J. Zaslavski
Publisher: Springer Nature
Total Pages: 434
Release: 2021-08-09
Genre: Mathematics
ISBN: 3030788490
This book is devoted to a detailed study of the subgradient projection method and its variants for convex optimization problems over the solution sets of common fixed point problems and convex feasibility problems. These optimization problems are investigated to determine the good solutions obtained by different versions of the subgradient projection algorithm in the presence of sufficiently small computational errors. The use of selected algorithms is highlighted, including the Cimmino-type subgradient, the iterative subgradient, and the dynamic string-averaging subgradient. All results presented are new. Optimization problems whose underlying constraints are the solution sets of other problems occur frequently in applied mathematics. The reader should not miss the section in Chapter 1 which considers some examples arising in real-world applications. The problems discussed also have an important impact on optimization theory. The book will be useful for researchers interested in optimization theory and its applications.
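As a rough illustration of the kind of algorithm studied here, and not the book's own method, the sketch below minimizes a convex function over the intersection of two half-spaces by alternating a diminishing-step subgradient step with a Cimmino-type step that averages the projections onto all the constraint sets at once. The objective, the sets, and the step sizes are hypothetical choices.

```python
# Hedged sketch (illustrative data, not from the book): minimize a convex
# objective over an intersection of half-spaces, alternating a subgradient
# step with a Cimmino-type step that averages projections onto all sets.
import numpy as np

A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # half-spaces a_i . x <= 1

def P_half(a, x):             # projection onto {x : a . x <= 1}
    s = x @ a - 1.0
    return x - max(s, 0.0) * a / (a @ a)

def subgrad(x):               # subgradient of f(x) = |x[0]| + |x[1]|
    return np.sign(x)

x = np.array([4.0, 4.0])
for k in range(1, 2001):
    x = x - (1.0 / k) * subgrad(x)                 # subgradient step
    x = np.mean([P_half(a, x) for a in A], axis=0) # Cimmino averaging step
print(x)   # approaches the minimizer of f over the intersection, here (0, 0)
```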
Author: Alexander J. Zaslavski
Publisher: Springer Nature
Total Pages: 535
ISBN: 3031707109
Author: Alexander J. Zaslavski
Publisher: Springer Nature
Total Pages: 392
ISBN: 3031508793
Author: Monther Alfuraidan
Publisher: Academic Press
Total Pages: 444
Release: 2016-06-20
Genre: Mathematics
ISBN: 0128043652
Fixed Point Theory and Graph Theory provides an intersection between the theory of fixed points, which gives the conditions under which maps (single-valued or multivalued) have solutions, and graph theory, which uses mathematical structures to describe the relationships between ordered pairs of objects in terms of their vertices and directed edges. This edited reference work is perhaps the first to provide a link between the two theories, describing not only their foundational aspects but also the most recent advances and the fascinating intersection of the domains. The authors provide solution methods for fixed points in different settings, with two chapters devoted to solution methods for critically important nonlinear problems in engineering, namely variational inequality, fixed point, split feasibility, and hierarchical variational inequality problems. The last two chapters are devoted to integrating fixed point theory in spaces with a graph and to the use of retractions in fixed point theory for ordered sets.
- Introduces both metric fixed point theory and graph theory in terms of their disparate foundations and common application environments
- Provides a unique integration of otherwise disparate domains that aids both students seeking to understand either area and researchers interested in establishing an integrated research approach
- Emphasizes solution methods for fixed points in nonlinear problems such as variational inequality, split feasibility, and hierarchical variational inequality problems that are particularly appropriate for engineering and core science applications
Author: Alexander J. Zaslavski
Publisher: Springer Nature
Total Pages: 148
Release: 2020-11-25
Genre: Mathematics
ISBN: 3030603008
This focused monograph presents a study of subgradient algorithms for constrained minimization problems in a Hilbert space. The book is of interest for experts in applications of optimization to engineering and economics. The goal is to obtain a good approximate solution of the problem in the presence of computational errors. The discussion takes into consideration the fact that for every algorithm its iteration consists of several steps and that the computational errors for different steps are, in general, different. The book is especially useful for the reader because it contains solutions to a number of difficult and interesting problems in numerical optimization. The subgradient projection algorithm is one of the most important tools in optimization theory and its applications. An optimization problem is described by an objective function and a set of feasible points. For this algorithm each iteration consists of two steps: the first requires a calculation of a subgradient of the objective function; the second requires a calculation of a projection on the feasible set. The computational errors in these two steps are different. This book shows that the algorithm discussed generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if the computational errors for the two steps of the algorithm are known, one can determine what approximate solution can be obtained and how many iterations one needs to find it. In addition to their mathematical interest, the generalizations considered in this book have a significant practical meaning.
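Here is a minimal sketch of the two-step iteration just described, under assumed error bounds DELTA_F (subgradient step) and DELTA_P (projection step) that play the role of the book's small positive constants; the objective and the feasible set are illustrative, not from the book.

```python
# Minimal sketch (assumed data): an inexact subgradient step followed by an
# inexact projection, with *different* error bounds for the two steps.
import numpy as np

rng = np.random.default_rng(1)
DELTA_F, DELTA_P = 1e-3, 1e-5   # assumed error bounds for the two steps

def subgrad(x):                  # subgradient of f(x) = ||x||_1
    return np.sign(x)

def project(x):                  # projection onto the box [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

def unit_noise():                # bounded perturbation of unit norm
    e = rng.normal(size=2)
    return e / np.linalg.norm(e)

x = np.array([3.0, -2.0])
for k in range(1, 5001):
    g = subgrad(x) + DELTA_F * unit_noise()   # step 1: inexact subgradient
    y = x - (1.0 / k) * g
    x = project(y) + DELTA_P * unit_noise()   # step 2: inexact projection
print(x)  # stays within an O(DELTA_F + DELTA_P) neighborhood of the solution
```

Running this with smaller DELTA_F and DELTA_P shrinks the neighborhood of the solution in which the iterates settle, which is the qualitative behavior the book makes precise.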
Author: Yeol Je Cho
Publisher: Springer Nature
Total Pages: 503
Release: 2021-06-05
Genre: Mathematics
ISBN: 9813366478
This book collects papers on major topics in fixed point theory and its applications. Each chapter is accompanied by basic notions, mathematical preliminaries and proofs of the main results. The book discusses common fixed point theory; convergence theorems; split variational inclusion problems and fixed point problems for asymptotically nonexpansive semigroups; the fixed point property and almost fixed point property in digital spaces; nonexpansive semigroups over CAT(κ) spaces; measures of noncompactness; integral equations; the study of fixed points that are zeros of a given function; best proximity point theory; monotone mappings in modular function spaces; fuzzy contractive mappings; ordered hyperbolic metric spaces; generalized contractions in b-metric spaces; multi-tupled fixed points; functional equations in dynamic programming; and Picard operators. The book addresses the mathematical community working with methods and tools of nonlinear analysis. It also serves as a reference and source of examples and new approaches associated with fixed point theory and its applications, for a wide audience including graduate students and researchers.
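As a minimal concrete companion to the last topic in the list, Picard operators, here is the textbook Picard iteration for a contraction on the real line; the map T is an arbitrary illustrative choice, and Banach's contraction principle guarantees convergence of x_{n+1} = T(x_n) to the unique fixed point.

```python
# Not from the book: Picard iteration for a contraction on R with Lipschitz
# constant 1/2, so the iterates converge to the unique fixed point.
def T(x):
    return 0.5 * x + 1.0      # unique fixed point: x* = 2

x = 100.0
for _ in range(60):
    x = T(x)                  # Picard iteration x_{n+1} = T(x_n)
print(x, T(x) - x)            # x is (numerically) 2; the residual is ~0
```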
Author: Alexander J. Zaslavski
Publisher: Springer Nature
Total Pages: 364
Release: 2020-01-31
Genre: Mathematics
ISBN: 3030378225
The book is devoted to the study of approximate solutions of optimization problems in the presence of computational errors. It contains a number of results on the convergence behavior of algorithms in a Hilbert space, which are known as important tools for solving optimization problems. The research presented in the book is the continuation and further development of the author's book Numerical Optimization with Computational Errors (Springer, 2016), referred to below as [NOCE]. Both books study algorithms taking into account computational errors, which are always present in practice. The main goal is, for a known computational error, to find out what approximate solution can be obtained and how many iterates one needs for this.

The main difference between this new book and the 2016 book is that the present book takes into consideration the fact that, for every algorithm, an iteration consists of several steps and that the computational errors for different steps are, in general, different. This fact, which was not taken into account in the previous book, is indeed important in practice. For example, the subgradient projection algorithm consists of two steps: the first is the calculation of a subgradient of the objective function, while in the second one calculates a projection on the feasible set. Each of these two steps carries its own computational error, and the two errors are different in general. It may happen that the feasible set is simple and the objective function is complicated; as a result, the computational error made when one calculates the projection is essentially smaller than the computational error of the calculation of the subgradient. Clearly, the opposite case is possible too. Another feature of this book is the study of a number of important algorithms which appeared recently in the literature and which are not discussed in the previous book.

This monograph contains 12 chapters. Chapter 1 is an introduction. In Chapter 2 we study the subgradient projection algorithm for minimization of convex and nonsmooth functions; we generalize the results of [NOCE] and establish results which have no prototype in [NOCE]. In Chapter 3 we analyze the mirror descent algorithm for minimization of convex and nonsmooth functions in the presence of computational errors. For this algorithm each iteration consists of two steps: the first is a calculation of a subgradient of the objective function, while in the second one solves an auxiliary minimization problem on the set of feasible points; each of these two steps carries a computational error. Here again we generalize the results of [NOCE] and establish results which have no prototype in [NOCE]. In Chapter 4 we analyze the projected gradient algorithm with a smooth objective function in the presence of computational errors. In Chapter 5 we consider an algorithm which is an extension of the projected gradient algorithm used for solving linear inverse problems arising in signal/image processing. In Chapter 6 we study the continuous subgradient method and the continuous subgradient projection algorithm for minimization of convex nonsmooth functions and for computing the saddle points of convex-concave functions, in the presence of computational errors; none of the results of this chapter has a prototype in [NOCE]. In Chapters 7-12 we analyze several algorithms, not considered in [NOCE], in the presence of computational errors. Again, each step of an iteration has its own computational error, and we take into account that these errors are, in general, different. An optimization problem with a composite objective function is studied in Chapter 7. A zero-sum game with two players is considered in Chapter 8. A predicted decrease approximation-based method is used in Chapter 9 for constrained convex optimization. Chapter 10 is devoted to minimization of quasiconvex functions. Minimization of sharp weakly convex functions is discussed in Chapter 11. Chapter 12 is devoted to a generalized projected subgradient method for minimization of a convex function over a set which is not necessarily convex.

The book is of interest for researchers and engineers working in optimization. It can also be useful in preparation courses for graduate students. The main feature of the book which appeals specifically to this audience is the study of the influence of computational errors on several important optimization algorithms. The book is also of interest for experts in applications of optimization to engineering and economics.
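As a concrete instance of the two-step structure described above, the following is a minimal sketch, under illustrative assumptions that are not taken from the book, of the mirror descent iteration of Chapter 3 in its classical entropic setup on the probability simplex: the first step computes an inexact subgradient (with an assumed error bound EPS), and the second step's auxiliary minimization has the well-known closed-form multiplicative update. The objective, step sizes, and error level are all hypothetical.

```python
# Hedged sketch (assumed setup): entropic mirror descent on the probability
# simplex, with a bounded error injected in the subgradient step.
import numpy as np

rng = np.random.default_rng(2)
EPS = 1e-3                               # assumed subgradient error bound
c = np.array([3.0, 1.0, 2.0])            # linear objective f(x) = c . x

def subgrad(x):                          # (sub)gradient of f
    return c

x = np.ones(3) / 3                       # start at the simplex barycenter
for k in range(1, 2001):
    e = rng.normal(size=3)
    g = subgrad(x) + EPS * e / np.linalg.norm(e)   # step 1: inexact subgradient
    x = x * np.exp(-(1.0 / np.sqrt(k)) * g)        # step 2: the entropic
    x = x / x.sum()                                #   auxiliary problem in closed form
print(x)   # concentrates near (0, 1, 0), the minimizer of c . x on the simplex
```

Even with the injected errors, the iterates settle near the true minimizer; shrinking EPS shrinks the neighborhood, which is the qualitative behavior the book quantifies.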
Author: Hemant Kumar Pathak
Publisher: Springer
Total Pages: 845
Release: 2018-05-19
Genre: Mathematics
ISBN: 9811088667
ISBN | : 9811088667 |
This book systematically introduces the theory of nonlinear analysis, providing an overview of topics such as geometry of Banach spaces, differential calculus in Banach spaces, monotone operators, and fixed point theorems. It also discusses degree theory, nonlinear matrix equations, control theory, differential and integral equations, and inclusions. The book presents surjectivity theorems, variational inequalities, stochastic game theory and mathematical biology, along with a large number of applications of these theories in various other disciplines. Nonlinear analysis is characterised by its applications in numerous interdisciplinary fields, ranging from engineering to space science, hydromechanics to astrophysics, chemistry to biology, theoretical mechanics to biomechanics and economics to stochastic game theory. Organised into ten chapters, the book shows the elegance of the subject and its deep-rooted concepts and techniques, which provide the tools for developing more realistic and accurate models for a variety of phenomena encountered in diverse applied fields. It is intended for graduate and undergraduate students of mathematics and engineering who are familiar with discrete mathematical structures, differential and integral equations, operator theory, measure theory, Banach and Hilbert spaces, locally convex topological vector spaces, and linear functional analysis.