Intelligent Systems

Author: Ricardo Cerri
Publisher: Springer Nature
Total Pages: 682
Release: 2020-10-15
Genre: Computers
ISBN: 3030613801

The two-volume set LNAI 12319 and 12320 constitutes the proceedings of the 9th Brazilian Conference on Intelligent Systems, BRACIS 2020, held in Rio Grande, Brazil, in October 2020. The 90 papers presented in these two volumes were carefully reviewed and selected from 228 submissions. The contributions are organized in the following topical sections: Part I: Evolutionary computation, metaheuristics, constraints and search, combinatorial and numerical optimization; neural networks, deep learning and computer vision; and text mining and natural language processing. Part II: Agent and multi-agent systems, planning and reinforcement learning; knowledge representation, logic and fuzzy systems; machine learning and data mining; and multidisciplinary artificial and computational intelligence and applications. Due to the COVID-19 pandemic, BRACIS 2020 was held as a virtual event.

Constrained Markov Decision Processes

Author: Eitan Altman
Publisher: Routledge
Total Pages: 256
Release: 2021-12-17
Genre: Mathematics
ISBN: 1351458248

This book provides a unified approach to the study of constrained Markov decision processes with a finite state space and unbounded costs. Unlike the single-objective case considered in many other books, the author considers a single controller with several objectives, such as minimizing delays and loss probabilities while maximizing throughputs. It is desirable to design a controller that minimizes one cost objective, subject to inequality constraints on the other cost objectives. This framework describes dynamic decision problems that arise frequently in many engineering fields. A thorough overview of these applications is presented in the introduction. The book is then divided into three sections that build upon each other.
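The design goal described in this blurb has a compact standard statement. As a sketch in commonly used (generic) notation, not taken from the book itself, a policy \(\pi\) is sought that solves

```latex
\min_{\pi} \; C_0(\pi)
\quad \text{subject to} \quad
C_k(\pi) \le V_k, \qquad k = 1, \dots, K,
```

where \(C_0\) is the primary cost (e.g., expected delay), the \(C_k\) are the constrained costs (e.g., loss probabilities), each an expected discounted or average cost under policy \(\pi\), and the \(V_k\) are prescribed bounds.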

Partially Observed Markov Decision Processes

Author: Vikram Krishnamurthy
Publisher: Cambridge University Press
Total Pages: 491
Release: 2016-03-21
Genre: Mathematics
ISBN: 1107134609

This book covers formulation, algorithms, and structural results of partially observed Markov decision processes, whilst linking theory to real-world applications in controlled sensing. Computations are kept to a minimum, enabling students and researchers in engineering, operations research, and economics to understand the methods and determine the structure of their optimal solution.

Sustainable City Logistics Planning

Author: Anjali Awasthi
Publisher:
Total Pages: 314
Release: 2020-02-26
Genre: City planning
ISBN: 9781536166095

Modern cities are facing the growing problems of congestion, poor air quality, and lack of public space. To improve the condition of goods transport in cities, sustainable city logistics planning is essential. It requires a collaborative approach among city logistics stakeholders to consolidate goods distribution inside city centers and minimize the negative impacts on city residents and their environment. The book presents theoretical studies, the state of the art, and practical applications in the area of sustainable city logistics. It is composed of nine chapters, briefly described as follows.

Chapter 1 by Sharfuddin Ahmed Khan and Syed Tahaur Rehman presents a review of the literature and future prospects of sustainable city logistics. Globalization and governmental rules and regulations compel decision makers and managers to incorporate sustainability into every aspect of their decision making (DM), specifically in city logistics. The area of sustainable city logistics is still in its developing stage, and few authors explore sustainability aspects in city logistics. The focus of this chapter is to review the existing city logistics literature that considers sustainability in DM. A total of 40 articles published between 2010 and 2019 are considered and categorized in terms of the objective of the study, the area of research focus (qualitative, quantitative, case study, etc.), and multi-criteria DM methods. Finally, future prospects and directions in sustainable city logistics are proposed.

Chapter 2 by Sättar Ezzati presents challenges and opportunities in maritime logistics empty container repositioning. Maritime logistics and freight transportation are extensive and complex sectors that involve large material resources and intricate trade-offs between the various decisions and policies affecting different components.
Because of globalization, e-markets, and high levels of customization, the sector has faced diverse challenges at different levels of planning, including design, scheduling, fleet sizing, decisions about container ownership and leasing, empty container repositioning, uncertainty, and collaboration opportunities; these have already provoked advanced coordination and intelligent optimization techniques for global operations from strategic and tactical perspectives. Much of this chapter concentrates on the empty container repositioning problem, potential pathways to address it, and how container shipping companies can handle this challenge with the help of operations research techniques, from the perspective of the shipping industry. To do so, the chapter presents a comprehensive and systematic literature review, mainly focused on recent publications on the logistics challenges that maritime industries are facing.

Chapter 3 by Yisha Luo, Ali Alaghbandrad, Tersoo Kelechukwu, and Amin Hammad addresses the theme of smart multi-purpose utility tunnels. In terms of sustainable practices, the conventional method of open-cut utility installation has proven to be a short-term solution, considering its negative impact on the environment and its socially disruptive nature. An alternative to open-cut utility installation is multi-purpose utility tunnels (MUTs), which offer an economical, sustainable, and easy-to-manage-and-inspect method of utility placement. The risks associated with MUTs are both natural and man-made. As a way of tackling these risks, smart MUTs equipped with sensors will reduce the effects of the risks while supporting operation and maintenance processes for MUT operators. To enhance decision making, data collected from the sensors are used in MUT Information Modelling (MUTIM).
MUTIM includes the utility tunnel structural model with utilities, equipment, sensors, and devices that can be used for emergency management, increasing the sustainability and resilience of smart cities.

Chapter 4 by Léonard Ryo Morin, Fabian Bastin, Emma Frejinger, and Martin Trépanier models truck route choices in an urban area using a recursive logit model and GPS data. The authors explore the use of GPS devices to capture heavy truck routes in the Montreal urban road network. The main focus lies on trips that originate from or are destined for intermodal terminals (rail yards, ports). They descriptively analyse the GPS data and use them to estimate a recursive logit model by means of maximum likelihood. The results show the main factors affecting route choice decisions. Using this type of predictive model when planning and designing the transport network near intermodal terminals could offer opportunities to reduce the negative impacts of truck movements, such as CO2 emissions.

Chapter 5 by Akolade Adegoke presents a literature review on benchmarking port sustainability performance. Sustainable development agendas are challenging the world, and ports in particular, to find ways to become more efficient while meeting economic, social, and environmental objectives. Although there is a considerable body of documentation on green port practices and performance in Europe and America, there is limited synthesis on the evaluation of sustainable practices in the context of Canadian ports. This chapter provides a review of the literature and initiatives employed by global port authorities and identifies major sustainability performance indicators.

Chapter 6 by Silke Hoehl, Kai-Oliver Schocke, and Petra Schaefer presents analysis and recommendations of delivery strategies in urban and suburban areas. A research series on commercial transport in the region of Frankfurt/Main (Germany) started in 2014.
The first project dealt with commercial transport in the city centre of Frankfurt/Main. One hypothesis was that courier, express, and parcel (CEP) vehicles were congesting the streets. A database was built by collecting data in two streets in the centre of Frankfurt. Contrary to expectations, a significant part of commercial transport turned out to be caused by craftsmen's vehicles. In 2016 the second project examined the delivery strategies of four CEP companies in Frankfurt. One research question was whether CEP companies use different delivery strategies in different parts of the city. To this end, 40 delivery tours were accompanied and data were collected on, for example, the number of stops, number of parcels per stop, vehicle type, transport situation, parking situation, shift length, and GPS track. In parallel, the traffic situation in several districts of Frankfurt was analyzed. In a third part, the two streams were brought together to recommend delivery strategies for CEP companies, along with useful insights for local authorities. A third project in the research series has just begun, extending the study area to the entire Rhein-Main region. It deals with commercial transport and faces the challenge of managing it at a low emission level. On the one hand, the methodologies of the two preceding projects will be applied to a suburban area in the region, and recommendations will be developed. On the other hand, loading zones for electric vehicles in Frankfurt will be identified and developed. After that, a conference will give a wide overview of existing delivery concepts. By pointing out critical situations in the delivery chain, the whole last mile will be described.

Chapter 7 by Shuai Ma, Jia Yu, and Ahmet Satir presents a scheme for sequential decision making with a risk-sensitive objective and constraints in a dynamic scenario. A neural network is trained as an approximator of the mapping from the parameter space to the space of risk and policy under risk-sensitive constraints.
For a given risk-sensitive problem, in which the objective and constraints are, or can be estimated by, functions of the mean and variance of the return, a synthetic dataset is generated as training data. Parameters defining a targeted process might be dynamic, i.e., they might vary over time, so they are sampled within specified intervals to deal with these dynamics. The authors show that (i) most risk measures can be estimated from the return variance; (ii) by virtue of the state-augmentation transformation, practical problems modeled by Markov decision processes with stochastic rewards can be solved in a risk-sensitive scenario; and (iii) the proposed scheme is validated by a numerical experiment.

Chapter 8 by J.H.R. van Duin, B. Enserink, J.J. Daleman, and M. Vaandrager addresses the theme of sustainable alternatives selection for parcel delivery. The GHG emissions of the transport sector are still increasing. This trend is accompanied by the strong growth of the e-commerce sector, leading to more transport movements on our road networks. In order to mitigate the externalities of the e-commerce-related parcel delivery market and try to make it more sustainable, the following research question has been drafted: How could the last mile parcel delivery process beco

Active Inference

Author: Thomas Parr
Publisher: MIT Press
Total Pages: 313
Release: 2022-03-29
Genre: Science
ISBN: 0262362287

The first comprehensive treatment of active inference, an integrative perspective on brain, cognition, and behavior used across multiple disciplines. Active inference is a way of understanding sentient behavior—a theory that characterizes perception, planning, and action in terms of probabilistic inference. Developed by theoretical neuroscientist Karl Friston over years of groundbreaking research, active inference provides an integrated perspective on brain, cognition, and behavior that is increasingly used across multiple disciplines including neuroscience, psychology, and philosophy. Active inference puts the action into perception. This book offers the first comprehensive treatment of active inference, covering theory, applications, and cognitive domains. Active inference is a “first principles” approach to understanding behavior and the brain, framed in terms of a single imperative to minimize free energy. The book emphasizes the implications of the free energy principle for understanding how the brain works. It first introduces active inference both conceptually and formally, contextualizing it within current theories of cognition. It then provides specific examples of computational models that use active inference to explain such cognitive phenomena as perception, attention, memory, and planning.
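The "single imperative" mentioned in the blurb is usually formalized as minimization of variational free energy. As a standard sketch in generic notation (not taken from this book), for observations \(o\) and a belief \(q(s)\) over hidden states \(s\):

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
      \;=\; D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] \;-\; \ln p(o).
```

Minimizing \(F\) with respect to \(q\) drives the belief toward the posterior \(p(s \mid o)\) (perception), while minimizing it through action reduces surprise \(-\ln p(o)\); both fall out of the same quantity.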

Handbook of Reinforcement Learning and Control

Author: Kyriakos G. Vamvoudakis
Publisher: Springer Nature
Total Pages: 833
Release: 2021-06-23
Genre: Technology & Engineering
ISBN: 3030609901

This handbook presents state-of-the-art research in reinforcement learning, focusing on its applications in the control and game theory of dynamic systems and future directions for related research and technology. The contributions gathered in this book deal with challenges faced when using learning and adaptation methods to solve academic and industrial problems, such as optimization in dynamic environments with single and multiple agents, convergence and performance analysis, and online implementation. They explore means by which these difficulties can be solved, and cover a wide range of related topics including: deep learning; artificial intelligence; applications of game theory; mixed modality learning; and multi-agent reinforcement learning. Practicing engineers and scholars in the field of machine learning, game theory, and autonomous control will find the Handbook of Reinforcement Learning and Control to be thought-provoking, instructive and informative.

Markov Decision Processes

Author: Martin L. Puterman
Publisher: John Wiley & Sons
Total Pages: 544
Release: 2014-08-28
Genre: Mathematics
ISBN: 1118625870

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential." —Zentralblatt für Mathematik ". . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes." —Journal of the American Statistical Association

Reinforcement Learning and Dynamic Programming Using Function Approximators

Author: Lucian Busoniu
Publisher: CRC Press
Total Pages: 280
Release: 2017-07-28
Genre: Computers
ISBN: 1439821097

From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. 
The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.
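As a concrete illustration of the first of the three algorithm classes named in this blurb, here is a minimal value-iteration sketch on a toy two-state MDP. The transition and reward numbers are invented for illustration and are not drawn from the book:

```python
# Minimal value iteration on a toy 2-state, 2-action MDP.
# All transition probabilities and rewards are illustrative.

GAMMA = 0.9   # discount factor
THETA = 1e-8  # convergence threshold

# P[s][a] = list of (probability, next_state, reward) triples
P = {
    0: {0: [(1.0, 0, 0.0)],                   # stay in s0, no reward
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},   # risky attempt to reach s1
    1: {0: [(1.0, 1, 2.0)],                   # stay in s1, collect reward 2
        1: [(1.0, 0, 0.0)]},                  # go back to s0
}

def q_value(V, s, a, gamma):
    """Expected one-step return of action a in state s under values V."""
    return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

def value_iteration(P, gamma=GAMMA, theta=THETA):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:  # in-place (Gauss-Seidel) sweep over states
            best = max(q_value(V, s, a, gamma) for a in P[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # Greedy policy with respect to the converged values
    policy = {s: max(P[s], key=lambda a: q_value(V, s, a, gamma)) for s in P}
    return V, policy

V, pi = value_iteration(P)
print(pi)  # -> {0: 1, 1: 0}: head for s1, then stay there
```

Policy iteration and policy search, the other two classes, differ in what they update (an explicit policy rather than a value table), but the Bellman backup inside `q_value` is the shared building block.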