Interpretability of Computational Intelligence-Based Regression Models

Author: Tamás Kenesei
Publisher: Springer
Total Pages: 89
Release: 2015-10-22
Genre: Computers
ISBN: 3319219421

The key idea of this book is that hinging hyperplanes, neural networks and support vector machines can be transformed into fuzzy models, and interpretability of the resulting rule-based systems can be ensured by special model reduction and visualization techniques. The first part of the book deals with the identification of hinging hyperplane-based regression trees. The next part deals with the validation, visualization and structural reduction of neural networks based on the transformation of the hidden layer of the network into an additive fuzzy rule base system. Finally, based on the analogy of support vector regression and fuzzy models, a three-step model reduction algorithm is proposed to get interpretable fuzzy regression models on the basis of support vector regression. The authors demonstrate real-world use of the algorithms with examples taken from process engineering, and they support the text with downloadable Matlab code. The book is suitable for researchers, graduate students and practitioners in the areas of computational intelligence and machine learning.
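For orientation, here is a minimal sketch (my own illustration, not the book's downloadable Matlab code) of the basic building block discussed in the first part: a hinging hyperplane, i.e. the maximum or minimum of two affine functions of the input. The function name hinge and the toy parameters below are assumptions chosen only for illustration.

```python
import numpy as np

def hinge(x, theta_plus, theta_minus, use_max=True):
    """Evaluate one hinging hyperplane at inputs x (rows are samples).

    theta_plus / theta_minus hold [bias, w_1, ..., w_d] for the two planes.
    """
    x1 = np.column_stack([np.ones(len(x)), x])   # prepend a bias column
    a, b = x1 @ theta_plus, x1 @ theta_minus     # evaluate both hyperplanes
    return np.maximum(a, b) if use_max else np.minimum(a, b)

# With theta_plus = [0, 1] and theta_minus = [0, -1], the hinge reproduces |x|.
x = np.linspace(-2, 2, 5).reshape(-1, 1)
print(hinge(x, np.array([0.0, 1.0]), np.array([0.0, -1.0])))
```

Regression trees built from such hinge functions are what the book then reinterprets as fuzzy rule bases.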

Interpretable Machine Learning

Author: Christoph Molnar
Publisher: Lulu.com
Total Pages: 320
Release: 2020
Genre: Computers
ISBN: 0244768528

This book is about making machine learning models and their decisions interpretable. After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. Later chapters focus on general model-agnostic methods for interpreting black box models like feature importance and accumulated local effects and explaining individual predictions with Shapley values and LIME. All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project.
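For a flavor of the model-agnostic methods the book covers, here is a minimal, hand-rolled sketch of permutation feature importance (my own illustration, not code from the book): shuffle one feature at a time and measure how much the model's error grows. The random-forest model and synthetic data are arbitrary assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 3 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=400)
model = RandomForestRegressor(random_state=0).fit(X, y)

baseline = mean_squared_error(y, model.predict(X))
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature's link to y
    increase = mean_squared_error(y, model.predict(X_perm)) - baseline
    print(f"feature {j}: increase in MSE = {increase:.3f}")
```

A large increase in error after shuffling a feature indicates the model relies on it; a near-zero increase suggests it is unimportant to the model.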

Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

Author: Wojciech Samek
Publisher: Springer Nature
Total Pages: 435
Release: 2019-09-10
Genre: Computers
ISBN: 3030289540

The development of “intelligent” systems that can take decisions and act autonomously might lead to faster and more consistent decisions. A limiting factor for broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
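One widely used technique for interpreting and visualizing deep networks of the kind surveyed here is a gradient saliency map: the gradient of a class score with respect to the input pixels. The sketch below is a hedged, self-contained illustration using an untrained toy PyTorch classifier, not an example taken from the book.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained image classifier (architecture is an assumption).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # one RGB image
score = model(x)[0].max()        # score of the most likely class
score.backward()                 # back-propagate to the input
saliency = x.grad.abs().max(dim=1).values  # per-pixel importance, shape (1, 32, 32)
```

Pixels with a large gradient magnitude are those whose small perturbation would change the class score most, which is the intuition behind saliency-style explanations.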

Genetic Programming Theory and Practice II

Author: Una-May O'Reilly
Publisher: Springer Science & Business Media
Total Pages: 330
Release: 2006-03-16
Genre: Computers
ISBN: 0387232540

The work described in this book was first presented at the Second Workshop on Genetic Programming Theory and Practice, organized by the Center for the Study of Complex Systems at the University of Michigan, Ann Arbor, 13-15 May 2004. The goal of this workshop series is to promote the exchange of research results and ideas between those who focus on Genetic Programming (GP) theory and those who focus on the application of GP to various real-world problems. In order to facilitate these interactions, the number of talks and participants was kept small and the time for discussion was large. Further, participants were asked to review each other's chapters before the workshop. Those reviewer comments, as well as discussion at the workshop, are reflected in the chapters presented in this book. Additional information about the workshop, addenda to chapters, and a site for continuing discussions by participants and by others can be found at http://cscs.umich.edu:8000/GPTP-20041. We thank all the workshop participants for making the workshop an exciting and productive three days. In particular, we thank all the authors, without whose hard work and creative talents neither the workshop nor the book would have been possible. We also thank our keynote speakers Lawrence ("Dave") Davis of NuTech Solutions, Inc., Jordan Pollack of Brandeis University, and Richard Lenski of Michigan State University, who delivered three thought-provoking speeches that inspired a great deal of discussion among the participants.

Explanatory Model Analysis

Author: Przemyslaw Biecek
Publisher: CRC Press
Total Pages: 312
Release: 2021-02-15
Genre: Business & Economics
ISBN: 0429651376

Explanatory Model Analysis: Explore, Explain and Examine Predictive Models presents a set of methods and tools designed to build better predictive models and to monitor their behaviour in a changing environment. Today, the true bottleneck in predictive modelling is neither the lack of data, nor the lack of computational power, nor inadequate algorithms, nor the lack of flexible models. It is the lack of tools for model exploration (extraction of relationships learned by the model), model explanation (understanding the key factors influencing model decisions) and model examination (identification of model weaknesses and evaluation of a model's performance). This book presents a collection of model-agnostic methods that may be used for any black-box model, together with real-world applications to classification and regression problems.
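As a concrete example of "extraction of relationships learned by the model", here is a minimal, hand-rolled partial dependence sketch (an illustration under my own assumptions, not the book's code): sweep one feature over a grid and average the model's predictions over the data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = GradientBoostingRegressor().fit(X, y)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pdp = []
for value in grid:
    X_mod = X.copy()
    X_mod[:, 0] = value                       # fix feature 0 at this value everywhere
    pdp.append(model.predict(X_mod).mean())   # average prediction over the data
# (grid, pdp) traces how the model's output depends on feature 0 on average.
```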

Machine Learning and Artificial Intelligence in Radiation Oncology

Author: Barry S. Rosenstein
Publisher: Academic Press
Total Pages: 480
Release: 2023-12-02
Genre: Science
ISBN: 0128220015

Machine Learning and Artificial Intelligence in Radiation Oncology: A Guide for Clinicians is designed to bring practical concepts in machine learning to clinical radiation oncology. It fills an existing void by educating practicing clinicians about how machine learning can be used to improve clinical and patient-centered outcomes. The book is divided into three sections: the first addresses fundamental concepts of machine learning and radiation oncology, detailing techniques applied in genomics; the second discusses translational opportunities, such as radiogenomics and autosegmentation; and the final section covers current clinical applications, including clinical decision making, how to integrate AI into the workflow, use cases, and cross-collaborations with industry. The book is a valuable resource for oncologists, radiologists, and other members of the biomedical field who need to learn more about machine learning as a support for radiation oncology.
- Presents content written by practicing clinicians and research scientists, allowing a healthy mix of new clinical ideas and perspectives on how to translate research findings into the clinic
- Provides perspectives from artificial intelligence (AI) industry researchers to discuss novel theoretical approaches and possibilities for academic collaboration
- Brings diverse points of view from an international group of experts to provide more balanced viewpoints on a complex topic

AI 2023: Advances in Artificial Intelligence

Author: Tongliang Liu
Publisher: Springer Nature
Total Pages: 509
Release: 2023-11-26
Genre: Computers
ISBN: 9819983916

This two-volume set LNAI 14471-14472 constitutes the refereed proceedings of the 36th Australasian Joint Conference on Artificial Intelligence, AI 2023, held in Brisbane, QLD, Australia, during November 28 – December 1, 2023. The 23 full papers, presented together with 59 short papers, were carefully reviewed and selected from 213 submissions. They are organized around the following topics: computer vision; deep learning; machine learning and data mining; optimization; medical AI; knowledge representation and NLP; explainable AI; reinforcement learning; and genetic algorithms.

Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning

Author: Uday Kamath
Publisher: Springer Nature
Total Pages: 328
Release: 2021-12-15
Genre: Computers
ISBN: 3030833569

This book is written both for readers entering the field and for practitioners with a background in AI and an interest in developing real-world applications. The book is a great resource for practitioners and researchers in both industry and academia, and the discussed case studies and associated material can serve as inspiration for a variety of projects and hands-on assignments in a classroom setting. I will certainly keep this book as a personal resource for the courses I teach, and strongly recommend it to my students. --Dr. Carlotta Domeniconi, Associate Professor, Computer Science Department, GMU

This book offers a curriculum for introducing interpretability to machine learning at every stage. The authors provide compelling examples that a core teaching practice like leading interpretive discussions can be taught and learned by teachers with sustained effort. And what better way to strengthen the quality of AI and machine learning outcomes. I hope that this book will become a primer for teachers, data science educators, and ML developers, and that together we practice the art of interpretive machine learning. --Anusha Dandapani, Chief Data and Analytics Officer, UNICC and Adjunct Faculty, NYU

This is a wonderful book! I'm pleased that the next generation of scientists will finally be able to learn this important topic. This is the first book I've seen that has up-to-date and well-rounded coverage. Thank you to the authors! --Dr. Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, and Biostatistics & Bioinformatics

Literature on Explainable AI has until now been relatively scarce, featuring mainly mainstream algorithms like SHAP and LIME. This book closes that gap by providing an extremely broad review of the various algorithms proposed in scientific circles over the previous 5-10 years. It is a great guide for anyone who is new to the field of XAI or is already familiar with it and willing to expand their knowledge. A comprehensive review of state-of-the-art Explainable AI methods, starting from visualization, interpretable methods, local and global explanations, and time series methods, and finishing with deep learning, provides an unparalleled source of information currently unavailable anywhere else. Additionally, notebooks with vivid examples are a great supplement that makes the book even more attractive for practitioners of any level. Overall, the authors provide readers with an enormous breadth of coverage without losing sight of practical aspects, which makes this book truly unique and a great addition to the library of any data scientist. --Dr. Andrey Sharapov, Product Data Scientist, Explainable AI Expert and Speaker, Founder of the Explainable AI-XAI Group

Interpretable Machine Learning with Python

Author: Serg Masís
Publisher: Packt Publishing Ltd
Total Pages: 737
Release: 2021-03-26
Genre: Computers
ISBN: 1800206577

A deep and detailed dive into the key aspects and challenges of machine learning interpretability, complete with the know-how to overcome and leverage them to build fairer, safer, and more reliable models.

Key Features
- Learn how to extract easy-to-understand insights from any machine learning model
- Become well-versed with interpretability techniques to build fairer, safer, and more reliable models
- Mitigate risks in AI systems before they have broader implications by learning how to debug black-box models

Book Description
Do you want to gain a deeper understanding of your models and better mitigate poor prediction risks associated with machine learning interpretation? If so, then Interpretable Machine Learning with Python deserves a place on your bookshelf. The book starts with the fundamentals of interpretability, its relevance in business, and its key aspects and challenges. As you progress through the chapters, you'll focus on how white-box models work, compare them with black-box and glass-box models, and examine their trade-offs. You'll also get up to speed with a vast array of interpretation methods, also known as Explainable AI (XAI) methods, and how to apply them to different use cases, whether for classification or regression, and for tabular, time-series, image or text data. In addition to step-by-step code, the book helps you interpret model outcomes using examples. You'll get hands-on with tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. The methods range from state-of-the-art feature selection and dataset debiasing methods to monotonic constraints and adversarial retraining. By the end of this book, you'll be able to understand ML models better and enhance them through interpretability tuning.

What you will learn
- Recognize the importance of interpretability in business
- Study models that are intrinsically interpretable, such as linear models, decision trees, and Naïve Bayes
- Become well-versed in interpreting models with model-agnostic methods
- Visualize how an image classifier works and what it learns
- Understand how to mitigate the influence of bias in datasets
- Discover how to make models more reliable with adversarial robustness
- Use monotonic constraints to make fairer and safer models

Who this book is for
This book is primarily written for data scientists, machine learning developers, and data stewards who find themselves under increasing pressure to explain the workings of AI systems, their impact on decision making, and how they identify and manage bias. It's also a useful resource for self-taught ML enthusiasts and beginners who want to go deeper into the subject, though a solid grasp of the Python programming language and ML fundamentals is needed to follow along.
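To make one of the listed techniques concrete, the sketch below shows monotonic constraints with scikit-learn's histogram-based gradient boosting (a hedged illustration with made-up data, not code from the book): the monotonic_cst argument forces predictions to be non-decreasing in a chosen feature.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(1000, 2))
# Target rises with feature 0; feature 1 carries no signal.
y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=1000)

# 1 = monotonically increasing, -1 = decreasing, 0 = unconstrained (per feature).
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0]).fit(X, y)

# Predictions can only increase as feature 0 grows, all else held fixed.
probe = np.column_stack([np.linspace(0, 1, 5), np.full(5, 0.5)])
print(model.predict(probe))
```

Constraints like this are one way to place the "guardrails" the description mentions: domain knowledge about a feature's direction of effect is built into the model rather than checked after the fact.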