Explainable Artificial Intelligence: Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part IV (Communications in Computer and Information Science #2156)
by Luca Longo, Christin Seifert, Sebastian Lapuschkin
This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17–19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
Explainable Edge AI: A Futuristic Computing Perspective (Studies in Computational Intelligence #1072)
by Ankit Garg, Aboul Ella Hassanien, Deepak Gupta, Anuj Kumar Singh
This book presents explainability in edge AI, an amalgamation of edge computing and AI. The issues of transparency, fairness, accountability, explainability, interpretability, data fusion, and comprehensibility that are significant for edge AI are addressed through explainable models and techniques. The concept of explainable edge AI is new to the academic and research community, and consequently it opens up multiple research dimensions. In the futuristic computing scenario, the goal of explainable edge AI is to execute AI tasks and produce explainable results at the edge. First, the book explains the fundamental concepts of explainable artificial intelligence (XAI), then it describes the concept of explainable edge AI, and finally it elaborates on the technicalities of explainability in edge AI. Owing to the rapid transition in the current computing scenario and the integration of the latest AI-based technologies, it is important to facilitate people-centric computing through explainable edge AI. Explainable edge AI will enable higher prediction accuracy with comprehensible decisions and traceability of actions performed at the edge, and will have a significant impact on futuristic computing scenarios. This book is highly relevant to graduate/postgraduate students, academicians, researchers, engineers, professionals, and other personnel working in artificial intelligence, machine learning, and intelligent systems.
Explainable Fuzzy Systems: Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems (Studies in Computational Intelligence #970)
by Luis Magdalena, Ciro Castiello, Corrado Mencar, Jose Maria Alonso Moral
The importance of Trustworthy and Explainable Artificial Intelligence (XAI) is recognized in academia, industry, and society. This book introduces tools for dealing with imprecision and uncertainty in XAI applications where explanations are demanded, mainly in natural language. The design of Explainable Fuzzy Systems (EXFS) is rooted in interpretable fuzzy systems, which are thoroughly covered in the book. The idea of interpretability in fuzzy systems, grounded in mathematical constraints and assessment functions, is first introduced. Then, design methodologies are described. Finally, the book shows with practical examples how to design EXFS from interpretable fuzzy systems and natural language generation. This approach is supported by open-source software. The book is intended for researchers, students, and practitioners who wish to explore EXFS from theoretical and practical viewpoints. The breadth of coverage will inspire novel applications and scientific advancements.
Explainable IoT Applications: A Demystification (Information Systems Engineering and Management #21)
by Suneeta Satpathy, Sachi Nandan Mohanty, Subhendu Kumar Pani, Xiaochun Cheng
Explainable IoT Applications: A Demystification is an in-depth guide that examines the intersection of the Internet of Things (IoT) with AI and machine learning, focusing on the crucial need for transparency and interpretability in IoT systems. As IoT devices become more integrated into daily life, from smart homes to industrial automation, it is increasingly important to understand and trust the decisions they make. The book starts by covering the basics of IoT, highlighting its importance in modern technology and its wide-ranging applications in fields such as healthcare, transportation, and smart cities. It then delves into the concept of explainability, stressing the need to prevent IoT systems from being perceived as opaque, black-box operations. The authors explore various techniques and methods for achieving explainability, including rule-based systems and machine learning models, while also addressing the challenge of balancing explainability with performance. Through practical examples, the book shows how explainability can be successfully implemented in IoT applications, such as smart healthcare systems. Furthermore, the book addresses the significant challenges of securing IoT systems in an increasingly connected world. It examines the unique vulnerabilities that come with the widespread use of IoT devices, such as data breaches, cyberattacks, and privacy issues, and discusses the complexities of managing these risks. The authors emphasize the importance of security strategies that strike a balance between fostering innovation and protecting user data. The book concludes with a comprehensive exploration of the challenges and opportunities in making IoT systems more transparent and interpretable, offering valuable insights for researchers, developers, and decision-makers aiming to create IoT applications that are both trustworthy and understandable.
Explainable Machine Learning Models and Architectures
by Suman Lata Tripathi, Mufti Mahmud
This cutting-edge volume covers hardware architecture implementation, software implementation approaches, and efficient hardware for machine learning applications. Machine learning and deep learning modules are now an integral part of many smart and automated systems where signal processing is performed at different levels. Signal processing of text, images, or video requires large-scale data computation at the desired data rate and accuracy. Large data volumes demand more integrated circuit (IC) area, and the embedded bulk memories they require increase that area further. Trade-offs between power consumption, delay, and IC area are a constant concern of designers and researchers. New hardware architectures and accelerators are needed to explore and experiment with efficient machine learning models. Many real-time applications, such as the processing of biomedical data in healthcare, smart transportation, satellite image analysis, and IoT-enabled systems, leave considerable scope for improvement in accuracy, speed, computational power, and overall power consumption. This book deals with efficient machine and deep learning models that exploit high-speed processors with reconfigurable architectures such as graphics processing units (GPUs) and field programmable gate arrays (FPGAs), or any hybrid system. Whether for the veteran engineer or scientist working in the field or laboratory, or the student or academic, this is a must-have for any library.
Explainable Machine Learning for Multimedia Based Healthcare Applications
by Deepak Gupta, M. Shamim Hossain, Utku Kose
This book covers the latest research on explainable machine learning used in multimedia-based healthcare applications. The content includes not only introductions to applied research efforts but also theoretical discussions targeting open problems and future insights. Comprehensive topic coverage is ensured by focusing on notable healthcare problems solved with artificial intelligence. Because medical data processing today is often associated with multimedia, the book emphasizes research studies involving multimedia data processing.
Explainable Machine Learning in Medicine (Synthesis Lectures on Engineering, Science, and Technology)
by Karol Przystalski, Rohit M. Thanki
This book covers a variety of advanced communications technologies that can be used to analyze medical data and diagnose diseases in clinical centers. It serves as a primer of methods for medicine, providing an overview of explainable artificial intelligence (AI) techniques that can be applied to different medical challenges. The authors discuss how to select and apply the proper technology depending on the data provided and the analysis desired. Because a variety of data types are used in the medical field, the book explains how to deal with the challenges associated with each type. A number of scenarios that can arise in real-time environments are introduced, each paired with a type of machine learning that can be used to solve it.
Explainable Neural Networks Based on Fuzzy Logic and Multi-criteria Decision Tools (Studies in Fuzziness and Soft Computing #408)
by József Dombi, Orsolya Csiszár
The research presented in this book shows how combining deep neural networks with a special class of fuzzy logical rules and multi-criteria decision tools can make deep neural networks more interpretable, and in many cases more efficient. Fuzzy logic together with multi-criteria decision-making tools provides very powerful means for modeling human thinking. Based on their common theoretical basis, we propose a consistent framework for modeling human thinking using the tools of all three fields: fuzzy logic, multi-criteria decision-making, and deep learning. This helps reduce the black-box nature of neural models, a challenge of vital importance to the whole research community.
Explainable Uncertain Rule-Based Fuzzy Systems
by Jerry M. Mendel
The third edition of this textbook presents a further updated approach to fuzzy sets and systems that can model uncertainty, i.e., “type-2” fuzzy sets and systems. The author demonstrates how to overcome the limitations of classical fuzzy sets and systems, enabling a wide range of applications, from time-series forecasting to knowledge mining, classification, control, and explainable AI (XAI). This latest edition again begins by introducing classical (type-1) fuzzy sets and systems, and then explains how they can be modified to handle uncertainty, leading to type-2 fuzzy sets and systems. New material covers how to obtain the fuzzy set word models needed for XAI, similarity of fuzzy sets, a quantitative methodology that explains in a simple way why the different kinds of fuzzy systems can offer performance improvements over one another, and new parameterizations of membership functions that can achieve even greater performance for all kinds of fuzzy systems. For hands-on experience, the book provides information on accessing MATLAB, Java, and Python software that complements the content. The book features a full suite of classroom material.
Explainable and Customizable Job Sequencing and Scheduling: Advancing Production Control and Management with XAI (SpringerBriefs in Applied Sciences and Technology)
by Tin-Chih Toly Chen
This book systematically reviews the progress in explainable AI (XAI) and introduces the methods, tools, and applications of XAI technologies in job sequencing and scheduling. Relevant references and real case studies are provided as supporting evidence. To date, artificial intelligence (AI) technologies have been widely applied in job sequencing and scheduling. However, some advanced AI methods are not easy to understand or communicate, especially for factory workers with insufficient background knowledge of AI. This undoubtedly limits the practicability of these methods. To address this issue, explainable AI has been considered a viable strategy. XAI methods suitable for job sequencing and scheduling differ from those for other fields in manufacturing, such as pattern recognition, defect analysis, estimation, and prediction. This is the first book to systematically integrate current knowledge in XAI and demonstrate its application to manufacturing.
Explainable and Interpretable Models in Computer Vision and Machine Learning (The Springer Series on Challenges in Machine Learning)
by Xavier Baró, Sergio Escalera, Hugo Jair Escalante, Isabelle Guyon, Yağmur Güçlütürk, Umut Güçlü, Marcel Van Gerven
This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning. Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: What is the rationale behind the decision made? What in the model structure explains its functioning? Hence, while good performance is a critical characteristic for learning machines, explainability and interpretability capabilities are needed to take learning machines to the next step and include them in decision support systems involving human supervision. This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following: · Evaluation and Generalization in Interpretable Machine Learning · Explanation Methods in Deep Learning · Learning Functional Causal Models with Generative Neural Networks · Learning Interpretable Rules for Multi-Label Classification · Structuring Neural Networks for More Explainable Predictions · Generating Post Hoc Rationales of Deep Visual Classification Decisions · Ensembling Visual Explanations · Explainable Deep Driving by Visualizing Causal Attention · Interdisciplinary Perspective on Algorithmic Job Candidate Search · Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions · Inherent Explainability Pattern Theory-based Video Event Interpretations
Explainable and Interpretable Reinforcement Learning for Robotics (Synthesis Lectures on Artificial Intelligence and Machine Learning)
by Dinesh Manocha, Aaron M. Roth, Ram D. Sriram, Elham Tabassi
This book surveys the state of the art in explainable and interpretable reinforcement learning (RL) as relevant for robotics. While RL in general has grown in popularity and been applied to increasingly complex problems, several challenges have impeded the real-world adoption of RL algorithms for robotics and related areas. These include difficulties in preventing safety constraints from being violated and the issues faced by systems operators who desire explainable policies and actions. Robotics applications present a unique set of considerations and result in a number of opportunities related to their physical, real-world sensory input and interactions. The authors consider classification techniques used in past surveys and papers and attempt to unify terminology across the field. The book provides an in-depth exploration of 12 attributes that can be used to classify explainable/interpretable techniques. These include whether the RL method is model-agnostic or model-specific, self-explainable or post-hoc, as well as additional analysis of the attributes of scope, when-produced, format, knowledge limits, explanation accuracy, audience, predictability, legibility, readability, and reactivity. The book is organized around a discussion of these methods, broken down into 42 categories and subcategories, where each category can be classified according to some of the attributes. The authors close by identifying gaps in the current research and highlighting areas for future investigation.
Explainable and Responsible Artificial Intelligence in Healthcare
by Rishabha Malviya, Sonali Sundram
This book presents the fundamentals of explainable artificial intelligence (XAI) and responsible artificial intelligence (RAI) in healthcare, discussing their transformative potential to enhance diagnosis, treatment, and patient outcomes. It provides a roadmap for navigating the complexities of healthcare-based AI while prioritizing patient safety and well-being. The content is structured to highlight topics on smart health systems, neuroscience, diagnostic imaging, and telehealth. The book emphasizes personalized treatment and improved patient outcomes in various medical fields, and also discusses osteoporosis risk, neurological treatment, and bone metastases. Each chapter provides a distinct viewpoint on how XAI and RAI approaches can help healthcare practitioners increase diagnosis accuracy, optimize treatment plans, and improve patient outcomes. Readers will find that the book: explains recent XAI and RAI breakthroughs in the healthcare system; discusses essential architecture with computational advances ranging from medical imaging to disease diagnosis; covers the latest developments and applications of XAI- and RAI-based disease management; and demonstrates how XAI and RAI can be utilized in healthcare and what problems the technology will face in the future. Audience: scientists, healthcare professionals, biomedical industries, hospital management, engineers, and IT professionals interested in using AI to improve human health.
Explainable and Transparent AI and Multi-Agent Systems: 4th International Workshop, EXTRAAMAS 2022, Virtual Event, May 9–10, 2022, Revised Selected Papers (Lecture Notes in Computer Science #13283)
by Davide Calvaresi, Amro Najjar, Kary Främling, Michael Winikoff
This book constitutes the refereed proceedings of the 4th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2022, held virtually during May 9–10, 2022. The 14 full papers included in this book were carefully reviewed and selected from 25 submissions. They were organized in topical sections as follows: explainable machine learning; explainable neuro-symbolic AI; explainable agents; XAI measures and metrics; and AI & law.
Explainable and Transparent AI and Multi-Agent Systems: 5th International Workshop, EXTRAAMAS 2023, London, UK, May 29, 2023, Revised Selected Papers (Lecture Notes in Computer Science #14127)
by Andrea Omicini, Davide Calvaresi, Amro Najjar, Kary Främling, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, Yazan Mualla
This volume, LNCS 14127, constitutes the refereed proceedings of the 5th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2023, held in London, UK, in May 2023. The 15 full papers presented together with 1 short paper were carefully reviewed and selected from 26 submissions. The workshop focuses on explainable agents and multi-agent systems, explainable machine learning, and cross-domain applied XAI.
Explainable and Transparent AI and Multi-Agent Systems: 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, Revised Selected Papers (Lecture Notes in Computer Science #14847)
by Andrea Omicini, Davide Calvaresi, Amro Najjar, Kary Främling, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, Joris Hulstijn
This volume constitutes the revised selected papers of the 6th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2024, held in Auckland, New Zealand, during May 6–10, 2024. The 13 full papers presented in this book were carefully reviewed and selected from 25 submissions. The papers are organized in the following topical sections: user-centric XAI; XAI and reinforcement learning; neuro-symbolic AI and explainable machine learning; and XAI & ethics.
Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers (Lecture Notes in Computer Science #12688)
by Davide Calvaresi, Amro Najjar, Kary Främling, Michael Winikoff
This book constitutes the proceedings of the Third International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, which was held virtually due to the COVID-19 pandemic. The 19 long revised papers and 1 short contribution were carefully selected from 32 submissions. The papers are organized in the following topical sections: XAI & machine learning; XAI vision, understanding, deployment and evaluation; XAI applications; XAI logic and argumentation; decentralized and heterogeneous XAI.
Explainable, Transparent Autonomous Agents and Multi-Agent Systems: First International Workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13–14, 2019, Revised Selected Papers (Lecture Notes in Computer Science #11763)
by Michael Schumacher, Davide Calvaresi, Amro Najjar, Kary Främling
This book constitutes the proceedings of the First International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2019, held in Montreal, Canada, in May 2019. The 12 revised and extended papers presented were carefully selected from 23 submissions. They are organized in topical sections on explanation and transparency; explainable robots; opening the black box; explainable agent simulations; planning and argumentation; explainable AI and cognitive science.
Explainable, Transparent Autonomous Agents and Multi-Agent Systems: Second International Workshop, EXTRAAMAS 2020, Auckland, New Zealand, May 9–13, 2020, Revised Selected Papers (Lecture Notes in Computer Science #12175)
by Davide Calvaresi, Amro Najjar, Kary Främling, Michael Winikoff
This book constitutes the proceedings of the Second International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2020, which was due to be held in Auckland, New Zealand, in May 2020 but took place virtually due to the COVID-19 pandemic. The 8 revised and extended papers were carefully selected from 20 submissions and are presented here with one demo paper. The papers are organized in the following topical sections: explainable agents; cross-disciplinary XAI; explainable machine learning; demos.
Explaining Algorithms Using Metaphors
by Michal Forišek, Monika Steinová
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the "classic textbook" algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes for teachers based on the authors' experiences when using the metaphor in a classroom setting.
Explanatory Animations in the Classroom: Student-Authored Animations as Digital Pedagogy (SpringerBriefs in Education)
by Brendan Jacobs
This book provides groundbreaking evidence demonstrating how student-authored explanatory animations can embody and document learning as an exciting new development within digital pedagogy. Explanatory animations can be an excellent resource for teaching and learning, but there has been an underlying assumption that students are predominantly viewers rather than animation authors. The methodology detailed in this book reverses this scenario by putting students in the driver's seat of their own learning. This signals not just a change in perspective, but a complete change in activity that, to continue the analogy, will forever change the conversation and make redundant phrases like "Are we there yet?" and "How much longer?" The digital nature of such practices provides compelling evidence for reconceptualising explanatory animation creation as a pedagogical activity that generates multimodal assessment data. Tying together related themes to advance approaches to evidence-based assessment using digital technologies, this book is intended for educators at any stage of their journey, including pre-service teachers.
Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models (Chapman & Hall/CRC Data Science Series)
by Tomasz Burzykowski, Przemyslaw Biecek
Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models is a set of methods and tools designed to build better predictive models and to monitor their behaviour in a changing environment. Today, the true bottleneck in predictive modelling is neither the lack of data, nor the lack of computational power, nor inadequate algorithms, nor the lack of flexible models. It is the lack of tools for model exploration (extraction of relationships learned by the model), model explanation (understanding the key factors influencing model decisions), and model examination (identification of model weaknesses and evaluation of the model's performance). This book presents a collection of model-agnostic methods that may be used for any black-box model, together with real-world applications to classification and regression problems.
Exploding Data: Reclaiming Our Cyber Security in the Digital Age
by Michael Chertoff
A former Secretary of Homeland Security examines our outdated laws regarding the protection of personal information, and the pressing need for change. Nothing undermines our freedom more than losing control of information about ourselves. And yet, as daily events underscore, we are ever more vulnerable to cyber-attack. In this bracing book, Michael Chertoff makes clear that our laws and policies surrounding the protection of personal information, written for an earlier time, are long overdue for a complete overhaul. On the one hand, the collection of data (more widespread by business than by government, and impossible to stop) should be facilitated as an ultimate protection for society. On the other, standards under which information can be inspected, analyzed, or used must be significantly tightened. In offering his compelling call for action, Chertoff argues that what is at stake is not so much the simple loss of privacy, which is almost impossible to protect, but of individual autonomy: the ability to make personal choices free of manipulation or coercion. Offering vivid stories over many decades that illuminate the three periods of data gathering we have experienced, Chertoff explains the complex legalities surrounding issues of data collection and dissemination today, and charts a path that balances the needs of government, business, and individuals alike. "Surveys the brave new world of data collection and analysis… The world of data as illuminated here would have scared George Orwell." (Kirkus Reviews) "Chertoff has a unique perspective on data security and its implications for citizen rights as he looks at the history of and changes in privacy laws since the founding of the U.S." (Booklist)
Exploding the Myths Surrounding ISO9000
by Andrew W. Nichols
The secrets of successful ISO9000 implementation. Thousands of companies worldwide are reaping the benefits from adopting the ISO9000 family of quality management standards. However, there are many conflicting opinions about ISO9000's best-practice approach. Some companies have delayed adopting ISO9000, or have chosen not to undertake implementation at all. This might be because of a lack of time and resources to investigate it properly, or because of misunderstandings about the way it works. So, how do we know who and what to believe? In Exploding the Myths Surrounding ISO9000, Andrew W. Nichols debunks many of the common misconceptions about ISO9001 and describes the many advantages it brings. Drawing on more than 25 years of hands-on experience, Andy gives clear, practical, and up-to-date advice on how to implement a quality management system to maximum effect. Full of real-life examples, this book enables you to read and interpret the ISO9000 documentation. With the advice in this book, you can foster an effective ISO9000 system that brings increased efficiencies, reduces waste, and fuels growth in sales as you better understand and meet the needs of your customers. About the author: Andrew W. Nichols has more than 25 years of experience with management systems, in both the UK and the USA. As a trainer, he has delivered hundreds of ISO9000-related courses to audiences ranging from shop-floor personnel to CEOs of Fortune 500 companies. He has also led and contributed to the development of best-in-class training courses for a number of international standards. Andy is a regular contributor to the well-known Elsmar Cove internet forum for management systems. Read this unique book and make ISO9000 work for you.
Exploration of Novel Intelligent Optimization Algorithms: 12th International Symposium, ISICA 2021, Guangzhou, China, November 20–21, 2021, Revised Selected Papers (Communications in Computer and Information Science #1590)
by Yong Liu, Kangshun Li, Wenxiang Wang
This book constitutes the refereed proceedings of the 12th International Symposium, ISICA 2021, held in Guangzhou, China, during November 20–21, 2021. The 48 full papers included in this book were carefully reviewed and selected from 99 submissions. They were organized in topical sections as follows: new frontier of multi-objective evolutionary algorithms; intelligent multi-media; data modeling and application of artificial intelligence; exploration of novel intelligent optimization algorithms; and intelligent application of industrial production.