Browse Results

Showing 40,976 through 41,000 of 59,834 results

Neural Information Processing: 29th International Conference, ICONIP 2022, Virtual Event, November 22–26, 2022, Proceedings, Part I (Lecture Notes in Computer Science #13623)

by Mohammad Tanveer, Sonali Agarwal, Seiichi Ozawa, Asif Ekbal, and Adam Jatowt

The three-volume set LNCS 13623, 13624, and 13625 constitutes the refereed proceedings of the 29th International Conference on Neural Information Processing, ICONIP 2022, held as a virtual event, November 22–26, 2022. The 146 papers presented in the proceedings set were carefully reviewed and selected from 810 submissions. They were organized in topical sections as follows: Theory and Algorithms; Cognitive Neurosciences; Human Centered Computing; and Applications. The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

Neural Information Processing: 29th International Conference, ICONIP 2022, Virtual Event, November 22–26, 2022, Proceedings, Part VI (Communications in Computer and Information Science #1793)

by Mohammad Tanveer, Sonali Agarwal, Seiichi Ozawa, Asif Ekbal, and Adam Jatowt

The four-volume set CCIS 1791, 1792, 1793, and 1794 constitutes the refereed proceedings of the 29th International Conference on Neural Information Processing, ICONIP 2022, held as a virtual event, November 22–26, 2022. The 213 papers presented in the proceedings set were carefully reviewed and selected from 810 submissions. They were organized in topical sections as follows: Theory and Algorithms; Cognitive Neurosciences; Human Centered Computing; and Applications. The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

Neural Information Processing: 29th International Conference, ICONIP 2022, Virtual Event, November 22–26, 2022, Proceedings, Part V (Communications in Computer and Information Science #1792)

by Mohammad Tanveer, Sonali Agarwal, Seiichi Ozawa, Asif Ekbal, and Adam Jatowt

The four-volume set CCIS 1791, 1792, 1793, and 1794 constitutes the refereed proceedings of the 29th International Conference on Neural Information Processing, ICONIP 2022, held as a virtual event, November 22–26, 2022. The 213 papers presented in the proceedings set were carefully reviewed and selected from 810 submissions. They were organized in topical sections as follows: Theory and Algorithms; Cognitive Neurosciences; Human Centered Computing; and Applications. The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

Neural Information Processing: 27th International Conference, ICONIP 2020, Bangkok, Thailand, November 18–22, 2020, Proceedings, Part IV (Communications in Computer and Information Science #1332)

by Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, and Irwin King

The two-volume set CCIS 1332 and 1333 constitutes thoroughly refereed contributions presented at the 27th International Conference on Neural Information Processing, ICONIP 2020, held in Bangkok, Thailand, in November 2020.* For ICONIP 2020, a total of 378 papers were carefully reviewed and selected for publication out of 618 submissions. The 191 papers included in this volume set were organized in topical sections as follows: data mining; healthcare analytics – improving healthcare outcomes using big data analytics; human activity recognition; image processing and computer vision; natural language processing; recommender systems; the 13th international workshop on artificial intelligence and cybersecurity; computational intelligence; machine learning; neural network models; robotics and control; and time series analysis.

* The conference was held virtually due to the COVID-19 pandemic.

Neural Information Processing: 27th International Conference, ICONIP 2020, Bangkok, Thailand, November 23–27, 2020, Proceedings, Part I (Lecture Notes in Computer Science #12532)

by Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, and Irwin King

The three-volume set of LNCS 12532, 12533, and 12534 constitutes the proceedings of the 27th International Conference on Neural Information Processing, ICONIP 2020, held in Bangkok, Thailand, in November 2020. Due to the COVID-19 pandemic, the conference was held virtually. The 187 full papers presented were carefully reviewed and selected from 618 submissions. The papers address the emerging topics of theoretical research, empirical studies, and applications of neural information processing techniques across different domains. The first volume, LNCS 12532, is organized in topical sections on human-computer interaction; image processing and computer vision; and natural language processing.

Neural Information Processing: 27th International Conference, ICONIP 2020, Bangkok, Thailand, November 23–27, 2020, Proceedings, Part III (Lecture Notes in Computer Science #12534)

by Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, and Irwin King

The three-volume set of LNCS 12532, 12533, and 12534 constitutes the proceedings of the 27th International Conference on Neural Information Processing, ICONIP 2020, held in Bangkok, Thailand, in November 2020. Due to the COVID-19 pandemic, the conference was held virtually. The 187 full papers presented were carefully reviewed and selected from 618 submissions. The papers address the emerging topics of theoretical research, empirical studies, and applications of neural information processing techniques across different domains. The third volume, LNCS 12534, is organized in topical sections on biomedical information; neural data analysis; neural network models; recommender systems; and time series analysis.

Neural Information Processing: 27th International Conference, ICONIP 2020, Bangkok, Thailand, November 23–27, 2020, Proceedings, Part II (Lecture Notes in Computer Science #12533)

by Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, and Irwin King

The three-volume set of LNCS 12532, 12533, and 12534 constitutes the proceedings of the 27th International Conference on Neural Information Processing, ICONIP 2020, held in Bangkok, Thailand, in November 2020. Due to the COVID-19 pandemic, the conference was held virtually. The 187 full papers presented were carefully reviewed and selected from 618 submissions. The papers address the emerging topics of theoretical research, empirical studies, and applications of neural information processing techniques across different domains. The second volume, LNCS 12533, is organized in topical sections on computational intelligence; machine learning; and robotics and control.

Neural Machine Translation

by Philipp Koehn

Deep learning is revolutionizing how machine translation systems are built today. This book introduces the challenge of machine translation and evaluation, including historical, linguistic, and applied context, then develops the core deep learning methods used for natural language applications. Code examples in Python give readers a hands-on blueprint for understanding and implementing their own machine translation systems.

The book also provides extensive coverage of machine learning tricks, issues involved in handling various forms of data, model enhancements, and current challenges and methods for analysis and visualization.

Summaries of the current research in the field make this a state-of-the-art textbook for undergraduate and graduate classes, as well as an essential reference for researchers and developers interested in other applications of neural methods in the broader field of human language processing.

Neural Modeling of Speech Processing and Speech Learning: An Introduction

by Bernd J. Kröger and Trevor Bekolay

This book explores the processes of spoken language production and perception from a neurobiological perspective. After presenting the basics of speech processing and speech acquisition, a neurobiologically-inspired and computer-implemented neural model is described, which simulates the neural processes of speech processing and speech acquisition. This book is an introduction to the field and aimed at students and scientists in neuroscience, computer science, medicine, psychology and linguistics.

Neural Network Analysis, Architectures and Applications

by Antony Browne

Neural Network Analysis, Architectures and Applications discusses the main areas of neural networks, with each authoritative chapter covering the latest information from a different perspective. Divided into three parts, the book first lays the groundwork for understanding and simplifying networks. It then describes novel architectures and algorithms, including pulse-stream techniques, cellular neural networks, and multiversion neural computing. The book concludes by examining various neural network applications, such as neuro-fuzzy control systems and image compression. This final part of the book also provides a case study involving oil spill detection. This book is invaluable for students and practitioners who have a basic understanding of neural computing yet want to broaden and deepen their knowledge of the field.

A Neural Network Approach to Fluid Quantity Measurement in Dynamic Environments

by Edin Terzic, Romesh Nagarajah, Muhammad Alamgir, and Jenny Terzic

Sloshing causes liquid to fluctuate, making accurate level readings difficult to obtain in dynamic environments. The measurement system described uses a single-tube capacitive sensor to obtain an instantaneous level reading of the fluid surface, thereby accurately determining the fluid quantity in the presence of slosh. A neural network based classification technique is applied to predict the actual quantity of fluid contained in a tank under sloshing conditions. The effects of temperature variations and contamination on the capacitive sensor are also discussed, and the authors propose that these effects, too, can be eliminated with the proposed neural network based classification system. To examine the performance of the classification system, many field trials were carried out on a running vehicle at tank volumes ranging from 5 L to 50 L. The effectiveness of signal enhancement on the neural network based signal classification system is also investigated. Results obtained from the investigation are compared with traditional statistical averaging methods and show that the neural network based measurement system can produce highly accurate fluid quantity measurements in a dynamic environment. Although a capacitive sensor was used to demonstrate the measurement system, the methodology is valid for all types of electronic sensors, and the approach can be applied to a wide range of fluid quantity measurement applications in the automotive, naval, and aviation industries to produce accurate fluid level readings. Students, lecturers, and experts will find this description of current research on accurate fluid level measurement in dynamic environments using a neural network approach useful.

Neural Network-Based Adaptive Control of Uncertain Nonlinear Systems

by Kasra Esfandiari, Farzaneh Abdollahi, and Heidar A. Talebi

The focus of this book is the application of artificial neural networks in uncertain dynamical systems. It explains how to use neural networks in concert with adaptive techniques for system identification, state estimation, and control problems. The authors begin with a brief historical overview of adaptive control, followed by a review of mathematical preliminaries. In the subsequent chapters, they present several neural network-based control schemes. Each chapter starts with a concise introduction to the problem under study, and a neural network-based control strategy is designed for the simplest case scenario. After these designs are discussed, different practical limitations (e.g., saturation constraints and unavailability of all system states) are gradually added, and other control schemes are developed based on the primary scenario. Through these exercises, the authors present structures that not only provide mathematical tools for navigating control problems, but also supply solutions that are pertinent to real-life systems.

Neural Network Modeling: Statistical Mechanics and Cybernetic Perspectives

by P. S. Neelakanta and Dolores DeGroff

Neural Network Modeling offers a cohesive approach to the statistical mechanics and principles of cybernetics as a basis for neural network modeling. It brings together neurobiologists and the engineers who design intelligent automata to understand the physics of collective behavior pertinent to neural elements and the self-control aspects of neurocybernetics. The theoretical perspectives and explanatory projections portray the most current information in the field, some of which counters certain conventional concepts in the visualization of neuronal interactions.

Neural Network Perspectives on Cognition and Adaptive Robotics

by A. Browne

Featuring an international team of authors, Neural Network Perspectives on Cognition and Adaptive Robotics presents several approaches to the modeling of human cognition and language using neural computing techniques. It also describes how adaptive robotic systems can be produced using neural network architectures. Covering a wide range of mainstream areas and trends, each chapter provides the latest information from a different perspective.

Neural Network Programming with Java

by Alan M.F. Souza and Fabio M. Soares

Create and unleash the power of neural networks by implementing professional Java code

About This Book
* Learn to build amazing projects using neural networks, including forecasting the weather and pattern recognition
* Explore Java's multi-platform feature to run your personal neural networks everywhere
* This step-by-step guide helps you solve real-world problems and links neural network theory to its application

Who This Book Is For
This book is for Java developers with basic Java programming knowledge. No previous knowledge of neural networks is required, as this book covers the concepts from scratch.

What You Will Learn
* Get to grips with the basics of neural networks and what they are used for
* Develop neural networks using hands-on examples
* Explore and code the most widely used learning algorithms to make your neural network learn from most types of data
* Discover the power of a neural network's unsupervised learning process to extract the intrinsic knowledge hidden behind the data
* Apply the code generated in practical examples, including weather forecasting and pattern recognition
* Understand how to make the best choice of learning parameters to ensure a more effective application
* Select and split data sets into training, test, and validation sets, and explore validation strategies
* Discover how to improve and optimize your neural network

In Detail
Vast quantities of data are produced every second. In this context, neural networks become a powerful technique for extracting useful knowledge from large amounts of raw, seemingly unrelated data. Java is one of the preferred languages for neural network programming, as code is easier to write in it and most of the popular neural network packages already exist for Java, making it a versatile language for neural networks.

This book gives you a complete walkthrough of developing basic to advanced practical examples based on neural networks with Java. You will first learn the basics of neural networks and their learning process. We then focus on what perceptrons are and their features. Next, you will implement self-organizing maps using the concepts you've learned. Furthermore, you will learn about some of the applications presented in this book, such as weather forecasting, disease diagnosis, customer profiling, and character recognition (OCR). Finally, you will learn methods to optimize and adapt neural networks in real time. All the examples in the book are provided as illustrative source code, which merges object-oriented programming (OOP) concepts and neural network features to enhance your learning experience.

Style and Approach
This book adopts a step-by-step approach to neural network development and provides many hands-on examples using Java programming. Each neural network concept is explored through real-world problems and is delivered in an easy-to-comprehend manner.

Neural Network Programming with Java - Second Edition

by Fabio M. Soares and Alan M. Souza

Create and unleash the power of neural networks by implementing professional Java code

About This Book
• Learn to build amazing projects using neural networks, including forecasting the weather and pattern recognition
• Explore Java's multi-platform feature to run your personal neural networks everywhere
• This step-by-step guide helps you solve real-world problems and links neural network theory to its application

Who This Book Is For
This book is for Java developers who want to know how to develop smarter applications using the power of neural networks. Those who deal with a lot of complex data and want to use it efficiently in their day-to-day apps will find this book quite useful. Some basic experience with statistical computations is expected.

What You Will Learn
• Develop an understanding of neural networks and how they can be fitted
• Explore the learning process of neural networks
• Build neural network applications with Java using hands-on examples
• Discover the power of a neural network's unsupervised learning process to extract the intrinsic knowledge hidden behind the data
• Apply the code generated in practical examples, including weather forecasting and pattern recognition
• Understand how to make the best choice of learning parameters to ensure a more effective application
• Select and split data sets into training, test, and validation sets, and explore validation strategies

In Detail
Want to discover the current state of the art in the field of neural networks, so that you can understand and design new strategies to apply to more complex problems? This book takes you on a complete walkthrough of developing basic to advanced practical examples based on neural networks with Java, giving you everything you need to stand out. You will first learn the basics of neural networks and their learning process. We then focus on what perceptrons are and their features.

Next, you will implement self-organizing maps using practical examples. Further on, you will learn about some of the applications presented in this book, such as weather forecasting, disease diagnosis, customer profiling, generalization, extreme learning machines, and character recognition (OCR). Finally, you will learn methods to optimize and adapt neural networks in real time. All the examples in the book are provided as illustrative source code, which merges object-oriented programming (OOP) concepts and neural network features to enhance your learning experience.

Style and Approach
This book takes you on a steady learning curve, teaching you the important concepts while being rich in examples. You'll be able to relate to the examples in the book while implementing neural networks in your day-to-day applications.

Neural Network Programming with TensorFlow: Unleash the power of TensorFlow to train efficient neural networks

by Rajdeep Dua and Manpreet Singh Ghotra

Neural networks and their implementation decoded with TensorFlow

About This Book
• Develop a strong background in neural network programming from scratch, using the popular TensorFlow library.
• Use TensorFlow to implement different kinds of neural networks, from simple feedforward neural networks to multilayered perceptrons, CNNs, RNNs, and more.
• A highly practical guide including real-world datasets and use cases to simplify your understanding of neural networks and their implementation.

Who This Book Is For
This book is meant for developers with a statistical background who want to work with neural networks. Though we will be using TensorFlow as the underlying library for neural networks, the book can be used as a generic resource to bridge the gap between the math and the implementation of deep learning. If you have some understanding of TensorFlow and Python and want to learn what happens at a level lower than the plain API syntax, this book is for you.

What You Will Learn
• Learn the linear algebra and mathematics behind neural networks.
• Dive deep into neural networks, from basic to advanced concepts like CNNs, RNNs, Deep Belief Networks, and deep feedforward networks.
• Explore optimization techniques for problems such as local minima, global minima, and saddle points.
• Learn through real-world examples like sentiment analysis.
• Train different types of generative models and explore autoencoders.
• Explore TensorFlow as an example of a deep learning implementation.

In Detail
If you're aware of the buzz surrounding terms such as "machine learning," "artificial intelligence," or "deep learning," you might know what neural networks are. Ever wondered how they help in solving complex computational problems efficiently, or how to train efficient neural networks? This book will teach you just that. You will start by getting a quick overview of the popular TensorFlow library and how it is used to train different neural networks.

You will get a thorough understanding of the fundamentals and basic math for neural networks and why TensorFlow is a popular choice. Then, you will proceed to implement a simple feedforward neural network. Next, you will master optimization techniques and algorithms for neural networks using TensorFlow. Further, you will learn to implement more complex types of neural networks such as convolutional neural networks, recurrent neural networks, and Deep Belief Networks. In the course of the book, you will work on real-world datasets to get a hands-on understanding of neural network programming. You will also get to train generative models and learn the applications of autoencoders. By the end of this book, you will have a fair understanding of how you can leverage the power of TensorFlow to train neural networks of varying complexities, without any hassle. While you are learning about the various neural network implementations, you will learn the underlying mathematics and linear algebra and how they map to the appropriate TensorFlow constructs.

Style and Approach
This book is designed to give you just the right number of concepts to back up the examples. With real-world use cases and problems solved, this book is a handy guide. Each concept is backed by a generic and real-world problem, followed by a variation, making you independent and able to solve any problem with neural networks. All of the content is demystified by a simple and straightforward approach.

Neural Network Programming with TensorFlow

by Manpreet Singh Ghotra

Neural networks and their implementation decoded with TensorFlow

About This Book
• Develop a strong background in neural network programming from scratch, using the popular TensorFlow library.
• Use TensorFlow to implement different kinds of neural networks, from simple feedforward neural networks to multilayered perceptrons, CNNs, RNNs, and more.
• A highly practical guide including real-world datasets and use cases to simplify your understanding of neural networks and their implementation.

Who This Book Is For
This book is meant for developers with a statistical background who want to work with neural networks. Though we will be using TensorFlow as the underlying library for neural networks, the book can be used as a generic resource to bridge the gap between the math and the implementation of deep learning. If you have some understanding of TensorFlow and Python and want to learn what happens at a level lower than the plain API syntax, this book is for you.

What You Will Learn
• Learn the linear algebra and mathematics behind neural networks.
• Dive deep into neural networks, from basic to advanced concepts like CNNs, RNNs, Deep Belief Networks, and deep feedforward networks.
• Explore optimization techniques for problems such as local minima, global minima, and saddle points.
• Learn through real-world examples like sentiment analysis.
• Train different types of generative models and explore autoencoders.
• Explore TensorFlow as an example of a deep learning implementation.

In Detail
If you're aware of the buzz surrounding terms such as "machine learning," "artificial intelligence," or "deep learning," you might know what neural networks are. Ever wondered how they help in solving complex computational problems efficiently, or how to train efficient neural networks? This book will teach you just that.

You will start by getting a quick overview of the popular TensorFlow library and how it is used to train different neural networks. You will get a thorough understanding of the fundamentals and basic math for neural networks and why TensorFlow is a popular choice. Then, you will proceed to implement a simple feedforward neural network. Next, you will master optimization techniques and algorithms for neural networks using TensorFlow. Further, you will learn to implement more complex types of neural networks such as convolutional neural networks, recurrent neural networks, and Deep Belief Networks. In the course of the book, you will work on real-world datasets to get a hands-on understanding of neural network programming. You will also get to train generative models and learn the applications of autoencoders.

By the end of this book, you will have a fair understanding of how you can leverage the power of TensorFlow to train neural networks of varying complexities, without any hassle. While you are learning about the various neural network implementations, you will learn the underlying mathematics and linear algebra and how they map to the appropriate TensorFlow constructs.

Neural Network Projects with Python: The ultimate guide to using Python to explore the true power of neural networks through six projects

by James Loy

Build your machine learning portfolio by creating 6 cutting-edge artificial intelligence projects using neural networks in Python

Key Features
* Discover neural network architectures (like CNN and LSTM) that are driving recent advancements in AI
* Build expert neural networks in Python using popular libraries such as Keras
* Includes projects such as object detection, face identification, sentiment analysis, and more

Book Description
Neural networks are at the core of recent AI advances, providing some of the best resolutions to many real-world problems, including image recognition, medical diagnosis, text analysis, and more. This book goes through some basic neural network and deep learning concepts, as well as some popular libraries in Python for implementing them. It contains practical demonstrations of neural networks in domains such as fare prediction, image classification, sentiment analysis, and more. In each case, the book provides a problem statement, the specific neural network architecture required to tackle that problem, the reasoning behind the algorithm used, and the associated Python code to implement the solution from scratch.

In the process, you will gain hands-on experience with popular Python libraries such as Keras to build and train your own neural networks from scratch. By the end of this book, you will have mastered the different neural network architectures and created cutting-edge AI projects in Python that will immediately strengthen your machine learning portfolio.

What You Will Learn
* Learn various neural network architectures and their advancements in AI
* Master deep learning in Python by building and training neural networks
* Master neural networks for regression and classification
* Discover convolutional neural networks for image recognition
* Learn sentiment analysis on textual data using long short-term memory
* Build and train a highly accurate facial recognition security system

Who This Book Is For
This book is a perfect match for data scientists, machine learning engineers, and deep learning enthusiasts who wish to create practical neural network projects in Python. Readers should already have some basic knowledge of machine learning and neural networks.

Neural-Network Simulation of Strongly Correlated Quantum Systems (Springer Theses)

by Stefanie Czischek

Quantum systems with many degrees of freedom are inherently difficult to describe and simulate quantitatively. The space of possible states is, in general, exponentially large in the number of degrees of freedom such as the number of particles it contains. Standard digital high-performance computing is generally too weak to capture all the necessary details, such that alternative quantum simulation devices have been proposed as a solution. Artificial neural networks, with their high non-local connectivity between the neuron degrees of freedom, may soon gain importance in simulating static and dynamical behavior of quantum systems. Particularly promising candidates are neuromorphic realizations based on analog electronic circuits which are being developed to capture, e.g., the functioning of biologically relevant networks. In turn, such neuromorphic systems may be used to measure and control real quantum many-body systems online. This thesis lays an important foundation for the realization of quantum simulations by means of neuromorphic hardware, for using quantum physics as an input to classical neural nets and, in turn, for using network results to be fed back to quantum systems. The necessary foundations on both sides, quantum physics and artificial neural networks, are described, providing a valuable reference for researchers from these different communities who need to understand the foundations of both.

Neural Networks and Deep Learning: A Textbook

by Charu C. Aggarwal

This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning, which are particularly important for understanding the design concepts behind neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book is also rich in discussing different applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications in many different areas, such as recommender systems, machine translation, image captioning, image classification, reinforcement-learning based gaming, and text analytics, are covered.

The chapters of this book span three categories:
* The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. An emphasis is placed in the first two chapters on understanding the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. These methods are studied together with recent feature engineering methods like word2vec.
* Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.
* Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks. Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10.

The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.

Neural Networks and Deep Learning: A Textbook

by Charu C. Aggarwal

This textbook covers both classical and modern models in deep learning and includes examples and exercises throughout the chapters. Deep learning methods for various data domains, such as text, images, and graphs, are presented in detail. The chapters of this book span three categories. The basics of neural networks: the backpropagation algorithm is discussed in Chapter 2. Many traditional machine learning models can be understood as special cases of neural networks. Chapter 3 explores the connections between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. Fundamentals of neural networks: a detailed discussion of training and regularization is provided in Chapters 4 and 5. Chapters 6 and 7 present radial-basis function (RBF) networks and restricted Boltzmann machines. Advanced topics in neural networks: Chapters 8, 9, and 10 discuss recurrent neural networks, convolutional neural networks, and graph neural networks. Several advanced topics, such as deep reinforcement learning, attention mechanisms, transformer networks, Kohonen self-organizing maps, and generative adversarial networks, are introduced in Chapters 11 and 12. The textbook is written for graduate students and upper-level undergraduate students, as well as researchers and practitioners working in the field. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques. The second edition is substantially reorganized and expanded, with separate chapters on backpropagation and graph neural networks, and many chapters have been significantly revised from the first edition. Greater focus is placed on modern deep learning ideas such as attention mechanisms, transformers, and pre-trained language models.

Neural Networks and Learning Algorithms in MATLAB (Synthesis Lectures on Intelligent Technologies)

by Oscar Castillo Rathinasamy Sakthivel Mohammad Hosein Sabzalian Fayez F. El-Sousy Ardahir Mohammadazadeh Saleh Mobayen

This book explains the basic concepts, theory, and applications of neural networks in a simple, unified approach, with clear examples and simulations in the MATLAB programming language. The scripts herein are coded for general purposes so they can easily be extended to a variety of problems in different areas of application, and they are vectorized and optimized to run faster and be applicable to high-dimensional engineering problems. The book will serve as a main reference for graduate and undergraduate courses in neural networks and applications, and as a basis for researchers dealing with complex problems that require neural networks for finding good solutions in areas such as time series prediction, intelligent control, and identification. In addition, the problem of designing neural networks using metaheuristics, such as genetic algorithms and particle swarm optimization, with one objective and with multiple objectives, is presented.

Neural Networks and Micromechanics

by Tatiana Baidyk Ernst Kussul Donald C. Wunsch

This book covers an interdisciplinary field of research in which neural network techniques for image recognition are applied to tasks in micromechanics. It is organized into chapters on classic neural networks and novel neural classifiers; recognition of textures and object forms; micromechanics; and adaptive algorithms with neural and image recognition applications. The authors include a theoretical analysis of the proposed approach, describe their machine tool prototypes in detail, and present results from experiments involving microassembly, handwriting recognition, and face recognition. The book will benefit scientists, researchers, and students working in artificial intelligence, particularly in the fields of image recognition and neural networks, as well as practitioners in the area of microengineering.

Neural Networks and Statistical Learning

by Ke-Lin Du M. N. Swamy

This book provides a broad yet detailed introduction to neural networks and machine learning in a statistical framework. A single, comprehensive resource for study and further research, it explores the major popular neural network models and statistical learning approaches with examples and exercises, allowing readers to gain a practical working understanding of the content. This updated new edition presents recently published results and includes six new chapters corresponding to recent advances in computational learning theory, sparse coding, deep learning, big data, and cloud computing. Each chapter features state-of-the-art descriptions and significant research findings. The topics covered include:
• the multilayer perceptron;
• the Hopfield network;
• associative memory models;
• clustering models and algorithms;
• the radial basis function network;
• recurrent neural networks;
• nonnegative matrix factorization;
• independent component analysis;
• probabilistic and Bayesian networks; and
• fuzzy sets and logic.
Focusing on the prominent accomplishments and their practical aspects, this book provides academic and technical staff, as well as graduate students and researchers, with a solid foundation and comprehensive reference on the fields of neural networks, pattern recognition, signal processing, and machine learning.
