Browse Results

Showing 40,101 through 40,125 of 61,668 results

Neural Information Processing: 30th International Conference, ICONIP 2023, Changsha, China, November 20–23, 2023, Proceedings, Part XIV (Communications in Computer and Information Science #1968)

by Hongyi Li, Long Cheng, Zheng-Guang Wu, Biao Luo, Chaojie Li

The nine-volume set constitutes the refereed proceedings of the 30th International Conference on Neural Information Processing, ICONIP 2023, held in Changsha, China, in November 2023. The 652 papers presented in the proceedings set were carefully reviewed and selected from 1274 submissions. The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

Neural Information Processing: 30th International Conference, ICONIP 2023, Changsha, China, November 20–23, 2023, Proceedings, Part XV (Communications in Computer and Information Science #1969)

by Hongyi Li, Long Cheng, Zheng-Guang Wu, Biao Luo, Chaojie Li

The nine-volume set constitutes the refereed proceedings of the 30th International Conference on Neural Information Processing, ICONIP 2023, held in Changsha, China, in November 2023. The 652 papers presented in the proceedings set were carefully reviewed and selected from 1274 submissions. The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part I (Lecture Notes in Computer Science #15286)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The eleven-volume set LNCS 15286-15296 constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 318 regular papers presented in the proceedings set were carefully reviewed and selected from 1301 submissions. They focus on four main areas, namely: theory and algorithms; cognitive neurosciences; human-centered computing; and applications.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part III (Communications in Computer and Information Science #2284)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The sixteen-volume set, CCIS 2282-2297, constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 472 regular papers presented in this proceedings set were carefully reviewed and selected from 1301 submissions. These papers primarily focus on the following areas: Theory and algorithms; Cognitive neurosciences; Human-centered computing; and Applications.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part IX (Communications in Computer and Information Science #2290)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The sixteen-volume set, CCIS 2282-2297, constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 472 regular papers presented in this proceedings set were carefully reviewed and selected from 1301 submissions. These papers primarily focus on the following areas: Theory and algorithms; Cognitive neurosciences; Human-centered computing; and Applications.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part VIII (Communications in Computer and Information Science #2289)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The sixteen-volume set, CCIS 2282-2297, constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 472 regular papers presented in this proceedings set were carefully reviewed and selected from 1301 submissions. These papers primarily focus on the following areas: Theory and algorithms; Cognitive neurosciences; Human-centered computing; and Applications.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part VIII (Lecture Notes in Computer Science #15293)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The eleven-volume set LNCS 15286-15296 constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 318 regular papers presented in the proceedings set were carefully reviewed and selected from 1301 submissions. They focus on four main areas, namely: theory and algorithms; cognitive neurosciences; human-centered computing; and applications.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part X (Communications in Computer and Information Science #2291)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The sixteen-volume set, CCIS 2282-2297, constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 472 regular papers presented in this proceedings set were carefully reviewed and selected from 1301 submissions. These papers primarily focus on the following areas: Theory and algorithms; Cognitive neurosciences; Human-centered computing; and Applications.

Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part XI (Lecture Notes in Computer Science #15296)

by Kevin Wong, M. Tanveer, Mufti Mahmud, Maryam Doborjeh, Andrew Chi Sing Leung, Zohreh Doborjeh

The eleven-volume set LNCS 15286-15296 constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024. The 318 regular papers presented in the proceedings set were carefully reviewed and selected from 1301 submissions. They focus on four main areas, namely: theory and algorithms; cognitive neurosciences; human-centered computing; and applications.

Neural Machine Translation

by Philipp Koehn

Deep learning is revolutionizing how machine translation systems are built today. This book introduces the challenge of machine translation and evaluation, including historical, linguistic, and applied context, then develops the core deep learning methods used for natural language applications. Code examples in Python give readers a hands-on blueprint for understanding and implementing their own machine translation systems.

The book also provides extensive coverage of machine learning tricks, issues involved in handling various forms of data, model enhancements, and current challenges and methods for analysis and visualization.

Summaries of the current research in the field make this a state-of-the-art textbook for undergraduate and graduate classes, as well as an essential reference for researchers and developers interested in other applications of neural methods in the broader field of human language processing.

Neural Modeling of Speech Processing and Speech Learning: An Introduction

by Bernd J. Kröger, Trevor Bekolay

This book explores the processes of spoken language production and perception from a neurobiological perspective. After presenting the basics of speech processing and speech acquisition, a neurobiologically-inspired and computer-implemented neural model is described, which simulates the neural processes of speech processing and speech acquisition. This book is an introduction to the field and aimed at students and scientists in neuroscience, computer science, medicine, psychology and linguistics.

Neural Network Analysis, Architectures and Applications

by Antony Browne

Neural Network Analysis, Architectures and Applications discusses the main areas of neural networks, with each authoritative chapter covering the latest information from different perspectives. Divided into three parts, the book first lays the groundwork for understanding and simplifying networks. It then describes novel architectures and algorithms, including pulse-stream techniques, cellular neural networks, and multiversion neural computing. The book concludes by examining various neural network applications, such as neuro-fuzzy control systems and image compression. This final part of the book also provides a case study involving oil spill detection. This book is invaluable for students and practitioners who have a basic understanding of neural computing yet want to broaden and deepen their knowledge of the field.

Neural Network Methods for Dynamic Equations on Time Scales (SpringerBriefs in Applied Sciences and Technology)

by Svetlin Georgiev

This book aims to handle dynamic equations on time scales using artificial neural networks (ANNs). Basic facts and methods for ANN modeling are considered. The multilayer ANN model is introduced for solving dynamic equations on arbitrary time scales. A multilayer ANN model with one input layer containing a single node, a hidden layer with m nodes, and one output node is investigated. The feed-forward neural network model and unsupervised error back-propagation algorithm are developed. Modification of network parameters is done without the use of any optimization technique. The regression-based neural network (RBNN) model is introduced for solving dynamic equations on arbitrary time scales. The RBNN trial solution of dynamic equations is obtained by using the RBNN model for a single-input, single-output system. A variety of initial and boundary value problems are solved. The Chebyshev neural network (ChNN) model and Legendre neural network model are developed. The ChNN trial solution of dynamic equations is obtained by using the ChNN model for a single-input, single-output system. This book is addressed to a wide audience of specialists such as mathematicians, physicists, engineers, and biologists. It can be used as a textbook at the graduate level and as a reference book for several disciplines.

Neural Network Modeling: Statistical Mechanics and Cybernetic Perspectives

by P. S. Neelakanta, Dolores DeGroff

Neural Network Modeling offers a cohesive approach to the statistical mechanics and principles of cybernetics as a basis for neural network modeling. It brings together neurobiologists and the engineers who design intelligent automata to understand the physics of collective behavior pertinent to neural elements and the self-control aspects of neurocybernetics. The theoretical perspectives and explanatory projections portray the most current information in the field, some of which counters certain conventional concepts in the visualization of neuronal interactions.

Neural Network Perspectives on Cognition and Adaptive Robotics

by A. Browne

Featuring an international team of authors, Neural Network Perspectives on Cognition and Adaptive Robotics presents several approaches to the modeling of human cognition and language using neural computing techniques. It also describes how adaptive robotic systems can be produced using neural network architectures. Covering a wide range of mainstream areas and trends, each chapter provides the latest information from a different perspective.

Neural Network Programming with Java

by Fabio M. Soares, Alan M.F. Souza

Create and unleash the power of neural networks by implementing professional Java code.

About This Book
* Learn to build amazing projects using neural networks, including forecasting the weather and pattern recognition
* Explore the Java multi-platform feature to run your personal neural networks everywhere
* This step-by-step guide will help you solve real-world problems and links neural network theory to its application

Who This Book Is For
This book is for Java developers with basic Java programming knowledge. No previous knowledge of neural networks is required, as this book covers the concepts from scratch.

What You Will Learn
* Get to grips with the basics of neural networks and what they are used for
* Develop neural networks using hands-on examples
* Explore and code the most widely used learning algorithms to make your neural network learn from most types of data
* Discover the power of the neural network's unsupervised learning process to extract the intrinsic knowledge hidden behind the data
* Apply the code generated in practical examples, including weather forecasting and pattern recognition
* Understand how to make the best choice of learning parameters to ensure you have a more effective application
* Select and split data sets into training, test, and validation sets, and explore validation strategies
* Discover how to improve and optimize your neural network

In Detail
Vast quantities of data are produced every second. In this context, neural networks become a powerful technique for extracting useful knowledge from large amounts of raw, seemingly unrelated data. One of the most preferred languages for neural network programming is Java, as it is easy to write code in and most of the popular neural network packages already exist for it. This makes it a versatile programming language for neural networks.

This book gives you a complete walkthrough of the process of developing basic to advanced practical examples based on neural networks with Java. You will first learn the basics of neural networks and their process of learning. We then focus on what perceptrons are and their features. Next, you will implement self-organizing maps using the concepts you've learned. Furthermore, you will learn about some of the applications presented in this book, such as weather forecasting, disease diagnosis, customer profiling, and character recognition (OCR). Finally, you will learn methods to optimize and adapt neural networks in real time. All the examples in the book are provided in the form of illustrative source code, which merges object-oriented programming (OOP) concepts and neural network features to enhance your learning experience.

Style and Approach
This book adopts a step-by-step approach to neural network development and provides many hands-on examples using Java programming. Each neural network concept is explored through real-world problems and is delivered in an easy-to-comprehend manner.

Neural Network Programming with Java - Second Edition

by Fabio M. Soares, Alan M. Souza

Create and unleash the power of neural networks by implementing professional Java code.

About This Book
• Learn to build amazing projects using neural networks, including forecasting the weather and pattern recognition
• Explore the Java multi-platform feature to run your personal neural networks everywhere
• This step-by-step guide will help you solve real-world problems and links neural network theory to its application

Who This Book Is For
This book is for Java developers who want to know how to develop smarter applications using the power of neural networks. Those who deal with a lot of complex data and want to use it efficiently in their day-to-day apps will find this book quite useful. Some basic experience with statistical computations is expected.

What You Will Learn
• Develop an understanding of neural networks and how they can be fitted
• Explore the learning process of neural networks
• Build neural network applications with Java using hands-on examples
• Discover the power of the neural network's unsupervised learning process to extract the intrinsic knowledge hidden behind the data
• Apply the code generated in practical examples, including weather forecasting and pattern recognition
• Understand how to make the best choice of learning parameters to ensure you have a more effective application
• Select and split data sets into training, test, and validation sets, and explore validation strategies

In Detail
Want to discover the current state of the art in neural networks, so that you can understand and design new strategies to apply to more complex problems? This book takes you on a complete walkthrough of the process of developing basic to advanced practical examples based on neural networks with Java, giving you everything you need to stand out. You will first learn the basics of neural networks and their process of learning. We then focus on what perceptrons are and their features.

Next, you will implement self-organizing maps using practical examples. Further on, you will learn about some of the applications presented in this book, such as weather forecasting, disease diagnosis, customer profiling, generalization, extreme learning machines, and character recognition (OCR). Finally, you will learn methods to optimize and adapt neural networks in real time. All the examples in the book are provided in the form of illustrative source code, which merges object-oriented programming (OOP) concepts and neural network features to enhance your learning experience.

Style and Approach
This book takes you on a steady learning curve, teaching you the important concepts while being rich in examples. You'll be able to relate to the examples in the book while implementing neural networks in your day-to-day applications.

Neural Network Programming with TensorFlow

by Manpreet Singh Ghotra

Neural networks and their implementation decoded with TensorFlow.

About This Book
• Develop a strong background in neural network programming from scratch, using the popular TensorFlow library.
• Use TensorFlow to implement different kinds of neural networks, from simple feedforward neural networks to multilayered perceptrons, CNNs, RNNs, and more.
• A highly practical guide including real-world datasets and use cases to simplify your understanding of neural networks and their implementation.

Who This Book Is For
This book is meant for developers with a statistical background who want to work with neural networks. Though we will be using TensorFlow as the underlying library for neural networks, the book can be used as a generic resource to bridge the gap between the math and the implementation of deep learning. If you have some understanding of TensorFlow and Python and want to learn what happens at a level lower than the plain API syntax, this book is for you.

What You Will Learn
• Learn the linear algebra and mathematics behind neural networks.
• Dive deep into neural networks, from basic to advanced concepts like CNNs, RNNs, Deep Belief Networks, and deep feedforward networks.
• Explore optimization techniques for problems like local minima, global minima, and saddle points.
• Learn through real-world examples like sentiment analysis.
• Train different types of generative models and explore autoencoders.
• Explore TensorFlow as an example of a deep learning implementation.

In Detail
If you're aware of the buzz surrounding terms such as "machine learning," "artificial intelligence," or "deep learning," you might know what neural networks are. Ever wondered how they help in solving complex computational problems efficiently, or how to train efficient neural networks? This book will teach you just that.

You will start by getting a quick overview of the popular TensorFlow library and how it is used to train different neural networks. You will get a thorough understanding of the fundamentals and basic math for neural networks and why TensorFlow is a popular choice. Then, you will proceed to implement a simple feedforward neural network. Next, you will master optimization techniques and algorithms for neural networks using TensorFlow. Further, you will learn to implement some more complex types of neural networks, such as convolutional neural networks, recurrent neural networks, and Deep Belief Networks. In the course of the book, you will work on real-world datasets to get a hands-on understanding of neural network programming. You will also get to train generative models and will learn the applications of autoencoders.

By the end of this book, you will have a fair understanding of how you can leverage the power of TensorFlow to train neural networks of varying complexities, without any hassle. While you are learning about the various neural network implementations, you will learn the underlying mathematics and linear algebra and how they map to the appropriate TensorFlow constructs.

Neural Network Programming with TensorFlow: Unleash the power of TensorFlow to train efficient neural networks

by Rajdeep Dua, Manpreet Singh Ghotra

Neural networks and their implementation decoded with TensorFlow.

About This Book
• Develop a strong background in neural network programming from scratch, using the popular TensorFlow library.
• Use TensorFlow to implement different kinds of neural networks, from simple feedforward neural networks to multilayered perceptrons, CNNs, RNNs, and more.
• A highly practical guide including real-world datasets and use cases to simplify your understanding of neural networks and their implementation.

Who This Book Is For
This book is meant for developers with a statistical background who want to work with neural networks. Though we will be using TensorFlow as the underlying library for neural networks, the book can be used as a generic resource to bridge the gap between the math and the implementation of deep learning. If you have some understanding of TensorFlow and Python and want to learn what happens at a level lower than the plain API syntax, this book is for you.

What You Will Learn
• Learn the linear algebra and mathematics behind neural networks.
• Dive deep into neural networks, from basic to advanced concepts like CNNs, RNNs, Deep Belief Networks, and deep feedforward networks.
• Explore optimization techniques for problems like local minima, global minima, and saddle points.
• Learn through real-world examples like sentiment analysis.
• Train different types of generative models and explore autoencoders.
• Explore TensorFlow as an example of a deep learning implementation.

In Detail
If you're aware of the buzz surrounding terms such as "machine learning," "artificial intelligence," or "deep learning," you might know what neural networks are. Ever wondered how they help in solving complex computational problems efficiently, or how to train efficient neural networks? This book will teach you just that. You will start by getting a quick overview of the popular TensorFlow library and how it is used to train different neural networks.

You will get a thorough understanding of the fundamentals and basic math for neural networks and why TensorFlow is a popular choice. Then, you will proceed to implement a simple feedforward neural network. Next, you will master optimization techniques and algorithms for neural networks using TensorFlow. Further, you will learn to implement some more complex types of neural networks, such as convolutional neural networks, recurrent neural networks, and Deep Belief Networks. In the course of the book, you will work on real-world datasets to get a hands-on understanding of neural network programming. You will also get to train generative models and will learn the applications of autoencoders. By the end of this book, you will have a fair understanding of how you can leverage the power of TensorFlow to train neural networks of varying complexities, without any hassle. While you are learning about the various neural network implementations, you will learn the underlying mathematics and linear algebra and how they map to the appropriate TensorFlow constructs.

Style and Approach
This book is designed to give you just the right number of concepts to back up the examples. With real-world use cases and problems solved, this book is a handy guide. Each concept is backed by a generic and real-world problem, followed by a variation, making you independent and able to solve any problem with neural networks. All of the content is demystified by a simple and straightforward approach.

Neural Network Projects with Python: The ultimate guide to using Python to explore the true power of neural networks through six projects

by James Loy

Build your machine learning portfolio by creating 6 cutting-edge artificial intelligence projects using neural networks in Python.

Key Features
• Discover neural network architectures (like CNN and LSTM) that are driving recent advancements in AI
• Build expert neural networks in Python using popular libraries such as Keras
• Includes projects such as object detection, face identification, sentiment analysis, and more

Book Description
Neural networks are at the core of recent AI advances, providing some of the best resolutions to many real-world problems, including image recognition, medical diagnosis, text analysis, and more. This book goes through some basic neural network and deep learning concepts, as well as some popular libraries in Python for implementing them. It contains practical demonstrations of neural networks in domains such as fare prediction, image classification, sentiment analysis, and more. In each case, the book provides a problem statement, the specific neural network architecture required to tackle that problem, the reasoning behind the algorithm used, and the associated Python code to implement the solution from scratch.

In the process, you will gain hands-on experience with using popular Python libraries such as Keras to build and train your own neural networks from scratch. By the end of this book, you will have mastered the different neural network architectures and created cutting-edge AI projects in Python that will immediately strengthen your machine learning portfolio.

What You Will Learn
• Learn various neural network architectures and their advancements in AI
• Master deep learning in Python by building and training neural networks
• Master neural networks for regression and classification
• Discover convolutional neural networks for image recognition
• Learn sentiment analysis on textual data using Long Short-Term Memory
• Build and train a highly accurate facial recognition security system

Who This Book Is For
This book is a perfect match for data scientists, machine learning engineers, and deep learning enthusiasts who wish to create practical neural network projects in Python. Readers should already have some basic knowledge of machine learning and neural networks.

Neural Network-Based Adaptive Control of Uncertain Nonlinear Systems

by Heidar A. Talebi, Farzaneh Abdollahi, Kasra Esfandiari

The focus of this book is the application of artificial neural networks in uncertain dynamical systems. It explains how to use neural networks in concert with adaptive techniques for system identification, state estimation, and control problems. The authors begin with a brief historical overview of adaptive control, followed by a review of mathematical preliminaries. In the subsequent chapters, they present several neural network-based control schemes. Each chapter starts with a concise introduction to the problem under study, and a neural network-based control strategy is designed for the simplest case scenario. After these designs are discussed, different practical limitations (i.e., saturation constraints and unavailability of all system states) are gradually added, and other control schemes are developed based on the primary scenario. Through these exercises, the authors present structures that not only provide mathematical tools for navigating control problems, but also supply solutions that are pertinent to real-life systems.

Neural Networks and Deep Learning: A Textbook

by Charu C. Aggarwal

This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning, which are essential for understanding the design of neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book is also rich in discussing different applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications associated with many different areas like recommender systems, machine translation, image captioning, image classification, reinforcement-learning based gaming, and text analytics are covered. The chapters of this book span three categories:

The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. An emphasis is placed in the first two chapters on understanding the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. These methods are studied together with recent feature engineering methods like word2vec.

Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.

Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks. Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10.

The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.

Neural Networks and Deep Learning: A Textbook

by Charu C. Aggarwal

This textbook covers both classical and modern models in deep learning and includes examples and exercises throughout the chapters. Deep learning methods for various data domains, such as text, images, and graphs, are presented in detail. The chapters of this book span three categories:

The basics of neural networks: The backpropagation algorithm is discussed in Chapter 2. Many traditional machine learning models can be understood as special cases of neural networks. Chapter 3 explores the connections between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks.

Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 4 and 5. Chapters 6 and 7 present radial-basis function (RBF) networks and restricted Boltzmann machines.

Advanced topics in neural networks: Chapters 8, 9, and 10 discuss recurrent neural networks, convolutional neural networks, and graph neural networks. Several advanced topics like deep reinforcement learning, attention mechanisms, transformer networks, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 11 and 12.

The textbook is written for graduate students and upper-undergraduate students. Researchers and practitioners working in this field will want to purchase it as well. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques. The second edition is substantially reorganized and expanded, with separate chapters on backpropagation and graph neural networks. Many chapters have been significantly revised over the first edition. Greater focus is placed on modern deep learning ideas such as attention mechanisms, transformers, and pre-trained language models.

Neural Networks and Learning Algorithms in MATLAB (Synthesis Lectures on Intelligent Technologies)

by Oscar Castillo, Rathinasamy Sakthivel, Mohammad Hosein Sabzalian, Fayez F. El-Sousy, Ardahir Mohammadazadeh, Saleh Mobayen

This book explains the basic concepts, theory, and applications of neural networks in a simple unified approach, with clear examples and simulations in the MATLAB programming language. The scripts herein are coded for general purposes to be easily extended to a variety of problems in different areas of application. They are vectorized and optimized to run faster and be applicable to high-dimensional engineering problems. This book will serve as a main reference for graduate and undergraduate courses in neural networks and applications. It will also serve as a main basis for researchers dealing with complex problems that require neural networks for finding good solutions in areas such as time series prediction, intelligent control, and identification. In addition, the problem of designing neural networks by using metaheuristics, such as genetic algorithms and particle swarm optimization, with one objective and with multiple objectives, is presented.

Neural Networks and Micromechanics

by Tatiana Baidyk, Ernst Kussul, Donald C. Wunsch

This is an interdisciplinary field of research involving the use of neural network techniques for image recognition applied to tasks in the area of micromechanics. The book is organized into chapters on classic neural networks and novel neural classifiers; recognition of textures and object forms; micromechanics; and adaptive algorithms with neural and image recognition applications. The authors include theoretical analysis of the proposed approach, they describe their machine tool prototypes in detail, and they present results from experiments involving microassembly, and handwriting and face recognition. This book will benefit scientists, researchers and students working in artificial intelligence, particularly in the fields of image recognition and neural networks, and practitioners in the area of microengineering.
