Browse Results

Showing 41,001 through 41,025 of 59,846 results

Neural Networks and Deep Learning: A Textbook

by Charu C. Aggarwal

This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning, which are essential for understanding the design concepts of neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book is also rich in discussing different applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications associated with many different areas like recommender systems, machine translation, image captioning, image classification, reinforcement learning-based gaming, and text analytics are covered. The chapters of this book span three categories:
The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. An emphasis is placed in the first two chapters on understanding the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. These methods are studied together with recent feature engineering methods like word2vec.
Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.
Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks.
Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10. The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.

Neural Networks and Deep Learning: A Textbook

by Charu C. Aggarwal

This textbook covers both classical and modern models in deep learning and includes examples and exercises throughout the chapters. Deep learning methods for various data domains, such as text, images, and graphs, are presented in detail. The chapters of this book span three categories:
The basics of neural networks: The backpropagation algorithm is discussed in Chapter 2. Many traditional machine learning models can be understood as special cases of neural networks. Chapter 3 explores the connections between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks.
Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 4 and 5. Chapters 6 and 7 present radial-basis function (RBF) networks and restricted Boltzmann machines.
Advanced topics in neural networks: Chapters 8, 9, and 10 discuss recurrent neural networks, convolutional neural networks, and graph neural networks. Several advanced topics like deep reinforcement learning, attention mechanisms, transformer networks, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 11 and 12.
The textbook is written for graduate students and upper-level undergraduate students; researchers and practitioners working in the field will also find it useful. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques. The second edition is substantially reorganized and expanded, with separate chapters on backpropagation and graph neural networks. Many chapters have been significantly revised over the first edition, and greater focus is placed on modern deep learning ideas such as attention mechanisms, transformers, and pre-trained language models.

Neural Networks and Learning Algorithms in MATLAB (Synthesis Lectures on Intelligent Technologies)

by Oscar Castillo, Rathinasamy Sakthivel, Mohammad Hosein Sabzalian, Fayez F. El-Sousy, Ardahir Mohammadazadeh, Saleh Mobayen

This book explains the basic concepts, theory and applications of neural networks in a simple unified approach with clear examples and simulations in the MATLAB programming language. The scripts herein are coded for general purposes to be easily extended to a variety of problems in different areas of application. They are vectorized and optimized to run faster and be applicable to high-dimensional engineering problems. This book will serve as a main reference for graduate and undergraduate courses in neural networks and applications. This book will also serve as a main basis for researchers dealing with complex problems that require neural networks for finding good solutions in areas, such as time series prediction, intelligent control and identification. In addition, the problem of designing neural network by using metaheuristics, such as the genetic algorithms and particle swarm optimization, with one objective and with multiple objectives, is presented.

Neural Networks and Micromechanics

by Tatiana Baidyk, Ernst Kussul, Donald C. Wunsch

This is an interdisciplinary field of research involving the use of neural network techniques for image recognition applied to tasks in the area of micromechanics. The book is organized into chapters on classic neural networks and novel neural classifiers; recognition of textures and object forms; micromechanics; and adaptive algorithms with neural and image recognition applications. The authors include theoretical analysis of the proposed approach, they describe their machine tool prototypes in detail, and they present results from experiments involving microassembly, and handwriting and face recognition. This book will benefit scientists, researchers and students working in artificial intelligence, particularly in the fields of image recognition and neural networks, and practitioners in the area of microengineering.

Neural Networks and Statistical Learning

by Ke-Lin Du, M. N. Swamy

This book provides a broad yet detailed introduction to neural networks and machine learning in a statistical framework. A single, comprehensive resource for study and further research, it explores the major popular neural network models and statistical learning approaches with examples and exercises, allowing readers to gain a practical working understanding of the content. This updated new edition presents recently published results and includes six new chapters that correspond to recent advances in computational learning theory, sparse coding, deep learning, big data, and cloud computing. Each chapter features state-of-the-art descriptions and significant research findings. The topics covered include:
• the multilayer perceptron;
• the Hopfield network;
• associative memory models;
• clustering models and algorithms;
• the radial basis function network;
• recurrent neural networks;
• nonnegative matrix factorization;
• independent component analysis;
• probabilistic and Bayesian networks; and
• fuzzy sets and logic.
Focusing on the prominent accomplishments and their practical aspects, this book provides academic and technical staff, as well as graduate students and researchers, with a solid foundation and comprehensive reference on the fields of neural networks, pattern recognition, signal processing, and machine learning.

Neural Networks and Statistical Learning

by Ke-Lin Du, M. N. S. Swamy

Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardware implementations, and some machine learning topics. Applications to biometric/bioinformatics and data mining are also included. Focusing on the prominent accomplishments and their practical aspects, academic and technical staff, graduate students and researchers will find that this provides a solid foundation and encompassing reference for the fields of neural networks, pattern recognition, signal processing, machine learning, computational intelligence, and data mining.

Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition

by Sandhya Samarasinghe

In response to the exponentially increasing need to analyze vast amounts of data, Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition provides scientists with a simple but systematic introduction to neural networks. Beginning with an introductory discussion on the role of neural networks in

Neural Networks for Electronics Hobbyists: A Non-technical Project-based Introduction

by Richard McKeon

Learn how to implement and build a neural network with this non-technical, project-based book as your guide. As you work through the chapters, you'll build an electronics project, providing hands-on experience in training a network. There are no prerequisites here, and you won't see a single line of computer code in this book. Instead, it takes a hardware approach using very simple electronic components. You'll start off with an interesting non-technical introduction to neural networks and then construct an electronics project. The project isn't complicated, but it illustrates how back propagation can be used to adjust connection strengths or "weights" and train a network. By the end of this book, you'll be able to take what you've learned and apply it to your own projects. If you like to tinker around with components and build circuits on a breadboard, Neural Networks for Electronics Hobbyists is the book for you.
What You'll Learn
• Gain a practical introduction to neural networks
• Review techniques for training networks with electrical hardware and supervised learning
• Understand how parallel processing differs from standard sequential programming
Who This Book Is For
Anyone interested in neural networks, from electronics hobbyists looking for an interesting project to build, to laypersons with no experience. Programmers who are familiar with neural networks but have only implemented them using computer code will also benefit from this book.

Neural Networks in Unity: C# Programming For Windows 10 Uwp

by Abhishek Nandy, Manisha Biswas

Learn the core concepts of neural networks and discover the different types of neural network, using Unity as your platform. In this book you will start by exploring back propagation and unsupervised neural networks with Unity and C#. You'll then move on to activation functions, such as sigmoid functions, step functions, and so on. The author also explains all the variations of neural networks, such as feed forward, recurrent, and radial. Once you've gained the basics, you'll start programming Unity with C#. In this section the author discusses constructing neural networks for unsupervised learning, representing a neural network in terms of data structures in C#, and replicating a neural network in Unity as a simulation. Finally, you'll define back propagation with Unity C#, before compiling your project.
What You'll Learn
• Discover the concepts behind neural networks
• Work with Unity and C#
• See the difference between fully connected and convolutional neural networks
• Master neural network processing for Windows 10 UWP
Who This Book Is For
Gaming professionals, machine learning and deep learning enthusiasts.

Neural Networks, Machine Learning, and Image Processing: Mathematical Modeling and Applications

by Manoj Sahni, Ritu Sahni, Jose M. Merigo

SECTION I: Mathematical Modeling and Neural Networks' Mathematical Essence
Chapter 1: Mathematical Modeling on Thermoregulation in Sarcopenia
1.1. Introduction
1.2. Discretization
1.3. Modeling and Simulation of Basal Metabolic Rate and Skin Layers Thickness
1.4. Mathematical Model and Boundary Conditions
1.5. Solution of the Model
1.6. Numerical Results and Discussion
1.7. Conclusion
References
Chapter 2: Multi-objective University Course Scheduling for Un

Neural Networks with Discontinuous/Impact Activations

by Marat Akhmet, Enes Yılmaz

The main subject of this book is new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These discontinuities are natural, because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided.

Neural Networks with Keras Cookbook: Over 70 recipes leveraging deep learning techniques across image, text, audio, and game bots

by V Kishore Ayyadevara

Implement neural network architectures by building them from scratch for multiple real-world applications.
Key Features
• Build multiple neural network architectures from scratch, such as CNN, RNN, and LSTM, in Keras
• Discover tips and tricks for designing a robust neural network to solve real-world problems
• Graduate from understanding the working details of neural networks to mastering the art of fine-tuning them
Book Description
This book will take you from the basics of neural networks to advanced implementations of architectures using a recipe-based approach. We will learn how neural networks work and the impact of various hyperparameters on a network's accuracy, along with leveraging neural networks for structured and unstructured data. Later, we will learn how to classify and detect objects in images. We will also learn to use transfer learning for multiple applications, including a self-driving car using convolutional neural networks. We will generate images while leveraging GANs and also by performing image encoding. Additionally, we will perform text analysis using word vector-based techniques. Later, we will use recurrent neural networks and LSTMs to implement chatbot and machine translation systems. Finally, you will learn about transcribing images and audio, generating captions, and using deep Q-learning to build an agent that plays the Space Invaders game. By the end of this book, you will have developed the skills to choose and customize multiple neural network architectures for the various deep learning problems you might encounter.
What you will learn
• Build multiple advanced neural network architectures from scratch
• Explore transfer learning to perform object detection and classification
• Build self-driving car applications using instance and semantic segmentation
• Understand data encoding for image, text, and recommender systems
• Implement text analysis using sequence-to-sequence learning
• Leverage a combination of CNN and RNN to perform end-to-end learning
• Build agents to play games using deep Q-learning
Who this book is for
This book targets beginners and intermediate-level machine learning practitioners and data scientists who have just started their journey with neural networks, and who are looking for resources to help them navigate the various neural network architectures; you'll build multiple architectures, with concomitant case studies ordered by the complexity of the problem. A basic understanding of Python programming and a familiarity with basic machine learning are all you need to get started.

Neural Networks with Model Compression (Computational Intelligence Methods and Applications)

by Baochang Zhang, Tiancheng Wang, Sheng Xu, David Doermann

Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increase the demand for computational resources. The number of floating-point operations (FLOPs) has increased dramatically with larger networks, and this has become an obstacle for convolutional neural networks (CNNs) being developed for mobile and embedded devices. In this context, our book will focus on CNN compression and acceleration, which are important for the research community. We will describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to automatically build neural networks by searching over a vast architecture space. Our book will also introduce NAS due to its superiority and state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models on image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and the potential of network compression on practical applications. Moreover, interested readers should have basic knowledge about machine learning and deep learning to better understand the methods described in this book.

Neural Networks with R

by Giuseppe Ciaburro, Balaji Venkateswaran

Uncover the power of artificial neural networks by implementing them through R code.
About This Book
• Develop a strong background in neural networks with R, to implement them in your applications
• Build smart systems using the power of deep learning
• Real-world case studies to illustrate the power of neural network models
Who This Book Is For
This book is intended for anyone who has a statistical background, with knowledge of R, and wants to work with neural networks to get better results from complex data. If you are interested in artificial intelligence and deep learning and you want to level up, then this book is what you need!
What You Will Learn
• Set up R packages for neural networks and deep learning
• Understand the core concepts of artificial neural networks
• Understand neurons, perceptrons, bias, weights, and activation functions
• Implement supervised and unsupervised machine learning in R for neural networks
• Predict and classify data automatically using neural networks
• Evaluate and fine-tune the models you build
In Detail
Neural networks are one of the most fascinating machine learning models for solving complex computational problems efficiently. Neural networks are used to solve a wide range of problems in different areas of AI and machine learning. This book explains the niche aspects of neural networking and provides you with a foundation to get started with advanced topics. The book begins with neural network design using the neuralnet package; then you'll build solid foundational knowledge of how a neural network learns from data and the principles behind it. The book covers various types of neural networks, including recurrent neural networks and convolutional neural networks. You will not only learn how to train neural networks but will also explore generalization of these networks. Later we will delve into combining different neural network models and work with real-world use cases. By the end of this book, you will be able to implement neural network models in your applications, with the help of the practical examples in the book.
Style and approach
A step-by-step guide filled with real-world practical examples.

Neural Networks with TensorFlow and Keras: Training, Generative Models, and Reinforcement Learning

by Philip Hua

Explore the capabilities of machine learning and neural networks. This comprehensive guidebook is tailored for professional programmers seeking to deepen their understanding of neural networks, machine learning techniques, and large language models (LLMs). The book explores the core of machine learning techniques, covering essential topics such as data pre-processing, model selection, and customization. It provides a robust foundation in neural network fundamentals, supplemented by practical case studies and projects. You will explore various network topologies, including Deep Neural Networks (DNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Variational Autoencoders (VAE), Generative Adversarial Networks (GAN), and Large Language Models (LLMs). Each concept is explained with clear, step-by-step instructions and accompanied by Python code examples using the latest versions of TensorFlow and Keras, ensuring a hands-on learning experience. By the end of this book, you will have gained the practical skills to apply these techniques to solving problems. Whether you are looking to advance your career or enhance your programming capabilities, this book provides the tools and knowledge needed to excel in the rapidly evolving field of machine learning and neural networks.
What You Will Learn
• Grasp the fundamentals of various neural network topologies, including DNN, RNN, LSTM, VAE, GAN, and LLMs
• Implement neural networks using the latest versions of TensorFlow and Keras, with detailed Python code examples
• Know the techniques for data pre-processing, model selection, and customization to optimize machine learning models
• Apply machine learning and neural network techniques in various professional scenarios
Who This Book Is For
Data scientists, machine learning enthusiasts, and software developers who wish to deepen their understanding of neural networks and machine learning techniques

Neural Representations of Natural Language (Studies in Computational Intelligence #783)

by Lyndon White, Roberto Togneri, Wei Liu, Mohammed Bennamoun

This book offers an introduction to modern natural language processing using machine learning, focusing on how neural networks create a machine-interpretable representation of the meaning of natural language. Language is crucially linked to ideas – as Webster’s 1923 “English Composition and Literature” puts it: “A sentence is a group of words expressing a complete thought”. Thus the representation of sentences and the words that make them up is vital in advancing artificial intelligence and other “smart” systems currently being developed. Providing an overview of the research in the area, from Bengio et al.’s seminal work on a “Neural Probabilistic Language Model” in 2003, to the latest techniques, this book enables readers to gain an understanding of how the techniques are related and what is best for their purposes. As well as an introduction to neural networks in general and recurrent neural networks in particular, this book details the methods used for representing words, senses of words, and larger structures such as sentences or documents. The book highlights practical implementations and discusses many aspects that are often overlooked or misunderstood. The book includes thorough instruction on challenging areas such as hierarchical softmax and negative sampling, to ensure the reader fully and easily understands the details of how the algorithms function. Combining practical aspects with a more traditional review of the literature, it is directly applicable to a broad readership. It is an invaluable introduction for early graduate students working in natural language processing; a trustworthy guide for industry developers wishing to make use of recent innovations; and a sturdy bridge for researchers already familiar with linguistics or machine learning wishing to understand the other.

Neural Search - From Prototype to Production with Jina: Build deep learning–powered search systems that you can deploy and manage with ease

by Bo Wang, Cristian Mitroi, Feng Wang, Shubham Saboo, Susana Guzman

Implement neural search systems on the cloud by leveraging Jina design patterns.
Key Features
• Identify the different search techniques and discover applications of neural search
• Gain a solid understanding of vector representation and apply your knowledge in neural search
• Unlock deeper levels of knowledge of Jina for neural search
Book Description
Search is a big and ever-growing part of the tech ecosystem. Traditional search, however, has limitations that are hard to overcome because of the way it is designed. Neural search is a novel approach that uses the power of machine learning to retrieve information using vector embeddings as first-class citizens, opening up new possibilities for improving the results obtained through traditional search. Although neural search is a powerful tool, it is new, and fine-tuning it can be tedious, as it requires you to understand the several components on which it relies. Jina fills this gap by providing an infrastructure that reduces the time and complexity involved in creating deep learning–powered search engines. This book will enable you to learn the fundamentals of neural networks for neural search, its strengths and weaknesses, as well as how to use Jina to build a search engine. With the help of step-by-step explanations, practical examples, and self-assessment questions, you'll become well-versed in the basics of neural search and core Jina concepts, and learn to apply this knowledge to build your own search engine. By the end of this deep learning book, you'll be able to make the most of Jina's neural search design patterns to build an end-to-end search solution for any modality.
What you will learn
• Understand how neural search and legacy search work
• Grasp the machine learning and math fundamentals needed for neural search
• Get to grips with the foundations of vector representation
• Explore the basic components of Jina
• Analyze search systems with different modalities
• Uncover the capabilities of Jina with the help of practical examples
Who this book is for
If you are a machine learning, deep learning, or artificial intelligence engineer, or a Python engineer, interested in building a search system of any kind (text, QA, image, audio, PDF, 3D models, or others) using modern software architecture and state-of-the-art deep learning techniques, this book is for you.

Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks

by Russell D. Reed, Robert J. Marks

Artificial neural networks are nonlinear mapping systems whose structure is loosely based on principles observed in the nervous systems of humans and animals. The basic idea is that massive systems of simple units linked together in appropriate ways can generate many complex and interesting behaviors. This book focuses on the subset of feedforward artificial neural networks called multilayer perceptrons (MLP). These are the most widely used neural networks, with applications as diverse as finance (forecasting), manufacturing (process control), and science (speech and image recognition). This book presents an extensive and practical overview of almost every aspect of MLP methodology, progressing from an initial discussion of what MLPs are and how they might be used to an in-depth examination of technical factors affecting performance. The book can be used as a tool kit by readers interested in applying networks to specific problems, yet it also presents theory and references outlining the last ten years of MLP research.

Neural-Symbolic Learning and Reasoning: 18th International Conference, NeSy 2024, Barcelona, Spain, September 9–12, 2024, Proceedings, Part I (Lecture Notes in Computer Science #14979)

by Tarek R. Besold, Artur D’Avila Garcez, Ernesto Jimenez-Ruiz, Roberto Confalonieri, Pranava Madhyastha, Benedikt Wagner

This book constitutes the refereed proceedings of the 18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024, held in Barcelona, Spain, during September 9–12, 2024. The 30 full papers and 18 short papers were carefully reviewed and selected from 89 submissions; they present the latest and ongoing research work on neurosymbolic AI. Neurosymbolic AI aims to build rich computational models and systems by combining neural and symbolic learning and reasoning paradigms. This combination hopes to form synergies among their strengths while overcoming their complementary weaknesses.

Neural-Symbolic Learning and Reasoning: 18th International Conference, NeSy 2024, Barcelona, Spain, September 9–12, 2024, Proceedings, Part II (Lecture Notes in Computer Science #14980)

by Tarek R. Besold, Artur D’Avila Garcez, Ernesto Jimenez-Ruiz, Roberto Confalonieri, Pranava Madhyastha, Benedikt Wagner

This book constitutes the refereed proceedings of the 18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024, held in Barcelona, Spain, during September 9–12, 2024. The 30 full papers and 18 short papers were carefully reviewed and selected from 89 submissions; they present the latest and ongoing research work on neurosymbolic AI. Neurosymbolic AI aims to build rich computational models and systems by combining neural and symbolic learning and reasoning paradigms. This combination hopes to form synergies among their strengths while overcoming their complementary weaknesses.

Neural Text-to-Speech Synthesis (Artificial Intelligence: Foundations, Theory, and Algorithms)

by Xu Tan

Text-to-speech (TTS) aims to synthesize intelligible and natural speech from a given text. It is a hot topic in language, speech, and machine learning research and has broad applications in industry. This book introduces neural network-based TTS in the era of deep learning, aiming to provide a good understanding of neural TTS, current research and applications, and future research trends. The book first introduces the history of TTS technologies, gives an overview of neural TTS, and provides preliminary knowledge on language and speech processing, neural networks and deep learning, and deep generative models. It then introduces neural TTS from the perspective of key components (text analyses, acoustic models, vocoders, and end-to-end models) and advanced topics (expressive and controllable, robust, model-efficient, and data-efficient TTS). It also points out some future research directions and collects resources related to TTS. This book is the first to introduce neural TTS in a comprehensive and easy-to-understand way and can serve both academic researchers and industry practitioners working on TTS.

The NeurIPS '18 Competition: From Machine Learning to Intelligent Conversations (The Springer Series on Challenges in Machine Learning)

by Sergio Escalera, Ralf Herbrich

This volume presents the results of the Neural Information Processing Systems Competition track at the 2018 NeurIPS conference. The competition follows the same format as the 2017 competition track for NIPS. Out of 21 submitted proposals, eight competition proposals were selected, spanning the areas of robotics, health, computer vision, natural language processing, systems, and physics. Competitions have become an integral part of advancing the state of the art in artificial intelligence (AI). They exhibit one important difference from benchmarks: competitions test a system end-to-end rather than evaluating only a single component, and they assess the practicability of an algorithmic solution in addition to assessing feasibility. The eight competitions aim at advancing the state of the art in deep reinforcement learning, adversarial learning, and automated machine learning, among others, including new applications for intelligent agents in gaming and conversational settings, energy physics, and prosthetics.

Neuro Design: Neuromarketing Insights to Boost Engagement and Profitability

by Darren Bridger

Today, businesses of all sizes generate a great deal of creative graphic media and content, including websites, presentations, videos and social media posts. Most big companies, including Procter & Gamble, Coca-Cola, Tesco and Google, now use neuroscience research and theories to optimise their digital content. Neuro Design opens up this new world of neuromarketing design theories and recommendations, and describes insights from the growing field of neuroaesthetics that will enable readers to enhance customer engagement with their website and boost profitability.

Neuro-Fuzzy Equalizers for Mobile Cellular Channels

by K.C. Raveendranathan

Equalizers are present in all forms of communication systems. Neuro-Fuzzy Equalizers for Mobile Cellular Channels details the modeling of a mobile broadband communication channel and the design of a neuro-fuzzy adaptive equalizer for it. The book focuses on the simulation of wireless channel equalizers using the adaptive-network-based fuzzy inference system (ANFIS). It surveys currently existing equalizers for wireless channels and discusses several techniques for channel equalization, including the type-2 fuzzy adaptive filter (type-2 FAF), the compensatory neuro-fuzzy filter (CNFF), and the radial basis function (RBF) neural network.

Neuro-Fuzzy Equalizers for Mobile Cellular Channels starts with a brief introduction to channel equalizers and the nature of mobile cellular channels with regard to frequency reuse and the resulting co-channel interference (CCI). It considers the many channel models available for mobile cellular channels, establishes the mobile indoor channel as a Rayleigh fading channel, presents the channel equalization problem, and examines various equalizers for mobile cellular channels. The book discusses conventional equalizers such as the linear equalizer (LE) and the decision feedback equalizer (DFE) using a simple least-mean-squares (LMS) algorithm and transversal equalizers. It also covers channel equalization with neural networks and fuzzy logic, and classifies various equalizers. As this is a fairly new branch of study, the book considers in detail the concept of fuzzy logic controllers in noise cancellation problems and provides the fundamental concepts of neuro-fuzzy systems. The final chapter offers a recap and explores avenues for further research. The book also establishes a common mathematical framework for the equalizers using the RBF model and develops a mathematical model for ultra-wideband (UWB) channels using the channel covariance matrix (CCM).
- Introduces the novel application of the adaptive-network-based fuzzy inference system (ANFIS) to the design of wireless channel equalizers
- Models ultra-wideband (UWB) channels using the channel covariance matrix
- Formulates a unified radial basis function (RBF) framework for ANFIS-based, type-2 fuzzy adaptive filter (FAF), and compensatory neuro-fuzzy equalizers
- Makes extensive use of MATLAB® as the simulation tool in all of the above cases
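The conventional transversal LMS equalizer that the book takes as its baseline can be illustrated with a minimal sketch. This is a generic real-valued baseband version for demonstration only, not code from the book; the tap count, step size, and test channel below are arbitrary choices.

```python
import numpy as np

def lms_equalize(received, training, num_taps=5, mu=0.01):
    """Minimal transversal (tapped-delay-line) LMS equalizer sketch.

    received: distorted channel output samples
    training: known transmitted symbols used for adaptation
    Returns the adapted tap weights and the equalizer output sequence.
    """
    w = np.zeros(num_taps)          # equalizer tap weights
    buf = np.zeros(num_taps)        # tapped delay line
    out = np.zeros(len(training))
    for n in range(len(training)):
        buf = np.roll(buf, 1)       # shift the delay line
        buf[0] = received[n]
        y = w @ buf                 # equalizer output
        e = training[n] - y         # error against the training symbol
        w += mu * e * buf           # LMS weight update
        out[n] = y
    return w, out
```

Run against a toy two-tap channel, the squared error should shrink as the taps adapt, which is the behavior neuro-fuzzy equalizers such as ANFIS aim to improve on for harder, nonlinear channels.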

Neuro-fuzzy Modeling of Multi-field Surface Neuroprostheses for Hand Grasping (Springer Theses)

by Eukene Imatz Ojanguren

This thesis presents a novel neuro-fuzzy modeling approach for grasp neuroprostheses. It first offers a detailed study of the discomfort caused by applying functional electrical stimulation (FES) to the upper limb. It then briefly discusses previous methods of modeling hand movements induced by FES before introducing the new modeling approach based on intelligent systems. This approach is thoroughly described in the book, together with its proposed application to inducing hand and finger movements by means of a surface FES system based on multi-field electrodes. Validation tests, carried out on both healthy and neurologically impaired subjects, demonstrate the efficacy of the proposed modeling method. All in all, the book proposes an innovative system based on fuzzy neural networks that is expected to improve the design and validation of advanced control systems for non-invasive grasp neuroprostheses.
