Artificial Neural Networks with Java: Tools for Building Neural Network Applications
Develop neural network applications using the Java environment. After learning the rules involved in neural network processing, this second edition shows you how to manually process your first neural network example. The book covers the internals of forward and backward propagation and helps you understand...
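As a taste of what the book's early chapters compute by hand, here is a minimal, self-contained Java sketch of the forward pass and one gradient-descent weight update for a single sigmoid neuron. This is not code from the book; the class name, constants, and learning rate are illustrative assumptions.

```java
// A minimal sketch (not from the book) of the single-neuron forward/backward
// pass that the early chapters walk through by hand. All values are illustrative.
public class SingleNeuronSketch {
    public static void main(String[] args) {
        double input = 0.5;          // one training input
        double target = 0.8;         // desired output for that input
        double weight = 0.3;         // initial weight (arbitrary)
        double bias = 0.1;           // initial bias (arbitrary)
        double learningRate = 0.5;   // step size for gradient descent

        // Forward pass: weighted sum followed by the sigmoid activation.
        double z = weight * input + bias;
        double out = 1.0 / (1.0 + Math.exp(-z));

        // Backward pass: chain rule for squared error E = 0.5 * (out - target)^2.
        double dE_dOut = out - target;        // dE/d(out)
        double dOut_dZ = out * (1.0 - out);   // sigmoid derivative
        double delta = dE_dOut * dOut_dZ;

        // Gradient-descent updates for the weight and the bias.
        weight -= learningRate * delta * input;
        bias   -= learningRate * delta;

        System.out.printf("output=%.4f, updated weight=%.4f, updated bias=%.4f%n",
                          out, weight, bias);
    }
}
```

Repeating this update over many records and epochs is, in miniature, the training loop the book later automates with the Encog framework.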
Other Authors:
Format: Electronic book
Language: English
Published: [Place of publication not identified] : Apress, [2022]
Edition: 2nd ed.
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009634659606719
Table of Contents:
- Intro
- Table of Contents
- About the Author
- About the Technical Reviewers
- Acknowledgments
- Introduction
- Part I: Getting Started with Neural Networks
- Chapter 1: Learning About Neural Networks
- Biological and Artificial Neurons
- Activation Functions
- Summary
- Chapter 2: Internal Mechanics of Neural Network Processing
- Function to Be Approximated
- Network Architecture
- Forward Pass Calculation
- Input Record 1
- Input Record 2
- Input Record 3
- Input Record 4
- Back-Propagation Pass
- Function Derivative and Function Divergence
- Most Commonly Used Function Derivatives
- Summary
- Chapter 3: Manual Neural Network Processing
- Example: Manual Approximation of a Function at a Single Point
- Building the Neural Network
- Forward Pass Calculation
- Hidden Layers
- Output Layer
- Backward Pass Calculation
- Calculating Weight Adjustments for the Output-Layer Neurons
- Calculating Adjustment for W211
- Calculating Adjustment for W212
- Calculating Adjustment for W213
- Calculating Weight Adjustments for Hidden-Layer Neurons
- Calculating Adjustment for W111
- Calculating Adjustment for W112
- Calculating Adjustment for W121
- Calculating Adjustment for W122
- Calculating Adjustment for W131
- Calculating Adjustment for W132
- Updating Network Biases
- Back to the Forward Pass
- Hidden Layers
- Output Layer
- Matrix Form of Network Calculation
- Digging Deeper
- Mini-Batches and Stochastic Gradient
- Summary
- Part II: Neural Network Java Development Environment
- Chapter 4: Configuring Your Development Environment
- Installing the Java Environment and NetBeans on Your Windows Machine
- Installing the Encog Java Framework
- Installing the XChart Package
- Summary
- Chapter 5: Neural Network Development Using the Java Encog Framework
- Example: Function Approximation Using the Java Environment
- Network Architecture
- Normalizing the Input Datasets
- Building the Java Program That Normalizes Both Datasets
- Building the Neural Network Processing Program
- Program Code
- Debugging and Executing the Program
- Processing Results for the Training Method
- Testing the Network
- Testing Results
- Digging Deeper
- Summary
- Chapter 6: Neural Network Prediction Outside of the Training Range
- Example: Approximating Periodic Functions Outside of the Training Range
- Network Architecture for the Example
- Program Code for the Example
- Testing the Network
- Example: Correct Way of Approximating Periodic Functions Outside of the Training Range
- Preparing the Training Data
- Network Architecture for the Example
- Program Code for the Example
- Training Results for the Example
- Log of Testing Results for Example 3
- Summary
- Chapter 7: Processing Complex Periodic Functions
- Example: Approximation of a Complex Periodic Function
- Data Preparation
- Reflecting Function Topology in the Data
- Network Architecture
- Program Code
- Training the Network
- Testing the Network
- Digging Deeper
- Summary
- Chapter 8: Approximating Noncontinuous Functions
- Example: Approximating Noncontinuous Functions
- Network Architecture
- Program Code
- Code Fragments for the Training Process
- Unsatisfactory Training Results
- Approximating the Noncontinuous Function Using the Micro-Batch Method
- Program Code for Micro-Batch Processing
- Program Code for the getChart() Method
- Code Fragment 1 of the Training Method
- Code Fragment 2 of the Training Method
- Training Results for the Micro-Batch Method
- Testing the Processing Logic
- Testing the Results for the Micro-Batch Method
- Digging Deeper
- Summary
- Chapter 9: Approximation of Continuous Functions with Complex Topology
- Example: Approximation of Continuous Functions with Complex Topology Using a Conventional Neural Network Process
- Network Architecture for the Example
- Program Code for the Example
- Training Processing Results for the Example
- Approximation of Continuous Functions with Complex Topology Using the Micro-Batch Method
- Program Code for the Example Using the Micro-Batch Method
- Example: Approximation of Spiral-like Functions
- Network Architecture for the Example
- Program Code for the Example
- Approximation of the Same Functions Using the Micro-Batch Method
- Summary
- Chapter 10: Using Neural Networks for the Classification of Objects
- Example: Classification of Records
- Training Dataset
- Network Architecture
- Testing Dataset
- Program Code for Data Normalization
- Program Code for Classification
- Training Results
- Testing Results
- Summary
- Chapter 11: The Importance of Selecting the Correct Model
- Example: Predicting Next Month's Stock Market Price
- Including the Function Topology in the Dataset
- Building Micro-Batch Files
- Network Architecture
- Program Code
- Training Process
- Training Results
- Testing Dataset
- Testing Logic
- Testing Results
- Analyzing Testing Results
- Summary
- Chapter 12: Approximation of Functions in 3D Space
- Example: Approximation of Functions in 3D Space
- Data Preparation
- Network Architecture
- Program Code
- Processing Results
- Summary
- Part III: Introduction to Computer Vision
- Chapter 13: Image Recognition
- Classification of Handwritten Digits
- Preparing the Input Data
- Input Data Conversion
- Building the Conversion Program
- Summary
- Chapter 14: Classification of Handwritten Digits
- Network Architecture
- Program Code
- Programming Logic
- Execution
- Convolutional Neural Network
- Summary
- Index
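Part II builds its examples on the Encog Java framework. For orientation, here is a minimal sketch, assuming Encog 3.x on the classpath, of the kind of network construction and training loop Chapter 5 develops. The XOR data and class name are stand-ins, not the book's dataset or code.

```java
import org.encog.Encog;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

// A minimal Encog sketch (not the book's program): build a small
// feed-forward network and train it until the error falls below a threshold.
public class EncogSketch {
    public static void main(String[] args) {
        // Toy training data (XOR); the book normalizes real function samples instead.
        double[][] input = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        double[][] ideal = { {0}, {1}, {1}, {0} };
        MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        // A 2-3-1 feed-forward network with sigmoid activations.
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));                     // input layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));  // hidden layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1)); // output layer
        network.getStructure().finalizeStructure();
        network.reset(); // randomize initial weights

        // Train with resilient propagation until the error is small.
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        do {
            train.iteration();
        } while (train.getError() > 0.01);
        train.finishTraining();

        System.out.println("Final training error: " + train.getError());
        Encog.getInstance().shutdown();
    }
}
```

The book's examples follow this same shape (normalize data, define layers, train, test), adding dataset normalization and XChart-based plotting of the results.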