Neural Network Programming with TensorFlow: Unleash the power of TensorFlow to train efficient neural networks

Neural Networks and their implementation decoded with TensorFlow.

About This Book: Develop a strong background in neural network programming from scratch, using the popular TensorFlow library. Use TensorFlow to implement different kinds of neural networks, from simple feedforward neural networks to m...

Full description

Bibliographic Details
Other Authors: Ghotra, Manpreet Singh (author); Dua, Rajdeep (author)
Format: eBook
Language: English
Published: Birmingham, England; Mumbai, [India]: Packt, 2017.
Edition: 1st edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630389406719
Table of Contents:
  • Cover
  • Copyright
  • Credits
  • About the Authors
  • About the Reviewer
  • www.PacktPub.com
  • Customer Feedback
  • Table of Contents
  • Preface
  • Chapter 1: Maths for Neural Networks
  • Understanding linear algebra
  • Environment setup
  • Setting up the Python environment in PyCharm
  • Linear algebra structures
  • Scalars, vectors, and matrices
  • Tensors
  • Operations
  • Vectors
  • Matrices
  • Matrix multiplication
  • Trace operator
  • Matrix transpose
  • Matrix diagonals
  • Identity matrix
  • Inverse matrix
  • Solving linear equations
  • Singular value decomposition
  • Eigenvalue decomposition
  • Principal Component Analysis
  • Calculus
  • Gradient
  • Hessian
  • Determinant
  • Optimization
  • Optimizers
  • Summary
  • Chapter 2: Deep Feedforward Networks
  • Defining feedforward networks
  • Understanding backpropagation
  • Implementing feedforward networks with TensorFlow
  • Analyzing the Iris dataset
  • Code execution
  • Implementing feedforward networks with images
  • Analyzing the effect of activation functions on the feedforward networks' accuracy
  • Summary
  • Chapter 3: Optimization for Neural Networks
  • What is optimization?
  • Types of optimizers
  • Gradient descent
  • Different variants of gradient descent
  • Algorithms to optimize gradient descent
  • Which optimizer to choose
  • Optimization with an example
  • Summary
  • Chapter 4: Convolutional Neural Networks
  • An overview and the intuition of CNN
  • Single Conv Layer Computation
  • CNN in TensorFlow
  • Image loading in TensorFlow
  • Convolution operations
  • Convolution on an image
  • Strides
  • Pooling
  • Max pool
  • Example code
  • Average pool
  • Image classification with convolutional networks
  • Defining a tensor for input images and the first convolution layer
  • Input tensor
  • First convolution layer
  • Second convolution layer
  • Third convolution layer
  • Flatten the layer
  • Fully connected layers
  • Defining cost and optimizer
  • Optimizer
  • First epoch
  • Plotting filters and their effects on an image
  • Summary
  • Chapter 5: Recurrent Neural Networks
  • Introduction to RNNs
  • RNN implementation
  • Computational graph
  • RNN implementation with TensorFlow
  • Computational graph
  • Introduction to long short term memory networks
  • Life cycle of LSTM
  • LSTM implementation
  • Computational graph
  • Sentiment analysis
  • Word embeddings
  • Sentiment analysis with an RNN
  • Computational graph
  • Summary
  • Chapter 6: Generative Models
  • Generative models
  • Discriminative versus generative models
  • Types of generative models
  • Autoencoders
  • GAN
  • Sequence models
  • GANs
  • GAN with an example
  • Types of GANs
  • Vanilla GAN
  • Conditional GAN
  • Info GAN
  • Wasserstein GAN
  • Coupled GAN
  • Summary
  • Chapter 7: Deep Belief Networks
  • Understanding deep belief networks
  • DBN implementation
  • Class initialization
  • RBM class
  • Pretraining the DBN
  • Model training
  • Predicting the label
  • Finding the accuracy of the model
  • DBN implementation for the MNIST dataset
  • Loading the dataset
  • Input parameters for a DBN with 256-neuron RBM layers
  • Output for a DBN with 256-neuron RBM layers
  • Effect of the number of neurons in an RBM layer in a DBN
  • An RBM layer with 512 neurons
  • An RBM layer with 128 neurons
  • Comparing the accuracy metrics
  • DBNs with two RBM layers
  • Classifying the NotMNIST dataset with a DBN
  • Summary
  • Chapter 8: Autoencoders
  • Autoencoder algorithms
  • Under-complete autoencoders
  • Dataset
  • Basic autoencoders
  • Autoencoder initialization
  • AutoEncoder class
  • Basic autoencoders with MNIST data
  • Basic autoencoder plot of weights
  • Basic autoencoder recreated images plot
  • Basic autoencoder full code listing
  • Basic autoencoder summary
  • Additive Gaussian Noise autoencoder
  • Autoencoder class
  • Additive Gaussian Autoencoder with the MNIST dataset
  • Training the model
  • Plotting the weights
  • Plotting the reconstructed images
  • Additive Gaussian autoencoder full code listing
  • Comparing basic encoder costs with the Additive Gaussian Noise autoencoder
  • Additive Gaussian Noise autoencoder summary
  • Sparse autoencoder
  • KL divergence
  • KL divergence in TensorFlow
  • Cost of a sparse autoencoder based on KL Divergence
  • Complete code listing of the sparse autoencoder
  • Sparse autoencoder on MNIST data
  • Comparing the Sparse encoder with the Additive Gaussian Noise encoder
  • Summary
  • Chapter 9: Research in Neural Networks
  • Avoiding overfitting in neural networks
  • Problem statement
  • Solution
  • Results
  • Large-scale video processing with neural networks
  • Resolution improvements
  • Feature histogram baselines
  • Quantitative results
  • Named entity recognition using a twisted neural network
  • Example of a named entity recognition
  • Defining Twinet
  • Results
  • Bidirectional RNNs
  • BRNN on TIMIT dataset
  • Summary
  • Appendix: Getting started with TensorFlow
  • Environment setup
  • TensorFlow comparison with NumPy
  • Computational graph
  • Graph
  • Session objects
  • Variables
  • Scope
  • Data input
  • Placeholders and feed dictionaries
  • Auto differentiation
  • TensorBoard
  • Index.