Hands-On Neural Networks with Keras
Neural networks are used to solve a wide range of problems in different areas of AI and deep learning. Hands-On Neural Networks with Keras starts by teaching you the core concepts of neural networks. You will then combine different neural network models and work with real-world use cases, including computer vision, natural language understanding, synthetic data generation, and more. Moving on, you will become well versed with convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, autoencoders, and generative adversarial networks (GANs), using real-world training datasets. We will examine how to use CNNs for image recognition, how to build reinforcement learning agents, and more. We will dive into the specific architectures of various networks and then implement each of them in a hands-on manner using industry-grade frameworks. By the end of this book, you will be familiar with all prominent deep learning models and frameworks, and with the options available when applying deep learning to real-world scenarios and embedding artificial intelligence as the core fabric of your organization.
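To give a sense of the hands-on style described in the table of contents below, here is a minimal sketch (not taken from the book) of a typical Keras workflow: loading a dataset, building a model with the sequential API, compiling it, training it, and evaluating it. The dataset (MNIST) and the hyperparameters are illustrative assumptions, and the code assumes the tensorflow.keras API.

```python
# Illustrative sketch only (not code from the book): a minimal Keras workflow
# -- load data, define a Sequential model, compile, train, and evaluate.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# Load MNIST, flatten the 28x28 images, and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

# A small fully connected classifier built with Keras's sequential API
model = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])

# Compile with an optimizer, a loss, and a metric, then train and evaluate
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))
```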
Latest chapters
- Leave a review - let other readers know what you think
- Other Books You May Enjoy
- Summary
- Contemplating our future
- Technology and society
- Further reading
Brand: 中图公司
Listed on: 2021-06-24 12:26:43
Publisher: Packt Publishing
The digital rights to this book are provided by 中图公司, which has authorized 上海阅文信息技术有限公司 to produce and distribute this digital edition.
- Cover Page
- Title Page
- Copyright and Credits
- Hands-On Neural Networks with Keras
- About Packt
- Why subscribe?
- Packt.com
- Contributors
- About the author
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Section 1: Fundamentals of Neural Networks
- Overview of Neural Networks
- Defining our goal
- Knowing our tools
- Keras
- TensorFlow
- The fundamentals of neural learning
- What is a neural network?
- Observing the brain
- Building a biological brain
- The physiology of a neuron
- Representing information
- The mysteries of neural encoding
- Distributed representation and learning
- The fundamentals of data science
- Information theory
- Entropy
- Cross entropy
- The nature of data processing
- From data science to ML
- Modeling data in high-dimensional spaces
- The curse of dimensionality
- Algorithmic computation and predictive models
- Matching a model to use cases
- Functional representations
- The pitfalls of ML
- Unbalanced class priors
- Underfitting
- Overfitting
- Bad data
- Irrelevant features and labels
- Summary
- Further reading
- A Deeper Dive into Neural Networks
- From the biological to the artificial neuron – the perceptron
- Building a perceptron
- Input
- Weights
- Summation
- Introducing non-linearity
- Activation functions
- Understanding the role of the bias term
- Output
- Learning through errors
- The mean squared error loss function
- Training a perceptron
- Quantifying loss
- Loss as a function of model weights
- Backpropagation
- Computing the gradient
- The learning rate
- Scaling the perceptron
- A single layered network
- Experimenting with TensorFlow playground
- Capturing patterns hierarchically
- Steps ahead
- Summary
- Signal Processing - Data Analysis with Neural Networks
- Processing signals
- Representational learning
- Avoiding random memorization
- Representing signals with numbers
- Images as numbers
- Feeding a neural network
- Examples of tensors
- Dimensionality of data
- Making some imports
- Keras's sequential API
- Loading the data
- Checking the dimensions
- Building a model
- Introducing Keras layers
- Initializing weights
- Keras activations
- Summarizing your model visually
- Compiling the model
- Fitting the model
- Evaluating model performance
- Regularization
- Adjusting network size
- Size experiments
- Regularizing the weights
- Using dropout layers
- Thinking about dropout intuitively
- Implementing weight regularization in Keras
- Weight regularization experiments
- Implementing dropout regularization in Keras
- Dropout regularization experiments
- Complexity and time
- A summary of MNIST
- Language processing
- Sentiment analysis
- The internet movie reviews dataset
- Loading the dataset
- Checking the shape and type
- Plotting a single training instance
- Decoding the reviews
- Preparing the data
- One-hot encoding
- Vectorizing features
- Vectorizing labels
- Building a network
- Compiling the model
- Fitting the model
- Validation data
- Callbacks
- Early stopping and history callbacks
- Choosing a metric to monitor
- Accessing model predictions
- Probing the predictions
- Summary of IMDB
- Predicting continuous variables
- Boston Housing Prices dataset
- Loading the data
- Exploring the data
- Feature-wise normalization
- Building the model
- Compiling the model
- Plotting training and test errors
- Validating your approach using k-fold validation
- Cross validation with scikit-learn API
- Summary
- Exercises
- Section 2: Advanced Neural Network Architectures
- Convolutional Neural Networks
- Why CNNs?
- The birth of vision
- Understanding biological vision
- Conceptualizing spatial invariance
- Defining receptive fields of neurons
- Implementing a hierarchy of neurons
- The birth of the modern CNN
- Designing a CNN
- Dense versus convolutional layer
- The convolution operation
- Preserving the spatial structure of an image
- Receptive field
- Feature extraction using filters
- Backpropagation of errors in CNNs
- Using multiple filters
- Stride of a convolution
- What are features?
- Visualizing feature extraction with filters
- Looking at complex filters
- Summarizing the convolution operation
- Understanding pooling layers
- Types of pooling operations
- Implementing CNNs in Keras
- Probing our data
- Verifying the data shape
- Normalizing our data
- Making some imports
- Convolutional layer
- Defining the number and size of the filters
- Padding input tensors
- Max Pooling layer
- Leveraging a fully connected layer for classification
- Summarizing our model
- Compiling the model
- Checking model accuracy
- The problem with detecting smiles
- Inside the black box
- Neural network fails
- Visualizing ConvNet learnings
- Visualizing neural activations of intermediate layers
- Predictions on an input image
- Introducing Keras's functional API
- Verifying the number of channels per layer
- Visualizing activation maps
- Understanding saliency
- Visualizing saliency maps with ResNet50
- Loading pictures from a local directory
- Using Keras's visualization module
- Searching through layers
- Exercise
- Gradient weighted class activation mapping
- Visualizing class activations with Keras-vis
- Using the pretrained model for prediction
- Visualizing maximal activations per output class
- Converging a model
- Using multiple filter indices to hallucinate
- Problems with CNNs
- Neural network pareidolia
- Summary
- Recurrent Neural Networks
- Modeling sequences
- Using RNNs for sequential modeling
- What's the catch?
- Basic RNN architecture
- Temporally shared weights
- Sequence modeling variations in RNNs
- Encoding many-to-many representations
- Many-to-one
- One-to-many
- One-to-many for image captioning
- Summarizing different types of sequence processing tasks
- How do RNNs learn?
- A generic RNN layer
- Forward propagation
- Computing activations per time step
- Simplifying the activation equation
- Predicting an output per time step
- The problem of unidirectional information flow
- The problems of long-term dependencies
- Backpropagation through time
- Visualizing backpropagation through time
- Exploding and vanishing gradients
- Thinking on the gradient level
- Preventing exploding gradients through clipping
- Preventing vanishing gradients with memory
- GRUs
- The memory cell
- Representing the memory cell
- Updating the memory value
- Mathematics of the update equation
- Implementing the no-update scenario
- Implementing the update scenario
- Preserving relevance between time steps
- Formalizing the relevance gate
- Building character-level language models in Keras
- Loading in Shakespeare's Hamlet
- Building a dictionary of characters
- Preparing training sequences of characters
- Printing out example sequences
- Vectorizing the training data
- Statistics of character modeling
- Modeling character-level probabilities
- Sampling thresholds
- The purpose of controlling stochasticity
- Greedy sampling
- Stochastic sampling
- Testing different RNN models
- Using custom callbacks to generate text
- Testing multiple models
- Building a SimpleRNN
- Stacking RNN layers
- Building GRUs
- Building bi-directional GRUs
- On processing reality sequentially
- Benefits of re-ordering sequential data
- Bi-directional layer in Keras
- Implementing recurrent dropout
- Visualizing output values
- Visualizing the output of heavier GRU models
- Summary
- Further reading
- Exercise
- Long Short-Term Memory Networks
- On processing complex sequences
- Breaking down memory
- The LSTM network
- Dissecting the LSTM
- Comparing the closest known relative
- GRU memory
- LSTM memory cell
- Treating activations and memory separately
- LSTM memory block
- Importance of the forget gate
- Conceptualizing the difference
- Walking through the LSTM
- Visualizing the flow of information
- Computing cell state
- Computing contender memory
- Computing activations per timestep
- Variations of LSTM and performance
- Understanding peephole connections
- Importance of timing and counting
- Exploring other architectural variations
- Putting our knowledge to use
- On modeling stock market data
- Importing the data
- Sorting and visualizing the trend
- From DataFrame to tensor
- Splitting up the data
- Plotting out training and testing splits
- Windowed normalization
- Denoising the data
- Implementing exponential smoothing
- Visualizing the curve
- Performing one-step-ahead predictions
- Simple moving average prediction
- Exponential moving average prediction
- The problem with one-step-ahead predictions
- Creating sequences of observations
- Reshaping the data
- Making some imports
- Baseline neural networks
- Building a feedforward network
- Recurrent baseline
- Building LSTMs
- Stacked LSTM
- Using helper functions
- Training the model
- Visualizing results
- Closing comments
- Summary
- Exercises
- Reinforcement Learning with Deep Q-Networks
- On reward and gratification
- A new way of examining learning
- Conditioning machines with reinforcement learning
- The credit assignment problem
- The explore-exploit dilemma
- Path to artificial general intelligence
- Simulating environments
- Understanding states, actions, and rewards
- A self-driving taxi cab
- Understanding the task
- Rendering the environment
- Referencing observation space
- Referencing action space
- Interacting with the environment
- Solving the environment randomly
- Trade-off between immediate and future rewards
- Discounting future rewards
- Markov decision process
- Understanding policy functions
- Assessing the value of a state
- Assessing the quality of an action
- Using the Bellman equation
- Updating the Bellman equation iteratively
- Why use neural networks?
- Performing a forward pass in Q-learning
- Performing a backward pass in Q-learning
- Replacing iterative updates with deep learning
- Deep Q-learning in Keras
- Making some imports
- Preprocessing techniques
- Defining input parameters
- Making an Atari game state processor
- Processing individual states
- Processing states in batch
- Processing rewards
- Limitations of reward clipping
- Initializing the environment
- Building the network
- Absence of pooling layers
- Problems with live learning
- Storing experience in replay memory
- Balancing exploration with exploitation
- Epsilon-greedy exploration policy
- Initializing the deep Q-learning agent
- Training the model
- Testing the model
- Summarizing the Q-learning algorithm
- Double Q-learning
- Dueling network architecture
- Exercise
- Limits of Q-learning
- Improving Q-learning with policy gradients
- Summary
- Section 3: Hybrid Model Architecture
- Autoencoders
- Why autoencoders?
- Automatically encoding information
- Understanding the limitations of autoencoders
- Breaking down the autoencoder
- Training an autoencoder
- Overviewing autoencoder archetypes
- Network size and representational power
- Understanding regularization in autoencoders
- Regularization with sparse autoencoders
- Regularization with denoising autoencoders
- Regularization with contractive autoencoders
- Implementing a shallow AE in Keras
- Making some imports
- Probing the data
- Preprocessing the data
- Building the model
- Implementing a sparsity constraint
- Compiling and visualizing the model
- Building the verification model
- Defining a separate encoder network
- Defining a separate decoder network
- Training the autoencoder
- Visualizing the results
- Designing a deep autoencoder
- Making some imports
- Understanding the data
- Importing the data
- Preprocessing the data
- Partitioning the data
- Using functional API to design autoencoders
- Building the model
- Training the model
- Visualizing the results
- Deep convolutional autoencoder
- Compiling and training the model
- Testing and visualizing the results
- Denoising autoencoders
- Training the denoising network
- Visualizing the results
- Summary
- Exercise
- Generative Networks
- Replicating versus generating content
- Understanding the notion of latent space
- Identifying concept vectors
- Diving deeper into generative networks
- Controlled randomness and creativity
- Using randomness to augment outputs
- Sampling from the latent space
- Learning a probability distribution
- Understanding types of generative networks
- Understanding VAEs
- Designing a VAE in Keras
- Loading and pre-processing the data
- Building the encoding module in a VAE
- Sampling the latent space
- Building the decoder module
- Defining a custom variational layer
- Compiling and inspecting the model
- Initiating the training session
- Visualizing the latent space
- Latent space sampling and output generation
- Concluding remarks on VAEs
- Exploring GANs
- Utility and practical applications for GANs
- Diving deeper into GANs
- Problems with optimizing GANs
- Designing a GAN in Keras
- Preparing the data
- Visualizing some instances
- Pre-processing the data
- Designing the generator module
- Designing the discriminator module
- Putting the GAN together
- Helper functions for training
- Helper functions to display output
- The training function
- Arguments in the training function
- Defining the discriminator labels
- Initializing the GAN
- Training the discriminator per batch
- Training the generator per batch
- Evaluating results per epoch
- Executing the training session
- Interpreting test loss during training
- Visualizing results across epochs
- Conclusion
- Summary
- Section 4: Road Ahead
- Contemplating Present and Future Developments
- Sharing representations with transfer learning
- Transfer learning on Keras
- Loading a pretrained model
- Obtaining intermediate layers from a model
- Adding layers to a model
- Loading and preprocessing the data
- Training the network
- Exercises
- Concluding our experiments
- Learning representations
- DNA and technology
- Limits of current neural networks
- Engineering representations for machines
- What should a good representation look like?
- Preprocessing and data treatments
- Vectorization
- Normalization
- Smoothness of the data
- Encouraging sparse representation learning
- Tuning hyperparameters
- Automatic optimization and evolutionary algorithms
- References
- Multi-network predictions and ensemble models
- The future of AI and neural networks
- Global vectors approach
- Distributed representation
- Hardware hurdles
- The road ahead
- Problems with classical computing
- The advent of quantum computing
- Quantum superposition
- Distinguishing Q-Bits from classical counterparts
- Quantum neural networks
- Further reading
- Technology and society
- Contemplating our future
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think (updated: 2021-06-24 15:46:36)

