RNN (Recurrent Neural Networks)

  • RNNs are computationally expensive to train.
  • RNNs are designed for Time Series Data.
  • There are more complex neural networks that combine CNNs and RNNs.
  • RNNs suffer from vanishing and exploding gradient problems.

A Recurrent Neural Network is a generalization of a feedforward neural network that has internal memory.

Usage of RNN

RNNs (Recurrent Neural Networks) are designed for Sequential Data.

  • Trading Signals
  • Music Signals
  • NLP (Natural Language Processing)
    • Speech recognition and Handwriting recognition
      • Since the output depends on what came before (sequential).

The specific feature of RNNs is that they have Memory.

Real Life Examples

  • Chat Bots
  • Machine Translation
  • Sentiment Classification
  • Music Generation
  • Trajectory Prediction - Self-Driving Cars
  • Climate Pattern Prediction

In an RNN, we forward propagate as usual, but we keep the information from the hidden layer for the next sample. The hidden layer for the second sample is a function of the new inputs, the updated weights, and the hidden layer from the previous iteration.

There are additional weights that lead from the hidden units to themselves.

The indices here indicate time steps, rather than specific units.
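
As a concrete illustration, here is a minimal NumPy sketch of that recurrence. The parameter names (W_xh, W_hh, b_h) and the dimensions are illustrative assumptions, not taken from any particular library:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state mixes the current
    input with the hidden state carried over from the previous step."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5

# Illustrative parameter shapes; W_hh holds the extra weights
# that lead from the hidden units back to themselves.
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h_t = np.zeros(hidden_dim)                          # initial hidden state (the "memory")
for x_t in rng.normal(size=(seq_len, input_dim)):   # loop index is time, not a unit
    h_t = rnn_step(x_t, h_t, W_xh, W_hh, b_h)
print(h_t.shape)  # (8,)
```

The only difference from a feedforward layer is the W_hh term, which feeds the previous hidden state back into the computation.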

Problem of RNN

  • Vanishing gradient
  • Exploding gradient
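
To see where these problems come from, consider a toy sketch (illustrative numbers only, not any real network): backpropagation through time multiplies the gradient by the recurrent weight matrix once per time step, so over a long sequence the gradient either shrinks toward zero or blows up.

```python
import numpy as np

grad = np.ones(8)  # stand-in for the gradient arriving at the last time step

for scale in (0.5, 1.5):             # illustrative recurrent weight scales
    W_hh = scale * np.eye(8)         # toy recurrent weight matrix
    g = grad.copy()
    for _ in range(50):              # backprop through 50 time steps
        g = W_hh.T @ g               # one multiplication per step
    print(scale, np.linalg.norm(g))  # ~2.5e-15 (vanishes) vs ~1.8e+09 (explodes)
```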

To overcome these problems we use the LSTM (Long Short-Term Memory), a special kind of recurrent network.

LSTM (Long Short-Term Memory)

Long Short-Term Memory (LSTM) networks are a modified version of recurrent neural networks that makes it easier to retain past data in memory.

Simple RNNs are not useful in many cases, mainly because of the vanishing/exploding gradient problem.
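
To show what the modification looks like, here is a minimal sketch of a single LSTM step in NumPy. The fused weight matrix W and the gate ordering are illustrative conventions, not a reference implementation; the key point is the additive cell-state update, which lets gradients flow across many steps without shrinking at every one:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step: gates decide what to forget, what to write,
    and what to expose from the cell state c."""
    z = np.concatenate([x_t, h_prev]) @ W + b     # all four gates at once
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget / input / output gates
    g = np.tanh(g)                                # candidate cell update
    c_t = f * c_prev + i * g                      # additive memory update
    h_t = o * np.tanh(c_t)                        # exposed hidden state
    return h_t, c_t

rng = np.random.default_rng(2)
input_dim, hidden_dim = 4, 8
W = rng.normal(scale=0.1, size=(input_dim + hidden_dim, 4 * hidden_dim))
b = np.zeros(4 * hidden_dim)

h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
for x in rng.normal(size=(5, input_dim)):
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)  # (8,) (8,)
```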

Learning Resources

  • Main Types of Neural Networks and its Applications
  • Visualizations of Recurrent Neural Networks
  • Understanding RNN and LSTM
  • Video: Recurrent Neural Networks | MIT 6.S191
  • A gentle introduction to the tiresome part of understanding RNN by Yasuto Tamura
  • RNN Simplified - A beginner's guide