
Recurrent Neural Network (RNN) architecture explained in detail

Introduction:-

In this article I’ll assume that you have a basic understanding of neural networks. We’ll talk about Recurrent Neural Networks, aka RNNs, which made a major breakthrough in predictive analytics for sequential data. We’ll cover what an RNN is, why we need RNNs, the architecture of RNNs, how they work, various applications of RNNs, and their advantages and disadvantages.

What is a Recurrent Neural Network (RNN):-

Recurrent Neural Networks, or RNNs, are a very important variant of neural networks, heavily used in Natural Language Processing. They’re a class of neural networks that allow previous outputs to be used as inputs while maintaining hidden states.

An RNN has a concept of “memory”, which remembers the information about what has been calculated up to time step t. RNNs are called recurrent because they perform the same task for every element of a sequence, with the output depending on the previous computations.

Before we deep dive into the details of what a recurrent neural network is, let’s first understand why we use RNNs in the first place.

Why Recurrent Neural Networks (RNNs):-

In a general neural network, an input is fed to an input layer, processed through a number of hidden layers, and a final output is produced, with the assumption that two successive inputs are independent of each other, i.e. the input at time step t has no relation to the input at time step t-1.

However, this assumption does not hold in a number of real-life scenarios. For instance, if one wants to predict the price of a stock at a given time, or the next word in a sequence, then it is imperative that dependence on previous observations is considered.

To understand the need for RNNs and how they can be helpful, let’s look at a real incident that happened recently.

You must have come across the recent incident where Pakistan batsman Umar Akmal was trolled after he posted a photo on Twitter with an obvious error: the caption of the post was ‘Mother from another Brother’.

Following this incident, many such sentences surfaced over the internet, like the ones below:

  • If being crime is arrest then sexy me
  • You don’t have to be well to travel rich
  • If I’m bad, you’re my dad
  • Policy is the best honesty
  • Health is injurious to smoking
  • If opportunity doesn’t door then build a knock
  • Don’t happy be worry
  • Cure is best prevention
  • Everything is war in fair and love
  • A day a doctor, keeps an apple away
  • I love to cry in walking so nobody see I’m raining
  • Blood is in my cricket
  • Consumption of health is injurious to liquor
  • Always a ravian, once a ravian
  • Work hard in success, let your silence make noise

So you see, a little jumble in the words made the sentences incoherent. There are multiple such tasks in everyday life which get completely disrupted when their sequence is disturbed.

For instance, in the sentences that we just saw above, the sequence of words defines their meaning; in time series data, time defines the occurrence of events; in genome data, every sequence has a different meaning. There are multiple such cases wherein the sequence of information determines the event itself.

So if we’re trying to use such data to predict any reasonable output, we need a network which has access to some prior knowledge about the data in order to completely understand it. That’s where recurrent neural networks come to the rescue.

To understand what memory is in RNNs, what the recurrence unit is, and how RNNs store information about the previous sequence, let’s first understand the architecture of RNNs.

Architecture of Recurrent Neural Network:-

The right diagram in the below figure represents a simple recurrent unit.

The below diagram depicts the architecture with weights:

So from the above figure we can write the below equations:

            h_t = f( W_xh · x_t + W_hh · h_(t-1) + b_h )

            y_t = W_hy · h_t + b_y

            f = sigmoid, tanh, or ReLU

where x_t is the input at time step t, h_t is the hidden state (the “memory” of the network), y_t is the output, W_xh, W_hh, and W_hy are the weight matrices, and b_h, b_y are the biases.

Note:- The function f can be any one of the usual hidden non-linearities, usually sigmoid, tanh, or ReLU. It’s a hyperparameter, just like in other types of neural networks.
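To make these equations concrete, here’s a minimal numpy sketch of a single recurrent step. The weight names (W_xh, W_hh, W_hy), the dimensions, and the choice of tanh as f are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    # New hidden state: mix the current input with the previous "memory"
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)
    # Output at this time step: a linear readout of the hidden state
    y_t = W_hy @ h_t + b_y
    return h_t, y_t

# Illustrative sizes: 4-dim input, 8-dim hidden state, 3-dim output
rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(8, 4))
W_hh = 0.1 * rng.normal(size=(8, 8))
W_hy = 0.1 * rng.normal(size=(3, 8))
b_h, b_y = np.zeros(8), np.zeros(3)

h = np.zeros(8)                  # h_0: initial hidden state
h, y = rnn_step(rng.normal(size=4), h, W_xh, W_hh, W_hy, b_h, b_y)
```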

This feedback loop makes recurrent neural networks seem kind of mysterious, and it makes the whole training process of RNNs quite hard to visualize. So let’s unfold this recurrent neural network to understand its working.

Unfolding a Recurrent Neural Network:-

Here we’ll try to visualize an RNN in terms of a feedforward network.

A recurrent neural network can be thought of as multiple copies of a feedforward network, each passing a message to a successor.

Now consider what happens if we unroll the loop. Imagine we have a sequence of length 5: if we were to unfold the recurrent neural network in time such that it has no recurrent connections at all, then we get a feedforward neural network with 5 hidden layers, as shown in the below figure.

(Figure: the recurrent unit unrolled over 5 time steps, with the same weights shared across every step.)

How a Recurrent Neural Network works:-

The recurrent neural network works as follows: at every time step t, the recurring structure takes the current input x_t together with the previous hidden state h_(t-1), computes the new hidden state h_t, and (optionally) emits an output y_t. All 5 layers of the unrolled network share the same weights and biases, which is why they merge into one single recurring structure.

The above diagram has outputs at each time step, but depending on the task this may not be necessary. For example, when predicting the sentiment of a sentence, we may only care about the final output, not the prediction after each word. Similarly, we may not need inputs at each time step. The main feature of an RNN is its hidden state, which captures some information about the sequence.
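Continuing the illustrative numpy sketch from the architecture section (same hypothetical rnn_step and weights), unrolling is nothing more than calling the same step function once per time step, with the same shared parameters:

```python
# Unrolled forward pass over a sequence of length 5, reusing rnn_step()
# and the shared weights defined in the earlier sketch.
xs = [rng.normal(size=4) for _ in range(5)]   # toy input sequence

h = np.zeros(8)                               # h_0: initial hidden state
outputs = []
for x_t in xs:                                # same weights at every step
    h, y_t = rnn_step(x_t, h, W_xh, W_hh, W_hy, b_h, b_y)
    outputs.append(y_t)

# Keep every y_t (e.g. tagging each word) or only the last one
# (e.g. classifying the whole sequence), depending on the task.
final_output = outputs[-1]
```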

Applications of RNNs in real life:-

Now that we’ve seen what a recurrent neural network is and how it works, let’s take a glimpse of the kind of tasks that one can achieve using such networks.

The beauty of recurrent neural networks lies in their diversity of applications: one can use RNNs to leverage an entire sequence of information for classification or prediction, or to predict the next value in a sequence with the help of information about past words or values. Data scientists have praised RNNs for their ability to deal with various input and output types.

Varying input size and fixed output size – I’ve listed down a couple of examples below where one can leverage RNNs with inputs of varying size but a fixed-size output.

  • Voice classification:- For example, if you want to classify between male and female voices, you’d have sound samples of both, and you classify a sample into either class by making use of the entire sequence of information. So here the input is a voice sample of varying length, while the output is of a fixed type and size. One can extrapolate the same idea to classify the voices of different animals or birds.
  • Sentiment classification:- One can leverage RNNs to classify the sentiment of a text, like a movie or product review or a tweet, into different classes. Here too the input is of varying length while the output is of a fixed type and size, and we leverage the entire sequence of information for the classification.

Fixed input size and varying output size –

  • Image Captioning – For tasks like image captioning, we have a single input, the image, and a sequence of words as output. Here the image is of a fixed size, but the output is a description of varying length.

Varying input size and varying output size –

  • Language Translation – This basically means that we have some text in a particular language, let’s say Spanish, and we wish to translate it into English. Each language has its own semantics and would have varying lengths for the same sentence. So here the inputs as well as the outputs are of varying lengths.

So now we have a fair idea of how RNNs map inputs to outputs of varying types and lengths, and how generalized they are in their application. The sketch below makes the many-to-one case concrete.
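Here’s a rough sketch, again reusing the hypothetical rnn_step and weights from the earlier sketches, showing how inputs of different lengths all end in a fixed-size prediction, as in sentiment classification:

```python
# Many-to-one: inputs of different lengths, one fixed-size output each.
# Reuses rnn_step() and the weights from the earlier sketches.
def classify_sequence(xs):
    h = np.zeros(8)
    for x_t in xs:                    # consume the whole variable-length sequence
        h, y_t = rnn_step(x_t, h, W_xh, W_hh, W_hy, b_h, b_y)
    scores = np.exp(y_t - y_t.max())  # softmax over the final output only
    return scores / scores.sum()      # e.g. 3 sentiment classes

short_review = [rng.normal(size=4) for _ in range(3)]
long_review  = [rng.normal(size=4) for _ in range(12)]
print(classify_sequence(short_review).shape)   # (3,) -- fixed-size output
print(classify_sequence(long_review).shape)    # (3,) -- same fixed size
```

Let’s list down a few advantages of RNNs now.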

Advantages:-

Now that we understand what an RNN is, its architecture, how it works, and how it stores previous information, let’s list down a couple of advantages of using RNNs:

  • Possibility of processing input of any length
  • Model size not increasing with size of input
  • Computation takes into account historical information
  • Weights are shared across time

Note:- In theory, RNNs can make use of information in arbitrarily long sequences, but in practice they are limited to looking back only a few steps. This is also called the problem of long-term dependencies.

So in the next article we’ll talk about:

  • Long-Term Dependencies
  • BPTT (Backpropagation Through Time)
  • Disadvantages of RNNs
  • Vanishing Gradient
  • Exploding Gradient
  • LSTM
  • GRU


Article Credit:-

Name: Praveen Kumar Anwla
Founder: TowardsMachineLearning.Org
