Basics of Neural Networks for Beginners, in a Simple Way

Sweta
5 min read · Aug 14, 2020


Note:

In this post, I have explained the overall basics in a very simple way, so that they are easy to understand. This post is for beginners.

A neural network consists of “neurons”, which are arranged in layers. The idea is inspired by the neurons in the human brain, but they don’t really work like the brain. The idea behind a neural network, or ANN (Artificial Neural Network), is to take the input, process it, and generate the output accordingly. Since the neurons are arranged in layers, there are basically three layers in a neural network.

  • Input layer (takes the input)
  • Hidden layer (processing and functions are applied here)
  • Output layer (predicted output)

Now, the output may be close to the actual output or very far from it, depending on the process you have used.

The process could be anything (like applying summation and some function to the neurons). The difference between the actual output and the predicted output is the error, or loss.

Just like we humans learn from our mistakes and try to improve on them, the same thing is done here: the loss is our mistake, and we try to improve our predicted output by making it closer to the actual output, i.e. by reducing the loss. Hence, learning from mistakes and improving on them.

Overall, we take the input, process it in a layer by making a summation and applying a function to it, and generate an output, which should come close to the actual output. If it isn’t close, the network tries to improve and bring the predicted output closer to the actual output.
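To make this concrete, here is a minimal Python (NumPy) sketch of what a single neuron does: sum the weighted inputs and apply a function to the result. The input values, the weights and the sigmoid function are just illustrative choices, not values from this post.

```python
import numpy as np

def sigmoid(z):
    # a common "squashing" function applied to the summed input
    return 1.0 / (1.0 + np.exp(-z))

# example input with 3 features and 3 illustrative weights (one per input)
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.4, -0.2, 0.7])
b = 0.1  # bias term

net_input = np.dot(w, x) + b   # summation step
output = sigmoid(net_input)    # apply a function to the summed input
print(output)                  # the neuron's output, between 0 and 1
```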

Here is a simple picture to help you understand it.

Note:

  • The hidden layer can contain many layers, and each of these layers can have ‘n’ neurons.
  • The function we apply is called an activation function, and it is applied mainly in the hidden layers. The activation function is used to activate a neuron, which we will discuss later in detail.
  • Since the hidden layer can contain many layers, suppose it is divided into hidden layer 1, hidden layer 2 and hidden layer 3, where hidden layer 1 contains 3 neurons, hidden layer 2 contains 4 neurons and hidden layer 3 contains 3 neurons. We apply a particular activation function to a particular hidden layer. So if I apply a particular activation function to hidden layer 2, all 4 neurons in hidden layer 2 will have that same activation function applied to them (see the sketch below).
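As a rough illustration of this 3-4-3 example, here is how such a network could be sketched with Keras (assuming a 2-feature input and a single output neuron, which the post does not specify). Notice that the activation function is set per layer, so every neuron in that layer shares it.

```python
from tensorflow import keras

# a sketch of the hidden-layer structure described above (3, 4 and 3 neurons)
model = keras.Sequential([
    keras.layers.Input(shape=(2,)),               # input layer: 2 features (assumed)
    keras.layers.Dense(3, activation="relu"),     # hidden layer 1: 3 neurons
    keras.layers.Dense(4, activation="sigmoid"),  # hidden layer 2: 4 neurons, its own activation
    keras.layers.Dense(3, activation="relu"),     # hidden layer 3: 3 neurons
    keras.layers.Dense(1),                        # output layer: 1 neuron (assumed)
])

model.summary()  # prints the layer-by-layer structure
```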

Neural Network parts

Let’s first understand it with a diagram. I didn’t find a perfect diagram to explain it, so I have drawn one on my own. I hope you will understand it.

Now, let’s understand part by part.

  • Each circle represents a neuron, also called a unit.
  • Perceptron: an artificial neuron is also known as a perceptron.
  • Weights (w1, w2, …): the values w1, w2, … that you can see in the diagram between the neurons are called weights. When inputs are transmitted between neurons, the weights are applied to them. Weights control the signal between two neurons.
  • Input layer: the raw information that is fed into the network.
  • Hidden layer: the hidden layer takes the input from its previous layer, performs computations on the weighted inputs to produce a net input, and then applies an activation function to it to produce the layer’s output.

Note:

The computation that the hidden layer performs and the activation function used depend on the type of neural network, which in turn depends on the application.

To learn what an activation function is, I would highly recommend visiting this link: Complete guidelines of activation function and its types

  • Output layer: this is the final layer, which gives the predicted output.
  • Loss: loss = actual output - predicted output. The method used to calculate the loss is called the loss function, and the average of the loss is called the cost function (see the sketch below).
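Here is a tiny NumPy sketch of the loss and cost idea, using the simple “actual minus predicted” loss from above. The numbers are made up, and I take the absolute value before averaging only so that positive and negative errors don’t cancel out.

```python
import numpy as np

actual    = np.array([1.0, 0.0, 1.0, 1.0])   # actual outputs (labels)
predicted = np.array([0.8, 0.2, 0.6, 0.9])   # the network's predicted outputs

loss = actual - predicted       # loss for each example (as defined above)
cost = np.mean(np.abs(loss))    # cost: the average of the (absolute) losses
print(loss, cost)
```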

Note:

The loss is used to calculate gradients, and the gradients are used to update the weights of the neural network. This is how a neural net is trained.

Now you might be wondering what a gradient is. Hold on, hold on, we will come to this topic later; to understand it, we first have to understand the basics. So, let’s continue…

Basically, our goal is to make our predicted output closer to the actual output, i.e. to reduce the loss.

Now, to reduce the loss, backward propagation is used.

Let’s learn the basics of FFN and backward propagation in the simplest way.

FFN (Feed Forward Network) or Feed Forward Propagation

Let’s break this down: the word “feed” means input, “forward” implies moving ahead, and “propagation” is a term for the spreading of something.

So, FFN means we move in only one direction, from the input to the output of a neural network.

We can think of it as moving through time, where we have no option but to forge ahead and just hope our mistakes don’t come back to haunt us.

  • There is no learning in FFN.
  • We don’t find any patterns here in FFN; the input simply flows forward to produce an output (see the sketch below).
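Here is a small NumPy sketch of a forward pass through one hidden layer and an output layer: the input simply flows forward through fixed, made-up weights, and nothing is learned or changed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.1])            # input layer: 2 features (made up)

W1 = np.array([[0.4, -0.2],         # weights input -> hidden (3 neurons)
               [0.3,  0.8],
               [-0.5, 0.1]])
W2 = np.array([[0.7, -0.3, 0.5]])   # weights hidden -> output (1 neuron)

hidden = sigmoid(W1 @ x)            # hidden layer: summation + activation
output = sigmoid(W2 @ hidden)       # output layer

print(output)  # we only move forward; the weights are never updated here
```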

Backward propagation

In backward propagation, we try to learn from the mistake or error we made after doing FFN, and we try to make the loss as small as possible by going back layer by layer, i.e. from layer N to layer 0 (the input layer).

It works like this: it first works on layer N, then layer N-1, then layer N-2, and so on, changing the required weights of the neurons in each layer.

  • Learning happens here.
  • Changing the weights is called optimization.

Now, to change a weight, a formula or equation is used, which is applied to every active neuron:

new weight = old weight - (learning rate × gradient of the loss with respect to that weight)

This equation is called the update function.

Remember, new weights can only be calculated during backward propagation; the learning rate also helps in minimising the loss.
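In code, one such update step could look like the sketch below; the gradient value is made up, since computing real gradients is the part we are postponing.

```python
learning_rate = 0.01   # a small step size (illustrative value)
weight = 0.7           # current weight of some neuron connection
gradient = 0.25        # gradient of the loss w.r.t. this weight (made-up number)

# update function: move the weight a little in the direction that reduces the loss
new_weight = weight - learning_rate * gradient
print(new_weight)      # 0.6975
```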

I won’t go into detail about the learning rate here; we will talk about it in detail in another post.

Now, how do we learn in backward propagation?

By learning, we mean that, based on the loss, we keep changing the weights of the neural network until we get an optimal loss.

In backward propagation, the network keeps adjusting the weights according to the data, so that the loss becomes as small as possible.

In simple words, learning is reducing error slowly and gradually.
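To show “reducing error slowly and gradually” end to end, here is a minimal sketch: a one-weight model y = w * x fitted by gradient descent on a tiny made-up dataset. It is the forward-pass / loss / update cycle in miniature, not a full neural network, but the printed loss keeps shrinking just as described above.

```python
import numpy as np

# tiny made-up dataset: the target relationship is y = 2 * x
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0                 # start with a "wrong" weight
learning_rate = 0.01

for step in range(100):
    predicted = w * x                              # forward pass
    loss = np.mean((y - predicted) ** 2)           # cost: mean squared error
    gradient = -2 * np.mean((y - predicted) * x)   # gradient of the cost w.r.t. w
    w = w - learning_rate * gradient               # update function
    if step % 20 == 0:
        print(f"step {step:3d}  loss {loss:.4f}  w {w:.3f}")

# the printed loss keeps shrinking and w approaches 2.0
```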

Thank you, guys! If this post helped you learn the basic things about neural networks, then give it a clap; it will motivate me to write more.

Keep learning and keep reading ☺
