
Implement Neural Network using TensorFlow

In an earlier article we implemented a Neural Network from scratch in Python. However, for real-world work we mostly use a framework, which generally provides faster computation and better support for best practices. In this article we will implement a Neural Network using TensorFlow. Currently, TensorFlow is probably the most popular deep learning framework available.

Here is my previous post on "Understand and Implement the Backpropagation Algorithm From Scratch In Python". We will be implementing the same example here using TensorFlow. If you need a refresher, please refer to the article below:

Understand and Implement the Backpropagation Algorithm From Scratch In Python

There are a few points worth knowing about TensorFlow that differ from other frameworks and from our previous implementation.

  • TensorFlow has been written with production deployment in mind. I would say it is the exact opposite of R, which is very much focused on analysis and research. Hence, there are a few design decisions that need to be understood.
  • TensorFlow primarily supports a Static Computation Graph. (However, there will be support for Dynamic Computation Graphs in version 2; you may have already heard of Eager Execution.) In this example we will work with a Static Computation Graph. The main drawback of a Static Computation Graph is debugging support: if you make a mistake, you can't simply step through the code line by line in a Python debugger.
  • Over the years, TensorFlow has gone through many design changes, so you may find many different ways of creating a Neural Network with it. I will be using the latest recommendations. If you search online you will find many versions of the same code; be careful about which TensorFlow version they were written for, as the code may already be outdated.
  • TensorFlow is currently integrating Keras as its high-level API. If you go to the TensorFlow website you will find plenty of examples using Keras.
  • In a real production system you will usually save the model after training and load it separately for testing or actual use; you can't persist the model as a Python instance variable across runs. Since we just want to focus on building the network, we will run both training and testing inside the fit_predict() function using the same session. You would normally never do this in a real production system.

We will be using the MNIST dataset. It has 60K training images, each 28x28 pixels in grayscale. There are 10 classes in total to classify. You can find more details about it on the following websites:

https://en.wikipedia.org/wiki/MNIST_database
http://yann.lecun.com/exdb/mnist/index.html

We'll create a class named ANN and define the following functions. As mentioned earlier, the fit_predict() function will train our model and then run the prediction on the test data using the same session.

As you may have noticed, we pass the layers needed for our network as a list so that we don't have to hard-code them.

__init__() function:

Start by defining the __init__() method. The self.parameters and self.store dictionaries will be used to save values computed during forward() so that they can be reused.
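A minimal sketch of the constructor; the attribute names beyond self.parameters and self.store (self.L, self.costs) are assumptions:

```python
import tensorflow as tf

class ANN:
    def __init__(self, layers_size):
        self.layers_size = layers_size  # hidden/output sizes, e.g. [196, 10]
        self.parameters = {}            # W and b variables, keyed by layer index
        self.store = {}                 # Z and A tensors cached during forward()
        self.L = len(layers_size)       # number of weighted layers
        self.costs = []                 # cost history, for plotting later
```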

initialize_parameters() function:

The initialize_parameters() function initializes the W and b parameters of our network.

If we already knew the number of layers and hidden units, we could simply define them as follows (a sketch with assumed sizes is shown below). However, defining them this way won't help if we want to try out more layers with different numbers of hidden units.
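For instance, a fixed definition for a single hidden layer might look like this; the sizes 784, 196 and 10 are hypothetical choices matching the MNIST example:

```python
# fixed definition: one hidden layer of 196 units (hypothetical sizes)
W1 = tf.get_variable("W1", shape=[196, 784],
                     initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable("b1", shape=[196, 1], initializer=tf.zeros_initializer())
W2 = tf.get_variable("W2", shape=[10, 196],
                     initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable("b2", shape=[10, 1], initializer=tf.zeros_initializer())
```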

Hence, we will define them dynamically by looping through the self.layers_size list.

We'll use the get_variable() function, a relatively recent addition to TensorFlow that lets us specify a supported initializer. Here we will use the xavier_initializer for W and the zeros_initializer for b.

Note: If you don't know what the Xavier Initializer is, don't worry about it; I will cover it in a later tutorial.
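A sketch of the dynamic version, assuming the weight convention from the from-scratch article (W^[l] of shape (units_l, units_l-1)) and that layers_size already holds the input size at index 0 (it is inserted there in fit_predict()):

```python
# inside the ANN class
def initialize_parameters(self):
    # loop over the weighted layers; layers_size[0] is the input size (784)
    for l in range(1, len(self.layers_size)):
        self.parameters["W" + str(l)] = tf.get_variable(
            "W" + str(l),
            shape=[self.layers_size[l], self.layers_size[l - 1]],
            initializer=tf.contrib.layers.xavier_initializer())
        self.parameters["b" + str(l)] = tf.get_variable(
            "b" + str(l),
            shape=[self.layers_size[l], 1],
            initializer=tf.zeros_initializer())
```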

forward() function:

Next, let's define the forward() function.

We could write the forward pass for a fixed number of layers, like the following sketch.
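A fixed two-layer version, continuing the hypothetical sizes above (X is assumed to be a (batch, features) placeholder, hence the transpose):

```python
# fixed two-layer forward pass (hypothetical)
Z1 = tf.matmul(W1, tf.transpose(X)) + b1
A1 = tf.nn.relu(Z1)
Z2 = tf.matmul(W2, A1) + b2   # logits; softmax is applied inside the loss
```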

However, it is smarter to perform the forward propagation dynamically. A few points to note (a sketch follows this list):

  • We are using the ReLU activation function.
  • When l = 1, A^[0] is simply the input X.
  • For all layers, calculate and store Z^[l] in memory inside the loop.
  • For all layers from l = 1 to L-1, calculate and store A^[l] in memory inside the loop.
  • Since the last layer uses Softmax activation, we will use TensorFlow's built-in function softmax_cross_entropy_with_logits_v2().
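A sketch of the dynamic forward() under the same shape convention (the transposes at the boundaries are an assumption):

```python
# inside the ANN class
def forward(self, X):
    # placeholders are (batch, features); our math convention is
    # (features, batch), hence the transpose
    A = tf.transpose(X)
    self.store["A0"] = A
    for l in range(1, self.L + 1):
        Z = tf.matmul(self.parameters["W" + str(l)], A) \
            + self.parameters["b" + str(l)]
        self.store["Z" + str(l)] = Z
        if l < self.L:                  # hidden layers use ReLU
            A = tf.nn.relu(Z)
            self.store["A" + str(l)] = A
    # return the logits Z^[L] as (batch, classes); softmax happens inside
    # softmax_cross_entropy_with_logits_v2() when computing the cost
    return tf.transpose(Z)
```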

fit_predict() function:

TensorFlow will automatically calculate the derivatives for us, so backpropagation will be just a single line of code. Let's go through the fit_predict() function.

First we'll find the number of features from the shape of X_train and the number of classes from the shape of Y_train. In our example the shape of X_train is (60000, 784) and the shape of Y_train is (60000, 10).

Then we'll define two placeholders, X and Y, based on the number of features and classes.

We'll insert the number of features at the front of our layers_size list, since technically the input layer is layer zero; we need its size to define W1.

Finally, we'll call the self.initialize_parameters() and self.forward() functions.
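Putting those steps together, the opening of fit_predict() might look like this (the learning-rate and iteration defaults are placeholders):

```python
# inside the ANN class
def fit_predict(self, X_train, Y_train, X_test, Y_test,
                learning_rate=0.1, n_iterations=1000):
    n_features = X_train.shape[1]   # 784 for MNIST
    n_classes = Y_train.shape[1]    # 10 digit classes

    self.X = tf.placeholder(tf.float32, shape=[None, n_features])
    self.Y = tf.placeholder(tf.float32, shape=[None, n_classes])

    # the input layer is layer 0; W1 needs its size
    self.layers_size.insert(0, n_features)

    self.initialize_parameters()
    Z_L = self.forward(self.X)      # logits of the last layer
```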

Next we'll define our cost function and then use TensorFlow's built-in Gradient Descent optimizer. Feel free to try out the other optimizers available. The minimize() function builds all the derivative computations with respect to the cost function for us.
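Continuing the sketch, the cost and the one-line backpropagation step could be:

```python
# ...continuing inside fit_predict()
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=self.Y, logits=Z_L))
# minimize() adds all the gradient/backpropagation ops in one line
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
```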

We are done defining our static computation graph; we have not fed any data into the model yet. Let's do that next by creating a TensorFlow session.

We need to call global_variables_initializer() for TensorFlow's global variable initialization. Next we'll create a Session and loop through n_iterations.

Inside the loop we'll have TensorFlow compute the optimizer and cost ops, passing the data in via the feed_dict parameter.

We'll calculate the training accuracy every 100 iterations and save the cost every 10 iterations. The code is very simple: we compare the predicted values with the target variable, then compute the accuracy by calling the reduce_mean() function.

Once training is complete, we'll calculate the accuracy on the test data inside the same session.
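A sketch of the training loop and the test evaluation, still inside fit_predict() (the print format is illustrative):

```python
# ...continuing inside fit_predict()
# accuracy: fraction of predictions matching the target class
correct = tf.equal(tf.argmax(Z_L, 1), tf.argmax(self.Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(n_iterations):
        _, c = sess.run([optimizer, cost],
                        feed_dict={self.X: X_train, self.Y: Y_train})
        if i % 100 == 0:
            train_acc = sess.run(accuracy,
                                 feed_dict={self.X: X_train, self.Y: Y_train})
            print("Iteration %5d  cost %.4f  train acc %.4f" % (i, c, train_acc))
        if i % 10 == 0:
            self.costs.append(c)

    # the test prediction runs inside the same session
    test_acc = sess.run(accuracy,
                        feed_dict={self.X: X_test, self.Y: Y_test})
    print("Test accuracy %.4f" % test_acc)
```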

__main__():

Finally, let's look at our main() method. First we'll get the data, then preprocess it, and afterwards call the fit_predict() function of the ANN class.
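A sketch of the main block, assuming the Keras MNIST loader (any loader that returns the raw arrays would do) and the pre_process_data() helper described in the next section; the layer sizes are hypothetical:

```python
if __name__ == "__main__":
    (X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
    X_train = X_train.reshape(X_train.shape[0], -1)   # (60000, 784)
    X_test = X_test.reshape(X_test.shape[0], -1)      # (10000, 784)

    X_train, Y_train = pre_process_data(X_train, y_train)
    X_test, Y_test = pre_process_data(X_test, y_test)

    model = ANN(layers_size=[196, 10])   # one hidden layer (hypothetical)
    model.fit_predict(X_train, Y_train, X_test, Y_test)
```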

pre_process_data():

In the preprocessing step we'll first normalize the data by dividing it by 255. Then we'll use the OneHotEncoder from the sklearn package to transform the target variable.
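A sketch of the helper; note that in newer scikit-learn versions the sparse keyword is named sparse_output:

```python
from sklearn.preprocessing import OneHotEncoder

def pre_process_data(X, y):
    X = X / 255.0                           # scale pixel values to [0, 1]
    # one-hot encode the labels, e.g. digit 3 -> [0,0,0,1,0,0,0,0,0,0]
    enc = OneHotEncoder(categories="auto", sparse=False)
    Y = enc.fit_transform(y.reshape(-1, 1))
    return X, Y
```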

Output:

Now it's time to run our code. With just a 2-layer network and 1,000 epochs we get around 94% accuracy.

Here is the plot of the cost function.

You can also try a different network layout, for example with more hidden layers or units.

Here is the result: our accuracy increased to 96%.

Here is the plot of the cost function.

Below is the complete code of the ANN class.

You can access the full project here:
