Common Practices in Deep Learning

Hey Deep Learning Nerds, welcome back. It’s been a while since we blogged on Deep Learning, but we are back with an exciting topic. As we know, DL involves a lot of trial and error before we arrive at a sophisticated model for our problem. So how do people kick things off when they face a new challenge? C’mon, let’s find out.


We ran a poll in one of the most popular Deep Learning enthusiast groups on Facebook and got a decent number of responses. Let’s see how to proceed when we have a new DL challenge.

#1 Progressive Approach

  • Start with the simplest Architecture and progressively increase complexity.

In this approach, we would start with a very basic model that normally fits the kind of problem we are dealing with, i.e. regression or classification.


Then we would check the bias-variance tradeoff and improve our model architecture accordingly.
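As a minimal sketch of this progressive loop, here is a toy 1-D regression (the data, degrees, and learning setup are all hypothetical, just for illustration): we fit polynomial baselines of increasing capacity and compare train vs. validation error to see where we sit on the bias-variance tradeoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data: a noisy sine curve.
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(0, 0.2, 200)
x_train, y_train = x[:150], y[:150]
x_val, y_val = x[150:], y[150:]

def fit_and_score(degree):
    """Fit a polynomial baseline and return (train MSE, validation MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    return train_mse, val_mse

# Start with the simplest model (degree 1) and progressively add capacity.
for degree in (1, 3, 9):
    train_mse, val_mse = fit_and_score(degree)
    print(f"degree={degree}: train MSE={train_mse:.3f}, val MSE={val_mse:.3f}")
```

High train and validation error together suggests underfitting (high bias), so we add capacity; a low train error with a much higher validation error suggests overfitting (high variance), so we stop adding capacity or regularize.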


#2 State of the Art Algorithms

This is a pretty straightforward approach: browse for the state-of-the-art (SOTA) model that suits your problem, then download the source code from any available open-source platform.

Train the SOTA model on your own data, bearing the training costs either on Google Colab or by deploying yourself a GPU-based VM on Azure or AWS. This will probably yield good results, depending on the data you have and whether the algorithm suits your problem.


#3 Designing your Own Neural Net Architecture

This mostly overlaps with our first approach, but it can also mean researching a wide variety of successful architectures and coming up with your own, based on the problem you have.

This might require a good understanding of some of the awesome papers available on the internet, and some experience in developing your own Deep Learning applications.
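Designing your own architecture ultimately comes down to assembling layers into a stack. Here is a hypothetical, numpy-only sketch of a tiny MLP (the layer sizes and names are made up for illustration) showing how a custom architecture is built layer by layer:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

class TinyMLP:
    """A hypothetical multi-layer perceptron, just to show how a custom
    architecture is assembled from consecutive layers."""

    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One (weight, bias) pair per consecutive pair of layer sizes.
        self.layers = [
            (rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])
        ]

    def forward(self, x):
        *hidden, last = self.layers
        for w, b in hidden:
            x = relu(x @ w + b)      # hidden layers use a nonlinearity
        w, b = last
        return x @ w + b             # raw outputs; add softmax/sigmoid as needed

net = TinyMLP([4, 16, 8, 3])          # e.g. 4 input features -> 3 classes
logits = net.forward(np.ones((2, 4))) # batch of two samples
print(logits.shape)                   # (2, 3)
```

Varying the `sizes` list is exactly the kind of experiment the progressive approach in #1 describes: you change the depth and width and re-check the bias-variance behavior.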


#4 Transfer Learning

Transfer Learning is also one of the most widely used approaches, and it applies when your data is similar to that of an already deployed, successful model.

It involves preserving the weights of the earlier layers of the pretrained model and fine-tuning the last few layers for your problem.


This approach has proved successful particularly in the domain of Computer Vision, where the earlier layers of a neural network mostly extract edge-based features from images.
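The freeze-and-fine-tune idea can be sketched in a few lines of numpy (the weights here are random stand-ins for a pretrained model, and the single gradient step is purely illustrative): the earlier layer is kept fixed as a feature extractor, and only the new task head is updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pretrained weights (hypothetical values).
w_frozen = rng.normal(0, 0.1, (8, 16))   # earlier layer: kept fixed
w_head = rng.normal(0, 0.1, (16, 2))     # new task head: trainable

x = rng.normal(0, 1, (32, 8))            # a small batch of new-task inputs
y = rng.normal(0, 1, (32, 2))            # new-task targets

def forward(x):
    h = np.maximum(0, x @ w_frozen)      # frozen feature extractor (ReLU)
    return h @ w_head                    # trainable head

def mse():
    return np.mean((forward(x) - y) ** 2)

loss_before = mse()

# One gradient step on the head only; w_frozen never changes.
lr = 0.1
h = np.maximum(0, x @ w_frozen)
grad_head = h.T @ (h @ w_head - y) / len(x)  # gradient of the MSE w.r.t. w_head
w_head -= lr * grad_head

loss_after = mse()
print(f"loss before: {loss_before:.4f}, after: {loss_after:.4f}")
```

In a real framework you would get the same effect by marking the earlier layers as non-trainable (e.g. disabling gradients for them) and training only the replaced final layers on your data.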


That concludes the most common practices we use in Deep Learning. Do you have a better one? Comment it down below ❤.

Do follow our Facebook page at fb.com/HelloWorldFB.

We will meet again on our learning journey with another blog. Until then, stay safe. Cheers ✌🙌.

