How to train a deep learning model using Python

Hello and welcome to another tutorial on deep learning! In this tutorial, you will learn how to train a deep learning model using Python. If you are already familiar with training classical machine learning models, you will notice how the deep learning workflow differs from it.

I will go through each step so that you can follow along easily. Before starting, install the following packages; if you already have them, you can skip this step.

pip install tensorflow
pip install keras

pip install numpy
pip install pandas
pip install matplotlib

Here we are using the Keras API with a TensorFlow backend. Let's get the data and do some pre-processing to clean it up.

Let's import all the necessary packages. I am using the very common Iris dataset. You probably already know this dataset: it has 3 species (Versicolor, Virginica, and Setosa), and we need to predict the species from 4 features.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

import keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.callbacks import ModelCheckpoint, EarlyStopping

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

Now import the dataset to build the deep learning model.

df = pd.read_csv('iris.csv')
df.head()

This will give you an overview of the data with the first 5 samples. Let's check for null values in the dataset.

df.isna().sum()

Fortunately, we don't have any null values in our data, and the 3 classes (the 3 flower species) are perfectly balanced.
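If you want to verify that balance yourself, a quick check of the label counts (assuming the label column is named Species, as it is later in this tutorial) looks like this:

# Count how many rows belong to each species (should be evenly balanced)
print(df['Species'].value_counts())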

So, we are ready to split the data into X and Y (you can name them anything you like). X represents the features and Y represents the class label.

x = df.drop(['Id','Species'],axis=1).values.astype('float32')
y = df['Species']

print(x.shape)
print(y.shape)

We have a total of 150 data points with 4 features in x and 1 class label column in y.

We need to convert the single class-label column into 3 columns (one per species). This is required because the output layer of our deep learning model will have 3 units.

one = OneHotEncoder()
y = one.fit_transform(np.array(y).reshape(-1,1)).toarray()

Here, I used OneHotEncoder to convert the labels into 3 columns. fit_transform returns a sparse matrix, so I converted it to a dense array with .toarray() before feeding it to Keras.

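If you are curious what the encoded labels look like, here is a small sanity check (the column order follows one.categories_):

# The encoder stores the category order it used for the 3 columns
print(one.categories_)
# Each row of y is now a one-hot vector, e.g. [1. 0. 0.] for the first category
print(y[:5])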

Ok, let’s split the data into train and test sets.

x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=100)
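With test_size=0.2, about 80% of the 150 rows (120 samples) go to training and the remaining 30 are held out for testing. A quick check of the resulting shapes:

# 80% of the rows for training, 20% held out for testing
print(x_train.shape, x_test.shape)
print(y_train.shape, y_test.shape)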

Building the Deep Learning Model

Let’s create the deep learning model.

model = Sequential()

model.add(Dense(128, input_dim=x_train.shape[1])) 
model.add(Activation('relu'))
model.add(Dropout(0.15))

model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.15))

model.add(Dense(y_train.shape[1]))
model.add(Activation('softmax'))


model.summary()

Here, I used 2 hidden layers of 128 units each, with a dropout of 0.15 and the 'relu' activation function. This is a very simple deep learning model. If you want, you can add more layers or experiment with different activation functions and dropout percentages, as sketched below.
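For example, a slightly deeper variant (just a sketch with arbitrary layer sizes, activations, and dropout rates, not tuned for this dataset) could look like this:

# A deeper variant: three hidden layers with 'tanh' in the middle and heavier dropout
model2 = Sequential()
model2.add(Dense(128, input_dim=x_train.shape[1]))
model2.add(Activation('relu'))
model2.add(Dropout(0.25))

model2.add(Dense(64))
model2.add(Activation('tanh'))
model2.add(Dropout(0.25))

model2.add(Dense(32))
model2.add(Activation('relu'))

model2.add(Dense(y_train.shape[1]))
model2.add(Activation('softmax'))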

model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])

# Optional: pass callbacks=[EarlyStopping(patience=2, monitor='val_loss', mode='auto')] to model.fit (see the sketch below)

print("Training...")
hist = model.fit(x_train, y_train, nb_epoch=100, batch_size=10, validation_data=(x_test,y_test))

Here, I used categorical cross-entropy loss because this is a multi-class classification problem, and the Adam optimizer to converge a little faster.

I trained for 100 epochs with a batch size of 10 data points. During training, Keras prints the training loss, validation loss, and accuracy scores for each epoch.
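Since we already imported EarlyStopping and ModelCheckpoint, here is a minimal sketch of how they could be wired into the fit call (the filename 'best_model.h5' is just an example path I chose):

# Stop training when validation loss stops improving, and save the best weights seen so far
callbacks = [
    EarlyStopping(patience=2, monitor='val_loss', mode='auto'),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True)
]
hist = model.fit(x_train, y_train, epochs=100, batch_size=10,
                 validation_data=(x_test, y_test), callbacks=callbacks)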

Read this also: after this tutorial, check out the Face mask detection using deep learning project.

Deep Learning Model Predictions

Let’s predict the class labels for test data.

print("Generating test predictions...")
preds = model.predict_classes(x_test)

print('Predictions:',preds)


model.predict(x_test[0].reshape(1,-1)) # For single data point.
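The model outputs one probability per class, so to get back the species name you can look up the predicted index in the encoder's categories (a small sketch, reusing the same one encoder from earlier):

# Map the predicted class index back to the original species name
single_pred = model.predict(x_test[0].reshape(1, -1))
pred_index = np.argmax(single_pred, axis=1)[0]
print('Predicted species:', one.categories_[0][pred_index])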

Plotting the Loss

This can help us understand how the loss decreased over the epochs. I plotted the validation loss against the epoch number.

plt.figure(figsize=(12,6))
plt.title('Validation Loss vs Epochs')
plt.plot(np.arange(100), hist.history['val_loss'], color='green')
plt.xlabel('Epoch')
plt.ylabel('Validation loss')
plt.show()
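You can plot the training curves the same way. Here is a sketch that overlays the training loss on the validation loss and then evaluates the final model on the held-out test set:

# Compare training and validation loss on one figure
plt.figure(figsize=(12,6))
plt.title('Training vs Validation Loss')
plt.plot(hist.history['loss'], color='blue', label='train loss')
plt.plot(hist.history['val_loss'], color='green', label='val loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Final evaluation on the test set (returns loss and the accuracy metric we compiled with)
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)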

I hope this tutorial helps you a lot. If you have any doubts, please comment below. Thank you for reading!
