```python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, MaxPool2D, BatchNormalization, AveragePooling2D, Dropout
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
# Define the model
model = Sequential()
train_set = '/home/deep/Vishesh_Breja/Depth_from_defocus/d3net_depth_estimation-master/dfd_datasets/dfd_indoor/dfd_dataset_indoor_N2_8/rgb'
train_set_labels = '/home/deep/Vishesh_Breja/Depth_from_defocus/d3net_depth_estimation-master/dfd_datasets/dfd_indoor/dfd_dataset_indoor_N2_8/depth'
test_set = '/home/deep/Vishesh_Breja/Depth_from_defocus/d3net_depth_estimation-master/dfd_datasets/dfd_indoor/dfd_dataset_indoor_N8/rgb'
test_set_labels = '/home/deep/Vishesh_Breja/Depth_from_defocus/d3net_depth_estimation-master/dfd_datasets/dfd_indoor/dfd_dataset_indoor_N8/depth'
# 1st layer
model.add(Conv2D(1000, kernel_size=7, activation='relu', input_shape=(645, 432, 3), padding='same'))  # 'same' padding keeps the output the same spatial size as the input
model.add(Dropout(0.25))  # randomly drops 25% of the previous layer's activations during training
model.add(BatchNormalization())  # ---1
# Regularization: we use BatchNormalization. It takes the output of a layer and
# rescales it so that every training batch has mean 0 and standard deviation 1.
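# In formula form: for the activations x in one training batch, BatchNormalization
# computes x_hat = (x - batch_mean) / sqrt(batch_var + epsilon) and outputs
# y = gamma * x_hat + beta, where gamma (scale) and beta (shift) are learned
# per-channel parameters.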
# 2nd layer
model.add(Conv2D(1000, kernel_size=4, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----2
# 3rd layer
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----3
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----4
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----5
# 4th layer
model.add(AveragePooling2D(2))
model.add(Conv2D(10, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----6
# 5th layer
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----7
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----8
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----9
# 6th layer
model.add(AveragePooling2D(2))
# 7th layer
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----10
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----11
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----12
# 8th layer
model.add(AveragePooling2D(2))
# 9th layer
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----13
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----14
model.add(Conv2D(1000, kernel_size=1, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----15
# 10th layer
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----16
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----17
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----18
# 11th layer
model.add(Conv2D(1000, kernel_size=4, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----19
# 12th layer
model.add(Conv2D(1000, kernel_size=3, activation='relu', padding='same'))
model.add(BatchNormalization())  # ----20
model.summary()
# This is the generator that reads pictures found in subfolders of the
# train/test directories and indefinitely generates batches of augmented image data.
batch_size = 16
train_datagen = ImageDataGenerator(rescale=1/255)  # rescale maps pixel values from [0, 255] to [0, 1]
test_datagen = ImageDataGenerator(rescale=1/255)
# =======================================================================================================
train_X_input = train_datagen.flow_from_directory(
    '/home/deep/Vishesh_Breja/Depth_from_defocus/d3net_depth_estimation-master/dfd_datasets/dfd_indoor/dfd_dataset_indoor_N2_8/rgb',
    target_size=(645, 432), batch_size=batch_size)
train_X_input_labels = train_datagen.flow_from_directory(
    '/home/deep/Vishesh_Breja/Depth_from_defocus/d3net_depth_estimation-master/dfd_datasets/dfd_indoor/dfd_dataset_indoor_N2_8/depth',
    target_size=(645, 432), batch_size=batch_size)
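# Note: flow_from_directory defaults to class_mode='categorical', so each of
# these generators yields (images, one-hot class labels) tuples, with the
# classes inferred from the subfolder names.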
# =======================================================================================================
# Compiling the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Fitting the model
model.fit_generator(train_X_input, steps_per_epoch=2000 // batch_size, epochs=50,
                    validation_data=train_X_input_labels, validation_steps=800 // batch_size)
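# fit_generator draws steps_per_epoch batches from train_X_input each epoch;
# validation_data is likewise expected to yield (inputs, targets) tuples.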
model.save_weights('First_try.h5')  # always good to save your weights after (or during) training
```
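In case it helps, this is a quick way to inspect what the generators actually yield (a minimal sketch, assuming the directories above exist and the code above has already run):

```python
# Pull a single batch from the training generator and print its shapes.
images, labels = next(train_X_input)
print(images.shape)   # expected: (16, 645, 432, 3) -- one batch of RGB images
print(labels.shape)   # one-hot class labels derived from the subfolder names
```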
I have been struggling with this error for days but have not found a solution. It would be really great if someone could help me with this. The above is my code. Thank you in advance!