lim-anggun / FgSegNet

FgSegNet: Foreground Segmentation Network, Foreground Segmentation Using Convolutional Neural Networks for Multiscale Feature Encoding
https://doi.org/10.1016/j.patrec.2018.08.002

Testing results #6

Closed InstantWindy closed 5 years ago

InstantWindy commented 6 years ago

When I test the remaining pictures, I found the testing results are bad: the maximum value of the segmentation mask is only about 0.3, and I don't know why. But your paper says the testing results are good, so I want to ask how to test. Can you help me? I'm very confused. Thank you!

Mary2333 commented 6 years ago

Hi, when I run the project an issue occurs. Do you know how to solve it? Thank you very much!

    Training ->>> turbulence / turbulence2
    (50, 315, 645, 3) (50, 158, 323, 3) (50, 79, 162, 3)

    ValueError                                Traceback (most recent call last)
    /apsarapangu/disk6/wuting/FgSegNet/FgSegNet-master/FgSegNet/FgSegNet.py in <module>()
        219
        220 results = generateData(train_dir, dataset_dir, scene)
    --> 221 train(results, scene, mdl_path, log_dir, vgg_weights_path)
        222 del results

    /apsarapangu/disk6/wuting/FgSegNet/FgSegNet-master/FgSegNet/FgSegNet.py in train(results, scene, mdl_path, log_dir, vgg_weights_path)
        147 redu = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=reduce_factor, patience=num_patience, verbose=1, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
        148 model.fit([results[0], results[1], results[2]], results[3], validation_split=0.2, epochs=epoch, batch_size=batch_size,
    --> 149           callbacks=[redu, chk, tb], verbose=1, class_weight=results[4], shuffle = True)
        150
        151 del model, results, tb, chk, redu

    /apsarapangu/disk6/wuting/Anaconda3/lib/python3.5/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
       1356     class_weight=class_weight,
       1357     check_batch_axis=False,
    -> 1358     batch_size=batch_size)
       1359 # Prepare validation data.
       1360 if validation_data:

    /apsarapangu/disk6/wuting/Anaconda3/lib/python3.5/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
       1232     self._feed_input_shapes,
       1233     check_batch_axis=False,
    -> 1234     exception_prefix='input')
       1235 y = _standardize_input_data(y, self._feed_output_names,
       1236     output_shapes,

    /apsarapangu/disk6/wuting/Anaconda3/lib/python3.5/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
        138     ' to have shape ' + str(shapes[i]) +
        139     ' but got array with shape ' +
    --> 140     str(array.shape))
        141 return arrays
        142

    ValueError: Error when checking input: expected ip_scale2 to have shape (None, 157, 322, 3) but got array with shape (50, 158, 323, 3)

InstantWindy commented 6 years ago

Did you use the modified MyUpSampling2D or the unmodified one? I think you should modify Keras's MyUpSampling2D.
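
For reference, a minimal sketch of what such a backend modification could look like (this is an assumption, not the repo's actual code: stock keras.backend.resize_images has no num_pixels argument, and a TensorFlow backend is assumed here):

    import tensorflow as tf

    def resize_images(x, height_factor, width_factor, data_format, num_pixels=(0, 0)):
        # upsample by integer factors, then grow each spatial dim by
        # num_pixels so feature maps from different scales match exactly
        if data_format == 'channels_first':
            x = tf.transpose(x, (0, 2, 3, 1))  # tf.image expects channels_last
        shape = tf.shape(x)
        new_size = tf.stack([shape[1] * height_factor + num_pixels[0],
                             shape[2] * width_factor + num_pixels[1]])
        x = tf.image.resize_nearest_neighbor(x, new_size)
        if data_format == 'channels_first':
            x = tf.transpose(x, (0, 3, 1, 2))
        return x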

Mary2333 commented 6 years ago

Hi, thank you for answering my question! I used the unmodified MyUpSampling2D, just copied from the website. Could you tell me how to modify it? Or would you mind pasting your MyUpSampling2D class here? I want to see the differences. Thank you very much!

Mary2333 commented 6 years ago

And the following is my MyUpSampling2D class.

    class MyUpSampling2D(Layer):

        @interfaces.legacy_upsampling2d_support
        def __init__(self, size=(2, 2), num_pixels=(0, 0), data_format=None, **kwargs):
            super(MyUpSampling2D, self).__init__(**kwargs)
            self.data_format = conv_utils.normalize_data_format(data_format)
            self.size = conv_utils.normalize_tuple(size, 2, 'size')
            self.input_spec = InputSpec(ndim=4)
            self.num_pixels = num_pixels

        def compute_output_shape(self, input_shape):
            if self.data_format == 'channels_first':
                height = self.size[0] * input_shape[2] + self.num_pixels[0] if input_shape[2] is not None else None
                width = self.size[1] * input_shape[3] + self.num_pixels[1] if input_shape[3] is not None else None
                return (input_shape[0],
                        input_shape[1],
                        height,
                        width)
            elif self.data_format == 'channels_last':
                height = self.size[0] * input_shape[1] + self.num_pixels[0] if input_shape[1] is not None else None
                width = self.size[1] * input_shape[2] + self.num_pixels[1] if input_shape[2] is not None else None
                return (input_shape[0],
                        height,
                        width,
                        input_shape[3])

        def call(self, inputs):
            return K.resize_images(inputs, self.size[0], self.size[1],
                                   self.data_format, self.num_pixels)

        def get_config(self):
            config = {'size': self.size,
                      'data_format': self.data_format,
                      'num_pixels': self.num_pixels}
            base_config = super(MyUpSampling2D, self).get_config()
            return dict(list(base_config.items()) + list(config.items()))
InstantWindy commented 6 years ago

    # coding: utf-8

    from keras.legacy import interfaces
    from keras.utils import conv_utils
    from keras.engine import Layer, InputSpec
    import utils.backend as K

    class MyUpSampling2D(Layer):

        @interfaces.legacy_upsampling2d_support
        def __init__(self, size=(2, 2), num_pixels=(0, 0), data_format=None, **kwargs):
            super(MyUpSampling2D, self).__init__(**kwargs)
            self.data_format = conv_utils.normalize_data_format(data_format)
            self.size = conv_utils.normalize_tuple(size, 2, 'size')
            self.input_spec = InputSpec(ndim=4)
            self.num_pixels = num_pixels

        def compute_output_shape(self, input_shape):
            if self.data_format == 'channels_first':
                height = self.size[0] * input_shape[2] + self.num_pixels[0] if input_shape[2] is not None else None
                width = self.size[1] * input_shape[3] + self.num_pixels[1] if input_shape[3] is not None else None
                return (input_shape[0],
                        input_shape[1],
                        height,
                        width)
            elif self.data_format == 'channels_last':
                height = self.size[0] * input_shape[1] + self.num_pixels[0] if input_shape[1] is not None else None
                width = self.size[1] * input_shape[2] + self.num_pixels[1] if input_shape[2] is not None else None
                return (input_shape[0],
                        height,
                        width,
                        input_shape[3])

        def call(self, inputs):
            return K.resize_images(inputs, self.size[0], self.size[1],
                                   self.data_format, self.num_pixels)

        def get_config(self):
            config = {'size': self.size,
                      'data_format': self.data_format,
                      'num_pixels': self.num_pixels}
            base_config = super(MyUpSampling2D, self).get_config()
            return dict(list(base_config.items()) + list(config.items()))

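As a quick sanity check of the layer above, the padded output shape can be verified like this (a hypothetical test, assuming a modified backend resize_images that honors num_pixels):

    from keras.layers import Input
    from keras.models import Model

    # num_pixels=(1, 1) should grow (157, 322) to (158, 323)
    inp = Input(shape=(157, 322, 3))
    out = MyUpSampling2D(size=(1, 1), num_pixels=(1, 1))(inp)
    print(Model(inputs=inp, outputs=out).output_shape)  # expected: (None, 158, 323, 3)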

Mary2333 commented 6 years ago

Hi, thank you very much! Our MyUpSampling2D classes are the same, and the issue still occurs. Do you have other suggestions?

chenyuqiuwan commented 6 years ago

I want to test the remaining pictures, but I did not find the test program. Can you share the test program with me? I'm very confused. Thank you!

rajskar commented 5 years ago

Did you solve what you were asking about? I downloaded the sample model and it works fine, but when I trained a new model for the same dataset (highway), the output is all black pixels.

I tried many ways to fix it, but did not find any solution using Keras 2.0.6.


Swanwen commented 5 years ago


Hi, I have the same problem as you. Have you solved it?

rajskar commented 5 years ago

@Swanwen Yes, I was able to solve it. I added a BatchNormalization layer after the concatenation layer and trained the network; it works perfectly well. I request the authors (@lim-anggun) to comment on it too.

Swanwen commented 5 years ago


I did as you suggested, but it still did not work... Could you please send me a copy of your code? It means a lot to me. Thank you very much! My email address: swan_tju@163.com

rajskar commented 5 years ago

@Swanwen Find the BatchNormalization layer added in the function below. Train with this and tell me if you still face any issue.

def initModel_M(self, dataset_name):

    print('New model 4')
    assert dataset_name in ['CDnet', 'SBI', 'UCSD'], 'dataset_name must be one of ["CDnet", "SBI", "UCSD"]'
    assert len(self.img_shape)==3
    h, w, d = self.img_shape

    input_1 = Input(shape=(h, w, d), name='ip_scale1')
    vgg_layer_output = self.VGG16(input_1)
    shared_model = Model(inputs=input_1, outputs=vgg_layer_output, name='shared_model')
    shared_model.load_weights(self.vgg_weights_path, by_name=True)

    unfreeze_layers = ['block4_conv1','block4_conv2', 'block4_conv3']
    for layer in shared_model.layers:
        if(layer.name not in unfreeze_layers):
            layer.trainable = False

    # Scale 1
    x1 = shared_model.output
    # Scale 2
    input_2 = Input(shape=(int(h/2), int(w/2), d), name='ip_scale2')
    x2 = shared_model(input_2)
    x2 = UpSampling2D((2,2))(x2)
    # Scale 3
    input_3 = Input(shape=(int(h/4), int(w/4), d), name='ip_scale3')
    x3 = shared_model(input_3)
    x3 = UpSampling2D((4,4))(x3)

    if dataset_name=='CDnet':
        # Scale 1
        x1_ups = {'streetCornerAtNight':(0,1), 'tramStation':(1,0), 'turbulence2':(1,0)}
        if(self.scene=='wetSnow'):
            x1 = Cropping2D(cropping=((1, 2),(0, 0)))(x1)
        elif(self.scene=='skating'):
            x1 = Cropping2D(cropping=((1, 1),(1, 2)))(x1)
        else:
            for key, val in x1_ups.items():
                if self.scene==key:
                    # upscale by adding number of pixels to each dim.
                    x1 = MyUpSampling2D(size=(1,1), num_pixels=val)(x1)
                    break

        # Scale 2
        x2_ups = {'tunnelExit_0_35fps':(0,1),'tramCrossroad_1fps':(1,0),'bridgeEntry':(1,1),
                  'busyBoulvard':(1,0),'fluidHighway':(0,1),'streetCornerAtNight':(1,1), 
                  'tramStation':(2,0),'winterStreet':(1,0),'twoPositionPTZCam':(1,0),
                  'peopleInShade':(1,1),'turbulence2':(1,1),'turbulence3':(1,0),
                  'skating':(1,1), 'wetSnow':(0,0)}
        for key, val in x2_ups.items():
            if self.scene == key and self.scene in ['skating', 'wetSnow']:
                x2 = Cropping2D(cropping=((1, 1), val))(x2)
                break
            elif self.scene==key:
                x2 = MyUpSampling2D(size=(1, 1), num_pixels=val)(x2)
                break

        # Scale 3
        x3_ups = {'tunnelExit_0_35fps':(2,3),'tramCrossroad_1fps':(3,0),'bridgeEntry':(3,1,),
                  'busyBoulvard':(3,0),'fluidHighway':(0,3),'streetCornerAtNight':(1,1),
                  'tramStation':(2,0),'winterStreet':(1,0),'twoPositionPTZCam':(1,2),
                  'peopleInShade':(1,3),'turbulence2':(3,1),'turbulence3':(1,0),
                  'office':(0,2), 'pedestrians':(0,2), 'bungalows':(0,2), 'busStation':(0,2)}

        for key, val in x3_ups.items():
            if self.scene==key:
                x3 = MyUpSampling2D(size=(1,1), num_pixels=val)(x3)
                break

    elif dataset_name=='SBI':
        if(self.scene=='Board'):
            x2 = MyUpSampling2D(size=(1,1), num_pixels=(1,0))(x2)
            x3 = MyUpSampling2D(size=(1,1), num_pixels=(1,2))(x3)
        elif(self.scene=='CaVignal'):
            x3 = MyUpSampling2D(size=(1,1), num_pixels=(2,2))(x3)
        elif(self.scene=='Foliage'):
            x3 = MyUpSampling2D(size=(1,1), num_pixels=(0,2))(x3)
        elif(self.scene=='Toscana'):
            x3 = MyUpSampling2D(size=(1,1), num_pixels=(2,0))(x3)

    elif dataset_name=='UCSD':
        x2_ups = {'birds':(1,0),'chopper':(1,0),'flock':(1,0),'freeway':(1,1),
                  'hockey':(1,1),'jump':(1,0),'landing':(1,1),'ocean':(1,1),
                  'rain':(1,1),'skiing':(1,0),'surf':(1,0),'traffic':(1,1),'zodiac':(1,1)}
        x3_ups = {'birds':(3,0),'boats':(0,2),'chopper':(3,0),'cyclists':(2,0),
                  'flock':(3,0),'freeway':(3,3),'hockey':(3,1),'jump':(3,0),
                  'landing':(3,1),'ocean':(1,3),'peds':(2,2),'rain':(1,1),
                  'skiing':(3,0),'surf':(3,0),'surfers':(0,2),'traffic':(1,1),'zodiac':(1,1)}

        for key, val in x2_ups.items():
            if self.scene==key:
                x2 = MyUpSampling2D(size=(1,1), num_pixels=val)(x2)
                break

        for key, val in x3_ups.items():
            if self.scene==key:
                x3 = MyUpSampling2D(size=(1,1), num_pixels=val)(x3)
                break

    # concatenate feature maps
    top = keras.layers.concatenate([x1, x2, x3], name='feature_concat')                
    top = BatchNormalization()(top) # This is the additional insertion. 

    if dataset_name=='CDnet':
        if(self.scene=='wetSnow'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(3,0))(top)
        elif(self.scene=='skating'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(2,3))(top)

    # Transposed Conv
    top = self.transposedConv(top)

    if dataset_name=='CDnet':
        if(self.scene=='tramCrossroad_1fps'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(2,0))(top)
        elif(self.scene=='bridgeEntry'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(2,2))(top)
        elif(self.scene=='fluidHighway'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(2,0))(top)
        elif(self.scene=='streetCornerAtNight'): 
            top = MyUpSampling2D(size=(1,1), num_pixels=(1,0))(top)
            top = Cropping2D(cropping=((0, 0),(0, 1)))(top)
        elif(self.scene=='tramStation'):  
            top = Cropping2D(cropping=((1, 0),(0, 0)))(top)
        elif(self.scene=='twoPositionPTZCam'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(0,2))(top)
        elif(self.scene=='turbulence2'):
            top = Cropping2D(cropping=((1, 0),(0, 0)))(top)
            top = MyUpSampling2D(size=(1,1), num_pixels=(0,1))(top)
        elif(self.scene=='turbulence3'):
            top = MyUpSampling2D(size=(1,1), num_pixels=(2,0))(top)

    vision_model = Model(inputs=[input_1, input_2, input_3], outputs=top, name='vision_model')
    opt = keras.optimizers.RMSprop(lr = self.lr, rho=0.9, epsilon=1e-08, decay=0.0)

    # Since UCSD has no void label, we do not need to filter it out
    if dataset_name == 'UCSD':
        c_loss = loss2
        c_acc = acc2
    else:
        c_loss = loss
        c_acc = acc

    vision_model.compile(loss=c_loss, optimizer=opt, metrics=[c_acc])
    return vision_model
Swanwen commented 5 years ago


Thanks for your reply. I added the BN layer as you suggested, but it still didn't work on my computer...

rajskar commented 5 years ago

Then please check the other criteria. Are you using Keras 2.0.6 as suggested by the authors?


rajskar commented 5 years ago

Can you send me the kind of image output you are seeing? Is it all blank?


Swanwen commented 5 years ago


Sorry, I didn't check my email yesterday, and I am very grateful for your reply. Yes, the test image is all black, as the attachment shows. BTW, my Keras version is 2.0.6 and my Python version is 2.7, which is different from the author's. Could you please send me your test code? It's very kind of you to help me so much.

rajskar commented 5 years ago

https://drive.google.com/open?id=16vBj4OaR-qDCMu0xHlelkPLFnZIUjPn5

I am attaching the link to the model I trained here. Check if it works for you.

Swanwen commented 5 years ago

Thank you very much! Is it the model of highway?

rajskar commented 5 years ago

This is for office... Highway is already shared by the author.


InstantWindy commented 5 years ago

@rajskar How was your test result? Are you also using the author's code? I used the author's code to train a model and the results did not test well.

rajskar commented 5 years ago

I rewrote it specifically for office and also used the author's code; both work the same. I had all blank images when I ran the test code using the model the author provided, and also with my own trained model, but after adding BatchNormalization to the code everything started working and the results were good.


InstantWindy commented 5 years ago

The code provided by the author does not use batch normalization.

rajskar commented 5 years ago

No, I have added the layer to FgSegNet_M.


InstantWindy commented 5 years ago

Could you send me your code? Thanks!

rajskar commented 5 years ago

I am out of my lab now... Add a BatchNormalization layer after the concatenation.


InstantWindy commented 5 years ago

Do you mean you only need to add batch normalization once? Do the other layers in the network not need it?

rajskar commented 5 years ago

Yes, only once, because what follows is the deconvolution. So before starting the deconv, do the normalization.
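
In code terms, the single change amounts to the following (a minimal sketch mirroring the function posted earlier in this thread):

    # concatenate the multi-scale feature maps, then normalize them once
    top = keras.layers.concatenate([x1, x2, x3], name='feature_concat')
    top = BatchNormalization()(top)  # the single added layer
    # ...and only then start the transposed-convolution (deconv) part
    top = self.transposedConv(top)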


InstantWindy commented 5 years ago

Do you know why adding batch normalization works better?

rajskar commented 5 years ago

I am assuming that the outputs of the shared model at the three scales are distributed with different covariances, so normalizing may bring all the multi-scale features to an identical distribution, and the layers after it (the deconv) depend on this data for reconstruction/upsampling, in a sense. It's just my guess; we can discuss more if you have a different idea.


InstantWindy commented 5 years ago

Yeah, thanks! Are you studying foreground segmentation?

rajskar commented 5 years ago

Yes, it's one of the modules of my project.


InstantWindy commented 5 years ago

Same as mine, but I am a novice and hope we can learn from each other in the future.

rajskar commented 5 years ago

Sure .. Cool


InstantWindy commented 5 years ago

@rajskar Hey, I found that there seems to be a problem with loading this part of the data: X and Y do not correspond, as shown in the screenshot below (image not reproduced here). Because the index shuffle is used twice, the indices obtained the two times are different.

rajskar commented 5 years ago

Can you elaborate, please?


InstantWindy commented 5 years ago

X represents the input image array and Y represents the ground-truth array, so each X should correspond to a Y; that is, each input picture corresponds to one label. But shuffling the indices of the X and Y arrays results in X and Y not corresponding.

rajskar commented 5 years ago

But he is using the same shuffled index for both X and Y, so it is not a problem.
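
For illustration, the safe shuffling pattern looks like this (a minimal sketch; the variable names are assumed, not the repo's exact code):

    import numpy as np

    # one shared permutation keeps every input image aligned with its label
    idx = np.random.permutation(len(X))
    X, Y = X[idx], Y[idx]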


InstantWindy commented 5 years ago

Yes, sorry, I had the wrong idea; he uses the same shuffled index for both. Thanks!

rajskar commented 5 years ago

Yes its ok


Swanwen commented 5 years ago


Hey, I was busy with my final exams last week. I downloaded your model and it does work well. I also checked my code and found some details I had missed... Whatever, it finally works! Thank you for your help! Hope to communicate more in the future!

lim-anggun commented 5 years ago

Hi all, my apologies for the delayed response. Blank segmentation results may be caused by the randomness of glob.glob, which can leave X not corresponding to Y (it depends on the system). To be safe, you should sort both lists (before reading the images):

    X_list = sorted(X_list)
    Y_list = sorted(Y_list)

The code in the repo has been updated. Let me know if it works. (Feel free to reopen this issue if needed.)
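
For context, a minimal illustration of the fix (the directory layout here is hypothetical):

    import glob

    # sorting makes the input / ground-truth pairing deterministic across systems
    X_list = sorted(glob.glob('dataset/input/*.jpg'))
    Y_list = sorted(glob.glob('dataset/groundtruth/*.png'))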