daifeng2016 / End-to-end-CD-for-VHR-satellite-image

The project aims to contribute to the geoscience community
65 stars · 21 forks

run error #3

Open ARnnn opened 5 years ago

ARnnn commented 5 years ago

When I run the code, I get the following error. How can I solve it?

    Traceback (most recent call last):
      File "UNet++_MSOF_model.py", line 226
        train(args)
      File "UNet++_MSOF_model.py", line 185, in train
        callbacks=callable_list, max_q_size=1)
      File "D:\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
        return func(*args, **kwargs)
      File "D:\Anaconda3\lib\site-packages\keras\engine\training.py", line 2230, in fit_generator
        class_weight=class_weight)
      File "D:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1877, in train_on_batch
        class_weight=class_weight)
      File "D:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1480, in _standardize_user_data
        exception_prefix='target')
      File "D:\Anaconda3\lib\site-packages\keras\engine\training.py", line 86, in _standardize_input_data
        str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
    ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 5 array(s), but instead got the following list of 1 arrays: [array([[[[1., 1., 1.], [1., 1., 1.], ...
ARnnn commented 5 years ago

This is how I organize the data:

    def generateData(batch_size, data=[]):
        print('generateData...')
        while True:
            train_data = []
            train_label = []
            batch = 0
            for i in range(len(data)):
                url1 = data[i]
                batch += 1

                img = np.load(filepath1 + 'combine/' + url1)
                img = np.array(img, dtype="float")  # /255.0
                img = (img - np.min(img)) / (np.max(img) - np.min(img))
                img = img_to_array(img)
                train_data.append(img)

                label = np.load(filepath1 + 'label/' + url1)
                label = np.array(label, dtype='float') / 255.0
                # Threshold: pixels with probability above 0.5 are treated as the object, otherwise as background
                label[label > 0.5] = 1
                label[label <= 0.5] = 0
                label = img_to_array(label)
                train_label.append(label)
                print('a', batch)
                if batch % batch_size == 0:
                    # print('get enough batch!\n')
                    train_data = np.array(train_data)
                    train_label = np.array(train_label)
                    yield (train_data, train_label)
                    train_data = []
                    train_label = []
                    batch = 0

I then use fit_generator to train it. Do you know where I went wrong?

daifeng2016 commented 5 years ago

Hi, all your inputs to the net should be 4-D tensors (batch_size, W, H, C); maybe your 'train_label' is not.
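A note on the error message itself: it says the model expected 5 target arrays but got 1, which matches a UNet++ model compiled with five supervised outputs (nestnet_output_1 … nestnet_output_5). One way to adapt a single-label generator, assuming all five outputs are trained against the same ground truth as in deep supervision (the wrapper name below is my own, not from the repository):

```python
import numpy as np

def multi_output_wrapper(gen, n_outputs=5):
    """Wrap a (data, label) generator so each batch yields the same
    label once per model output, as a multi-output model expects."""
    for data, label in gen:
        yield data, [label] * n_outputs

# Toy check with a fake generator producing one 4-D batch.
def fake_gen():
    x = np.ones((2, 256, 256, 6), dtype="float32")   # two stacked bi-temporal images
    y = np.zeros((2, 256, 256, 1), dtype="float32")  # binary change masks
    yield x, y

batch_x, batch_y = next(multi_output_wrapper(fake_gen()))
print(len(batch_y))  # one target array per supervised output
```

Alternatively the generator itself can yield `(train_data, [train_label] * 5)` directly.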

xiaoxin700 commented 4 years ago

Hi, all your input to the net should be 4-D tensor (Batchsize, W,H,C), maybe your input of 'train_label' is not.

Hi, I'm learning your paper and the code in this repository. I have two questions:

1) At inference time, is the output tensor shape of the network [batch_size, 256, 256, 5] or [batch_size, 256, 256, 1]? When I ran inference I got an output of [1, 256, 256, 1], but the model's output is [nestnet_output_1, nestnet_output_2, nestnet_output_3, nestnet_output_4, nestnet_output_5]. I am not sure whether I have misunderstood the network.

2) How did you do the data augmentation described in your paper? Could you share that code? Thank you!
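On question 2, the authors' augmentation code is not in this thread, but a common approach for change-detection pairs is to apply the same random geometric transform to both the stacked input images and the label so they stay pixel-aligned. A hypothetical sketch with NumPy (the name augment_pair is my own, not from the repository):

```python
import numpy as np

def augment_pair(img, label, rng=np.random):
    """Apply one shared random 90-degree rotation plus optional flips
    to an image stack and its label mask, keeping them aligned."""
    k = rng.randint(4)                 # 0-3 quarter turns
    img = np.rot90(img, k, axes=(0, 1))
    label = np.rot90(label, k, axes=(0, 1))
    if rng.rand() < 0.5:               # horizontal flip
        img, label = img[:, ::-1], label[:, ::-1]
    if rng.rand() < 0.5:               # vertical flip
        img, label = img[::-1], label[::-1]
    return np.ascontiguousarray(img), np.ascontiguousarray(label)

# Toy usage: a 6-channel stacked image pair and a binary mask.
img = np.arange(256 * 256 * 6, dtype="float32").reshape(256, 256, 6)
mask = (img[:, :, :1] > img.mean()).astype("float32")
aug_img, aug_mask = augment_pair(img, mask)
print(aug_img.shape, aug_mask.shape)
```

Because the rotation index and flip decisions are drawn once and reused, the mask always matches the transformed image.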