fuzailpalnak / building-footprint-segmentation

Building footprint segmentation from satellite and aerial imagery
https://fuzailpalnak-buildingextraction-appbuilding-extraction-s-ov1rp9.streamlitapp.com/
Apache License 2.0

train model always precision : 0.00000, f1 : 0.00000, recall : 0.00000, iou : 0.00000 #46

Closed Juldeng closed 6 months ago

Juldeng commented 1 year ago

Hello Fuzail,

Thanks first. I tried training DLinkNet34 with the Massachusetts Buildings Dataset. The training metrics look correct: accuracy: 0.82643, precision: 0.94444, f1: 0.85087, recall: 0.77669, iou: 0.77669; but the validation metrics are: accuracy: 0.85126, precision: 0.00000, f1: 0.00000, recall: 0.00000, iou: 0.00000. Something must be wrong, but I can't solve it. Can you give me some advice? Thanks again, I'm looking forward to your reply!
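A likely explanation for this pattern (an assumption, not confirmed from the repository code): if the model predicts only background on the validation tiles, pixel accuracy stays high on an imbalanced dataset while precision, recall, F1, and IoU all collapse to zero. A minimal NumPy sketch with a hypothetical `binary_metrics` helper, not the repository's own metric code:

```python
import numpy as np

def binary_metrics(pred, target, eps=1e-7):
    """Pixel-wise accuracy/precision/recall/F1/IoU for binary masks."""
    tp = np.logical_and(pred == 1, target == 1).sum()
    fp = np.logical_and(pred == 1, target == 0).sum()
    fn = np.logical_and(pred == 0, target == 1).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    accuracy = (pred == target).mean()
    return accuracy, precision, recall, f1, iou

# Ground truth: 10% building pixels; prediction: all background.
target = np.zeros((100, 100), dtype=int)
target[:10, :] = 1
pred = np.zeros_like(target)

acc, prec, rec, f1, iou = binary_metrics(pred, target)
# acc is 0.9 even though prec, rec, f1, and iou are all 0.0
```

Inspecting a few raw network outputs (before thresholding) is a quick way to check whether the model is emitting all-background masks.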

fuzailpalnak commented 1 year ago

did you test the trained model on images?

Wangxinyu-qlz commented 11 months ago

Same here. And it seemingly didn't really start training. (screenshot)

fuzailpalnak commented 11 months ago

@Finn-Neo what model architecture are you using ?

Wangxinyu-qlz commented 11 months ago

Do you mean this?

```python
model = segmentation.load_model(
    name="ReFineNet",
    transfer_weights=r"../weights/refine.pth",
    pre_trained_image_net=False,
    top_layers_trainable=False,
)
```

fuzailpalnak commented 11 months ago

Try:

```python
model = segmentation.load_model(
    name="DLinkNet34",
    transfer_weights=r"../weights/refine.pth",
    pre_trained_image_net=False,
    top_layers_trainable=False,
)
```

Wangxinyu-qlz commented 11 months ago

> try `model = segmentation.load_model(name="DLinkNet34", transfer_weights=r"../weights/refine.pth", pre_trained_image_net = False, top_layers_trainable = False)`

Got this: (screenshot). Then I tried `model = segmentation.load_model(name="DLinkNet34", transfer_weights=r"../weights/refine.pth", pre_trained_image_net=False)` and got this: (screenshot). I also tried setting `transfer_weights=r"../weights/best.pt"` (best.pt is in DLinkNet.zip) and got this: (screenshot)

Wangxinyu-qlz commented 11 months ago

I ran it with config.ipynb here and changed `model = segmentation.load_model(name=config["Model"]["name"])` to `model = segmentation.load_model(name="ReFineNet", transfer_weights=r"../weights/refine.pth", pre_trained_image_net=False, top_layers_trainable=False)` according to issue #48. Other parts are unchanged.

fuzailpalnak commented 11 months ago

Download this file, then point the `transfer_weights` argument at the file extracted from the .zip:

```python
# before
model = segmentation.load_model(name=config["Model"]["name"])

# after
model = segmentation.load_model(
    name="ReFineNet",
    transfer_weights=r"...",  # path to the extracted weights file
    pre_trained_image_net=False,
    top_layers_trainable=False,
)
```

Wangxinyu-qlz commented 11 months ago

(screenshot) and got this error:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[136], line 7
      1 augmenters = A.Compose([
      2     A.HorizontalFlip(p=0.5),
      3     A.RandomBrightnessContrast(p=0.2)
      4 ])
      6 # model = segmentation.load_model(name=config["Model"]["name"])
----> 7 model = segmentation.load_model(name="ReFineNet", transfer_weights=r"../weights/best.pt", pre_trained_image_net = False, top_layers_trainable = False)
      8 # for param in model.parameters():
      9 #     print(param.data)
     (...)
     13 #     if i <= 161:
     14 #         param.requires_grad = False
     15 criterion = segmentation.load_criterion(name=config["Criterion"]["name"])

File ~\Desktop\building segmentation\building-footprint-segmentation\building_footprint_segmentation\segmentation.py:17, in Segmentation.load_model(self, name, transfer_weights, **kwargs)
     15 model = self.segmentation.create_network(name, **kwargs)
     16 if transfer_weights is not None:
---> 17     model.load_state_dict(torch.load(transfer_weights))
     18 return load_parallel_model(model)

File ~\anaconda3\envs\match\building-footprint\lib\site-packages\torch\nn\modules\module.py:2041, in Module.load_state_dict(self, state_dict, strict)
   2036 error_msgs.insert(
   2037     0, 'Missing key(s) in state_dict: {}. '.format(
   2038         ', '.join('"{}"'.format(k) for k in missing_keys)))
   2040 if len(error_msgs) > 0:
-> 2041 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
   2042     self.__class__.__name__, "\n\t".join(error_msgs)))
   2043 return _IncompatibleKeys(missing_keys, unexpected_keys)

RuntimeError: Error(s) in loading state_dict for ReFineNet:
	Missing key(s) in state_dict: "layer0.0.weight", "layer0.1.weight", "layer0.1.bias", "layer0.1.running_mean", "layer0.1.running_var", "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var", "layer1.1.conv1.weight", "layer1.1.bn1.weight", "layer1.1.bn1.bias", "layer1.1.bn1.running_mean", "layer1.1.bn1.running_var", "layer1.1.conv2.weight", "layer1.1.bn2.weight", "layer1.1.bn2.bias", "layer1.1.bn2.running_mean", "layer1.1.bn2.running_var", "layer1.2.conv1.weight", "layer1.2.bn1.weight", "layer1.2.bn1.bias", "layer1.2.bn1.running_mean", "layer1.2.bn1.running_var", "layer1.2.conv2.weight", "layer1.2.bn2.weight", "layer1.2.bn2.bias", "layer1.2.bn2.running_mean", "layer1.2.bn2.running_var", "layer2.0.conv1.weight", "layer2.0.bn1.weight", "layer2.0.bn1.bias", "layer2.0.bn1.running_mean", "layer2.0.bn1.running_var", "layer2.0.conv2.weight", "layer2.0.bn2.weight", "layer2.0.bn2.bias", "layer2.0.bn2.running_mean", "layer2.0.bn2.running_var", "layer2.0.downsample.0.weight", "layer2.0.downsample.1.weight", "layer2.0.downsample.1.bias", "layer2.0.downsample.1.running_mean", "layer2.0.downsample.1.running_var", "layer2.1.conv1.weight", "layer2.1.bn1.weight", "layer2.1.bn1.bias", "layer2.1.bn1.running_mean", "layer2.1.bn1.running_var", "layer2.1.conv2.weight", "layer2.1.bn2.weight", "layer2.1.bn2.bias", "layer2.1.bn2.running_mean", "layer2.1.bn2.running_var", "layer2.2.conv1.weight", "layer2.2.bn1.weight", "layer2.2.bn1.bias", "layer2.2.bn1.running_mean", "layer2.2.bn1.running_var", "layer2.2.conv2.weight", "layer2.2.bn2.weight", "layer2.2.bn2.bias", "layer2.2.bn2.running_mean", "layer2.2.bn2.running_var", "layer2.3.conv1.weight", "layer2.3.bn1.weight", "layer2.3.bn1.bias", "layer2.3.bn1.running_mean", "layer2.3.bn1.running_var", "layer2.3.conv2.weight", "layer2.3.bn2.weight", "layer2.3.bn2.bias", "layer2.3.bn2.running_mean", "layer2.3.bn2.running_var", "layer3.0.conv1.weight", "layer3.0.bn1.weight", "layer3.0.bn1.bias", "layer3.0.bn1.running_mean", "layer3.0.bn1.running_var", "layer3.0.conv2.weight", "layer3.0.bn2.weight", "layer3.0.bn2.bias", "layer3.0.bn2.running_mean", "layer3.0.bn2.running_var", "layer3.0.downsample.0.weight", "layer3.0.downsample.1.weight", "layer3.0.downsample.1.bias", "layer3.0.downsample.1.running_mean", "layer3.0.downsample.1.running_var", "layer3.1.conv1.weight", "layer3.1.bn1.weight", "layer3.1.bn1.bias", "layer3.1.bn1.running_mean", "layer3.1.bn1.running_var", "layer3.1.conv2.weight", "layer3.1.bn2.weight", "layer3.1.bn2.bias", "layer3.1.bn2.running_mean", "layer3.1.bn2.running_var", "layer3.2.conv1.weight", "layer3.2.bn1.weight", "layer3.2.bn1.bias", "layer3.2.bn1.running_mean", "layer3.2.bn1.running_var", "layer3.2.conv2.weight", "layer3.2.bn2.weight", "layer3.2.bn2.bias", "layer3.2.bn2.running_mean", "layer3.2.bn2.running_var", "layer3.3.conv1.weight", "layer3.3.bn1.weight", "layer3.3.bn1.bias", "layer3.3.bn1.running_mean", "layer3.3.bn1.running_var", "layer3.3.conv2.weight", "layer3.3.bn2.weight", "layer3.3.bn2.bias", "layer3.3.bn2.running_mean", "layer3.3.bn2.running_var", "layer3.4.conv1.weight", "layer3.4.bn1.weight", "layer3.4.bn1.bias", "layer3.4.bn1.running_mean", "layer3.4.bn1.running_var", "layer3.4.conv2.weight", "layer3.4.bn2.weight", "layer3.4.bn2.bias", "layer3.4.bn2.running_mean", "layer3.4.bn2.running_var", "layer3.5.conv1.weight", "layer3.5.bn1.weight", "layer3.5.bn1.bias", "layer3.5.bn1.running_mean", "layer3.5.bn1.running_var", "layer3.5.conv2.weight", "layer3.5.bn2.weight", "layer3.5.bn2.bias", "layer3.5.bn2.running_mean", "layer3.5.bn2.running_var", "layer4.0.conv1.weight", "layer4.0.bn1.weight", "layer4.0.bn1.bias", "layer4.0.bn1.running_mean", "layer4.0.bn1.running_var", "layer4.0.conv2.weight", "layer4.0.bn2.weight", "layer4.0.bn2.bias", "layer4.0.bn2.running_mean", "layer4.0.bn2.running_var", "layer4.0.downsample.0.weight", "layer4.0.downsample.1.weight", "layer4.0.downsample.1.bias", "layer4.0.downsample.1.running_mean", "layer4.0.downsample.1.running_var", "layer4.1.conv1.weight", "layer4.1.bn1.weight", "layer4.1.bn1.bias", "layer4.1.bn1.running_mean", "layer4.1.bn1.running_var", "layer4.1.conv2.weight", "layer4.1.bn2.weight", "layer4.1.bn2.bias", "layer4.1.bn2.running_mean", "layer4.1.bn2.running_var", "layer4.2.conv1.weight", "layer4.2.bn1.weight", "layer4.2.bn1.bias", "layer4.2.bn1.running_mean", "layer4.2.bn1.running_var", "layer4.2.conv2.weight", "layer4.2.bn2.weight", "layer4.2.bn2.bias", "layer4.2.bn2.running_mean", "layer4.2.bn2.running_var", "convolution_layer_4_dim_reduction.weight", "convolution_layer_4_dim_reduction.bias", "convolution_layer_3_dim_reduction.weight", "convolution_layer_3_dim_reduction.bias", "convolution_layer_2_dim_reduction.weight", "convolution_layer_2_dim_reduction.bias", "convolution_layer_1_dim_reduction.weight", "convolution_layer_1_dim_reduction.bias", "refine_block_4.residual_convolution_unit.convolution_layer_1.weight", "refine_block_4.residual_convolution_unit.convolution_layer_1.bias", "refine_block_4.residual_convolution_unit.convolution_layer_2.weight", "refine_block_4.residual_convolution_unit.convolution_layer_2.bias", "refine_block_4.multi_resolution_fusion.convolution_layer_lower_inputs.weight", "refine_block_4.multi_resolution_fusion.convolution_layer_lower_inputs.bias", "refine_block_4.multi_resolution_fusion.convolution_layer_higher_inputs.weight", "refine_block_4.multi_resolution_fusion.convolution_layer_higher_inputs.bias", "refine_block_4.chained_residual_pooling.convolution_layer_1.weight", "refine_block_4.chained_residual_pooling.convolution_layer_1.bias", "refine_block_3.residual_convolution_unit.convolution_layer_1.weight", "refine_block_3.residual_convolution_unit.convolution_layer_1.bias", "refine_block_3.residual_convolution_unit.convolution_layer_2.weight", "refine_block_3.residual_convolution_unit.convolution_layer_2.bias", "refine_block_3.multi_resolution_fusion.convolution_layer_lower_inputs.weight", "refine_block_3.multi_resolution_fusion.convolution_layer_lower_inputs.bias", "refine_block_3.multi_resolution_fusion.convolution_layer_higher_inputs.weight", "refine_block_3.multi_resolution_fusion.convolution_layer_higher_inputs.bias", "refine_block_3.chained_residual_pooling.convolution_layer_1.weight", "refine_block_3.chained_residual_pooling.convolution_layer_1.bias", "refine_block_2.residual_convolution_unit.convolution_layer_1.weight", "refine_block_2.residual_convolution_unit.convolution_layer_1.bias", "refine_block_2.residual_convolution_unit.convolution_layer_2.weight", "refine_block_2.residual_convolution_unit.convolution_layer_2.bias", "refine_block_2.multi_resolution_fusion.convolution_layer_lower_inputs.weight", "refine_block_2.multi_resolution_fusion.convolution_layer_lower_inputs.bias", "refine_block_2.multi_resolution_fusion.convolution_layer_higher_inputs.weight", "refine_block_2.multi_resolution_fusion.convolution_layer_higher_inputs.bias", "refine_block_2.chained_residual_pooling.convolution_layer_1.weight", "refine_block_2.chained_residual_pooling.convolution_layer_1.bias", "refine_block_1.residual_convolution_unit.convolution_layer_1.weight", "refine_block_1.residual_convolution_unit.convolution_layer_1.bias", "refine_block_1.residual_convolution_unit.convolution_layer_2.weight", "refine_block_1.residual_convolution_unit.convolution_layer_2.bias", "refine_block_1.multi_resolution_fusion.convolution_layer_lower_inputs.weight", "refine_block_1.multi_resolution_fusion.convolution_layer_lower_inputs.bias", "refine_block_1.multi_resolution_fusion.convolution_layer_higher_inputs.weight", "refine_block_1.multi_resolution_fusion.convolution_layer_higher_inputs.bias", "refine_block_1.chained_residual_pooling.convolution_layer_1.weight", "refine_block_1.chained_residual_pooling.convolution_layer_1.bias", "residual_convolution_unit.convolution_layer_1.weight", "residual_convolution_unit.convolution_layer_1.bias", "residual_convolution_unit.convolution_layer_2.weight", "residual_convolution_unit.convolution_layer_2.bias", "final_layer.weight", "final_layer.bias".
	Unexpected key(s) in state_dict: "model", "optimizer", "start_epoch", "step", "bst_vld_loss", "end_epoch".
```
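The `Unexpected key(s) in state_dict: "model", "optimizer", …` part of the error suggests best.pt is a full training checkpoint rather than a bare state_dict, so the network weights presumably sit under the `"model"` key. A minimal sketch of unwrapping such a checkpoint (key names inferred from the error message only, and a stand-in `nn.Linear` instead of the real network):

```python
import io
import torch
import torch.nn as nn

# Tiny stand-in network; the real model would be ReFineNet or DLinkNet34.
model = nn.Linear(4, 2)

# Simulate what best.pt appears to be, judging from the "Unexpected key(s)":
# a training checkpoint with the weights nested under "model".
buffer = io.BytesIO()
torch.save({"model": model.state_dict(), "optimizer": {}, "start_epoch": 0}, buffer)
buffer.seek(0)

loaded = torch.load(buffer, map_location="cpu")
# Unwrap the nested state_dict before calling load_state_dict.
state_dict = loaded["model"] if isinstance(loaded, dict) and "model" in loaded else loaded
model.load_state_dict(state_dict)  # loads cleanly instead of raising RuntimeError
```

Passing the bare weight file (as in the maintainer's next reply) avoids the problem without unwrapping anything.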

fuzailpalnak commented 11 months ago

Use this new link and update the `transfer_weights` argument:

```python
model = segmentation.load_model(
    name="DLinkNet34",
    transfer_weights=r"...",
    pre_trained_image_net=False,
    top_layers_trainable=False,
)
```