DIUx-xView / xView2_baseline

Baseline localization and classification models for the xView 2 challenge.
https://xview2.org

Error while loading weights for damage_inference.py #18

Open nka77 opened 3 years ago

nka77 commented 3 years ago

I am getting a shape-mismatch error while loading weights in damage_inference.py from the 'classification.hdf5' file: `ValueError: Cannot assign to variable conv3_block1_0_conv/kernel:0 due to variable shape (1, 1, 256, 512) and value shape (1, 1, 128, 512) are incompatible`. (The weights for 'localization.h5' load fine.)

Below is the traceback for the error:

File "./damage_inference.py", line 93, in run_inference
    model.load_weights(model_weights)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 2234, in load_weights
    hdf5_format.load_weights_from_hdf5_group(f, self.layers)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 710, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/keras/backend.py", line 3745, in batch_set_value
    x.assign(np.asarray(value, dtype=dtype(x)))
  File "/Users/navjotkaur/miniforge3/envs/tfmacos/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 888, in assign
    raise ValueError(
ValueError: Cannot assign to variable conv3_block1_0_conv/kernel:0 due to variable shape (1, 1, 256, 512) and value shape (1, 1, 128, 512) are incompatible
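
For what it's worth, one way to narrow this down is to dump the weight shapes actually stored in classification.hdf5 and compare them with what the model built by damage_inference.py expects. Below is a minimal sketch using h5py; it assumes the file is a standard Keras weights file sitting in the current directory.

```python
# Minimal sketch: list every weight tensor stored in the HDF5 file so the
# saved shape of conv3_block1_0_conv/kernel can be compared with the shape
# the freshly built model expects.
# Assumes h5py is installed and "classification.hdf5" is in the working directory.
import h5py

def print_weight_shapes(path):
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            # Only datasets (the actual weight arrays) have shapes worth printing.
            if isinstance(obj, h5py.Dataset):
                print(name, obj.shape)
        f.visititems(visit)

if __name__ == "__main__":
    print_weight_shapes("classification.hdf5")
```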
suhail017 commented 2 years ago

Hi @nka77, I am having the same issue. Were you able to resolve it?

juka19 commented 1 year ago

I had the same problem. I managed to load the weights with Python 3.6, tensorflow==1.14.0, and keras==2.3.1. I also had to downgrade h5py to 2.10.0.
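
In case it helps, here is a small sanity-check sketch (not part of the baseline itself) that prints the interpreter and library versions before calling load_weights, so you can confirm you are on the combination that worked for me:

```python
# Sketch: confirm the runtime matches the combination that worked
# (Python 3.6, tensorflow 1.14.0, keras 2.3.1, h5py 2.10.0) before
# attempting model.load_weights(...).
import sys
import h5py
import keras
import tensorflow as tf

print("python    :", sys.version.split()[0])
print("tensorflow:", tf.__version__)
print("keras     :", keras.__version__)
print("h5py      :", h5py.__version__)
```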

rongtongxueya commented 1 year ago

I've also encountered this problem. Additionally, when I run process_data.py there are no test and train groupings in the output_dir folder, although train.csv and test.csv are generated. The usage string for the classifier is:

    python damage_classification.py [-h] --train_data TRAIN_DATA_PATH --train_csv TRAIN_CSV --test_data TEST_DATA_PATH --test_csv TEST_CSV [--model_in MODEL_WEIGHTS_PATH] --model_out MODEL_WEIGHTS_OUT

Can --test_data and --train_data point to the same directory? Has anyone managed to solve this? I've been stuck on it for almost a week now.
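
One possible way to create separate train/test folders from the generated CSVs is to copy the crops listed in each CSV into their own directory. This is only a sketch: the column name holding the crop filename ("uuid" below) and the directory paths are assumptions and may not match the actual output of process_data.py.

```python
# Hypothetical sketch: split processed crops into train/ and test/ folders
# using the filenames listed in train.csv and test.csv.
# ASSUMPTION: each CSV has a column (called "uuid" here) containing the crop
# filename; adjust the column name and paths to match your files.
import os
import shutil
import pandas as pd

def copy_split(csv_path, images_dir, dest_dir, filename_col="uuid"):
    os.makedirs(dest_dir, exist_ok=True)
    df = pd.read_csv(csv_path)
    for name in df[filename_col]:
        src = os.path.join(images_dir, name)
        if os.path.exists(src):
            shutil.copy(src, os.path.join(dest_dir, name))

copy_split("train.csv", "output_dir", "output_dir/train")
copy_split("test.csv", "output_dir", "output_dir/test")
```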

TangSL1ping commented 1 year ago

> I've also encountered this problem. [...] I've been stuck on this for almost a week now.

Hello, I have the same problem as you. Can anyone help fix this?

juka19 commented 1 year ago

As mentioned above, try it with Python 3.6 and TensorFlow 1.14.0; that worked for me.