neptune-ai / open-solution-mapping-challenge

Open solution to the Mapping Challenge :earth_americas:
https://www.crowdai.org/challenges/mapping-challenge

KeyError: 'inference' when applying the solution weights to my data #232

Open · willsbit opened this issue 4 years ago

willsbit commented 4 years ago

I'm trying to predict on new data using the model weights that were made available on neptune.ai. After struggling a bit with some other errors, I managed to get to the python main.py predict_on_dir command.

This is the command I'm running:

!python open-solution-mapping-challenge/main.py predict-on-dir \
--pipeline_name scoring_model \
--chunk_size 1000 \
--dir_path inference_directory \
--prediction_path resultados/predictions.json

I tried predict_on_dir first, but it said that command doesn't exist in main.py. Now I'm stuck with this error:

/content/drive/My Drive/mappingChallenge/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
/content/drive/My Drive/mappingChallenge/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
2020-09-30 17-46-34 mapping-challenge >>> creating metadata
1it [00:00, 6260.16it/s]
2020-09-30 17-46-34 mapping-challenge >>> predicting
Traceback (most recent call last):
  File "open-solution-mapping-challenge/main.py", line 68, in <module>
    main()
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "open-solution-mapping-challenge/main.py", line 52, in predict_on_dir
    pipeline_manager.predict_on_dir(pipeline_name, dir_path, prediction_path, chunk_size)
  File "/content/drive/My Drive/mappingChallenge/open-solution-mapping-challenge/src/pipeline_manager.py", line 62, in predict_on_dir
    predict_on_dir(pipeline_name, dir_path, prediction_path, chunk_size, self.logger, self.params)
  File "/content/drive/My Drive/mappingChallenge/open-solution-mapping-challenge/src/pipeline_manager.py", line 177, in predict_on_dir
    pipeline = PIPELINES[pipeline_name]['inference'](SOLUTION_CONFIG)
KeyError: 'inference'

I'm using Google Colab, by the way. Any suggestions?

Edit: formatting

jakubczakon commented 4 years ago

Hi @willsbit,

I think you should be able to get it to work with predict_on_dir; perhaps reading about how the Click package behaves on Colab could help. Other than that, I don't know what may be wrong.
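
One thing worth knowing: if I remember correctly, Click 7.x names commands with dashes instead of underscores, so predict-on-dir is the spelling the CLI expects even though the function is called predict_on_dir. If the shell invocation itself is the problem on Colab, you could also try calling the command programmatically; a rough sketch (assuming main.py's Click group imports as main):

from click.testing import CliRunner
from main import main  # the Click group defined in open-solution-mapping-challenge/main.py

runner = CliRunner()
result = runner.invoke(main, [
    'predict-on-dir',
    '--pipeline_name', 'scoring_model',
    '--chunk_size', '1000',
    '--dir_path', 'inference_directory',
    '--prediction_path', 'resultados/predictions.json',
])
print(result.output)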

Also, I have to say, I'm not sure when I will be able to get to this issue. Sorry!

willsbit commented 4 years ago

@jakubczakon Using only the first-level model (unet) instead of scoring_model seems to resolve the inference error, but now I'm getting some NumPy exceptions:

/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
2020-10-01 21-49-23 mapping-challenge >>> creating metadata
2it [00:00, 13378.96it/s]
2020-10-01 21-49-23 mapping-challenge >>> predicting
  0% 0/1 [00:00<?, ?it/s]2020-10-01 21-49-25 steps >>> step xy_inference adapting inputs
2020-10-01 21-49-25 steps >>> step xy_inference transforming...
2020-10-01 21-49-25 steps >>> step xy_inference adapting inputs
2020-10-01 21-49-25 steps >>> step xy_inference transforming...
2020-10-01 21-49-25 steps >>> step loader adapting inputs
  0% 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 1489, in squeeze
    squeeze = a.squeeze
AttributeError: 'NoneType' object has no attribute 'squeeze'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 68, in <module>
    main()
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "main.py", line 52, in predict_on_dir
    pipeline_manager.predict_on_dir(pipeline_name, dir_path, prediction_path, chunk_size)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 62, in predict_on_dir
    predict_on_dir(pipeline_name, dir_path, prediction_path, chunk_size, self.logger, self.params)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 178, in predict_on_dir
    prediction = generate_prediction(meta, pipeline, logger, CATEGORY_IDS, chunk_size, params.num_threads)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 188, in generate_prediction
    return _generate_prediction_in_chunks(meta_data, pipeline, logger, category_ids, chunk_size, num_threads)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 222, in _generate_prediction_in_chunks
    output = pipeline.transform(data)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 158, in transform
    step_inputs[input_step.name] = input_step.transform(data)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 158, in transform
    step_inputs[input_step.name] = input_step.transform(data)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 158, in transform
    step_inputs[input_step.name] = input_step.transform(data)
  [Previous line repeated 5 more times]
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 161, in transform
    step_inputs = self.adapt(step_inputs)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 203, in adapt
    adapted_steps[adapted_name] = func(raw_inputs)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py", line 228, in squeeze_inputs
    return np.squeeze(inputs[0], axis=1)
  File "<__array_function__ internals>", line 6, in squeeze
  File "/usr/local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 1491, in squeeze
    return _wrapit(a, 'squeeze', axis=axis)
  File "/usr/local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 44, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
numpy.AxisError: axis 1 is out of bounds for array of dimension 0

I noticed the squeeze_inputs function is in src/utils.py, but I don't know how to fix it.
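
For what it's worth, the exact pair of exceptions above reproduces with a bare None input, which makes me think the loader handed squeeze_inputs nothing at all (a minimal repro, not the pipeline code):

import numpy as np

# np.squeeze first tries the array's own .squeeze (AttributeError on None),
# then falls back to asarray(None), a 0-d object array, where axis=1 is out of bounds.
np.squeeze(None, axis=1)  # AttributeError, then numpy.AxisError: axis 1 is out of bounds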

jakubczakon commented 4 years ago

The squeeze_inputs problem, as I remember, was usually caused by an incorrect PyTorch version. Are you using the version specified in requirements.txt?

willsbit commented 4 years ago

Pinning torch==0.3.1, torchvision==0.2.0, and numpy==1.16.4 seems to fix it.
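
On Colab that amounts to something like this (assuming pip can still resolve these versions):

!pip install torch==0.3.1 torchvision==0.2.0 numpy==1.16.4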

Then I got a new RuntimeError (ignore this one; see my comment below):

/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
2020-10-02 17-18-16 mapping-challenge >>> creating metadata
2it [00:00, 10951.19it/s]
2020-10-02 17-18-16 mapping-challenge >>> predicting
  0% 0/1 [00:00<?, ?it/s]2020-10-02 17-18-20 steps >>> step xy_inference adapting inputs
2020-10-02 17-18-20 steps >>> step xy_inference transforming...
2020-10-02 17-18-20 steps >>> step xy_inference adapting inputs
2020-10-02 17-18-20 steps >>> step xy_inference transforming...
2020-10-02 17-18-20 steps >>> step loader adapting inputs
2020-10-02 17-18-20 steps >>> step loader transforming...
2020-10-02 17-18-20 steps >>> step unet unpacking inputs
2020-10-02 17-18-20 steps >>> step unet loading transformer...
  0% 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 68, in <module>
    main()
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "main.py", line 52, in predict_on_dir
    pipeline_manager.predict_on_dir(pipeline_name, dir_path, prediction_path, chunk_size)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 62, in predict_on_dir
    predict_on_dir(pipeline_name, dir_path, prediction_path, chunk_size, self.logger, self.params)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 178, in predict_on_dir
    prediction = generate_prediction(meta, pipeline, logger, CATEGORY_IDS, chunk_size, params.num_threads)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 188, in generate_prediction
    return _generate_prediction_in_chunks(meta_data, pipeline, logger, category_ids, chunk_size, num_threads)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/pipeline_manager.py", line 222, in _generate_prediction_in_chunks
    output = pipeline.transform(data)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 158, in transform
    step_inputs[input_step.name] = input_step.transform(data)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 158, in transform
    step_inputs[input_step.name] = input_step.transform(data)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 158, in transform
    step_inputs[input_step.name] = input_step.transform(data)
  [Previous line repeated 4 more times]
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 164, in transform
    return self._cached_transform(step_inputs)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/base.py", line 170, in _cached_transform
    self.transformer.load(self.cache_filepath_step_transformer)
  File "/content/drive/My Drive/open-solution-mapping-challenge/src/steps/pytorch/models.py", line 159, in load
    self.model.load_state_dict(torch.load(filepath, map_location=lambda storage, loc: storage))
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 267, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 427, in _load
    deserialized_objects[key]._set_from_file(f, offset)
RuntimeError: storage has wrong size: expected 126630668275540 got 128

I noticed that the data/checkpoints folder hadn't been created automatically... perhaps because I didn't train the model myself? I was thinking that could be related to this issue.

willsbit commented 4 years ago

Forget what I said: I re-uploaded unet and scoring_model to the transform folder, and now it outputs the predictions.

/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
/content/drive/My Drive/open-solution-mapping-challenge/src/utils.py:132: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
2020-10-02 18-28-13 mapping-challenge >>> creating metadata
1it [00:00, 7767.23it/s]
2020-10-02 18-28-13 mapping-challenge >>> predicting
  0% 0/1 [00:00<?, ?it/s]2020-10-02 18-28-18 steps >>> step xy_inference adapting inputs
2020-10-02 18-28-18 steps >>> step xy_inference transforming...
2020-10-02 18-28-18 steps >>> step xy_inference adapting inputs
2020-10-02 18-28-18 steps >>> step xy_inference transforming...
2020-10-02 18-28-18 steps >>> step loader adapting inputs
2020-10-02 18-28-18 steps >>> step loader transforming...
2020-10-02 18-28-18 steps >>> step unet unpacking inputs
2020-10-02 18-28-18 steps >>> step unet loading transformer...
2020-10-02 18-28-18 steps >>> step unet transforming...
2020-10-02 18-28-51 steps >>> step mask_resize adapting inputs
2020-10-02 18-28-51 steps >>> step mask_resize transforming...

100% 1/1 [00:00<00:00, 23.63it/s]
2020-10-02 18-28-51 steps >>> step mask_resize caching outputs...
2020-10-02 18-28-51 steps >>> step category_mapper adapting inputs
2020-10-02 18-28-51 steps >>> step category_mapper transforming...

100% 1/1 [00:00<00:00, 3761.71it/s]
2020-10-02 18-28-51 steps >>> step mask_erosion adapting inputs
2020-10-02 18-28-51 steps >>> step mask_erosion transforming...

100% 1/1 [00:00<00:00, 22192.08it/s]
2020-10-02 18-28-51 steps >>> step labeler adapting inputs
2020-10-02 18-28-51 steps >>> step labeler transforming...

100% 1/1 [00:00<00:00, 471.69it/s]
2020-10-02 18-28-51 steps >>> step mask_dilation adapting inputs
2020-10-02 18-28-51 steps >>> step mask_dilation transforming...

100% 1/1 [00:00<00:00, 404.50it/s]
2020-10-02 18-28-51 steps >>> step mask_resize loading output...
2020-10-02 18-28-51 steps >>> step score_builder adapting inputs
2020-10-02 18-28-51 steps >>> step score_builder transforming...

100% 1/1 [00:00<00:00, 81.49it/s]
2020-10-02 18-28-51 steps >>> step output adapting inputs
2020-10-02 18-28-51 steps >>> step output transforming...
2020-10-02 18-28-51 mapping-challenge >>> Creating annotations
100% 1/1 [00:33<00:00, 33.69s/it]
2020-10-02 18-28-51 mapping-challenge >>> submission saved to resultados/predictions.json
2020-10-02 18-28-51 mapping-challenge >>> submission head 

{'image_id': 0, 'category_id': 100, 'score': 39.3901243099063, 'segmentation': {'size': [300, 300], 'counts': '`kf09R9100OHRG4k82TGLi8=0001O0000O1000000000000001O12NO0000O10O1000000001N10O101O0000O10O1O1000000000000000000000001O0000000000001O000000001O1O1O0000000000O11O00000000O1O1O10000O1O100O1O1MUGEk8;3000000000000O1000000001O000000000000O10000000000O10000O1000000000000001O0000000000000000O107IQVi0'}, 'bbox': [78.0, 0.0, 136.0, 23.0]}

Here's the link to the image plot output (original on the left, masked on the right): https://imgur.com/a/Tkvkr3m
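
In case it's useful to anyone, the segmentation fields are COCO-style RLEs; here's a minimal decoding sketch (assuming pycocotools is installed and predictions.json is a list of records like the head shown above):

import json
from pycocotools import mask as cocomask

with open('resultados/predictions.json') as f:
    preds = json.load(f)

rle = preds[0]['segmentation']
if isinstance(rle['counts'], str):
    rle['counts'] = rle['counts'].encode('ascii')  # pycocotools expects bytes in Python 3
binary_mask = cocomask.decode(rle)  # uint8 numpy array, shape (300, 300) here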

Edit: I tried the visualization method suggested here, but I can't do it since I don't have ground-truth JSONs.

I tried using predictions.json as the ground truth, but after removing the {} (I was getting "list type not supported"), this is what I get:

loading annotations into memory...
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
<ipython-input-50-7f3c22a1f228> in <module>()
     15 PREDICTIONS_PATH = 'resultados/predictions.json'
     16 
---> 17 coco_ground_truth = COCO(VAL_GROUND_TRUTH_PATH)
     18 coco_pred = coco_ground_truth.loadRes(PREDICTIONS_PATH)
     19 image_ids = coco_ground_truth.getImgIds()

4 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
    353         """
    354         try:
--> 355             obj, end = self.scan_once(s, idx)
    356         except StopIteration as err:
    357             raise JSONDecodeError("Expecting value", s, err.value) from None

JSONDecodeError: Expecting ',' delimiter: line 1 column 12 (char 11)

jakubczakon commented 4 years ago

I'd suggest just using predict_on_dir to get those predictions.

It seems there is an issue with indexing, which was discussed (with fixes) here.

I hope this helps

willsbit commented 4 years ago

It works, yay! Thank you for all your help, @jakubczakon. Does it only work with 300x300 images, or can they be any size? Also, do you know if .tif files are supported?

pscheich commented 4 years ago

Hey @willsbit

I had the same error as you: pipeline = PIPELINES[pipeline_name]['inference'](SOLUTION_CONFIG) -> KeyError: 'inference'.

My solution was to not use --pipeline_name scoring_model; I used --pipeline_name unet_tta_scoring_model instead.

As described in REPRODUCE_RESULTS.md -> "Predict on new data".
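
Presumably the KeyError just means the scoring_model entry in the PIPELINES dict (imported in src/pipeline_manager.py, per the traceback) has no 'inference' key, while unet_tta_scoring_model defines one. A hypothetical sketch of the failing lookup, with build_pipeline and SOLUTION_CONFIG as stand-ins:

def build_pipeline(config):  # stand-in for the real inference pipeline constructor
    return 'pipeline built with {}'.format(config)

SOLUTION_CONFIG = {}         # stand-in for the real config object

PIPELINES = {
    'scoring_model': {},                                    # no 'inference' key
    'unet_tta_scoring_model': {'inference': build_pipeline},
}

PIPELINES['scoring_model']['inference'](SOLUTION_CONFIG)    # raises KeyError: 'inference'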

willsbit commented 4 years ago

@pscheich That didn't work for me :/ I got:

File "/content/drive/My Drive/open-solution-mapping-challenge/src/postprocessing.py", line 358, in get_contour
    _, contours, hierarchy = cv2.findContours(mask.astype(np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
ValueError: not enough values to unpack (expected 3, got 2)
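
(That ValueError looks like the OpenCV 3 vs. 4 API change: findContours returns three values in 3.x but only two in 4.x. A version-agnostic unpacking sketch, not the repo's code:)

import cv2
import numpy as np

mask = np.zeros((300, 300), dtype=np.uint8)  # stand-in for the pipeline's mask

# OpenCV 3.x returns (image, contours, hierarchy); 4.x returns (contours, hierarchy).
# Taking the last two elements works under both versions.
ret = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
contours, hierarchy = ret[-2], ret[-1]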

jakubczakon commented 4 years ago

Hi @vishwa15

unet_tta_scoring_model is the pipeline name that:

  • uses the unet model (you downloaded)
  • uses TTA (test time augmentation)
  • uses scoring_model to pick the best thresholds per image

So in short you don't need to download it, you already have all the pieces.

vishwa15 commented 4 years ago

Hi @vishwa15

unet_tta_scoring_model is the pipeline name that:

  • uses the unet model (you downloaded)
  • uses TTA (test time augmentation)
  • uses scoring_model to pick the best thresholds per image

So in short you don't need to download it, you already have all the pieces.

@jakubczakon I got it. Usually a saved model is a single file; here we need two files. Thank you for clarifying.