neptune-ai / open-solution-mapping-challenge

Open solution to the Mapping Challenge :earth_americas:
https://www.crowdai.org/challenges/mapping-challenge
MIT License

Evaluation in chunks works strangely with scoring model #156

Closed: apyskir closed this issue 6 years ago

apyskir commented 6 years ago

In both of the following cases I use `eval_data_sample: 1000`. When I run:

`neptune run ... evaluate -c 500 -p unet_tta_scoring_model`

and then:

`neptune run ... evaluate -p unet_tta_scoring_model`

I get different results, but they should be identical. Something is wrong.
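A minimal sketch (not this project's actual evaluation code, and not necessarily the cause fixed below) of one way chunked evaluation can diverge from full-set evaluation: if a ratio-style metric is computed per chunk and then averaged, the result depends on chunk boundaries, because chunks contribute equally regardless of how many predictions each one actually selects. All names and data below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000)          # stand-in for per-instance scores from a scoring model
labels = rng.integers(0, 2, 1000)  # stand-in for ground-truth matches (0/1)

def precision_at_threshold(s, y, thr=0.5):
    # Fraction of selected predictions that are correct.
    picked = s >= thr
    return y[picked].mean() if picked.any() else 0.0

# Evaluation over the full sample of 1000.
full = precision_at_threshold(scores, labels)

# Chunked evaluation: compute the metric per chunk of 500, then average the chunk values.
chunks = [(scores[i:i + 500], labels[i:i + 500]) for i in range(0, 1000, 500)]
chunked = np.mean([precision_at_threshold(s, y) for s, y in chunks])

print(full, chunked)  # generally not equal: chunk boundaries change the aggregate
```

If the metric were instead accumulated globally (summing numerators and denominators across chunks before dividing), chunk size would not affect the final score.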

jakubczakon commented 6 years ago

Fixed in #170.