nasaharvest / crop-mask

End-to-end workflow for generating high resolution cropland maps
Apache License 2.0

Inference performance files #213

Closed Aniket-Parlikar closed 1 year ago

Aniket-Parlikar commented 1 year ago

Added new files for measuring inference performance

review-notebook-app[bot] commented 1 year ago

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.

ivanzvonkov commented 1 year ago

Regarding the suggested implementation in #201:

  1. It appears to me that some of the performance indicators are still missing. I've made a Google Sheet here to track which ones have been recorded: https://docs.google.com/spreadsheets/d/1_ZqWCInh8xBGglFrd4r_L2urMZG5f_U6zBdr3wy54Jk/edit?usp=sharing. Is this accurate?

  2. There is no general script which outputs a log/txt file with all performance indicators. Why not?

ivanzvonkov commented 1 year ago

@Aniket-Parlikar I see several comments are not yet addressed; please let me know when this is ready for a second look.

Aniket-Parlikar commented 1 year ago

Regarding this comment, please find the answers below.

https://github.com/nasaharvest/crop-mask/pull/213#issuecomment-1256507408

  1. It appears to me that some of the performance indicators are still missing. I've made a Google Sheet here to track which ones have been recorded: https://docs.google.com/spreadsheets/d/1_ZqWCInh8xBGglFrd4r_L2urMZG5f_U6zBdr3wy54Jk/edit?usp=sharing. Is this accurate?

Ans: I have already uploaded files that contain the information indicated in the missing fields. Malawi_2020_September.csv (I'll rename it for clarity) contains the performance parameters of a single model running inside a Docker container.

By contrast, multi_models_logs.csv contains the performance parameters of multiple models deployed in a Docker container.

Finally, cloudrun_logs.txt contains the performance parameters of multiple models deployed on the Google Cloud Run service (see the first sketch below for how such per-container logs could be produced).

  2. There is no general script which outputs a log/txt file with all performance indicators. Why not? Ans: The main reason is that we intend to obtain the performance parameters of models deployed in various environments, so the measurements need to run in each environment separately. In addition, some of these parameters vary from environment to environment, and such cases need different measurement approaches (see the second sketch below).
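
For illustration, here is a minimal sketch of the kind of loop that could produce a per-container log like Malawi_2020_September.csv: time one model call and append latency and memory rows to a CSV. The `predict` stub, column values, and file name are hypothetical, not the repository's actual code.

```python
# Hypothetical sketch: log per-inference latency and memory to a CSV.
# `predict` stands in for the real crop-mask model call.
import csv
import time

import psutil  # third-party; pip install psutil


def predict(batch):
    # Placeholder inference; replace with the deployed model.
    return [0.0 for _ in batch]


def log_inference_run(batch, log_path="single_model_logs.csv"):
    process = psutil.Process()  # the current (container) process
    start = time.perf_counter()
    predictions = predict(batch)
    latency_s = time.perf_counter() - start
    rss_mb = process.memory_info().rss / 1e6  # resident memory after the call
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([len(batch), f"{latency_s:.4f}", f"{rss_mb:.1f}"])
    return predictions


if __name__ == "__main__":
    log_inference_run(batch=list(range(256)))
```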
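
The second point can be illustrated the same way: the same indicator has to be collected differently in each environment, which is why one general script is awkward. The function names below are hypothetical; `K_REVISION` is one of the environment variables Cloud Run sets on its containers.

```python
# Hypothetical sketch: "latency" is measured differently per environment.
import os
import time


def measure_latency_local(fn, *args):
    # Inside a local Docker container we can time the call directly.
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start


def note_cloudrun_context():
    # On Cloud Run, request latency and cold-start time live in the
    # service's request logs (exported here as cloudrun_logs.txt); the
    # container itself can only record its serving revision.
    return os.environ.get("K_REVISION", "not-running-on-cloud-run")
```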