Open abhyantrika opened 6 years ago
Hi Abhyantrika, did you get any insight into this issue? I have the same query. Also, can you explain why this dimension is used in the prediction code: `masks_prediction = np.zeros((1200, 1600, len(file_names)))`? The images I am running prediction on have different shapes, so this line naturally throws a tensor dimension mismatch error: `masks_prediction[:,:,i] = merged_mask`. But when I resize an image to (1200, 1600), the masks no longer land on the objects in the image; in fact the image is completely distorted, so there is no point in computing any accuracy/precision. How can I overcome this? Do you suggest training with this specific dimension? During training I just kept it at 1024x1024. Kindly guide me here. Thanks a lot.
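One way around the shape mismatch, sketched below under assumptions (the `collect_masks` helper and the `images` dict are hypothetical, and the placeholder mask stands in for whatever produces `merged_mask` in the notebook): store each merged mask keyed by filename instead of packing everything into one fixed `(1200, 1600, N)` array, so images of different shapes never collide.

```python
import numpy as np

# Hypothetical sketch: instead of a single fixed-size array
#   masks_prediction = np.zeros((1200, 1600, len(file_names)))
# keep one mask per image, each at its own native shape.
def collect_masks(images):
    """images: dict mapping filename -> image array of shape (H, W[, C]).
    Returns dict mapping filename -> boolean merged mask of shape (H, W).
    The zeros mask is a placeholder for the real merged prediction."""
    masks = {}
    for name, img in images.items():
        merged_mask = np.zeros(img.shape[:2], dtype=bool)  # placeholder
        masks[name] = merged_mask
    return masks

images = {"a.png": np.zeros((600, 800)), "b.png": np.zeros((1200, 1600))}
masks = collect_masks(images)
print(masks["a.png"].shape)  # (600, 800)
print(masks["b.png"].shape)  # (1200, 1600)
```

With this layout, downstream per-image metrics index `masks[name]` directly, and no resizing to a common canvas is needed.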
I think you should follow the error metric provided in `compute_ap_range`. As for the second problem, I suggest keeping it at 1024x1024.
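For intuition, the idea behind `compute_ap_range` can be sketched as averaging AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05. The snippet below is a minimal illustration with one ground-truth box and one predicted box, not the matterport implementation (the `box_iou` and `ap_range` helpers are made up for this sketch):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (y1, x1, y2, x2)."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(y2 - y1, 0) * max(x2 - x1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def ap_range(gt_box, pred_box):
    """Mean AP over IoU thresholds 0.5, 0.55, ..., 0.95.
    With a single GT and a single prediction, AP at a threshold
    is simply 1.0 if the boxes match at that IoU, else 0.0."""
    thresholds = np.arange(0.5, 1.0, 0.05)
    iou = box_iou(gt_box, pred_box)
    return float(np.mean([1.0 if iou >= t else 0.0 for t in thresholds]))

print(ap_range((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0 (perfect overlap)
```

The real function also handles multiple instances, class IDs, and confidence scores, but the thresholding-and-averaging structure is the same.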
Thanks a lot. However, I went back to the matterport repo to calculate the mAP score. This was confusing me.
How is the accuracy metric used in your code (prediction.ipynb) different from the `compute_ap_range` function provided in utils.py?
I am getting a huge jump in accuracy by using your method. Please explain the discrepancy.