ChrisDAT20 opened 2 years ago
I believe this may have something to do with the load_weights function in model.py.
Try force-reinstalling h5py, restart your environment/machine, and see whether you get the same result after inference.
That was the issue causing this same problem for me.
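For reference, a force-reinstall of h5py with pip (assuming a pip-managed environment) looks like this:

```shell
# Reinstall h5py from scratch, ignoring any previously cached wheel.
pip install --force-reinstall --no-cache-dir h5py
```

After the reinstall, restart the Python kernel or machine before re-running inference, so the old h5py module is not still loaded in memory.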
`model.load_weights` seems to load the weights incorrectly due to version-compatibility issues, resulting in training from scratch. During training and evaluation, the COCO (or previously trained) weights were not loaded properly, so training started from random weights and, in evaluation, the loaded model predicted poorly on the sample data.
Because of this, the losses in the early epochs were far too high, the visual results looked random (nowhere near the ground truth), and the evaluation metrics (mAP, mAR, F1) were all 0. This can be solved in two ways:

1. Use `tf.keras.Model.load_weights` instead of `model.load_weights` - but this cannot be used here, since it does not support the `exclude` argument.
2. Downgrade TensorFlow from 2.7 to 2.5 - this worked for both training (from COCO with the `exclude` argument, and from previously trained weights) and evaluation.
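If you want to try the downgrade, a minimal pin (assuming a pip-based environment; the exact 2.5.x patch version may vary) is:

```shell
# Pin TensorFlow to the 2.5 line; pip resolves a compatible h5py alongside it.
pip install "tensorflow==2.5.*"
```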
This worked for me. Correct me if my understanding is wrong.
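One way to confirm whether `load_weights` is silently failing is to snapshot the weights before and after loading and check that something actually changed. A minimal sketch (the helper name `weights_changed` is mine; the commented usage assumes a Keras-style model object):

```python
import numpy as np

def weights_changed(before, after):
    """Return True if any weight array differs between the two snapshots."""
    return any(not np.array_equal(b, a) for b, a in zip(before, after))

# Hypothetical usage with a Keras model (requires TensorFlow):
#   before = [w.copy() for w in model.get_weights()]
#   model.load_weights(weights_path, by_name=True)
#   after = model.get_weights()
#   assert weights_changed(before, after), "load_weights was a silent no-op"
```

If the assertion fires, the model is still at its random initialization, which would explain the huge early losses and zero mAP/mAR/F1 described above.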
Hi! Did you solve this problem?
> Hi! Did you solve this problem?
Yess
I cloned the TF 2 branch and adapted the configuration because my GPU has only 2 GB of memory.
This is the test image in inference mode:
And this is the result:
Is there something i missed?