a-lasri opened 1 year ago
Hello, I've noticed a discrepancy between the results I see during training and the predictions made when loading the trained weights. When I train the model, the visuals look as expected. However, upon loading the model with the saved weights for further predictions, the results don't align with what I observed during training. I'm using the exact same weights from the same epoch for both the visuals during training and the predictions.
Any assistance or insights would be much appreciated. Thank you!
Do you have any insights on this? I'm facing a similar issue as well.
I'm not entirely certain, but I believe the issue arises when the masks aren't binary. It's worth double-checking that the masks are strictly 0/1 and stored as float32. An important point is that training appears to run even if the provided masks aren't binary; they may be getting binarized somewhere during training, but I'm not sure about that, and the same may apply at prediction time.
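For example, a minimal sketch of the kind of check I mean (the helper name and threshold are just illustrative, not part of the repo):

```python
import numpy as np

def binarize_mask(mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Force a mask into strict 0/1 float32.

    The threshold is an assumption; any reasonable cut-off works as long as
    the same rule is applied to both training and prediction data.
    """
    mask = mask.astype(np.float32)
    # Handle masks saved as 0-255 images as well as 0-1 floats.
    if mask.max() > 1.0:
        mask = mask / 255.0
    return (mask > threshold).astype(np.float32)
```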
Another factor that might have an impact is setting training_model.image_to_discriminator='inpainted' in train.py.
The goal is to have consistent binary masks and to ensure the same data format (dtype and value range) between training and prediction. I added print statements to understand the pipeline and noticed that the visualisation during training doesn't go through predict.py and processes the data differently. It's essential to harmonize the preprocessing applied to the batch fed to model(batch) in predict.py with that of generator(batch) during training.
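As an illustration, here is a minimal sketch of the kind of shared preprocessing I mean; the 'image'/'mask' batch keys follow the repo's batch layout, but the helper itself is mine, not existing repo code:

```python
import torch

def prepare_batch(batch: dict) -> dict:
    """Bring images and masks to the same dtype/value range, so training and
    prediction see identical data. Illustrative helper, not part of the repo."""
    image = batch['image'].float()
    mask = batch['mask'].float()

    # Images are expected in [0, 1]; rescale if they arrive as 0-255.
    if image.max() > 1.0:
        image = image / 255.0

    # Masks must be strictly binary.
    mask = (mask > 0.5).float()

    batch['image'] = image
    batch['mask'] = mask
    return batch
```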
Thanks @a-lasri for these insights. I will check and try to make the training/testing data processing uniform. Did you manage to match the test results seen during training with the prediction results?
Things are working better now: I'm getting consistent results with both train.py and test.py. However, there are still some visible traces of the masks in the final output, both during training and testing. To inspect the data, I recommend adding checks in saicinpainting/training/trainers/default.py: look at the shape and dtype, use torch.unique on the mask, and max/min on the images/masked_image. For predictions, the data goes through the dataset.py file under saicinpainting/training/data/.
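For example, the sort of debug prints I mean (the placement inside default.py is up to you, and the batch keys are assumptions based on the trainer code):

```python
import torch

def debug_batch(batch: dict) -> None:
    # Quick sanity checks on what the trainer actually receives.
    img = batch['image']
    mask = batch['mask']
    print('image :', img.shape, img.dtype, 'min/max =', img.min().item(), img.max().item())
    print('mask  :', mask.shape, mask.dtype, 'unique =', torch.unique(mask))
    if 'masked_image' in batch:
        mi = batch['masked_image']
        print('masked:', mi.shape, mi.dtype, 'min/max =', mi.min().item(), mi.max().item())
```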