Closed: luis-gnz11 closed this issue 2 years ago
Hi, I haven't looked at your code (have you pushed it as a branch / pull request?), but typically when this happens during inference it means you haven't called model.eval()
to switch to evaluation mode before calling the model. You have to do this to get deterministic output, e.g. to ensure that dropout is disabled and that batch normalization uses its running statistics.
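A minimal sketch of the effect described above, using a generic module with dropout as a stand-in for the forecaster (the layer sizes and dropout rate are arbitrary, not taken from the repo):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the forecaster: any module containing
# a Dropout (or BatchNorm) layer exhibits the same behavior.
model = nn.Sequential(
    nn.Linear(8, 32),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(32, 1),
)

x = torch.randn(4, 8)

# In train mode (the default), dropout randomly zeroes activations,
# so two forward passes on the same input generally differ.
model.train()
out1, out2 = model(x), model(x)
print(torch.equal(out1, out2))

# After switching to eval mode, dropout becomes a no-op (and batch-norm
# layers would use their running statistics), so the output is deterministic.
model.eval()
with torch.no_grad():
    out3, out4 = model(x), model(x)
print(torch.equal(out3, out4))  # True
```

Wrapping the eval-mode passes in `torch.no_grad()` is not required for determinism, but it is the usual companion for inference since it skips building the autograd graph.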
Exactly, the non-deterministic output was probably because dropout/batch normalization were still active. A call to forecaster.eval() before inference solved the problem. I thought that, when using PyTorch Lightning, all calls to .train() and .eval() were made implicitly. Thanks George.
I've run one epoch of training on the toy2 dataset and modified the train.py code to call forecaster.predict twice on the first test data sample:
But I'm getting two different prediction results for the same input (same xc, yc and yt):
Why two different predictions? What am I missing?