Closed: xdshang closed this issue 3 years ago
In our experiments, nine epochs are typically enough to reach reasonable prediction performance. That said, the actual performance also depends on the initialization and on whether you are using edge supervision or attribute information. Can you train the model for longer to see whether things improve? You could also share some of your future-prediction results as videos; we may be able to give more specific advice after looking at them.
https://github.com/chuangg/CLEVRER/blob/1a213058087da1ff0607adf077f18ee6b7553f32/temporal_reasoning/train.py#L60
In this code, the default number of epochs for training the neural dynamics predictor is 1000. Is this number required to reproduce the results of Table 3 in the paper? I managed to train the predictor using the following command, where 9 training epochs are suggested in the supplementary. However, I couldn't get results similar to those in Table 3.
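For reference, the mismatch above is just an argparse default versus a command-line override. The sketch below is illustrative only: the flag name `--epochs` and its default are hypothetical stand-ins, not taken from the repo's actual train.py, which should be checked for the real argument name.

```python
import argparse

# Hypothetical sketch: '--epochs' and its default of 1000 are
# illustrative; check the repo's train.py for the real flag name.
parser = argparse.ArgumentParser(description="dynamics predictor training")
parser.add_argument('--epochs', type=int, default=1000,
                    help='number of training epochs')

# With no flag, the script falls back to the default of 1000 epochs;
# passing the flag explicitly overrides it (e.g. the 9 epochs
# suggested in the supplementary).
default_args = parser.parse_args([])
short_args = parser.parse_args(['--epochs', '9'])
print(default_args.epochs, short_args.epochs)
```

So a large default in the script does not by itself mean 1000 epochs were used for the paper's numbers; the command line used for training determines the actual count.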