Hi Sungyub Kim,
Thanks for the great implementation.
I have a question about your code here: https://github.com/sungyubkim/GBML/blob/1577e172dc5852267ad0b94cdb9c175a5ca7018e/main.py#L69-L71
Here, you first train on all the batches in the meta-train set, and only then run validation and testing. The original algorithm, however, appears to record the test accuracy after training on every meta-batch of tasks in the meta-train set. Have you observed a difference between the two? I understand this is an implementation choice, and the testing could have been done alongside training, but for consistency with the original implementation and with how others report their accuracy, would it make sense to record the test accuracy immediately after training on each meta-batch?
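To make the distinction concrete, here is a minimal sketch of the two evaluation schedules I mean. The `train_on_batch` and `evaluate` names are hypothetical placeholders, not your repository's actual API:

```python
# Two evaluation schedules for meta-learning (hypothetical helper names).

def epoch_end_eval(train_batches, train_on_batch, evaluate):
    """Train on every meta-batch first, then evaluate once at the end
    (the schedule in the linked code)."""
    for batch in train_batches:
        train_on_batch(batch)
    return [evaluate()]  # one test-accuracy reading per epoch

def per_batch_eval(train_batches, train_on_batch, evaluate):
    """Record test accuracy immediately after each meta-batch
    (the schedule the original algorithm seems to use)."""
    accs = []
    for batch in train_batches:
        train_on_batch(batch)
        accs.append(evaluate())  # one reading per meta-batch
    return accs
```

The per-batch schedule yields a learning curve over meta-batches, while the epoch-end schedule yields a single point, which is why the reported numbers can differ even when the final model is the same.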
Thanks