Closed: WMX567 closed this issue 7 months ago
Hi,
I guess you are actually asking why the index does not go out of range? If that is the case, the answer is that the batch_size used here equals the size of the train_set. In other words, we are selecting the retraining data from the complete training set, so no index ever goes out of range.
https://github.com/QueuQ/CGLB/blob/f6628290b34c958ceff347c1b52c5138f3f1ef23/GCGL/pipeline.py#L339
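To make the answer above concrete, here is a minimal sketch (with made-up sizes, not taken from the repo) of why the indexing is safe: the per-sample loss is computed over the whole train_set, so its first dimension equals the training-set size, and any memory index drawn from that set is necessarily in range.

```python
import torch

# Hypothetical sizes for illustration only (not from CGLB).
train_set_size = 100
batch_size = train_set_size   # the "batch" here is the full train_set

# Per-sample loss over the whole training set: shape (batch_size, 1)
loss = torch.randn(batch_size, 1)

# Replay indices stored for an old task. Each index was originally
# drawn from range(train_set_size), so it is always < batch_size.
memory_indices = torch.randint(0, train_set_size, (10,))

# Indexing cannot go out of range because batch_size == train_set_size.
replay_loss = loss[memory_indices]
print(replay_loss.shape)  # torch.Size([10, 1])
```

If the batch were instead a strict subset of the train_set, the same indexing could fail, which is exactly the concern raised in the question.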
However, the loss has shape (batch_size, 1). Why does indexing it with self.memory_data[old_task_i] not go out of range?