Apply a caching decorator to the RNN's forward pass, clearing the cache every time we train. Even at 64 iterations there should be a lot of repetition in the early states. Note that this would interact subtly with #22: you'd have to make sure the stateful state was either preserved, or that the object was built to restart forward passes from the beginning when its state was None (or something similar).
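A minimal sketch of the idea, assuming the real model can be keyed by a hashable token prefix. `TinyRNN`, its scalar state, and `train_step` are all hypothetical stand-ins for illustration; the point is the per-instance `lru_cache` over prefixes and the `cache_clear()` call after each parameter update:

```python
from functools import lru_cache

class TinyRNN:
    """Toy stand-in for the real RNN: state is a float, tokens are ints."""

    def __init__(self):
        self.w = 0.5  # dummy weight, updated by "training"
        # Wrap the forward pass in a per-instance cache; args must be hashable,
        # so token sequences are passed as tuples.
        self.forward = lru_cache(maxsize=None)(self._forward)

    def _forward(self, tokens):
        """Fold tokens into a state; recursing on the prefix means any
        sequence sharing an early prefix hits the cache instead of recomputing."""
        if not tokens:
            return 0.0
        return self.forward(tokens[:-1]) * self.w + tokens[-1]

    def train_step(self):
        """Update parameters, then clear the cache so stale states aren't reused."""
        self.w += 0.01
        self.forward.cache_clear()

rnn = TinyRNN()
a = rnn.forward((1, 2, 3))
b = rnn.forward((1, 2, 3, 4))  # reuses the cached (1, 2, 3) prefix state
rnn.train_step()               # weights changed, so cached states are invalid
c = rnn.forward((1, 2, 3))     # recomputed under the new weights
```

Building the cache inside `__init__` (rather than decorating the method at class level) keeps each instance's cache independent, which matters for the #22 interaction: a model that carries stateful hidden state between calls would either need that state folded into the cache key, or need to recompute from the empty prefix whenever its state is reset.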