According to the code (and assuming that `STEPS = 1`), I don't understand how the outputs change after the adaptation:
```python
import torch


def forward(self, x):
    if self.episodic:
        self.reset()

    for _ in range(self.steps):
        outputs = forward_and_adapt(x, self.model, self.optimizer)

    return outputs


@torch.enable_grad()  # ensure grads in possible no grad context for testing
def forward_and_adapt(x, model, optimizer):
    """Forward and adapt model on batch of data.

    Measure entropy of the model prediction, take gradients, and update params.
    """
    # forward
    outputs = model(x)
    # adapt
    loss = softmax_entropy(outputs).mean(0)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs
```
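For reference, `softmax_entropy` isn't shown in the snippet; if I'm reading the repo correctly, it is just the per-sample Shannon entropy of the softmax over the logits, roughly:

```python
@torch.jit.script
def softmax_entropy(x: torch.Tensor) -> torch.Tensor:
    """Entropy of the softmax distribution, computed from logits (one value per sample)."""
    return -(x.softmax(1) * x.log_softmax(1)).sum(1)
```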
Judging by the code, `forward_and_adapt` returns the `outputs` computed *before* `optimizer.step()` runs, yet the outputs do change somehow. How?
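To be concrete about what I'm observing, here is the kind of check I mean (`tented_model` and `x` are placeholders, and I'm assuming `episodic=False` so updates persist between calls):

```python
# Hypothetical check: call the adapted module twice on the same batch.
out1 = tented_model(x)  # forward + one entropy-minimization step (steps=1)
out2 = tented_model(x)  # runs with the already-updated parameters, so it differs from out1
```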