Hello!

In `train_step` of dacs.py, the model parameters are updated with an explicit `optimizer.step()` call. Is this because the separate backward passes in `forward_train` (`clean_loss.backward(retain_graph=self.enable_fdist)`, `feat_loss.backward()`, and `mix_loss.backward()`) cannot use the single `runner.outputs['loss'].backward()` that `after_train_iter` in mmcv's mmcv/runner/hooks/optimizer.py performs for backpropagation? (https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py)

For reference, mmcv's `after_train_iter` is:
```python
def after_train_iter(self, runner):
    runner.optimizer.zero_grad()
    if self.detect_anomalous_params:
        self.detect_anomalous_parameters(runner.outputs['loss'], runner)
    runner.outputs['loss'].backward()
    if self.grad_clip is not None:
        grad_norm = self.clip_grads(runner.model.parameters())
        if grad_norm is not None:
            # Add grad norm to the logger
            runner.log_buffer.update({'grad_norm': float(grad_norm)},
                                     runner.outputs['num_samples'])
    runner.optimizer.step()
```
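For context, here is a minimal sketch of the pattern I am asking about: several losses backpropagated separately so their gradients accumulate in `param.grad`, followed by a single optimizer step. The model, data, and loss expressions are toy stand-ins, not the actual dacs.py code:

```python
import torch

# Toy stand-ins for the DACS setup (hypothetical, for illustration only).
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x_src, x_mix = torch.randn(8, 4), torch.randn(8, 4)

out_src = model(x_src)
clean_loss = out_src.pow(2).mean()     # stand-in for the supervised source loss
feat_loss = out_src.abs().mean()       # stand-in for the feature-distance loss
mix_loss = model(x_mix).pow(2).mean()  # stand-in for the mixed-sample loss

optimizer.zero_grad()
# backward() accumulates into param.grad, so three separate calls produce
# the same gradients as (clean_loss + feat_loss + mix_loss).backward().
clean_loss.backward(retain_graph=True)  # keep the graph alive for feat_loss
feat_loss.backward()                    # shares the source-forward graph
mix_loss.backward()                     # independent graph, no retain needed
optimizer.step()                        # one parameter update at the end
```

If I understand correctly, because `backward()` accumulates gradients, the three separate calls together produce the same gradients as a single backward on the summed loss, which the hook's single `runner.outputs['loss'].backward()` could not reproduce here.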
Thanks!