xuhao-anhe opened this issue 2 years ago
Excuse me, can anyone help solve this problem?
Sorry for the late reply.
It seems to be caused by the student and teacher ROI bbox heads producing outputs of different sizes, not by a bug. The CWD loss requires two inputs of the same size, so you could try adjusting the position where the CWD loss is added; a sketch of what that looks like in the config follows.
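For anyone hitting the same problem: in the mmrazor 0.x API shown in the traceback below (GeneralDistill + SingleTeacherDistiller), the position where the CWD loss is computed is controlled by the components list of the distiller config. Below is a minimal sketch, not a tested config: `student` and `teacher` are assumed to be detector configs defined elsewhere in the file, and the module paths are placeholders that you would replace with layers whose student and teacher outputs have identical shapes (for example FPN levels), instead of RoI-head outputs whose number of sampled proposals can differ between the two models.

# Sketch only: attach ChannelWiseDivergence to a fixed-shape feature map
# (an FPN level) rather than the RoI bbox head.
algorithm = dict(
    type='GeneralDistill',
    architecture=dict(type='MMDetArchitecture', model=student),  # student cfg (assumed defined above)
    distiller=dict(
        type='SingleTeacherDistiller',
        teacher=teacher,             # teacher cfg (assumed defined above)
        teacher_trainable=False,
        components=[
            dict(
                # Placeholder module paths: both must resolve to tensors of
                # the same (N, C, H, W) shape in student and teacher.
                student_module='neck.fpn_convs.3.conv',
                teacher_module='neck.fpn_convs.3.conv',
                losses=[
                    dict(
                        type='ChannelWiseDivergence',
                        name='loss_cwd_fpn3',
                        tau=1,
                        loss_weight=5,
                    )
                ])
        ]))

Whether FPN-level distillation helps PointRend specifically would need to be verified empirically; the sketch only shows where the hook point lives in the config.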
Thank you for your reply; I will check it.
Describe the bug
A clear and concise description of what the bug is.
I am trying to use CWD to distill PointRend. It raises the following error:
Traceback (most recent call last):
  File "tools/mmdet/train_mmdet.py", line 210, in <module>
    main()
  File "tools/mmdet/train_mmdet.py", line 206, in main
    meta=meta)
  File "/media/jidong/code/xuhao/mmrazor-master/mmrazor/apis/mmdet/train.py", line 206, in train_mmdet_model
    runner.run(data_loader, cfg.workflow)
  File "/home/jidong/anaconda3/envs/mmrazor/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 130, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/jidong/anaconda3/envs/mmrazor/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/home/jidong/anaconda3/envs/mmrazor/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
    **kwargs)
  File "/home/jidong/anaconda3/envs/mmrazor/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 75, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/media/jidong/code/xuhao/mmrazor-master/mmrazor/models/algorithms/general_distill.py", line 49, in train_step
    distill_losses = self.distiller.compute_distill_loss(data)
  File "/media/jidong/code/xuhao/mmrazor-master/mmrazor/models/distillers/single_teacher.py", line 240, in compute_distill_loss
    losses[loss_name] = loss_module(s_out, t_out)
  File "/home/jidong/anaconda3/envs/mmrazor/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/media/jidong/code/xuhao/mmrazor-master/mmrazor/models/losses/cwd.py", line 50, in forward
    logsoftmax(preds_S.view(-1, W * H) / self.tau)) * (
RuntimeError: The size of tensor a (220) must match the size of tensor b (12) at non-singleton dimension 0
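For context, cwd.py line 50 is the channel-wise KL term of the CWD loss: each channel map is flattened to length W * H and the student and teacher tensors are combined element-wise, so both must share the same (N, C, H, W) shape. A simplified, self-contained sketch of that computation (not the exact mmrazor implementation) to illustrate the shape requirement:

import torch
import torch.nn.functional as F

def channel_wise_divergence(preds_S, preds_T, tau=1.0):
    """Simplified channel-wise distillation loss.

    Both inputs must have the same (N, C, H, W) shape: each channel map is
    flattened to W * H and the two tensors are combined element-wise. With
    RoI features, N differs whenever student and teacher keep different
    numbers of proposals, which is what triggers the size-mismatch error.
    """
    assert preds_S.shape == preds_T.shape, (
        f'CWD needs matching shapes, got {preds_S.shape} vs {preds_T.shape}')
    N, C, H, W = preds_S.shape
    softmax_T = F.softmax(preds_T.view(-1, W * H) / tau, dim=1)
    loss = torch.sum(
        softmax_T * F.log_softmax(preds_T.view(-1, W * H) / tau, dim=1) -
        softmax_T * F.log_softmax(preds_S.view(-1, W * H) / tau, dim=1)) * (tau**2)
    return loss / (C * N)

The mismatch in the error message (220 vs. 12 at dimension 0) most likely corresponds to this flattened N * C axis: the student and teacher RoI heads produced different numbers of box features for the same batch.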
To Reproduce
The command you executed.
Post related information
pip list | grep "mmcv\|mmrazor\|^torch"
Other code you modified in the mmrazor folder.
Additional context
Add any other context about the problem here.