Thanks for your great codebase. I'm a little confused by some of the code in train_da.py:

```python
with torch.cuda.amp.autocast():
    output_dict = model(total_batch_data)  # outputs are float16 under autocast
da_loss = DA_module(output_dict)
```

When autocast is used in domain adaptation, the model's outputs are float16, but they are then passed to DA_module, whose parameters are float32, which results in a dtype mismatch. My suggested fix is to cast the outputs back to float32 after leaving the autocast region; alternatively, DA_module() could be called directly inside the autocast region.
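To make the two fixes concrete, here is a minimal, self-contained sketch. `model` and `DA_module` are stand-ins for the repo's modules (the real ones live in train_da.py), and CPU autocast with bfloat16 is used purely for illustration; `torch.cuda.amp.autocast()` lowers precision to float16 on GPU in the same way.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the repo's model and DA_module.
model = nn.Linear(8, 8)
DA_module = nn.Linear(8, 1)

x = torch.randn(4, 8)

# Inside autocast, eligible ops (e.g. nn.Linear) run in low precision.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
assert out.dtype == torch.bfloat16  # low-precision output escapes the region

# Fix 1: cast back to float32 before the float32 DA_module.
da_loss = DA_module(out.float())
assert da_loss.dtype == torch.float32

# Fix 2: call DA_module inside the autocast region, so autocast's
# precision policy handles the input/weight dtypes consistently.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
    da_loss = DA_module(out)  # no dtype mismatch inside the region
```

Fix 1 keeps the DA loss in full precision, which can be slightly more numerically stable; fix 2 keeps everything under one precision policy and avoids the explicit cast.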