Lg955 opened this issue 3 years ago
Yes, I tried to do the same. I found here that you can change AT_DISPATCH_FLOATING_TYPES to AT_DISPATCH_FLOATING_TYPES_AND_HALF, but I am getting a different error now:
File "ms_deform_attn_func.py", line 26, in forward
value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, ctx.im2col_step)
RuntimeError: expected scalar type Half but found Float
I'm having the same problem. Has anyone fixed this to support PyTorch mixed-precision training?
@noahcao Sorry, I didn't fix it.
@gautamsreekumar I hit the same error; I've given up on it …>_<…
An easy way is to disable mixed precision for the custom operations; see @custom_fwd and @custom_bwd in https://pytorch.org/docs/stable/notes/amp_examples.html.
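A minimal, runnable sketch of that pattern (it needs a CUDA device; `MyFloat32Op` is a toy stand-in for the deformable-attention op, not this repo's actual code). With `cast_inputs=torch.float32`, autocast hands the op float32 tensors and disables autocast inside it, so the Half/Float dtype mismatch cannot occur:

```python
import torch
from torch.cuda.amp import autocast, custom_bwd, custom_fwd

class MyFloat32Op(torch.autograd.Function):
    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)  # inputs arrive as float32 even under autocast
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return x @ weight

    @staticmethod
    @custom_bwd  # backward runs with the same autocast state as forward
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        return grad_out @ weight.t(), x.t() @ grad_out

x = torch.randn(4, 8, device="cuda", requires_grad=True)
w = torch.randn(8, 8, device="cuda", requires_grad=True)
with autocast():
    y = MyFloat32Op.apply(x, w)  # executes in float32 despite autocast
    print(y.dtype)               # torch.float32
y.sum().backward()
```

In this repo the same two decorators would presumably go on the forward/backward of the autograd Function in ms_deform_attn_func.py (the file in the traceback above), so the CUDA kernel never sees Half tensors.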
Has anyone been able to solve this hard problem? 2023-04-12, help!!!!
I want to use mixed precision (from torch.cuda.amp import autocast, GradScaler) when training the model, but I get the error above. I want to modify it, but it relates to the CUDA code. Has anyone encountered the same problem?
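For reference, a sketch of the standard autocast/GradScaler training loop from the PyTorch AMP notes; `model`, `criterion`, `optimizer`, and `loader` are hypothetical placeholders, not from this repo. Combined with the @custom_fwd(cast_inputs=torch.float32) workaround above, the custom CUDA op keeps running in float32 while the rest of the network trains in mixed precision:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()  # scales the loss to avoid fp16 gradient underflow
for inputs, targets in loader:
    optimizer.zero_grad()
    with autocast():
        outputs = model(inputs)             # most ops run in float16
        loss = criterion(outputs, targets)  # decorated custom ops stay float32
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales grads, then optimizer.step()
    scaler.update()                # adjusts the scale factor for next step
```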