Open hutchinsonian opened 1 year ago
I hit a similar issue and worked around it by following https://discuss.pytorch.org/t/runtimeerror-input-type-torch-cuda-floattensor-and-weight-type-torch-halftensor-should-be-the-same/104312/5, i.e. wrapping the forward pass in autocast:

```python
from torch.cuda.amp import autocast

with autocast():
    outputs = model.forward(tensor)
```
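For context, here is a minimal self-contained sketch of that workaround (the model and input names are hypothetical, not from the repo). Under `autocast`, eligible ops run in float16, so a float32 input is cast to match half-precision weights instead of raising the dtype-mismatch `RuntimeError`:

```python
# Sketch of the autocast workaround; `model` and `tensor` are placeholder names.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 2).to(device)

if device == "cuda":
    model.half()  # half-precision weights, as in the reported error

tensor = torch.randn(1, 4, device=device)  # float32 input

if device == "cuda":
    from torch.cuda.amp import autocast
    # autocast casts op inputs to float16, matching the half weights
    with autocast():
        outputs = model(tensor)
else:
    # CPU fallback for illustration: no half weights, runs as-is
    outputs = model(tensor)
```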
may I ask what is your GPU memory usage? Is it still 22G when setting batch_size=1?
Thanks for this work!
When I train the model with

```shell
python [EXP_PATH] --amp_backend native -b 8 --gpus 8
```

where `[EXP_PATH]` is `bevdepth/exps/nuscenes/mv/bev_depth_lss_r50_256x704_128x128_20e_cbgs_2key_da_ema.py`, I got the following error:

So I tried adding the code:

The above error disappeared, but the memory usage is very high. Even when I set `batch_size=1`, 22 GB of memory is still used. Is this correct?
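One thing that may help narrow this down: part of the reported usage can be cache held by PyTorch's allocator rather than live tensors. A small sketch (using only standard `torch.cuda` calls) to compare the two:

```python
# Report GPU memory actually allocated by tensors vs. reserved (cached)
# by PyTorch's CUDA allocator; the gap between them is reusable cache.
import torch

def gpu_memory_mib():
    """Return allocated/reserved GPU memory in MiB, or None without CUDA."""
    if not torch.cuda.is_available():
        return None
    return {
        "allocated": torch.cuda.memory_allocated() / 2**20,
        "reserved": torch.cuda.memory_reserved() / 2**20,
    }

stats = gpu_memory_mib()
print(stats)
```

Comparing these numbers right after a training step (and after `torch.cuda.empty_cache()`) should show whether the 22 GB is truly required at `batch_size=1` or partly allocator cache.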