sangyun884 / HR-VITON

Official PyTorch implementation for the paper High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions (ECCV 2022).

Error when training with fp16 #75

Open guijuzhejiang opened 1 year ago

guijuzhejiang commented 1 year ago

When training with the parameter fp16=True, I get the error below. fp16 training is a commonly used way to reduce GPU memory overhead. How should I modify the code?

```
warnings.warn(warning.format(ret))
  0%|          | 0/300000 [00:06<?, ?it/s]
Traceback (most recent call last):
  File "/home/zzg/workspace/pycharm/HR-VITON/train_condition.py", line 497, in <module>
    main()
  File "workspace/pycharm/HR-VITON/train_condition.py", line 488, in main
    train(opt, train_loader, val_loader, test_loader, board, tocg, D)
  File "workspace/pycharm/HR-VITON/train_condition.py", line 280, in train
    loss_G.backward()
  File "miniconda3/envs/py310_DL_cu118/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "miniconda3/envs/py310_DL_cu118/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Found dtype Half but expected Float

Process finished with exit code 1
```
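
`Found dtype Half but expected Float` during `loss_G.backward()` typically means some tensors in the graph were cast to fp16 while others (e.g. loss terms or discriminator outputs) remained fp32, so the backward pass sees mixed dtypes. A common workaround, not the repo's official fix, is to keep the model in fp32 and use PyTorch's automatic mixed precision (`torch.cuda.amp`) instead of manually half-casting via `fp16=True`. The sketch below uses placeholder names (`model`, `criterion`, `optimizer`), not the actual `train()` in `train_condition.py`:

```python
import torch
import torch.nn as nn

# Minimal AMP sketch (an assumption about the fix, not HR-VITON's code):
# run the forward pass under autocast and scale the loss, rather than
# casting the model and inputs to .half().
device = "cuda"
model = nn.Linear(16, 1).to(device)        # stand-in for the generator (tocg)
criterion = nn.MSELoss()                   # stand-in for the combined G loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()       # scales the loss so fp16 grads don't underflow

for _ in range(3):
    x = torch.randn(8, 16, device=device)
    y = torch.randn(8, 1, device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():        # ops run in fp16 where safe, fp32 elsewhere
        loss = criterion(model(x), y)
    scaler.scale(loss).backward()          # backward on the scaled loss, dtypes stay consistent
    scaler.step(optimizer)                 # unscales gradients, then runs the optimizer step
    scaler.update()
```

With this pattern the loss and gradients stay in consistent dtypes, which avoids the `Half`/`Float` mismatch while still saving memory from the fp16 forward pass.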