Closed: JianyuanYin closed this issue 2 years ago
Explanation: in `convnext.py`, line 140, `s = (x - u).pow(2).mean(1, keepdim=True)` — the result of `pow(2)` can exceed the maximum finite value of fp16 (65504), so the forward pass overflows after converting the model with `.half()`.

Example:

```python
model_32 = torch.load("epoch_3.pth").module.cuda().eval()
model_16 = torch.load("epoch_3.pth").module.cuda().eval().half()
x_32 = torch.rand((2, 3, 256, 256)).cuda()
x_16 = x_32.half()
model_32(x_32)
model_16(x_16)
```
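The overflow can be reproduced in a few lines outside the model. This is a minimal sketch, not the repository's code: centered activations around ±300 square to 90000, beyond fp16's finite range.

```python
import torch

# Minimal sketch (not the ConvNeXt code itself): values around +/-300
# square to 90000, which exceeds fp16's maximum finite value (65504).
x = torch.tensor([[-300.0, 300.0]], dtype=torch.float16)
u = x.mean(1, keepdim=True)                  # mean is exactly 0 here

s16 = (x - u).pow(2).mean(1, keepdim=True)   # overflows to inf in fp16
print(s16)                                   # tensor([[inf]], dtype=torch.float16)

# The same reduction done in fp32 stays finite.
s32 = (x.float() - u.float()).pow(2).mean(1, keepdim=True)
print(s32)                                   # tensor([[90000.]])
```

This is why the fp16 model's outputs diverge while the fp32 model's do not: the variance term saturates to `inf` and poisons the normalization.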
Hi,
We did not try directly converting to half, so we cannot guarantee it will work. But our model can be evaluated with mixed precision; you can try setting `--use_amp True`.
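For reference, a minimal sketch of mixed-precision evaluation with `torch.autocast`, the mechanism behind AMP-style flags such as `--use_amp`. The model and input below are placeholders, not the checkpoint from the issue; autocast keeps the weights in fp32 and downcasts only the ops that are safe in lower precision.

```python
import torch

# Placeholder model and input (not the ConvNeXt checkpoint from the issue).
model = torch.nn.Linear(8, 2).eval()
x = torch.randn(4, 8)

# Under autocast, eligible ops (e.g. matmul) run in the low-precision dtype
# while the master weights stay fp32, avoiding the .half() overflow problem.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.shape)
```

On CUDA the same pattern applies with `device_type="cuda"` and `dtype=torch.float16`.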
Example transcript (fp32 vs fp16 outputs differ):

```python
>>> model_32 = torch.load("epoch_3.pth").module.cuda().eval()
>>> model_16 = torch.load("epoch_3.pth").module.cuda().eval().half()
>>> x_32 = torch.rand((2, 3, 256, 256)).cuda()
>>> x_16 = x_32.half()
>>> model_32(x_32)
tensor([0.2063, 0.2060], device='cuda:0', grad_fn=<...>)
>>> model_16(x_16)
tensor([0.5122, 0.5122], device='cuda:0', dtype=torch.float16, grad_fn=<...>)
```
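If full-half evaluation is still wanted, one common remedy is to upcast just the normalization to fp32. This is a hypothetical sketch of a channels-first LayerNorm, not a patch to the repository; the class name and layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SafeLayerNorm(nn.Module):
    """Hypothetical channels-first LayerNorm (input shape N, C, H, W) that
    upcasts to fp32 so (x - u).pow(2) cannot overflow fp16."""

    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        self.eps = eps

    def forward(self, x):
        orig_dtype = x.dtype
        x = x.float()                                # upcast for the reduction
        u = x.mean(1, keepdim=True)
        s = (x - u).pow(2).mean(1, keepdim=True)     # safe in fp32
        x = (x - u) / torch.sqrt(s + self.eps)
        x = self.weight[:, None, None] * x + self.bias[:, None, None]
        return x.to(orig_dtype)                      # cast back (e.g. fp16)
```

With fp16 inputs whose deviations are around ±300, the direct fp16 variance would saturate to `inf`, while this version stays finite and returns fp16 output.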