leongatys / PytorchNeuralStyleTransfer

Implementation of Neural Style Transfer in Pytorch

Runtime error when moving vgg to FP16 #3

Closed michaelhuang74 closed 6 years ago

michaelhuang74 commented 6 years ago

This is indeed a PyTorch issue, not an issue with the original code by Leon.

I am trying to speed up neural style transfer on an Nvidia Tesla V100 by using FP16. I modified the code to move the vgg network to cuda().half(). In addition, all three images (the style image, the content image, and opt_img) are in FP16. I tried to keep the loss layers in FP32 because FP16 can easily produce NaN and infinity values. The code is at https://gist.github.com/michaelhuang74/009e149a2002b84696731fb599408c90
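For reference, the change described above amounts to roughly the following sketch (reconstructed from the description and the structure of the original script; the exact code, including the VGG class and image preprocessing, is in the linked gist):

```python
import torch
from torch.autograd import Variable

# Network weights in FP16. VGG and the vgg_conv.pth weights come from the
# original PytorchNeuralStyleTransfer script.
vgg = VGG()
vgg.load_state_dict(torch.load('vgg_conv.pth'))
for param in vgg.parameters():
    param.requires_grad = False
vgg = vgg.cuda().half()

# All three images in FP16 as well (style_image and content_image are the
# preprocessed tensors from the original script).
style_image = style_image.cuda().half()
content_image = content_image.cuda().half()
opt_img = Variable(content_image.data.clone(), requires_grad=True)

# The GramMatrix / loss modules are intended to stay in FP32.
```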

When I ran the code, I encountered the following error.

```
Traceback (most recent call last):
  File "neural-style-Gatys-half.py", line 167, in <module>
    style_targets = [GramMatrix()(A).detach().cuda() for A in vgg(style_image, style_layers)]
  File "/home/mqhuang2/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 319, in __call__
    result = self.forward(*input, **kwargs)
  File "neural-style-Gatys-half.py", line 86, in forward
    G.div(h*w)
RuntimeError: value cannot be converted to type Half without overflow: 960000
```

It seems that although I tried to keep the GramMatrix and loss functions in FP32, PyTorch somehow converts the values to FP16 inside the GramMatrix forward() method.

Any idea how to resolve this error?

michaelhuang74 commented 6 years ago

The issue has been resolved. See the updated code at the link in the previous post.
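For readers hitting the same error: the actual fix is in the gist linked above. As an illustration only (this is an assumption, not necessarily what the gist does), one common workaround is to upcast the FP16 feature maps to FP32 inside GramMatrix, since the divisor h*w (960000 in the traceback) cannot be represented as a Half scalar (FP16 max is about 65504):

```python
import torch
import torch.nn as nn

class GramMatrix(nn.Module):
    def forward(self, input):
        b, c, h, w = input.size()
        # Upcast the FP16 features to FP32 so the Gram product and the
        # division by h*w are computed in full precision, avoiding the
        # Half overflow on the scalar divisor.
        F = input.view(b, c, h * w).float()
        G = torch.bmm(F, F.transpose(1, 2))
        G.div_(h * w)
        return G
```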