Open MichaelMonashev opened 1 year ago
Hi, yesterday I faced this issue and solved it with a hack. Basically, you should go into unet.py (of the kandinsky library) and change `self.use_fp16 = use_fp16` to `self.use_fp16 = False  # use_fp16`; you might also need to change, in conv.py of the PyTorch library...
```python
return F.conv2d(input, weight, bias, self.stride, self.padding, self.dilation, self.groups)
```
to
```python
return F.conv2d(input.float(), weight, bias, self.stride, self.padding, self.dilation, self.groups)
```
...in both cases the problem is that some of the model's weights are cast to fp16, which is only supported on the GPU, so an error is raised. `.float()` casts them to fp32, which the CPU accepts.
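A minimal sketch of the fix described above, using a plain `nn.Conv2d` as a stand-in for the Kandinsky UNet (the layer and shapes here are illustrative, not the library's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a UNet layer whose weights were cast to
# fp16, as the library does when use_fp16 is enabled.
conv = nn.Conv2d(3, 8, kernel_size=3).half()
x = torch.randn(1, 3, 16, 16)

# On CPU, running the half-precision layer can raise a dtype error;
# casting the weights (and the input) back to fp32 avoids it.
conv = conv.float()
out = conv(x.float())
print(out.dtype)  # torch.float32
```

Calling `.float()` on the module recursively casts all of its parameters and buffers to fp32, which is why changing `use_fp16` at load time (or casting afterwards) fixes every layer at once rather than requiring per-call patches in conv.py.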
@pablonieto0981, I think a better place to make this change is configs.py, not unet.py. This is what worked for me:
https://github.com/ai-forever/Kandinsky-2/pull/36
Code to reproduce: after running `nvidia-smi`