Closed: yangli-lab closed this issue 2 years ago
I think you need to check that you can compile custom operators. I don't know much about compiling CUDA kernels on Windows, but I think you will need to set up a CUDA build environment and install ninja.
Thanks @rosinality. I've solved that issue by replacing the decode setting in the file cpp_extension.py.
However, another problem has come up.
I used the same command, python convert_weight.py --repo ~/stylegan2 stylegan2-ffhq-config-f.pkl, but an error of "no module named fused" occurred. The following are the details:
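The "decode setting" fix mentioned above can be sketched as follows. The byte 0xd3 from the error is not valid UTF-8 but is a plausible lead byte in GBK, the codepage of a Chinese-locale Windows, so the edit to cpp_extension.py presumably amounts to decoding compiler output tolerantly instead of with a strict UTF-8 decode. The helper name and encoding list below are illustrative, not the exact patch:

```python
def safe_decode(output: bytes) -> str:
    """Decode subprocess output without raising UnicodeDecodeError.

    Tries UTF-8 first, then GBK (the likely codepage of localized
    cl.exe output), then falls back to replacing undecodable bytes.
    """
    for encoding in ("utf-8", "gbk"):
        try:
            return output.decode(encoding)
        except UnicodeDecodeError:
            continue
    # Last resort: keep going so the real build error stays visible.
    return output.decode("utf-8", errors="replace")


print(safe_decode(b"ok"))        # plain ASCII decodes as UTF-8
print(safe_decode(b"\xd3\xc5"))  # invalid UTF-8, falls through to GBK
```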
Traceback (most recent call last):
File "convert_weight.py", line 11, in
Have you tested that you can build custom operations? I think that is where the problem is. (The line subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. in the error message suggests it.)
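A quick way to act on this advice is to check that the tools PyTorch's JIT extension builder shells out to are actually reachable. This is a hedged sketch, not part of the repo; the tool names (ninja, nvcc, and cl on Windows) are the usual ones and may need adjusting for your setup:

```python
import shutil


def check_build_tools(tools=("ninja", "nvcc", "cl")):
    """Map each build tool name to its resolved path, or None if absent."""
    return {tool: shutil.which(tool) for tool in tools}


for tool, path in check_build_tools().items():
    status = path if path else "NOT FOUND - install it or fix PATH"
    print(f"{tool}: {status}")
```

If any entry prints NOT FOUND when run from the shell you launch Python from, the ninja subprocess failure above is expected.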
Sorry for the late reply. I've fixed this problem by using the method provided here. Once again, thanks for your patience and the helpful work you've done.
by running the command
and the following is the traceback:
D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py:190: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
Traceback (most recent call last):
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 1030, in _build_extension_module
check=True)
File "D:\Anaconda\envs\torch_gpu\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "convert_weight.py", line 11, in
from model import Generator, Discriminator
File "E:\fork\fork file\stylegan2-pytorch\model.py", line 11, in
from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix
File "E:\fork\fork file\stylegan2-pytorch\op\__init__.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "E:\fork\fork file\stylegan2-pytorch\op\fused_act.py", line 15, in
os.path.join(module_path, "fused_bias_act_kernel.cu"),
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 661, in load
is_python_module)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 830, in _jit_compile
with_cuda=with_cuda)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 883, in _write_ninja_file_and_build
_build_extension_module(name, build_directory, verbose)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 1042, in _build_extension_module
message += ": {}".format(error.output.decode())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd3 in position 1167: invalid continuation byte
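The final UnicodeDecodeError is a secondary failure that masks the real one: ninja failed, cpp_extension.py then tried to decode the compiler's output with a strict UTF-8 decode, but cl.exe on a Chinese-locale Windows prints its diagnostics in the GBK codepage. A small illustration (the sample bytes below merely stand in for localized compiler output):

```python
# GBK-encoded Chinese text, analogous to what cl.exe emits on a
# Chinese-locale Windows; its bytes are not valid UTF-8.
localized = "错误".encode("gbk")

try:
    localized.decode()  # strict UTF-8, like error.output.decode() above
except UnicodeDecodeError as exc:
    print(f"strict UTF-8 fails at byte offset {exc.start}")

# Either decode with the matching codepage...
print(localized.decode("gbk"))
# ...or degrade gracefully so the underlying ninja error stays readable.
print(localized.decode("utf-8", errors="backslashreplace"))
```

Fixing the decode only reveals the real compiler error; the underlying build-environment problem still has to be addressed separately.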