Closed: My12123 closed this issue 1 year ago.
This is a bug in torchvision and it has already been fixed. Update the torchvision module to 0.15 or later.
For a conda environment:
conda update torchvision
For a pip environment:
pip3 install --upgrade torchvision
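After upgrading, you can verify the installed version with a quick check (not part of the original instructions):
python -c "import torchvision; print(torchvision.__version__)"
It should print 0.15.0 or later.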
Also, I have noted this issue in the source code comments. https://github.com/nagadomi/nunif/blob/70b31c118b7d375616374c78f7cc799e620b60dd/waifu2x/export_onnx.py#L3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 0.13.1+cu116 requires torch==1.13.1, but you have torch 2.0.1 which is incompatible.
torchtext 0.14.1 requires torch==1.13.1, but you have torch 2.0.1 which is incompatible.
Successfully installed torch-2.0.1 torchvision-0.15.2
Installation command used:
pip install torch==1.13.1+cu116 torchvision==0.15.0+cu116 torchaudio==0.13.1 torchtext --extra-index-url https://download.pytorch.org/whl/cu116
(nunif) F:\nunif>pip install torch==1.13.1+cu116 torchvision==0.15.0+cu116 torchaudio==0.13.1 torchtext --extra-index-url https://download.pytorch.org/whl/cu116
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu116
Collecting torch==1.13.1+cu116
Using cached https://download.pytorch.org/whl/cu116/torch-1.13.1%2Bcu116-cp310-cp310-win_amd64.whl (2433.8 MB)
ERROR: Could not find a version that satisfies the requirement torchvision==0.15.0+cu116 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.12.0, 0.13.0, 0.13.0+cu116, 0.13.1, 0.13.1+cu116, 0.14.0, 0.14.0+cu116, 0.14.1, 0.14.1+cu116, 0.15.0, 0.15.1, 0.15.2)
ERROR: No matching distribution found for torchvision==0.15.0+cu116
(nunif) F:\nunif>pip install torch==1.13.1+cu116 torchvision==0.15.0 torchaudio==0.13.1 torchtext --extra-index-url https://download.pytorch.org/whl/cu116
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu116
Collecting torch==1.13.1+cu116
Using cached https://download.pytorch.org/whl/cu116/torch-1.13.1%2Bcu116-cp310-cp310-win_amd64.whl (2433.8 MB)
Collecting torchvision==0.15.0
Downloading torchvision-0.15.0-cp310-cp310-win_amd64.whl (1.2 MB)
---------------------------------------- 1.2/1.2 MB 751.2 kB/s eta 0:00:00
Collecting torchaudio==0.13.1
Using cached https://download.pytorch.org/whl/cu116/torchaudio-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl (2.3 MB)
Collecting torchtext
Downloading https://download.pytorch.org/whl/torchtext-0.15.2-cp310-cp310-win_amd64.whl (1.9 MB)
---------------------------------------- 1.9/1.9 MB 3.0 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in f:\1\envs\nunif\lib\site-packages (from torch==1.13.1+cu116) (4.6.3)
Requirement already satisfied: numpy in f:\1\envs\nunif\lib\site-packages (from torchvision==0.15.0) (1.24.3)
Requirement already satisfied: requests in f:\1\envs\nunif\lib\site-packages (from torchvision==0.15.0) (2.31.0)
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install torch==1.13.1+cu116 and torchvision==0.15.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested torch==1.13.1+cu116
torchvision 0.15.0 depends on torch==2.0.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
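(For reference: torchvision 0.15.x requires torch 2.0.x, so pinning torch==1.13.1 together with torchvision 0.15 cannot be resolved. A matched install would look roughly like the command below; the cu118 tag is only an assumption about which CUDA build is wanted.)
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118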
(nunif) F:\nunif>python -m waifu2x.export_onnx -i waifu2x/pretrained_models -o waifu2x/onnx_models
2023-06-05 16:12:21,311:nunif: [ INFO] cunet
F:\1\envs\nunif\lib\site-packages\torch\onnx\_internal\jit_utils.py:306: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ..\torch\csrc\jit\passes\onnx\constant_fold.cpp:181.)
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
F:\1\envs\nunif\lib\site-packages\torch\onnx\utils.py:689: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ..\torch\csrc\jit\passes\onnx\constant_fold.cpp:181.)
_C._jit_pass_onnx_graph_shape_type_inference(
F:\1\envs\nunif\lib\site-packages\torch\onnx\utils.py:1186: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ..\torch\csrc\jit\passes\onnx\constant_fold.cpp:181.)
_C._jit_pass_onnx_graph_shape_type_inference(
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
[the diagnostic block above is repeated four more times]
2023-06-05 16:12:25,585:nunif: [ INFO] upcunet
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
[the diagnostic block above is repeated four more times]
2023-06-05 16:12:35,817:nunif: [ INFO] swin_unet
F:\nunif\waifu2x\models\swin_unet.py:169: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert x2.shape[2] % 12 == 0 and x2.shape[2] % 16 == 0
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
The other warnings are not harmful; that is normal output.
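As a minimal illustration of why such TracerWarnings show up (this is stand-alone example code, not the nunif model): when a Python boolean is taken from a tensor during tracing, it is evaluated once and frozen into the exported graph, which is harmless as long as the input shape stays the same.
import torch

class Example(torch.nn.Module):
    def forward(self, x):
        # bool() of a traced tensor is evaluated once at trace time,
        # so this branch is baked into the trace as a constant
        if x.sum() > 0:
            return x * 2
        return x

torch.jit.trace(Example(), torch.ones(1, 3, 8, 8))  # emits a "Converting a tensor to a Python boolean" TracerWarning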
2023-06-05 19:32:59,379:nunif: [ WARNING] patch_resize_antialias: No Resize node: waifu2x/onnx_models\swin_unet\photo\noise3_scale4x.onnx: name=None
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
[the diagnostic block above is repeated two more times]
2023-06-05 19:34:14,260:nunif: [ WARNING] patch_resize_antialias: No Resize node: waifu2x/onnx_models\swin_unet\photo\scale4x.onnx: name=None
No Resize node: waifu2x/onnx_models\swin_unet\art_scan\noise2_scale4x.onnx: name=None
Why is / used first and then \ ?
No Resize node
It is due to some experimental code. Ignore it; it is not a problem.
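For context, the warning only means that the exported graph contains no Resize operator to patch. A rough sketch of what such a check amounts to, using the onnx package (the actual patch_resize_antialias code in nunif may differ):
import onnx

# load one of the exported models mentioned in the log
model = onnx.load("waifu2x/onnx_models/swin_unet/photo/scale4x.onnx")
resize_nodes = [node for node in model.graph.node if node.op_type == "Resize"]
print(len(resize_nodes))  # 0 here, hence the "No Resize node" warning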
Why is / used first and then \ ?
Because you specified -o waifu2x/onnx_models in the command:
python -m waifu2x.export_onnx -i waifu2x/pretrained_models -o waifu2x/onnx_models
The forward slashes in the path come from that argument, while the backslashes are added when Windows joins the output subdirectories to it.
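A small sketch of the mixed-separator behaviour on Windows (illustrative only; the subdirectory names are taken from the log):
import os

# the -o argument is kept exactly as typed, with forward slashes ...
out_dir = "waifu2x/onnx_models"
# ... while os.path.join on Windows appends backslashes for the subdirectories
print(os.path.join(out_dir, "swin_unet", "photo", "scale4x.onnx"))
# prints: waifu2x/onnx_models\swin_unet\photo\scale4x.onnx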