Open lalawayne opened 2 months ago
Please try a larger patch_sz, such as 128, to see if that works. The parameter patch_sz controls patch-level parallelization: the larger the patch_sz, the more decoding steps, and the more decoding steps, the less memory is used in each step. patch_sz = 64 gives the best compression performance because it matches the value used during training. If your GPU memory is too small to run DLPR on the full image, I suggest cutting the image into subimages.
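The "cut your image into subimages" suggestion can be sketched as below. This is a minimal, hypothetical helper, not part of the DLPR repo: it tiles an image into pieces (with the tile side kept at a multiple of 64 to stay consistent with patch_sz), records each tile's position, and lets you stitch the decompressed tiles back together. You would compress each tile independently with whatever compress call you already use.

```python
import numpy as np

def split_into_subimages(img, tile=512):
    """Split an HxWxC image into non-overlapping tiles of at most
    tile x tile pixels. Each tile is returned with its (top, left)
    position so the image can be stitched back together later.
    Choose tile as a multiple of 64 to match the patch_sz constraint."""
    H, W = img.shape[:2]
    return [((top, left), img[top:top + tile, left:left + tile])
            for top in range(0, H, tile)
            for left in range(0, W, tile)]

def reassemble(tiles, shape):
    """Inverse of split_into_subimages: paste (decompressed) tiles back."""
    out = np.zeros(shape, dtype=tiles[0][1].dtype)
    for (top, left), t in tiles:
        out[top:top + t.shape[0], left:left + t.shape[1]] = t
    return out
```

Edge tiles may come out smaller than tile x tile; each tile then goes through the same compression path a whole image would. Compressing tile by tile makes peak GPU memory scale with the tile size instead of the full image resolution.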
Because of the downsampling during lossy compression, DLPR doesn't support a patch_sz smaller than 64. We don't expect users to adjust patch_sz, so it is better to keep it fixed.
Hello, when I run test.py to compress my own .jpg files with patch_sz=64, I get an out-of-GPU-memory error (my input images have a fairly high resolution). When I change patch_sz to 32, I get the error below. Why does this happen, and how can I fix it?

(DLPR) dzq@ubuntu:~/DLPR/DLPR_ll$ python test.py
Using /home/dzq/.cache/torch_extensions/py39_cu113 as PyTorch extensions root...
Emitting ninja build file /home/dzq/.cache/torch_extensions/py39_cu113/torchac_backend/build.ninja...
Building extension module torchac_backend...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module torchac_backend...
Traceback (most recent call last):
File "/home/dzq/DLPR/DLPR_ll/test.py", line 323, in <module>
code_lossy, code_res, img_shape, res_range = compress(ll_module, I, COT)
File "/home/dzq/DLPR/DLPR_ll/test.py", line 105, in compress
code_lossy = model.lossy_compressor.compress(x)
File "/home/dzq/DLPR/DLPR_ll/ll_model_eval.py", line 210, in compress
y = self.encoder(input/255.)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/dzq/DLPR/DLPR_ll/ll_model_eval.py", line 80, in forward
out = self.layers(input)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/dzq/DLPR/DLPR_ll/custom_layers.py", line 109, in forward
b = self.conv_b(x)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/home/dzq/anaconda3/envs/DLPR/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/dzq/DLPR/DLPR_ll/win_attention.py", line 188, in forward
x_windows = window_partition(shifted_x, self.window_size)
File "/home/dzq/DLPR/DLPR_ll/win_attention.py", line 15, in window_partition
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
RuntimeError: shape '[15510, 0, 4, 0, 4, 192]' is invalid for input of size 11911680
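For reference, the zero dimensions in that invalid shape come straight from the integer divisions in window_partition: after the encoder's downsampling, a 32-pixel patch shrinks below the attention window size, so H // window_size and W // window_size become 0 while the tensor itself is non-empty, and the view must fail. A small standalone illustration (the concrete H, W, and downsampling factor here are illustrative, not read from the repo):

```python
from math import prod

# Mirrors the failing line in win_attention.py:
#   x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
def window_partition_target_shape(B, H, W, C, window_size):
    return (B, H // window_size, window_size, W // window_size, window_size, C)

# Illustrative numbers: a 32-pixel patch downsampled to a 2x2 feature map,
# with window size 4 and 192 channels (as in the error message).
B, H, W, C, window = 1, 2, 2, 192, 4
target = window_partition_target_shape(B, H, W, C, window)
print(target)                # zero-sized dims, like '[15510, 0, 4, 0, 4, 192]'
assert prod(target) == 0     # the view target holds 0 elements...
assert B * H * W * C == 768  # ...but the tensor holds 768 -> RuntimeError
```

With patch_sz = 64 the downsampled patch stays divisible by the window size, which is why the authors fix the minimum at 64.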