Closed: terminatedkhla closed this issue 2 years ago.
Interesting. Could you also post the error lines prior to that, so I can see which line caused this? As a quick fix you can just run `pip install scipy`.
Here are the prior lines:
Microsoft Windows [Version 10.0.22598.200] (c) Microsoft Corporation. All rights reserved.
C:\Users\efmcp>docker run --privileged --gpus all -it --rm -v //c/Users/efmcp/Documents/VSGAN/VSGAN-tensorrt-docker-main:/workspace/tensorrt vsgan_tensorrt:latest
=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 21.12 (build 29870938)
NVIDIA TensorRT Version 8.2.1
Copyright (c) 2016-2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Container image Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
https://developer.nvidia.com/tensorrt
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License. By pulling and using the container, you accept the terms and conditions of this license: https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh. To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b
root@5c2d1ace25dd:/workspace# cd tensorrt
root@5c2d1ace25dd:/workspace/tensorrt# vspipe -c y4m inference.py - | ffmpeg -i pipe: -c:v ffv1 -level 3 -threads 16 -slices 16 output.mkv
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/Grammar.txt
Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/PatternGrammar.txt
Script evaluation failed: Python exception: No module named 'scipy'
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 2886, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 2887, in vapoursynth._vpy_evaluate
File "inference.py", line 13, in
pipe:: Invalid data found when processing input
root@5c2d1ace25dd:/workspace/tensorrt#
The pip install worked for that session, but the error came back the next time as well. This is the sequence I ran beforehand:
docker run --privileged --gpus all -it --rm -v //c/Users/efmcp/Documents/VSGAN/VSGAN-tensorrt-docker-main:/workspace/tensorrt vsgan_tensorrt:latest
cd tensorrt
vspipe -c y4m inference.py - | ffmpeg -i pipe: -c:v ffv1 -level 3 -threads 16 -slices 16 output.mkv
This is the modified inference.py that I used: inference.txt
Also, after running `pip install scipy` and running the command again, I get this message and an unplayable 1 kB output file:
root@5c2d1ace25dd:/workspace/tensorrt# vspipe -c y4m inference.py - | ffmpeg -i pipe: -c:v ffv1 -level 3 -threads 16 -slices 16 output.mkv
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/Grammar.txt
Information: Generating grammar tables from /usr/lib/python3.8/lib2to3/PatternGrammar.txt
Error: Failed to retrieve frame 0 with error: CUDA out of memory. Tried to allocate 102.00 MiB (GPU 0; 6.00 GiB total capacity; 5.12 GiB already allocated; 0 bytes free; 5.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Output 4 frames in 2.89 seconds (1.38 fps)
Input #0, yuv4mpegpipe, from 'pipe:':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p(progressive), 3840x2160, 59.94 fps, 59.94 tbr, 59.94 tbn, 59.94 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> ffv1 (native))
Output #0, matroska, to 'output.mkv':
  Metadata:
    encoder         : Lavf58.29.100
  Stream #0:0: Video: ffv1 (FFV1 / 0x31564646), yuv420p, 3840x2160, q=2-31, 200 kb/s, 59.94 fps, 1k tbn, 59.94 tbc
    Metadata:
      encoder         : Lavc58.54.100 ffv1
frame=    0 fps=0.0 q=0.0 Lsize=       1kB time=00:00:00.00 bitrate=N/A speed=   0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
At first glance it seems like I just need to add scipy to my Dockerfile, since it got added as a dependency and the container is started with --rm, so anything installed at runtime is discarded on exit.
It actually worked afterwards, but you don't have enough VRAM. It says "CUDA out of memory". Keep in mind that video super-resolution models need a lot of VRAM compared to an image model like esrgan or compact. I would suggest using a lightweight model with that GPU, or lowering the resolution. It is a hardware limit at that point. I can't open your inference.txt for some reason in my GitHub app, so this is just general advice.
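If you go the lower-resolution route, a rough sketch of what that could look like in your inference.py (the source filter, file name, and target resolution here are just placeholders; keep whatever loader and model call your script already uses):

```python
import vapoursynth as vs

core = vs.core

# placeholder source filter and file name; keep whatever loader your script already has
clip = core.ffms2.Source("input.mkv")

# downscale before inference to cut VRAM usage (720p instead of 4K, adjust as needed)
clip = core.resize.Bicubic(clip, width=1280, height=720)

# ... run the model on the smaller clip here, exactly as in your existing script ...

clip.set_output()
```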
Also, you can format logs by using triple backticks instead of simply pasting the text; it looks a lot cleaner.
I will include scipy in my repo later.
Thank you so much for the support! This has been a really cool program. I thought that this was more than enough VRAM, but I guess I was wrong. I use an RTX 2060, so it's less capable.
At higher resolutions the VRAM that is needed can be quite high in some cases. The VRAM usage also depends on the input resolution, the backend, and what model is being used. In the context of deep learning, 6 GB of VRAM isn't even that much.
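Just as a rough back-of-the-envelope number (an illustration only; real usage is far higher because the model keeps many intermediate feature maps per frame):

```python
# one 3840x2160 RGB frame stored as float32
width, height, channels, bytes_per_value = 3840, 2160, 3, 4
frame_bytes = width * height * channels * bytes_per_value
print(f"{frame_bytes / 2**20:.0f} MiB")  # ~95 MiB, the same ballpark as the 102 MiB allocation in your log
```

And that is a single input frame before the network even runs.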
I was finally able to look at your file; I had the idea to open the file URL in a browser. Like I mentioned in a prior GitHub issue, cugan needs a lot of VRAM. With 8 GB of VRAM you can't even process 1080p, from what I heard. I'll add that I was using Colab with 16 GB of VRAM and was able to process 1080p, so you can roughly estimate how much it needs. With 6 GB you can probably do 480p as input with cugan. It needs more than esrgan and compact in comparison.
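If you want to check how much VRAM you actually have to work with inside the container before picking a model and resolution, a quick check with PyTorch looks like this:

```python
import torch

# total VRAM of the first GPU (about 6 GiB on an RTX 2060)
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB total")

# how much of it PyTorch has currently grabbed
print(f"allocated: {torch.cuda.memory_allocated(0) / 2**30:.2f} GiB, "
      f"reserved: {torch.cuda.memory_reserved(0) / 2**30:.2f} GiB")
```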
Should be fixed now.
I can't run my inference.py without getting this error message. Can someone direct me to where I can get the repo?
File "/usr/local/lib/python3.8/dist-packages/mmedit/core/evaluation/metrics.py", line 7, in
from scipy.ndimage import convolve
ModuleNotFoundError: No module named 'scipy'
pipe:: Invalid data found when processing input