master-of-zen / Av1an

Cross-platform command-line AV1 / VP9 / HEVC / H264 encoding framework with per scene quality encoding
GNU General Public License v3.0

High disk usage when setting a file as -i #767

Open · motbob opened this issue 1 year ago

motbob commented 1 year ago

av1an.exe -i "C:\Users\censored\Desktop\oshi09.mkv" -v "--bit-depth=10 --input-bit-depth=10 --cpu-used=3 --end-usage=q --cq-level=21 --threads=1 --kf-max-dist=999 --kf-min-dist=999 --arnr-strength=0" --extra-split 275 --photon-noise 1 --chroma-noise --scenes oshi09_scenes.log -w 18 -o oshi09.ivf

av1an.exe -i "C:\Users\censored\Desktop\oshi09.mkv" -v "--bit-depth=10 --input-bit-depth=10" -w 18 -o oshi09.ivf

Both of these commands result in high hard drive usage during the encode, as if av1an is re-reading the entire source file for every chunk. However, setting -i to a VapourSynth script fixes the problem, with normal low disk usage. I have lsmash installed, and av1an appears to be using it when doing -i file.mkv, since it generates a .lwi file. I am also using lsmash when doing -i script.vpy.
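(The reporter's script.vpy was not posted; the following is only a minimal sketch of that kind of workaround script, assuming lsmas.LWLibavSource is used as described above.)

# script.vpy -- hypothetical sketch, not the actual workaround script
from vapoursynth import core
# Index the source once with L-SMASH Works and serve frames to av1an via vspipe
clip = core.lsmas.LWLibavSource(r"C:\Users\censored\Desktop\oshi09.mkv")
clip.set_output()

av1an is then invoked with -i script.vpy in place of -i oshi09.mkv, with the remaining arguments unchanged.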

I am on a slightly modded version of Vapoursynth (to fix the R62 vspipe problems), so if you cannot replicate the issue, that may be why.

I am on the nightly windows build of av1an, current as of 2023-06-14.

trixoniisama commented 1 year ago

When you provide av1an an mkv file, it generates a vpy file itself, which serves as the actual input. It looks like this:

from vapoursynth import core
core.max_cache_size = 1024
core.lsmas.LWLibavSource("cache.lwi").set_output()

This method seems to substantially improve scene detection speed, but it leads to the issue you mention as the input file gets bigger. My guess is that the cache simply isn't big enough; this method has a clear limit, so even if that really is the issue, using a higher cache capacity won't entirely solve the problem. I also don't think the devs are interested in making scene detection globally slower just in case someone provides a lossless input that is dozens of gigabytes large, because that's not the usual use case of an av1an user.
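(For illustration only, raising that cache in the generated script would look like the sketch below; the 4096 MB figure is an arbitrary example, and as noted above this only pushes the limit out rather than removing it.)

from vapoursynth import core
# Raise the VapourSynth frame cache (in MB) above the 1024 that av1an writes by default
core.max_cache_size = 4096
# Same input line as the generated script quoted above
core.lsmas.LWLibavSource("cache.lwi").set_output()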

I think your workaround is just fine and that it doesn't really need a "fix". However, the devs may have other ideas in mind.

akhilleusuggo commented 1 year ago

Facing the same issue.

pysh commented 8 months ago

I also noticed that LWLibavSource indexes the input file for each segment, and for large files this results in high hard disk usage. I work around this by manually editing loadscript.vpy during scene detection: you need to add the cache=0 parameter, i.e.

from vapoursynth import core
core.max_cache_size = 1024
# cache=0 disables the on-disk .lwi index; per this workaround it prevents the per-segment re-indexing
core.lsmas.LWLibavSource("{InputFile}", cachefile="{cache.lwi}", cache=0).set_output()

Then LWLibavSource will not index the input file for each segment. Maybe it's worth adding this as a default setting?

Update: Fixed in the latest LWLibavSource release.

akhilleusuggo commented 3 months ago

Sorry for the late reply

> Update: Fixed in the latest LWLibavSource release.

Thank you for the mention. I will test now and close the issue if it is fixed after updating.