VenkatLohithDasari closed this issue 1 year ago
I have the same issue when I switch the depth gen model MiDas_v21 to small or to large.
I came here to make a similar comment: after a run using depthmap2mask, the memory allocated during the run is not released. This is clearly visible in a resource monitor watching VRAM usage, and it is reproducible across two machines and several different attempts. Command-line flags include --medvram, --full precision, and --no-halfs.
The same thing happens with the depth mask script here; since they both use MiDaS, I'm guessing that might be the common source of the error.
OK, so it appears that MiDaS does indeed add to the GPU's memory after each use.
how to fix?
I made a PR with the fix; you can check it if you're still having issues: https://github.com/Extraltodeus/depthmap2mask/pull/22
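For reference, the usual pattern for releasing VRAM held by a PyTorch model after inference looks roughly like this. This is a sketch of the general technique, not the exact contents of the PR; the `release_model` helper name is illustrative:

```python
import gc

# PyTorch may not be installed in every environment this sketch runs in,
# so the import is guarded; with torch available, the CUDA cleanup runs.
try:
    import torch
except ImportError:  # pragma: no cover
    torch = None


def release_model(model):
    """Drop a model's GPU tensors and return cached VRAM to the driver.

    Note: the caller must also drop its own references to the model,
    otherwise the tensors stay alive and the VRAM is never freed.
    """
    if torch is not None and isinstance(model, torch.nn.Module):
        # Move parameters off the GPU so their CUDA storage can be freed.
        model.to("cpu")
    # Drop this function's reference and collect any reference cycles.
    del model
    gc.collect()
    if torch is not None and torch.cuda.is_available():
        # Return cached (but unused) blocks from PyTorch's allocator
        # to the driver, so other processes can see the memory again.
        torch.cuda.empty_cache()
```

Calling something like `release_model(midas_model)` after each depth pass (and clearing any remaining references) is what typically stops the per-run VRAM growth described above.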
the merge is done, thank you @TingTingin
Hey, as I continuously generate images, I see slow growth in GPU memory; after generating around 10 images, GPU memory is 100% full. After that, it always throws an OOM error.
I mainly use two models, one is MiDas_v21 and the other is dpt_large. It doesn't matter which model it is.