yasavvym8 opened this issue 1 year ago
This has never been reported before. Please find out which files actually take up the space. Use WizTree, WinDirStat, or something similar.
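If a script is easier than installing a GUI tool, a rough standard-library sketch like the one below can walk a drive and print its biggest files. The helper name and the top-20 cutoff are just illustrative; this is not part of the extension.

```python
# Sketch: list the largest files on a drive to see what is eating the space.
# Standard library only, so it can run in the same venv as the webui.
import heapq
import os

def largest_files(root: str, top_n: int = 20):
    """Walk `root` and return the `top_n` biggest files as (size_bytes, path) pairs."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip files we cannot stat (locked system files, etc.)
            heapq.heappush(sizes, (size, path))
            if len(sizes) > top_n:
                heapq.heappop(sizes)  # drop the smallest, keeping only the biggest top_n
    return sorted(sizes, reverse=True)

if __name__ == "__main__":
    for size, path in largest_files("C:\\"):
        print(f"{size / 2**30:8.2f} GiB  {path}")
```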
There was one recent issue that looked similar.
https://github.com/thygate/stable-diffusion-webui-depthmap-script/issues/337 shows it hanging in the same place. Maybe try removing BOOST.
There might be some conflict between newer GPUs and BOOST; I'm not entirely sure.
Not sure. In that issue the person tried to generate a video, and it was RAM or VRAM that ran out. Here it is drive capacity, which seems weird.
Thank you for the suggestions and insight, everyone. I tried to capture the changes with WinDirStat, and it appears that pagefile.sys is associated with this problem: that file grows massively while this image is being processed. I'm attaching a before/after image from WinDirStat. I was also able to capture the error message from the terminal during this process; the traceback is below, with a small swap-monitoring sketch after it. Thank you again for any insight.
Traceback (most recent call last):
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\common_ui.py", line 526, in run_generate
input_i, type, result = next(gen_obj)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\core.py", line 326, in core_generation_funnel
raise e
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\core.py", line 182, in core_generation_funnel
model_holder.get_raw_prediction(inputimages[count], net_width, net_height)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\depthmap_generation.py", line 296, in get_raw_prediction
raw_prediction = estimateboost(img, self.depth_model, self.depth_model_type, self.pix2pix_model,
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\depthmap_generation.py", line 647, in estimateboost
whole_estimate = doubleestimate(img, net_receptive_field_size, whole_image_optimal_size, pix2pixsize, model,
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\depthmap_generation.py", line 868, in doubleestimate
estimate2 = singleestimate(img, size2, model, net_type)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\depthmap_generation.py", line 891, in singleestimate
return estimatezoedepth(Image.fromarray(np.uint8(img * 255)).convert('RGB'), model, msize, msize)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\src\depthmap_generation.py", line 346, in estimatezoedepth
prediction = model.infer_pil(img)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(args, kwargs)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\depth_model.py", line 141, in infer_pil
out_tensor = self.infer(x, pad_input=pad_input, with_flip_aug=with_flip_aug, **kwargs)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\depth_model.py", line 126, in infer
return self.infer_with_flip_aug(x, pad_input=pad_input, **kwargs)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\depth_model.py", line 110, in infer_with_flip_aug
out = self._infer_with_pad_aug(x, pad_input=pad_input, **kwargs)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\depth_model.py", line 88, in _infer_with_pad_aug
out = self._infer(x)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\depth_model.py", line 55, in _infer
return self(x)['metric_depth']
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\zoedepth_nk\zoedepth_nk_v1.py", line 178, in forward
rel_depth, out = self.core(x, denorm=denorm, return_rel_depth=True)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\extensions\stable-diffusion-webui-depthmap-script\dzoedepth\models\base_models\midas.py", line 268, in forward
rel_depth = self.core(x)
File "C:\Users\My Profile\Desktop\My Things\stablediffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\My Profile/.cache\torch\hub\semjon00_MiDaS_master\midas\dpt_depth.py", line 166, in forward
return super().forward(x).squeeze(dim=1)
File "C:\Users\My Profile/.cache\torch\hub\semjon00_MiDaS_master\midas\dpt_depth.py", line 114, in forward
layers = self.forward_transformer(self.pretrained, x)
File "C:\Users\My Profile/.cache\torch\hub\semjon00_MiDaS_master\midas\backbones\beit.py", line 15, in forward_beit
return forward_adapted_unflatten(pretrained, x, "forward_features")
File "C:\Users\My Profile/.cache\torch\hub\semjon00_MiDaS_master\midas\backbones\utils.py", line 86, in forward_adapted_unflatten
exec(f"glob = pretrained.model.{function_name}(x)")
File "
So this looks like a virtual memory problem? Basically, RAM contents are temporarily paged out to the storage drive to provide more effective RAM capacity. You might want to lower the cap on virtual memory usage; it usually defaults to around 1.5x the RAM size.
This is still strange: 8-16 GB of RAM should be the most that's required. Maybe for troubleshooting, resize the image to 512x1024 (it should be able to handle 1127x2383). This feels like a memory leak, but I can't see its origin.
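To put a rough number on the "8-16 GB should be the most that's required" claim, here is a back-of-the-envelope sketch. The sizes come from the report and from the log's wholeImage size of 1344; float32 values and 3 channels are assumptions. It suggests that even many stray copies of these tensors would not account for tens of gigabytes of pagefile growth.

```python
# Rough sanity check: how big are the float32 tensors involved here?
# All numbers below are assumptions taken from the report/log, not measurements.

def tensor_gib(height: int, width: int, channels: int = 3, dtype_bytes: int = 4) -> float:
    """Size in GiB of one height x width x channels float32 tensor."""
    return height * width * channels * dtype_bytes / 2**30

print(f"input image 1127x2383:            {tensor_gib(2383, 1127):.3f} GiB")
print(f"BOOST whole-image pass 1344x1344: {tensor_gib(1344, 1344):.3f} GiB")
print(f"100 leaked copies of the input:   {100 * tensor_gib(2383, 1127):.1f} GiB")
```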
I'm trying to use the ZoeD_M12_NK model on a PNG (1127x2383; 2.82 MB), and each time I do, my PC (13th Gen Intel(R) Core(TM) i9-13900HX; RTX 4090 Notebook with 16 GB VRAM; 64 GB RAM; Windows 11 Pro) drops from 42 GB of remaining storage to 17 MB of storage in 2 minutes. I'm not sure if it's the width or height of the photo causing this, but up until this one photo I had done several generations with this same model and various sizes of photos with no problem. The only boxes I have ticked are BOOST, Save Outputs, and Generate Stereoscopic Image(s) Top-Bottom. Is this expected? CLI output below.
To create a public link, set share=True in launch().
Startup time: 21.0s (prepare environment: 7.3s, import torch: 2.8s, import gradio: 0.7s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 4.8s, create ui: 3.0s, gradio launch: 0.5s).
Loading VAE weights specified in settings: C:\Users\My Profile\Desktop\My Things\stablediffusion\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying attention optimization: sdp... done.
Model loaded in 8.7s (load weights from disk: 2.0s, create model: 0.6s, apply weights to model: 1.4s, load VAE: 2.2s, calculate empty prompt: 2.2s).
DepthMap v0.4.4 (0d579ddd)
device: cuda
Loading model(s) ..
Loading model weights from zoedepth_nk
img_size [384, 512]
using cache found in c:\users\my profile/.cache\torch\hub\semjon00_midas_master
params passed to resize transform:
  width: 512
  height: 384
  resize_target: true
  keep_aspect_ratio: true
  ensure_multiple_of: 32
  resize_method: minimal
using pretrained resource url::https://github.com/isl-org/zoedepth/releases/download/v1.0/zoed_m12_nk.pt
loaded successfully
initialize network with normal
loading the model from ./models/pix2pix\latest_net_g.pth
computing output(s) ..
0%| | 0/1 [00:00<?, ?it/s]
wholeimage being processed in : 1344