oGrqpez opened 1 year ago
Please provide the full stacktrace.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Installing requirements
Installing imageio-ffmpeg requirement for depthmap script
Installing pyqt5 requirement for depthmap script
Launching Web UI with arguments: --xformers --autolaunch
2023-06-22 18:45:05,887 - ControlNet - INFO - ControlNet v1.1.224
ControlNet preprocessor location: C:\Users\User\Desktop\Stable Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads
2023-06-22 18:45:06,035 - ControlNet - INFO - ControlNet v1.1.224
Loading weights [ef8629e2c8] from C:\Users\User\Desktop\Stable Diffusion\webui\models\Stable-diffusion\protogenX34Photorealism_protogenX34.safetensors
Creating model from config: C:\Users\User\Desktop\Stable Diffusion\webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
DiffusionWrapper has 859.52 M params.
Startup time: 23.9s (import torch: 6.5s, import gradio: 3.6s, import ldm: 2.1s, other imports: 3.4s, setup codeformer: 0.3s, load scripts: 6.8s, create ui: 0.6s, gradio launch: 0.5s).
Applying optimization: xformers... done.
Textual inversion embeddings loaded(0):
Model loaded in 8.5s (load weights from disk: 0.8s, create model: 2.8s, apply weights to model: 1.8s, apply half(): 0.5s, move model to device: 0.6s, load textual inversion embeddings: 2.0s).
DepthMap v0.3.11 (baf6946e)
device: cuda
Loading model weights from zoedepth_k
Overwriting config with config_version kitti
img_size [1440, 1080]
Using cache found in C:\Users\User/.cache\torch\hub\intel-isl_MiDaS_master
Cannot find callable DPT_BEiT_L_384 in hubconf
All done.
I have the same issue, on Ubuntu:
DepthMap v0.3.11 (baf6946e)
device: cuda
Loading model weights from zoedepth_nk
img_size [512, 512]
Using cache found in /home/pm/.cache/torch/hub/intel-isl_MiDaS_master
Cannot find callable DPT_BEiT_L_384 in hubconf
All done.
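For context, torch.hub imports the cached repo's hubconf.py as a module and then looks the requested entrypoint up by name; the "Cannot find callable DPT_BEiT_L_384 in hubconf" message means that lookup failed because the cached MiDaS checkout predates that entrypoint. A minimal sketch of that resolution step (the function name and error text mirror the log; this is not the actual torch.hub source):

```python
# Sketch: how torch.hub resolves an entrypoint from a cached repo.
# It imports the repo's hubconf.py as a module and fetches the callable
# by name; a stale checkout simply lacks the newer model functions.
import importlib.util

def find_entrypoint(hubconf_path, name):
    spec = importlib.util.spec_from_file_location("hubconf", hubconf_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    func = getattr(module, name, None)
    if not callable(func):
        # The failure seen in the log: the cached MiDaS repo
        # predates the DPT_BEiT_L_384 entrypoint.
        raise RuntimeError(f"Cannot find callable {name} in hubconf")
    return func
```

Deleting or refreshing the cached `intel-isl_MiDaS_master` folder forces torch.hub to fetch a checkout whose hubconf.py defines the newer models.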
After the DPT_BEiT_L_384 model was installed:
DepthMap v0.3.11 (baf6946e)
device: cuda
Loading model weights from ./models/midas/dpt_beit_large_384.pt
Downloading https://github.com/isl-org/MiDaS/releases/download/v3_1/dpt_beit_large_384.pt to ./models/midas/dpt_beit_large_384.pt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1.34G/1.34G [01:55<00:00, 12.5MB/s]
initialize network with normal
loading the model from ./models/pix2pix/latest_net_G.pth
Computing depthmap(s) ..
But indeed, /home/pm/.cache/torch/hub/intel-isl_MiDaS_master doesn't have a definition for DPT_BEiT_L_384. The ControlNet annotator comes with its own hubconf.py. Copying the annotator's midas_repo folder to /home/pm/.cache/torch/hub/ and renaming it to intel-isl_MiDaS_master makes it work:
DepthMap v0.3.11 (baf6946e)
device: cuda
Loading model weights from zoedepth_nk
img_size [512, 512]
Using cache found in /home/pm/.cache/torch/hub/intel-isl_MiDaS_master
Params passed to Resize transform:
width: 512
height: 512
resize_target: True
keep_aspect_ratio: True
ensure_multiple_of: 32
resize_method: minimal
Using pretrained resource url::https://github.com/isl-org/ZoeDepth/releases/download/v1.0/ZoeD_M12_NK.pt
Downloading: "https://github.com/isl-org/ZoeDepth/releases/download/v1.0/ZoeD_M12_NK.pt" to /home/pm/.cache/torch/hub/checkpoints/ZoeD_M12_NK.pt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1.35G/1.35G [01:53<00:00, 12.7MB/s]
Loaded successfully
initialize network with normal
loading the model from ./models/pix2pix/latest_net_G.pth
Computing depthmap(s) ..
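The copy-and-rename workaround above can be sketched as follows. The paths are assumptions about a typical webui install, not fixed locations; adjust them to yours:

```python
# Sketch of the workaround: replace the stale cached MiDaS checkout in the
# torch hub cache with the up-to-date copy that ships inside the ControlNet
# annotator. Paths are examples only.
import shutil
from pathlib import Path

def replace_midas_cache(annotator_repo: Path, hub_cache: Path) -> None:
    """Swap the cached intel-isl_MiDaS_master repo for the annotator copy."""
    if hub_cache.exists():
        # Keep the stale cache as a backup instead of deleting it outright.
        hub_cache.rename(hub_cache.with_name(hub_cache.name + ".bak"))
    shutil.copytree(annotator_repo, hub_cache)

# Example invocation (assumed paths, adjust to your install):
# replace_midas_cache(
#     Path("extensions/sd-webui-controlnet/annotator/midas_repo"),
#     Path.home() / ".cache/torch/hub/intel-isl_MiDaS_master",
# )
```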
This is an ugly solution, of course. I guess --reinstall-torch is needed.
I cannot use the ZoeDepth models. I have them downloaded and placed in ~\.cache\torch\hub\checkpoints. Are they placed in the wrong spot, or is there something else I am supposed to do?
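For what it's worth, the download line in the log above shows torch.hub saving the checkpoint under a `checkpoints` subfolder of the hub directory, i.e. ~/.cache/torch/hub/checkpoints/ZoeD_M12_NK.pt. A small helper that mirrors that default layout (this only reflects the paths seen in this thread; it is not part of torch itself):

```python
# Mirror of the default torch.hub checkpoint location seen in the log:
# <hub dir>/checkpoints/<file>, where the hub dir is ~/.cache/torch/hub
# unless TORCH_HOME overrides it. Note the leading dot in ".cache" and
# the extra "checkpoints" level under "hub".
import os

def expected_checkpoint_path(filename: str, torch_home: str = None) -> str:
    home = torch_home or os.environ.get(
        "TORCH_HOME",
        os.path.join(os.path.expanduser("~"), ".cache", "torch"))
    return os.path.join(home, "hub", "checkpoints", filename)
```

So if the files sit directly under ~/.cache/torch/hub (without the `checkpoints` level) or under a folder missing the dot (`~/cache/...`), torch.hub will re-download or fail to find them.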