Loading captioning models...
Loading CLIP interrogator...
Loading CLIP model from ViT-H-14/laion2b_s32b_b79k
Loading BLIP model...
100%|████████| 1.66G/1.66G [01:10<00:00, 25.2MB/s]
load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth
Loading CLIP model...
Downloading (…)_pytorch_model.bin";: 9%|████████████████▎ | 336M/3.94G [00:13<02:23, 25.2MB/s]Downloading (…)_pytorch_model.bin";: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.94G/3.94G [02:41<00:00, 24.4MB/s]
Loaded CLIP model and data in 182.70 seconds.
Loading YOLOv5 interrogator...
Downloading: "https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt" to /Users/macuser/work/A1111/stable-diffusion-webui/models/yolo/yolov5m6v7.pt
100%|████████| 69.0M/69.0M [00:05<00:00, 13.5MB/s]
Downloading: "https://github.com/ultralytics/yolov5/zipball/master" to /Users/macuser/.cache/torch/hub/master.zip
Traceback (most recent call last):
  File "/Users/macuser/work/A1111/stable-diffusion-webui/extensions/sd_smartprocess/smartprocess.py", line 130, in preprocess
    crop_clip = CropClip()
  File "/Users/macuser/work/A1111/stable-diffusion-webui/extensions/sd_smartprocess/clipcrop.py", line 49, in __init__
    self.model = torch.hub.load('ultralytics/yolov5', 'custom', model_path[0])
  File "/Users/macuser/work/A1111/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/hub.py", line 540, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "/Users/macuser/work/A1111/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/hub.py", line 569, in _load_local
    model = entry(*args, **kwargs)
  File "/Users/macuser/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 83, in custom
    return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
  File "/Users/macuser/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 33, in _create
    from models.common import AutoShape, DetectMultiBackend
  File "/Users/macuser/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 32, in <module>
    from utils.plots import Annotator, colors, save_one_box
  File "/Users/macuser/.cache/torch/hub/ultralytics_yolov5_master/utils/plots.py", line 18, in <module>
    import seaborn as sn
ModuleNotFoundError: No module named 'seaborn'
This is on a Mac M1 (Apple Silicon).
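The traceback bottoms out in YOLOv5's `utils/plots.py`, which imports `seaborn`; that package is simply not installed in the webui's venv. A minimal sketch of a fix, assuming the same Python interpreter shown in the traceback is used (run this with the webui venv's Python so the package lands in the right environment):

```python
import importlib.util
import subprocess
import sys

# YOLOv5's utils/plots.py does `import seaborn as sn` at import time,
# so torch.hub.load(...) fails if seaborn is absent from the environment.
# Install it into the interpreter currently running (sys.executable),
# but only if it is not already importable.
if importlib.util.find_spec("seaborn") is None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "seaborn"])

print(importlib.util.find_spec("seaborn") is not None)
```

Equivalently, activate the webui venv in a terminal and run `pip install seaborn`, then restart the webui and retry the preprocessing step.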