sczhou / CodeFormer

[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer

Error setting up CodeFormer: #194

Open CharlesFeo opened 1 year ago

CharlesFeo commented 1 year ago

(sdwebui) D:\stable-diffusion-webui>webui-user.bat
venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing xformers
Installing requirements for Web UI
Launching Web UI with arguments: --xformers
Error setting up CodeFormer:
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\codeformer_model.py", line 38, in setup_model
    from facelib.utils.face_restoration_helper import FaceRestoreHelper
  File "d:\stable-diffusion-webui\venv\scripts\codeformer-master\facelib\utils\face_restoration_helper.py", line 7, in <module>
    from facelib.detection import init_detection_model
  File "d:\stable-diffusion-webui\venv\scripts\codeformer-master\facelib\detection\__init__.py", line 10, in <module>
    from .retinaface.retinaface import RetinaFace
  File "d:\stable-diffusion-webui\venv\scripts\codeformer-master\facelib\detection\retinaface\retinaface.py", line 14, in <module>
    from basicsr.utils.misc import get_device
ImportError: cannot import name 'get_device' from 'basicsr.utils.misc' (D:\stable-diffusion-webui\venv\lib\site-packages\basicsr\utils\misc.py)

Loading weights [fc2511737a] from D:\stable-diffusion-webui\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 2.8s (load weights from disk: 0.1s, create model: 0.3s, apply weights to model: 0.6s, apply half(): 0.6s, move model to device: 0.5s, load textual inversion embeddings: 0.7s).
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 7.4s (import torch: 1.5s, import gradio: 0.9s, import ldm: 0.5s, other imports: 0.7s, load scripts: 0.5s, load SD checkpoint: 3.0s, create ui: 0.2s, gradio launch: 0.1s).

hanoi2233 commented 1 year ago

same here

CharlieWML commented 1 year ago

I have the same error. Did you solve it?

ksylvan commented 1 year ago

Same error, just setting up Automatic1111 on a new Windows 11 (in WSL2).

python webui.py --xformers 
Error setting up CodeFormer:
Traceback (most recent call last):
  File "/mnt/c/Users/kayvan/src/stable-diffusion-webui/modules/codeformer_model.py", line 38, in setup_model
    from facelib.utils.face_restoration_helper import FaceRestoreHelper
  File "/mnt/c/Users/kayvan/src/stable-diffusion-webui/repositories/CodeFormer/facelib/utils/face_restoration_helper.py", line 7, in <module>
    from facelib.detection import init_detection_model
  File "/mnt/c/Users/kayvan/src/stable-diffusion-webui/repositories/CodeFormer/facelib/detection/__init__.py", line 10, in <module>
    from .retinaface.retinaface import RetinaFace
  File "/mnt/c/Users/kayvan/src/stable-diffusion-webui/repositories/CodeFormer/facelib/detection/retinaface/retinaface.py", line 14, in <module>
    from basicsr.utils.misc import get_device
ImportError: cannot import name 'get_device' from 'basicsr.utils.misc' (/home/kayvan/.miniconda3/envs/sd/lib/python3.10/site-packages/basicsr/utils/misc.py)

Loading weights [6ce0161689] from /mnt/c/Users/kayvan/src/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /mnt/c/Users/kayvan/src/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0): 
Model loaded in 18.0s (load weights from disk: 0.9s, create model: 0.3s, apply weights to model: 16.1s, apply half(): 0.2s, move model to device: 0.5s).
Running on local URL:  http://127.0.0.1:7860

ksylvan commented 1 year ago

I think I'm close to fixing this.

python
Python 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import basicsr
>>> basicsr.__version__
'1.3.2'
>>>

But the correct version should be 1.4.2, so an old version is being picked up.
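One way to confirm which copy of a package is actually being imported is to print its version alongside the file it was loaded from. This is a hedged diagnostic sketch: the `locate` helper name is mine, and `json` stands in for `basicsr` only so the snippet runs without third-party packages installed.

```python
# Hypothetical diagnostic: report a package's version and the file it was
# imported from, to spot an older vendored copy shadowing the pip-installed one.
import importlib


def locate(package_name):
    mod = importlib.import_module(package_name)
    version = getattr(mod, "__version__", "unknown")
    path = getattr(mod, "__file__", "<builtin>")
    return version, path


# In the webui venv you would call locate("basicsr"); "json" is used here
# only so the example is self-contained.
version, path = locate("json")
print(version, path)
```

If the printed path points into the CodeFormer checkout rather than `site-packages`, the vendored 1.3.2 copy is shadowing the pip-installed 1.4.2.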

ksylvan commented 1 year ago

In Automatic1111/stable-diffusion-webui, requirements_versions.txt pins basicsr==1.4.2 (and requirements.txt lists basicsr unpinned). The installed package is this one: https://github.com/XPixelGroup/BasicSR

But this CodeFormer repo contains its own copy of basicsr (version 1.3.2) in a subdirectory.

diff -u basicsr/utils/misc.py /home/kayvan/.miniconda3/envs/sd/lib/python3.10/site-packages/basicsr/utils/misc.py 
--- basicsr/utils/misc.py       2023-05-07 07:23:54.494896600 -0700
+++ /home/kayvan/.miniconda3/envs/sd/lib/python3.10/site-packages/basicsr/utils/misc.py 2023-05-07 09:25:48.603644727 -0700
@@ -1,35 +1,11 @@
+import numpy as np
 import os
-import re
 import random
 import time
 import torch
-import numpy as np
 from os import path as osp

 from .dist_util import master_only
-from .logger import get_root_logger
-
-IS_HIGH_VERSION = [int(m) for m in list(re.findall(r"^([0-9]+)\.([0-9]+)\.([0-9]+)([^0-9][a-zA-Z0-9]*)?(\+git.*)?$",\
-    torch.__version__)[0][:3])] >= [1, 12, 0]
-
-def gpu_is_available():
-    if IS_HIGH_VERSION:
-        if torch.backends.mps.is_available():
-            return True
-    return True if torch.cuda.is_available() and torch.backends.cudnn.is_available() else False
-
-def get_device(gpu_id=None):
-    if gpu_id is None:
-        gpu_str = ''
-    elif isinstance(gpu_id, int):
-        gpu_str = f':{gpu_id}'
-    else:
-        raise TypeError('Input should be int value.')
-
-    if IS_HIGH_VERSION:
-        if torch.backends.mps.is_available():
-            return torch.device('mps'+gpu_str)
-    return torch.device('cuda'+gpu_str if torch.cuda.is_available() and torch.backends.cudnn.is_available() else 'cpu')

 def set_random_seed(seed):
@@ -67,7 +43,9 @@
     else:
         mkdir_and_rename(path_opt.pop('results_root'))
     for key, path in path_opt.items():
-        if ('strict_load' not in key) and ('pretrain_network' not in key) and ('resume' not in key):
+        if ('strict_load' in key) or ('pretrain_network' in key) or ('resume' in key) or ('param_key' in key):
+            continue
+        else:
             os.makedirs(path, exist_ok=True)

@@ -84,7 +62,7 @@
             Default: False.

     Returns:
-        A generator for all the interested files with relative pathes.
+        A generator for all the interested files with relative paths.
     """

     if (suffix is not None) and not isinstance(suffix, (str, tuple)):
@@ -120,7 +98,6 @@
         opt (dict): Options.
         resume_iter (int): Resume iteration.
     """
-    logger = get_root_logger()
     if opt['path']['resume_state']:
         # get all the networks
         networks = [key for key in opt.keys() if key.startswith('network_')]
@@ -129,15 +106,22 @@
             if opt['path'].get(f'pretrain_{network}') is not None:
                 flag_pretrain = True
         if flag_pretrain:
-            logger.warning('pretrain_network path will be ignored during resuming.')
+            print('pretrain_network path will be ignored during resuming.')
         # set pretrained model paths
         for network in networks:
             name = f'pretrain_{network}'
             basename = network.replace('network_', '')
-            if opt['path'].get('ignore_resume_networks') is None or (basename
+            if opt['path'].get('ignore_resume_networks') is None or (network
                                                                      not in opt['path']['ignore_resume_networks']):
                 opt['path'][name] = osp.join(opt['path']['models'], f'net_{basename}_{resume_iter}.pth')
-                logger.info(f"Set {name} to {opt['path'][name]}")
+                print(f"Set {name} to {opt['path'][name]}")
+
+        # change param_key to params in resume
+        param_keys = [key for key in opt['path'].keys() if key.startswith('param_key')]
+        for param_key in param_keys:
+            if opt['path'][param_key] == 'params_ema':
+                opt['path'][param_key] = 'params'
+                print(f'Set {param_key} to params')

 def sizeof_fmt(size, suffix='B'):
@@ -148,7 +132,7 @@
         suffix (str): Suffix. Default: 'B'.

     Return:
-        str: Formated file siz.
+        str: Formatted file size.
     """
     for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
         if abs(size) < 1024.0:

The correct fix would be to rely on the 1.4.2 basicsr library and pull in the extra methods some other way.
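Following that idea, the missing selection logic could be vendored as pure functions instead of patching site-packages. This is only a sketch: the names `parse_torch_version` and `pick_device` are mine, not CodeFormer's API, and the CUDA/MPS availability checks are passed in as booleans so the sketch has no torch dependency. The branching mirrors the removed `get_device` shown in the diff above.

```python
# Sketch of the device-selection logic the vendored get_device() implements,
# written as pure functions so webui code could carry it without patching
# the installed basicsr package.
import re


def parse_torch_version(version):
    """Extract (major, minor, patch) from strings like '1.13.1+cu117'."""
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)", version)
    if m is None:
        raise ValueError(f"unrecognized version string: {version!r}")
    return tuple(int(part) for part in m.groups())


def pick_device(torch_version, cuda_ok, mps_ok, gpu_id=None):
    """Prefer MPS on torch >= 1.12.0, then CUDA, then CPU, matching the
    branching of the removed get_device()."""
    if gpu_id is None:
        gpu_str = ""
    elif isinstance(gpu_id, int):
        gpu_str = f":{gpu_id}"
    else:
        raise TypeError("gpu_id should be an int.")
    if parse_torch_version(torch_version) >= (1, 12, 0) and mps_ok:
        return "mps" + gpu_str
    return ("cuda" + gpu_str) if cuda_ok else "cpu"
```

In real use the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, and the returned string would be handed to `torch.device()`.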

draco1023 commented 1 year ago

It seems that the local version of basicsr in CodeFormer is a modified version of https://github.com/XPixelGroup/BasicSR. After executing the following command in the stable-diffusion-webui directory, I finally got it to work.

cp repositories/CodeFormer/basicsr/utils/misc.py venv/lib/python3.10/site-packages/basicsr/utils/misc.py
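Since that copy silently overwrites the pip-installed file, it may be worth keeping a backup so the patch is easy to revert after a basicsr upgrade. A hypothetical helper (the name `patch_with_backup` is mine, not a webui function):

```python
# Hypothetical wrapper around the cp fix above: overwrite the installed
# misc.py with CodeFormer's copy, but first save a .bak of the original.
import shutil
from pathlib import Path


def patch_with_backup(src, dst):
    src, dst = Path(src), Path(dst)
    backup = dst.with_name(dst.name + ".bak")
    if dst.exists() and not backup.exists():
        shutil.copy2(dst, backup)  # preserve the pip-installed copy once
    shutil.copy2(src, dst)
    return backup
```

Called with the two paths from the `cp` command above, this produces `misc.py.bak` next to the installed file; copying the `.bak` back undoes the patch.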

VioletCoding commented 1 year ago

It seems that the local version of basicsr in CodeFormer is a modified version of https://github.com/XPixelGroup/BasicSR. After executing the following command in the stable-diffusion-webui directory, I finally got it to work.

cp repositories/CodeFormer/basicsr/utils/misc.py venv/lib/python3.10/site-packages/basicsr/utils/misc.py

Thanks! Works for me.