Open yiniesta opened 1 month ago
Building Docker image from environment in cog.yaml... ⚠ Stripping patch version from Python version 3.10.4 to 3.10
[stage-0 14/15] RUN sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py: 0.179 sed: can't read /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py: No such file or directory
Dockerfile:19
17 | RUN git clone https://github.com/philz1337x/stable-diffusion-webui-cog-init /stable-diffusion-webui
18 | RUN python /stable-diffusion-webui/init_env.py --skip-torch-cuda-test
19 | >>> RUN sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py
20 | WORKDIR /src
21 | EXPOSE 5000
ERROR: failed to solve: process "/bin/sh -c sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py" did not complete successfully: exit code: 2 ⅹ Failed to build Docker image: exit status 1
The above is my error log. I don't know how to solve it. I hope someone can help me. Thank you very much
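For anyone hitting the same build failure: the `sed` step errors out because the Dockerfile hard-codes the path for Python 3.10.4 while pyenv may have installed a different patch release, so the file simply isn't there. A minimal sketch for confirming this (the helper name is made up; the path comes from the log above — run it inside the image, e.g. via `docker run --rm <image> sh -c '...'`):

```shell
# check_clip_path reports whether clip.py exists at a given pyenv path.
# Hypothetical helper, only for diagnosing the "No such file" sed error.
check_clip_path() {
    path="$1"
    if [ -f "$path" ]; then
        echo "found: $path"
    else
        echo "missing: $path"
    fi
}

# The hard-coded path from the failing RUN line:
check_clip_path /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py
```

If it prints `missing:`, compare against `ls /root/.pyenv/versions` in the image to see which version pyenv actually installed.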
Can you describe your setup and the commands you are using while you try to run it?
These are the steps I take to run it on a GPU:
Thank you for following up on my question.
I reached the final step, running the command 'cog predict -i image="test.jpg"'. I deleted the following command:
but then another error occurred:
Building Docker image from environment in cog.yaml... ⚠ Stripping patch version from Python version 3.10.4 to 3.10
(clarity) root@ps:/usr/local/aigc/clarity-upscaler# more nohup.out
Building Docker image from environment in cog.yaml... ⚠ Stripping patch version from Python version 3.10.4 to 3.10
Starting Docker image cog-clarity-upscaler-base and running setup()...
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
Missing device driver, re-trying without GPU
Error response from daemon: page not found
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
import_hook.py tried to disable xformers, but it was not requested. Ignoring
Style database not found: /src/styles.csv
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
ControlNet preprocessor location: /src/extensions/sd-webui-controlnet/annotator/downloads
2024-10-28 10:46:48,785 - ControlNet - INFO - ControlNet v1.1.440
2024-10-28 10:46:48,955 - ControlNet - INFO - ControlNet v1.1.440
Loading weights [None] from /src/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors
Available checkpoints: [{'title': 'epicrealism_naturalSinRC1VAE.safetensors', 'model_name': 'epicrealism_naturalSinRC1VAE', 'hash': None, 'sha256': None, 'filename': '/src/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors', 'config': None}, {'title': 'flat2DAnimerge_v45Sharp.safetensors', 'model_name': 'flat2DAnimerge_v45Sharp', 'hash': None, 'sha256': None, 'filename': '/src/models/Stable-diffusion/flat2DAnimerge_v45Sharp.safetensors', 'config': None}, {'title': 'juggernaut_reborn.safetensors', 'model_name': 'juggernaut_reborn', 'hash': None, 'sha256': None, 'filename': '/src/models/Stable-diffusion/juggernaut_reborn.safetensors', 'config': None}]
2024-10-28 10:46:49,233 - ControlNet - INFO - ControlNet UI callback registered.
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
Creating model from config: /src/configs/v1-inference.yaml
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
creating model quickly: OSError
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/src/modules/initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "/src/modules/shared_items.py", line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/src/modules/sd_models.py", line 531, in get_sd_model
load_model()
File "/src/modules/sd_models.py", line 635, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Failed to create model quickly; will retry using slow method.
loading stable diffusion model: OSError
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/src/modules/initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "/src/modules/shared_items.py", line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/src/modules/sd_models.py", line 531, in get_sd_model
load_model()
File "/src/modules/sd_models.py", line 644, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Stable diffusion model failed to load
Applying attention optimization: InvokeAI... done.
Loading weights [None] from /src/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors
Creating model from config: /src/configs/v1-inference.yaml
Exception in thread Thread-5 (load_model):
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/src/modules/initialize.py", line 153, in load_model
devices.first_time_calculation()
File "/src/modules/devices.py", line 162, in first_time_calculation
linear(x)
TypeError: 'NoneType' object is not callable
creating model quickly: OSError
Traceback (most recent call last):
File "
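Separate from the tokenizer error, the log above also shows Docker falling back to CPU (`could not select device driver "" with capabilities: [[gpu]]`). A minimal host-side diagnostic, assuming a standard Linux setup (the command names are the usual NVIDIA/Docker tools, not part of cog):

```shell
# check_gpu_stack reports the two common causes of the "could not select
# device driver" error: no host NVIDIA driver, or no NVIDIA container runtime.
check_gpu_stack() {
    if command -v nvidia-smi >/dev/null 2>&1; then
        echo "host driver: present"
    else
        echo "host driver: missing"
    fi
    if docker info 2>/dev/null | grep -qi nvidia; then
        echo "container runtime: nvidia registered"
    else
        echo "container runtime: nvidia missing (install nvidia-container-toolkit)"
    fi
}
check_gpu_stack
```

If the runtime is missing, the standard fix is installing the NVIDIA Container Toolkit and running `sudo nvidia-ctk runtime configure --runtime=docker` followed by a Docker restart.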
Could you describe the steps you took?
I have successfully executed "python download_weights.py"
and installed cog:
sudo curl -o /usr/local/bin/cog -L https://github.com/replicate/cog/releases/latest/download/cog_`uname -s`_`uname -m`
sudo chmod +x /usr/local/bin/cog
and then executed: cog predict -i image="test.jpg"
That's all
Never had this bug, but I would try to fix this:
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
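One way to narrow this down is to check whether the tokenizer files ever made it into the Hugging Face cache (the usual cause is the container having no network access at load time). A minimal sketch, assuming the hub's default cache location and its `models--{org}--{name}` folder convention; `tokenizer_cached` is a made-up helper, not a transformers API:

```python
import os


def tokenizer_cached(repo_id, cache_dir=None):
    """Return True if a hub cache folder for repo_id exists locally.

    Only checks the directory layout; does not validate file contents.
    """
    cache_dir = cache_dir or os.path.expanduser("~/.cache/huggingface/hub")
    folder = "models--" + repo_id.replace("/", "--")
    return os.path.isdir(os.path.join(cache_dir, folder))


if __name__ == "__main__":
    repo = "openai/clip-vit-large-patch14"
    if tokenizer_cached(repo):
        print(f"{repo}: cached")
    else:
        # Pre-download while you still have network, then rebuild:
        print(f"{repo}: not cached; try: python -c \"from transformers import "
              f"CLIPTokenizer; CLIPTokenizer.from_pretrained('{repo}')\"")
```

If the folder is missing, pre-downloading the tokenizer on a machine with network access (or baking it into the image) should get past the OSError.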
Have the exact same problem haha, but I'm using WSL on Windows 11, so I'm thinking that could be the issue (cog is supposed to be for Mac/Linux?)
Okay, fixed that issue by changing the pyenv version in the sed command in cog.yaml:
sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/clip/clip.py
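To avoid chasing the patch version again, the sed line can glob the pyenv versions directory instead of hard-coding 3.10.4 or 3.10.15. A sketch, simulated under a temp root so it can be exercised anywhere (in the real cog.yaml the root would be /root/.pyenv):

```shell
# Simulate the image layout under a temp directory.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/versions/3.10.15/lib/python3.10/site-packages/clip"
echo "from pkg_resources import packaging" \
    > "$ROOT/versions/3.10.15/lib/python3.10/site-packages/clip/clip.py"

# Version-agnostic patch: let the shell find clip.py wherever pyenv put it.
for f in "$ROOT"/versions/*/lib/python*/site-packages/clip/clip.py; do
    [ -f "$f" ] && sed -i 's/from pkg_resources import packaging/import packaging/g' "$f"
done

cat "$ROOT/versions/3.10.15/lib/python3.10/site-packages/clip/clip.py"
```

This way the command keeps working even when cog strips the patch version and pyenv resolves 3.10 to a newer release.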
But now I'm getting an entirely different issue:
Starting Docker image cog-clarity-upscaler-base and running setup()...
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
import_hook.py tried to disable xformers, but it was not requested. Ignoring
Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 500: named symbol not found: str
Traceback (most recent call last):
File "/src/modules/errors.py", line 98, in run
code()
File "/src/modules/devices.py", line 76, in enable_tf32
device_id = (int(shared.cmd_opts.device_id) if shared.cmd_opts.device_id is not None and shared.cmd_opts.device_id.isdigit() else 0) or torch.cuda.current_device()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
_lazy_init()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 500: named symbol not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 332, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/predictor.py", line 75, in run_setup
predictor.setup()
File "/src/predict.py", line 46, in setup
initialize.imports()
File "/src/modules/initialize.py", line 34, in imports
shared_init.initialize()
File "/src/modules/shared_init.py", line 17, in initialize
from modules import options, shared_options
File "/src/modules/shared_options.py", line 3, in <module>
from modules import localization, ui_components, shared_items, shared, interrogate, shared_gradio_themes
File "/src/modules/interrogate.py", line 13, in <module>
from modules import devices, paths, shared, lowvram, modelloader, errors
File "/src/modules/devices.py", line 84, in <module>
errors.run(enable_tf32, "Enabling TF32")
File "/src/modules/errors.py", line 100, in run
display(task, e)
File "/src/modules/errors.py", line 68, in display
te = traceback.TracebackException.from_exception(e)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/traceback.py", line 572, in from_exception
return cls(type(exc), exc, exc.__traceback__, *args, **kwargs)
AttributeError: 'str' object has no attribute '__traceback__'
{"logger": "cog.server.runner", "timestamp": "2024-11-03T08:35:00.387164Z", "exception": "Traceback (most recent call last):\n File \"/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/runner.py\", line 223, in _handle_done\n f.result()\n File \"/root/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py\", line 451, in result\n return self.__get_result()\n File \"/root/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py\", line 403, in __get_result\n raise self._exception\ncog.server.exceptions.FatalWorkerException: Predictor errored during setup: 'str' object has no attribute '__traceback__'", "severity": "ERROR", "message": "caught exception while running setup"}
{"logger": "cog.server.http", "timestamp": "2024-11-03T08:35:00.388297Z", "exception": "Exception: setup failed", "severity": "ERROR", "message": "encountered fatal error"}
{"logger": "cog.server.http", "timestamp": "2024-11-03T08:35:00.388547Z", "severity": "ERROR", "message": "shutting down immediately"}
ⅹ Failed to get container status: exit status 1
@philz1337x you got any idea how to fix this?
Fixed the issue! Just had to update docker, and now everything runs locally.
I have been working on this for about 3 days, constantly downloading repositories and models and installing Python packages, but so far it hasn't started running. I want to know if anyone has successfully deployed it. Is the result comparable to the website (https://clarityai.co/dashboard)? Are their functions similar?