philz1337x / clarity-upscaler

Clarity AI | AI Image Upscaler & Enhancer - free and open-source Magnific Alternative
https://ClarityAI.co
GNU Affero General Public License v3.0
3.58k stars · 371 forks

I am new to cog, why do I get this? #8

Closed: yishuaidu closed this issue 2 weeks ago

yishuaidu commented 5 months ago

=> CACHED [stage-1 6/11] RUN --mount=type=bind,from=deps,source=/dep,target=/dep cp -rf /dep/ $(pyenv prefix)/lib/python/site-packages || true 0.0s
=> CACHED [stage-1 7/11] RUN git config --global --add safe.directory /src 0.0s
=> CACHED [stage-1 8/11] RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /stable-diffusion-webui && cd /stable-diffusion-webui && git checkout 310d6b9075c6edb3b884bd2a41 0.0s
=> CACHED [stage-1 9/11] RUN git clone https://github.com/LLSean/cog-A1111-webui /cog-sd-webui 0.0s
=> CACHED [stage-1 10/11] RUN python /cog-sd-webui/init_env.py --skip-torch-cuda-test 0.0s
=> CACHED [stage-1 11/11] WORKDIR /src 0.0s
=> preparing layers for inline cache 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:6143abae3dbac3d3e24884eed7e8c5893db329a2ee5d06fd0964a654322d9e24 0.0s
=> => naming to docker.io/library/cog-clarity-upscaler-base 0.0s

Starting Docker image cog-clarity-upscaler-base and running setup()...
Traceback (most recent call last):
  File "/root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/cog/server/worker.py", line 189, in _setup
    run_setup(self._predictor)
  File "/root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/cog/predictor.py", line 70, in run_setup
    predictor.setup()
  File "/src/predict.py", line 22, in setup
    initialize.imports()
  File "/src/modules/initialize.py", line 24, in imports
    from modules import paths, timer, import_hook, errors  # noqa: F401
  File "/src/modules/paths.py", line 34, in <module>
    assert sd_path is not None, f"Couldn't find Stable Diffusion in any of: {possible_sd_paths}"
AssertionError: Couldn't find Stable Diffusion in any of: ['/src/repositories/stable-diffusion-stability-ai', '.', '/']
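For context, the assertion that fires here is just a probe over a short list of candidate directories; a minimal sketch of that logic (names inferred from the traceback above; the sentinel directory checked is my assumption, not copied from the repo) looks like:

```python
import os

def find_sd_repo(possible_sd_paths):
    """Return the first candidate that looks like a Stable Diffusion
    checkout, or None if no candidate matches."""
    for path in possible_sd_paths:
        # The webui probe looks for a known marker inside the checkout;
        # using an "ldm" subdirectory as the marker is an assumption here.
        if os.path.isdir(os.path.join(path, "ldm")):
            return os.path.abspath(path)
    return None

# The build log above clones the repos under /stable-diffusion-webui, so every
# candidate under /src comes back empty and sd_path stays None, which is
# exactly what the AssertionError reports.
sd_path = find_sd_repo(["/src/repositories/stable-diffusion-stability-ai", ".", "/"])
```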

DrMWeigand commented 5 months ago

Same for me. I put the models in the root of /models and into the subdirectories as expected in an automatic1111 installation, but I'm still getting this error.

simbrams commented 5 months ago

Same issue here.

Inside cog.yaml, the command python cog-sd-webui/init_env.py --skip-torch-cuda-test downloads stable-diffusion-stability-ai and other folders into /stable-diffusion-webui/repositories, but the Docker WORKDIR is /src, so the code cannot find them at /src/repositories/stable-diffusion-stability-ai.
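One workaround consistent with that diagnosis (an untested sketch; both paths are taken from the build log above, not from the repo's own scripts) is to link the cloned repositories into the WORKDIR:

```shell
# Sketch: expose the repos that init_env.py cloned under the webui checkout
# at the location paths.py actually probes.
link_repositories() {
    # $1 = webui checkout dir, $2 = cog WORKDIR
    ln -sfn "$1/repositories" "$2/repositories"
}

# Example invocation for the paths in this issue:
# link_repositories /stable-diffusion-webui /src
```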

philz1337x commented 5 months ago

Try running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui before you run cog predict ...

simbrams commented 5 months ago

Hi @philz1337x, thank you for your work!

I think there are several things missing:

- ControlNet and MultiDiffusion need to be added to the extensions/ folder.
- The upscaler 4x-UltraSharp.pth belongs in models/ESRGAN.
- The negative embeddings go in /embeddings.
- The correct VAE (vae-ft-mse-840000-ema-pruned.safetensors) goes in models/VAE.
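A small preflight check for that layout can save a failed build; the sketch below just verifies that the assets listed above exist relative to the repo root. The relative paths are my reading of the list and a standard automatic1111 layout, not taken from the repo's code:

```python
import os

# Assets the list above says a working deployment needs; the exact relative
# paths (and the extension directory names) are assumptions.
REQUIRED_PATHS = [
    "extensions/sd-webui-controlnet",
    "extensions/multidiffusion-upscaler-for-automatic1111",
    "models/ESRGAN/4x-UltraSharp.pth",
    "models/VAE/vae-ft-mse-840000-ema-pruned.safetensors",
    "embeddings",
]

def missing_assets(root="."):
    """Return the required paths that are absent under `root`."""
    return [p for p in REQUIRED_PATHS if not os.path.exists(os.path.join(root, p))]
```

Running this before cog push and failing on a non-empty result would catch the missing-file errors reported in this thread before a lengthy build.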

Basically, if we're using CI/CD to push our model to Replicate, the current implementation won't work by default; it will require a bit more work.

It would be nice to have it ready to run/deploy by default :)

In the meantime I will try to make it run on my machine and Replicate. Will keep you posted!

philz1337x commented 5 months ago

Thanks for testing! I will extend the readme.

In the meantime try this to run the replicate model on your machine:

docker run -d -p 5000:5000 --gpus=all r8.im/philz1337x/clarity-upscaler@sha256:803dc4af7cdfda701188bd3e009edfd3966ea5c51f9629bc9664befc1829b865
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://link-to-image",
      "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
      "dynamic": 6,
      "sd_model": "juggernaut_reborn.safetensors [338b85bc4f]",
      "creativity": 0.485,
      "resemblance": 1.5,
      "scale_factor": 2,
      "tiling_width": 112,
      "tiling_height": 144,
      "negative_prompt": "(worst quality, low quality, normal quality:2) JuggernautNegative-neg"
    }
  }' \
  http://localhost:5000/predictions
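The same request can be issued from Python with only the standard library; a sketch follows, with the input values copied from the curl call above and the endpoint assumed to be the stock cog HTTP API:

```python
import json
import urllib.request

def build_payload():
    """Mirror of the JSON body in the curl example above."""
    return {
        "input": {
            "image": "https://link-to-image",
            "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
            "dynamic": 6,
            "sd_model": "juggernaut_reborn.safetensors [338b85bc4f]",
            "creativity": 0.485,
            "resemblance": 1.5,
            "scale_factor": 2,
            "tiling_width": 112,
            "tiling_height": 144,
            "negative_prompt": "(worst quality, low quality, normal quality:2) JuggernautNegative-neg",
        }
    }

def predict(url="http://localhost:5000/predictions"):
    """POST the payload to a locally running cog container."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload()).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With the docker container above running:
#   result = predict()
```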
simbrams commented 5 months ago

> Thanks for testing! I will extend the readme.
>
> In the meantime try this to run the replicate model on your machine:
>
> docker run -d -p 5000:5000 --gpus=all r8.im/philz1337x/clarity-upscaler@sha256:803dc4af7cdfda701188bd3e009edfd3966ea5c51f9629bc9664befc1829b865
> curl -s -X POST \
>   -H "Content-Type: application/json" \
>   -d $'{
>     "input": {
>       "image": "https://link-to-image",
>       "prompt": "masterpiece, best quality, highres, <lora:more_details:0.5> <lora:SDXLrender_v2.0:1>",
>       "dynamic": 6,
>       "sd_model": "juggernaut_reborn.safetensors [338b85bc4f]",
>       "creativity": 0.485,
>       "resemblance": 1.5,
>       "scale_factor": 2,
>       "tiling_width": 112,
>       "tiling_height": 144,
>       "negative_prompt": "(worst quality, low quality, normal quality:2) JuggernautNegative-neg"
>     }
>   }' \
>   http://localhost:5000/predictions

The goal isn't only to run it locally but also to deploy it on replicate as a private/public model with a different config and GPU :)

simbrams commented 5 months ago

@philz1337x Do you think you can deploy a copy as a public model with an A100 GPU? So that people can choose between A40 and A100 :)

lumore commented 5 months ago

> @philz1337x Do you think you can deploy a copy as a public model with an A100 GPU? So that people can choose between A40 and A100 :)

+1, Would love to get an option to run it on A100

philz1337x commented 5 months ago

Ok I just switched the model to A100 https://twitter.com/philz1337x/status/1772115599726178682

lumore commented 5 months ago

You're the best!

emirhanbilgic commented 5 months ago

I spent a day deploying it on replicate.com. The day was wasted, and the same error still appears:

AssertionError: Couldn't find Stable Diffusion in any of: ['/src/repositories/stable-diffusion-stability-ai', '.', '/']

Could you solve it? @simbrams

nivedwho commented 5 months ago

Has anyone figured out how to redeploy this repo?

@simbrams

simbrams commented 5 months ago

I haven't dug further since last time. I had some errors that looked like corrupted model weights.

Next time, I will probably download the remote cog container files from Replicate, copy all the files over, and try to run it again.

philz1337x commented 5 months ago

I am working on a download-weights.py file to make the whole deployment easier. The file is a work in progress but should fix some of the bugs mentioned here.

Achuttarsing commented 4 months ago

Hi @philz1337x! Congrats on your impressive work. I've been eager to try it out, but I've hit a snag when attempting to run the model. I've tried both the cog method and the SD webui method, and I'm encountering issues with both.

cog method:

SD webui method:

I'm feeling a bit stuck here. Do you have any suggestions on how to proceed? Thanks!

emirhanbilgic commented 4 months ago

Hey @philz1337x, could you solve the redeploying issue? download_weights still does not work.

e-cal commented 3 months ago

As of the current version of the repo, you need to add the VAE to download-weights.py:

# VAE
download_file(
    "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors?download=true",
    "models/VAE",
    "vae-ft-mse-840000-ema-pruned.safetensors"
)
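For anyone whose checkout predates that helper: a download_file with the signature used above can be sketched in a few lines. This is my stand-in, not the repo's actual implementation:

```python
import os
import urllib.request

def download_file(url, dest_dir, filename):
    """Fetch `url` into `dest_dir/filename`, skipping files already present.
    Signature matches the call in the snippet above; the body is a stand-in."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, filename)
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest
```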

You also need to clone the MultiDiffusion upscaler and ControlNet extensions into extensions/.

For the controlnet extension, the most recent version does not work with this repo, so clone tag 1.1.436: git clone --depth 1 --branch 1.1.436 git@github.com:Mikubill/sd-webui-controlnet.git

e-cal commented 3 months ago

Also make sure Docker has access to your GPU. For me, that means using nvidia-docker and running cog with sudo.

For fellow Nix users:

  virtualisation.docker = {
    enable = true;
    enableOnBoot = false;
    enableNvidia = true;
    rootless = {
      enable = true;
      setSocketVariable = false;
      daemon.settings = {
        runtimes = {
          nvidia = {
            path = "${pkgs.nvidia-docker}/bin/nvidia-container-runtime";
          };
        };
      };
    };
  };

philz1337x commented 2 weeks ago

@e-cal added a great download-weights.py file to make all this easier!