Open sam-greenwood opened 2 years ago
I'm also hitting this same error, haven't found a solution for it yet.
Make sure you are copying the correct endpoint URL and adding something like "/api/img2img" to the end (the full URL should look something like https://xxxx.ngrok.io/api/img2img, where xxxx is whatever subdomain your tunnel was given).
I am also having the same problem
Same issue for me...
> Make sure you are copying the correct endpoint URL and adding something like "/api/img2img" to the end.
I have double checked and I have been doing this. Sorry I can't shed any more light on what might be causing the issue.
Hi, I don't know if this issue is still relevant, but I had the same problem and managed to solve it.
The BadZipFile error occurs because there is an error in the API backend, so no valid zip file is passed back to Krita, only an error message. In the ipynb backend (https://github.com/nousr/koi/blob/main/koi_colab_backend.ipynb?short_path=8eefaeb#L223), while using the pipe, it tries to get the image from the key ["sample"], which doesn't exist in the output of the pipe. According to https://huggingface.co/CompVis/stable-diffusion-v1-4, pipe(...).images[0] returns the correct output image. Changing the code accordingly should solve the issue.
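Concretely, the one-line change in the notebook's img2img handler looks like this (just a sketch; variable names follow the notebook, and the other pipe arguments are unchanged):

# before: older diffusers versions returned a dict, so the notebook indexed the "sample" key
return_image = pipe(init_image=img, prompt=prompt)["sample"][0]

# after: current diffusers returns an output object whose PIL images live in .images
return_image = pipe(init_image=img, prompt=prompt).images[0]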
hey! do you think you could open a PR to fix this, @qqmok? It would be a great help :)
I'm pretty much a noob at these things. Are you saying that I have to change this?
import torch
from flask import Flask, Response, request, send_file
from PIL import Image
from io import BytesIO
from torch import autocast
from diffusers import StableDiffusionImg2ImgPipeline
from click import secho
from zipfile import ZipFile

from flask_ngrok import run_with_ngrok

secho("Loading Model...", fg="yellow")

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token=True,
    revision="fp16",
    torch_dtype=torch.float16,
).to("cuda")

secho("Finished!", fg="green")

app = Flask(__name__)

def seed_everything(seed: int):
    import random, os
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = True

def get_name(prompt, seed):
    return f'{prompt}-{seed}'

@app.route("/api/img2img", methods=["POST"])
def img2img():
    global pipe
    r = request
    headers = r.headers
    data = r.data
    buff = BytesIO(data)
    img = Image.open(buff).convert("RGB")
    seed = int(headers["seed"])
    prompt = headers['prompt']
    print(r.headers)
    zip_stream = BytesIO()
    with ZipFile(zip_stream, 'w') as zf:
        for index in range(int(headers['variations'])):
            variation_seed = seed + index
            seed_everything(variation_seed)
            with autocast("cuda"):
                return_image = pipe(
                    init_image=img,
                    prompt=prompt,
                    strength=float(headers["sketch_strength"]),
                    guidance_scale=float(headers["prompt_strength"]),
                    num_inference_steps=int(headers["steps"]),
                ).images[0]  # <-- replaced ["sample"][0] with .images[0] as suggested
            return_bytes = BytesIO()
            return_image.save(return_bytes, format="JPEG")
            return_bytes.seek(0)
            zf.writestr(get_name(prompt, variation_seed), return_bytes.read())
    zip_stream.seek(0)
    return send_file(zip_stream, mimetype="application/zip")

run_with_ngrok(app)
app.run()
It didn't work for me, but I don't know why.
Hello. I am new to Krita and koi. I just installed it using Colab and I'm getting this BadZipFile error. Is this thread still active? Is there a fix for this? Any way to help? Much appreciated in advance :)
Same error, but that's just what you see in the "frontend". The actual error, I think, is this one on the server:
ERROR:__main__:Exception on /api/img2img [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 2529, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "<ipython-input-4-c3399f45cf3d>", line 75, in img2img
    strength=float(headers["sketch_strength"]),
  File "/usr/local/lib/python3.10/dist-packages/werkzeug/datastructures/headers.py", line 493, in __getitem__
    return self.environ[f"HTTP_{key}"]
KeyError: 'HTTP_SKETCH_STRENGTH'
The code causing the issue is this:
with autocast("cuda"):
    return_image = pipe(
        init_image=img,
        prompt=prompt,
        strength=float(headers["sketch_strength"]),
        guidance_scale=float(headers["prompt_strength"]),
        num_inference_steps=int(headers["steps"]),
    ).images[0]
I don't know how to fix it.
Same issue:
Bing helped me change the script a little bit to make it work:
import torch
from flask import Flask, Response, request, send_file
from PIL import Image
from io import BytesIO
from torch import autocast
from diffusers import StableDiffusionImg2ImgPipeline
from click import secho
from zipfile import ZipFile

# the following line is specific to remote environments (like google colab)
from flask_ngrok import run_with_ngrok

# Load the model for use (this may take a minute or two...or three)
secho("Loading Model...", fg="yellow")

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token=True,
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False,
).to("cuda")

secho("Finished!", fg="green")

# Start setting up flask
app = Flask(__name__)

# Define a function to help us "control the randomness"
def seed_everything(seed: int):
    import random, os
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = True

def get_name(prompt, seed):
    return f'{prompt}-{seed}'

# Define one endpoint "/api/img2img" for us to communicate with
@app.route("/api/img2img", methods=["POST"])
def img2img():
    global pipe
    r = request
    headers = r.headers
    data = r.data
    buff = BytesIO(data)
    img = Image.open(buff).convert("RGB")
    # fall back to defaults so a missing header no longer raises the
    # KeyError from the traceback above
    seed = int(headers.get("seed", 1337))
    prompt = headers.get('prompt', 'error message')
    print(r.headers)
    zip_stream = BytesIO()
    with ZipFile(zip_stream, 'w') as zf:
        for index in range(int(headers.get('variations', 32))):
            variation_seed = seed + index
            seed_everything(variation_seed)
            secho("Loading image...", fg="yellow")
            with autocast("cuda"):
                return_image = pipe(
                    image=img,
                    prompt=prompt,
                    strength=float(headers.get("sketch_strength", 0.4)),
                    guidance_scale=float(headers.get("prompt_strength", 7.5)),
                    num_inference_steps=int(headers.get("steps", 32)),
                ).images[0]
            secho("Got Image!", fg="green")
            return_bytes = BytesIO()
            return_image.save(return_bytes, format="JPEG")
            return_bytes.seek(0)
            zf.writestr(get_name(prompt, variation_seed), return_bytes.read())
    zip_stream.seek(0)
    return send_file(zip_stream, mimetype="application/zip")

run_with_ngrok(app)
app.run()
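For what it's worth, you can sanity-check the backend without Krita using a small client like the one below. This is just a sketch: the ngrok URL is a placeholder for whatever flask_ngrok prints when the server starts, and the header names simply mirror what the endpoint reads.

import requests
from io import BytesIO
from zipfile import ZipFile
from PIL import Image

# placeholder: substitute the URL flask_ngrok prints on startup
URL = "https://xxxx.ngrok.io/api/img2img"

# any RGB image works as the init image; a blank canvas is enough for a smoke test
buff = BytesIO()
Image.new("RGB", (512, 512), "white").save(buff, format="PNG")

resp = requests.post(
    URL,
    data=buff.getvalue(),
    headers={
        "prompt": "a mountain landscape",
        "seed": "1337",
        "variations": "1",
        "sketch_strength": "0.4",
        "prompt_strength": "7.5",
        "steps": "32",
    },
)

# the server should answer with a zip of JPEGs; a BadZipFile raised here means it
# returned an error message instead, which is exactly what Krita chokes on
with ZipFile(BytesIO(resp.content)) as zf:
    print(zf.namelist())

If the zip opens and lists an image, the backend is fine and the problem is on the Krita side.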
Tried running the example notebook to use Colab for the GPU compute. Setup of the server works just fine; I opened a fresh install of Krita, pasted in the address for the server, and clicked 'dream' with the default mountain landscape prompt. An error was produced on both Colab and Krita (see below).
It seems that the inference runs fine, so the stable diffusion code and the connection to Krita appear to work. The error seems to occur when the generated image is passed back to Krita. I'm running Krita version 5.1.1 (AppImage) and my OS is Fedora 36 with Linux kernel 5.19. The Colab notebook is an unmodified copy of the one included in the koi repo (https://github.com/nousr/koi/blob/main/koi_colab_backend.ipynb).
I note that in the Krita error message it is using the miniconda Python installed on my system; could that be an issue? Any help appreciated!
Colab error message:
And then this error on krita: