mix1009 / sdwebuiapi

Python API client for AUTOMATIC1111/stable-diffusion-webui
MIT License
1.36k stars 182 forks

How do you use a custom image in img2img #4

Closed SolusGod closed 1 year ago

SolusGod commented 1 year ago

First of all, thanks a lot for taking the time to make this project as it has been instrumental in making my app.

As my background is not in coding (it's a miracle I made it this far in my development), I'm wondering how you reference a custom image and plug it into the img2img function. Plugging my custom image in like api.img2img(images=[result1.image]) causes it to throw an error.

(attached screenshot: Proof)

Thank you again.

mix1009 commented 1 year ago

Here's simple code that reads an image file, calls img2img, and saves the output image.

from PIL import Image
img = Image.open('image.png')
img = img.resize((512,512)) # if it's not 512x512

r = api.img2img(images=[img], prompt='your_prompt', cfg_scale=8.5, denoising_strength=0.2)
outimg = r.image
outimg.save('output.png')

A Python (Jupyter) notebook is a better place to program if you want to see the results interactively.

SolusGod commented 1 year ago

Woah, I was just about to type that I figured it out, but your reply was faster haha! I ran into a more puzzling problem, however: the img2img output from scripting is severely degraded in quality compared to doing it manually.

Here's the base image that I plug into img2img: (attached image: UVs)

Here's what I get in return when I run the API: (attached image: NewPath)

Here's what I get when I do it manually (same steps, denoise factor, etc.): (attached image: 00761-1916565298-a fantasy book with intricate and complex art, focused, ((flat lighting)), by greg rutkowski)

And finally here's my full code.

import json
import requests
import io
import base64
import pip
import bpy
from PIL import Image, PngImagePlugin

pip.main(['install', 'webuiapi', '--user'])

import webuiapi 

# create API client
api = webuiapi.WebUIApi()

image = Image.open("C:\\Users\\user\\Desktop\\My Addon Motion\\UVs.png")

result2 = api.img2img(
images=[image],
prompt="a fantasy book with intricate and complex art, focused, ((flat lighting)), by greg rutkowski",
steps=50,
seed=-1,
cfg_scale=7,
denoising_strength=0.9,
)

outimg = result2.image
outimg.save('output.png')

Is the issue my code or is this a limitation of the integration? Your knowledge would be greatly appreciated!

mix1009 commented 1 year ago

Check the sampler: the default sampler is "Euler a". Also try using the same seed, and check whether the result still differs.

If there is a difference, it might be due to extensions or scripts. Currently there is no way to use scripts or extensions through the API.
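To rule out parameter drift between the UI run and the API run, it helps to pin every generation setting explicitly in one place. A minimal sketch (the keyword names mirror webuiapi's img2img() signature; sampler_name is an assumption — some versions use sampler_index instead):

```python
def pinned_img2img_kwargs(img, prompt, seed):
    """Collect every setting the UI also exposes, so a manual run and an
    API run can be compared one-to-one."""
    return dict(
        images=[img],
        prompt=prompt,
        seed=seed,               # a fixed seed, not -1 (random)
        sampler_name="Euler a",  # must match the sampler selected in the UI
        steps=50,
        cfg_scale=7,
        denoising_strength=0.9,
    )

# With a running webui server:
# result = api.img2img(**pinned_img2img_kwargs(image, "your prompt", 4283568240))
```

If the outputs still differ with every one of these pinned, the difference is coming from somewhere other than the generation parameters.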

SolusGod commented 1 year ago

I made sure that the sampler, along with every other visible variable, is matched in both cases. I also used the same seed. Here's what I got.

Manually: (attached image: 00767-4283568240-a fantasy book with intricate and complex art, focused, ((flat lighting)), by greg rutkowski)

Via API: (attached image: output)

It's almost like the API output is being drained of color. I thought maybe it was a conversion problem (the conversion of the image into the PngImagePlugin class), but I did some tests, and loading and unloading through the PngImagePlugin class did not affect the quality of the image. I also ran the image through the img2img process with a very small denoising value (thinking maybe it was being distorted somewhere along the way), but that was also not the case.
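That save/reload test can be sketched like this (using a small synthetic image in place of the real UVs.png):

```python
import io
from PIL import Image

# A synthetic RGB image standing in for the real input file.
src = Image.new("RGB", (64, 64))
src.putdata([(x % 256, y % 256, (x * y) % 256) for y in range(64) for x in range(64)])

# Save to PNG in memory, then load it back through PIL.
buf = io.BytesIO()
src.save(buf, "PNG")
buf.seek(0)
reloaded = Image.open(buf)

# PNG is lossless, so every pixel should survive the round trip.
assert list(src.getdata()) == list(reloaded.getdata())
```

Since this passes, the PIL save/load step isn't where any quality is being lost.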

I'm thinking maybe it's the quality of the input image. So I did another test with an image produced from txt2img; here's what I got.

Manually: (attached image: Manual)

Via API: (attached image: API)

Same seed and same everything that's a public variable (steps, seed, denoise, etc.). This gave a better output than my initial image, but you can still see that there's color degradation, as well as some objects missing from the API output compared to doing it by hand (keep in mind that it was the same step count).

It seems that, for some reason, calling Automatic1111 through the API does not give the same result as doing it by hand, despite the same variables and no external extensions or scripts, just the 512 Depth Model.

Edit: Apologies for the long winded discussions. My only intention is to be somewhat helpful!

mix1009 commented 1 year ago

Sorry, I don't have the answer for now.

Maybe something is lossy while converting the image to base64 format?

The API transmits images in base64 format. The webuiapi library converts PIL Images to base64 when sending to the API, and converts returned base64 image(s) back to PIL Images.
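You can check whether the base64 step itself is lossy with the same kind of PNG-to-base64 conversion the library performs (the helper names here are just for illustration):

```python
import base64
import io
from PIL import Image

def pil_to_b64(img):
    # PIL Image -> PNG bytes -> base64 string (what is sent to the API).
    with io.BytesIO() as buf:
        img.save(buf, "PNG")
        return base64.b64encode(buf.getvalue()).decode("utf-8")

def b64_to_pil(s):
    # base64 string -> PNG bytes -> PIL Image (what comes back).
    return Image.open(io.BytesIO(base64.b64decode(s)))

img = Image.new("RGB", (32, 32), (120, 200, 40))
roundtrip = b64_to_pil(pil_to_b64(img))

# base64 is a pure re-encoding of the bytes, so nothing should change.
assert list(img.getdata()) == list(roundtrip.getdata())
```

base64 only re-encodes the PNG bytes, so by itself it can't degrade the image.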

mix1009 commented 1 year ago

Could you try a model other than the depth model? I think the depth model is treated somewhat differently in automatic1111. It could be a bug in handling the depth model.

If it doesn't work, I suggest you use a web automation tool such as Playwright. I use it to automate some of my workflow with webui.

SolusGod commented 1 year ago

I think maybe that's where the loss is happening (converting to base64), but I'm no coder, so you'd know better.

I figured out how to invoke an img2img function inside of Blender that gives me identical results with no loss of quality, the same as I would've gotten through the client manually. But I had to go back to the Automatic1111 native API, so I'm not using the webuiapi module to achieve this.

I'll leave you the full code below. Maybe you can use it to figure out what's wrong and improve the module! Thank you for your time Mix!

Edit: I'm getting consistent results with the below script with the Depth Model, so I don't think the model is the reason!

import json
import requests
import io
import base64
from PIL import Image, PngImagePlugin
from io import BytesIO

url = "http://127.0.0.1:7860"

pil_image = Image.open("C:\\Users\\User\\Desktop\\My Addon Motion\\UVs.png")
#Convert image to base64
def pil_to_base64(pil_image):
    with BytesIO() as stream:
        pil_image.save(stream, "PNG", pnginfo=None)
        base64_str = str(base64.b64encode(stream.getvalue()), "utf-8")
        return "data:image/png;base64," + base64_str

payload = {
    "init_images": [pil_to_base64(pil_image)],#Plug converted Image to Payload
    "prompt": "a fantasy book with intricate and complex art, focused, ((flat lighting)), by greg rutkowski",
    #"negative_prompt": "",
    "steps": 150,
    "denoising_strength": 0.95,
    #"mask": "string",
    #"mask_blur": 4,
    #"inpainting_fill": 0,
    #"inpaint_full_res": True,
    #"inpaint_full_res_padding": 0,
    #"inpainting_mask_invert": 0,
    #"initial_noise_multiplier": 0,
    #"styles": ["string"],
    "seed": 4283568240,
    "sampler_name": "Euler a",
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    #"restore_faces": False,
    #"tiling": False,
    #"override_settings": {},
    #"override_settings_restore_afterwards": True,
    #"include_init_images": False
}

response = requests.post(url=f'{url}/sdapi/v1/img2img', json=payload)

r = response.json()

for i in r['images']:
    # Take the part after a possible "data:image/png;base64," prefix
    # ([-1] also works when there is no prefix, unlike [0]).
    image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[-1])))

    png_payload = {
        "image": "data:image/png;base64," + i
    }
    response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)

    pnginfo = PngImagePlugin.PngInfo()
    pnginfo.add_text("parameters", response2.json().get("info"))
    image.save('output.png', pnginfo=pnginfo)

mix1009 commented 1 year ago

I'll try to check the code you provided.

Thank you.

mix1009 commented 1 year ago

I checked your code and compared the image-to-base64 and base64-to-image conversions between your code and the library's code. Both produced exact copies of the image and the base64 string.

Not sure what the problem is.

mix1009 commented 1 year ago

I'm closing the issue for now. Please reopen or open a new issue if the error persists.