Stability-AI / stability-sdk

SDK for interacting with stability.ai APIs (e.g. stable diffusion inference)
https://platform.stability.ai/
MIT License

Api call returning error when different images are used #270

Closed master-senses closed 6 months ago

master-senses commented 6 months ago

Hello! I'm trying to use img to img. This is the code I'm using:

import io
import warnings

from PIL import Image
from stability_sdk import client
import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation

# Assumes stability_api was created earlier, e.g.:
# stability_api = client.StabilityInference(key=STABILITY_KEY, engine="stable-diffusion-xl-1024-v1-0")

# Set up our initial generation parameters.
answers = stability_api.generate(
    prompt=[
        generation.Prompt(
            text="top view, ((black room)), white background, 3D image, smooth edges, royal design, product view, ultra quality",
            parameters=generation.PromptParameters(weight=1)),
        generation.Prompt(
            text="blurry, non-white background, drawing, intense light, multiple rings, multiple diamonds, rough texture, text",
            parameters=generation.PromptParameters(weight=-2)),
    ],
    init_image=Image.open("/content/compress_20240104_153623__1_.jpg"), # Assign our previously generated img as our Initial Image for transformation.
    start_schedule=0.9, # Set the strength of our prompt in relation to our initial image.
    seed=56783, # If attempting to transform an image that was previously generated with our API,
                    # initial images benefit from having their own distinct seed rather than using the seed of the original image generation.
    steps=35, # Amount of inference steps performed on image generation. Defaults to 30.
    cfg_scale=11.0, # Influences how strongly your generation is guided to match your prompt.
                   # Setting this value higher increases the strength in which it tries to match your prompt.
                   # Defaults to 7.0 if not specified.
    width=512, # Generation width, defaults to 512 if not included.
    height=512, # Generation height, defaults to 512 if not included.
    sampler=generation.SAMPLER_K_DPMPP_2M, # Choose which sampler we want to denoise our generation with.
                                                 # Defaults to k_dpmpp_2m if not specified. Clip Guidance only supports ancestral samplers.
                                                 # (Available Samplers: ddim, plms, k_euler, k_euler_ancestral, k_heun, k_dpm_2, k_dpm_2_ancestral, k_dpmpp_2s_ancestral, k_lms, k_dpmpp_2m, k_dpmpp_sde)
    guidance_preset=generation.GUIDANCE_PRESET_FAST_BLUE # Enables CLIP Guidance.
                                                         # (Available Presets: _NONE, _FAST_BLUE, _FAST_GREEN)
)

# Set up our warning to print to the console if the adult content classifier is tripped.
# If adult content classifier is not tripped, display generated image.
for resp in answers:
    for artifact in resp.artifacts:
        if artifact.finish_reason == generation.FILTER:
            warnings.warn(
                "Your request activated the API's safety filters and could not be processed. "
                "Please modify the prompt and try again.")
        if artifact.type == generation.ARTIFACT_IMAGE:
            img2 = Image.open(io.BytesIO(artifact.binary)) # Set our resulting initial image generation as 'img2' to avoid overwriting our previous 'img' generation.
            display(img2)
            display(Image.open("/content/Screenshot_2024-01-04_at_2.58.17 PM.jpg"))
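One thing worth checking before blaming the file: for img2img, what matters is the image's pixel dimensions, not its file size in kilobytes, and the snippet requests a 512x512 generation. A small helper like the one below (a hypothetical sketch, not part of the SDK; `load_init_image` and its defaults are my own) would normalize any init image to the requested dimensions before passing it to `generate`:

```python
from PIL import Image

def load_init_image(source, size=(512, 512)):
    """Open an image and resize it to the generation dimensions.

    Pixel dimensions (not file size in KB) are what img2img cares about;
    (512, 512) here matches the width/height passed to generate().
    """
    img = Image.open(source).convert("RGB")
    return img.resize(size, Image.LANCZOS)
```

Usage would then be `init_image=load_init_image("/content/compress_20240104_153623__1_.jpg")` in place of the bare `Image.open(...)` call.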

When I change the image I'm using, I get this error:

_MultiThreadedRendezvous                  Traceback (most recent call last)
[<ipython-input-44-b8576d161d46>](https://localhost:8080/#) in <cell line: 26>()
     24 # Set up our warning to print to the console if the adult content classifier is tripped.
     25 # If adult content classifier is not tripped, display generated image.
---> 26 for resp in answers:
     27     for artifact in resp.artifacts:
     28         if artifact.finish_reason == generation.FILTER:

2 frames
[/usr/local/lib/python3.10/dist-packages/grpc/_channel.py](https://localhost:8080/#) in _next(self)
    879                     raise StopIteration()
    880                 elif self._state.code is not None:
--> 881                     raise self
    882 
    883 

_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "connection error: desc = "transport: Error while dialing: dial tcp: lookup stable-diffusion-xl-1024-v1-0.tenant-stabilityai-corweave.svc.tenant.chi.local on 10.135.192.10:53: no such host""
    debug_error_string = "UNKNOWN:Error received from peer ipv4:104.18.34.224:443 {created_time:"2024-01-04T12:07:19.289378002+00:00", grpc_status:14, grpc_message:"connection error: desc = \"transport: Error while dialing: dial tcp: lookup stable-diffusion-xl-1024-v1-0.tenant-stabilityai-corweave.svc.tenant.chi.local on 10.135.192.10:53: no such host\""}"

Both images are the same size (59 kb). I'm not sure, what im doing wrong. If it helps, the image where i get this error from was taken on my phone, and the rest were taken from the internet. I intially thought it was the size of the image that gave me the error, so i compressed it to 59 kb
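For what it's worth, the traceback itself suggests this isn't an image problem at all: `StatusCode.UNAVAILABLE` with a "no such host" DNS lookup failure is a transient server-side error, raised while iterating `answers` (gRPC streams are lazy, so the call only fails when consumed). A common mitigation for transient gRPC failures is to retry with exponential backoff. A minimal generic sketch (the helper below is hypothetical, not part of the SDK; with stability-sdk the exception to catch would be `grpc.RpcError` where `e.code()` is `grpc.StatusCode.UNAVAILABLE`):

```python
import time

def retry_transient(fn, retries=3, base_delay=1.0, transient=(ConnectionError,)):
    """Call fn(), retrying on transient errors with exponential backoff.

    For the stability-sdk case, pass transient=(grpc.RpcError,) and consume
    the whole response stream inside fn, since the UNAVAILABLE error is
    raised during iteration, not when generate() is called.
    """
    for attempt in range(retries):
        try:
            return fn()
        except transient:
            if attempt == retries - 1:
                raise  # out of retries; surface the original error
            time.sleep(base_delay * 2 ** attempt)
```

A usage sketch would be `artifacts = retry_transient(lambda: [a for r in stability_api.generate(...) for a in r.artifacts])`, so the stream is fully consumed inside the retried callable.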