Inpainting via the REST API does not seem to understand the context of the image
For example, in the screenshot provided I would expect the generator to understand that we have a deep blue background, but it shows no interest in that whatsoever (in several other images, too, it takes no inspiration from the init_image). At this point I have used maybe 10-15 APIs and local and cloud-based Stable Diffusion models, and this lack of context in inpainting only seems to happen here. Is it me? Am I misunderstanding something?
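For reference, here is a minimal sketch of the kind of request I am making. The endpoint path, field names, and header are my assumptions about a Stability-style inpainting route, not confirmed from the docs; the helper only assembles the multipart form so the shape of the call is clear.

```python
import io

# Assumed endpoint; adjust to the actual route documented for your API version.
API_URL = "https://api.stability.ai/v2beta/stable-image/edit/inpaint"

def build_inpaint_request(image_bytes: bytes, mask_bytes: bytes, prompt: str):
    """Assemble the multipart form for an inpainting call (nothing is sent here).

    image_bytes: the init_image whose context (e.g. the deep blue background)
                 the generator should respect.
    mask_bytes:  the mask marking the region to repaint.
    """
    files = {
        "image": ("init.png", io.BytesIO(image_bytes), "image/png"),
        "mask": ("mask.png", io.BytesIO(mask_bytes), "image/png"),
    }
    data = {
        "prompt": prompt,       # describes the fill; context should come from `image`
        "output_format": "png",
    }
    return files, data
```

With `requests`, the actual call would then look roughly like `requests.post(API_URL, headers={"Authorization": f"Bearer {key}"}, files=files, data=data)` — again, header and field names assumed.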