I noticed that in the example provided in stable-diffusion.cpp, a context is created and used for a single image generation (either image-to-image or text-to-image). I wondered if it's possible to reuse the same context for multiple generations without recreating it each time.
I tried reusing the context for multiple generations but encountered a segmentation fault in both image-to-image and text-to-image. For example, I copied and pasted `results = txt2img(...);` after the original generation code section.
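For reference, here is a minimal sketch of what I'm attempting (arguments elided as above; `new_sd_ctx`, `txt2img`, and `free_sd_ctx` are the names from `stable_diffusion.h`, other details omitted):

```cpp
// Create the context once...
sd_ctx_t* sd_ctx = new_sd_ctx(/* model path, options, ... */);

// First generation: works as in the example.
sd_image_t* results = txt2img(sd_ctx /*, prompt, ... */);
// ... save/consume results, free the returned images ...

// Second generation with the same context: segfaults here.
results = txt2img(sd_ctx /*, prompt, ... */);

free_sd_ctx(sd_ctx);
```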
Could you provide an example of a context that has been reused for multiple generations, or is this approach not supported?