Closed vanewu closed 1 year ago
@rajeevsrao ^ ^
+1 for this. Could you share the code or make a PR? I've also successfully compiled a VAE encoder engine, but the encoding itself had issues: it was messing up the resolution somehow, overlapping different areas of the original image. I've also separated the onnx export and engine compile scripts. You can check my code here: https://github.com/venetanji/videosd
@vanewu @venetanji the new demoDiffusion code in the release/8.6 branch supports img2img and inpainting as well.
Description
Currently in this demo-diffusion.py, the `VAE` model conversion of `StableDiffusion` is only supported for the `VAE Decoder`, so only the `Text2Image` pipeline is supported in this program. In daily use, the `Image2Image` pipeline is also often needed. I made some modifications based on demo-diffusion.py: I added the `Image2Image` part and adjusted the model conversion and inference code. At present, the conversion completes successfully and a good result image is obtained. Can I submit this modification to the current demo?
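For context, the core difference in an Image2Image pipeline is that the init image is encoded to latents by the VAE encoder and then forward-noised to the chosen strength before denoising begins, which is what an `add_noise` scheduler method provides. A minimal sketch of the noising step, assuming the scaled-linear beta schedule commonly used by Stable Diffusion schedulers (all names and shapes here are illustrative, not the demo's actual API):

```python
import numpy as np

def make_alphas_cumprod(num_steps=1000, beta_start=0.00085, beta_end=0.012):
    # Scaled-linear beta schedule, as commonly used by Stable Diffusion schedulers.
    betas = np.linspace(beta_start**0.5, beta_end**0.5, num_steps) ** 2
    return np.cumprod(1.0 - betas)

def add_noise(latents, noise, timestep, alphas_cumprod):
    # Forward-diffuse clean VAE latents to the noise level of `timestep`;
    # this is how img2img seeds the denoising loop from an init image.
    a = alphas_cumprod[timestep]
    return np.sqrt(a) * latents + np.sqrt(1.0 - a) * noise

alphas = make_alphas_cumprod()
latents = np.zeros((1, 4, 64, 64), dtype=np.float32)  # a 512x512 image encodes to 64x64 latents
noise = np.random.randn(*latents.shape).astype(np.float32)
# A higher timestep means more noise, i.e. the result follows the init image less closely.
noisy = add_noise(latents, noise, timestep=500, alphas_cumprod=alphas)
```

The denoising loop then starts from `noisy` at that timestep instead of from pure noise, which is the only structural change relative to the Text2Image path.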
Some modifications

- Added `VAEEncoder` and `VAEDecoder`
- Added an `add_noise` method to the scheduler in utilities.py
- Added a `memory_pool_limits` parameter to the `Engine` build method in utilities.py

Rest of the items:
Additionally, while using this script I found that it might be convenient to support optional, selective conversion of the models. For example, we may often want to reconvert only the `Unet` model with different parameters while keeping the `VAE` or `CLIP` engines unchanged. Providing an optional name parameter to select which models to convert seems more convenient. When
`force_export` is true, you may want to re-export the ONNX model, but the existence of `onnx_opt_path` will make it impossible to re-export and re-optimize the ONNX model. Does this logic need to be modified so that, when `force_export` is true, the ONNX model is forcibly exported even if the file already exists?

Relevant Files
demo/diffusion
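Regarding the selective-conversion and `force_export` points above, both could be handled with a small amount of gating logic. A sketch under the assumption that export is guarded by a single check; `should_export`, `select_models`, and the `names` parameter are hypothetical names for illustration, not the demo's actual API:

```python
import tempfile
from pathlib import Path

def should_export(onnx_opt_path: Path, force_export: bool) -> bool:
    # Suggested fix: when force_export is True, re-export even if the
    # optimized ONNX file already exists; otherwise reuse the cached file.
    return force_export or not onnx_opt_path.exists()

def select_models(models: dict, names=None) -> dict:
    # Hypothetical `names` filter: convert only the requested models
    # (e.g. names=["unet"]) and leave the other engines untouched.
    return dict(models) if names is None else {n: models[n] for n in names}

# Demo: an existing optimized model is reused unless force_export is set.
with tempfile.TemporaryDirectory() as d:
    opt = Path(d) / "unet.opt.onnx"
    opt.write_bytes(b"")  # pretend a previous run left an optimized model
    skipped = not should_export(opt, force_export=False)
    forced = should_export(opt, force_export=True)

models = {"clip": None, "unet": None, "vae": None}
only_unet = select_models(models, names=["unet"])
```

With this split, `force_export=True` always wins over the cached `onnx_opt_path`, and passing `names=["unet"]` reconverts only the UNet while the VAE and CLIP engines stay as they are.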