anton-l opened 1 year ago
cc @mfuntowicz @echarlaix here - it would be good to not replicate efforts of optimum here
talked to @echarlaix and @JingyaHuang, we'll make this a part of the Optimum integration (moving diffusers.OnnxRuntimeModel to something like optimum.DiffusersModel): https://github.com/huggingface/optimum/pull/447
Context
Currently OnnxStableDiffusionPipeline performs unnecessary tensor casting between torch and numpy. The downsides of that are:

- code complexity: the developer has to keep track of which values are numpy arrays and which are torch tensors (e.g. around scheduler.step()), and when to convert them back.
- latency: ideally the UNet inputs/outputs (latents) should stay on the same device (e.g. on GPU with CUDAExecutionProvider) between sampling iterations.

Proposed solution
Take advantage of the IO binding mechanism in ONNX Runtime to bind the pytorch tensors to model inputs and outputs and keep them on the same device. For more details see: https://onnxruntime.ai/docs/api/python/api_summary.html#data-on-device
Standalone example of torch IO binding:
This functionality can either be integrated into OnnxRuntimeModel or into each of the OnnxPipelines individually. For easier maintenance I would go with the first option.