I'm working on adding initial XLA support to ComfyUI https://github.com/comfyanonymous/ComfyUI/pull/5657 and would greatly appreciate any feedback or validation from the community. Specifically, I'm looking for:
- Testing across different XLA-compatible hardware (e.g., TPUs or GPUs with XLA support).
- Suggestions for optimizing performance with XLA in this context.
- Identifying any compatibility issues or edge cases that might arise during execution.
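For context, here is a minimal sketch of the kind of device-selection shim the PR adds. The function name mirrors ComfyUI's `model_management.get_torch_device` for illustration only; this is not the PR's actual code, and it assumes `torch_xla` may or may not be installed:

```python
def get_torch_device():
    """Prefer an XLA device when torch_xla is importable, else fall back.

    Illustrative sketch only -- the real integration lives in the PR's
    model_management changes.
    """
    try:
        import torch_xla.core.xla_model as xm
        return xm.xla_device()  # e.g. a TPU core when running under XLA
    except ImportError:
        import torch
        return torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

The try/except keeps the fallback path identical to a stock CUDA/CPU setup, so machines without `torch_xla` are unaffected.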
If you're familiar with integrating XLA into PyTorch workflows or have experience with related pipelines, your input would be invaluable. Thank you in advance for your help!
Thank you for the question, @radna0. Please review this performance-optimized SD2 implementation for the TPU backend for inspiration on how you can further optimize your workload.
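To illustrate the kind of optimization that implementation relies on, here is a hedged sketch of a common torch_xla pattern for diffusion-style sampling loops (the `sample_loop` name and `model` signature are hypothetical, not taken from the linked code):

```python
def sample_loop(model, latents, num_steps):
    """Run a denoising loop with an XLA step boundary after each iteration."""
    import torch_xla.core.xla_model as xm
    for t in range(num_steps):
        # Keeping tensor shapes constant across iterations lets XLA reuse
        # the compiled graph instead of recompiling every step.
        latents = model(latents, t)
        # mark_step() cuts the lazy graph here, so each iteration executes
        # as one compiled program rather than accumulating an unbounded trace.
        xm.mark_step()
    return latents
```

Dynamic shapes and Python-side tensor inspection inside the loop are the usual causes of repeated recompilation on XLA, so fixed shapes plus explicit step boundaries tend to matter more than kernel-level tuning.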