NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

Switch to AutoencoderTiny in Diffusers Examples #3783

Open olegchomp opened 5 months ago

olegchomp commented 5 months ago

I'm trying to switch from AutoencoderKL to AutoencoderTiny in demo_txt2img_xl with the Turbo model. After some attempts at changing models.py it finally works, but the images come out with artifacts.

python demo_txt2img_xl.py "Einstein" --version xl-turbo --onnx-dir onnx-sdxl-turbo --engine-dir engine-sdxl-turbo --denoising-steps 1 --scheduler EulerA --guidance-scale 0  --width 512 --height 512

(attached output image showing the artifacts: xl_base-Einstein-None-1-6187)
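For reference, the swap I'm attempting corresponds roughly to the following in plain Diffusers (a minimal sketch; the TensorRT demo instead wires the VAE through the ONNX/engine export in models.py, and the Hugging Face model IDs here are assumptions, not part of the demo):

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderTiny

# Load SDXL Turbo (ships with the full AutoencoderKL VAE by default).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

# Replace the decoder with TAESDXL (AutoencoderTiny).
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "Einstein",
    num_inference_steps=1,
    guidance_scale=0.0,
    width=512,
    height=512,
).images[0]
image.save("einstein.png")
```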

theNefelibata commented 5 months ago

Did you modify the cfg value? The cfg with TAESD should be 1.0.
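For comparison, the stock Diffusers SDXL pipelines only run classifier-free guidance when guidance_scale is above 1, so 0.0 and 1.0 behave the same there; whether the TensorRT demo pipeline follows the same convention is an assumption that would need checking in its code:

```python
# Sketch of the convention used by the Diffusers SDXL pipelines:
# the cond/uncond CFG pass only happens when the guidance scale exceeds 1.
for guidance_scale in (0.0, 1.0, 7.5):
    do_classifier_free_guidance = guidance_scale > 1.0
    print(guidance_scale, do_classifier_free_guidance)
# 0.0 False / 1.0 False / 7.5 True
```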

zerollzeng commented 5 months ago

Sorry, there's not much I can help with here, but you can ping me if you have any TensorRT-specific questions or bugs.

madebyollin commented 5 months ago

Looks like the VAE scaling factor may be hard-coded to 0.13 here: https://github.com/NVIDIA/TensorRT/blob/release/10.0/demo/Diffusion/demo_txt2img_xl.py#L131C12-L131C65. For TAESDXL the scaling factor should be 1.0.
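To make the mismatch concrete (a sketch in plain Diffusers terms rather than the demo's own code path, with the Hugging Face model IDs as assumptions): the SDXL AutoencoderKL config uses scaling_factor 0.13025 while TAESDXL uses 1.0, so un-scaling TAESDXL latents by 1/0.13025 inflates them by roughly 7.7x before decoding, which would explain the artifacts. Reading the factor from the loaded VAE's config avoids hard-coding it:

```python
import torch
from diffusers import AutoencoderKL, AutoencoderTiny

kl_vae = AutoencoderKL.from_pretrained(
    "stabilityai/sdxl-vae", torch_dtype=torch.float16
).to("cuda")
tiny_vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl", torch_dtype=torch.float16
).to("cuda")

print(kl_vae.config.scaling_factor)    # 0.13025 for the SDXL KL VAE
print(tiny_vae.config.scaling_factor)  # 1.0 for TAESDXL

def decode(vae, latents):
    # Un-scale with whatever factor the loaded VAE actually expects,
    # instead of a fixed 1/0.13025.
    latents = latents / vae.config.scaling_factor
    return vae.decode(latents, return_dict=False)[0]
```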

lix19937 commented 4 months ago

The cfg scale needs to be changed.