NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, etc. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer

cross_attention_kwargs ['adapter_params'] are not expected by DefaultAttnProcessor2_0 and will be ignored. #94

Open zeng121 opened 4 weeks ago

zeng121 commented 4 weeks ago

I set a custom AttnProcessor when loading SDXL, but after I applied cache_diffusion the processors reverted back to DefaultAttnProcessor2_0, so the warning above is raised and my extra cross_attention_kwargs are ignored.
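A minimal sketch of the situation, assuming the standard diffusers SDXL pipeline. The `AdapterAttnProcessor` class, the `adapter_params` kwarg, and the commented-out `cachify.prepare` call (from the cache_diffusion example) are illustrative assumptions, not the reporter's exact code.

```python
# Sketch: show how a custom attention processor installed on the SDXL UNet can be
# replaced when the cache_diffusion example re-registers processors, which is when
# extra cross_attention_kwargs start being ignored.
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.models.attention_processor import AttnProcessor2_0


class AdapterAttnProcessor(AttnProcessor2_0):
    """Hypothetical processor that accepts an extra `adapter_params` kwarg."""

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, adapter_params=None, **kwargs):
        # ... adapter_params would be consumed here ...
        return super().__call__(attn, hidden_states, encoder_hidden_states,
                                attention_mask, **kwargs)


pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Install the custom processor on every attention block of the UNet.
pipe.unet.set_attn_processor(AdapterAttnProcessor())
print({type(p).__name__ for p in pipe.unet.attn_processors.values()})
# -> {'AdapterAttnProcessor'}

# Applying the cache_diffusion example replaces the UNet's attention processors;
# the exact entry point below is an assumption based on that example's README.
# from cache_diffusion import cachify
# cachify.prepare(pipe, num_inference_steps=30)

# After cache_diffusion, the processors report the default class again, so passing
# cross_attention_kwargs={"adapter_params": ...} at inference triggers the warning
# "cross_attention_kwargs ['adapter_params'] are not expected ... and will be ignored."
print({type(p).__name__ for p in pipe.unet.attn_processors.values()})
```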

kevalmorabia97 commented 3 weeks ago

Is this a duplicate of https://github.com/NVIDIA/TensorRT-Model-Optimizer/issues/93 ?

zeng121 commented 3 weeks ago

sorry