TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
When I use IP-Adapter with the cache_diffusion example, it doesn't work and prints the following message in the terminal:
cross_attention_kwargs ['adapter_params'] are not expected by DefaultAttnProcessor2_0 and will be ignored.
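For context, diffusers emits this warning from Attention.forward whenever entries in cross_attention_kwargs are not accepted by the active attention processor, so the message suggests that enabling caching replaces the IP-Adapter attention processors with the example's own. Below is a minimal sketch of the setup that appears to trigger it; the cachify import and prepare call follow my reading of the cache_diffusion example and may not match its exact signature, and the checkpoint and file names are illustrative assumptions, not a verified repro.

```python
# Minimal repro sketch. Assumptions: the cachify API shape is taken from the
# cache_diffusion example directory; model/weight names are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

from cache_diffusion import cachify  # assumed import path from the example

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# load_ip_adapter installs IPAdapterAttnProcessor2_0 on the UNet's
# cross-attention layers so the IP-Adapter kwargs get consumed.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)

# Assumption: preparing the pipeline for cached diffusion swaps in the
# example's attention processors, which do not accept the IP-Adapter kwargs,
# so diffusers logs the "will be ignored" warning above at inference time.
cachify.prepare(pipe)

image = pipe(
    prompt="a photo of a cat",
    ip_adapter_image=load_image("ip_adapter_input.png"),
).images[0]
```

If this reading is right, the IP-Adapter image conditioning is silently dropped rather than raising an error, which would explain why generation "works" but ignores the adapter input.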