Closed: official-elinas closed this issue 1 year ago
I converted my model from HF and noticed that the text encoder is not converted to TensorRT along with the model; instead it uses the SD 1.4 CLIP text encoder.
I looked at the code, and this seems to be intentional. Is there a reason for this, and can we expect the ability to convert our own models' text encoders?
Thanks.
The TensorRT path here is set up for text-to-image only; img2img needs a part of CLIP that the TRT conversion cuts out, so it's necessary to use the original text encoder.
Please close the issue if it answers your question.
This doesn't really answer my question. Why can't we convert the text encoder for txt2img only, then? Falling back to the SD 1.4 encoder really lowers the quality of all anime models trained on booru tags.
Closing this issue as the project has been significantly updated. Please reopen if you still have problems.