gyd-a opened this issue 1 month ago
https://forums.developer.nvidia.com/t/tensorrt-inference-api-that-open-clip-vit-l-14-is-slowing-down/309551/3

I successfully converted the model: "ViT-L/14" ---> .onnx ---> .trt. However, inference through the TensorRT framework is slower than before. Is this a normal phenomenon? The link above has more details.
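For context, this is roughly the conversion path I followed; a minimal sketch, assuming the `open_clip` package and the `trtexec` CLI are available (the file names and the choice of exporting the visual tower are illustrative, not taken from clip_txt.py):

```python
import torch
import open_clip

# Load ViT-L/14 with the OpenAI weights (open_clip uses the dashed name form).
model, _, _ = open_clip.create_model_and_transforms('ViT-L-14', pretrained='openai')
model.eval()

# Export the visual tower to ONNX; dummy input matches ViT-L/14's 224x224 RGB input.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model.visual,
    dummy,
    'vit_l_14_visual.onnx',
    input_names=['image'],
    output_names=['embedding'],
    dynamic_axes={'image': {0: 'batch'}, 'embedding': {0: 'batch'}},
    opset_version=14,
)

# Then build the TensorRT engine from the ONNX file, e.g.:
#   trtexec --onnx=vit_l_14_visual.onnx --saveEngine=vit_l_14_visual.trt --fp16
```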
From the code file "clip_txt.py":

```python
_MODELS = [
    'RN50::openai',
    'RN50::yfcc15m',
    'RN50::cc12m',
    'RN101::openai',
    'RN101::yfcc15m',
    'RN50x4::openai',
    'ViT-B-32::openai',
    'ViT-B-32::laion2b_e16',
    'ViT-B-32::laion400m_e31',
    'ViT-B-32::laion400m_e32',
    'ViT-B-16::openai',
    'ViT-B-16::laion400m_e31',
    'ViT-B-16::laion400m_e32',
    # older version name format
]
```
'ViT-L/14' is commented out in this list. Can it be uncommented so that the "ViT-L/14" model file can be converted as well? A sketch of what I mean is below.
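If `_MODELS` is just a plain allow-list, re-enabling the model might be as simple as adding its entry back; a minimal sketch, assuming the `model::pretrained` naming convention used above (whether the TensorRT engine for ViT-L/14 actually builds and runs correctly would still need to be verified):

```python
_MODELS = [
    # ... existing entries as above ...
    'ViT-B-16::laion400m_e32',
    'ViT-L-14::openai',  # assumption: re-enabled entry; TensorRT support is unverified
    # older version name format
]
```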