protectai / llm-guard

The Security Toolkit for LLM Interactions
https://llm-guard.com/
MIT License
974 stars 112 forks source link

Support inference URLs for models used by scanners #101

Open adrien-lesur opened 4 months ago

adrien-lesur commented 4 months ago

Is your feature request related to a problem? Please describe. My understanding of the documentation and the code is that llm-guard will lazy-load the models required by the chosen scanners from Huggingface. I apologize if this is incorrect

This is not ideal for consumers like Kubernetes workloads because :

A third option is that you already have the models deployed somewhere in a central place so that the only information required by the scanners would be the inference URL and the authentication.

Describe the solution you'd like Users that use a platform to host and run models in a central place should be able to provide inference URLs and authentication to the scanners, instead of lazy-loading the models.

Describe alternatives you've considered The existing possible usages described by the documentation (as a library or as API).

asofter commented 4 months ago

Hey @adrien-lesur , at some point, we considered having the support of HuggingFace Inference Endpoints but we learned that it's not used widely.

How would you usually deploy those models? I assume https://github.com/neuralmagic/deepsparse or something.

adrien-lesur commented 4 months ago

Hi @asofter, The models would usually be deployed via vLLM like documented here for Mistral.