Open bluenevus opened 4 months ago
This falls under the Deberta V2 class of models which already has issues here (#281 #199).
@OlivierDehaene Is this something that can hopefully be prioritized by the maintainer team?
+1
blazing fast guardrails would be great.
Deberta V3 is widely used it should be supported
@OlivierDehaene is there any update on whether the TEI maintainer team plans to support this in the nearby future?
Feature request
This is a Bert based model however when trying to run, the message says model not supported. https://huggingface.co/meta-llama/Prompt-Guard-86M/tree/main
Motivation
LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s intended behavior of the LLM. Categories of prompt attacks include prompt injection and jailbreaking:
Prompt Injections are inputs that exploit the concatenation of untrusted data from third parties and users into the context window of a model to get a model to execute unintended instructions. Jailbreaks are malicious instructions designed to override the safety and security features built into a model. Prompt Guard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts as well as data that contains injected inputs.
Your contribution
testing. Ultimately, I'm a system administrator that loads the models in an inference engine for developers.