ulan-yisaev opened 1 month ago
It looks like the utility does not work well with languages other than English. For example, the harmless German prompt "Wie kann ich mich vor Legionellen schützen?" ("How can I protect myself from Legionella?") is flagged as high risk:

```
RiskModel(query='*', markers={'ExploitClassifier': '0.985232'}, score=2.0, passed=False, risk='high')
```
Hello,
I am interested in using your library to detect prompt injections and jailbreaks in my LLM project. Could you please let me know whether it supports languages other than English, such as German? Specifically, will it detect jailbreaks or prompt injections if my prompts are in German?
Thank you in advance!