soilwise-he / natural-language-querying

Application component that provides Natural Language Querying (NLQ) services, making knowledge stored in a graph database accessible to, e.g., a ChatBot UI.
MIT License

Discuss adding guardrails to safeguard LLM usage #9

Status: Open · robknapen opened this issue 2 months ago

robknapen commented 2 months ago

LLMs with public access typically require guardrails to prevent misuse and hacking. Guardrails try to keep generated output safe and responsible, in accordance with SoilWise standards.
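To illustrate the general idea (independent of any particular library), a guardrail is a check applied to the user's prompt before it reaches the model and to the generated text before it reaches the user. The deny-list patterns and the `generate` callable below are hypothetical placeholders; a real deployment would use trained classifiers or a dedicated guardrails package rather than regexes:

```python
import re

# Hypothetical deny-list for illustration only; a production system would
# use trained classifiers or a guardrails library instead of regexes.
DENY_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"\b(some_blocked_term)\b", re.IGNORECASE),  # placeholder
]
REFUSAL = "Sorry, I can't help with that request."


def guarded_chat(generate, user_message: str) -> str:
    """Wrap an LLM call with input and output guardrails."""
    # Input guardrail: screen the prompt before it reaches the model.
    if any(p.search(user_message) for p in DENY_PATTERNS):
        return REFUSAL
    answer = generate(user_message)
    # Output guardrail: screen the generated text before it reaches the UI.
    if any(p.search(answer) for p in DENY_PATTERNS):
        return REFUSAL
    return answer
```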

BerkvensNick commented 1 month ago

Do you have any examples of guardrails that can be implemented?

A first "safety" measure could be to put this behind an "authorization" layer, so that only users with an account have access?

robknapen commented 1 month ago

Think of things like “Don’t generate text with profanities”, or “Don’t generate text that is off topic”. This Hub has many examples that might indicate what to consider: https://hub.guardrailsai.com/
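As a minimal sketch of how those two examples could look with the Guardrails AI OSS package: output validators are installed from the Hub and composed into a `Guard` that checks model output before it is returned. The exact validator names, install commands, and the `valid_topics` values below follow the Hub's conventions but should be verified against the current docs:

```python
# Requires: pip install guardrails-ai, then install validators from the Hub:
#   guardrails hub install hub://guardrails/profanity_free
#   guardrails hub install hub://tryolabs/restrict_to_topic
from guardrails import Guard
from guardrails.hub import ProfanityFree, RestrictToTopic

# Compose multiple output guardrails into one Guard; on_fail="exception"
# raises instead of silently passing unsafe text through.
guard = Guard().use_many(
    ProfanityFree(on_fail="exception"),
    RestrictToTopic(valid_topics=["soil", "agriculture"], on_fail="exception"),
)

# Validate generated text before returning it to the ChatBot UI.
guard.validate("Clay soils retain more water than sandy soils.")
```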

Authorisation and e.g. rate limiting are more about controlling access to and usage of an (LLM) service, not about what kind of text the model is allowed to generate.
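For completeness, such an access layer might look like the sketch below, assuming a FastAPI service; the `X-API-Key` header name, the key set, and the rate-limit values are hypothetical, and a production setup would use a proper identity provider and a shared store instead of in-memory state:

```python
import time
from collections import defaultdict

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

API_KEYS = {"demo-key-123"}  # hypothetical; load from a secret store in practice
RATE_LIMIT = 10              # max requests per key per window
WINDOW_SECONDS = 60

api_key_header = APIKeyHeader(name="X-API-Key")
_request_log: dict[str, list[float]] = defaultdict(list)


def check_access(api_key: str = Depends(api_key_header)) -> str:
    """Reject unknown keys and keys that exceed the rate limit."""
    if api_key not in API_KEYS:
        raise HTTPException(status_code=401, detail="Invalid API key")
    now = time.time()
    recent = [t for t in _request_log[api_key] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        raise HTTPException(status_code=429, detail="Rate limit exceeded")
    recent.append(now)
    _request_log[api_key] = recent
    return api_key


app = FastAPI()


@app.post("/query")
def query(question: str, api_key: str = Depends(check_access)):
    # ... call the NLQ / LLM pipeline here ...
    return {"answer": "..."}
```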

BerkvensNick commented 1 month ago

Thanks for the clarification, Rob! Also a very nice resource for the guardrails! If I understand correctly, we can also use this code and integrate it when we want to set this up?

robknapen commented 1 month ago

I don't know; those were just some examples that I could easily find. They seem to want guardrails to be shared publicly when using the OSS package. There are pros and cons to that, of course. Guardrails are usually the first thing a hacker will try to circumvent. But initially it might be sufficient. We can try it, or search for something similar.