NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Do I have to use OPENAI_API_KEY? #707

Closed qifuxiao closed 3 weeks ago

qifuxiao commented 1 month ago

I've looked at some tutorials and examples that use OPENAI_API_KEY. I don't have an OPENAI_API_KEY. What can I do to protect llama3 with NeMo-Guardrails?

Drewwb commented 1 month ago

Consider trying something like this:

from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the Llama model and tokenizer
model_name = "path/to/llama/model"  # Replace with your actual model path or Hugging Face model name
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)
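
If it loads, a quick generation check might look like this (a minimal sketch; the prompt and generation settings are just placeholders):

# Quick check that the loaded model and tokenizer produce text
prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))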
qifuxiao commented 1 month ago

Consider trying something like this:

from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the Llama model and tokenizer
model_name = "path/to/llama/model"  # Replace with your actual model path or Hugging Face model name
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)

I can load the Llama model and tokenizer, but how can I use NeMo-Guardrails to protect the output? Where can I add config.yml? I've read the documentation, but I can't find a solution.

Pouyanpi commented 1 month ago

@qifuxiao, you must decide which provider to use. In the examples you saw, the engine is set to openai, so you must set the OPENAI_API_KEY environment variable. These kinds of setups are provider/engine specific.

Please have a look at the examples here.

It is also worth reading the Configuration Guide.
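
For reference, a guardrails configuration is just a folder (e.g. ./config) holding a config.yml and, optionally, a config.py. A minimal sketch of the OpenAI-engine setup those examples use (the model name is illustrative); for a local Llama model you would swap the engine for your own provider, as shown in the next comment:

# config.yml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct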

drazvan commented 3 weeks ago

@qifuxiao : here's the pattern you should follow: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/examples/configs/llm/hf_pipeline_dolly/config.py.

You can register your own model as a custom LLM provider.
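
Adapting that pattern to a local Llama model might look roughly like this. This is a sketch under assumptions: the model path is a placeholder, the pipeline settings are illustrative, and the provider name hf_pipeline_llama is made up for this example; the real reference is the hf_pipeline_dolly example linked above.

# config.py -- registers the local model as a custom LLM provider
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.llms import HuggingFacePipeline

from nemoguardrails.llm.helpers import get_llm_instance_wrapper
from nemoguardrails.llm.providers import register_llm_provider


def _get_llama_llm():
    model_name = "path/to/llama/model"  # placeholder: your local path or HF name
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=256,  # illustrative generation setting
    )
    # Wrap the transformers pipeline so LangChain (and Guardrails) can call it
    return HuggingFacePipeline(pipeline=pipe)


HFPipelineLlama = get_llm_instance_wrapper(
    llm_instance=_get_llama_llm(), llm_type="hf_pipeline_llama"
)
register_llm_provider("hf_pipeline_llama", HFPipelineLlama)

Then config.yml points the main model at that provider:

# config.yml
models:
  - type: main
    engine: hf_pipeline_llama

And you load the whole folder at runtime:

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user", "content": "Hello!"}]))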