Closed qifuxiao closed 2 months ago
Consider trying something like this:
from transformers import LlamaForCausalLM, LlamaTokenizer
# Load the Llama model and tokenizer
model_name = "path/to/llama/model" # Replace with your actual model path or Hugging Face model name
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)
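Once the model and tokenizer are loaded, generating text with plain transformers looks roughly like this (a minimal sketch; the prompt and generation parameters are just placeholder values):

# Tokenize a prompt and generate a completion (illustrative values only)
prompt = "Explain what NeMo-Guardrails does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))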
I can load the Llama model and tokenizer, but how can I use NeMo-Guardrails to protect the output? Where can I add config.yml? I've read the documentation, but I can't find a solution.
@qifuxiao, you must decide which provider to use. In the examples you saw, the engine is set to openai, so you must set the OPENAI_API_KEY environment variable. These kinds of setups are provider/engine-specific.
Please have a look at the examples here.
It is also worth reading the Configuration Guide.
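To answer the config.yml question: a guardrails configuration is just a folder containing a config.yml (plus any optional .co rail files and a config.py), which you load with RailsConfig.from_path. A minimal sketch of config.yml, assuming you register a custom engine named hf_pipeline_llama3 in config.py (the engine name below is a placeholder, not a built-in value):

# config/config.yml (hypothetical sketch; engine must match the name registered in config.py)
models:
  - type: main
    engine: hf_pipeline_llama3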
@qifuxiao: here's the pattern you should follow: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/examples/configs/llm/hf_pipeline_dolly/config.py.
You can register your LLM as a custom LLM.
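Adapting that pattern to a local Llama model would look roughly like this (a sketch modeled on the hf_pipeline_dolly example, not a tested config; the model path, generation parameters, and engine name are placeholders):

# config/config.py -- register a local Llama pipeline as a custom LLM provider
# (the HuggingFacePipeline import location may differ depending on your LangChain version)
from langchain.llms import HuggingFacePipeline
from nemoguardrails.llm.helpers import get_llm_instance_wrapper
from nemoguardrails.llm.providers import register_llm_provider

def get_llama_llm():
    # Replace with your actual local path or Hugging Face model name
    model_id = "path/to/llama/model"
    return HuggingFacePipeline.from_model_id(
        model_id=model_id,
        task="text-generation",
        model_kwargs={"temperature": 0.1, "max_length": 1024},
    )

HFPipelineLlama = get_llm_instance_wrapper(
    llm_instance=get_llama_llm(), llm_type="hf_pipeline_llama3"
)

register_llm_provider("hf_pipeline_llama3", HFPipelineLlama)

With config.yml and config.py in the same folder, you can then load the rails and generate through them:

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user", "content": "Hello"}]))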
I've looked at some tutorials and examples that use OPENAI_API_KEY. I don't have an OPENAI_API_KEY. What can I do to protect llama3 with NeMo-Guardrails?