What happened?

I try to reproduce the code from the docs:

import litellm
# Create your own custom prompt template
model="togethercomputer/llama-2-70b-chat"
litellm.register_prompt_template(
model=model,
initial_prompt_value="You are a good assistant" # [OPTIONAL]
roles={
"system": {
"pre_message": "[INST] <<SYS>>\n", # [OPTIONAL]
"post_message": "\n<</SYS>>\n [/INST]\n" # [OPTIONAL]
},
"user": {
"pre_message": "[INST] ", # [OPTIONAL]
"post_message": " [/INST]" # [OPTIONAL]
},
"assistant": {
"pre_message": "\n" # [OPTIONAL]
"post_message": "\n" # [OPTIONAL]
}
}
final_prompt_value="Now answer as best you can:" # [OPTIONAL]
)
completion(model=model, messages=messages)
and it errors out with:
BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=togethercomputer/llama-2-70b-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
This error doesn't happen for model="groq/llama3-8b-8192".
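For comparison, here is a minimal sketch of the provider-prefixed call that I would expect to route correctly, mirroring how the working groq/ example is prefixed. The together_ai/ prefix is my assumption; the docs snippet above does not show it.

from litellm import completion

# Assumed workaround (not confirmed): prefix the provider explicitly,
# as the error message suggests and as the groq/ model string already does.
messages = [{"role": "user", "content": "Hi"}]
response = completion(
    model="together_ai/togethercomputer/llama-2-70b-chat",
    messages=messages,
)
print(response.choices[0].message.content)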
Relevant log output
BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=togethercomputer/llama-2-70b-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
Twitter / LinkedIn details
No response