Mikkicon opened this issue 2 days ago (status: Open)
cc @init27 @heyjustinai, just FYI: if you see this more in the wild, please report. This seems odd, since we had tested this path a bunch!
@ashwinb FYI, it works against this endpoint, so it is possibly an Ollama issue:

```
LLAMA_STACK_API_TOGETHER_URL="https://llama-stack.together.ai"
```
@Mikkicon, can you tell me the output of `uname -a`? That would be useful.
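The environment details requested above can be collected in one go. A small sketch, assuming the client was installed with pip (the `pip show` line simply prints nothing useful otherwise):

```shell
# Kernel/OS details requested above.
uname -a

# Version of the installed client, assuming a pip install;
# continues without failing if the package is absent.
pip show llama-stack-client || true
```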
I followed the zero_to_hero_guide and am facing this issue with `llama-stack-client`, running Ollama in Docker:

```
$ llama-stack-client --endpoint http://localhost:5001 inference chat-completion --message "hello, what model are you?"
```
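Before digging into the client or Ollama itself, it may help to confirm that anything is listening on the endpoint at all. A minimal stdlib sketch (the helper name is hypothetical; the host and port match the `--endpoint` in the command above):

```python
import socket

def endpoint_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, host unreachable, or timeout.
        return False

if __name__ == "__main__":
    # True only if the llama-stack server (port 5001 above) is actually up.
    print(endpoint_reachable("localhost", 5001))
```

If this prints `False`, the distribution server never came up (or is on a different port), and the CLI error would be expected regardless of the Ollama backend.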