Closed qinguoyi closed 1 week ago
Generally LGTM, have you tested locally? Maybe we can have an e2e test, because ollama can still run on CPUs.
Thanks for your review, I ran multiple tests locally before committing. Also, a complete e2e test is definitely necessary, and I will complete it as soon as possible.
When you've finished the work, feel free to ping me. Not to rush you, just a friendly reminder. 😄
Thanks for your kind reply, and sorry for the late response. I haven't finished this yet; I'm still working on the e2e tests. I will finish this work as soon as possible this week.
I have completed the e2e ollama test, please review the code again @kerthcet
/approve
Please squash the commits.
Kind ping @kerthcet, I've squashed the commits.
/lgtm Thanks!
/kind feature
/approve
What this PR does / why we need it
Support ollama
Which issue(s) this PR fixes
https://github.com/InftyAI/llmaz/issues/91
Special notes for your reviewer
`ollama run` is only executed after the ollama service has started, because the service needs to be up first.
Does this PR introduce a user-facing change?