-
After loading the local model codellama-7b.Q4_K_M.gguf, an error is reported during the Q&A interaction
-
I am new to AI and am trying to run the `llama2` model locally using `pyllama`.
I have tried different options, but nothing seems to work. I downloaded LLaMA from https://github.com/facebookresearch/llama.
…
-
Hello! Thanks for your great work, but I ran into some problems while trying to replicate the results.
Specifically, I cannot find convert_raw_llama_weights_to_hf.py as described in [README.md](https://gi…
-
Hi! I am trying to use the tool, but somehow code completion is not working. The chat functionality works just fine, so I am fairly sure I configured the connectors properly. Unfortunately, I couldn…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
I am trying to fine-tune CodeLlama following the same idea as Llama 2, using the same script to fine-tune.
I am not sure whether this is right, as neither the repo nor the blog discusses a fine-tuning approach.
I am facin…
-
**Describe the bug**
I have set up the following providers, and I verified with curl that the /api/generate endpoint on http://duodesk.duo:11434 works. The extension shows a loading circle but is not sending …
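For reference, the request shape that Ollama's /api/generate endpoint expects is a JSON POST with `model`, `prompt`, and `stream` fields. A minimal sketch of building that request follows; the model tag and prompt are assumptions, and the send is left commented out since the host above is only reachable on the reporter's network:

```python
import json
from urllib import request

# Build the JSON body Ollama's /api/generate endpoint expects.
payload = {
    "model": "codellama:7b",   # assumed model tag, not from the report
    "prompt": "def fib(n):",   # assumed prompt for illustration
    "stream": False,           # ask for a single JSON response
}

# Prepare (but do not send) the POST request the extension should issue.
req = request.Request(
    "http://duodesk.duo:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would send it against a live Ollama instance.
```

Comparing this shape against what the extension actually puts on the wire (e.g. with a proxy or Ollama's server logs) can show whether the request is malformed or simply never sent.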
-
Hi,
https://github.com/PromptEngineer48/Ollama/blob/main/2-ollama-privateGPT-chat-with-docs/privateGPT.py reads a couple of environment variables, like `MODEL`, but nothing sets those variables.
So when …
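Scripts like this typically read configuration from the environment with a fallback default, so unset variables either fall back or come up empty. A minimal sketch of the pattern, assuming the common `os.environ.get` idiom (the second variable name and both default values are assumptions, not taken from the repo):

```python
import os

# Read settings from the environment, falling back to a default
# when the variable is unset.
model = os.environ.get("MODEL", "mistral")
persist_directory = os.environ.get("PERSIST_DIRECTORY", "db")
print(f"model={model} persist_directory={persist_directory}")
```

With this pattern, the caller sets the variables at launch, e.g. `MODEL=llama2 python privateGPT.py`; if the script uses `os.environ["MODEL"]` without a default instead, running it with the variable unset raises a `KeyError`.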
-
The public GitHub API [offers no way](https://docs.github.com/en/rest/users/users?apiVersion=2022-11-28) to create a GitHub account; however, we should provide this ability in our API. I suggest…
-
As the title says.