-
> Found out that the 'OPENAI_API_TYPE' value 'llama2' does not work. I also noticed that `llm = get_llm()` appears 3 times, but the result is never used in the code; an LLM is only used via `chat = get_cha…
-
Please address this issue. If a SageMaker endpoint deployment hits a resource limit, it gets stuck forever and there is no option to delete it: https://stackoverflow.com/questions/65678237/sagemaker-e…
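For context, the usual workaround people try is deleting the endpoint from the CLI rather than the console. A minimal sketch, where the endpoint and config names are placeholders (whether the delete actually goes through while the endpoint is stuck is exactly what this issue is about):

```bash
# Attempt to delete the stuck endpoint from the CLI instead of the console;
# "my-stuck-endpoint" is a placeholder name
aws sagemaker delete-endpoint --endpoint-name my-stuck-endpoint

# The endpoint config is a separate resource and can be cleaned up afterwards
aws sagemaker delete-endpoint-config --endpoint-config-name my-stuck-endpoint-config
```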
-
When trying to use this with LocalAI, it just spits the prompt I sent back at me. Please see the example below:
![image](https://github.com/jekalmin/extended_openai_conversation/assets/8059327/54…
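One way to tell whether the echo comes from LocalAI itself or from the integration is to hit LocalAI's OpenAI-compatible endpoint directly. A minimal sketch, assuming LocalAI listens on localhost:8080 and exposes a model alias of `gpt-3.5-turbo` (both are assumptions to adapt to your setup):

```bash
# Send a single chat message straight to LocalAI, bypassing the integration;
# host, port, and model name are assumptions
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
```

If this also echoes the prompt back, the problem is in LocalAI or the loaded model rather than in this integration.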
-
I am using VS Code with WSL2, Ubuntu 22.04, and Docker Engine v24.0.6.
The .env file contains:
LLM=mistral #or any llama2:7b Ollama model tag, gpt-4, gpt-3.5, or claudev2
EMBEDDING_MODEL=sentence_tran…
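As a sanity check with that configuration, the tag referenced by `LLM` has to be available to the local Ollama instance, and edits to `.env` only take effect after a rebuild. A minimal sketch, assuming the stock genai-stack compose file and an Ollama install on the host:

```bash
# Make the tag referenced by LLM available to the local Ollama install first
ollama pull mistral

# Rebuild so the containers pick up the edited .env, then start the stack
docker compose up --build
```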
-
**Describe the bug**
I'm trying to build Metal with the profiler enabled and I'm getting build errors and failures. The exact error depends on the script used (either `build_with_profiler_opt.sh` or using…
-
Organizations that want to use Ollama in their enterprise will want some sort of control over the models that are available for use and where the trained models go when they get pushed. For instance,…
-
I am unable to connect to the server using curl.
**LocalAI version:**
```
./local-ai-avx-Linux-x86_64 --models-path /mnt/c/Users/dutta/Documents/GPT4AllModels/
9:35PM DBG no galleries to load
9:…
```
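A first step to isolate this is checking whether the server answers at all. A minimal sketch, assuming LocalAI's default port of 8080 (an assumption; adjust to whatever address the binary reports on startup):

```bash
# List the models the server exposes; any JSON response means the API is reachable
curl http://localhost:8080/v1/models
```

Note that under WSL, `localhost` from a Windows terminal and from inside the WSL shell can behave differently, so run the check from the same environment the server was started in.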
-
**Routine checks**
[//]: # (Delete the space inside the brackets and fill in an x)
+ [ ] I have confirmed there is no similar existing issue
+ [ ] I have confirmed I have upgraded to the latest version
+ [ ] I have read the project README in full, especially the FAQ section
+ [ ] I understand and am willing to follow up on this issue, help test, and provide feedback
+ [ ] I understand and agree to the above, and I understand that the maintainers have limited time; **issues that do not follow the rules may be ignored or closed directly**
-
**Routine checks**
[//]: # (Delete the space inside the brackets and fill in an x)
+ I have confirmed there is no similar existing issue
+ I have confirmed I have upgraded to the latest version
+ I have read the project README in full, especially the FAQ section
+ I understand and am willing to follow up on this issue, help test, and provide feedback
+ I understand and agree to the above, and I understand that the maintainers have limited time; **issues that do not follow the rules may be ignored or closed directly**
…
-
Here is the result of my command. Is this error inside the container or outside? The weird part to me is:
**genai-stack-pull-model-1 | pulling ollama model llama2 using http://llm-gpu:11434**
T…
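One way to answer the inside-vs-outside question is to test the same Ollama URL from a container attached to the compose network, since `http://llm-gpu:11434` only resolves there. A minimal sketch, where the network name `genai-stack_default` is an assumption (check `docker network ls` for the real one):

```bash
# Query the Ollama API from a throwaway container on the compose network;
# llm-gpu:11434 comes from the log line above, the network name is an assumption
docker run --rm --network genai-stack_default curlimages/curl -s http://llm-gpu:11434/api/tags
```

If this returns a JSON list of tags, the Ollama service is reachable from inside the network and the problem is in the pull step itself; if it fails to resolve, the error is on the container side.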