-
**My objective:**
- Getting agents to execute a locally defined function
- Having a minimal example running
- Using local models
**My code:**
```
from flaml import autogen
llm_config={
"…
```
-
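The objective in the snippet above — an agent executing a locally defined function against a local model — can be sketched roughly as follows. This is a minimal sketch, assuming an OpenAI-compatible local endpoint; the `get_weather` function, endpoint URL, and model name are hypothetical placeholders, not from the original post:

```python
# Minimal sketch of wiring a locally defined function into an llm_config
# for flaml.autogen / pyautogen. The endpoint URL, model name, and the
# get_weather function are illustrative assumptions.

def get_weather(city: str) -> str:
    """Locally defined function the agent should be able to call."""
    return f"Sunny in {city}"

llm_config = {
    # Point at a local OpenAI-compatible server instead of OpenAI (assumed URL).
    "config_list": [{
        "model": "ggml-gpt4all-j",               # assumed local model name
        "base_url": "http://localhost:8080/v1",  # assumed local endpoint
        "api_key": "not-needed",
    }],
    # Advertise the function schema to the model (OpenAI function-calling format).
    "functions": [{
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
}

# With autogen installed, the function is then mapped on the executing agent,
# e.g. user_proxy.register_function(function_map={"get_weather": get_weather}),
# so that a function call emitted by the model runs locally.
```

-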
Hi, I'm running local-ai in Kubernetes and downloaded the model ggml-gpt4all-j in the same way as explained [here](https://github.com/go-skynet/LocalAI#run-localai-in-kubernetes), but got this error:
`…
-
Hello there!
I'm trying to implement LangChain's experimental AutoGPT agent with the GPT-4 model from OpenAI, but I'm facing an issue when the agent has finished processing the query and has to retu…
-
**Description**
Integrate the UI with gpt-index (llama-index) or langchain to greatly extend features
**Additional Context**
https://github.com/jerryjliu/llama_index
https://github.com/hwchase…
-
Hi.
I was wondering which model you are using for the NSFW chats. I see that gpt3.5 and gpt4 are mentioned in the files, but I haven't been able to work out which one is used for NSFW.
-
I'm using `TheBloke/openchat_3.5-16k-AWQ` through the Docker image. Generation speed was noticeably faster in the image with tag `f6aa065e` compared to the latest image with tag `6d9f72b8`, running on…
-
```
def test_readme_example(local_server):
    # Self-contained example used for the readme; to be copied to
    # README_CLIENT.md if changed, setting local_server = True at first.
    import os
    # The grclie…
```
-
**LocalAI version:**
quay.io/go-skynet/local-ai:v1.20.0-cublas-cuda12-ffmpeg
**Environment, CPU architecture, OS, and Version:**
Docker-Compose | Intel i9-8950HK (x86-64, IA64, and AMD64) | …