-
[ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) is the underlying format consumed by ChatGPT models. In the future, you will be able to interact with the models using this format directly.
> Traditiona…
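For context, a rough sketch of what a ChatML conversation looks like when rendered as a single prompt string (illustrative only; the delimiters and role layout follow the linked chatml.md, everything else is made up):
```python
# Each message is wrapped in <|im_start|> / <|im_end|> tokens, with the role on
# the first line; the prompt ends where the assistant is expected to continue.
chatml_prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, how are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```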
-
Hi,
I'm trying to create a customized `ChatAgent` with a simple `BufferMemory` like this:
```typescript
initializeAgent(openAIApiKey: string, agentPrompt: string): AgentExecutor {
const …
-
Hello there!
I'm trying to implement LangChain's experimental AutoGPT agent with the GPT-4 model from OpenAI, but I'm facing an issue when the agent has finished processing the query and has to retu…
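For reference, a minimal sketch of how the experimental AutoGPT agent is typically wired up (assuming a 2023-era `langchain`; the import path, the file tools, and the FAISS-backed memory below follow the library's documented example and are not taken from the original post):
```python
import faiss
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import AutoGPT  # newer releases: langchain_experimental
from langchain.tools.file_management import ReadFileTool, WriteFileTool
from langchain.vectorstores import FAISS

# AutoGPT keeps its intermediate thoughts in a vector store retriever.
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality of OpenAI ada-002 embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

agent = AutoGPT.from_llm_and_tools(
    ai_name="Researcher",
    ai_role="Assistant",
    tools=[WriteFileTool(), ReadFileTool()],
    llm=ChatOpenAI(model_name="gpt-4", temperature=0),
    memory=vectorstore.as_retriever(),
)

# run() loops until the agent decides to finish and returns its final answer.
result = agent.run(["Write a short weather report for Paris to weather.txt"])
print(result)
```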
-
Hi, I'm running local-ai in Kubernetes and downloaded the model ggml-gpt4all-j in the same way as explained [here](https://github.com/go-skynet/LocalAI#run-localai-in-kubernetes), but got this error:
`…
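For reference, once the model is registered, LocalAI exposes an OpenAI-compatible API, so a quick smoke test from inside the cluster can look like the sketch below (the in-cluster Service name and port are assumptions about the deployment; LocalAI listens on 8080 by default):
```python
import openai

# Point the OpenAI client at the LocalAI Service instead of api.openai.com.
openai.api_key = "not-needed"  # LocalAI does not require a real key
openai.api_base = "http://local-ai.default.svc.cluster.local:8080/v1"  # adjust to your Service

resp = openai.ChatCompletion.create(
    model="ggml-gpt4all-j",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp["choices"][0]["message"]["content"])
```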
-
**My objective:**
- Getting agents to execute locally defined functions
- Having a minimal example running
- Using local models
**My code:**
```python
from flaml import autogen
llm_config={
"…
-
**Description**
Integrate the UI with gpt-index (llama-index) or langchain to greatly extend its features
**Additional Context**
https://github.com/jerryjliu/llama_index
https://github.com/hwchase…
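For a sense of what such an integration involves, here is a minimal sketch of the llama-index query flow (gpt-index/llama-index 0.6-era API; the `data/` directory and the query text are assumptions):
```python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

# Load local documents, build an in-memory vector index, and query it.
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What do these documents say about X?")
print(response)
```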
-
Hi.
I was wondering which model you are using for the NSFW chats. I see that GPT-3.5 and GPT-4 are mentioned in the files, but I haven't been able to figure out which one is used for NSFW.
-
Reported by a user of h2oGPT: https://github.com/h2oai/h2ogpt/issues/1309
I used an edited version of the text streamer, changed only to print every token instead of waiting for a space. You'll s…
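For anyone reproducing this, here is a sketch of that kind of edit (assuming the streamer in question is `transformers.TextStreamer`; attribute names follow the 4.30-era implementation):
```python
from transformers import TextStreamer

class EveryTokenStreamer(TextStreamer):
    """Emit text for every generated token instead of waiting for a word break."""

    def put(self, value):
        # During generation, `value` is a tensor of newly generated token ids
        # (the first call contains the prompt when skip_prompt is used).
        if len(value.shape) > 1 and value.shape[0] > 1:
            raise ValueError("EveryTokenStreamer only supports batch size 1")
        if len(value.shape) > 1:
            value = value[0]
        if self.skip_prompt and self.next_tokens_are_prompt:
            self.next_tokens_are_prompt = False
            return
        # Decoding token-by-token means multi-byte characters can show up as
        # replacement characters, since their bytes are split across tokens.
        text = self.tokenizer.decode(value.tolist(), **self.decode_kwargs)
        self.on_finalized_text(text)

# Usage: model.generate(**inputs, streamer=EveryTokenStreamer(tokenizer, skip_prompt=True))
```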
-
**LocalAI version:**
quay.io/go-skynet/local-ai:v1.20.0-cublas-cuda12-ffmpeg
**Environment, CPU architecture, OS, and Version:**
Docker-Compose | Intel i9-8950HK (x86-64, IA64, and AMD64) | …