-
Tasks
====
- [X] Bytecode support (pycdas)
- [ ] Handle new opcodes in AST builder
  - [X] `CACHE`
  - [X] `PUSH_NULL`
  - [ ] `PUSH_EXC_INFO`
  - [ ] `CHECK_EXC_MATCH`
  - [ ] `CHECK_EG_MATCH…
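The opcodes in this checklist can be inspected from CPython itself; a minimal sketch using the stdlib `dis` module (the helper function name is just for illustration). On CPython 3.11+, a plain call compiles to a `PUSH_NULL` before the callable, and inline `CACHE` slots follow adaptive instructions in the raw bytecode even though `dis` hides them by default:

```python
import dis
import sys

def calls_a_local(cb):
    # On CPython 3.11+ this call site emits PUSH_NULL before loading `cb`,
    # then a CALL; on older interpreters it is a plain CALL_FUNCTION.
    return cb()

opnames = [ins.opname for ins in dis.get_instructions(calls_a_local)]
print(opnames)
```

Running this on 3.11 or 3.12 (optionally with `dis.dis(calls_a_local, show_caches=True)`) also reveals the hidden `CACHE` entries that a decompiler has to skip over.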
-
I don't see any relevant logging when running this code from the **Create a basic agent** guide.
```ts
Settings.llm = new OpenAI({
apiKey: OPENAI_API_KEY.value(),
model: 'gpt-4o',
});…
-
Please do a quick search of the GitHub issues first; there might already be a duplicate of the issue you are about to create.
If the bug is trivial, just go ahead and create the issue. Otherwise, pl…
-
from vanna.openai import OpenAI_Chat
from vanna.vannadb import VannaDB_VectorStore
class MyVanna(VannaDB_VectorStore, OpenAI_Chat):
    def __init__(self, vanna_model, vanna_api_key, config=None…
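A self-contained sketch of the multiple-inheritance initializer pattern the snippet relies on; `Store` and `Chat` are stand-in classes (assumptions for illustration, not the real Vanna API). Note that a misspelled dunder such as `__int__` is silently ignored by the interpreter, so the subclass initializer would never run:

```python
# Stand-ins for VannaDB_VectorStore and OpenAI_Chat (hypothetical, simplified).
class Store:
    def __init__(self, vanna_model=None, vanna_api_key=None, config=None):
        self.vanna_model = vanna_model
        self.vanna_api_key = vanna_api_key

class Chat:
    def __init__(self, config=None):
        self.config = config

class MyVanna(Store, Chat):
    def __init__(self, vanna_model, vanna_api_key, config=None):
        # Explicitly initialize each base; a typo like `__int__` here would
        # compile fine but never be called, leaving the object half-built.
        Store.__init__(self, vanna_model=vanna_model,
                       vanna_api_key=vanna_api_key, config=config)
        Chat.__init__(self, config=config)

v = MyVanna("my-model", "my-key")
print(v.vanna_model, v.config)  # my-model None
```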
-
Do you plan to use GPT-4o for LlamaParse? I think it would solve the cost issues related to LlamaParse with GPT-4o and would allow much more capability with graphs and complex tables compared to th…
-
**Is your feature request related to a problem? Please describe.**
OpenAI has recently released a new chat completion model named GPT-4o-mini. However, our current system does not support this new mo…
-
Is it possible to use GPT-4o mini as the model in PR-Agent? If yes, how can I do it?
-
Type: Bug
## Describe the issue
I ran a simple script and hit 429s, so I issued the abort command. The terminal showed the abort was issued, but the retries continued. Here's the output:
run ebd8003e…
bzorn updated
1 month ago
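The behavior reported above (retries continuing after an abort) can be avoided by checking a cancellation flag before every attempt and every backoff sleep. A minimal sketch, assuming a simple exponential-backoff loop (all names here are illustrative, not the tool's actual API):

```python
import threading
import time

def call_with_retries(fn, abort_event, max_retries=5, base_delay=0.01):
    """Retry fn on failure with exponential backoff, honoring an abort signal.

    The abort check runs before every attempt, so an abort issued while a
    request is failing stops the loop instead of letting retries continue.
    """
    for attempt in range(max_retries):
        if abort_event.is_set():
            raise InterruptedError("aborted by user")
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

abort = threading.Event()
calls = []

def flaky():
    calls.append(1)
    if len(calls) == 2:
        abort.set()  # simulate the user issuing an abort after repeated 429s
    raise RuntimeError("429 Too Many Requests")

try:
    call_with_retries(flaky, abort)
except InterruptedError:
    print("stopped after", len(calls), "attempts")  # stopped after 2 attempts
```

The key design choice is that the cancellation check lives inside the retry loop itself; checking only at request boundaries reproduces the reported bug.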
-
Hi,
I would really love to test this with Neovim, but I have no idea how to set up a custom LSP. Maybe using `nvim-lspconfig`?
-
```duck_chat
Using meta-llama/Llama-3-70b-chat-hf
Type /help to display the help
>>> User input №1:
hello
>>> Response №1:
Error occurred: ERR_MODEL_UNAVAILABLE
```
They now use
`meta-llama…