-
Great work @enricoros on the latest updates! I'm trying to understand why the app's OpenAI token limit utilisation works the way it does. When you hit the 4097 token limit we need to remove …
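For context, here is a minimal sketch of the kind of trimming in question: dropping the oldest turns until the conversation fits the model's context window. The 4097 figure is gpt-3.5-turbo's context size; `count_tokens`, `trim_history`, and the message layout are illustrative assumptions, not the app's actual code.

```python
import tiktoken

MAX_CONTEXT_TOKENS = 4097  # gpt-3.5-turbo context window

def count_tokens(messages, model="gpt-3.5-turbo"):
    # Rough count; OpenAI's real accounting adds a few tokens of
    # per-message overhead on top of the raw content tokens.
    enc = tiktoken.encoding_for_model(model)
    return sum(len(enc.encode(m["content"])) for m in messages)

def trim_history(messages, budget=MAX_CONTEXT_TOKENS):
    # Assumed strategy: keep the system prompt (messages[0]) and drop
    # the oldest user/assistant turns until the rest fits the budget.
    system, rest = messages[:1], messages[1:]
    while rest and count_tokens(system + rest) > budget:
        rest.pop(0)
    return system + rest
```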
-
Hello again sir,
I am encountering this error due to the absence of the JSON files of meta_models, which are ignored in git.
```
Traceback (most recent call last):
  File "/home/user/projects/Extra…
```
-
Using the 0.8 release of LlamaSharp and Kernel-Memory with the samples, there is an error because LlamaSharpTextEmbeddingGeneration doesn't implement the Attributes property.
I took the source a…
-
## 🐛 Bug
Running with model parallel 2 on 8 GPUs on the FAIR cluster raises the following exception with the 1.3B_gptz model, but only when run with `arceasy`, `arcchallenge`, or `openbookqa`. It works with `storycl…
-
Hi, thanks for sharing the code and datasets. I'd like to know how to evaluate models on story analogies.
-
Got the following error using gpt-3.5 and Anthropic:
```
2024-05-09 21:45:21.312 | DEBUG | desci_sense.shared_functions.parsers.multi_chain_parser:batch_process_ref_posts:245 - Invoking parallel c…
```
-
We can make the `max_queries_per_minute` argument the "default" limit and allow passing a dictionary whose keys are models/APIs and whose values are the corresponding rate limits.
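A sketch of what that interface could look like; the `default` key name, the fallback value, and the helper names are assumptions for illustration, not the project's actual API:

```python
from typing import Union

DEFAULT_KEY = "default"

def normalize_rate_limits(
    max_queries_per_minute: Union[int, dict[str, int]],
) -> dict[str, int]:
    # Accept either a single global limit or a per-model/per-API mapping.
    # A plain int becomes the fallback for any model not listed.
    if isinstance(max_queries_per_minute, int):
        return {DEFAULT_KEY: max_queries_per_minute}
    limits = dict(max_queries_per_minute)
    limits.setdefault(DEFAULT_KEY, 60)  # assumed fallback of 60 QPM
    return limits

def limit_for(model: str, limits: dict[str, int]) -> int:
    # Look up a model-specific limit, falling back to the default.
    return limits.get(model, limits[DEFAULT_KEY])

# Usage:
limits = normalize_rate_limits({"gpt-4": 30, "claude-3-opus": 10})
assert limit_for("gpt-4", limits) == 30
assert limit_for("some-other-model", limits) == 60
```

Keeping a single int as a valid argument preserves backward compatibility, while the dict form adds per-model granularity.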
-
Hi, I am working on this in my fork because I need to run models on the CPU and had issues using the llama-cpp-python server.
-
From what I've read, the repo map is disabled outside gpt-4 because 3.5 can't handle the token load. But now that 3.5-16k is released, has anyone tried it with 16k?
I'm going to dive into the aider codebase and try to …