-
Hi there,
Thank you for the great contributions!
There have been many new models released since the benchmark was published. Do you have any plans to include some of these recent models, such as…
-
When omitting `--model` from the command line (I have no config file), I get the following error:
```
error: the following required arguments were not provided:
  --model
Usage: heygpt --model …
```
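Not necessarily how heygpt itself is implemented; purely as a hedged Python sketch of the fallback behavior this report implies, the snippet below makes `--model` optional and falls back to an environment variable and then a built-in default (the `HEYGPT_MODEL` variable and the default model name are assumptions, not heygpt's real behavior):
```python
# Hypothetical sketch of the requested behavior, not heygpt's actual CLI code:
# make --model optional and fall back to an env var, then a built-in default.
import argparse
import os

parser = argparse.ArgumentParser(prog="heygpt-sketch")
parser.add_argument(
    "--model",
    default=os.environ.get("HEYGPT_MODEL", "gpt-3.5-turbo"),  # assumed default
    help="Model to use; falls back to $HEYGPT_MODEL, then a default.",
)
args = parser.parse_args()
print(f"Using model: {args.model}")
```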
-
Opening a new issue as #237 was closed prematurely.
It seems that engines built using the `--paged_kv_cache` flag leak GPU memory. Below is a minimal reproducible example that can be used to …
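The reproducer itself is truncated above; separately, as a hedged sketch (not the original reproducer), one way to observe this kind of leak is to poll GPU memory with pynvml between repeated engine runs, where `run_engine_once` is a hypothetical placeholder rather than a real TensorRT-LLM call:
```python
# Hedged sketch: watch GPU memory across repeated engine runs to spot a leak.
# `run_engine_once` is a hypothetical placeholder for the actual inference call.
import pynvml

def run_engine_once():
    pass  # replace with the real generation call on the paged-KV-cache engine

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

for i in range(10):
    run_engine_once()
    used_mib = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 1024**2
    print(f"iteration {i}: {used_mib:.0f} MiB used")  # steady growth => leak

pynvml.nvmlShutdown()
```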
-
End goal would be to have something like this:
### OpenAI Assistants API Benchmark
| Model Name | Code Interpreter | Retrieval | Function Calling | JSON Mode | Tool Switching | Speed |
|------------|------------------|-----------|------------------|-----------|----------------|-------|
-
Hi, I am unable to run your code. After setting the task and dataset flags, everything seems to be in order, but it just hangs indefinitely after showing the tqdm bar of 5000 data samples with…
-
## providers (global)
- [x] OpenAI: gpt-3.5-turbo, gpt-4-turbo
- [ ] Claude
## providers (domestic)
- [ ] 文心一言 (ERNIE Bot)
- [x] 通义千问 (Qwen): qwen-turbo, qwen-plus, qwen-max
- [x] moonshot: moonshot-v1-8k, moonsh…
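As a hedged, illustrative sketch only (not the project's actual configuration format), the checklist above could be mirrored by a simple provider-to-models mapping:
```python
# Illustrative sketch: a provider -> models registry mirroring the checklist
# above; the structure and keys are assumptions, not the project's real config.
PROVIDERS = {
    # global
    "openai": ["gpt-3.5-turbo", "gpt-4-turbo"],
    "claude": [],  # not wired up yet
    # domestic
    "ernie-bot": [],  # 文心一言, not wired up yet
    "qwen": ["qwen-turbo", "qwen-plus", "qwen-max"],  # 通义千问
    "moonshot": ["moonshot-v1-8k"],
}

def supported_models(provider: str) -> list[str]:
    return PROVIDERS.get(provider, [])
```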
-
Hello
Playing around with the project, after running nmap with OpenAI using profile 5 or 12, I get the following error:
"message: "This model's maximum context length is 16385 tokens, however you reque…
-
### System Info
GPU: 3090
CUDA: 12.2
### Who can help?
@ncomly-nvidia @symphonylyh
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially…
-
### Describe the bug
Hello, training XTTSv2 leads to weird training lags: training gets stuck with no errors when using DDP on 6x RTX A6000 GPUs with 512 GB RAM.
Here is a monitoring graph of GPU load. Purpl…
-
We can automate switching between the GPT-3.5 and GPT-4 word limits by checking the color of the model button.
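As a hedged sketch of that idea (the hex colors and word limits below are placeholders, not values measured from the real UI), the logic could map the button's background color to a word limit:
```python
# Hedged sketch: map an assumed button color to a word limit. The hex values
# and word limits here are placeholders, not measured from the actual UI.
GPT35_BUTTON_COLOR = "#19c37d"  # assumed GPT-3.5 button color
GPT4_BUTTON_COLOR = "#ab68ff"   # assumed GPT-4 button color

WORD_LIMITS = {
    GPT35_BUTTON_COLOR: 3000,  # assumed limit for GPT-3.5
    GPT4_BUTTON_COLOR: 6000,   # assumed limit for GPT-4
}

def word_limit_for_button_color(color_hex: str, default: int = 3000) -> int:
    """Pick a word limit based on which model button color is showing."""
    return WORD_LIMITS.get(color_hex.lower(), default)
```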