-
Hello, I want to do some benchmarking using OpenRLHF in a memory-constrained environment (1-2 nodes, each with a single A30 GPU, 24 GB). Because of this, I have had to use different HF models than the ones used in the …
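As a minimal sketch of the kind of substitution meant here (not from the original report): load a smaller HF causal LM in bf16 to confirm it fits on a 24 GB A30 before pointing OpenRLHF's training scripts (e.g. their `--pretrain` argument) at it. The model name below is just a placeholder.

```python
# Hypothetical sanity check: verify a smaller substitute model fits on one A30 (24 GB)
# before using it in place of the larger models from the OpenRLHF examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-1.5B-Instruct"  # placeholder; any small HF causal LM works

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # bf16 roughly halves weight memory vs fp32
    device_map="cuda",
)

print(f"Allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
```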
-
How can I use MiniGPT4 for batch inference without going through the Chat model? I can't find a method to do so.
We need this in order to test on new datasets.
-
```
{
  "host": "0.0.0.0",
  "port": 8000,
  "models": [
    {
      "model": "models/mistral-7b-instruct-v0.1.Q4_0.gguf",
      "model_alias": "mistral",
      "chat_format": "chatm…
```
-
### Issue you'd like to raise.
For the JS SDK, I believe that the `wrapOpenAI` helper is failing to expose anything except `parse` on the beta chat completions resource.
The problem may have bee…
-
The model I am using is **Llama3-Llava-Next-8b**, and I am using a local checkpoint.
The registered model is as follows:
```
register_model(
    model_id="llama3-llava-next-8b",
    model_family_id…
```
-
Following instructions at https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/llama/README.md
I tried a bunch of different models and they all fail in `run.py` on:
```
Traceback (most recent cal…
```
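As a cross-check separate from the README's checkpoint-conversion, engine-build, and `run.py` flow that is failing here, TensorRT-LLM's high-level Python LLM API can confirm that the model itself loads and generates. A sketch, with a placeholder HF model id:

```python
# Hypothetical cross-check using TensorRT-LLM's high-level LLM API
# (independent of the examples/llama run.py flow).
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder model id
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

for output in llm.generate(["Hello, my name is"], sampling_params):
    print(output.prompt, "->", output.outputs[0].text)
```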
-
```
Generating train split: 15000 examples [00:00, 416735.51 examples/s]
Retrieval process: 0%| | 0/1 [00:00
```
-
### Confirmation Checklist
- [X] I have read the README.md and dependencies.md files
- [X] I have confirmed that no previous issue or discussion covers this bug
- [X] I have confirmed that the problem occurs in the latest code or a stable release
### Forge Commit or Tag
2995d78825b8f02c0ab14c01bd2db679528a3034
### Pyth…
-
I am training a reward model, but the job is killed immediately after running one step, without any explicit error. What could be the reason? I have 8 H20 GPUs and did not use Docker…
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…