-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
deepspeed --num_gpus 4 ../../src/train.py \
--deepspeed ../deepspeed/ds_z3_offload_config.json \
…
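For reference, a ZeRO-3 CPU-offload config normally follows the standard DeepSpeed layout. The sketch below only illustrates that layout with an example filename; the actual `ds_z3_offload_config.json` shipped with the repository may set different values.

```bash
# Illustrative only: write a minimal ZeRO-3 CPU-offload config in the standard
# DeepSpeed format ("auto" values are resolved by the HF Trainer integration).
# The filename is a placeholder so the repo's own config is not overwritten.
cat > ds_z3_offload_example.json <<'JSON'
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "bf16": { "enabled": "auto" },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "stage3_gather_16bit_weights_on_model_save": true
  }
}
JSON
```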
-
conda activate vrenv
cd
pip install -e ../LLaVA
pip install -e ../gym-cards
pip install gymnasium[atari,accept-rom-license]
pip install stable-baselines3 wandb deepspeed sentencepiece git+https:…
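A quick sanity check (not part of the original steps) to confirm the installs resolved and that PyTorch can see the GPUs might look like:

```bash
# Hypothetical post-install check: import the key packages and report versions
# plus CUDA visibility before launching training.
python -c "import torch, gymnasium, stable_baselines3, deepspeed; \
print('torch', torch.__version__, '| cuda available:', torch.cuda.is_available()); \
print('gymnasium', gymnasium.__version__, '| sb3', stable_baselines3.__version__, '| deepspeed', deepspeed.__version__)"
```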
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
```
-
### Your current environment
```text
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86…
```
-
### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to try to find a similar issue and didn't find one.
- [X] I searched the Marvin documentation for t…
-
## 🐛 Bug
I built the library libtvm4j_runtime_packed.so with the prebuilt tar linked below:
https://github.com/mlc-ai/binary-mlc-llm-libs/blob/main/Mistral-7B-Instruct-v0.2/Mistral-7B-Instruct-v0.2-q4…
-
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTor…
```
-
### What happened?
Related to: #1471
So far I've tested open-mixtral-8x7b and everything works fine, but `mistral-embed` fails with the message 'Extra inputs are not permitted'
This…
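One way to narrow this down (a suggestion on my part, not something reported above) is to call Mistral's embeddings endpoint directly and see whether the 'Extra inputs are not permitted' validation error is reproduced upstream or only through the client wrapper:

```bash
# Hedged sketch: hit the documented Mistral embeddings endpoint directly,
# bypassing the client library, to see where the validation error originates.
curl -s https://api.mistral.ai/v1/embeddings \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-embed", "input": ["hello world"]}'
```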
-
Hello,
I attempted to run a BentoVLLM example on a Linux server, making sure all prerequisites were met and following the installation guide.
However, when trying to run the BentoML Se…
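For context, the BentoVLLM examples are normally started with BentoML's serve command from the example's directory; a sketch of the typical launch steps (the directory path is a placeholder, and the example's own README may differ):

```bash
# Hypothetical launch sequence for a BentoVLLM example.
cd path/to/bentovllm-example      # placeholder: the specific example directory
pip install -r requirements.txt   # install the example's dependencies
bentoml serve .                   # start the BentoML service defined there
```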
-
# Expected Behavior
Convert LoRA adapters for Mistral to ggml using `convert-lora-to-ggml.py`
Convert LoRA adapters for Llama 2 to ggml using `convert-lora-to-ggml.py`
# Current Behavior
same e…
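For reference, the conversion step in question is typically invoked by pointing the script at the directory holding the PEFT adapter files (`adapter_config.json` / `adapter_model.bin`); the path below is a placeholder, and the exact arguments may vary between llama.cpp revisions:

```bash
# Hedged sketch of the usual invocation; the adapter directory is a placeholder.
python convert-lora-to-ggml.py ./mistral-lora-adapter
```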