-
After resolving the lack of AVX support on my GPU here: https://github.com/bmaltais/kohya_ss/issues/2582 (thanks, @b-fission), I went ahead and kicked off my LoRA training; it started training using just one …
-
I have a multi-GPU machine and want to run DiffDock's inference on all of the GPUs. Is it currently possible?
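One workaround, not specific to DiffDock, is to shard the input list and launch one inference process per GPU, pinning each with `CUDA_VISIBLE_DEVICES`. A hedged sketch follows; `inference.py` and its `--inputs` flag are hypothetical stand-ins for the real entry point and its CLI:

```python
import os
import subprocess

def shard(items, n):
    """Split items into n nearly equal chunks (round-robin)."""
    return [items[i::n] for i in range(n)]

def launch_per_gpu(inputs, num_gpus, script="inference.py"):
    """Start one worker process per GPU, each pinned via CUDA_VISIBLE_DEVICES.

    `script` and its command-line interface are placeholders; substitute
    the actual inference entry point and flags for your setup.
    """
    procs = []
    for gpu_id, chunk in enumerate(shard(inputs, num_gpus)):
        if not chunk:
            continue
        env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu_id)}
        cmd = ["python", script, "--inputs", ",".join(chunk)]
        procs.append(subprocess.Popen(cmd, env=env))
    for p in procs:
        p.wait()
```

Because each process sees exactly one GPU, the script itself needs no multi-GPU logic.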
-
These problems can be resolved by:
1.
> Further testing shows that quantization with llm_attacks is possible by using:
>
> * transformers==4.31.0
> * fschat==2.20.0
> pip will y…
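If it helps, the two pins quoted above can be recorded in a `requirements.txt` so the environment is reproducible:

```
transformers==4.31.0
fschat==2.20.0
```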
-
Everything works with 1 GPU and `num_workers` > 0, but if the number of GPUs is set to > 1 I get this error:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two d…
```
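A common cause of this error is that batch tensors stay on one device while a model replica lives on another. A minimal sketch of the usual fix is to move every value in the batch onto one target device before the forward pass; the `FakeTensor` stub below just stands in for objects with a torch-style `.to()` method so the pattern is self-contained:

```python
class FakeTensor:
    """Stand-in for a torch tensor: models only the .to(device) behavior."""
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        return FakeTensor(device)

def move_batch_to(batch, device):
    """Move every tensor in a dict-shaped batch onto a single device.

    With real torch, `device` would typically be
    next(model.parameters()).device, called just before model(**batch).
    """
    return {k: v.to(device) for k, v in batch.items()}
```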
-
When using multiple GPUs for evaluation, the model is loaded across several devices, which causes an inference error.
How can I force them to evaluate one task in parallel across multiple devices?
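One way to get data parallelism instead of model sharding, sketched under the assumption that each device can hold a full copy of the model, is to load one complete replica per device and assign whole tasks to devices round-robin, so no single task ever spans two GPUs:

```python
def assign_round_robin(tasks, devices):
    """Map each task to one device, cycling through the device list.

    Each device is assumed to hold a *full* model replica, so every task
    runs end-to-end on a single GPU instead of being split across them.
    """
    return {task: devices[i % len(devices)] for i, task in enumerate(tasks)}
```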
-
## ❓ General Questions
What is the proper way to actually utilize multiple GPUs? When I generate config, compile, and load the MLCEngine with multiple tensor shards it will still error out if the m…
-
How can I use multiple GPUs when doing a classification task?
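For classification, the usual data-parallel pattern (what `torch.nn.DataParallel` automates) is: scatter the batch into slices, run each slice on its own model replica, then gather the predictions. A framework-free sketch of that flow, where the replicas are plain callables standing in for per-GPU model copies:

```python
def scatter(batch, n):
    """Split a batch into up to n contiguous slices, one per replica."""
    step = (len(batch) + n - 1) // n  # ceiling division
    return [batch[i:i + step] for i in range(0, len(batch), step)]

def data_parallel_predict(batch, replicas):
    """Run each slice through its own replica and concatenate the outputs.

    `replicas` are callables standing in for per-GPU model copies; a real
    implementation would also move each slice to its replica's device.
    """
    outputs = []
    for piece, model in zip(scatter(batch, len(replicas)), replicas):
        outputs.extend(model(piece))
    return outputs
```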
-
I'm using lm-eval v0.4.2 to evaluate Llama 7b on the open llm leaderboard benchmark.
I found that there are accuracy gaps between single GPU and multiple GPUs as below. (I used data parallel)
| |…
-
First of all, thank you for maintaining the code. I would like to try training on multiple GPUs with my own dataset, but I run into the following issue, which is very strange.
-------------------------…
-
### Python -VV
```shell
Python 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
```
### Pip Freeze
```shell
absl-py @ file:///home/conda/feedstock_root/build_artifact…