-
When I tested llama2-70b on an A800 graphics card, I encountered the problem of insufficient GPU memory. How should I write the command if I want to test on two A800 graphics cards instead? I t…
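A minimal sketch of the usual starting point (an assumption, since the question's original command is not shown): make both A800s visible to the process, then pick a launch style that matches the framework. `your_test_script.py` below is a placeholder, not a real file.

```shell
# Expose both A800s to the process before launching.
export CUDA_VISIBLE_DEVICES=0,1
# Data-parallel launchers then take a process count, e.g.:
#   torchrun --nproc_per_node=2 your_test_script.py
# Frameworks with automatic sharding (e.g. HF transformers with
# device_map="auto") instead split the 70B weights across all
# visible GPUs without extra launcher flags.
```

For a 70B model, sharding the weights across both cards (the second style) is what actually relieves the out-of-memory condition; data parallelism alone still loads a full copy per GPU.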
-
- PyTorch-Forecasting version: 0.8.4
- PyTorch version: 1.8.0
- Python version: 3.8.8
- Operating System: CentOS
### Expected behavior
I'm working through the _Demand forecasting with the T…
-
I'm building the CUDA samples for multiple architectures, since the documentation says this can be done with the `SMS` option. My build command is:
```
make -j 72 HOST_COMPILER=g++ SMS='80 86'
```
I'…
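For background (an assumption about the samples build system, not stated in the question): each entry in the `SMS` list is expanded into an nvcc `-gencode` pair. A rough shell sketch of that expansion, with illustrative variable names:

```shell
# How a Makefile-style loop turns SMS='80 86' into nvcc -gencode flags.
SMS='80 86'
GENCODE_FLAGS=''
for sm in $SMS; do
  GENCODE_FLAGS="$GENCODE_FLAGS -gencode arch=compute_${sm},code=sm_${sm}"
done
echo "$GENCODE_FLAGS"
```

So `SMS='80 86'` builds fatbinaries containing SASS for both sm_80 (A100) and sm_86 (e.g. RTX 30-series) in one pass.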
-
Hi, how does one run this on the cloud and scale it across multiple gpus?
-
### Description & Motivation
I've experimented with PyTorch XLA using multiple NVIDIA A100 GPUs and observed that in most cases training is faster. So it would be really nice to have the option to…
-
### System Info
Hi,
I am currently trying to use the script run_mlm_wwm.py to perform continual pretraining with the Whole Word Masking task on a BERT model. My problem occurred when I was tryi…
-
Hi!
I'm trying to create multiple agents on different GPUs. When I don’t specify a particular GPU, I can create many agents as shown below:
```
clearml-agent daemon --detached --queue hello_que…
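# Hedged follow-up, not part of the original question: to pin each
# agent to a specific GPU, clearml-agent's daemon accepts a --gpus
# flag ("my_queue" below is a placeholder queue name):
#   clearml-agent daemon --detached --gpus 0 --queue my_queue
#   clearml-agent daemon --detached --gpus 1 --queue my_queue
```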
-
Currently, each SSD can only be controlled by one GPU. Is it possible for multiple GPUs to control and read from a single SSD?
-
scvi crashes when trying to train on multiple GPUs (2x Tesla P100-PCIE-16GB)
In an attempt to work around https://github.com/Lightning-AI/pytorch-lightning/issues/17212, `strategy='ddp_find_unus…
-
Is there any out-of-the-box method to run `FactChecker("hf:kundank/genaudit-usb-flanul2")` inference with multiple GPUs? My 24 GB GPU goes OOM.