-
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model par…
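For context on the first cause the message lists, here is a minimal sketch of the pattern it warns about, assuming a standard PyTorch DDP setup launched with `torchrun`; the module and the extra loss term are hypothetical, not taken from the issue's code:

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launched with `torchrun --nproc_per_node=N this_script.py`
dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = nn.Linear(10, 10).cuda()
ddp_model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

x = torch.randn(8, 10, device="cuda")
out = ddp_model(x)                     # parameters used inside forward(): fine

# Pattern the error message warns about: using a module parameter
# outside forward(). The extra use in the autograd graph can mark the
# same gradient "ready" twice during backward under DDP.
reg = ddp_model.module.weight.norm()
loss = out.sum() + reg
loss.backward()
optimizer.step()
```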
-
I started training with this command:
'python main.py --base configs/autoencoder/vqmodel1.yaml -t --gpus 4,5'
but I got this:
Everything works fine and the steps per epoch are halved, but only one GPU is…
-
It is common to use a DataLoader with a DistributedSampler when training on multiple GPUs. So why isn't a distributed sampler used in `examples/simple_trainer.py`? Is there a reason for that?
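For reference, a minimal sketch of the pattern the question refers to: a DataLoader paired with a DistributedSampler so each rank sees a distinct shard of the data. The dataset and batch size are placeholders, not what `examples/simple_trainer.py` actually does:

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

# Assumes one process per GPU, e.g. launched with `torchrun`.
dist.init_process_group("nccl")

dataset = TensorDataset(torch.randn(1000, 3))          # placeholder dataset
sampler = DistributedSampler(dataset, shuffle=True)    # shards the dataset across ranks
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(10):
    # Reseed the shard order each epoch so every rank sees a fresh permutation.
    sampler.set_epoch(epoch)
    for (batch,) in loader:
        ...  # training step
```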
-
Hi, thanks for your great work!
I ran the DSP and PAB examples from examples/latte on A800 GPUs. The results I obtained are as follows:
![image](https://github.com/user-attachments/assets/ec951…
-
# Implement Multi-GPU Support in Anomalib
- Depends on: https://github.com/openvinotoolkit/anomalib/issues/2257
## Background
Anomalib currently uses PyTorch Lightning under the hood, which provi…
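As a point of reference, here is a minimal sketch of the multi-GPU knobs PyTorch Lightning exposes through its Trainer, assuming a recent Lightning release; the toy module and data are placeholders and not an Anomalib model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import lightning.pytorch as pl
from torch.utils.data import DataLoader, TensorDataset

class TinyModule(pl.LightningModule):  # placeholder model, not an Anomalib model
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

train_loader = DataLoader(
    TensorDataset(torch.randn(256, 8), torch.randn(256, 1)), batch_size=32
)

# Lightning handles process launch, per-rank samplers, and gradient sync.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,          # or a list of GPU indices, e.g. [0, 1]
    strategy="ddp",
    max_epochs=1,
)
trainer.fit(TinyModule(), train_loader)
```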
-
I tried to use --gpus=0,1,2,3 to train, but got this error:
>>raise MXNetError(py_str(_LIB.MXGetLastError()))
>>mxnet.base.MXNetError: Error in operator rois: Shape inconsistent, Provided=(1,3), inferred s…
-
Hi,
Could you please let me know how I can use the guidance with multi-gpu settings? I tried **models.transformers(modelname, device-map='auto')**, but checking with nvidia-smi during the inference t…
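In case it helps, here is a minimal sketch of how a Hugging Face model is typically sharded across GPUs with the accelerate-backed `device_map="auto"` (note the underscore); the checkpoint name is a placeholder and this is not the repository's own API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",        # requires `accelerate`; spreads layers over all visible GPUs
    torch_dtype=torch.float16,
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0]))
```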
-
Could the distillation networks be trained on multiple GPUs?
-
Hi, I would like to know whether this software can use multiple GPUs (i.e., MPI+CUDA) to accelerate the discontinuous Galerkin time-domain method. Thanks!
-
Thanks for your great work!
I have 2 GPUs, so I want to run inference on multiple GPUs.
How can I use multiple GPUs for inference?