-
import cv2  # OpenCV

GPU_COUNT = 1
capture = cv2.VideoCapture(0)  # open the default camera
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
while True:
    ret, frame = capture.read(…
-
Hi All,
I can train a model on one GPU with my own datasets. Now I want to train a model on two GPUs; how do I do that?
Do I just change GPU_COUNT = 2 in the 'mrcnn--> config.py' file? Is that correct? Are …
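For context, in the matterport Mask_RCNN repo the multi-GPU setting is normally made by subclassing `Config` rather than editing `config.py` in place. The sketch below is a self-contained stand-in that reproduces only the relevant attributes and the batch-size bookkeeping (the real base class lives in `mrcnn/config.py`; the subclass name here is illustrative):

```python
# Stand-in sketch of the matterport Mask_RCNN config convention.
# The real base class is mrcnn.config.Config; this copy keeps only
# GPU_COUNT / IMAGES_PER_GPU and the derived batch size so the
# example runs on its own.

class Config:
    GPU_COUNT = 1       # number of GPUs to train on
    IMAGES_PER_GPU = 2  # per-GPU batch size

    def __init__(self):
        # The real Config derives the effective batch size the same way.
        self.BATCH_SIZE = self.IMAGES_PER_GPU * self.GPU_COUNT

class TwoGPUConfig(Config):
    """Illustrative subclass: override GPU_COUNT instead of editing config.py."""
    GPU_COUNT = 2

print(TwoGPUConfig().BATCH_SIZE)  # effective batch size across both GPUs: 4
```

Note that raising `GPU_COUNT` also raises the effective batch size, so learning-rate and step-count settings may need adjusting to match.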
-
Thanks for your wonderful work. Is it possible to train the model using multiple GPUs?
-
I tried to run your code on multiple GPUs, but the network raises
> "RuntimeError: Scatter is not differentiable twice"
after adding `torch.nn.DataParallel()` to the Discriminato…
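For reference, a minimal sketch of wrapping a discriminator in `torch.nn.DataParallel` (the architecture below is purely illustrative, not the one from the post). This error typically appears when a double backward, e.g. a WGAN-GP gradient penalty, is propagated through DataParallel's scatter op; computing that term on the unwrapped module, or switching to `DistributedDataParallel`, is a commonly suggested workaround:

```python
import torch
import torch.nn as nn

# Illustrative discriminator; the layer sizes are assumptions,
# not taken from the original post.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(16, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

model = Discriminator()
if torch.cuda.device_count() > 1:
    # Replicates the module across visible GPUs and scatters each batch.
    model = nn.DataParallel(model).cuda()

out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 1])
```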
-
I used 4 GPUs on 1 node:
`torchrun --standalone --nproc_per_node=4 train.py --compile=False`
But the training speed is about the same as with 1 GPU. Why?
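One common cause worth checking: `torchrun` only launches the processes; the training script itself must join a process group, pin each rank to its own GPU, wrap the model in `DistributedDataParallel`, and shard the data (e.g. with `DistributedSampler`). Otherwise all 4 ranks redundantly do the same single-GPU work. A sketch of that setup (the function and variable names here are illustrative assumptions, not the script's actual code):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model):
    # torchrun sets LOCAL_RANK (among others) in each process it spawns.
    if "LOCAL_RANK" not in os.environ:
        return model  # not launched via torchrun: plain single-process run
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")  # one group shared by all ranks
    torch.cuda.set_device(local_rank)        # one GPU per rank
    # Gradients are now all-reduced across ranks on every backward pass.
    return DDP(model.cuda(local_rank), device_ids=[local_rank])

model = setup_ddp(torch.nn.Linear(8, 2))
print(type(model).__name__)  # "Linear" here; "DistributedDataParallel" under torchrun
```

Even with DDP in place, remember each rank must see a different shard of the data, or the extra GPUs add no throughput.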
-
Is there a way to run this on multiple GPUs, or to run different nodes on different RTX A6000 GPUs?
Looking to generate longer videos. Thanks.
-
I know how to run inference on a single GPU, using `OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, gpu_id)`. But when I run inference on multiple GPUs, it reports an error
like [E:onnxruntim…
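For reference, in ONNX Runtime's Python API (the equivalent of the C call above) each `InferenceSession` can be pinned to one GPU through the CUDA execution provider's `device_id` option; a common multi-GPU inference pattern is one session per device, driven from separate threads or processes. A sketch, where the `make_session` helper and the `model.onnx` path are illustrative assumptions:

```python
def cuda_providers(gpu_id):
    # Execution-provider list that pins a session to one GPU,
    # with a CPU fallback if the CUDA provider cannot be loaded.
    return [
        ("CUDAExecutionProvider", {"device_id": gpu_id}),
        "CPUExecutionProvider",
    ]

def make_session(model_path, gpu_id):
    # Hypothetical helper: one InferenceSession per device.
    # onnxruntime is imported lazily so the sketch loads without it.
    import onnxruntime as ort
    return ort.InferenceSession(model_path, providers=cuda_providers(gpu_id))

# e.g. one session per GPU: sessions = [make_session("model.onnx", i) for i in range(2)]
print(cuda_providers(1)[0])  # ('CUDAExecutionProvider', {'device_id': 1})
```

A single session cannot spread one inference call across GPUs; parallelism comes from running the per-device sessions concurrently.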
-
Hello,
We're trying to run MusicGen training/fine-tuning from the audiocraft repo using dora. We've been able to run single-node training with `dora run -d solver`. When running the above using tor…
-
When I was trying to train on multiple GPUs, I used `OMP_NUM_THREADS=4 WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py` according to #8. It looks like
…
-
In my experiment, this version of Caffe does not support multi-GPU training: training on two GPUs (batch size 16 per GPU) does not halve the training time relative to one GPU (batch size 32). Does …