Hi,
Just wondering if aitextgen finetuning supports multi-threading/multi-CPU (parallel/distributed) training. I understand that execution is much faster on GPUs, but when a GPU is not available, can we get comparable performance using many CPUs?
I tried running the code twice, doubling the number of CPUs the second time, but saw no increase in performance.
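For reference, here is a minimal sketch of how I checked whether the extra CPUs were actually being used. This assumes only plain PyTorch (which aitextgen builds on); the thread settings shown here are PyTorch's, not an aitextgen-specific API:

```python
import torch

# PyTorch controls intra-op CPU parallelism via its thread count;
# adding more CPUs to the machine does not by itself make the
# training loop use them.
print("threads in use:", torch.get_num_threads())

# Explicitly request more threads before starting training
# (illustrative value; set this to your available core count).
torch.set_num_threads(4)
print("threads after set:", torch.get_num_threads())
```

Even with this set, I did not observe a speedup, which is why I am asking whether aitextgen itself supports multi-CPU training.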
Thanks