Open ShabnamRA opened 6 months ago
Hi Maarten, I'm attempting to execute one of your examples in Google Colab for processing large-scale databases. Here are the specifications of my machine: 8 NVIDIA A100 cards and a 50TB SSD. However, when running the code, it appears to only utilize one of the GPUs. Could you advise on how I can distribute the workload across all 8 cards?

That depends on the underlying models you choose and whether they support multi-GPU. For instance, I believe cuML's UMAP has a multi-GPU implementation, although I'm not sure whether that applies to both training and inference. You would have to check whether the underlying models support it.
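As a hedged sketch of what "swapping in GPU-capable sub-models" could look like: BERTopic accepts custom `umap_model` and `hdbscan_model` arguments, and RAPIDS cuML ships GPU versions of UMAP and HDBSCAN. The specific hyperparameter values below are illustrative, not recommendations, and the multi-GPU behavior of each sub-model would still need to be verified against the cuML documentation:

```python
def build_pipeline():
    """Return a BERTopic model wired to cuML's GPU sub-models,
    or None if the GPU libraries are not installed.
    This is a sketch under stated assumptions, not a verified setup."""
    try:
        from bertopic import BERTopic
        from cuml.manifold import UMAP      # GPU UMAP from RAPIDS cuML
        from cuml.cluster import HDBSCAN    # GPU HDBSCAN from RAPIDS cuML
    except ImportError:
        return None

    # Illustrative hyperparameters only
    umap_model = UMAP(n_components=5, n_neighbors=15, min_dist=0.0)
    hdbscan_model = HDBSCAN(min_samples=10, min_cluster_size=50)
    return BERTopic(umap_model=umap_model, hdbscan_model=hdbscan_model)

# For the embedding stage specifically, sentence-transformers can spread
# documents over several GPUs with its multi-process encoding pool; the
# precomputed embeddings can then be passed to fit_transform so BERTopic
# skips its own single-device encoding step:
#
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   pool = model.start_multi_process_pool(
#       target_devices=[f"cuda:{i}" for i in range(8)])
#   embeddings = model.encode_multi_process(docs, pool)
#   topics, probs = topic_model.fit_transform(docs, embeddings=embeddings)
#   model.stop_multi_process_pool(pool)
```

Note that this only parallelizes the embedding step; whether UMAP and HDBSCAN themselves can use more than one GPU depends on cuML, as mentioned above.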