kubeflow / pytorch-operator

PyTorch on Kubernetes
Apache License 2.0

Distributed mnist is unexpectedly slow #271

Open panchul opened 4 years ago

panchul commented 4 years ago

I ran the mnist example with 2 workers on a 2-node Kubernetes cluster running on 2 VMs, and expected it to be faster compared with the 1-worker case. However, the time actually increased, and it got even slower the more workers I added. I made several test runs, and the timing is reproducible:

No GPUs (they are explicitly disabled in the container spec template). Here is the node information:

$ k get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-12102812-vmss000000   Ready    agent   28d   v1.15.10
aks-nodepool1-12102812-vmss000001   Ready    agent   28d   v1.15.10

Below is the minimally-modified pytorch-operator/examples/mnist/v1/pytorch_job_mnist_gloo.yaml I used:


```yaml
apiVersion: "kubeflow.org/v1"
kind: "PyTorchJob"
metadata:
  name: "pytorch-dist-mnist-gloo"
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: alek8106/pytorch-dist-mnist-test:1.0
              args: ["--backend", "gloo", "--no-cuda"]
              resources:
                limits:
              #    nvidia.com/gpu: 1
    Worker:
      #replicas: 1
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: alek8106/pytorch-dist-mnist-test:1.0
              args: ["--backend", "gloo", "--no-cuda"]
              resources:
                limits:
              #    nvidia.com/gpu: 1
```
issue-label-bot[bot] commented 4 years ago

Issue-Label Bot is automatically applying the labels:

Label: kind/bug (probability 0.78)


gaocegege commented 4 years ago

What is the network bandwidth in your cluster?

panchul commented 4 years ago

@gaocegege , local network, no unusual bottlenecks.

xq2005 commented 4 years ago

@panchul I ran into a similar problem when using DataParallel(...) in my code, but I did not find a good solution. Distributed deep learning workloads depend heavily on network bandwidth. If there is no bottleneck on the network, try enlarging the batch size in proportion to the number of workers (a sketch of one way to do this follows below).

Refer to https://github.com/pytorch/pytorch/issues/3917 for more detail.
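
One way to follow this suggestion in the mnist example is to grow the per-worker batch size with the number of replicas, so that each gradient exchange is amortized over more compute. This is only a minimal sketch, assuming the operator injects the standard WORLD_SIZE and RANK environment variables; BASE_BATCH_SIZE and build_train_loader are illustrative names, not part of the original example:

```python
# Sketch only: scale the per-worker batch size with the number of replicas.
# BASE_BATCH_SIZE and build_train_loader are illustrative, not from the example.
import os

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision import datasets, transforms

BASE_BATCH_SIZE = 64  # assumed single-worker batch size


def build_train_loader():
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    rank = int(os.environ.get("RANK", 0))

    # Larger local batch as replicas are added -> fewer iterations per epoch,
    # so fewer gradient all-reduces relative to the compute being done.
    local_batch_size = BASE_BATCH_SIZE * world_size

    dataset = datasets.MNIST(
        "./data", train=True, download=True, transform=transforms.ToTensor()
    )
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    return DataLoader(dataset, batch_size=local_batch_size, sampler=sampler)
```

If the batch size is scaled like this, the learning rate usually needs adjusting as well (e.g. the common linear-scaling heuristic).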

lwj1980s commented 3 years ago

I am running into the same problem. Have you solved it?

jalola commented 3 years ago

After each iteration (i.e. each batch), all of the replicas send out their gradients, and the payload is roughly the size of the network. If the model size is 100 MB:

1 node: no gradients need to be sent.
2 nodes: 2 x 100 MB = 200 MB per iteration.
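
To check how this arithmetic plays out for the mnist example, the gradient payload can be estimated directly from the model's parameters. A rough sketch; the Net class here is meant to mirror the small convolutional net in the example and the numbers are only an estimate:

```python
# Estimate per-iteration gradient traffic from the parameter sizes of a model.
# Net is a stand-in roughly matching the mnist example's network.
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 50, 5)
        self.fc1 = nn.Linear(4 * 4 * 50, 500)
        self.fc2 = nn.Linear(500, 10)


model = Net()
# fp32 gradients are the same size as the parameters themselves.
grad_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
world_size = 2  # replicas in the PyTorchJob above

print(f"per-replica gradient payload: {grad_bytes / 1e6:.2f} MB")
print(f"total exchanged per iteration (~world_size x payload): "
      f"{world_size * grad_bytes / 1e6:.2f} MB")
```

For a model this small the payload is only a few megabytes, so the per-iteration latency of the gloo all-reduce over the pod network may matter as much as raw bandwidth.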

You can check your network bandwidth and compare it with the model size to see whether the network is the bottleneck. If the network is the problem, you can either use a bigger batch size, as xq2005 said, or use no_sync in DDP.
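
For reference, a minimal sketch of the no_sync approach: accumulate gradients locally for a few batches and only all-reduce on the last one, cutting communication roughly by the accumulation factor. accum_steps, train_loader, optimizer, and the loss function are assumptions (nll_loss matches the example's log-softmax output), not code from the example:

```python
# Sketch of gradient accumulation with DDP's no_sync(): all-reduce only every
# accum_steps batches instead of every batch. Names other than no_sync() are
# illustrative.
import contextlib

import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP


def train_epoch(ddp_model: DDP, train_loader, optimizer, accum_steps: int = 4):
    ddp_model.train()
    optimizer.zero_grad()
    for step, (data, target) in enumerate(train_loader):
        sync_now = (step + 1) % accum_steps == 0
        # no_sync() skips the gradient all-reduce; grads just accumulate locally.
        ctx = contextlib.nullcontext() if sync_now else ddp_model.no_sync()
        with ctx:
            loss = F.nll_loss(ddp_model(data), target)
            (loss / accum_steps).backward()
        if sync_now:
            optimizer.step()  # the backward above triggered the all-reduce
            optimizer.zero_grad()
```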

gaocegege commented 2 years ago

Ref https://github.com/kubeflow/training-operator/issues/1454