facebookresearch / kill-the-bits

Code for: "And the bit goes down: Revisiting the quantization of neural networks"

How to implement multi-GPU training? #20

Closed yw155 closed 5 years ago

yw155 commented 5 years ago

Hi @pierrestock, I would like to ask how to implement multi-GPU training with this code. Which parts can be run on multiple GPUs: quantization, fine-tuning, and global fine-tuning? Thank you.

pierrestock commented 5 years ago

Hi yw155,

Thanks for reaching out! Regarding the multi-GPU training:

You can use torch.distributed to perform the distributed training. Use the barrier() function so that GPUs 1-7 wait for GPU 0 to perform the quantization, and then use the broadcast() function to broadcast the centroids and assignments obtained by GPU 0 to GPUs 1-7.
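As a rough illustration of that pattern (not the repository's actual code), here is a minimal sketch with a hypothetical `quantizer` object whose `quantize()` method and `n_centroids`/`block_size`/`n_assignments` attributes are assumptions: rank 0 quantizes the layer, the other ranks wait at a barrier, and the result is then broadcast from rank 0 to everyone.

```python
import torch
import torch.distributed as dist

def quantize_layer_distributed(layer, quantizer, rank):
    # Hypothetical helper: `quantizer.quantize(layer)` is assumed to return
    # (centroids, assignments) as CUDA tensors on rank 0.
    if rank == 0:
        # Only GPU 0 runs the (expensive) quantization step.
        centroids, assignments = quantizer.quantize(layer)
    else:
        # Other ranks allocate buffers of the expected shapes to receive the
        # broadcast (shapes assumed known from the quantizer configuration).
        centroids = torch.empty(
            quantizer.n_centroids, quantizer.block_size, device="cuda")
        assignments = torch.empty(
            quantizer.n_assignments, dtype=torch.long, device="cuda")

    # GPUs 1-7 wait here until GPU 0 has finished quantizing.
    dist.barrier()

    # Rank 0 sends its centroids and assignments to all other ranks.
    dist.broadcast(centroids, src=0)
    dist.broadcast(assignments, src=0)
    return centroids, assignments
```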

Hope this helps,

Pierre

yw155 commented 5 years ago

Thank you very much.

Fight-hawk commented 5 years ago

@yw155 Hi, did you implement multi-gpu training?