Lyken17 / Efficient-PyTorch

My best practice of training large dataset using PyTorch.

GIL Claim #19

Closed: harveyslash closed this issue 4 years ago

harveyslash commented 4 years ago

I am a bit confused about what you mentioned about the GIL not allowing truly parallel code. The official docs here seem to claim that if you set num_workers to anything > 0, the DataLoader uses separate Python worker processes.

Lyken17 commented 4 years ago

The GIL point is about nn.DataParallel, which replicates the model across GPUs using threads inside a single Python process, so those threads contend for one GIL. DataLoader workers (num_workers > 0) are separate processes, each with its own interpreter and its own GIL, so they are not affected.
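The distinction above can be sketched without torch, using only the standard library. This is an illustrative analogy, not PyTorch's actual implementation: threads (like nn.DataParallel's replica threads) all run inside one interpreter and share one GIL, while forked processes (like DataLoader workers) each get their own interpreter. The function names here are my own; the `fork` start method assumes a POSIX platform.

```python
import os
import threading
import multiprocessing as mp


def _record_pid(queue):
    # Runs in a child process: report which interpreter we are in.
    queue.put(os.getpid())


def thread_pids(n=2):
    """PIDs observed by n threads. Analogous to nn.DataParallel:
    every replica thread lives in the same process, sharing one GIL."""
    out = []
    lock = threading.Lock()

    def work():
        with lock:
            out.append(os.getpid())

    threads = [threading.Thread(target=work) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out


def worker_pids(n=2):
    """PIDs observed by n forked child processes. Analogous to
    DataLoader with num_workers > 0: each worker is an independent
    interpreter with its own GIL, so loading runs truly in parallel."""
    ctx = mp.get_context("fork")  # assumption: POSIX platform
    queue = ctx.Queue()
    procs = [ctx.Process(target=_record_pid, args=(queue,)) for _ in range(n)]
    for p in procs:
        p.start()
    pids = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return pids
```

Running both shows that all threads report the parent's PID, while each worker reports a distinct PID. This is also why, for multi-GPU training, DistributedDataParallel (one process per GPU) is generally preferred over the thread-based nn.DataParallel.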


harveyslash commented 4 years ago

I see. Thanks for clarifying!