bioinform / neusomatic

NeuSomatic: Deep convolutional neural networks for accurate somatic mutation detection

CUDA example #24

Closed lmiroslaw closed 5 years ago

lmiroslaw commented 6 years ago

How do I call train.py on a workstation with CUDA enabled? I guess some of the parameters, such as num_threads, will not be relevant.

Could you provide an example?

msahraeian commented 6 years ago

Hi @lmiroslaw ,

As noted in the README, by default the code will use all GPUs visible to it. You can control whether GPUs are used (and which specific GPU devices on your machine) with the CUDA_VISIBLE_DEVICES environment variable. For instance:

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py ...

means you want to use the first four GPU devices on your node.

And if you want to use only the CPU, you can set

CUDA_VISIBLE_DEVICES= python train.py ...

num_threads only specifies the number of CPU threads used for worker processes (e.g., for loading data).
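If you want to confirm which devices PyTorch actually sees after setting CUDA_VISIBLE_DEVICES, a quick standalone check is enough; the snippet below is a minimal sketch, independent of NeuSomatic's own scripts (the check_gpus.py file name is just illustrative):

# check_gpus.py -- minimal sketch to verify which devices PyTorch sees
# (not part of NeuSomatic; run it with the same CUDA_VISIBLE_DEVICES setting)
import torch

print("CUDA available:", torch.cuda.is_available())
print("Visible GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print("Device {}: {}".format(i, torch.cuda.get_device_name(i)))

For example, CUDA_VISIBLE_DEVICES=0,1 python check_gpus.py should report two visible devices, while CUDA_VISIBLE_DEVICES= python check_gpus.py should report zero.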

lmiroslaw commented 5 years ago

Thanks for the hint.

Now I am trying to determine whether the CPU or GPU run was executed successfully. However, in the log files I don't see any reference to "We use X GPUs".

Could you update this line: if torch.cuda.device_count() > 1: logger.info("We use {} GPUs!".format(torch.cuda.device_count())) so that it also logs a message such as "We don't use GPUs" when no GPU is used? See the sketch below.
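Something along these lines (a sketch of the change I have in mind, assuming logger is the existing logging.Logger used in train.py):

# sketch of the suggested change; the condition mirrors the original line,
# so with exactly one GPU this still falls through to the else branch
if torch.cuda.device_count() > 1:
    logger.info("We use {} GPUs!".format(torch.cuda.device_count()))
else:
    logger.info("We don't use GPUs")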

msahraeian commented 5 years ago

@lmiroslaw We already print a line that reflects this. If a GPU is used, we print use_cuda: True, and if not it says use_cuda: False. Then the line you noted will mention how many GPU devices are being used if we are in GPU mode.
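For reference, the check behind those messages follows the standard PyTorch pattern; the snippet below is a minimal sketch of that pattern, not a copy of NeuSomatic's train.py:

import logging
import torch

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# standard PyTorch pattern: GPU mode if any CUDA device is visible
use_cuda = torch.cuda.is_available()
logger.info("use_cuda: {}".format(use_cuda))
if torch.cuda.device_count() > 1:
    logger.info("We use {} GPUs!".format(torch.cuda.device_count()))

# the model and tensors are then moved to the selected device
device = torch.device("cuda" if use_cuda else "cpu")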