huawei-noah / trustworthyAI

Trustworthy AI related projects
Apache License 2.0

Question about castle.algorithms.RL using GPU #133

Closed (Ethan-Chen-plus closed this 1 year ago)

Ethan-Chen-plus commented 1 year ago

[screenshot]

When I use castle.algorithms.RL to fit my data, I set the device to GPU, but checking with nvidia-smi shows no process running on the GPUs.

shaido987 commented 1 year ago

Hello,

Could you try using RL(device_type='gpu') (in lowercase) to see if it works? You can also specify the device ID with the device_ids parameter, e.g., device_ids=0.
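For example, something like the following should work. Here `X` is assumed to be a NumPy array of observational data; the `learn()` call and `causal_matrix` attribute follow gcastle's usual algorithm interface:

```python
import numpy as np
from castle.algorithms import RL

# Observational data, shape (n_samples, n_nodes); path is illustrative.
X = np.loadtxt('data.csv', delimiter=',')

# Lowercase 'gpu' and an explicit single device ID.
rl = RL(device_type='gpu', device_ids=0)
rl.learn(X)

print(rl.causal_matrix)  # learned adjacency matrix
```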

Ethan-Chen-plus commented 1 year ago

[screenshot]

This works. However, I still have two questions:

  1. When I set device_ids to all eight cards (0~7), it only runs on card 0.
  2. The GPU does seem faster than the CPU, but not by much: 4.00 s/it vs. 4.25 s/it.
shaido987 commented 1 year ago

Nice~

For your questions,

  1. Setting device_ids changes the CUDA_VISIBLE_DEVICES environment variable, but I believe the code itself only ever uses a single device, so the parameter is mainly useful for choosing which card to run on (see the sketch after this list).
  2. This mainly depends on the problem size: for smaller problems the data-transfer overhead dominates, which can make the CPU faster. Another thing to note is that the RL method here is slow on larger graphs; the paper's conclusion mentions that ~30 nodes is fine, but anything much larger can be problematic.
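As an illustration of point 1, here is a minimal sketch of pinning the job to one specific card by restricting device visibility by hand. That this mirrors what device_ids does internally is an assumption based on the comment above, not something checked against the gcastle source:

```python
import os

# Expose only physical GPU 2 to this process. Must be set before anything
# initializes CUDA (e.g., before importing torch-backed code).
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

from castle.algorithms import RL

# Inside this process, the only visible card is now device 0,
# which maps to physical GPU 2.
rl = RL(device_type='gpu', device_ids=0)
```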
Ethan-Chen-plus commented 1 year ago

Thanks a lot for your help!