I am trying to do large-scale hyperparameter tuning. I have a local setup with 4 GPUs. My model is small (~1 GB), so I was thinking of running multiple trials on a single GPU to parallelize tuning even more.
Even setting `resources_per_trial={"gpu": 0.3}` is not helping.
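Here is a minimal sketch of what I am running, in case it helps reproduce the issue (`train_model` and the `lr` search space are placeholders for my actual trainable and config):

```python
import ray
from ray import tune
import torch


def train_model(config):
    # Placeholder for my real training loop on the ~1 GB model.
    model = torch.nn.Linear(10, 10).to("cuda")  # stand-in model
    # ... training steps using config["lr"] ...
    tune.report(loss=0.0)  # dummy metric so Tune tracks the trial


ray.init(num_gpus=4)

tune.run(
    train_model,
    config={"lr": tune.loguniform(1e-4, 1e-1)},  # illustrative search space
    num_samples=40,
    # Fractional GPU request: I expected ~3 trials to pack onto each GPU.
    resources_per_trial={"cpu": 1, "gpu": 0.3},
)
```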
Is there a way I can do this?
Please help.