Is there a way to set 'resources_per_trial' by passing a parameter to the tuning object or by setting an environment variable? Currently, when it is run with n_jobs=1, it requests 2.0 CPUs per trial, but the cluster only has 1, so it throws the following warning/error and does not work:
(scheduler +118h16m59s) Error: No available node types can fulfill resource request {'CPU': 2.0}. Add suitable node types to this cluster to resolve this issue.
WARNING insufficient_resources_manager.py:123 -- Ignore this message if the cluster is autoscaling. You asked for 2.0 cpu and 0 gpu per trial, but the cluster only has 1.0 cpu and 0 gpu. Stop the tuning job and adjust the resources requested per trial (possibly via `resources_per_trial` or via `num_workers` for rllib) and/or add more resources to your Ray runtime
So, as I understand it, when n_jobs is set to > 0, it looks at the available CPUs, divides them into fractions, and ceils the result to set resources_per_trial for each trial. But I could not find a solution for my case, where I want it fixed at cpu: 1, gpu: 0 at all times.
You should be able to pass resources_per_trial as a key in the tune_params argument of the fit method. It will override whatever tune-sklearn sets automatically.
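For example, a minimal sketch assuming tune-sklearn's TuneSearchCV with a scikit-learn estimator; the dataset and search space here are illustrative placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from tune_sklearn import TuneSearchCV

# Placeholder data just to make the example runnable.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

search = TuneSearchCV(
    SGDClassifier(),
    param_distributions={"alpha": [1e-4, 1e-3, 1e-2]},
    n_trials=3,
    n_jobs=1,
)

# tune_params is forwarded to Ray Tune, so resources_per_trial set here
# should override the value tune-sklearn computes automatically.
search.fit(X, y, tune_params={"resources_per_trial": {"cpu": 1, "gpu": 0}})
```

With resources_per_trial pinned to cpu: 1, gpu: 0, each trial should fit on a single-CPU cluster regardless of what n_jobs would otherwise request.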