Closed KEggensperger closed 5 years ago
I can reproduce the issue on my machine with a freshly installed conda environment.
conda create -n autopytorch python pip
source activate autopytorch
git clone https://github.com/automl/Auto-PyTorch.git
cd Auto-PyTorch/
conda install pytorch-cpu -c pytorch
pip install -r requirements.txt
python setup.py install
cat README.md
vi test.py
python test.py
where test.py contains the example from the README.
Hi,
we forgot to check if cuda is available in predict(). My last commit should fix this issue.
Cheers,
Matthias
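The fix described above presumably amounts to falling back to the CPU when cuda=True was configured but no GPU is available. A minimal sketch of that guard pattern, assuming a hypothetical resolve_device helper (the lambdas stand in for the real probe, torch.cuda.is_available(); none of this is Auto-PyTorch's actual code):

```python
def resolve_device(requested_cuda, cuda_available):
    """Return the device to use, falling back to CPU when CUDA
    was requested but is not actually available."""
    return "cuda" if (requested_cuda and cuda_available()) else "cpu"

# With cuda=True configured but no GPU present, predict() should
# quietly use the CPU instead of raising an error.
print(resolve_device(True, lambda: False))   # cpu
print(resolve_device(True, lambda: True))    # cuda
print(resolve_device(False, lambda: True))   # cpu
```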
Yes, now it works! Still, the final incumbent has cuda=True, but feel free to close this issue for now.
The dictionary containing cuda=True is not the final incumbent, but the settings Auto-PyTorch was started with (the hyperparameters of Auto-PyTorch itself).
It was not possible to set the default value of cuda to cuda.is_available(), because that caused problems with pynisher. It seems to be impossible to call CUDA methods from different processes.
So we chose to set the default of cuda to True and then disable it if it is not available.
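The workaround described here can be sketched as follows. The defaults dictionary and effective_config function are hypothetical illustrations of the pattern, not Auto-PyTorch's real configuration code; the cuda_available callable stands in for torch.cuda.is_available(), which cannot be called at default-resolution time because pynisher runs jobs in separate processes:

```python
# Default stays True; probing CUDA while building defaults broke pynisher,
# since CUDA methods apparently cannot be called across processes.
DEFAULTS = {"cuda": True}

def effective_config(user_config, cuda_available):
    """Merge user settings over the defaults, then disable CUDA
    lazily (inside the worker process) if it turns out unavailable."""
    cfg = {**DEFAULTS, **user_config}
    if cfg["cuda"] and not cuda_available():
        cfg = {**cfg, "cuda": False}
    return cfg

print(effective_config({}, lambda: False))  # {'cuda': False}
```

This is why the printed settings show cuda=True even on a CPU-only machine: the default is only overridden later, inside the process that actually touches the GPU.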
When running
examples/basics/basic_classification.py
on a CPU, I receive the following output and error:

It seems like it is searching for a GPU only for scoring the final model (the incumbent also has 'cuda': True), but of course there is none, as I am using pytorch-cpu. Also, python -c "import torch; torch.cuda.is_available()" returns False on my machine.