output:
Warning: Unable to load toolkit 'OpenEye Toolkit'. The Open Force Field Toolkit does not require the OpenEye Toolkits, and can use RDKit/AmberTools instead. However, if you have a valid license for the OpenEye Toolkits, consider installing them for faster performance and additional file format support: https://docs.eyesopen.com/toolkits/python/quickstart-python/linuxosx.html OpenEye offers free Toolkit licenses for academics: https://www.eyesopen.com/academic-licensing
Using weights from... runs/paper_baseline/weights.ckpt
set seed for random, numpy and torch
Loading test set...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 113.23it/s]
/home/yanapatj/miniconda3/envs/edm-dock/lib/python3.9/site-packages/torch/utils/data/dataloader.py:557: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 1, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
/home/yanapatj/miniconda3/envs/edm-dock/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:441: LightningDeprecationWarning: Setting Trainer(gpus=1) is deprecated in v1.7 and will be removed in v2.0. Please use Trainer(accelerator='gpu', devices=1) instead.
rank_zero_deprecation(
Traceback (most recent call last):
  File "/gpfs/projects/parisahlab/yanapatj/EDM-Dock/scripts/dock.py", line 130, in <module>
    trainer = Trainer(gpus=config['cuda'])
  File "/home/yanapatj/miniconda3/envs/edm-dock/lib/python3.9/site-packages/pytorch_lightning/utilities/argparse.py", line 340, in insert_env_defaults
    return fn(self, **kwargs)
  File "/home/yanapatj/miniconda3/envs/edm-dock/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 414, in __init__
    self._accelerator_connector = AcceleratorConnector(
  File "/home/yanapatj/miniconda3/envs/edm-dock/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 208, in __init__
    self._set_parallel_devices_and_init_accelerator()
  File "/home/yanapatj/miniconda3/envs/edm-dock/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 528, in _set_parallel_devices_and_init_accelerator
    raise MisconfigurationException(
lightning_lite.utilities.exceptions.MisconfigurationException: CUDAAccelerator can not run on your system since the accelerator is not available. The following accelerator(s) is available and can be passed into accelerator argument of Trainer: ['cpu'].
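
The traceback ends with Lightning refusing to build a CUDAAccelerator on a node that only exposes a CPU. A minimal workaround sketch, assuming `config['cuda']` in `dock.py` holds the requested GPU count, is to map the deprecated `Trainer(gpus=N)` spelling onto the newer `accelerator`/`devices` arguments and fall back to CPU when no CUDA device is present (the helper name and the flag are illustrative, not part of EDM-Dock):

```python
def select_accelerator(requested_gpus, cuda_available):
    """Map an old Trainer(gpus=N) config onto Trainer(accelerator=...,
    devices=...) keyword arguments, degrading to CPU when CUDA is
    unavailable -- the situation shown in the log above."""
    if requested_gpus and cuda_available:
        # Replaces the deprecated Trainer(gpus=N) call from dock.py line 130.
        return {"accelerator": "gpu", "devices": requested_gpus}
    # CPU fallback matches the accelerator Lightning reports as available.
    return {"accelerator": "cpu", "devices": 1}
```

In practice one would call it as `Trainer(**select_accelerator(config['cuda'], torch.cuda.is_available()))`, so the same script runs on both GPU and CPU-only allocations.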
input: python scripts/dock.py --run_path runs/paper_baseline --dataset_path examples
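
The earlier DataLoader warning (16 workers requested on a system that suggests at most 1) can be addressed the same way: clamp the requested worker count to the CPUs actually available before constructing the loader. A small sketch, assuming the worker count is configurable in `dock.py` (the helper name is hypothetical):

```python
import os

def capped_num_workers(requested):
    """Clamp a requested DataLoader worker count to the CPUs actually
    available, avoiding the 'excessive worker creation' warning on
    single-core allocations like the one in the log."""
    available = os.cpu_count() or 1
    return max(0, min(requested, available))
```

The result would then be passed as `DataLoader(..., num_workers=capped_num_workers(16))`.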