tinkoff-ai / CORL

High-quality single-file implementations of SOTA Offline and Offline-to-Online RL algorithms: AWAC, BC, CQL, DT, EDAC, IQL, SAC-N, TD3+BC, LB-SAC, SPOT, Cal-QL, ReBRAC
https://arxiv.org/abs/2210.07105
Apache License 2.0

Error installing dependencies #39

Closed: gballardin closed this 1 year ago

gballardin commented 1 year ago

When I try to install the dependencies in a brand new Conda environment:

pip install -r requirements/requirements_dev.txt

it errors out with:

ERROR: Could not find a version that satisfies the requirement torch==1.11.0+cu113 (from versions: 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0)
ERROR: No matching distribution found for torch==1.11.0+cu113

I am using Python 3.8.16. Am I using the wrong Python version?

Howuhh commented 1 year ago

Hi @gballardin! Just to check one hypothesis: do you have GPUs available in this environment? I can only reproduce this error on my local laptop without GPUs, while it works both inside and outside of Docker when GPUs (or CUDA) are available.
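
If you are not sure, a quick way to check whether a GPU is visible in that environment (assuming an NVIDIA driver is installed) is:

nvidia-smi

If that command fails or lists no devices, there is no usable GPU in the environment.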

gballardin commented 1 year ago

Ah, good point. I thought (probably incorrectly) that the device did not affect the environment setup, only the actual training. That is why I planned to test the basics on CPUs first, then move to GPUs after ironing out the basic kinks.

Let me test that out.

Howuhh commented 1 year ago

@gballardin No, you can test it on CPU (we also do that all the time); for that you probably just have to pip install torch without the CUDA suffix (remove +cu113). So this is a problem with our requirements. Maybe we should provide a requirements_dev_cpu for such cases.
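
A rough sketch of that change (the torch pin is taken from the error above; other packages in the file may also carry CUDA suffixes that need the same treatment):

# in requirements/requirements_dev.txt, change
torch==1.11.0+cu113
# to the default PyPI build, which also runs on CPU
torch==1.11.0

# then reinstall
pip install -r requirements/requirements_dev.txt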

gballardin commented 1 year ago

Thank you for clarifying. Removing +cu113 did the trick in the CPU-only environment. I appreciate your help in figuring this out.