Dear team,

I am trying to replicate the code in the example. Since I do not have CUDA, I changed the device to CPU, but it looks like the underlying code is not compatible with CPU-only machines. Is that the case, or did I miss something?

Here is the code and the error I get when I run it:
```python
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
Error:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[3], line 2
      1 # Model parameters are pulled from the url and stored in a local models/ dir.
----> 2 encoder, tokenizer = load_e3gnn_smiles_clip_e2e(
      3     freeze=True,
      4     device=DEVICE,
      5     # model parameters to load.
      6     doc_url="s3://terray-public/models/e3gnn_smiles_clip_e2e_1685977071_1686087379.pkl",
      7 )

File ~/anaconda3/envs/coati/lib/python3.9/site-packages/coati/models/io/coati.py:31, in load_e3gnn_smiles_clip_e2e(doc_url, device, freeze, strict, old_architecture, override_args, model_type, print_debug)
     28 print(f"Loading model from {doc_url}")
     30 with cache_read(doc_url, "rb") as f_in:
---> 31     model_doc = pickle.loads(f_in.read(), encoding="UTF-8")
     32 model_kwargs = model_doc["model_kwargs"]
     34 if print_debug:

File ~/anaconda3/envs/coati/lib/python3.9/site-packages/torch/storage.py:337, in _load_from_bytes(b)
    336 def _load_from_bytes(b):
--> 337     return torch.load(io.BytesIO(b))

File ~/anaconda3/envs/coati/lib/python3.9/site-packages/torch/serialization.py:1028, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
   1026 except RuntimeError as e:
   1027     raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
...
    262     'to map your storages to the CPU.')
    263 device_count = torch.cuda.device_count()
    264 if device >= device_count:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
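From the traceback, the failure seems to come from the internal `torch.load` call inside the checkpoint deserialization, which does not expose a `map_location` argument. One workaround I have been considering (a sketch only; it assumes the pickled checkpoint reaches `torch.load` exactly as shown above) is to patch `torch.load` before calling the loader so every call maps storages to the CPU:

```python
import io
import torch

# Workaround sketch: the coati loader unpickles the checkpoint itself,
# so we cannot pass map_location through. Patch torch.load so that all
# storages are remapped to the CPU during deserialization.
_original_load = torch.load

def _load_to_cpu(*args, **kwargs):
    kwargs["map_location"] = torch.device("cpu")
    return _original_load(*args, **kwargs)

torch.load = _load_to_cpu

# After this patch, load_e3gnn_smiles_clip_e2e(..., device=DEVICE)
# should deserialize CUDA-saved tensors onto the CPU.
```

Would this be a supported way to run the example on a CPU-only machine, or is there an intended `map_location` path I missed?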