Can you provide a code snippet to reproduce your error? It sounds like you're using the HuggingFace model, not the version in this repo.
Thank you for your reply. Yes, I'm using HuggingFace, because I want to use your molecule embeddings in our downstream analysis. Below is the code snippet. Is there another easy way to use your model if I don't use HuggingFace?
import torch
from transformers import AutoModel, AutoTokenizer

# Load the pretrained MoLFormer model and tokenizer from the HuggingFace Hub
model = AutoModel.from_pretrained("ibm/MoLFormer-XL-both-10pct", deterministic_eval=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ibm/MoLFormer-XL-both-10pct", trust_remote_code=True)

# Two example molecules as SMILES strings (caffeine and aspirin)
smiles = ["Cn1c(=O)c2c(ncn2C)n(C)c1=O", "CC(=O)Oc1ccccc1C(=O)O"]
inputs = tokenizer(smiles, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

outputs.pooler_output
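For our downstream analysis we take the pooled embeddings from pooler_output; a minimal sketch of the conversion step, assuming the snippet above ran successfully:

# Pull the pooled embeddings into NumPy, one row per input SMILES string
embeddings = outputs.pooler_output.numpy()
print(embeddings.shape)  # (2, hidden_size)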
I think you just need to update your version of transformers -- it looks like this feature was added in v4.31.0.
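In case it helps anyone hitting the same error: after running pip install --upgrade transformers, a quick sanity check of the installed version (a minimal sketch, nothing model-specific):

import transformers
print(transformers.__version__)  # should report 4.31.0 or later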
For future reference, this repo contains a standalone version of the code which can be used for training, etc. It does not contain the HuggingFace code.
Hello, thank you for your useful tool! When I use the model, the following error appears:
How can I skip this step? Thank you!