Closed: hiteshsom closed this issue 3 years ago.
This issue has been automatically marked as stale and closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed, please comment on this thread.
I have the same question.
Hello! You're using TensorFlow models (see the TF prefix) but you're asking the tokenizer to return PyTorch tensors. You should either stick to full PyTorch (remove the TF prefix) or full TF (ask the tokenizer to return tf values).
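For instance, a consistent pairing looks like this (a minimal sketch; bert-base-uncased is just a placeholder checkpoint, not the model from this issue):

from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TFAutoModelForSequenceClassification,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Full TF: a TF-prefixed model paired with TensorFlow tensors from the tokenizer
tf_model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tf_outputs = tf_model(tokenizer("Hello world!", return_tensors="tf"))

# Full PyTorch: an un-prefixed model paired with PyTorch tensors
pt_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
pt_outputs = pt_model(**tokenizer("Hello world!", return_tensors="pt"))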
I ran into the same issue and don't know how to fix it. Here are my tensors and the traceback:
tensor([[    0, 24948,  5357,    88,    14,   397,  1176,  6724,     7, 35297,
         18109,  5814,    16,    43,   167,  4446, 37361,   381,     2,     1,
             1,   ...,     1,     1,     1]], device='cuda:0')
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
         ..., 0, 0, 0]], device='cuda:0')
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-1309f9063eea> in <module>
1 _, tokenizer = load_pho_bert()
----> 2 infer('Cảm ơn bạn đã chạy thử model của mình. Chúc một ngày tốt lành nha!', tokenizer)
2 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in display_shape(shape)
269
270 def display_shape(shape):
--> 271 return str(tuple(shape.as_list()))
272
273
AttributeError: 'torch.Size' object has no attribute 'as_list'
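The final frame makes the mismatch concrete: Keras inspects input shapes through shape.as_list(), a method that TensorFlow's tf.TensorShape has but PyTorch's torch.Size does not. A minimal illustration (assuming both libraries are installed):

import tensorflow as tf
import torch

print(tf.zeros((2, 3)).shape.as_list())   # [2, 3]: tf.TensorShape has as_list()
print(torch.zeros(2, 3).shape.as_list())  # AttributeError: 'torch.Size' object has no attribute 'as_list'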
Hello! You're using TensorFlow models (see the TF prefix) but you're asking the tokenizer to return PyTorch tensors. You should either stick to full PyTorch (remove the TF prefix) or full TF (ask the tokenizer to return tf values).
Please help me fix this problem. How should I change my code?

import torch
import tensorflow as tf

def infer(text, tokenizer, max_len=120):
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    print(device)
    class_names = ['thế giới', 'thể thao', 'văn hóa', 'vi tính']
    model = tf.keras.models.load_model('./models/cnn_nlp_text_classification_4_classer.h5')
    encoded_review = tokenizer.encode_plus(
        text,
        max_length=max_len,
        truncation=True,
        add_special_tokens=True,
        padding='max_length',
        return_attention_mask=True,
        return_token_type_ids=False,
        return_tensors='pt',
    )
    input_ids = encoded_review['input_ids'].to(device)
    print(input_ids.shape)
    attention_mask = encoded_review['attention_mask'].to(device)
    print(attention_mask.shape)
    output = model(input_ids, attention_mask)  # <== the error happens here
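Following the advice above, one way to fix this is to stay fully in TensorFlow, since the loaded model is a Keras .h5 model: ask the tokenizer for TF tensors and drop the PyTorch-specific .to(device) calls. A sketch, under the assumption that the saved model takes input_ids and attention_mask as its two inputs:

encoded_review = tokenizer.encode_plus(
    text,
    max_length=max_len,
    truncation=True,
    add_special_tokens=True,
    padding='max_length',
    return_attention_mask=True,
    return_token_type_ids=False,
    return_tensors='tf',  # TensorFlow tensors to match the Keras model
)
# No .to(device) needed: TensorFlow handles GPU placement on its own
output = model(encoded_review['input_ids'], encoded_review['attention_mask'])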
Hello, did you manage to solve this error, and if so, how? I ran into the same error.
Hello,
I ran the following official example script from LongformerForQuestionAnswering
but got the following error: