Closed Hansyvea closed 3 years ago
@Hansyvea thanks for feedback! You are absolutely correct. With so much happening on both 'transformers' and 'torch' at the moment it takes some work to keep 'NERDA' up to date! If you are able to fix the issue, I would really appreciate a Pull Request!
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
Add the parameter `use_fast=False` to the tokenizer.
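A minimal sketch of the workaround, assuming pre-tokenized input (a list of words per sentence), which is the kind of input that triggers the `TextEncodeInput` TypeError with the fast tokenizer:

```python
from transformers import AutoTokenizer

# The slow (pure-Python) tokenizer does not enforce the strict
# TextEncodeInput typing of the Rust-based fast tokenizer, so it
# accepts pre-tokenized input without raising the TypeError.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)

# Example with a pre-tokenized sentence (a list of words).
words = ["NERDA", "is", "a", "Python", "package", "."]
encoding = tokenizer(words, is_split_into_words=True)
print(encoding["input_ids"])
```

Note that `is_split_into_words=True` is needed so the list is treated as one sentence of words rather than a batch of separate texts.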
Thanks! That really solved the problem. Do you know how to add another decoder layer, such as a CRF?
thanks! I will fix this!
Thanks for your input @Dhruvit-Chaniyara and @Hansyvea. I have fixed this in the new release.
I tested the same dataset with the same model and hyperparameters, but in different versions of torch and transformers. With torch 1.8.1 it raises the following error: TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
However, it works fine in the environment where I simply pip-installed NERDA...