Closed · limjiayi closed this 11 months ago
It's also possible to avoid the error described in #1 by resizing the model's token embeddings; feel free to close this PR if that's the preferred solution.
model.resize_token_embeddings(tokenizer.vocab_size)
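For context, here is a minimal sketch of how that call fits into a full setup, assuming a Hugging Face transformers model and tokenizer (the checkpoint name below is illustrative and not taken from this PR):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative checkpoint; substitute whichever model the PR targets.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Resize the model's token-embedding matrix to match the tokenizer size,
# so token ids produced by the tokenizer stay within the embedding range.
model.resize_token_embeddings(tokenizer.vocab_size)
```

Note that `tokenizer.vocab_size` counts only the base vocabulary; if tokens were added with `add_tokens`, `len(tokenizer)` is the size that includes them.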
Thank you!