Closed deep-diver closed 6 months ago
Thanks for the answers. We should also pass skip_special_tokens=True to tokenizer.decode().
@sayakpaul Addressed your comments. Specifically: added skip_special_tokens=True in decode() and then split on the delimiter, which is assistant\n in this case, and added .to(model.device) after apply_chat_template(), as you suggested!

Thanks! The first resolution should help us get rid of the manual EOS splitting we were doing.
Feel free to merge this!
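The pattern agreed on above can be sketched as follows. This is an illustrative sketch, not the PR's actual notebook code: the commented `apply_chat_template` / `generate` / `decode` calls follow the standard Hugging Face transformers API, and the sample string stands in for a real decoded model output.

```python
def extract_assistant_reply(decoded: str, delimiter: str = "assistant\n") -> str:
    """Return the text after the last chat-template role delimiter.

    With skip_special_tokens=True the decoded string no longer contains
    special tokens (so no manual EOS splitting is needed); splitting on
    the role delimiter is enough to isolate the assistant's reply.
    """
    return decoded.split(delimiter)[-1].strip()


# In the notebook, `decoded` would come from something like:
#   inputs = tokenizer.apply_chat_template(
#       messages, return_tensors="pt"
#   ).to(model.device)                       # move inputs to the model's device
#   outputs = model.generate(inputs, max_new_tokens=256)
#   decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
decoded = "system\nYou are helpful.\nuser\nHi!\nassistant\nHello, how can I help?"
print(extract_assistant_reply(decoded))  # Hello, how can I help?
```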