ramsrigouthamg / Questgen.ai

Question generation using state-of-the-art Natural Language Processing algorithms
https://questgen.ai/
MIT License
900 stars 287 forks

tokenizer with `model_max_length` #61

Open granth23 opened 1 year ago

granth23 commented 1 year ago

Hi, thank you for this project but I am facing some issues while running the code for Boolean Question Generation:

```
/home/codespace/.python/current/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:163: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with truncation is True.
```

The kernel then crashes with:

```
Canceled future for execute_request message before replies were done.
The Kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.
```
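A note on the first message: the `FutureWarning` itself is benign (the tokenizer still works; transformers only announces a behavior change planned for v5), so the kernel crash is likely a separate, environment-related problem (for example, running out of memory while loading the model). If you only want to silence the warning, one minimal sketch is to filter it before importing Questgen — this is a workaround for the log noise, not a fix for the crash:

```python
import warnings

# Suppress the specific FutureWarning emitted by the T5 tokenizer.
# This only hides the warning; it does not address the kernel crash.
warnings.filterwarnings(
    "ignore",
    message=r".*incorrectly instantiated with a model max length.*",
    category=FutureWarning,
)
```

Alternatively, when you load a tokenizer yourself, passing `model_max_length=512` to `from_pretrained` avoids the warning at the source — though since Questgen loads its tokenizer internally, that option may not be available without modifying the library.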

AliHaider20 commented 9 months ago

@granth23 Are you still facing the same issue? From what I can see, it now only gives a warning and the code doesn't break.

@ramsrigouthamg Could you assign this task to me?

AliHaider20 commented 9 months ago

This issue is solved; it can be closed.