chanind / frame-semantic-transformer

Frame Semantic Parser based on T5 and FrameNet
https://chanind.github.io/frame-semantic-transformer
MIT License

remove various transformer warnings and fix training documentation #25

Closed: ruckc closed this 1 year ago

ruckc commented 1 year ago

This PR fixes various transformers warnings, updates the documentation for the train script, and adds a --use-gpu flag (and a corresponding --use-cpu flag).
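For reference, here is a minimal sketch of how such device flags could be wired up with argparse; the flag names come from this PR, but the wiring below is an illustrative assumption rather than the exact code in the patch:

```python
import argparse

import torch


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Train the frame semantic parser")
    device_group = parser.add_mutually_exclusive_group()
    device_group.add_argument("--use-gpu", action="store_true", help="Force training on the GPU")
    device_group.add_argument("--use-cpu", action="store_true", help="Force training on the CPU")
    return parser.parse_args()


def resolve_device(args: argparse.Namespace) -> str:
    # Explicit flags win; otherwise fall back to whatever hardware is available.
    if args.use_cpu:
        return "cpu"
    if args.use_gpu:
        return "cuda"
    return "cuda" if torch.cuda.is_available() else "cpu"


if __name__ == "__main__":
    print(f"Training on: {resolve_device(parse_args())}")
```

The warnings this PR silences are quoted below.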

You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
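For anyone hitting the same message: it is informational only, but it goes away once a tokenizer behaviour is chosen explicitly. A minimal sketch, assuming the stock transformers T5 tokenizer (the model name is just an example):

```python
from transformers import T5Tokenizer

# Opt in to the new tokenization behaviour introduced in
# https://github.com/huggingface/transformers/pull/24565.
tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=False)

# Or keep the previous behaviour; passing the flag explicitly
# (rather than leaving it unset) avoids the warning either way.
legacy_tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=True)
```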

transformers/generation/configuration_utils.py:367: UserWarning: do_sample is set to False. However, top_p is set to 0.95 -- this flag is only used in sample-based generation modes. You should set do_sample=True or unset top_p.
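This one is raised because a sampling-only parameter (top_p) is passed while do_sample is False, so it has no effect. A sketch of the two consistent call patterns, assuming a plain model.generate() call on a T5 model:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small", legacy=False)
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: Hello", return_tensors="pt").input_ids

# Deterministic decoding: leave the sampling knobs (top_p, top_k, temperature) unset.
greedy_ids = model.generate(input_ids, max_new_tokens=32)

# Sampling: enable do_sample so that top_p is actually honoured.
sampled_ids = model.generate(input_ids, max_new_tokens=32, do_sample=True, top_p=0.95)
```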

transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
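And the AdamW deprecation goes away by switching to the PyTorch optimizer, which takes essentially the same arguments. A rough sketch (the hyperparameters are placeholders, not the values this project uses):

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# torch.optim.AdamW replaces the deprecated transformers.optimization.AdamW.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
```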

chanind commented 1 year ago

Thank you for finding these issues and updating the deprecation warnings!

ruckc commented 1 year ago

No problem. It was simple and it cleaned things up. Really appreciate the library.