This pull request fixes several bugs in eval.py and adds support for submitting batch evaluations via a bash script. It also includes generation configurations for summarization, title generation, and paraphrasing, and adds a script for uploading models to the Hugging Face Hub.
Changes
- `push_to_hub.py`: enables automatic pushing of specified models to the Hugging Face Hub.
- `eval.py`: fixed a JSON dump error caused by results containing prediction outputs, which are NumPy arrays and therefore not JSON-serializable, and fixed a tokenizer path error in `DatasetProcessor`.
- `generation_confs/`: added generation configurations for summarization, title generation, and paraphrasing.
- `evaluate.sh`: implemented batch evaluation, allowing all fine-tuned models for a task to be evaluated at once.
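For context on the JSON dump fix: `json.dump` raises a `TypeError` on NumPy arrays and scalars. A minimal sketch of one common workaround, a custom encoder, is shown below; this illustrates the problem and is not necessarily the exact fix used in eval.py:

```python
import json

import numpy as np


class NumpyEncoder(json.JSONEncoder):
    """Convert NumPy arrays and scalars to native Python types for json.dump."""

    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()  # arrays become plain lists
        if isinstance(obj, np.generic):
            return obj.item()  # NumPy scalars become Python int/float
        return super().default(obj)


# A results dict containing raw model predictions can now be dumped directly.
results = {"predictions": np.array([1, 2, 3]), "n_samples": np.int64(3)}
serialized = json.dumps(results, cls=NumpyEncoder)
```

Without the `cls=NumpyEncoder` argument, the same `json.dumps` call fails with `TypeError: Object of type ndarray is not JSON serializable`.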
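The batch evaluation in evaluate.sh amounts to a loop over checkpoint directories for a task, invoking eval.py once per model. A hypothetical Python sketch of that loop follows; the directory layout, flag names, and config paths here are assumptions for illustration, not the actual evaluate.sh interface:

```python
import subprocess
from pathlib import Path


def build_eval_commands(task, checkpoints_root="checkpoints"):
    """Build one eval.py command per fine-tuned checkpoint of a task.

    Assumes a hypothetical layout of checkpoints_root/<task>/<model_dir>/
    and hypothetical eval.py flags (--model_path, --config).
    """
    commands = []
    for model_dir in sorted((Path(checkpoints_root) / task).iterdir()):
        if model_dir.is_dir():
            commands.append([
                "python", "eval.py",
                "--model_path", str(model_dir),  # hypothetical flag
                "--config", f"generation_confs/{task}.yaml",  # assumed layout
            ])
    return commands


# The caller would then run each command in sequence, e.g.:
# for cmd in build_eval_commands("summarization"):
#     subprocess.run(cmd, check=True)
```

Separating command construction from execution keeps the loop easy to test and to dry-run before launching a long batch of evaluations.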