Hi, I have some questions about the submission.
For the Codec-SUPERB evaluation at SLT 2024, the repository instructions say to run both bash run.sh and bash run_wrapper.sh, then submit the exps/results.txt and src/codec_metrics/exps/results.txt files. I have a few questions about the process:
1. The run.sh script includes four stages, with stage 4 as the default. Should we configure and run all the stages sequentially, or is it sufficient to execute only the default stage?
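For context on why I am asking: I assume the stages are gated the way recipe-style run.sh scripts usually are, roughly as in the hypothetical sketch below (stage/stop_stage variables selecting a contiguous range; the actual Codec-SUPERB script may differ).

```shell
#!/bin/sh
# Hypothetical sketch of stage gating in a recipe-style run.sh;
# variable names and stage count are assumptions, not the real script.
stage=4        # first stage to run
stop_stage=4   # last stage to run

for s in 1 2 3 4; do
  if [ "$s" -ge "$stage" ] && [ "$s" -le "$stop_stage" ]; then
    echo "running stage $s"
  fi
done
```

Under this reading, the default of 4 would skip stages 1 to 3 entirely, which is what prompts my question about whether the earlier stages must be run first.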
2. The codec_metrics/run.sh script offers several categories and datasets to choose from. Do we need to test every category and dataset, or only the default settings provided in the script?
3. For the submission, is it correct that we only need to submit the two results.txt files by opening a new issue on the repository?
Thank you for your assistance in clarifying these points. I look forward to your response.