Thewillman opened 10 months ago
Precise control over token generation is still required: we need to ensure the probability of other tokens stays higher than that of the `<eos>` token until the output reaches the length of the baseline tokens. But the baseline token length isn't equal to the given `max_output_len`, since that hyperparameter can't be changed per example. So can we get the length of the baseline tokens after the baseline generation finishes?
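One way to keep `<eos>` from being sampled before the baseline length is reached is to mask its logit until a minimum number of tokens has been generated (Hugging Face's `MinLengthLogitsProcessor` and the `min_new_tokens` generation argument work this way). A minimal sketch, assuming you have access to the raw logits at each step; the function name and the toy values are illustrative only:

```python
import math

def suppress_eos(logits, generated_len, min_len, eos_id):
    """Return a copy of the logits with the <eos> entry set to -inf
    while fewer than min_len tokens have been generated, so some
    other token is always more probable than <eos>."""
    logits = list(logits)
    if generated_len < min_len:
        logits[eos_id] = -math.inf
    return logits

# Toy example: 3-token vocabulary, <eos> is id 2.
step_logits = [0.1, 2.0, 0.5]
masked = suppress_eos(step_logits, generated_len=1, min_len=4, eos_id=2)
# <eos> is now impossible at this step; once generated_len >= min_len,
# the logits pass through unchanged.
```

With `min_len` set to each example's baseline token count, the model is forced to keep generating until it matches the baseline length.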
Yes, I have encountered the same issue. When the test data is limited, this discrepancy isn't apparent. However, when testing with the full dataset, there always seems to be some deviation compared to the baseline.
Do you think it would be feasible to limit the results within a certain range for the final evaluation? Or, as @Thewillman suggested, use the length obtained from the baseline as the max_output_len?
It seems that many frameworks can't satisfy this condition. As the dataset grows larger, the deviation in output length expands considerably.
Can we get the number of tokens for each example in the baseline, so that we can force our own framework to generate the same number of tokens?
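If the baseline outputs (as token-id sequences) are available, matching per-example lengths is just a matter of recording them and feeding them back as per-example length targets. A minimal sketch under that assumption; `baseline_token_ids` is a hypothetical name for however the baseline stores its outputs:

```python
def baseline_lengths(baseline_token_ids):
    """Map each example index to the number of tokens its baseline
    output used, so a second run can set that example's min/max
    output length to exactly the same value."""
    return {i: len(ids) for i, ids in enumerate(baseline_token_ids)}

# Toy baseline outputs: three examples with 3, 1, and 4 tokens.
baseline = [[12, 7, 2], [9], [4, 4, 8, 2]]
targets = baseline_lengths(baseline)
# targets[i] can then be passed as that example's max_output_len
# (and min length) when re-running generation.
```

This only matches token counts, of course; whether the evaluation then agrees with the baseline within an acceptable range is the separate question raised above.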