ASC-Competition / ASC24-LLM-inference-optimization

The dataset and baseline code for the ASC24 LLM inference optimization challenge.

About total tokens #8

Open Thewillman opened 10 months ago

Thewillman commented 10 months ago

Can we get the number of tokens generated for each example in the baseline, so that we can force our own framework to produce the same number of tokens?
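
For illustration, a minimal sketch of what this request amounts to, assuming the baseline's generated strings are available and a HuggingFace-style tokenizer is used (the model name and `baseline_outputs` below are placeholders, not from the repo):

```python
# Hypothetical sketch: record per-example output token counts from the
# baseline generations so another framework can target the same lengths.
from transformers import AutoTokenizer

# Placeholder model name; substitute the tokenizer the baseline actually uses.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Placeholder list of baseline-generated strings, one per dataset example.
baseline_outputs = [
    "...generated text for example 0...",
    "...generated text for example 1...",
]

# Count tokens per example and write them out for later comparison.
token_counts = [len(tokenizer(text)["input_ids"]) for text in baseline_outputs]
with open("baseline_token_counts.txt", "w") as f:
    for i, n in enumerate(token_counts):
        f.write(f"{i}\t{n}\n")
```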

Thewillman commented 10 months ago

This still matters for the precision of token generation: we need to ensure that some token other than `<eos>` keeps the highest probability until the output reaches the baseline's token length. But the baseline's output length is not equal to the given max_output_len, and the related hyperparameters can't be changed. So can we get the baseline's output token lengths after the baseline generation has run?
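
If the per-example baseline lengths were provided, one way to keep `<eos>` from firing early is HuggingFace's `min_new_tokens`, which masks the end-of-sequence token until the minimum length is reached. A minimal sketch under that assumption (model name and `target_len` are placeholders):

```python
# Hypothetical sketch: pin the generated length to a known baseline length
# by suppressing <eos> until target_len new tokens have been produced.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; substitute the challenge's actual baseline model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

target_len = 128  # per-example baseline token count, if organizers provide it

inputs = tokenizer("Example prompt", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        min_new_tokens=target_len,  # <eos> is masked until this many new tokens
        max_new_tokens=target_len,  # stop exactly at the baseline length
        do_sample=False,            # greedy decoding, matching a deterministic baseline
    )
# Number of newly generated tokens; should equal target_len.
print(out.shape[-1] - inputs["input_ids"].shape[-1])
```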

Lvjinhong commented 10 months ago

Yes, I have encountered the same issue. When the test data is limited, this discrepancy isn't apparent. However, when testing with the full dataset, there always seems to be some deviation compared to the baseline.

Do you think it would be feasible to limit the results within a certain range for the final evaluation? Or, as @Thewillman suggested, use the length obtained from the baseline as the max_output_len?

Thewillman commented 10 months ago

> Yes, I have encountered the same issue. When the test data is limited, this discrepancy isn't apparent. However, when testing with the full dataset, there always seems to be some deviation compared to the baseline.
>
> Do you think it would be feasible to limit the results within a certain range for the final evaluation? Or, as @Thewillman suggested, use the length obtained from the baseline as the max_output_len?

It seems that many frameworks can't meet this condition. As the dataset grows, the deviation in output length increases substantially.
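
One way to see how the deviation grows with dataset size is to tally per-example differences between the baseline and a custom framework. A minimal sketch, with all values as placeholders:

```python
# Hypothetical sketch: quantify the output-length deviation described above
# by comparing per-example token counts (both lists are placeholder data,
# e.g. loaded from baseline_token_counts.txt and the framework's own log).
baseline_lens = [131, 98, 256]
framework_lens = [131, 97, 250]

diffs = [abs(b - f) for b, f in zip(baseline_lens, framework_lens)]
print(f"total deviation: {sum(diffs)} tokens over {len(diffs)} examples")
print(f"max per-example deviation: {max(diffs)}")
```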