ludwig-ai / ludwig

Low-code framework for building custom LLMs, neural networks, and other AI models
http://ludwig.ai
Apache License 2.0

`RESPONSE` contains much longer text than expected based on the `output_features` and `max_sequence_length`. #3985

Closed · amankhandelia closed this issue 4 months ago

amankhandelia commented 7 months ago

The `RESPONSE` consumed by the `ROUGEScoreMetric` function contains much longer text than expected based on the `output_features` and `max_sequence_length`. Even when `max_sequence_length` is 8 or 16 tokens, the `RESPONSE` contains text that is as long as the text in the `prompt_template`.
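For context, a minimal sketch of the kind of setup being described (this is a hypothetical config, not my actual one; the base model, feature names, template, and the value 16 are placeholders):

```python
# Hypothetical Ludwig LLM fine-tuning config illustrating the setup above.
config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",  # placeholder base model
    "prompt": {"template": "Answer the question: {question}"},
    "input_features": [{"name": "question", "type": "text"}],
    "output_features": [
        {
            "name": "answer",
            "type": "text",
            # RESPONSE is expected to be at most ~16 tokens here,
            # yet the decoded text is as long as the prompt_template.
            "preprocessing": {"max_sequence_length": 16},
        }
    ],
}
```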

Based on my investigation, this happens because the masking condition in `get_decoded_targets_and_predictions` is wrong: instead of `targets != IGNORE_INDEX_TOKEN_ID`, it is set to `predictions[PREDICTIONS] != IGNORE_INDEX_TOKEN_ID`. From what I understand, we should be using the targets' mask to truncate the predictions correctly.
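To illustrate, here is a minimal sketch of the intended decoding logic, not the actual Ludwig implementation; `IGNORE_INDEX_TOKEN_ID`, `PREDICTIONS`, and the function name come from the issue, while the `tokenizer` argument and the value `-100` are assumptions:

```python
IGNORE_INDEX_TOKEN_ID = -100  # assumed Hugging Face-style ignore index
PREDICTIONS = "predictions"   # dictionary key, as referenced above


def get_decoded_targets_and_predictions(targets, predictions, tokenizer):
    """Hypothetical sketch: decode only the positions that belong to the target.

    `targets` and `predictions[PREDICTIONS]` are assumed to be 2-D torch
    tensors of token ids. The mask must be derived from `targets` (the fix),
    not from `predictions[PREDICTIONS]` (the bug), so that the decoded
    RESPONSE is truncated to the label span before ROUGE is computed.
    """
    # Boolean mask over the target positions only.
    mask = targets != IGNORE_INDEX_TOKEN_ID

    decoded_targets, decoded_predictions = [], []
    for row_targets, row_preds, row_mask in zip(targets, predictions[PREDICTIONS], mask):
        decoded_targets.append(
            tokenizer.decode(row_targets[row_mask], skip_special_tokens=True)
        )
        decoded_predictions.append(
            tokenizer.decode(row_preds[row_mask], skip_special_tokens=True)
        )
    return decoded_targets, decoded_predictions
```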

When I apply this change, I get the correct metric value, matching the expectations set by the results seen during fine-tuning.

alexsherstinsky commented 4 months ago

@amankhandelia Thank you very much for your fix!

alexsherstinsky commented 4 months ago

This has been fixed and will appear in the upcoming release.