@0x4007 This is also because of the use of a single token [here](https://github.com/ubiquibot/conversation-rewards/actions/runs/10553527862/job/29233964131#step:7:88). I'll have a look today; it seems to be the only case where it breaks, since all the other ones are calculated properly with the dummy response.
Originally posted by @gentlementlegen in https://github.com/ubiquibot/conversation-rewards/issues/62#issuecomment-2309288788
Sometimes OpenAI returns an empty response, which breaks the comment-evaluation process. This should not happen: an empty response should be detected and handled instead of being passed downstream.
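One possible fix is to wrap the completion call in a guard that retries when the model returns an empty string. This is a minimal sketch, not the plugin's actual code: `getNonEmptyCompletion` and `FetchCompletion` are hypothetical names, and the real OpenAI call is stood in for by an injected function.

```typescript
// Hypothetical guard: retry when the model returns an empty completion,
// instead of letting an empty string break evaluation downstream.

type FetchCompletion = () => Promise<string | null | undefined>;

async function getNonEmptyCompletion(fetchCompletion: FetchCompletion, maxRetries = 3): Promise<string> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const content = await fetchCompletion();
    if (content && content.trim().length > 0) {
      return content;
    }
    // Empty or missing response: loop and retry rather than crash.
  }
  throw new Error(`OpenAI returned an empty response after ${maxRetries} attempts`);
}

// Usage with a mock that returns an empty string once, then a valid payload:
async function demo(): Promise<string> {
  let calls = 0;
  const mock: FetchCompletion = async () => (++calls < 2 ? "" : '{"relevance": 0.5}');
  return getNonEmptyCompletion(mock);
}

demo().then((result) => console.log(result)); // '{"relevance": 0.5}'
```

Failing fast with a descriptive error after the retries are exhausted also makes the GitHub Actions log point directly at the root cause instead of a later parsing failure.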