Thank you for your response. I still have some questions regarding the updates to the QV network.
I noticed that in DoubleCritic, our tokenizer has a truncation length of 512.
How do we ensure this 512-length limit in the webshop environment?
When I trained on the webshop, the length reached over 2000 after three rounds of conversation.
Yeah, I think the RoBERTa tokenizer cannot handle over 2000 tokens (its max is 512). In that case, the context is truncated from the start, so the most recent round of conversation is kept.
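A minimal sketch of that left-side truncation on a plain list of token ids (names here are illustrative, not from the repo). With a Hugging Face tokenizer, the equivalent is usually setting `tokenizer.truncation_side = "left"` before calling it with `truncation=True, max_length=512`:

```python
def truncate_left(token_ids, max_length=512):
    """Keep only the last max_length tokens so the most recent
    conversation round survives truncation."""
    if len(token_ids) <= max_length:
        return token_ids
    return token_ids[-max_length:]

# Example: a 2000-token context is cut down to the newest 512 tokens.
context = list(range(2000))
truncated = truncate_left(context, max_length=512)
print(len(truncated))   # 512
print(truncated[0])     # 1488 -- the oldest token that survives
```

This way the earliest turns are dropped first, which matches the behavior described above.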