When using the AceGPT-7B-chat and AceGPT-13B-chat models, I received an error stating that the input must have fewer than 1024 tokens; my input had 1322 tokens. I want to clarify whether this error means the input was simply too long to produce even the first response. Additionally, given the size of the context window, I'm wondering whether these models are suitable for RAG (Retrieval-Augmented Generation)?
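For context, here is a minimal sketch of the check I believe is failing: the input token count exceeds the model's limit, so truncating (or splitting) the prompt before generation would be needed. The `1024` limit and the `reserve_for_output` budget below are assumptions for illustration, not values taken from the AceGPT code.

```python
def fits_context(token_ids, max_tokens=1024):
    """Return True if the tokenized input is under the model's limit.

    max_tokens=1024 is assumed from the error message, not from AceGPT docs.
    """
    return len(token_ids) < max_tokens


def truncate_to_context(token_ids, max_tokens=1024, reserve_for_output=256):
    """Keep only the most recent tokens, leaving room for the generated reply.

    reserve_for_output is a hypothetical budget for the model's response.
    """
    budget = max_tokens - reserve_for_output
    return token_ids[-budget:]


# A 1322-token input (as in the error) does not fit and must be truncated.
ids = list(range(1322))
print(fits_context(ids))                  # too long as-is
print(len(truncate_to_context(ids)))      # trimmed to the remaining budget
```

In practice the token IDs would come from the model's own tokenizer (e.g. via Hugging Face `AutoTokenizer`), since whitespace word counts do not match the model's token count.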