Users' personalized inputs may be long enough to exceed the token limits of different LLMs.
Thus, we should bound prompt length to comply with each LLM's token limit.
According to our research, the token limits are as follows:
llama3-70b: 2048 tokens (also applies to its fine-tuned version, codellama-7b)
mistral-7b: 4096 tokens (also applies to its fine-tuned version, codestral-22b)
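The boundary could be enforced with a small lookup-and-truncate step before dispatching a prompt. The sketch below uses the limits listed above; the whitespace tokenizer is an assumption for illustration only, since each model counts tokens with its own tokenizer.

```python
# Per-model prompt limits taken from the list above.
# Fine-tuned variants are assumed to share their base model's limit.
TOKEN_LIMITS = {
    "llama3-70b": 2048,
    "codellama-7b": 2048,
    "mistral-7b": 4096,
    "codestral-22b": 4096,
}

def truncate_prompt(prompt: str, model: str) -> str:
    """Clip a prompt to the model's token limit.

    Whitespace splitting is a stand-in for the model's real tokenizer;
    production code should tokenize with the target model's tokenizer.
    """
    limit = TOKEN_LIMITS[model]
    tokens = prompt.split()
    if len(tokens) <= limit:
        return prompt
    return " ".join(tokens[:limit])
```

A prompt that fits is passed through unchanged; an oversized one is clipped to the model's limit before being sent.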