We're working with a couple of long-context-window prompts where it's useful to know the token counts of the prompt components before we make the requests.
However, testing our workflows with Claude 3 has been tricky because we have no way to get these estimates until after we've made the request, which costs us a few API credits. It would be extremely useful if the tokenizer added support for the new series of models.
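For context, this is the workaround we're stuck with today: send the prompt and read the token count back from the response's `usage` metadata. A minimal sketch, assuming the current `anthropic` Python SDK and the Messages API; the `count_prompt_tokens` helper and the model name are just illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def count_prompt_tokens(prompt: str, model: str = "claude-3-opus-20240229") -> int:
    """Illustrative helper: counts a prompt's tokens by actually calling the
    API and reading the usage metadata, which spends credits on every call."""
    response = client.messages.create(
        model=model,
        max_tokens=1,  # keep the completion (and its cost) as small as possible
        messages=[{"role": "user", "content": prompt}],
    )
    return response.usage.input_tokens

print(count_prompt_tokens("How many tokens is this prompt?"))
```

Offline support in the tokenizer would let us drop the round trip (and the spend) entirely.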