Closed · simonw closed this 1 year ago
I want to use this tool to pipe content into llm, so I'd like to be able to truncate the output in order to stay within the token limit.

There are three ways I could do this:

The "tokens" one would be the most useful for working with LLMs, but for which tokenizer? Maybe that should be a separate tool.

On that basis, I'm not going to do this with this tool - I'll build something else I can pipe through instead.
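A separate pipe-through tool could look something like the sketch below. This is only an illustration, not the actual tool: it uses a plain whitespace split as a crude stand-in for a real tokenizer (a production version would use a model-specific tokenizer such as tiktoken), and the `truncate_tokens` name is made up for this example.

```python
def truncate_tokens(text: str, limit: int) -> str:
    """Keep roughly the first `limit` tokens of `text`.

    Whitespace splitting is a rough approximation of real
    LLM tokenization; it only demonstrates the pipe-friendly
    shape of the tool, not accurate token counting.
    """
    tokens = text.split()
    return " ".join(tokens[:limit])
```

Wrapped in a small CLI that reads stdin and writes stdout, it would slot into a pipeline along the lines of `cat notes.txt | truncate-tool 4000 | llm 'summarize'` (names hypothetical).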