haraldschilly opened this issue 2 months ago
In the frontend we're currently using gpt3-tokenizer. I propose switching to gpt-tokenizer: in my tests it is not only about 10x faster, it also supports generators. Generators are handy here because truncating to a token limit becomes much simpler — we can stop consuming tokens as soon as the limit is reached instead of tokenizing the whole input first.
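To illustrate why a generator API makes limit-based truncation easier, here is a minimal, self-contained sketch. The tokenizer itself is stubbed with a hypothetical `fakeEncodeGenerator` (one "token" per word) standing in for whatever streaming encode function the real library exposes; the point is the consumer side, which stops pulling tokens as soon as the limit is hit:

```javascript
// Stand-in for a real streaming tokenizer: yields one "token" per word.
// (Hypothetical stub -- a real tokenizer would yield token ids.)
function* fakeEncodeGenerator(text) {
  for (const word of text.split(/\s+/)) {
    yield word;
  }
}

// Consume any token generator lazily and stop once `limit` tokens are
// collected -- the rest of the input is never tokenized at all.
function truncateTokens(tokenGen, limit) {
  const tokens = [];
  for (const token of tokenGen) {
    if (tokens.length >= limit) break; // early exit: key benefit of generators
    tokens.push(token);
  }
  return tokens;
}

const tokens = truncateTokens(fakeEncodeGenerator("one two three four five"), 3);
console.log(tokens); // -> [ 'one', 'two', 'three' ]
```

With an array-returning tokenizer, the whole text must be encoded before anything can be discarded; with a generator, `truncateTokens` does O(limit) work regardless of input length.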