niieani / gpt-tokenizer

JavaScript BPE Tokenizer Encoder Decoder for OpenAI's GPT-2 / GPT-3 / GPT-4. Port of OpenAI's tiktoken with additional features.
https://gpt-tokenizer.dev
MIT License

avoid expensive initialization #18

Open mimoo opened 1 year ago

mimoo commented 1 year ago

Hello,

I'm using the following:

import { encode, isWithinTokenLimit } from 'gpt-tokenizer/model/text-davinci-003';

which seems to slow down initialization enough that I can't deploy to Cloudflare Workers with this library. Is there a way to lazily initialize things?

airhorns commented 1 year ago

We're experiencing this as well -- requiring this package takes ~600ms on my M1 MBP:

❯ time node -r gpt-tokenizer -e "1"

________________________________________________________
Executed in  548.82 millis    fish           external
   usr time  616.81 millis    4.71 millis  612.10 millis
   sys time   99.25 millis    9.21 millis   90.04 millis

Would it be hard to lazily require the encodings only when the first encode call is made?
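
In the meantime, here's a rough sketch of what a user-land lazy wrapper could look like (the wrapper names are made up, not part of gpt-tokenizer's API):

// lazyTokenizer.ts -- hypothetical user-land wrapper, not part of gpt-tokenizer
let tokenizerPromise: Promise<typeof import('gpt-tokenizer')> | undefined;

// Nothing is parsed at startup; the heavy BPE ranks load on the first call.
export async function encodeLazy(text: string): Promise<number[]> {
  tokenizerPromise ??= import('gpt-tokenizer');
  const { encode } = await tokenizerPromise;
  return encode(text);
}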

zakariamehbi commented 11 months ago

Same issue on my end.

thdoan commented 8 months ago

For Cloudflare Workers I suggest you look at this: https://github.com/dqbd/tiktoken#cloudflare-workers

luizzappa commented 4 months ago

To get around Cloudflare Workers' 400ms startup time limit, I just import the library inside the fetch handler.

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const { encode } = await import('gpt-tokenizer');
    // ....
  }
}
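
Since import() is cached per isolate, only the first request pays the load cost. You can also combine this with the model-specific entry point from the first comment, so only that one encoder ends up in the lazily loaded chunk -- a rough sketch (the token limit value is just illustrative):

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Model-specific entry point: only this model's encoder gets loaded.
    const { isWithinTokenLimit } = await import('gpt-tokenizer/model/text-davinci-003');
    const body = await request.text();
    // isWithinTokenLimit returns false when over the limit, else the token count.
    return new Response(JSON.stringify({ ok: isWithinTokenLimit(body, 4000) !== false }));
  }
}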

Regarding the other library suggested by @thdoan: I couldn't get tiktoken or js-tiktoken to work within the limits of Cloudflare Workers. js-tiktoken bundles all the encoders, which pushes the bundle past the Worker's 1 MB limit (see here). And tiktoken/lite, which lets you import only the necessary encoder and so stays under 1 MB, has a bug that has not yet been fixed.
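
For reference, js-tiktoken also has a lite entry point that fetches the BPE ranks over HTTP at runtime instead of bundling them, which should keep the worker bundle small. Roughly, per its README (I haven't verified this stays within the Worker limits):

import { Tiktoken } from 'js-tiktoken/lite';

export default {
  async fetch(request: Request): Promise<Response> {
    // Ranks are fetched at runtime, so they never count against the
    // 1 MB bundle limit (URL as documented in the js-tiktoken README).
    const res = await fetch('https://tiktoken.pages.dev/js/cl100k_base.json');
    const enc = new Tiktoken(await res.json());
    const body = await request.text();
    return new Response(String(enc.encode(body).length));
  }
}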