steebchen opened 2 months ago
This is a pretty cool library! I'm doing something like this:

`while (enc.encode(raw).length > length) { raw = raw.slice(0, -1000) }`

to truncate big data. However, this is quite slow: running it 100 times can take 1-2 seconds.
Is there a better way to do this, or can I optimize it somehow?
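One likely speedup, sketched below under the assumption that the tokenizer also exposes a `decode` function (as several BPE encoder libraries do; the `Tokenizer` type and function name here are illustrative, not part of this library's documented API): encode the string once, slice the token array to the limit, and decode the kept prefix, instead of re-encoding the whole string on every loop iteration.

```typescript
// Hedged sketch: truncate `raw` to at most `maxTokens` tokens with a single
// encode/decode round-trip, avoiding repeated full re-encodes.
// The Tokenizer shape below is an assumption about the library's API.
type Tokenizer = {
  encode: (text: string) => number[];
  decode: (tokens: number[]) => string;
};

function truncateToTokens(raw: string, maxTokens: number, enc: Tokenizer): string {
  const tokens = enc.encode(raw); // encode once: O(length of raw)
  if (tokens.length <= maxTokens) return raw; // already within the limit
  // Keep only the first maxTokens tokens and turn them back into text.
  return enc.decode(tokens.slice(0, maxTokens));
}
```

If the library only exposes `encode`, a binary search over the slice point (halving the search range on each encode) would still cut the number of encode calls from linear in the overshoot to logarithmic.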