-
### Describe the issue as clearly as possible:
I'm using the [api_like_OAI.py script](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/api_like_OAI.py) from the llamacpp repo, whi…
-
Streamed responses [don't include usage info in the response](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb). Would have to calculate this via [tiktoken]…
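As a rough sketch (not part of the OpenAI API itself), usage can be reconstructed client-side by accumulating the streamed deltas and tokenizing the result. Here the tokenizer is injected as a plain `encode` callable so the helper stays library-agnostic; in practice you would pass e.g. `tiktoken.encoding_for_model("gpt-3.5-turbo").encode`:

```python
# Hypothetical helper: reconstruct a usage dict for a streamed completion.
# `encode` stands in for a real tokenizer, e.g. tiktoken's Encoding.encode.

def usage_from_stream(chunks, prompt, encode):
    """Accumulate streamed content deltas and estimate token usage."""
    completion = "".join(chunks)
    prompt_tokens = len(encode(prompt))
    completion_tokens = len(encode(completion))
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
    }
```

Note the counts are an approximation: chat requests add per-message overhead tokens, so client-side numbers can drift from the server's billed usage.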
-
```
>>> import regex as re
>>> gpt2pat = re.compile(r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+""")
>>> str = r"""हिन्दी विकि…
-
Hello, I was attempting to recreate this, but with the tokenizer from Llama 3 (tiktoken) and a few changes. I would be OK with training a tiktoken tokenizer from scratch if needed, but could not find the code to do…
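For reference, a byte-level BPE can be trained from scratch in a few lines. This is an illustrative sketch of the merge loop (the names `train_bpe`, `get_stats`, and `merge` are made up for this example, not tiktoken's API — tiktoken ships pretrained ranks and only exposes a trainer in its educational `tiktoken._educational` module):

```python
# Minimal byte-level BPE training sketch (illustrative, not tiktoken's API).

def get_stats(ids):
    """Count occurrences of each adjacent pair of token ids."""
    stats = {}
    for a, b in zip(ids, ids[1:]):
        stats[(a, b)] = stats.get((a, b), 0) + 1
    return stats

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, vocab_size):
    """Learn BPE merges on UTF-8 bytes until vocab_size is reached."""
    ids = list(text.encode("utf-8"))
    merges = {}  # (id, id) -> merged id
    for new_id in range(256, vocab_size):
        stats = get_stats(ids)
        if not stats:
            break
        pair = max(stats, key=stats.get)
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges
```

In practice you would first split the corpus with the regex pattern above and train on the chunks separately, so merges never cross word boundaries.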
-
### Version
Visual Studio Code extension
### Operating System
macOS
### What happened?
I installed Pythagora from the VS Code Extensions and after installing and registering, I clicked on "Create …
-
https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb references https://github.com/openai/openai-python/blob/main/chatml.md, which was deleted in http…
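For reference, that cookbook notebook replaced the chatml.md description with an explicit per-message count. A sketch with the tokenizer injected (for gpt-3.5-turbo and gpt-4 the cookbook uses 3 tokens per message, 1 per name, plus 3 to prime the assistant reply; pass tiktoken's `.encode` as `encode` in practice):

```python
# Sketch of per-message token counting for chat models, following the
# constants in the OpenAI cookbook. `encode` is any tokenizer callable
# returning a list of tokens, e.g.
# tiktoken.encoding_for_model("gpt-3.5-turbo").encode.

def num_tokens_from_messages(messages, encode,
                             tokens_per_message=3, tokens_per_name=1):
    total = 0
    for message in messages:
        total += tokens_per_message
        for key, value in message.items():
            total += len(encode(value))
            if key == "name":
                total += tokens_per_name
    return total + 3  # every reply is primed with <|start|>assistant<|message|>
```

The exact overhead constants vary by model generation, so treat this as an estimate rather than an authoritative count.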
-
Hey, just dropping this here: I think there is a memory leak; I'm having a lot of memory issues dealing with large data.
Oddly, I thought js-tiktoken was not using WASM:
```
RangeError [Error]: WebAssemb…
-
In the logs: failed to get gpt-3.5-turbo token encoder: Get "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken": dial tcp: lookup openaipublic.blob.core.windows.net on ......
-
ERROR in the tokenizer Cell
-
### Problem Description
Hi,
#654 tried to enable GPTQ quantization with fine-tuned LLaMA2 models, but was closed.
I tried following a similar approach to that PR, but for fine-tuned LLaMA3 mode…