-
Does tiktoken support **meta/llama2-70b**?
I want to count the tokens in a prompt before passing it to a **meta/llama2-70b** model.
How can I do this with tiktoken?
```
import tiktok…
-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Branch name
main
### Commit ID
081f922
### Other environment information
_No response_
### Actual…
-
C:\Users\Khalil Ur Rehman>pip install tavily-python
Collecting tavily-python
Using cached tavily_python-0.3.0-py3-none-any.whl.metadata (4.4 kB)
Requirement already satisfied: requests in c:\user…
-
Unknown encoding cl100k_base. Plugins found: ['tiktoken_ext.openai_public']
-
`PYTHONPATH=. python3 examples/llama.py --model ../.cache/tinygrad/downloads/llama3-8b-sfr/model.safetensors.index.json --gen 3`
Got this traceback:
```
User: ye
Traceback (most recent call last):
…
-
Currently we are using a slow wasm implementation: https://www.npmjs.com/package/tiktoken in https://github.com/toeverything/AFFiNE/blob/v0.14.0-canary.9/packages/backend/server/package.json#L90
```[…
-
Hello,
I noticed that the package you wrote is very impressive. However, is it only capable of counting tokens for regular, simple chats?
I saw your code requires the input prompt to include "…
-
```
import tokenizers

def show_tokenization(tok, s):
    # Encode without special tokens and print (id, decoded piece) pairs
    ids = tok.encode(s, add_special_tokens=False).ids
    print([(i, tok.decode([i])) for i in ids])

def show_tokenization_from_id(tok, id…
-
**Describe the bug**
I'm trying to create a `DocChatAgent` with embedding model from deepinfra via litellm, but it fails with the following message: *"Could not automatically map litellm/deepinfra/BA…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
## QWEN2-7B(MoE)
Needs bf16, see #4278
Works normally
## glm4
Comment out the torch.jit line and use bf16; see #4339 #3788
## chatglm3…