-
Hi Team,
First off, I want to thank you for your hard work on RD-Agent.
This project looks incredibly exciting, and I'm eager to explore its capabilities further.
However, I ran into some i…
-
When using the relatively recent ["Functions"](https://platform.openai.com/docs/guides/gpt/function-calling) feature of the ChatGPT API, it seems like `tiktoken-rs` underestimates the total number of …
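For context on why naive counting can undershoot here: the prevailing community explanation (reverse-engineered, not documented by OpenAI) is that the API injects function definitions into the prompt as a TypeScript-like `namespace functions { ... }` block, so counting tokens over the raw JSON schema misses that rendering. A rough, stdlib-only Python sketch of that unofficial serialization — the exact format is an assumption:

```python
def format_function_definitions(functions):
    """Render function definitions the way the model reportedly sees them:
    a TypeScript-like 'namespace functions' block. This is a
    community-reverse-engineered approximation, not an official spec."""
    lines = ["namespace functions {", ""]
    for fn in functions:
        if fn.get("description"):
            lines.append(f"// {fn['description']}")
        schema = fn.get("parameters", {})
        props = schema.get("properties", {})
        required = set(schema.get("required", []))
        if props:
            lines.append(f"type {fn['name']} = (_: {{")
            for name, spec in props.items():
                if spec.get("description"):
                    lines.append(f"// {spec['description']}")
                optional = "" if name in required else "?"
                lines.append(f"{name}{optional}: {spec.get('type', 'any')},")
            lines.append("}) => any;")
        else:
            lines.append(f"type {fn['name']} = () => any;")
        lines.append("")
    lines.append("} // namespace functions")
    return "\n".join(lines)
```

Counting tokens over this string (plus a small fixed per-request overhead) tends to track the API's reported usage more closely than counting the JSON you send.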
-
https://github.com/mvitlov/tiktoken/blob/f2538e7205a06c7aa270a10c20c0083b47150321/lib/src/core_bpe_constructor.dart#L112
First, thanks for your tiktoken library. There might be a consideratio…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I recently upgraded llama_index to version 0.11.2.
I also changed from using Sentence…
-
PS C:\Docs\projects\git_projects\ChatPaper> python chat_arxiv.py --query "chatgpt robots" --page_num 2 --max_results 3 --days 20
Traceback (most recent call last):
File "C:\Docs\projects\git_proje…
-
**Is your feature request related to a problem? Please describe.**
Transformers that make LLM calls can overload their endpoints, resulting in errors like these:
```
2024-07-10T17:47:10…
```
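Until something like that is built in, a common client-side mitigation is to retry with exponential backoff and jitter. A minimal sketch — all names here are illustrative, not an existing transformer API:

```python
import random
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry a zero-argument callable on transient (rate-limit/overload)
    errors, sleeping base_delay * 2**attempt (capped at max_delay) with
    random jitter between attempts."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            delay = min(max_delay, base_delay * 2 ** attempt)
            # jitter spreads retries out so concurrent callers don't
            # hammer the endpoint in lockstep
            time.sleep(delay * (0.5 + random.random() / 2))
```

In practice you would catch only the provider's rate-limit/overload exception types rather than bare `Exception`.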
-
I know most folks use `tiktoken-go` to get tokens for a given input, but it would be nice to have something built into the client to do this for you.
-
Use Go to implement this function: https://platform.openai.com/tokenizer
-
**Bug description**
1. platform
root@qingyi:~/MetaGPT/config# lsb_release -a
LSB Version: core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch
Distributor ID: Ubuntu
Description: …
-
```
File "/home/paas/vllm/vllm/engine/llm_engine.py", line 222, in _init_tokenizer
self.tokenizer: BaseTokenizerGroup = get_tokenizer_group(
File "/home/paas/vllm/vllm/transformers_utils/to…
```