openai / chatgpt-retrieval-plugin

The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.

tiktoken error #352

Open HosseinNikbakhtDev opened 11 months ago

HosseinNikbakhtDev commented 11 months ago

When I try to run the server with "poetry run start", I get this error:

```
Warning: Found deprecated key 'default' or 'secondary' in pyproject.toml configuration for source azure-sdk-dev. Please provide the key 'priority' instead. Accepted values are: 'default', 'primary', 'secondary', 'supplemental', 'explicit'.
Warning: Found deprecated priority 'secondary' for source 'azure-sdk-dev' in pyproject.toml. Consider changing the priority to one of the non-deprecated values: 'default', 'primary', 'supplemental', 'explicit'.
Warning: In a future version of Poetry, PyPI will be disabled automatically if at least one custom source is configured with another priority than 'explicit'. In order to avoid a breaking change and make your pyproject.toml forward compatible, add PyPI explicitly via 'poetry source add pypi'. By the way, this has the advantage that you can set the priority of PyPI as with any other source.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\working\chat-gpt-retrival-plugin\edit3\server\main.py", line 17, in <module>
    from datastore.factory import get_datastore
  File "D:\working\chat-gpt-retrival-plugin\edit3\datastore\factory.py", line 1, in <module>
    from datastore.datastore import DataStore
  File "D:\working\chat-gpt-retrival-plugin\edit3\datastore\datastore.py", line 13, in <module>
    from services.chunks import get_document_chunks
  File "D:\working\chat-gpt-retrival-plugin\edit3\services\chunks.py", line 13, in <module>
    tokenizer = tiktoken.get_encoding(
                ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\USER\AppData\Local\pypoetry\Cache\virtualenvs\chatgpt-retrieval-plugin-8svgH3MQ-py3.11\Lib\site-packages\tiktoken\registry.py", line 63, in get_encoding
    enc = Encoding(**constructor())
                     ^^^^^^^^^^^^^
  File "C:\Users\USER\AppData\Local\pypoetry\Cache\virtualenvs\chatgpt-retrieval-plugin-8svgH3MQ-py3.11\Lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
    mergeable_ranks = load_tiktoken_bpe(
                      ^^^^^^^^^^^^^^^^^^
  File "C:\Users\USER\AppData\Local\pypoetry\Cache\virtualenvs\chatgpt-retrieval-plugin-8svgH3MQ-py3.11\Lib\site-packages\tiktoken\load.py", line 104, in load_tiktoken_bpe
    return {
           ^
  File "C:\Users\USER\AppData\Local\pypoetry\Cache\virtualenvs\chatgpt-retrieval-plugin-8svgH3MQ-py3.11\Lib\site-packages\tiktoken\load.py", line 106, in <dictcomp>
    for token, rank in (line.split() for line in contents.splitlines() if line)
        ^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
```
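
The traceback ends inside tiktoken's `load_tiktoken_bpe` while it parses the cl100k_base BPE file line by line, so the unpack error means one of those lines does not split into the expected (token, rank) pair. Below is a minimal sketch to reproduce the failing call outside the server; the `TIKTOKEN_CACHE_DIR` environment variable is a tiktoken feature for redirecting its download cache, and pointing it at a fresh directory here is just an assumption about where a bad cached file might live, not part of the plugin's setup:

```python
# Minimal reproduction of the failing call from services/chunks.py,
# run outside the server. Assumes the plugin requests the "cl100k_base"
# encoding (as the traceback shows) and that the machine can download
# the BPE file from OpenAI's CDN.
import os
import tempfile

import tiktoken

# Point tiktoken at a fresh cache directory so a possibly corrupted
# cached BPE file (a common cause of "not enough values to unpack")
# is not reused. Must be set before get_encoding() runs.
os.environ["TIKTOKEN_CACHE_DIR"] = tempfile.mkdtemp(prefix="tiktoken-cache-")

tokenizer = tiktoken.get_encoding("cl100k_base")
print(tokenizer.encode("hello world"))  # should print a short list of token ids
```

If this isolated call succeeds, the problem is probably a stale or truncated file in tiktoken's default cache; if it still fails, the download itself is likely being corrupted (for example, a proxy returning an HTML error page) rather than anything in the plugin code.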