continuum-llms / chatgpt-memory

Scales the ChatGPT API to multiple simultaneous sessions with infinite contextual and adaptive memory, powered by GPT and a Redis datastore.
Apache License 2.0

Do I need to initialize the Redis database first? #32

Closed BostonCodingLeo closed 1 year ago

BostonCodingLeo commented 1 year ago

```
Traceback (most recent call last):
  File "D:\Memory\RedisMemeory\chatgpt-memory\examples\simple_usage.py", line 31, in <module>
    memory_manager = MemoryManager(datastore=redis_datastore, embed_client=embed_client, topk=1)
  File "D:\Memory\RedisMemeory\chatgpt-memory\chatgpt_memory\memory\manager.py", line 34, in __init__
    Memory(conversation_id=conversation_id) for conversation_id in datastore.get_all_conversation_ids()
  File "D:\Memory\RedisMemeory\chatgpt-memory\chatgpt_memory\datastore\redis.py", line 132, in get_all_conversation_ids
    result_documents = self.redis_connection.ft().search(query).docs
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\commands\search\commands.py", line 420, in search
    res = self.execute_command(SEARCH_CMD, *args)
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1269, in execute_command
    return conn.retry.call_with_retry(
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\retry.py", line 46, in call_with_retry
    return do()
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1270, in <lambda>
    lambda: self._send_command_parse_response(
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1246, in _send_command_parse_response
    return self.parse_response(conn, command_name, **options)
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1286, in parse_response
    response = connection.read_response()
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\connection.py", line 905, in read_response
    raise response
redis.exceptions.ResponseError: idx: no such index
```

How do I create the `idx` index?
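For context, the search fails because the RediSearch index the datastore queries does not exist yet. A minimal sketch of what creating it would look like; the field names, prefix, and vector dimension below are assumptions for illustration, not the library's actual schema:

```python
# Build the raw FT.CREATE arguments for a hypothetical conversation index
# with a tag field, a text field, and a FLAT vector field.
# All field names and the 1536-dim default are assumptions.

def ft_create_args(index_name: str = "idx",
                   prefix: str = "chat:",
                   vector_dim: int = 1536) -> list[str]:
    """Return the RediSearch FT.CREATE command as a token list."""
    return [
        "FT.CREATE", index_name,
        "ON", "HASH",
        "PREFIX", "1", prefix,
        "SCHEMA",
        "conversation_id", "TAG",
        "text", "TEXT",
        # "6" = number of attribute tokens that follow the algorithm name
        "embedding", "VECTOR", "FLAT", "6",
        "TYPE", "FLOAT32",
        "DIM", str(vector_dim),
        "DISTANCE_METRIC", "COSINE",
    ]

print(" ".join(ft_create_args()))
```

Running the printed command via `redis-cli` against a server with the RediSearch module loaded creates the index; with redis-py the same schema can be passed to `r.ft("idx").create_index(...)`. In this project, the `create_index()` function in `chatgpt_memory/datastore/redis.py` (mentioned later in this thread) is presumably meant to do this, so calling it once before constructing `MemoryManager` may be the intended fix.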

Also, I am using Python 3.10.12 in a conda virtual environment, under PowerShell on Windows 11 with an Intel x64 CPU. Why can I not use tiktoken for OpenAI?

```
D:\Memory\RedisMemeory\chatgpt-memory>python .\examples\simple_usage.py
OpenAI tiktoken module is not available for Python < 3.8, Linux ARM64 and AARCH64. Falling back to GPT2TokenizerFast.
```
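The warning describes a platform guard. A minimal sketch of the conditions the message names (this mirrors the warning text, not the library's actual code):

```python
import platform
import sys

def tiktoken_supported() -> bool:
    """Return False under the conditions the warning names:
    Python < 3.8, or Linux on ARM64/AARCH64."""
    if sys.version_info < (3, 8):
        return False
    if platform.system() == "Linux" and platform.machine().lower() in ("arm64", "aarch64"):
        return False
    return True

print(tiktoken_supported())
```

On Python 3.10.12 / Windows x64 this returns True, so if the fallback still fires, the cause may be a configuration default in the library rather than the platform itself.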

BostonCodingLeo commented 1 year ago

I created a Redis Stack database and entered the host, port, password, etc. in the .env file. All the scripts under ./tests/ passed with no problems.

BadlyDrawnBoy commented 1 year ago

I'm basically stuck with the same issue, except I'm using Linux (Debian 12) and Python 3.11.2 in a virtual environment.

```
$ poetry run uvicorn rest_api:app --host 0.0.0.0 --port 8000
Traceback (most recent call last):
  File "/home/martin/src/chatgpt-memory/gpt-memory/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/main.py", line 403, in main
    run(
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/main.py", line 568, in run
    server.run()
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
  File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "/usr/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/martin/src/chatgpt-memory/rest_api.py", line 28, in <module>
    memory_manager = MemoryManager(datastore=redis_datastore, embed_client=embed_client, topk=1)
  File "/home/martin/src/chatgpt-memory/chatgpt_memory/memory/manager.py", line 34, in __init__
    Memory(conversation_id=conversation_id) for conversation_id in datastore.get_all_conversation_ids()
  File "/home/martin/src/chatgpt-memory/chatgpt_memory/datastore/redis.py", line 132, in get_all_conversation_ids
    result_documents = self.redis_connection.ft().search(query).docs
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/commands/search/commands.py", line 420, in search
    res = self.execute_command(SEARCH_CMD, *args)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/client.py", line 1269, in execute_command
    return conn.retry.call_with_retry(
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/client.py", line 1270, in <lambda>
    lambda: self._send_command_parse_response(
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/client.py", line 1246, in _send_command_parse_response
    return self.parse_response(conn, command_name, **options)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/client.py", line 1286, in parse_response
    response = connection.read_response()
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/redis/connection.py", line 905, in read_response
    raise response
redis.exceptions.ResponseError: Unknown Index name
```

Regarding tiktoken, ChatGPT pointed me to the `use_tiktoken` attribute of the `EmbeddingConfig` class in chatgpt_memory/llm_client/openai/embedding/config.py. Setting it to `True` does the trick.
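Concretely, that would be a config fragment along these lines (the import path and attribute name come from the comment above; any other constructor arguments `EmbeddingConfig` may require are not shown and would need to be filled in):

```python
from chatgpt_memory.llm_client.openai.embedding.config import EmbeddingConfig

# use_tiktoken appears to default to False; enable it to use the
# tiktoken tokenizer instead of the GPT2TokenizerFast fallback.
config = EmbeddingConfig(use_tiktoken=True)
```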

For the "Unknown Index name" error I don't have a solution yet. In redis.py there is a create_index() function, but the index name is neither taken from a config file nor hard-coded, and I have not yet been able to find out which index is expected or queried. I installed a Redis server on my own VPS / laptop (server v5:7.0.11-1). The bare Redis server is not enough and leads to an error; it needs the RediSearch module (v1:1.2.2-4). Unfortunately even that is not sufficient, and I have not yet been able to identify what is missing (it still results in the "Unknown Index name" error). With the Redis Cloud solution it works so far.

BadlyDrawnBoy commented 1 year ago

Installing the Debian redis-stack-server package from the Redis repository on my Debian 12 system solved the problem for me. I haven't figured out how to disable the modules individually to see what is really needed: somehow the server ignores the configuration file that is passed to it via the systemd service file. Let's see when I get back to it.
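For anyone else on Debian, the install steps below follow Redis's documented packages.redis.io setup; treat them as a sketch and check the current Redis docs before running:

```shell
# Add Redis's package repository and install Redis Stack
# (bundles RediSearch and the other modules with the server).
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install -y redis-stack-server
sudo systemctl enable --now redis-stack-server
```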

nps1ngh commented 1 year ago

Hi, thanks a lot for your interest in the project.

Due to time constraints, we are unable to continue development on this repository.

Please check out OpenAI's official retrieval plugin, which offers similar functionality and is under active development: openai/chatgpt-retrieval-plugin