run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: Can't import RedisCache; missing from llama_index.core.ingestion.cache #10731

Closed JoeSF49 closed 4 months ago

JoeSF49 commented 8 months ago

Bug Description

I'm having trouble importing the RedisCache module in the new llama-index environment (0.10.4).

Using:
llama-index 0.10.4
llama-index-agent-openai 0.1.1
llama-index-core 0.10.3
llama-index-embeddings-huggingface 0.1.1
llama-index-embeddings-openai 0.1.1
llama-index-extractors-entity 0.1.1
llama-index-legacy 0.9.48
llama-index-llms-anyscale 0.1.1
llama-index-llms-langchain 0.1.1
llama-index-llms-openai 0.1.1
llama-index-multi-modal-llms-openai 0.1.1
llama-index-program-openai 0.1.1
llama-index-question-gen-openai 0.1.1
llama-index-readers-file 0.1.2
llama-index-storage-docstore-redis 0.1.1
llama-index-storage-kvstore-redis 0.1.1
llama-index-vector-stores-redis 0.1.1

Version

0.10.4

Steps to Reproduce

from llama_index.core.ingestion.cache import RedisCache

Relevant Logs/Tracebacks

--------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[7], line 27
     25 from llama_index.storage.docstore.redis import RedisDocumentStore
     26 from llama_index.vector_stores.redis import RedisVectorStore
---> 27 from llama_index.core.ingestion.cache import RedisCache

ImportError: cannot import name 'RedisCache' from 'llama_index.core.ingestion.cache' (/opt/miniforge/envs/Test/lib/python3.10/site-packages/llama_index/core/ingestion/cache.py)
dosubot[bot] commented 8 months ago

🤖

Hey @JoeSF49, nice to see you diving into the depths of LlamaIndex again! Hope you're ready for another coding adventure.

Based on the error message you provided, it seems like the RedisCache module is not available in the llama_index.core.ingestion.cache package of the llama-index-core version you're using (0.10.3).

The ImportError you're seeing typically occurs when Python cannot find the specified module or function in the package you're trying to import from. This could be due to a few reasons:

  1. The RedisCache module does not exist in the llama_index.core.ingestion.cache package of the llama-index-core version you're using.
  2. The RedisCache module was introduced in a later version of llama-index-core than the one you're using.
  3. There might be a typo or error in the import statement.

Given that the latest version of llama-index-core is 0.10.3 (as per the context provided), it's possible that the RedisCache module was introduced in a later version that hasn't been released yet, or it might not exist at all.

To resolve this issue, I recommend the following steps:

  1. Double-check the import statement for any typos or errors.
  2. Check the documentation or source code of llama-index-core to verify if the RedisCache module exists in the llama_index.core.ingestion.cache package.
  3. If the RedisCache module exists in a later version of llama-index-core, you'll need to wait until that version is released and then update your llama-index-core package.
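Step 2 can be done programmatically. As a quick sanity check, you can probe a package for a name before importing it; the sketch below uses stdlib stand-ins, since the exact llama-index module layout is version-dependent:

```python
import importlib


def has_name(module_name: str, attr: str) -> bool:
    """Return True if `attr` can be imported from `module_name`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)


# Stand-in probes against the stdlib; in practice you would check
# "llama_index.core.ingestion.cache" for "RedisCache".
print(has_name("collections", "OrderedDict"))  # True
print(has_name("collections", "RedisCache"))   # False
```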

If you've followed these steps and are still encountering the issue, it would be helpful if you could provide more information about your environment and the steps you've taken so far. This will help in diagnosing the problem more accurately.

I hope this helps! If you have any more questions or need further clarification, feel free to ask.


nerdai commented 8 months ago

Hey @JoeSF49, thanks for raising this, and sorry that you've run into a bit of trouble moving over to v0.10.

I can confirm that I was able to reproduce the bug in a fresh virtual environment. I believe you are using a brand-new virtual environment yourself, but if you aren't, please make sure you do, as we have adopted a namespacing convention in v0.10 that clashes with v0.9.xx and earlier.

So what's happened here is that in 0.9.xx, RedisCache is actually an "alias" for RedisKVStore. In 0.9.xx (legacy):

# old llama_index.ingestion.cache module
...
from llama_index.storage.kvstore import (
    RedisKVStore as RedisCache,
)
...
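That re-export is plain name binding: `from X import Y as Z` makes `Z` a second name for the very same object. A stdlib stand-in illustrates the mechanics (OrderedDict plays the role of RedisKVStore here):

```python
import collections

# Same rebinding pattern as the 0.9.xx cache module, with a
# stdlib class standing in for RedisKVStore:
from collections import OrderedDict as Cache

# `Cache` is not a copy or a subclass; it is the identical class object.
assert Cache is collections.OrderedDict
```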

For 0.10, you've indeed correctly installed the llama-index-storage-kvstore-redis package. We didn't carry over this alias (though perhaps we should...), but you can import it directly:

from llama_index.storage.kvstore.redis import RedisKVStore as RedisCache
from llama_index.core.ingestion import IngestionCache

ingest_cache = IngestionCache(
    cache=RedisCache.from_host_and_port(host="127.0.0.1", port=6379),
    collection="my_test_cache",
)

The above worked for me in a fresh venv with llama-index installed along with the packages in the requirements.txt file shared below.

# requirements.txt
llama-index-storage-docstore-redis
llama-index-storage-kvstore-redis
llama-index-vector-stores-redis
llama-index-llms-anyscale
llama-index-llms-langchain
llama-index-extractors-entity
llama-index-embeddings-huggingface
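For context on where the cache plugs in, here is a minimal sketch of wiring it into an IngestionPipeline. It assumes a Redis server listening on 127.0.0.1:6379 and the packages above installed; the SentenceSplitter transformation is illustrative, not required:

```python
from llama_index.core import Document
from llama_index.core.ingestion import IngestionCache, IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.storage.kvstore.redis import RedisKVStore as RedisCache

# Back the ingestion cache with Redis (assumes a local Redis server).
ingest_cache = IngestionCache(
    cache=RedisCache.from_host_and_port(host="127.0.0.1", port=6379),
    collection="my_test_cache",
)

pipeline = IngestionPipeline(
    transformations=[SentenceSplitter()],
    cache=ingest_cache,
)

# Re-running the pipeline on the same documents hits the Redis cache
# instead of recomputing the transformations.
nodes = pipeline.run(documents=[Document(text="hello world")])
```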
JoeSF49 commented 8 months ago

I've confirmed it works now. Thank you.

I tested using from llama_index.storage.kvstore.redis import RedisKVStore as RedisCache

You may need to update the documentation. My original import from llama_index.core.ingestion.cache was from the current documentation:

https://docs.llamaindex.ai/en/stable/examples/ingestion/advanced_ingestion_pipeline.html#redis-cache-setup

ucekmez commented 7 months ago

> I've confirmed it works now. Thank you.
>
> I tested using from llama_index.storage.kvstore.redis import RedisKVStore as RedisCache
>
> You may need to update the documentation. My original import from llama_index.core.ingestion.cache was from the current documentation:
>
> https://docs.llamaindex.ai/en/stable/examples/ingestion/advanced_ingestion_pipeline.html#redis-cache-setup

That's right, the current documentation still shows the same non-working imports.

guybartal commented 4 months ago

It still does... why has nobody updated it?