jonfairbanks / local-rag

Ingest files for retrieval augmented generation (RAG) with open-source Large Language Models (LLMs), all without 3rd parties or sensitive data leaving your network.

Switch to cache_resource for Document Index #54

Closed (JoepdeJong closed this 6 months ago)

JoepdeJong commented 6 months ago

This PR solves the issue of a missing `_model` in the Index after loading it from cache.

It seems better to use a different caching decorator: `st.cache_resource` instead of `st.cache_data`.

As described in https://docs.streamlit.io/develop/concepts/architecture/caching:

> st.cache_resource is the recommended way to cache global resources like ML models or database connections – unserializable objects that you don't want to load multiple times. Using it, you can share these resources across all reruns and sessions of an app without copying or duplication. Note that any mutations to the cached return value directly mutate the object in the cache (more details below).
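A minimal sketch of the switch, assuming the index is built with llama-index's `VectorStoreIndex`; the actual function and argument names in local-rag may differ:

```python
import streamlit as st
from llama_index.core import VectorStoreIndex  # assumed index type

@st.cache_resource(show_spinner=False)  # was: @st.cache_data
def create_index(_documents):
    # cache_resource stores and returns the same object on every call,
    # so non-picklable internals (like an attached embedding model)
    # are not lost to serialization.
    return VectorStoreIndex.from_documents(_documents)
```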

Closes #53

jonfairbanks commented 6 months ago

Thank you for this PR! Caching has definitely been a headache here.

jonfairbanks commented 5 months ago

Actually, since we are using `_documents` here, the underscore tells Streamlit not to cache that particular resource. Removing the underscore results in an error from Streamlit.

I'll merge this into the main branch, but technically nothing is being cached in this function.

JoepdeJong commented 5 months ago

> Actually, since we are using `_documents` here, the underscore tells Streamlit not to cache that particular resource. Removing the underscore results in an error from Streamlit.
>
> I'll merge this into the main branch, but technically nothing is being cached in this function.

Placing an underscore in front of a parameter excludes it from hashing, not from caching; as far as I know, this is only needed for parameters that are not hashable (https://docs.streamlit.io/develop/concepts/architecture/caching#excluding-input-parameters).
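For illustration, a generic example of the underscore convention under `st.cache_data` (the names here are hypothetical, not from local-rag):

```python
import streamlit as st

@st.cache_data
def run_query(_connection, query: str):
    # `_connection` is excluded from the cache key because of the
    # leading underscore; only `query` is hashed. Without the
    # underscore, Streamlit raises UnhashableParamError for the
    # unhashable connection object.
    return _connection.execute(query)
```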

Since cache_resource does not create a copy but returns the very same object every time, the return value never needs to be serialized with this decorator.

> Not creating a copy means there's just one global instance of the cached return object, which saves memory, e.g. when using a large ML model. In computer science terms, we create a singleton.

(https://docs.streamlit.io/develop/concepts/architecture/caching#behavior-1)

This should also explain why `_model` is missing when using `@st.cache_data`.
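A toy reproduction of the difference, under the assumption that the index drops its model handle when pickled (as wrappers around non-serializable resources often do):

```python
import streamlit as st

class Index:
    """Stand-in for the document index; `_model` mimics a
    non-picklable model handle."""
    def __init__(self):
        self._model = object()

    def __getstate__(self):
        # Drop the model on pickling, as wrappers around
        # non-serializable resources commonly do.
        state = self.__dict__.copy()
        state.pop("_model", None)
        return state

@st.cache_data
def load_index_copy():
    return Index()  # caller gets a pickled copy: `_model` is gone

@st.cache_resource
def load_index_shared():
    return Index()  # caller gets the original object: `_model` intact
```

Accessing `load_index_copy()._model` raises an AttributeError on a cache hit, while `load_index_shared()._model` is always present.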