Storia-AI / sage

Chat with any codebase with 2 commands
https://sage.storia.ai
Apache License 2.0

Feature Request: Consider adding in memory index for local mode #26

Open rahulgoel opened 1 month ago

rahulgoel commented 1 month ago

If I'm trying to chat with a one-off GitHub repo, running Docker feels a bit heavy. A local Python file for the embeddings and a local HF model for computing the vectors might also be a reasonable thing to do.

iuliaturc commented 1 week ago

Great suggestion, Rahul! Here are some more detailed instructions for new contributors:

Steps:

  1. Implement a subclass of `VectorStore`.
  2. Add an `"inmemory"` value to the `--vector-store-provider` flag.
  3. Update `build_vector_store_from_args` to instantiate it appropriately.
  4. Update the README.
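
Step 1 above could be sketched roughly as follows. This is a minimal illustration, not sage's actual interface: the method names (`upsert`, `query`) and the standalone class (rather than a true `VectorStore` subclass) are assumptions, and a real contribution would match the signatures defined in the repo.

```python
# Hypothetical sketch of an in-memory vector store with brute-force
# cosine-similarity search. Fine for one-off repos; no persistence.
import numpy as np


class InMemoryVectorStore:
    def __init__(self):
        self.ids = []       # document/chunk identifiers
        self.vectors = []   # embedding vectors, parallel to self.ids

    def upsert(self, id_, vector):
        # Replace an existing entry, or append a new one.
        vec = np.asarray(vector, dtype=float)
        if id_ in self.ids:
            self.vectors[self.ids.index(id_)] = vec
        else:
            self.ids.append(id_)
            self.vectors.append(vec)

    def query(self, vector, top_k=5):
        # Cosine similarity of the query against every stored vector.
        matrix = np.vstack(self.vectors)
        q = np.asarray(vector, dtype=float)
        sims = matrix @ q / (
            np.linalg.norm(matrix, axis=1) * np.linalg.norm(q) + 1e-12
        )
        order = np.argsort(-sims)[:top_k]
        return [(self.ids[i], float(sims[i])) for i in order]
```

For example, after `store.upsert("chunk-1", embedding)` for each chunk, `store.query(query_embedding, top_k=5)` returns the closest `(id, score)` pairs. Brute-force search scales linearly with the number of chunks, which is acceptable for a single repo's worth of embeddings.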

iuliaturc commented 5 hours ago

Looks like LanceDB also has an in-memory option: https://lancedb.github.io/lancedb/basic/