xhluca / bm25s

Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy
https://bm25s.github.io
MIT License

[Feature Request] Support attaching metadata to the corpus #18

Closed · logan-markewich closed this issue 4 months ago

logan-markewich commented 4 months ago

It can be very helpful to attach metadata to a corpus; the metadata would not be indexed, but would still be returned during retrieval.

For example, a super naive approach:

corpus = [
  {"text": "Hello world", "metadata": {"source": "internet"}},
  ...
]

The main motivation for me is providing a more first-class integration in llama-index 😄 I could serialize the entire TextNode object to make saving/loading very smooth, but I think this would be a super valuable feature overall.
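
For concreteness, here is a minimal sketch of what that integration could look like. It assumes llama-index's TextNode (from llama_index.core.schema) and its to_dict()/from_dict() helpers for JSON round-tripping; the exact API may differ:

# Hypothetical sketch: serialize llama-index TextNode objects into plain dicts
# so they can ride along as bm25s corpus entries (the metadata itself is not indexed).
from llama_index.core.schema import TextNode

nodes = [
    TextNode(text="Hello world", metadata={"source": "internet"}),
    TextNode(text="Goodbye world", metadata={"source": "a book"}),
]

# Each corpus entry becomes a JSON-serializable dict carrying the full node.
corpus = [node.to_dict() for node in nodes]

# After retrieval, the original nodes could be reconstructed from the returned dicts:
# restored = [TextNode.from_dict(doc) for doc in returned_docs]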

xhluca commented 4 months ago

This should already work with the current version of bm25s (0.1), because the corpus passed to the BM25 object is not the corpus passed to the index() method; rather, it is a "passthrough" that is only needed during retrieval.

Note, however, that saving will only work with JSON-serializable objects (e.g. dict, list); see the short sketch after the output below.

import bm25s

# Create your corpus here

corpus_json = [
    {"text": "a cat is a feline and likes to purr", "metadata": {"source": "internet"}},
    {"text": "a dog is the human's best friend and loves to play", "metadata": {"source": "encyclopedia"}},
    {"text": "a bird is a beautiful animal that can fly", "metadata": {"source": "cnn"}},
    {"text": "a fish is a creature that lives in water and swims", "metadata": {"source": "i made it up"}},
]
corpus_text = [doc["text"] for doc in corpus_json]

# Tokenize the corpus and only keep the ids (faster and saves memory)
corpus_tokens = bm25s.tokenize(corpus_text, stopwords="en")

# Create the BM25 model and index the corpus
retriever = bm25s.BM25(corpus=corpus_json)
retriever.index(corpus_tokens)

# Query the corpus
query = "does the fish purr like a cat?"
query_tokens = bm25s.tokenize(query)

# Get top-k results as a tuple of (documents, scores); both are arrays of shape (n_queries, k).
# Since a corpus was attached to the retriever, `results` holds the corpus entries rather than doc ids.
results, scores = retriever.retrieve(query_tokens, k=2)

for i in range(results.shape[1]):
    doc, score = results[0, i], scores[0, i]
    print(f"Rank {i+1} (score: {score:.2f}): {doc}")

# You can save the arrays to a directory...
retriever.save("animal_index_bm25")

# ...and load them when you need them
import bm25s
reloaded_retriever = bm25s.BM25.load("animal_index_bm25", load_corpus=True)
# set load_corpus=False if you don't need the corpus

Output:

Rank 1 (score: 1.06): {'text': 'a cat is a feline and likes to purr', 'metadata': {'source': 'internet'}}            
Rank 2 (score: 0.48): {'text': 'a fish is a creature that lives in water and swims', 'metadata': {'source': 'i made it up'}}
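
Regarding the JSON-serializability note above, here is a minimal sketch of coercing metadata into JSON-friendly types before attaching it to the corpus (the datetime field is just a hypothetical example of a non-serializable value):

import json
from datetime import datetime

# Hypothetical metadata containing a value that json.dumps cannot handle directly.
raw_docs = [
    {"text": "a cat is a feline and likes to purr",
     "metadata": {"source": "internet", "crawled_at": datetime(2024, 6, 1)}},
]

def to_jsonable(doc):
    # Round-trip through json with default=str so non-serializable values
    # (here the datetime) become strings, allowing the corpus to be saved.
    return json.loads(json.dumps(doc, default=str))

corpus_json = [to_jsonable(doc) for doc in raw_docs]
# corpus_json can now be passed as bm25s.BM25(corpus=corpus_json) and saved safely.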
xhluca commented 4 months ago

I've added the example above to examples/:

https://github.com/xhluca/bm25s/blob/main/examples/index_with_metadata.py

logan-markewich commented 4 months ago

Awesome! I only read the readme (whoops, ha). Will update my llama-index PR to account for this :)