
# FastEmbed-rs πŸ¦€

Rust implementation of [@qdrant/fastembed](https://github.com/qdrant/fastembed).


πŸ• Features

The default model is Flag Embedding, which sits at the top of the MTEB leaderboard.

πŸ” Not looking for Rust?

## πŸ€– Models

### Text Embedding

- [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5)
- [**BAAI/bge-small-zh-v1.5**](https://huggingface.co/BAAI/bge-small-zh-v1.5)
- [**BAAI/bge-base-en-v1.5**](https://huggingface.co/BAAI/bge-base-en-v1.5)
- [**sentence-transformers/all-MiniLM-L12-v2**](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2)
- [**sentence-transformers/paraphrase-MiniLM-L12-v2**](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2)
- [**sentence-transformers/paraphrase-multilingual-mpnet-base-v2**](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- [**nomic-ai/nomic-embed-text-v1**](https://huggingface.co/nomic-ai/nomic-embed-text-v1)
- [**nomic-ai/nomic-embed-text-v1.5**](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
- [**intfloat/multilingual-e5-small**](https://huggingface.co/intfloat/multilingual-e5-small)
- [**intfloat/multilingual-e5-base**](https://huggingface.co/intfloat/multilingual-e5-base)
- [**intfloat/multilingual-e5-large**](https://huggingface.co/intfloat/multilingual-e5-large)
- [**Alibaba-NLP/gte-base-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5)
- [**Alibaba-NLP/gte-large-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5)
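
The list above is a snapshot; the authoritative set for your installed version can be enumerated at runtime. A minimal sketch, assuming the `TextEmbedding::list_supported_models()` helper and the `ModelInfo` fields (`model`, `dim`, `description`) as documented on docs.rs:

```rust
use fastembed::TextEmbedding;

// Print every text embedding model this crate version supports,
// along with its output dimension.
for info in TextEmbedding::list_supported_models() {
    println!("{:?} (dim {}): {}", info.model, info.dim, info.description);
}
```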

### Sparse Text Embedding

### Image Embedding

### Reranking

## πŸš€ Installation

Run the following command in your project directory:

```sh
cargo add fastembed
```

Or add the following to your `Cargo.toml`:

```toml
[dependencies]
fastembed = "3"
```

## πŸ“– Usage

### Text Embeddings

```rust
use fastembed::{TextEmbedding, InitOptions, EmbeddingModel};

// With default InitOptions
let model = TextEmbedding::try_new(Default::default())?;

// With custom InitOptions
let model = TextEmbedding::try_new(
    InitOptions::new(EmbeddingModel::AllMiniLML6V2).with_show_download_progress(true),
)?;

let documents = vec![
    "passage: Hello, World!",
    "query: Hello, World!",
    "passage: This is an example passage.",
    // You can leave out the prefix, but it's recommended
    "fastembed-rs is licensed under Apache 2.0",
];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(documents, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 4
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 384
```

### Image Embeddings

```rust
use fastembed::{ImageEmbedding, ImageInitOptions, ImageEmbeddingModel};

// With default InitOptions
let model = ImageEmbedding::try_new(Default::default())?;

// With custom InitOptions
let model = ImageEmbedding::try_new(
    ImageInitOptions::new(ImageEmbeddingModel::ClipVitB32).with_show_download_progress(true),
)?;

let images = vec!["assets/image_0.png", "assets/image_1.png"];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(images, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 2
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 512
```

### Candidates Reranking

```rust
use fastembed::{TextRerank, RerankInitOptions, RerankerModel};

let model = TextRerank::try_new(
    RerankInitOptions::new(RerankerModel::BGERerankerBase).with_show_download_progress(true),
)?;

let documents = vec![
    "hi",
    "The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear, is a bear species endemic to China.",
    "panda is animal",
    "i dont know",
    "kind of mammal",
];

// Rerank with the default batch size
let results = model.rerank("what is panda?", documents, true, None)?;
println!("Rerank result: {:?}", results);
```

Alternatively, local model files can be used for inference via the `try_new_from_user_defined(...)` methods of the respective structs.
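
A minimal sketch of the local-file path for text embeddings, assuming the v3 API surface (`UserDefinedEmbeddingModel`, `TokenizerFiles`, `InitOptionsUserDefined`); the file paths are illustrative, and the field names should be checked against the docs.rs reference for your version:

```rust
use std::fs;
use fastembed::{
    InitOptionsUserDefined, TextEmbedding, TokenizerFiles, UserDefinedEmbeddingModel,
};

// Illustrative paths to a locally exported ONNX model and its tokenizer files.
let onnx_file = fs::read("model_dir/model.onnx")?;
let tokenizer_files = TokenizerFiles {
    tokenizer_file: fs::read("model_dir/tokenizer.json")?,
    config_file: fs::read("model_dir/config.json")?,
    special_tokens_map_file: fs::read("model_dir/special_tokens_map.json")?,
    tokenizer_config_file: fs::read("model_dir/tokenizer_config.json")?,
};

let user_model = UserDefinedEmbeddingModel {
    onnx_file,
    tokenizer_files,
};

// Inference then works exactly as with the hosted models.
let model = TextEmbedding::try_new_from_user_defined(user_model, InitOptionsUserDefined::default())?;
let embeddings = model.embed(vec!["Hello from a local model"], None)?;
```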

## πŸš’ Under the hood

### Why fast?

It's important to justify the "fast" in FastEmbed. FastEmbed is fast because of:

1. Quantized model weights.
2. ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes.

### Why light?

1. No hidden dependencies on Hugging Face Transformers.

### Why accurate?

1. Better accuracy than OpenAI Ada-002.
2. Ranks at the top of embedding leaderboards, e.g. MTEB.

## πŸ“„ LICENSE

Apache 2.0 Β© 2024