
💎🔗 Langchain.rb for Rails

The fastest way to sprinkle AI ✨ on top of your Rails app. Add OpenAI-powered question-and-answering in minutes.

Available for paid consulting engagements! Email me.


Installation

Install the gem and add it to the application's Gemfile by executing:

bundle add langchainrb_rails

If bundler is not being used to manage dependencies, install the gem by executing:

gem install langchainrb_rails

Configuration w/ Pgvector (requires Postgres 11+)

  1. Run the Rails generator to add vectorsearch to your ActiveRecord model
    rails generate langchainrb_rails:pgvector --model=Product --llm=openai

This adds the required dependencies to your Gemfile, creates the config/initializers/langchainrb_rails.rb initializer file and the database migrations, and adds the necessary code to the ActiveRecord model to enable vectorsearch. A sketch of the generated initializer and model code follows these steps.

  2. Bundle and migrate

    bundle install && rails db:migrate
  3. Set the env var OPENAI_API_KEY to your OpenAI API key: https://platform.openai.com/account/api-keys

    ENV["OPENAI_API_KEY"]= 
  4. Generate embeddings for your model

    Product.embed!

This can take a while depending on the number of database records.
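
For reference, the code generated in step 1 looks roughly like the following. This is only a sketch based on the pgvector/OpenAI options used above; the files the generator actually writes may differ:

# config/initializers/langchainrb_rails.rb (sketch)
LangchainrbRails.configure do |config|
  config.vectorsearch = Langchain::Vectorsearch::Pgvector.new(
    llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  )
end

# app/models/product.rb (sketch)
class Product < ApplicationRecord
  vectorsearch                        # exposes .ask, .similarity_search, .embed!, etc.

  after_save :upsert_to_vectorsearch  # keeps the record's embedding in sync
end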

Usage

Question and Answering

Product.ask("list the brands of shoes that are in stock")

Returns a String with a natural language answer. The answer is assembled using the following steps:

  1. An embedding is generated for the passed-in question using the selected LLM.
  2. A cosine similarity search finds the records whose embeddings most closely match the question's embedding.
  3. A prompt is created from the question, with the matching records (their #as_vector representations) added as context.
  4. The prompt is passed to the LLM to generate an answer.
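
In rough Ruby terms, the flow looks like the sketch below. This is a conceptual illustration rather than the gem's actual implementation; the helper names llm and nearest_records, and the prompt wording, are made up for the example:

# Conceptual sketch of the question-answering flow (illustrative only)
def answer(question)
  question_embedding = llm.embed(text: question)   # 1. embed the question

  records = nearest_records(question_embedding)    # 2. cosine-similarity search over stored embeddings

  context = records.map(&:as_vector).join("\n")    # 3. matching records become context
  prompt  = "Use the context to answer the question.\n" \
            "Context:\n#{context}\nQuestion: #{question}"

  llm.complete(prompt: prompt)                     # 4. the LLM generates the final answer
end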

Similarity Search

Product.similarity_search("t-shirt")

Returns an ActiveRecord relation of the records that most closely match the query, using vector search.
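
Because the result is a relation, it can be chained with ordinary ActiveRecord scopes. For example, assuming your model has an in_stock column (not part of the gem):

Product.similarity_search("t-shirt").where(in_stock: true).limit(5)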

Customization

Changing the vector representation of a record

By default, embeddings are generated by calling the following method on your model instance:

to_json(except: :embedding)

You can override this by defining an #as_vector method in your model:

def as_vector
  { name: name, description: description, category: category.name, ... }.to_json
end

Re-generate embeddings after modifying this method:

Product.embed!
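
As noted above, embedding a large table can take a while; one way to avoid a long-running console session is to run it from a background job. A hypothetical sketch (EmbedProductsJob is not provided by the gem):

# app/jobs/embed_products_job.rb -- hypothetical helper, not part of the gem
class EmbedProductsJob < ApplicationJob
  queue_as :default

  def perform
    # Generates (or re-generates) embeddings for every Product record
    Product.embed!
  end
end

Enqueue it with EmbedProductsJob.perform_later from the console or a deploy task.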

Rails Generators

Pgvector Generator

rails generate langchainrb_rails:pgvector --model=Product --llm=openai

Pinecone Generator - adds vectorsearch to your ActiveRecord model

rails generate langchainrb_rails:pinecone --model=Product --llm=openai

Qdrant Generator - adds vectorsearch to your ActiveRecord model

rails generate langchainrb_rails:qdrant --model=Product --llm=openai

Available --llm options: cohere, google_palm, hugging_face, llama_cpp, ollama, openai, and replicate. The selected LLM will be used to generate embeddings and completions.
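
For example, to generate embeddings and completions with a locally running Ollama model instead of OpenAI:

rails generate langchainrb_rails:pgvector --model=Product --llm=ollama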

The --model option specifies the ActiveRecord model that vectorsearch capabilities will be added to.

Pinecone Generator does the following:

  1. Creates the config/initializers/langchainrb_rails.rb initializer file
  2. Adds necessary code to the ActiveRecord model to enable vectorsearch
  3. Adds pinecone gem to the Gemfile
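
The initializer it creates points config.vectorsearch at a Pinecone-backed store instead of Pgvector. A hedged sketch of what that might look like (the environment variable names and index name are illustrative):

# config/initializers/langchainrb_rails.rb (sketch for the Pinecone generator)
LangchainrbRails.configure do |config|
  config.vectorsearch = Langchain::Vectorsearch::Pinecone.new(
    api_key: ENV["PINECONE_API_KEY"],
    environment: ENV["PINECONE_ENVIRONMENT"],
    index_name: "products",
    llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  )
end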

Prompt Generator - adds prompt templating capabilities to your Rails app

rails generate langchainrb_rails:prompt

This generator adds the following files to your Rails project:

  1. An ActiveRecord Prompt model at app/models/prompt.rb
  2. A Rails migration to create the prompts table

You can then use the Prompt model to create and manage prompt templates for your application.

Example usage:

prompt = Prompt.create!(template: "Tell me a {adjective} joke about {subject}.")
prompt.render(adjective: "funny", subject: "elephants")
# => "Tell me a funny joke about elephants."

Assistant Generator - adds assistant capabilities to your Rails app

rails generate langchainrb_rails:assistant