# AI Resources
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![Language](https://img.shields.io/github/languages/top/redis-developer/redis-ai-resources)
![GitHub last commit](https://img.shields.io/github/last-commit/redis-developer/redis-ai-resources)
✨ A curated repository of code recipes, demos, tutorials and resources for basic and advanced Redis use cases in the AI ecosystem. ✨
[**Demos**](#demos) | [**Recipes**](#recipes) | [**Tutorials**](#tutorials) | [**Integrations**](#integrations) | [**Content**](#content) | [**Benchmarks**](#benchmarks) | [**Docs**](#docs)
## Demos
There's no faster way to get started than diving in and playing around with a demo.
| Demo | Description |
| --- | --- |
| Redis RAG Workbench | Interactive demo for building a RAG-based chatbot over a user-uploaded PDF. Toggle different settings and configurations to improve chatbot performance and quality. Built with RedisVL, LangChain, RAGAs, and more. |
| Redis VSS - Simple Streamlit Demo | Streamlit demo of Redis vector search |
| ArXiv Search | Full-stack vector search implementation with Redis and a React front end |
| Product Search | Product vector search with Redis Stack and Redis Enterprise |
| ArxivChatGuru | Streamlit demo of RAG over arXiv documents with Redis and OpenAI |
## Recipes
Need quickstarts to begin your Redis AI journey? Start here.
### Getting started with Redis & Vector Search
### Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (aka RAG) is a technique for improving an LLM's responses to user queries. The retrieval step of RAG is backed by a vector database, which returns semantically relevant results for a user's query; those results serve as context to augment the LLM's generative capabilities.
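As a minimal sketch of the retrieval-then-augment flow described above (hand-made toy embeddings stand in for a real embedding model, and brute-force cosine similarity stands in for Redis's server-side vector search, so the numbers and documents here are purely illustrative):

```python
import math

# Toy corpus: each document is paired with a hand-made 3-d "embedding".
# In a real RAG stack, Redis stores these vectors and runs the similarity
# search server-side; a real embedding model produces the vectors.
DOCS = [
    ("Redis supports vector similarity search.", [0.9, 0.1, 0.0]),
    ("LLMs can hallucinate without grounding.",  [0.1, 0.9, 0.1]),
    ("Croissants are a French pastry.",          [0.0, 0.1, 0.9]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank documents by cosine similarity to the query embedding (the
    # "retrieval" in RAG) and keep the top k.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec):
    # Prepend retrieved context to the user question (the "augmentation"),
    # producing the prompt that would be sent to the LLM.
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

# A query about vector search, with an embedding chosen by hand for the sketch:
prompt = build_prompt("How does Redis help with search?", [0.8, 0.2, 0.0])
```

The key design point is that retrieval happens before generation: the LLM only ever sees the question plus whatever context the vector search surfaced.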
To get started with RAG, either from scratch or with a popular framework like LlamaIndex or LangChain, try these recipes:
### LLM Memory
### Semantic Cache
An estimated 31% of LLM queries are potentially redundant (source). Semantic caching with Redis can quickly cut down on LLM costs by serving cached responses for semantically similar prompts.
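To illustrate the idea, here is a toy in-process semantic cache. Hand-made embeddings and brute-force cosine comparison stand in for a real embedding model and Redis's vector search; the class name, method names, and threshold are illustrative assumptions, not a real RedisVL API.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class ToySemanticCache:
    """Return a cached LLM response when a new prompt's embedding is close
    enough (cosine similarity >= threshold) to a previously stored one."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def store(self, embedding, response):
        # Save an LLM response keyed by its prompt's embedding.
        self.entries.append((embedding, response))

    def check(self, embedding):
        # Linear scan: a real cache would do this as a vector search in Redis.
        for cached_vec, response in self.entries:
            if cosine(embedding, cached_vec) >= self.threshold:
                return response  # cache hit: skip the LLM call entirely
        return None  # cache miss: caller must query the LLM

cache = ToySemanticCache(threshold=0.9)
cache.store([0.9, 0.1, 0.0], "Redis is an in-memory data store.")

# A paraphrased prompt with a nearby embedding hits the cache...
hit = cache.check([0.85, 0.15, 0.0])
# ...while an unrelated prompt misses and would fall through to the LLM.
miss = cache.check([0.0, 0.1, 0.9])
```

Because hits avoid an LLM round trip, the threshold trades off cost savings against the risk of serving a cached answer to a question that only looks similar.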
### Agents

### Computer Vision

### Recommendation Systems
## Tutorials

Need a deeper dive into different use cases and topics?
## Integrations

Redis integrates with many different players in the AI ecosystem. Here's a curated list:
| Integration | Description |
| --- | --- |
| RedisVL | Dedicated Python client library for using Redis as a vector database |
| AWS Bedrock | Streamlines GenAI deployment by offering foundation models behind a unified API |
| LangChain Python | Popular Python library for building LLM applications powered by Redis |
| LangChain JS | Popular JavaScript library for building LLM applications powered by Redis |
| LlamaIndex | LlamaIndex integration for Redis as a vector database (formerly GPT Index) |
| LiteLLM | Popular LLM proxy layer that helps manage and streamline usage of multiple foundation models |
| Semantic Kernel | Popular library from Microsoft for integrating LLMs with plugins |
| RelevanceAI | Platform to tag, search, and analyze unstructured data faster, built on Redis |
| DocArray | DocArray integration of Redis as a vector database, by Jina AI |
## Content

## Benchmarks

## Docs