
# AI Resources

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ![Language](https://img.shields.io/github/languages/top/redis-developer/redis-ai-resources) ![GitHub last commit](https://img.shields.io/github/last-commit/redis-developer/redis-ai-resources)
✨ A curated repository of code recipes, demos, and resources for basic and advanced Redis use cases in the AI ecosystem. ✨

## Table of Contents

- [Demos](#demos)
- [Recipes](#recipes)
- [Integrations/Tools](#integrationstools)
- [Additional content](#additional-content)
- [Benchmarks](#benchmarks)
- [Documentation](#documentation)

## Demos

There's no faster way to get started than diving in and playing around with one of our demos.

| Demo | Description |
| --- | --- |
| ArxivChatGuru | Streamlit demo of RAG over Arxiv documents with Redis & OpenAI |
| Redis VSS - Simple Streamlit Demo | Streamlit demo of Redis vector search |
| Vertex AI & Redis | A tutorial featuring Redis with Vertex AI |
| Agentic RAG | A tutorial focused on agentic RAG with LlamaIndex and Cohere |
| ArXiv Search | Full-stack implementation of Redis with a React front end |
| Product Search | Vector search with Redis Stack and Redis Enterprise |

## Recipes

Need specific sample code to help get started with Redis? Start here.

### Getting started with Redis & Vector Search

| Recipe | Description |
| --- | --- |
| /redis-intro/redis_intro.ipynb | The place to start if you are brand new to Redis |
| /vector-search/00_redispy.ipynb | Vector search with the Redis Python client |
| /vector-search/01_redisvl.ipynb | Vector search with the Redis Vector Library (redisvl) |
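
If you want the shape of the API before opening a notebook, here is a minimal sketch of indexing and querying vectors with redisvl. The index name, field names, dimensions, and sample data are illustrative assumptions, not taken from the recipes:

```python
# Minimal redisvl vector search sketch. Index name, fields, dims, and data
# are illustrative assumptions, not taken from the recipe notebooks.
import numpy as np
from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

# Define an index with a tag field and a small 3-dim vector field.
schema = {
    "index": {"name": "docs", "prefix": "doc"},
    "fields": [
        {"name": "category", "type": "tag"},
        {"name": "embedding", "type": "vector",
         "attrs": {"dims": 3, "distance_metric": "cosine",
                   "algorithm": "flat", "datatype": "float32"}},
    ],
}

index = SearchIndex.from_dict(schema, redis_url="redis://localhost:6379")
index.create(overwrite=True)

# Load documents; hash storage expects vectors as raw float32 bytes.
index.load([
    {"category": "ai", "embedding": np.array([0.1, 0.2, 0.3], dtype=np.float32).tobytes()},
    {"category": "db", "embedding": np.array([0.9, 0.1, 0.0], dtype=np.float32).tobytes()},
])

# K-nearest-neighbor query: return the 2 closest documents.
query = VectorQuery(
    vector=[0.1, 0.2, 0.3],
    vector_field_name="embedding",
    return_fields=["category"],
    num_results=2,
)
print(index.query(query))
```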

### Getting started with RAG

Retrieval-Augmented Generation (RAG) is a technique for improving an LLM's responses to user queries. The retrieval half of RAG is backed by a vector database, which returns semantically relevant results for a user's query; those results serve as contextual information to augment the LLM's generative capabilities.

To get started with RAG, either from scratch or using a popular framework like LlamaIndex or LangChain, try these recipes (a minimal end-to-end sketch follows the table):

| Recipe | Description |
| --- | --- |
| /RAG/01_redisvl.ipynb | RAG from scratch with the Redis Vector Library |
| /RAG/02_langchain.ipynb | RAG using Redis and LangChain |
| /RAG/03_llamaindex.ipynb | RAG using Redis and LlamaIndex |
| /RAG/04_advanced_redisvl.ipynb | Advanced RAG with redisvl |
| /RAG/05_nvidia_ai_rag_redis.ipynb | RAG using Redis and NVIDIA |
| /RAG/06_ragas_evaluation.ipynb | Evaluate RAG performance with the RAGAS framework |
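
The recipes cover the details end to end; as a quick orientation, here is a minimal sketch of the retrieve-then-generate loop using redisvl and the OpenAI client. The index name ("docs"), field names, and model choices are illustrative assumptions:

```python
# Minimal retrieve-then-generate sketch. Assumes an existing "docs" index
# whose "embedding" field matches the embedding model's dimensions
# (1536 for text-embedding-3-small) and whose documents have a "content"
# field; all of these names are illustrative assumptions.
from openai import OpenAI
from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
index = SearchIndex.from_existing("docs", redis_url="redis://localhost:6379")

question = "How does Redis support vector search?"

# 1) Embed the user's question.
q_vec = client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

# 2) Retrieve the most semantically relevant chunks from Redis.
results = index.query(VectorQuery(
    vector=q_vec, vector_field_name="embedding",
    return_fields=["content"], num_results=3,
))
context = "\n\n".join(doc["content"] for doc in results)

# 3) Generate an answer grounded in the retrieved context.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```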

### LLM Session Management

LLMs are stateless. To maintain context within a conversation, chat history must be stored and re-sent to the LLM on every turn. Redis manages the storage and retrieval of chat sessions to maintain context and conversational relevance; a minimal storage sketch follows the table.

| Recipe | Description |
| --- | --- |
| /llm-session-manager/00_session_manager.ipynb | LLM session manager with semantic similarity |
| /llm-session-manager/01_multiple_sessions.ipynb | Handle multiple simultaneous chats with a single instance |
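
As a rough illustration of the underlying pattern, this sketch stores each session as a Redis list of JSON-encoded messages using plain redis-py. The key prefix and TTL are illustrative assumptions; the recipes above use a session manager with semantic similarity rather than this naive full replay:

```python
# Minimal chat-history sketch with plain redis-py: each session is a Redis
# list of JSON-encoded messages replayed into the prompt on the next turn.
# Key prefix and TTL are illustrative assumptions.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def append_message(session_id: str, role: str, content: str) -> None:
    """Push one message onto the session's history and refresh its TTL."""
    key = f"chat:{session_id}"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.expire(key, 3600)  # expire idle sessions after an hour

def get_history(session_id: str) -> list:
    """Return the full message history, oldest first, for the next LLM call."""
    return [json.loads(m) for m in r.lrange(f"chat:{session_id}", 0, -1)]

append_message("abc123", "user", "What is a vector database?")
append_message("abc123", "assistant", "A database that indexes embeddings ...")
print(get_history("abc123"))
```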

### Semantic Cache

An estimated 31% of LLM queries are potentially redundant (source). Redis enables semantic caching to help cut down on LLM costs quickly.

| Recipe | Description |
| --- | --- |
| /semantic-cache/semantic_caching_gemini.ipynb | Build a semantic cache with Redis and Google Gemini |
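
For a sense of the pattern, here is a minimal sketch using redisvl's SemanticCache. The cache name, distance threshold, and example strings are illustrative assumptions; the recipe above pairs this pattern with Google Gemini:

```python
# Minimal semantic-cache sketch with redisvl. Cache name, threshold, and
# the example strings are illustrative assumptions.
from redisvl.extensions.llmcache import SemanticCache

cache = SemanticCache(
    name="llmcache",
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,  # how close a prompt must be to count as a hit
)

# After the first (expensive) LLM call, store the prompt/response pair.
cache.store(prompt="What is the capital of France?", response="Paris")

# A semantically similar prompt is now served from the cache, skipping the LLM.
hits = cache.check(prompt="Tell me the capital city of France")
if hits:
    print(hits[0]["response"])  # -> "Paris"
```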

### Advanced RAG

For further insight into enhancing RAG applications with dense content representations, query re-writing, and other techniques, see the notebook below; a brief query re-writing sketch follows the table.

| Recipe | Description |
| --- | --- |
| /RAG/04_advanced_redisvl.ipynb | Additional tips and techniques to improve RAG quality |
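
As one concrete example of the techniques named above, here is a minimal query re-writing sketch: an LLM rephrases a conversational question into a standalone, retrieval-friendly query before it is embedded. The model name and prompt wording are illustrative assumptions:

```python
# Minimal query re-writing sketch: an LLM turns a conversational question
# into a standalone, retrieval-friendly query before it is embedded.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rewrite_query(raw_query: str) -> str:
    """Rewrite a user question into an explicit, keyword-rich search query."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's question as a standalone, "
                        "keyword-rich search query. Return only the query."},
            {"role": "user", "content": raw_query},
        ],
    )
    return resp.choices[0].message.content.strip()

# The rewritten query is then embedded and sent to Redis as usual.
print(rewrite_query("ok but how do I actually tune that index thing?"))
```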

### Recommendation systems

An exciting example of how Redis can power production-ready systems is our collaboration with NVIDIA to build a state-of-the-art recommendation system.

Within this repository, you'll find three examples of escalating complexity that walk through building such a system.

## Integrations/Tools

## Additional content

## Benchmarks

## Documentation