HKUDS / LightRAG

"LightRAG: Simple and Fast Retrieval-Augmented Generation"
https://arxiv.org/abs/2410.05779
MIT License
7.3k stars · 816 forks

How to speed up insert process? #212

Open fahadh4ilyas opened 1 day ago

fahadh4ilyas commented 1 day ago

The insert process is quite slow even for a small document. I tried changing the llm_model_max_async value, but the speed never changes. I also noticed that the insert process only uses a single core of my CPU. Is there any way to speed up the process, maybe by using multiple threads or processes?
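For reference, llm_model_max_async is one of the LightRAG constructor arguments; a minimal Ollama-backed sketch (the working directory, model tag, and host below are placeholders, not a recommendation) looks like this:

```python
from lightrag import LightRAG
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

# llm_model_max_async caps how many entity/relation-extraction LLM calls run concurrently.
# Note: during insert most wall-clock time is typically spent waiting on LLM/embedding calls,
# so a single busy CPU core is expected; raising this only helps if the backend can actually
# serve requests in parallel.
rag = LightRAG(
    working_dir="./rag_storage",          # placeholder path
    llm_model_func=ollama_model_complete,
    llm_model_name="qwen2.5",             # placeholder model tag
    llm_model_max_async=8,
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts, embed_model="nomic-embed-text", host="http://localhost:11434"
        ),
    ),
)
```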

JavieHush commented 1 day ago

Try using a GPU instead; the speed will improve a lot. The insert process of LightRAG is much faster than that of GraphRAG, based on my actual testing.

abylikhsanov commented 21 hours ago

@JavieHush Can you elaborate on that more?

Jaykumaran commented 21 hours ago

@JavieHush

Facing the same issue. Could you describe how to achieve this?

JavieHush commented 19 hours ago

Guys :) I'm not quite sure about the situation you've encountered. My detailed situation is as follows.

Suggestions

The insert process is heavily dependent on the LLM and embedding model: it uses the LLM to extract entities and relations, and the embedding model to index the chunks, which requires a significant amount of compute. A few suggestions:

- If you run the models locally, a GPU-accelerated model is recommended; CPU-only inference will be much slower.
- A model with fewer parameters usually processes faster, but may also perform worse, so you have to strike a balance (see the sketch below).
- I also noticed that using an external graph DB and vector DB can accelerate the insert process (and the query process as well). We're currently working on integrating all of these.
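For the model-size trade-off, the only change on the LightRAG side is the model tag handed to Ollama. A rough sketch (the tags below are illustrative; pick ones you have actually pulled):

```python
from lightrag import LightRAG
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc


def build_rag(model_tag: str, working_dir: str) -> LightRAG:
    """Build a LightRAG instance backed by a local Ollama model."""
    return LightRAG(
        working_dir=working_dir,
        llm_model_func=ollama_model_complete,
        llm_model_name=model_tag,
        embedding_func=EmbeddingFunc(
            embedding_dim=768,
            max_token_size=8192,
            func=lambda texts: ollama_embedding(
                texts, embed_model="nomic-embed-text", host="http://localhost:11434"
            ),
        ),
    )


# Smaller tag: faster insert, but entity/relation extraction may be less reliable.
rag_fast = build_rag("qwen2.5:3b", "./rag_small")       # illustrative tag
# Larger tag: slower insert, usually better extraction quality.
rag_accurate = build_rag("qwen2.5:14b", "./rag_large")  # illustrative tag
```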

About my situation

We use the Ollama local service to power the framework, on a workstation with 8 × Tesla P100 GPUs.

Evaluation

We used a made-up fairy tale (2k tokens, generated by GPT-4o, so no LLM has seen this story before) to test LightRAG and GraphRAG. The insert process took 2-3 minutes with LightRAG, while GraphRAG took more than 15 minutes.

abylikhsanov commented 19 hours ago

@JavieHush That is why I got confused: in my situation I am not running the LLM locally but using APIs, so I wondered what you meant by using a GPU.

JavieHush commented 19 hours ago

> @JavieHush That is why I got confused: in my situation I am not running the LLM locally but using APIs, so I wondered what you meant by using a GPU.

btw, how long did it take you to finish the insert process? It should be much faster using an API than a local model service 🤔
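If you want to put a number on it, a minimal sketch is to time the insert call with the standard library (assuming a rag instance built as elsewhere in this thread):

```python
import time

from lightrag import LightRAG


def timed_insert(rag: LightRAG, text: str) -> float:
    """Run rag.insert(text) and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    rag.insert(text)
    return time.perf_counter() - start


# Usage, with `rag` and `pdf_text` set up as in the example later in this thread:
# elapsed = timed_insert(rag, pdf_text)
# print(f"insert took {elapsed / 60:.1f} min")
```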

abylikhsanov commented 18 hours ago

@JavieHush I used a different document which ended up with 3k entities. It consumed 6.1 million GPT-4o mini tokens and around 1 million embedding tokens (which is very cheap), so around $1 in total.
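As a back-of-the-envelope check of that figure (just a sketch: the per-token prices below are assumed list prices for GPT-4o mini and a small OpenAI embedding model, and the input/output token split is not known):

```python
# All prices are assumptions - check current OpenAI pricing before relying on this.
llm_tokens = 6_100_000        # GPT-4o mini tokens reported above
embedding_tokens = 1_000_000  # embedding tokens reported above

price_llm_input_per_m = 0.15   # USD per 1M input tokens (assumed GPT-4o mini list price)
price_embedding_per_m = 0.02   # USD per 1M tokens (assumed, e.g. text-embedding-3-small)

# Treating all LLM tokens as input-priced gives a lower-bound estimate;
# output tokens cost more, so the real bill lands a bit higher.
llm_cost = llm_tokens / 1_000_000 * price_llm_input_per_m
embedding_cost = embedding_tokens / 1_000_000 * price_embedding_per_m

print(f"LLM: ${llm_cost:.2f}  embeddings: ${embedding_cost:.2f}  total: ${llm_cost + embedding_cost:.2f}")
# Comes out just under $1, consistent with the "around $1" reported above.
```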

Jaykumaran commented 14 hours ago

@JavieHush I'm running locally with Ollama; can you explain how to make use of the GPU while indexing?

```python
import os
import logging

import pdfplumber

from lightrag import LightRAG, QueryParam
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

# Ollama service is running with Environment="OLLAMA_KEEP_ALIVE=-1"

WORKING_DIR = "./mydir"

logging.basicConfig(format="%(levelname)s:%(message)s", level=logging.INFO)

if not os.path.exists(WORKING_DIR):
    os.mkdir(WORKING_DIR)

rag = LightRAG(
    working_dir=WORKING_DIR,
    chunk_token_size=1200,  # 1200 based on resources
    llm_model_func=ollama_model_complete,
    llm_model_name="qwen2.5",
    llm_model_max_async=4,  # reduce to 4 or 8 depending on cpu and mem resources
    llm_model_max_token_size=32768,
    llm_model_kwargs={"host": "http://localhost:11434", "options": {"num_ctx": 32768}},
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts, embed_model="nomic-embed-text", host="http://localhost:11434"
        ),
    ),
)

pdf_path = "../CompaniesAct2013.pdf"

# Extract the raw text page by page (extract_text() can return None for empty pages).
pdf_text = ""
with pdfplumber.open(pdf_path) as pdf:
    for page in pdf.pages:
        pdf_text += (page.extract_text() or "") + "\n"

rag.insert(pdf_text)

print(rag.query("What are the top themes in this story?", param=QueryParam(mode="naive")))
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="global")))
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="global")))
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="hybrid")))
```

JavieHush commented 3 hours ago

> @JavieHush I'm running locally with Ollama; can you explain how to make use of the GPU while indexing?

First of all, you must make sure your GPU supports accelerated model inference. Are you using an NVIDIA card, or something else?

GPU acceleration should be configured in the Ollama settings.

Please refer to: Run ollama with docker-compose and using gpu
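Before tuning anything, it is worth confirming that Ollama is actually serving the model from the GPU. A quick check (just a sketch: it shells out to nvidia-smi and the ollama CLI, so it assumes an NVIDIA card and a recent Ollama where `ollama ps` reports a PROCESSOR column):

```python
import shutil
import subprocess


def check_ollama_gpu() -> None:
    """Print GPU visibility and which processor Ollama reports for loaded models."""
    if shutil.which("nvidia-smi"):
        # Shows GPU utilisation/memory; the ollama runner should show up here while a model is loaded.
        subprocess.run(["nvidia-smi"], check=False)
    else:
        print("nvidia-smi not found - no NVIDIA GPU/driver visible on this host")

    if shutil.which("ollama"):
        # Lists loaded models; the PROCESSOR column should read e.g. "100% GPU" rather than "100% CPU".
        subprocess.run(["ollama", "ps"], check=False)
    else:
        print("ollama CLI not found on PATH")


if __name__ == "__main__":
    check_ollama_gpu()
```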