QAnything (Question and Answer based on Anything) is a local knowledge base question-answering system designed to support a wide range of file formats and databases, allowing for offline installation and use.

With QAnything, you can simply drop in any locally stored file in a supported format and receive accurate, fast, and reliable answers.
Currently supported formats include: PDF (pdf), Word (docx), PPT (pptx), XLS (xlsx), Markdown (md), Email (eml), TXT (txt), Image (jpg, jpeg, png), CSV (csv), and Web links (html), with more formats coming soon.
In scenarios with a large volume of knowledge base data, the advantages of a two-stage approach are very clear. If only first-stage embedding retrieval is used, retrieval quality degrades as the data volume increases, as indicated by the green line in the graph below. After second-stage reranking, however, accuracy increases steadily: the more data, the better the performance.
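The two-stage idea above can be sketched in a few lines. This is a toy illustration, not QAnything's actual pipeline: the vectors are made up and the reranker is a simple term-overlap score standing in for a real cross-encoder such as bce-reranker-base_v1.

```python
from math import sqrt

def cosine(a, b):
    # Plain cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def two_stage_search(query_vec, query_text, corpus, recall_k=3, final_k=1):
    # Stage 1: embedding recall -- cheap vector similarity over the whole corpus.
    recalled = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]),
                      reverse=True)[:recall_k]

    # Stage 2: rerank -- a finer-grained scoring of the query against each
    # candidate (toy term overlap here; a real system uses a cross-encoder).
    def rerank_score(doc):
        q = set(query_text.lower().split())
        return len(q & set(doc["text"].lower().split())) / len(q)

    return sorted(recalled, key=rerank_score, reverse=True)[:final_k]

corpus = [
    {"id": 1, "vec": [1.0, 0.0], "text": "pandas dataframe merge"},
    {"id": 2, "vec": [0.9, 0.1], "text": "how to install docker on linux"},
    {"id": 3, "vec": [0.0, 1.0], "text": "cooking recipes"},
]
top = two_stage_search([1.0, 0.05], "install docker linux", corpus, recall_k=2)
```

Note how the reranker corrects stage 1: document 1 is the closest by (made-up) embedding distance, but document 2 actually answers the query and wins after reranking.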
QAnything uses the retrieval component BCEmbedding, which is distinguished by its bilingual and cross-lingual proficiency. BCEmbedding excels at bridging the gap between Chinese and English, achieving the following results on semantic representation evaluations:
Model | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | Avg |
---|---|---|---|---|---|---|---|
bge-base-en-v1.5 | 37.14 | 55.06 | 75.45 | 59.73 | 43.05 | 37.74 | 47.20 |
bge-base-zh-v1.5 | 47.60 | 63.72 | 77.40 | 63.38 | 54.85 | 32.56 | 53.60 |
bge-large-en-v1.5 | 37.15 | 54.09 | 75.00 | 59.24 | 42.68 | 37.32 | 46.82 |
bge-large-zh-v1.5 | 47.54 | 64.73 | 79.14 | 64.19 | 55.88 | 33.26 | 54.21 |
jina-embeddings-v2-base-en | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
m3e-base | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
m3e-large | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
bce-embedding-base_v1 | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
Model | Reranking | Avg |
---|---|---|
bge-reranker-base | 57.78 | 57.78 |
bge-reranker-large | 59.69 | 59.69 |
bce-reranker-base_v1 | 60.06 | 60.06 |
NOTE:

- In the "WithoutReranker" setting, our bce-embedding-base_v1 outperforms all the other embedding models.
- bce-reranker-base_v1 achieves the best reranking performance.
- The combination of bce-embedding-base_v1 and bce-reranker-base_v1 is SOTA.

The open-source version of QAnything is based on QwenLM and has been fine-tuned on a large number of professional question-answering datasets, which greatly enhances its question-answering ability. If you need to use it for commercial purposes, please follow the QwenLM license. For more details, please refer to: QwenLM
Star us on GitHub to be instantly notified of new releases!
We provide two versions: a Python version and a Docker version. The Python version is suitable for quickly trying out new features, while the Docker version is suitable for secondary development and use in actual production environments; new features are temporarily not supported in it.
The features corresponding to different installation methods are as follows:
Features | Python version | Docker version | Explanation |
---|---|---|---|
Detailed installation document | ✅ Details | ✅ Details | |
Support API | ✅ Details | ✅ Details | |
Support production environment | ❌ | ✅ | |
Support offline installation (private deployment) | ❌ | ✅ Details | |
Support multiple concurrency | ❌ | ✅ Details | When using an API instead of a local large model in the Python version, concurrency can be configured manually. Details |
Support multi-card inference | ❌ | ✅ Details | |
Support Mac (M series chips) | ✅ | ❌ | Running the local LLM on Mac currently relies on llama.cpp, and question-answering is slow. It is recommended to use the OpenAI API to call a model service instead. |
Support Linux | ✅ | ✅ | Python version defaults to onnxruntime-gpu on Linux, automatically switching to onnxruntime when glibc<2.28. |
Support windows WSL | ✅ | ✅ | |
Support CPU only | ✅ Details | ❌ | |
Support hybrid search (BM25+embedding) | ❌ | ✅ | |
Support web search (need VPN) | ✅ Details | ❌ | Planned for the Docker version. |
Support FAQ | ✅ Details | ❌ | Planned for the Docker version. |
Support BOT | ✅ Details | ❌ | Planned for the Docker version. |
Support Traceability | ✅ Details | ❌ | Planned for the Docker version. |
Support log retrieval by API | ✅ Details | ❌ | Planned for the Docker version. |
Support audio files | ✅ | ❌ | Planned for the Docker version; file upload will support mp3 and wav formats. |
Support OpenCloudOS | ✅ Details | ❌ | |
Support OpenAI-API-compatible interfaces (including Ollama) | ✅ Details | ✅ Details | The api_key, base_url, model, and other parameters need to be set manually. |
PDF parsing performance improvement (including tables) | ✅ Details | ❌ | |
User-defined configuration (Experimental: Improve speed) | ✅ Details | ❌ | |
Improvement in parsing performance of other file types | ❌ | ❌ | The next version is expected to be released in 15 days. |
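The table mentions that OpenAI-API-compatible backends need api_key, base_url, and model set manually. The sketch below shows how those three settings map onto a standard chat-completions request; the endpoint path and payload shape follow the OpenAI API convention that Ollama and similar services expose, and the model name `qwen:7b` is just an example value.

```python
import json

def build_chat_request(base_url, api_key, model, question):
    # The /chat/completions path and the messages payload follow the
    # OpenAI chat-completions convention used by OpenAI-compatible servers.
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    })
    return url, headers, body

# Example values for a local Ollama service (model name is hypothetical):
url, headers, body = build_chat_request(
    "http://localhost:11434/v1", "ollama", "qwen:7b", "What is QAnything?"
)
```

Any HTTP client can then POST `body` to `url` with `headers`; only the three configuration values change between backends.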
If you prefer not to use Docker, we provide a Pure Python Installation Guide. The pure Python environment is intended for demo purposes only and is not recommended for production deployment.
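The feature table above notes that the Python version on Linux automatically falls back from onnxruntime-gpu to onnxruntime when glibc is older than 2.28. A quick way to check your glibc version from Python is sketched below; `glibc_at_least` is a hypothetical helper, not part of QAnything.

```python
import platform

def glibc_at_least(required="2.28"):
    # platform.libc_ver() reports e.g. ("glibc", "2.31") on most Linux systems.
    libc, version = platform.libc_ver()
    if libc != "glibc" or not version:
        return None  # not glibc, or not detectable (e.g. macOS, Windows)
    # Compare numerically, not lexically: "2.28" is newer than "2.9".
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(version) >= as_tuple(required)
```

The tuple comparison matters: a naive string comparison would rank "2.9" above "2.28".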
System | Required item | Minimum Requirement | Note |
---|---|---|---|
Linux | NVIDIA GPU Memory | >= 4GB (use OpenAI API) | Minimum: GTX 1050Ti (use OpenAI API); Recommended: RTX 3090 |
| | NVIDIA Driver Version | >= 525.105.17 | |
| | Docker version | >= 20.10.5 | Docker install |
| | docker compose version | >= 2.23.3 | docker compose install |
| | git-lfs | | git-lfs install |
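To verify the driver requirement above (>= 525.105.17), you can parse the header that `nvidia-smi` prints. The snippet below is a hypothetical helper that checks a captured `nvidia-smi` output string against a required version tuple.

```python
import re

def driver_version_ok(nvidia_smi_output, required=(525, 105, 17)):
    # nvidia-smi's header line contains e.g. "Driver Version: 535.129.03".
    m = re.search(r"Driver Version:\s*([\d.]+)", nvidia_smi_output)
    if not m:
        return None  # no NVIDIA driver information found
    version = tuple(int(p) for p in m.group(1).split(".") if p)
    return version >= required
```

In practice you would feed it `subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout`.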
System | Required item | Minimum Requirement | Note |
---|---|---|---|
Windows with WSL Ubuntu Subsystem | NVIDIA GPU Memory | >= 4GB (use OpenAI API) | Minimum: GTX 1050Ti (use OpenAI API); Recommended: RTX 3090 |
| | GEFORCE EXPERIENCE | >= 546.33 | GEFORCE EXPERIENCE download |
| | Docker Desktop | >= 4.26.1 (131620) | Docker Desktop for Windows |
| | git-lfs | | git-lfs install |
```shell
git clone https://github.com/netease-youdao/QAnything.git
cd QAnything
bash ./run.sh -h   # View all startup options.
bash run.sh        # Start on GPU 0 by default.
```
After successful installation, you can experience the application by entering the following addresses in your web browser.
If you want to visit API, please refer to the following address:
If you want to view the relevant logs, please check the log files in the QAnything/logs/debug_logs directory.
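When debugging, the most recently modified file in that directory is usually the one you want. A small hypothetical helper (not part of QAnything) for listing the freshest log files:

```python
from pathlib import Path

def newest_logs(log_dir, n=3):
    # Return the n most recently modified files in log_dir,
    # e.g. newest_logs("QAnything/logs/debug_logs").
    files = [p for p in Path(log_dir).iterdir() if p.is_file()]
    return sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)[:n]
```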
To shut down the service, run the following (on Windows 11, enter the WSL environment first):

```shell
bash close.sh
```
If you want to install QAnything offline, you can start the service using the following commands. The first block uses the Windows image (freeren/qanything-win); the second uses the Linux image (freeren/qanything).
```shell
# Download the docker images on a networked machine
docker pull quay.io/coreos/etcd:v3.5.5
docker pull minio/minio:RELEASE.2023-03-20T20-16-18Z
docker pull milvusdb/milvus:v2.3.4
docker pull mysql:latest
docker pull freeren/qanything-win:v1.2.x  # Get the latest version number from https://github.com/netease-youdao/QAnything/blob/master/docker-compose-windows.yaml#L103

# Pack the images (replace v1.2.1 with the version you actually pulled)
docker save quay.io/coreos/etcd:v3.5.5 minio/minio:RELEASE.2023-03-20T20-16-18Z milvusdb/milvus:v2.3.4 mysql:latest freeren/qanything-win:v1.2.1 -o qanything_offline.tar

# Download the QAnything code
wget https://github.com/netease-youdao/QAnything/archive/refs/heads/master.zip -O QAnything-master.zip

# Copy the image archive qanything_offline.tar and the code QAnything-master.zip to the offline machine
cp QAnything-master.zip qanything_offline.tar /path/to/your/offline/machine

# Load the images on the offline machine
docker load -i qanything_offline.tar

# Unzip the code and run it
unzip QAnything-master.zip
cd QAnything-master
bash run.sh
```
```shell
# Download the docker images on a networked machine
docker pull quay.io/coreos/etcd:v3.5.5
docker pull minio/minio:RELEASE.2023-03-20T20-16-18Z
docker pull milvusdb/milvus:v2.3.4
docker pull mysql:latest
docker pull freeren/qanything:v1.2.x  # Get the latest version number from https://github.com/netease-youdao/qanything/blob/master/docker-compose-linux.yaml#L104

# Pack the images (replace v1.2.1 with the version you actually pulled)
docker save quay.io/coreos/etcd:v3.5.5 minio/minio:RELEASE.2023-03-20T20-16-18Z milvusdb/milvus:v2.3.4 mysql:latest freeren/qanything:v1.2.1 -o qanything_offline.tar

# Download the QAnything code
wget https://github.com/netease-youdao/QAnything/archive/refs/heads/master.zip -O QAnything-master.zip

# Copy the image archive qanything_offline.tar and the code QAnything-master.zip to the offline machine
cp QAnything-master.zip qanything_offline.tar /path/to/your/offline/machine

# Load the images on the offline machine
docker load -i qanything_offline.tar

# Unzip the code and run it
unzip QAnything-master.zip
cd QAnything-master
bash run.sh
```
If you need to access the API, please refer to the QAnything API documentation.
We appreciate your interest in contributing to our project. Whether you're fixing a bug, improving an existing feature, or adding something completely new, your contributions are welcome!
🔎 To learn about QAnything's future plans and progress, please see here: QAnything Roadmap
🤬 To provide feedback to QAnything, please see here: QAnything Feedback
Welcome to the QAnything Discord community.

Welcome to follow the QAnything WeChat Official Account for the latest updates.

Welcome to scan the QR code to join the QAnything discussion group.
If you need to contact our team privately, please reach out to us via the following email:
qanything@rd.netease.com
QAnything is licensed under the Apache 2.0 License.

QAnything adopts dependencies from the following: