SemanticFinder

Frontend-only live semantic search and chat-with-your-documents built on transformers.js. Supports Wasm and WebGPU!

Try the web app, install the Chrome extension or read the introduction blog post.

🔥 For best performance try the WebGPU Version here! 🔥

Semantic search right in your browser! It computes the embeddings and cosine similarities entirely client-side, with no server-side inference, using transformers.js and the latest state-of-the-art embedding models from Hugging Face.
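To make the core idea concrete, here is a minimal sketch (not SemanticFinder's actual source) of client-side embedding and scoring with transformers.js; the model name, example strings and helper names are illustrative:

```js
import { pipeline } from '@xenova/transformers';

// Load a feature-extraction pipeline once; the model is fetched from the
// Hugging Face Hub and cached by the browser.
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

// Mean-pooled, L2-normalized sentence embedding as a Float32Array.
const embed = async (text) =>
  (await extractor(text, { pooling: 'mean', normalize: true })).data;

// For normalized vectors the dot product equals the cosine similarity.
const cosineSimilarity = (a, b) => {
  let dot = 0;
  for (let i = 0; i < a.length; i++) dot += a[i] * b[i];
  return dot;
};

const queryEmbedding = await embed('food');
const segmentEmbedding = await embed('We had a wonderful dinner yesterday.');
console.log(cosineSimilarity(queryEmbedding, segmentEmbedding)); // higher = more related
```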

Models

All transformers.js-compatible feature-extraction models are supported; here is a sortable, daily updated list you can browse. You can download the table of compatible models as xlsx, csv, json, parquet, or html here: https://github.com/do-me/trending-huggingface-models/. Note that the wasm backend in transformers.js supports all of these models; for the best performance, make sure to use a WebGPU-compatible model.
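If you are on transformers.js v3 (the @huggingface/transformers package), the backend and quantization level can be chosen when the model is loaded; a sketch, with the model name and dtype as illustrative values:

```js
import { pipeline } from '@huggingface/transformers'; // transformers.js v3

// Choose the backend and weight precision at load time.
const extractor = await pipeline('feature-extraction', 'Xenova/multilingual-e5-small', {
  device: 'webgpu', // use 'wasm' where WebGPU is unavailable
  dtype: 'q8',      // quantized weights; 'fp32' for full precision
});
```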

Catalogue

You can use super-fast pre-indexed examples for really large books, like the Bible or Les Misérables with hundreds of pages, and search their content in less than 2 seconds 🚀. Try one of these and see for yourself:

| filesize (MB) | textTitle | textAuthor | textYear | textLanguage | URL | modelName | quantized | splitParam | splitType | characters | chunks | wordsToAvoidAll | wordsToCheckAll | wordsToAvoidAny | wordsToCheckAny | exportDecimals | lines | textNotes | textSourceURL | filename |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 4.78 | Das Kapital | Karl Marx | 1867 | de | https://do-me.github.io/SemanticFinder/?hf=Das_Kapital_c1a84fba | Xenova/multilingual-e5-small | True | 80 | Words | 2003807 | 3164 | | | | | 5 | 28673 | | https://ia601605.us.archive.org/13/items/KarlMarxDasKapitalpdf/KAPITAL1.pdf | Das_Kapital_c1a84fba.json.gz |
| 2.58 | Divina Commedia | Dante | 1321 | it | https://do-me.github.io/SemanticFinder/?hf=Divina_Commedia_d5a0fa67 | Xenova/multilingual-e5-base | True | 50 | Words | 383782 | 1179 | | | | | 5 | 6225 | | http://www.letteratura-italiana.com/pdf/divina%20commedia/08%20Inferno%20in%20versione%20italiana.pdf | Divina_Commedia_d5a0fa67.json.gz |
| 11.92 | Don Quijote | Miguel de Cervantes | 1605 | es | https://do-me.github.io/SemanticFinder/?hf=Don_Quijote_14a0b44 | Xenova/multilingual-e5-base | True | 25 | Words | 1047150 | 7186 | | | | | 4 | 12005 | | https://parnaseo.uv.es/lemir/revista/revista19/textos/quijote_1.pdf | Don_Quijote_14a0b44.json.gz |
| 0.06 | Hansel and Gretel | Brothers Grimm | 1812 | en | https://do-me.github.io/SemanticFinder/?hf=Hansel_and_Gretel_4de079eb | TaylorAI/gte-tiny | True | 100 | Chars | 5304 | 55 | | | | | 5 | 9 | | https://www.grimmstories.com/en/grimm_fairy-tales/hansel_and_gretel | Hansel_and_Gretel_4de079eb.json.gz |
| 1.74 | IPCC Report 2023 | IPCC | 2023 | en | https://do-me.github.io/SemanticFinder/?hf=IPCC_Report_2023_2b260928 | Supabase/bge-small-en | True | 200 | Chars | 307811 | 1566 | | | | | 5 | 3230 | state of knowledge of climate change | https://report.ipcc.ch/ar6syr/pdf/IPCC_AR6_SYR_LongerReport.pdf | IPCC_Report_2023_2b260928.json.gz |
| 25.56 | King James Bible | | | en | https://do-me.github.io/SemanticFinder/?hf=King_James_Bible_24f6dc4c | TaylorAI/gte-tiny | True | 200 | Chars | 4556163 | 23056 | | | | | 5 | 80496 | | https://www.holybooks.com/wp-content/uploads/2010/05/The-Holy-Bible-King-James-Version.pdf | King_James_Bible_24f6dc4c.json.gz |
| 11.45 | King James Bible | | | en | https://do-me.github.io/SemanticFinder/?hf=King_James_Bible_6434a78d | TaylorAI/gte-tiny | True | 200 | Chars | 4556163 | 23056 | | | | | 2 | 80496 | | https://www.holybooks.com/wp-content/uploads/2010/05/The-Holy-Bible-King-James-Version.pdf | King_James_Bible_6434a78d.json.gz |
| 39.32 | Les Misérables | Victor Hugo | 1862 | fr | https://do-me.github.io/SemanticFinder/?hf=Les_Misérables_2239df51 | Xenova/multilingual-e5-base | True | 25 | Words | 3236941 | 19463 | | | | | 5 | 74491 | All five acts included | https://beq.ebooksgratuits.com/vents/Hugo-miserables-1.pdf | Les_Misérables_2239df51.json.gz |
| 0.46 | REGULATION (EU) 2023/138 | European Commission | 2022 | en | https://do-me.github.io/SemanticFinder/?hf=REGULATION_(EU)_2023_138_c00e7ff6 | Supabase/bge-small-en | True | 25 | Words | 76809 | 424 | | | | | 5 | 1323 | | https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32023R0138&qid=1704492501351 | REGULATION_(EU)_2023_138_c00e7ff6.json.gz |
| 0.07 | Universal Declaration of Human Rights | United Nations | 1948 | en | https://do-me.github.io/SemanticFinder/?hf=Universal_Declaration_of_Human_Rights_0a7da79a | TaylorAI/gte-tiny | True | \nArticle | Regex | 8623 | 63 | | | | | 5 | 109 | 30 articles | https://www.un.org/en/about-us/universal-declaration-of-human-rights | Universal_Declaration_of_Human_Rights_0a7da79a.json.gz |

Import & Export

You can create indices yourself with one or two clicks and save them. If it's something private, keep it to yourself; if it's a classic book or something you think others might be interested in, consider a PR on the Hugging Face repo or get in touch with us. Book requests are happily met if you provide a good source link we can copy the text from. Simply open an issue here with [Book Request] in the title, or contact us.

It goes without saying that no discriminatory content will be tolerated.

Installation

Clone the repository and install dependencies with

npm install

Then run with

npm run start

If you want to build instead, run

npm run build

Afterwards, you'll find index.html, main.css and bundle.js in the dist folder.

Browser extension

Download the Chrome extension from the Chrome Web Store and pin it. Right-click the extension icon for options.

Local build

If you want to build the browser extension locally, clone the repo and cd into the extension directory, then run:

npm install

npm run build

Speed

Tested on the entire book of Moby-Dick: 660,000 characters, ~13,000 lines, or ~111,000 words. Initial embedding generation takes 1-2 minutes on my old i7-8550U CPU with a segment size of 1,000 characters. Subsequent queries take only ~2 seconds! If you want to query much larger texts, or keep an entire library of books indexed, use a proper vector database instead.

Features

You can customize everything!

Usage ideas

Future ideas

Logic

Transformers.js does all the heavy lifting of tokenizing the input and running the model. Without it, this demo would have been impossible.
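Roughly, this is what the pipeline wrapper does under the hood; a sketch with an illustrative model name:

```js
import { AutoTokenizer, AutoModel } from '@xenova/transformers';

// Tokenize the input...
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/all-MiniLM-L6-v2');
const inputs = await tokenizer('Semantic search right in your browser');

// ...and run the model on it.
const model = await AutoModel.from_pretrained('Xenova/all-MiniLM-L6-v2');
const { last_hidden_state } = await model(inputs);
// last_hidden_state has shape [batch, tokens, hidden]; pooling over the
// token axis yields one embedding vector per input.
```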

Input

Output

Pipeline

  1. All scripts are loaded. The model is loaded from Hugging Face once, then cached in the browser.
  2. A user inputs some text and a search term or phrase.
  3. Depending on the approximate length to consider (unit = characters), the text is split into segments. Words themselves are never split, which is why the length is approximate.
  4. The embedding for the search term is created.
  5. An embedding is created for each segment of the text.
  6. Meanwhile, the cosine similarity between every segment embedding and the search term embedding is calculated and written to a dictionary with the segment as key and the score as value.
  7. On every iteration, the progress bar and the highlighted sections are updated in real time based on the highest scores so far.
  8. The embeddings are cached in the dictionary, so subsequent queries are quite fast; calculating the cosine similarity is speedy compared to generating the embeddings.
  9. The embeddings must be recalculated only if the user changes the segment length. A condensed sketch of these steps follows below.
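A condensed sketch of steps 3-9 (illustrative, not the app's actual code), reusing the embed and cosineSimilarity helpers from the sketch further above:

```js
const embeddingCache = {}; // segment -> embedding (step 8)

// Step 3: pack whitespace-separated words into segments of roughly
// `approxChars` characters, never splitting a word.
function splitIntoSegments(text, approxChars = 1000) {
  const segments = [];
  let current = '';
  for (const word of text.split(/\s+/)) {
    if (current && current.length + word.length > approxChars) {
      segments.push(current.trim());
      current = '';
    }
    current += word + ' ';
  }
  if (current.trim()) segments.push(current.trim());
  return segments;
}

async function search(text, query) {
  const queryEmbedding = await embed(query); // step 4
  const scores = {};                         // segment -> score (step 6)
  for (const segment of splitIntoSegments(text)) {
    embeddingCache[segment] ??= await embed(segment); // steps 5 & 8
    scores[segment] = cosineSimilarity(queryEmbedding, embeddingCache[segment]);
    // step 7: update the progress bar and highlights here as scores stream in
  }
  return Object.entries(scores).sort((a, b) => b[1] - a[1]); // best matches first
}
```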

Collaboration

PRs welcome!

To Dos (no prioritization)

Star History

Star History Chart

Gource Map


Gource image created with:

gource -1280x720 --title "SemanticFinder" --seconds-per-day 0.03 --auto-skip-seconds 0.03 --bloom-intensity 0.5 --max-user-speed 500 --highlight-dirs --multi-sampling --highlight-colour 00FF00