-
- [ ] Build the KG from the PDFs used for QA creation
- [ ] Test on those PDFs
- [ ] Tune prompts for the entities linked below
- [ ] Test using LLM eval.
[Microsoft GraphRAG repo link](http…
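The KG-building step above could be sketched as follows. This is a minimal, illustrative pipeline only: `extract_triples` is a hypothetical stand-in for the LLM-based entity/relation extraction a GraphRAG-style system would run over chunked PDF text, and the adjacency-list structure is an assumption, not the repo's actual data model.

```python
from collections import defaultdict

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Hypothetical placeholder: a real pipeline would prompt an LLM here
    to pull (subject, relation, object) triples out of the chunk."""
    return [("Acme Corp", "acquired", "Widget Inc")]

def build_kg(chunks: list[str]) -> dict[str, list[tuple[str, str]]]:
    """Fold extracted triples into a simple adjacency list:
    subject -> [(relation, object), ...]."""
    graph: dict[str, list[tuple[str, str]]] = defaultdict(list)
    for chunk in chunks:
        for subj, rel, obj in extract_triples(chunk):
            graph[subj].append((rel, obj))
    return graph

kg = build_kg(["Acme Corp acquired Widget Inc in 2021."])
```

Testing on the QA PDFs would then amount to running retrieval over this graph and scoring answers with the LLM eval mentioned above.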
-
Comply with most PEP 8 rules using an auto-formatter or linter (flake8, black, pylint, ...). These can be orchestrated by tox.
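As a sketch of that orchestration, a dedicated tox environment can run both a formatter check and a linter in one command; the environment name `lint` and the tool choices here are just one reasonable setup, not a prescribed one:

```ini
[tox]
envlist = lint

[testenv:lint]
skip_install = true
deps =
    black
    flake8
commands =
    black --check .
    flake8 .
```

Running `tox -e lint` then fails the build on formatting or style violations.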
-
model_name: llama-2-7b-chat
[load_smoothquant_model] model loaded ...
modules.json: 100%|███████████████████████████████████████████████████████████████████████████| 349/349 [00:00
-
This issue tracks various action items we would like to complete with regard to the function calling and embeddings features.
### Function calling (beta)
We are calling it beta because multiple …
-
### Project Name
Intelligent Legal Document Assistant for Law Firms
### Description
Problem:
Law firms deal with vast amounts of legal documents, case files, regulations, and precedents. Lawyers a…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
Retrieval augmented generation (RAG) is a technique to enrich LLMs with an app's or organization's own data. It has become very popular as it lowers the barrier to enriching input in LLM apps, allows for b…
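The retrieve-then-generate loop at the heart of RAG can be sketched as below. This is a toy illustration, not a production stack: the bag-of-words cosine retriever stands in for a real embedding model and vector store, and the corpus is invented for the example.

```python
import math
from collections import Counter

# Toy corpus standing in for an app's own documents (illustrative only).
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector; a real RAG stack would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Enrich the LLM input with retrieved context before generation."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How long do refunds take?"))
```

The resulting prompt would then be sent to the LLM, which grounds its answer in the retrieved context rather than in its parametric knowledge alone.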
-
- [x] The first time user opens the app, drop a hint to create a new use case or iteration via **Use Cases+** CTA.
- [x] For a new iteration (blank state), drop a hint to start adding nodes via **Add …
-
### Problem & Motivation
There is a huge wave of interest around high accuracy Q&A, such as via Retrieval Augmented Generation (RAG). RAG accuracy is largely driven by how well vector search is abl…
-
When using the code provided in the documentation - [langchain docs](https://python.langchain.com/v0.2/docs/integrations/text_embedding/nvidia_ai_endpoints/#rag-retrieval), the expected response does …