-
I am planning to refactor the [existing Embeddings class](https://github.com/holoviz/lumen/blob/main/lumen/ai/embeddings.py#L6).
The purpose is to supply LLMs with up-to-date or private data using re…
-
**Is your feature request related to a problem? Please describe.**
Practitioners often split text documents into smaller chunks and embed them separately. However, chunk embeddings created in this wa…
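As a minimal illustration of the splitting step the issue describes, here is a sketch of fixed-size chunking with overlap, so that neighbouring chunks share some context before being embedded separately. The function name and parameters are illustrative, not from the issue:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split `text` into fixed-size character chunks, with `overlap`
    characters shared between consecutive chunks for context."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be passed to the embedding model independently, which is exactly where the loss of cross-chunk context arises.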
-
Hi,
I really like your project as it provides an easy-to-use approach. Since the new Llama 3.1 is multilingual, I have been wondering whether this approach could also be used in that way. As we are on…
-
# OPEA Inference Microservices Integration for LangChain
This RFC proposes the integration of OPEA inference microservices (from GenAIComps) into LangChain [extensible to other frameworks], enabli…
-
Is it possible to use BERT based contextualized word embeddings along with the nmt implementation? I want to take advantage of the pretrained BERT language model so the NMT weights can be leveraged mo…
-
The script was working just an hour ago.
Full error:
Error while parsing the PDF file './papers/A_Survey_on_Contextual_Embeddings.pdf': [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: s…
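The error comes from Python's SSL layer rather than the PDF parser itself. Assuming the script downloads the paper over HTTPS, a sketch of supplying an explicit, verifying SSL context (the commented CA-bundle path is a placeholder only):

```python
import ssl
import urllib.request

# A default context verifies server certificates against the
# platform's trusted CA store.
ctx = ssl.create_default_context()

# If the failure is caused by a stale or missing CA bundle, the
# context can be pointed at an up-to-date one instead, e.g.:
# ctx.load_verify_locations(cafile="/path/to/cacert.pem")

def fetch(url, context=ctx):
    """Fetch `url` over HTTPS with certificate verification enabled."""
    with urllib.request.urlopen(url, context=context) as resp:
        return resp.read()
```

If the certificate chain is genuinely valid, a sudden failure like this often points to an expired intermediate certificate on the server side or an outdated local CA bundle, not to the script.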
-
Currently the `dense_vector` field is a single-valued field. This is a limitation that forces a document to be repeated or split up into multiple documents when it's necessary to have multiple embeddi…
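As the issue notes, a single-valued `dense_vector` field forces the workaround of emitting one document per embedding. A minimal sketch of that fan-out, with illustrative field names (not the actual mapping):

```python
def fan_out(doc_id, text, embeddings):
    """Work around a single-valued dense_vector field by emitting one
    document per embedding, each carrying the parent document's id."""
    return [
        {
            "_id": f"{doc_id}#{i}",   # child id derived from the parent
            "parent_id": doc_id,      # lets hits be grouped back later
            "text": text,
            "embedding": vec,         # exactly one vector per document
        }
        for i, vec in enumerate(embeddings)
    ]
```

The duplication of `text` across the fanned-out documents is precisely the overhead a multi-valued field would remove.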
-
### What features would you like to see added?
The current system requires an optimized Embeddings Database (DB) that incorporates court data to support Retrieval-Augmented Generation (RAG) process…
-
## In a nutshell
A study that achieves SOTA on named entity recognition using a character-based language model. It uses a bi-directional character-based language model: the forward pass (sentence start => end of the word) and the backward pass (sentence end => start of the word) are concatenated to build each word representation. The resulting word representations are fed into a bi-directional CRF to make predictions.
![image](https://user-images…