-
# Problem Description
In some cases, when the LLM tries to answer based on the constructed prompt, it produces a low-quality response because the prompt mixes very different topics. In…
-
![image](https://github.com/FMInference/H2O/assets/13203040/39ff7799-98b8-4482-86f8-3302b5bca087)
1. Both full and h2o correctly produce the LaTeX of the result (as below), but local produces blank output.
![image](https://github.com/FMInference/H2O/assets/1320…
-
**Describe the bug**
I set the environment variables, but judging from the settings, some of them are not being reflected.
**To Reproduce**
ZEP_EMBEDDINGS_MESSAGES_MODEL=all-MiniLM-L6-v2
ZEP_EMBEDDINGS_DOCUMEN…
-
### Describe the issue:
I have no idea what isn't yet installed. Went through the documentation for solving extra dependencies, but found nothing.
Could you point me to what is wrong with `numpy.c…
-
-
Would it be possible to add the `maxTokens?: number;` parameter to the Ollama class as well? Otherwise Ollama doesn't work with ChatHistory.js due to
`if (!this.llm.metadata.maxTokens) {`
…
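The failing check suggests the chat history needs a `maxTokens` value on the LLM's metadata to budget tokens. A minimal sketch of the shape involved — the interface and the fallback strategy below are assumptions for illustration, not LlamaIndex's actual code:

```typescript
// Hypothetical metadata shape: maxTokens is optional, which is what
// trips the `if (!this.llm.metadata.maxTokens)` check in the issue.
interface LLMMetadata {
  model: string;
  contextWindow: number;
  maxTokens?: number; // the parameter the issue asks the Ollama class to expose
}

// Assumed fallback (not the library's real behavior): when the LLM does not
// declare maxTokens, reserve a quarter of the context window for the reply.
function tokenBudget(metadata: LLMMetadata): number {
  if (!metadata.maxTokens) {
    return Math.floor(metadata.contextWindow / 4);
  }
  return metadata.maxTokens;
}
```

With such a fallback in place, a missing `maxTokens` would degrade gracefully instead of making the class unusable with ChatHistory.js.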
-
## NLP Text Summarizer Project
**Problem Statement** - Train and test a machine learning model that produces text summaries, using a dataset from HuggingFace
-
### Feature Description
Add a progress bar for query pipelines or the summarizer when they call the LLM multiple times or use tools.
### Reason
Now for a query pipeline (especially usi…
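One way such a progress bar could be driven is a callback fired after each LLM call completes. A minimal TypeScript sketch under that assumption — `runWithProgress` and its names are hypothetical, not an existing pipeline API:

```typescript
// A pipeline step is any async unit of work, e.g. one LLM call or tool call.
type Step<T> = () => Promise<T>;

// Run steps sequentially, reporting (done, total) after each one so a UI
// can render e.g. "3/10 LLM calls done".
async function runWithProgress<T>(
  steps: Step<T>[],
  onProgress: (done: number, total: number) => void,
): Promise<T[]> {
  const results: T[] = [];
  for (const [i, step] of steps.entries()) {
    results.push(await step());
    onProgress(i + 1, steps.length);
  }
  return results;
}
```

The same callback hook could feed a terminal spinner or a notebook widget; the pipeline itself only needs to know how many steps it will run.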
-
![image](https://github.com/janhq/jan/assets/101145494/0b5624f3-bcf0-4c0a-843b-6f076cc932dc)
Our current Jan shows "New Thread" for every thread, since users don't title threads manually.
We sh…
-
**Describe the bug**
The application errors and crashes during startup.
**To Reproduce**
Trying to upgrade from 0.15.2 to 0.21.0
**Logs**
time="2024-02-02T10:13:08Z" level=warning msg=".env file …