-
Dear,
I'm trying to follow the steps to install the NPU extension from "https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/NPU/HF-Transformers-AutoModels/Model/llama2", but when I run "py…
-
**Problem Description**
When the LLM model is set to Qwen-1.5-72B, launching langchain-chatchat raises an error.
**Steps to Reproduce**
1. Run '...'
2. Click '...'
3. Scroll to '...'
4…
-
Running the example from 254-llm-chatbot produces the following error. Can you reproduce it? Thanks for your help.
```
Selected model mpt-7b-chat
/path/.cache/huggingface/modules/tra…
-
### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
Hi, I'm getting this error when trying to run lm_eval with wandb arguments. I followed the installation instructions, and I can see the argument in *lm-evaluation-harness/lm_eval/__main__.py*.
`usage: l…
-
I plan to use `ollama serve` to host two different models locally. How should I set my .env file and the base URL?
```
MISTRAL_API_KEY=""
OPENAI_API_KEY="ollama"
GROK_API_KEY=""
MODEL_PROVIDER="openai" # Options: […
```
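One way this can work (a sketch under assumptions, since the exact variable names depend on the app's config loader): ollama exposes an OpenAI-compatible API on port 11434, and a single `ollama serve` instance can host multiple models, so both models can share one base URL and the model name selects between them per request:

```shell
# Hypothetical .env sketch: point the OpenAI-compatible provider at a
# local ollama server. ollama's OpenAI-compatible endpoint is served
# at /v1 on port 11434 by default.
OPENAI_API_KEY="ollama"                       # ollama ignores the key, but clients usually require a non-empty value
OPENAI_BASE_URL="http://localhost:11434/v1"   # one base URL for all locally served models
MODEL_PROVIDER="openai"
# The model name (e.g. "llama3" vs. "mistral") is chosen in each request
# body, not in the .env file, so two models do not need two base URLs.
```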
-
This RFC proposes a Hugging Face-compatible yet flexible Weight-Only Quantization (WOQ) format in INC, so that a model quantized by INC can be loaded by IPEX for further inference optimization…
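The general scheme behind a WOQ format can be sketched as follows. This is only an illustration of per-channel weight-only int8 quantization, not the actual INC format; the function names are hypothetical.

```python
# Weight-only quantization (WOQ) sketch: weights are quantized to int8
# per output channel with a float scale, while activations stay in float.

def quantize_per_channel(weights):
    """Quantize each row (output channel) of a 2-D weight matrix to int8.

    Returns (q_weights, scales) such that each original row w
    is approximately reconstructed as q * scale.
    """
    q_weights, scales = [], []
    for row in weights:
        max_abs = max(abs(v) for v in row) or 1.0
        scale = max_abs / 127.0          # map the largest magnitude to 127
        q_weights.append([round(v / scale) for v in row])
        scales.append(scale)
    return q_weights, scales

def dequantize(q_weights, scales):
    """Recover approximate float weights from int8 values and scales."""
    return [[q * s for q in row] for row, s in zip(q_weights, scales)]
```

A loader (e.g. in IPEX) would keep the int8 tensors and scales on disk and dequantize, or compute directly in mixed precision, at inference time.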
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
template = """Use the following pieces of context to answer the question at the end. If the answer can't be determined using only the information in the provided context, simply output "NO ANSWER", ju…
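A template like this is filled with the retrieved context and the user's question before being sent to the model. A minimal sketch using plain Python string formatting (the `{context}` and `{question}` placeholder names are assumptions; frameworks such as LangChain wrap the same idea in a `PromptTemplate` class):

```python
# Minimal sketch: filling a RAG prompt template with retrieved context
# and a user question using str.format.

template = (
    "Use the following pieces of context to answer the question at the end. "
    "If the answer can't be determined using only the information in the "
    'provided context, simply output "NO ANSWER".\n\n'
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

def build_prompt(context, question):
    """Substitute the retrieved context and the question into the template."""
    return template.format(context=context, question=question)
```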
-
```
InvalidRequestError Traceback (most recent call last)
Cell In[43], line 2
1 question = "what did they say about matlab?"
----> 2 compressed_docs = compression_retri…