-
First of all, thank you for the tutorial.
I ran into the same problem as in the issue, and I saw that the suggested fix is to change the address to http://127.0.0.1:8080/v1.
I'd like to ask: is it simply a matter of changing it like that?
I'm still getting an error, which is why I'm asking here.
Thanks again for the tutorial; it's very easy to follow!
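For what it's worth, a minimal sketch of what "changing it like that" usually means: pointing an OpenAI-compatible client at the local server. The client library, model name, and placeholder API key here are assumptions, not something from the original tutorial.
```
from openai import OpenAI

# Assumed setup: a local OpenAI-compatible server (e.g. llama.cpp's
# llama-server) listening on port 8080. Note that the /v1 path must be part
# of base_url; for local servers the api_key is usually an arbitrary string.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # hypothetical model name; depends on your server
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
If a request shaped like this still fails, the usual suspects are the server not actually listening on port 8080 or the /v1 prefix missing from the URL.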
-
In the llm-rag-chatbot demo, since the pay-per-token foundation models are not available in my region, I had to create my own embedding endpoint. With some minor code changes I was able to save the ch…
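In case it helps others in the same situation, here is a minimal sketch of a self-hosted embedding endpoint. FastAPI, sentence-transformers, the model name, and the /embed route are all illustrative assumptions; the actual changes to the demo are elided above.
```
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
# Hypothetical embedding model; swap in whatever is available in your region
model = SentenceTransformer("all-MiniLM-L6-v2")

class EmbedRequest(BaseModel):
    texts: list[str]

@app.post("/embed")
def embed(req: EmbedRequest):
    # Encode the texts and return plain lists so the payload is JSON-serializable
    vectors = model.encode(req.texts).tolist()
    return {"embeddings": vectors}
```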
-
References
1. [Prompt Engineering](https://github.com/cccbook/py2gpt/wiki/prompt)
2. https://www.langchain.com/
3. https://console.groq.com/docs/quickstart
4. [02b-LLM提示工程](https://github.com/ccc112b/py2cs/tree/maste…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
If llama3 from Ollama is running at http://8.140.18.**:28275, the following code from the 60th example runs fine.
```
from txtai.pipeline import LLM
llm = LLM("ollama/llama3", method="litellm", a…
```
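The snippet above is cut off; a plausible completion, assuming the truncated keyword argument is litellm's api_base, would look like this:
```
from txtai.pipeline import LLM

# Assumption: the truncated argument is api_base, which litellm uses to
# reach a remote Ollama server instead of the default localhost address.
# Replace the masked host (** is from the original post) with a real one.
llm = LLM("ollama/llama3", method="litellm", api_base="http://8.140.18.**:28275")
print(llm("Why is the sky blue?"))
```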
-
**Describe the solution you'd like**
Collecting data from a wide range of docs and giving relevant info to an LLM is one of the most common ways to use RAG.
There is one point which would improve the ex…
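To make the described pattern concrete, here is a minimal, self-contained sketch of the retrieval step: embed the docs, embed the query, and hand only the top matches to the LLM. The model name, documents, and top-k value are illustrative.
```
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "RAG retrieves relevant passages before calling the LLM.",
    "Embedding models map text to dense vectors.",
    "Bananas are yellow.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["How does RAG pick documents?"], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec  # cosine similarity, since vectors are normalized

# Keep only the top-scoring passages as context for the LLM prompt
top = np.argsort(scores)[::-1][:2]
context = "\n".join(docs[i] for i in top)
print(context)
```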
-
I am trying to write a simple PDF agent that answers questions based on the knowledge in a PDF.
app.py
```
llm = Ollama(base_url=url, model=model, num_gpu=2)
rag_tool = PDFSearchTool(
p…
```
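Since the snippet is cut off, here is a hedged reconstruction of how such a setup usually looks. The imports, PDF path, model names, and config layout are assumptions based on the common crewai_tools/embedchain configuration style, not the author's actual code.
```
from langchain_community.llms import Ollama  # assumed source of Ollama
from crewai_tools import PDFSearchTool       # assumed source of PDFSearchTool

url = "http://localhost:11434"  # hypothetical Ollama endpoint
model = "llama3"                # hypothetical model name

llm = Ollama(base_url=url, model=model, num_gpu=2)

# PDFSearchTool does its own retrieval, so it takes its own llm/embedder
# config (embedchain-style provider dicts) in addition to the agent's llm.
rag_tool = PDFSearchTool(
    pdf="knowledge.pdf",  # hypothetical PDF path
    config=dict(
        llm=dict(provider="ollama", config=dict(model=model)),
        embedder=dict(provider="ollama", config=dict(model="nomic-embed-text")),
    ),
)
```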
-
I am using the version from Pinokio; it installs the script by itself. After running it I have 3 output files (my input.txt is 77KB):
master_list.jsonl
processed_master_list.json
simplified_data…
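If it helps with debugging, a quick way to sanity-check the JSONL output is to count the records and inspect the first one. The file name is taken from the post; the rest is generic:
```
import json

# Load every non-empty line of the JSONL file as one JSON record
with open("master_list.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(f"{len(records)} records")
print(json.dumps(records[0], indent=2, ensure_ascii=False))
```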
-
### Feature description
We currently abstract LLMs through `ragna.core.Assistant`. While this allows users to implement arbitrary assistants, it makes it unnecessarily hard to use LLMs for other task…
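For context, a rough sketch of what the current abstraction looks like; the signatures are paraphrased from the ragna docs at the time and may differ between versions:
```
from ragna.core import Assistant, Source

# Rough sketch only: method names and signatures are assumptions and may
# not match the ragna version you are using.
class EchoAssistant(Assistant):
    """Toy assistant that just echoes the retrieved sources."""

    @classmethod
    def display_name(cls) -> str:
        return "EchoAssistant"

    @property
    def max_input_size(self) -> int:
        return 4096

    def answer(self, prompt: str, sources: list[Source]):
        yield f"You asked: {prompt}\n"
        for source in sources:
            yield source.content
```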