-
### Description of the new feature / enhancement
It should be possible to configure the model (currently hard-coded as `gpt-3.5-turbo`) and the endpoint (currently hard-coded to OpenAI's) to arbitrary values.
…
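A minimal sketch of what such a configurable setup could look like, assuming environment variables named `LLM_MODEL` and `LLM_BASE_URL` (both names are hypothetical, not the project's actual settings):

```python
import os

# Hypothetical config keys; the defaults fall back to the current
# hard-coded model and OpenAI endpoint.
MODEL = os.environ.get("LLM_MODEL", "gpt-3.5-turbo")
BASE_URL = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")


def build_request(prompt: str) -> dict:
    """Build a chat-completion payload against the configured endpoint."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "json": {
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Any OpenAI-compatible server (e.g. a local proxy) could then be targeted just by changing the two variables.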
-
### Has this already been reported or answered?
- [X] I confirm there is no existing issue or discussion, and I have read the **FAQ**.
### Is this a proxy-configuration question?
- [X] I confirm this is not a proxy-configuration question.
### Bug description
Inference does not work.
### Steps to reproduce
1. The model is qwen:14b-chat-v1.5-fp16
2. Run ollama …
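As a quick check that the server itself can generate, the same model can be queried through Ollama's REST API. A generic sketch (the prompt here is arbitrary) showing the request body `/api/generate` expects:

```python
import json
import urllib.request

# Request body for Ollama's /api/generate endpoint; "stream": False
# asks for the whole reply as a single JSON object instead of chunks.
payload = json.dumps({
    "model": "qwen:14b-chat-v1.5-fp16",
    "prompt": "Why is the sky blue?",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default port
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request against a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

If this request also hangs or errors, the problem is in the Ollama server/model rather than the client.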
-
```
Last login: Sun Sep 15 20:21:00 on console
haijs@haijsdeMBP ~ % ollama run llama3.1
>>> Send a message (/? for help)
>>> how to train you?
Training me involves providing me with a wide…
```
-
https://github.com/THUDM/CogVLM2
thanks!
-
```
import urllib.request
import sqlite3
from langchain_community.utilities.sql_database import SQLDatabase
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser im…
-
### Description
![image](https://github.com/user-attachments/assets/934f38a6-f3cc-4c8e-a7df-55dab0a24a58)
uploaded from chat are at the bottom with token counts, when reindex or on a fresh uploa…
-
Hi Simon, the responses from llama2 are being truncated. What is a good way for `llm` to handle this? See:
% llm -m l2c "give me 20 good names for avatars" --system "you are a creator"
Sure, here…
-
Will local models be supported one day as well?
(Unless they already are and I just didn't find it in the readme XD)
-
Here is my code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_p…
-
![image](https://github.com/user-attachments/assets/acac7236-6e95-4b90-b5c6-2c0a605a5d76)
[vim-ollama.log](https://github.com/user-attachments/files/17406565/vim-ollama.log)