-
**Installation method**
I installed it as a package:
```shell
pip install langchain-chatchat -U
```
**Local environment**
Ubuntu 20.04
CPU server, no GPU
**Problem**
Because local compute is limited, I want to connect to an online LLM API, but I have never managed to configure it successfully and couldn't find a concrete example in the docs. Has anyone gotten this working who could share an example for reference?
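Not chatchat-specific, but most online LLM providers expose an OpenAI-compatible chat-completions endpoint; below is a minimal stdlib-only sketch of such a call. The URL, model name, and API key are placeholders (assumptions), not values taken from any chatchat config:

```python
import json
import urllib.request

# Placeholder values -- substitute your provider's endpoint, model, and key.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "sk-..."
MODEL = "my-online-model"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_chat_request("你好")
# resp = urllib.request.urlopen(req)  # requires a reachable endpoint
```

If your provider follows this interface, pointing chatchat's online-model settings at the same base URL, model name, and key should be the core of the configuration.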
-
### Describe your problem
![image](https://github.com/infiniflow/ragflow/assets/22130536/30d978f3-34f1-45e7-bef7-02ae9b80cde3)
-
I'm using the Docker version of chatchat 0.3.1.1 with Ollama as the LLM engine (Xinference requires a CUDA version my machine's GPU driver can't reach, so I gave up on it). In Ollama I installed two models, qwen2 and bge-large-zh-v1.5, edited the relevant config files, and extracted data.tar.gz into the corresponding directory. After the image starts, running `chatchat init` works fine, but running `chatchat kb -r` shows…
-
Most papers and web pages include images and figures; let's extract these images and store them with some metadata. The images can then be served alongside the generation.
We don't need image2text or OC…
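A sketch of what the stored metadata could look like; the record schema and field names here are illustrative, not part of any existing implementation:

```python
from dataclasses import dataclass, asdict

@dataclass
class ImageRecord:
    """Metadata stored alongside each extracted image (illustrative schema)."""
    doc_id: str   # source paper / webpage identifier
    page: int     # page or position the figure came from
    path: str     # where the extracted image file is stored
    caption: str  # nearby caption text, if any

def to_metadata(records):
    """Serialize records for storage next to the images."""
    return [asdict(r) for r in records]

records = [
    ImageRecord("paper-123", 4, "images/paper-123/fig2.png",
                "Figure 2: model overview"),
]
```

Since only the image location and caption are kept, serving the figures with a generated answer needs no image2text step at all.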
-
Hello,
I want to download the license plate recognition model, convert the model file to ONNX format, and use it locally. Is this possible?
Please guide me if possible.
Thanks to the creators…
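Assuming the recognition model loads as a PyTorch module (an assumption; check the repo for the actual model class), the standard route is `torch.onnx.export`. A minimal sketch; the input/output names and the dummy-input shape are guesses, not the model's real ones:

```python
def make_dynamic_axes(input_name: str, output_name: str) -> dict:
    # Mark the batch dimension as dynamic so batch size is not baked in.
    return {input_name: {0: "batch"}, output_name: {0: "batch"}}

def export_to_onnx(model, dummy_input, out_path: str) -> None:
    """Export a PyTorch model to an ONNX file (requires torch installed)."""
    import torch  # imported here so the helper above stays dependency-free
    model.eval()
    torch.onnx.export(
        model,
        dummy_input,      # e.g. torch.randn(1, 3, H, W) -- shape is a guess
        out_path,
        input_names=["input"],
        output_names=["output"],
        dynamic_axes=make_dynamic_axes("input", "output"),
        opset_version=13,
    )
```

The exported file can then be run locally with ONNX Runtime, with no PyTorch dependency at inference time.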
-
![image](https://user-images.githubusercontent.com/11919660/220833467-6f0ab06a-d2a7-4da6-8958-8138100e1861.png)
2023-02-23,06:08:33 | INFO | Rank 0 | Validation Result (epoch 3 @ 99 steps) | Valid Lo…
-
Hello, thanks for your efforts in building this powerful library. I wanted a dataset completely similar to "hezarai/persian-license-plate-v1". I also changed other settings related to pat…
-
Many of these models are used by other nodes as well. I suggest that from now on we all put every model we use under the models directory by default. Be resource-conscious: model files are already large, keeping several copies is not a good approach, and a modified load path just gets reverted on the next update. Why not agree to put everything under the models directory by default, organized into second-level (or deeper) subdirectories by model author?
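One way to avoid copying: keep a single canonical copy under `models/<author>/<model>` and symlink it into each path a node expects. A minimal sketch (the paths in the example are illustrative, not real node paths):

```python
import os
from pathlib import Path

def link_model(canonical: Path, expected: Path) -> None:
    """Symlink a node's expected model path to the shared canonical copy."""
    expected.parent.mkdir(parents=True, exist_ok=True)
    if expected.is_symlink() or expected.exists():
        return  # something is already there; leave it alone
    os.symlink(canonical, expected)

# Example: one shared copy under models/, linked to a node's private path.
# link_model(Path("models/BAAI/bge-large-zh-v1.5"),
#            Path("custom_nodes/some_node/models/bge-large-zh-v1.5"))
```

Updates then only need to be prepared once in the shared location, and each node keeps seeing the path it expects.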
-
![Quicker_20240327_172339](https://github.com/zhongpei/Comfyui_image2prompt/assets/161134037/9506be4e-8953-47b4-9011-1c200925e214)