-
Could you please add local LLM support?
Ollama support would be nice too.
Thank you.
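
For context on what Ollama support usually amounts to: Ollama serves models locally over HTTP (port 11434 by default) and exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so many tools add local-LLM support simply by pointing their existing OpenAI client at that URL. A minimal sketch of building such a request, assuming the default port and using `llama3` as a placeholder for whatever model is pulled locally:

```python
import json

# Assumption: Ollama is running locally on its default port (11434) and
# exposes its OpenAI-compatible endpoint under /v1. The model name
# "llama3" is a placeholder -- substitute any model you have pulled.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-style chat completion."""
    url = f"{OLLAMA_BASE_URL}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single JSON response, not a stream
    }
    return url, json.dumps(body).encode("utf-8")

url, payload = build_chat_request("llama3", "Hello!")

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       url, data=payload, headers={"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read()))
```

Because the request shape matches OpenAI's API, projects that already use an OpenAI SDK can often support Ollama by making only the base URL (and API key, which Ollama ignores) configurable.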
-
Hi,
Looking into https://github.com/PLangHQ/plang/issues/14
Is there a possibility of getting `plang` working with an LLM running locally?
-
* langchain-community
-
![b9b846877c545e753d310f5dc4d092d](https://github.com/user-attachments/assets/5aa5ce35-13d1-4056-b971-61e9c463e9ab)
-
### Discussed in https://github.com/bmachek/lrc-ai-assistant/discussions/3
Originally posted by **FA-UC-HR** November 15, 2024
What do you think about using local / self-hosted LLMs? Like olla…
-
Hello @stephanedebove,
I'm trying to replicate your project using a locally installed LLM instead of calling OPENAI. Any idea of the best way to achieve this?
Best regards…
-
Hi, following your code I compiled and deployed the whisper-large-v3-turbo model and got the error below. As far as I can tell, the tensorrt-llm version supported by 24.09-trtllm-python-py3 is 0.13.0. Did your tests succeed on your end?
```
Traceback (most recent call last):
File "/workspace/TensorRT-LLM/exam…
-
Instead of using OpenAI (#69), we want to use a local model that runs on the device (makes it free!).
-
Hey, this looks like a good initiative.
I have LLMs downloaded locally; can't those be used with this project? Why do I need API keys if I don't want to use those platforms?
I have LM Studio as w…
-
### System Info
GPU: A100
Ubuntu: Ubuntu 22.04.4 LTS
Command:
```
CONVERT_CHKPT_SCRIPT=/opt/tritonserver/TensorRT_LLM_KARI/TensorRT-LLM/examples/llama/convert_checkpoint.py
python3 ${CONVERT_CHKPT_…