-
Please add the ability to use a locally hosted LLM, for example via LM Studio.
Getting away from closed-source LLMs is best for everyone.
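A minimal sketch of what such support could look like, assuming LM Studio's built-in local server (OpenAI-compatible, default port 1234); the model name below is a placeholder:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# Port 1234 is its default; confirm it in LM Studio's "Local Server" tab.
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # placeholder; the local server does not check it
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical name; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```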
-
## Description:
I'm encountering an error when running the `llm chat` command on Windows. The error appears to be related to key bindings in the `pyreadline3` library, where key descriptions are conve…
-
Do you have any notes on the settings for local LLM configs?
Do we leave the API key blank?
Should we use the local Ollama URL in both configs?
This is running on an M3 Mac.
Here's wh…
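For what it's worth, a common pattern (an assumption here, not project-specific guidance): Ollama exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`, and most OpenAI-style clients reject an empty API key even though the local server never validates it, so a non-empty placeholder is used rather than leaving it blank:

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API under /v1 on its default port.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # any non-empty string; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama3.1",  # assumes `ollama pull llama3.1` was run beforehand
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```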
-
### System Info
4xA100 (80G) Azure instance, x86
TensorRT-LLM 0.14.0
Docker image based on nvcr.io/nvidia/tensorrt:24.09-py3; TensorRT-LLM installed from the NVIDIA pip index
Outside Docker: running Ubuntu, Dri…
-
2024-10-30 22:00:48,100 - Deleted File Path: E:\Python_Code\Neo4j-llm-graph-builder\backend\merged_files\test9.txt and Deleted File Name : test9.txt
2024-10-30 22:00:48,101 - file test9.txt deleted s…
-
Would you consider supporting a local LLM model that is compatible with the OpenAI GPT API but would need a config to be used locally?
For reference, here is an API that can be used by a lot of model…
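One low-friction way to support this, sketched under the assumption that the project already uses the official `openai` Python client: that client reads `OPENAI_BASE_URL` and `OPENAI_API_KEY` from the environment, so a local endpoint can often be swapped in as pure configuration:

```python
from openai import OpenAI

# The official openai-python client honors these environment variables,
# so no code changes are needed to target a local server:
#   export OPENAI_BASE_URL=http://localhost:8000/v1   # hypothetical local endpoint
#   export OPENAI_API_KEY=not-needed                   # placeholder for local use
client = OpenAI()  # picks up OPENAI_BASE_URL / OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever the local server exposes
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```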
-
When running the command:

```
kraken -i 2.jpg output.txt segment -bl ocr -m "C:\Users\ali\Downloads\catmus-print-fondue-large.mlmodel"
```

I get:

Fail to import BlobReader from libmilstoragepython. No module na…
-
Is it possible to add support for local LLMs, using an OpenAI-API-compatible server or Ollama?
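If native Ollama support (rather than the OpenAI-compatible shim) were preferred, the official `ollama` Python package keeps the integration small; the model name below is an assumption:

```python
import ollama

# Talks to a local Ollama daemon on its default port (11434).
response = ollama.chat(
    model="llama3.1",  # assumes this model has been pulled locally
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```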
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
I'm not sure if I'm setting the URL for the local LLM API correctly, but I have something like this and I can't get the bot to work. Can you suggest a solution so that it works with Oobabooga? Thanks.…
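For reference, and as an assumption about the setup (the config itself is cut off above): recent versions of Oobabooga's text-generation-webui expose an OpenAI-compatible API when launched with the `--api` flag, by default on port 5000, so a client would point at that base URL:

```python
from openai import OpenAI

# text-generation-webui started with `--api` serves an OpenAI-compatible
# endpoint; port 5000 is the default (adjust if your launch flags changed it).
client = OpenAI(
    base_url="http://localhost:5000/v1",
    api_key="sk-local",  # placeholder; the local API does not require a real key
)

response = client.chat.completions.create(
    model="loaded-model",  # hypothetical; the webui answers with whatever model is loaded
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```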