-
### Is your feature request related to a problem? Please describe.
I tried to run your interpreter with `--local` via WSL on Windows (Debian, Python 3.12, venv, pip).
Beforehand, I ran the LM Studio Windows vers…
-
Hello,
My first try seems to generate an error:
```
ASSISTANT_AIlice: !CALL
ASSISTANT_file_listing:
SYSTEM_AIlice: Agent file_listing returned:
ASSISTANT_AIlice: !CALL
ASSISTANT_file_l…
```
-
![QQ图片20240228174213](https://github.com/jianchang512/pyvideotrans/assets/130566478/fe8283a8-a972-4618-a3ed-5e9e12a7649c)
Through a link like this one, several translation tools in the browser, including chatgptbox and the Immersive Translate extension, all work normally, but when testing with this program, the GPT backend shows no response either.
http://lo…
-
### Question Validation
- [X] I have searched both the documentation and Discord for an answer.
### Question
I am trying to run LlamaIndex with LM Studio.
I tried with the plain OpenAI setup, but i…
-
Settings as below:
LLM Provider: OpenAI Chat
API Key: None
Base Path: I used LM Studio, so it is: http://localhost:1234/v1
Model: gpt-3.5-turbo
===
This setting works on my Mac M2 Pro, with…
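For reference, an OpenAI-compatible endpoint like LM Studio's accepts the standard chat-completions request body. A minimal sketch of building that payload (the base URL and model name are taken from the settings above; the prompt and temperature are illustrative):

```python
import json

# LM Studio's local OpenAI-compatible server, as configured above.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # illustrative sampling setting
    }

body = build_chat_request("gpt-3.5-turbo", "Hello!")
print(json.dumps(body, indent=2))
```

If requests to this payload shape go unanswered, the endpoint itself (reachability, port, `/v1` prefix) is usually the problem rather than the client settings.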
-
llama-cpp-python (text-generation-webui dev branch) and LM Studio have both added support for Gemma models. However, when merging Gemma models and then converting to GGUF, the resulting model does not loa…
-
Hello, I have Ubuntu 22.04, and when I tried to create vector databases I got this output:
NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3
NVIDIA GeForce RTX 3090 with 2…
-
**SETUP**
* Running Textgen Webui with `--api --api-key sk-1111`
* `config.toml`
```
LLM_API_KEY="sk-1111"
LLM_BASE_URL="http://127.0.0.1:5000/v1"
LLM_EMBEDDING_MODEL="openai"
LLM_MODEL="op…
```
-
I'm facing some reproducibility issues with llama.cpp vs llama-cpp-python, on the same quantized model from [lmstudio-ai](https://huggingface.co/lmstudio-ai/gemma-2b-it-GGUF/tree/main).
Here's a re…
-
LM Studio is more convenient and easier to use than LocalAI.
https://lmstudio.ai
LM Studio also has a drop-in replacement for the OpenAI API.
Otherwise: Great work so far!