-
With identical seeds and options, b2447 (https://github.com/ggerganov/llama.cpp/commit/c47cf414efafb8f60596edc7edb5a2d68065e992) produces different output that seems lower in quality compared to b2446…
-
LM Studio is more convenient and easier to use than LocalAI.
https://lmstudio.ai
LM Studio also has a drop-in OpenAI-compatible API.
Otherwise: Great work so far!
-
This app as a PWA looks great!
I use some OpenAI-compatible backends like text-gen-webui and tabbyAPI, and I was wondering if you could expose the URL to allow a user to change it, for chatgpt or some o…
-
Settings as below:
LLM Provider: OpenAI Chat
API Key: None
Base Path: I used LM Studio, so it is: http://localhost:1234/v1
Model: gpt-3.5-turbo
===
This setting works on my Mac M2 Pro, with…
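For reference, here is a minimal sketch of the same configuration using the official `openai` Python client; the base URL and model name come from the settings above, and the placeholder API key is an assumption, since LM Studio's local server typically does not require one:

```python
from openai import OpenAI

# Point the client at LM Studio's OpenAI-compatible local server
# instead of api.openai.com. The key is only a placeholder.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the name the local server answers to for the loaded model
    messages=[{"role": "user", "content": "Hello from a local backend"}],
)
print(response.choices[0].message.content)
```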
-
### What would you like to see?
While asking the AI to generate a summary of a topic, it gives a limited response based on one PDF source instead of the multiple sources available in the database. L…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I am trying to run LlamaIndex with LM Studio.
I tried with the plain OpenAI setup, but i…
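One way this is commonly wired up is through LlamaIndex's OpenAI-like adapter rather than the plain `OpenAI` class, so the custom base URL is respected. A minimal sketch, assuming the `llama-index-llms-openai-like` package is installed and LM Studio's server is listening on the default port 1234:

```python
from llama_index.llms.openai_like import OpenAILike

# Treat LM Studio's local server as a generic OpenAI-compatible endpoint.
llm = OpenAILike(
    model="gpt-3.5-turbo",                # name under which the loaded model is served
    api_base="http://localhost:1234/v1",
    api_key="none",                       # placeholder; assumed not to be checked locally
    is_chat_model=True,                   # route requests to /chat/completions
)

print(llm.complete("Say hello from a local model").text)
```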
-
Hi,
Thank you for this excellent plugin and project. I had a few problems installing the plugin with pip, so I created this issue to explain how I installed it using the Conda virtual environment/pa…
-
![QQ图片20240228174213](https://github.com/jianchang512/pyvideotrans/assets/130566478/fe8283a8-a972-4618-a3ed-5e9e12a7649c)
With a link like this one, several translation tools in the browser, including chatgptbox and the Immersive Translate extension, all work normally, but when testing in this program, there is no response from the GPT backend either.
http://lo…
-
llama-cpp-python (text-generation-webui dev branch) and LM Studio have both added support for Gemma models. However, when merging Gemma models and then converting to GGUF, the resulting model does not loa…
-
This is the worst kind of issue: a feature request based on a newly released related tool. I'm sorry. 🫣
With the recent release of [llamafile](https://github.com/Mozilla-Ocho/llamafile), it'd be rea…